Updates from: 09/22/2022 06:06:32
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentation
| AADSTS700022 | InvalidMultipleResourcesScope - The provided value for the input parameter scope isn't valid because it contains more than one resource. |
| AADSTS700023 | InvalidResourcelessScope - The provided value for the input parameter scope isn't valid when requesting an access token. |
| AADSTS7000215 | Invalid client secret is provided. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters. |
+| AADSTS7000218 | The request body must contain the following parameter: 'client_assertion' or 'client_secret'. |
| AADSTS7000222 | InvalidClientSecretExpiredKeysProvided - The provided client secret keys are expired. Visit the Azure portal to create new keys for your app, or consider using certificate credentials for added security: [https://aka.ms/certCreds](./active-directory-certificate-credentials.md) |
| AADSTS700005 | InvalidGrantRedeemAgainstWrongTenant - The provided Authorization Code is intended to be used against a different tenant and was therefore rejected. The OAuth2 Authorization Code must be redeemed against the same tenant it was acquired for (/common or /{tenant-ID}, as appropriate). |
| AADSTS1000000 | UserNotBoundError - The Bind API requires the Azure AD user to also authenticate with an external IDP, which hasn't happened yet. |
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
You can ensure the user's identity by validating the ID token with a public signing key.
To see this scenario in action, try the code samples in [Sign in users from a Web app](scenario-web-app-sign-user-overview.md).
-In addition to simple sign-in, a web server app might need to access another web service, such as a Representational State Transfer ([REST](https://docs.microsoft.com/rest/api/azure/)) API. In this case, the web server app engages in a combined OpenID Connect and OAuth 2.0 flow, by using the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). For more information about this scenario, refer to our code [sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md).
+In addition to simple sign-in, a web server app might need to access another web service, such as a [Representational State Transfer (REST) API](/rest/api/azure/). In this case, the web server app engages in a combined OpenID Connect and OAuth 2.0 flow, by using the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). For more information about this scenario, refer to our code [sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md).
## Web APIs
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=code
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&response_mode=query
-&scope=https%3A%2F%2Fgraph.microsoft.com%2Fmail.read%20api%3A%2F%2F
+&scope=https%3A%2F%2Fgraph.microsoft.com%2Fmail.read
&state=12345
&code_challenge=YTFjNjI1OWYzMzA3MTI4ZDY2Njg5M2RkNmVjNDE5YmEyZGRhOGYyM2IzNjdmZWFhMTQ1ODg3NDcxY2Nl
&code_challenge_method=S256
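As a hedged illustration (not part of the article), the encoded request above could be assembled in PowerShell as sketched below; the tenant, client ID, and redirect URI are the placeholder values from the sample, and the PKCE parameters are omitted for brevity.

```powershell
# A minimal sketch: build and open the authorization request shown above.
# Values are the sample placeholders from the article, not real ones.
$tenant      = "common"
$clientId    = "6731de76-14a6-49ae-97bc-6eba6914391e"
$redirectUri = [uri]::EscapeDataString("http://localhost/myapp/")
$scope       = [uri]::EscapeDataString("https://graph.microsoft.com/mail.read")

$authorizeUrl = "https://login.microsoftonline.com/$tenant/oauth2/v2.0/authorize?" +
    "client_id=$clientId&response_type=code&redirect_uri=$redirectUri" +
    "&response_mode=query&scope=$scope&state=12345"
# (code_challenge and code_challenge_method are omitted here for brevity.)

# Opens the sign-in page; the authorization code comes back on the redirect URI
# as the 'code' query parameter.
Start-Process $authorizeUrl
```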
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 09/19/2022 Last updated : 09/21/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on September 19th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on September 21st, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
active-directory Secure With Azure Ad Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-multiple-tenants.md
# Resource isolation with multiple tenants
-There are specific scenarios when delegating administration within a single tenant boundary won't meet your needs. In this section, we'll discuss requirements that may drive you to create a multi-tenant architecture. Multi-tenant organizations might span two or more Azure AD tenants. This can result in unique cross-tenant collaboration and management requirements. Multi-tenant architectures increase management overhead and complexity and should be used with caution. We recommend using a single tenant if your needs can be met with that architecture. For more detailed information, see [Multi-tenant user management]../fundamentals/multi-tenant-user-management-introduction.md).
+There are specific scenarios when delegating administration within a single tenant boundary won't meet your needs. In this section, we'll discuss requirements that may drive you to create a multi-tenant architecture. Multi-tenant organizations might span two or more Azure AD tenants. This can result in unique cross-tenant collaboration and management requirements. Multi-tenant architectures increase management overhead and complexity and should be used with caution. We recommend using a single tenant if your needs can be met with that architecture. For more detailed information, see [Multi-tenant user management](multi-tenant-user-management-introduction.md).
A separate tenant creates a new boundary, and therefore decoupled management of Azure AD directory roles, directory objects, conditional access policies, Azure resource groups, Azure management groups, and other controls as described in previous sections.
Devices: This tenant contains a reduced number of devices; only those that are n
* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
-* [Best practices](secure-with-azure-ad-best-practices.md)
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-user-accounts.md
The following are listed in order of importance based on the effect and severity
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
| Users authenticating to other Azure AD tenants.| Low| Azure AD Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID isn't equal to Home Tenant ID <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/UsersAuthenticatingtoOtherAzureADTenants.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
-|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. Was this expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure))
-|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. Was this expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
### Monitoring for failed unusual sign ins
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following table shows the scheduling (trigger) relevant attributes and the m
|Attribute|Type|Supported in HR Inbound Provisioning|Supported in Azure AD Connect Cloud Sync|Supported in Azure AD Connect Sync|
|--|--|--|--|--|
|employeeHireDate|DateTimeOffset|Yes|Yes|Yes|
-|employeeLeaveDateTime|DateTimeOffset|Not currently(manually setting supported)|Not currently(manually setting supported)|Not currently(manually setting supported)|
+|employeeLeaveDateTime|DateTimeOffset|Yes|Not currently|Not currently|
> [!NOTE]
-> Currently, automatic synchronization of the employeeLeaveDateTime attribute for HR Inbound scenarios is not available. To take advantaged of leaver scenarios, you can set the employeeLeaveDateTime manually. Manually setting the attribute can be done in the portal or with Graph. For more information see [User profile in Azure](../fundamentals/active-directory-users-profile-azure-portal.md) and [Update user](/graph/api/user-update?view=graph-rest-beta&tabs=http).
+> To take advantage of leaver scenarios, you can set the employeeLeaveDateTime manually for cloud-only users. For more information, see [Set employeeLeaveDateTime](set-employee-leave-date-time.md).
This document explains how to set up synchronization from on-premises Azure AD Connect cloud sync and Azure AD Connect for the required attributes.
active-directory Set Employee Leave Date Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/set-employee-leave-date-time.md
+
+ Title: Set employeeLeaveDateTime
+description: Explains how to manually set employeeLeaveDateTime.
++++ Last updated : 09/07/2022+++
+# Set employeeLeaveDateTime
+
+This article describes how to manually set the employeeLeaveDateTime attribute for a user. This attribute can be set as a trigger for leaver workflows created using Lifecycle Workflows.
+
+## Required permission and roles
+
+To set the employeeLeaveDateTime attribute, you must make sure the correct delegated roles and application permissions are set. They are as follows:
+
+### Delegated
+
+In delegated scenarios, the signed-in user needs the Global Administrator role to update the employeeLeaveDateTime attribute. One of the following delegated permissions is also required:
+- User-LifeCycleInfo.ReadWrite.All
+- Directory.AccessAsUser.All
+
+### Application
+
+Updating the employeeLeaveDateTime requires the User-LifeCycleInfo.ReadWrite.All application permission.
+
+>[!NOTE]
+> The User-LifeCycleInfo.ReadWrite.All permission is currently hidden and cannot be configured in Graph Explorer or the API permission blade of app registrations.
+
+## Set employeeLeaveDateTime via PowerShell
+To set the employeeLeaveDateTime for a user by using PowerShell, enter the following information:
+
+ ```powershell
+ Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"
+ Select-MgProfile -Name "beta"
+
+ $UserId = "<Object ID of the user>"
+ $employeeLeaveDateTime = "<Leave date>"
+
+ $Body = '{"employeeLeaveDateTime": "' + $employeeLeaveDateTime + '"}'
+ Update-MgUser -UserId $UserId -BodyParameter $Body
+
+ $User = Get-MgUser -UserId $UserId -Property employeeLeaveDateTime
+ $User.AdditionalProperties
+ ```
+
+ The following script is an example for a user who will leave on September 30, 2022, at 23:59:59 UTC.
+
+ ```powershell
+ Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"
+ Select-MgProfile -Name "beta"
+
+ $UserId = "528492ea-779a-4b59-b9a3-b3773ef6da6d"
+ $employeeLeaveDateTime = "2022-09-30T23:59:59Z"
+
+ $Body = '{"employeeLeaveDateTime": "' + $employeeLeaveDateTime + '"}'
+ Update-MgUser -UserId $UserId -BodyParameter $Body
+
+ $User = Get-MgUser -UserId $UserId -Property employeeLeaveDateTime
+ $User.AdditionalProperties
+ ```
++
+## Next steps
+
+- [How to synchronize attributes for Lifecycle workflows](how-to-lifecycle-workflow-sync-attributes.md)
+- [Lifecycle Workflows templates](lifecycle-workflow-templates.md)
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
Caveat: If there are synchronized accounts that need to have non-expiring passwords
> [!NOTE]
> The Set-MsolPasswordPolicy PowerShell command will not work on federated domains.
+> [!NOTE]
+> The Set-AzureADUser PowerShell command will not work on federated domains.
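As a hedged illustration (not part of the article's diff), setting a non-expiring password for a synchronized account in a managed, non-federated domain might look like the following with the AzureAD module; the UPN is a placeholder.

```powershell
# A minimal sketch (assumes the AzureAD PowerShell module and a managed,
# non-federated domain; the UPN below is a placeholder).
Connect-AzureAD
Set-AzureADUser -ObjectId "b.simon@contoso.com" -PasswordPolicies "DisablePasswordExpiration"
```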
+
#### Synchronizing temporary passwords and "Force Password Change on Next Logon"

It is typical to force a user to change their password during their first logon, especially after an admin password reset occurs. It is commonly known as setting a "temporary" password and is completed by checking the "User must change password at next logon" flag on a user object in Active Directory (AD).
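As a hedged illustration (not part of the article), checking that flag from PowerShell with the on-premises ActiveDirectory module might look like this; the identity is a placeholder.

```powershell
# A minimal sketch (assumes the ActiveDirectory RSAT module; the identity is a
# placeholder). Sets the "User must change password at next logon" flag.
Set-ADUser -Identity "b.simon" -ChangePasswordAtLogon $true
```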
active-directory How To Connect Sync Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-whatis.md
The sync service consists of two components, the on-premises **Azure AD Connect
>
>To find out if you are already eligible for Cloud Sync, please verify your requirements in [this wizard](https://admin.microsoft.com/adminportal/home?Q=setupguidance#/modernonboarding/identitywizard).
>
->To learn more about Cloud Sync please read [this article](https://docs.microsoft.com/azure/active-directory/cloud-sync/what-is-cloud-sync), or watch this [short video](https://www.microsoft.com/en-us/videoplayer/embed/RWJ8l5).
+>To learn more about Cloud Sync please read [this article](/azure/active-directory/cloud-sync/what-is-cloud-sync), or watch this [short video](https://www.microsoft.com/videoplayer/embed/RWJ8l5).
>
active-directory Reference Connect Sync Attributes Synchronized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-attributes-synchronized.md
Device objects are created in Active Directory. These objects can be devices joi
## Notes

* When using an Alternate ID, the on-premises attribute userPrincipalName is synchronized with the Azure AD attribute onPremisesUserPrincipalName. The Alternate ID attribute, for example mail, is synchronized with the Azure AD attribute userPrincipalName.
+* Although there is no enforcement of uniqueness on the Azure AD onPremisesUserPrincipalName attribute, it is not supported to sync the same UserPrincipalName value to the Azure AD onPremisesUserPrincipalName attribute for multiple different Azure AD users.
* In the lists above, the object type **User** also applies to the object type **iNetOrgPerson**.

## Next steps
active-directory How To View Applied Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md
Title: How to view applied conditional access policies in the Azure AD sign-in logs | Microsoft Docs
-description: Learn how to view applied conditional access policies in the Azure AD sign-in logs
+ Title: View applied Conditional Access policies in Azure AD sign-in logs
+description: Learn how to view Conditional Access policies in Azure AD sign-in logs so that you can assess the impact of those policies.
documentationcenter: ''
-# How to: View applied conditional access policies in the Azure AD sign-in logs
+# View applied Conditional Access policies in Azure AD sign-in logs
-With conditional access policies, you can control, how your users get access to the resources of your Azure tenant. As a tenant admin, you need to be able to determine what impact your conditional access policies have on sign-ins to your tenant, so that you can take action if necessary. The sign-in logs in Azure AD provide you with the information you need to assess the impact of your policies.
-
-
-This article explains how you can get access to the information about applied conditional access policies.
+With Conditional Access policies, you can control how your users get access to the resources of your Azure tenant. As a tenant admin, you need to be able to determine what impact your Conditional Access policies have on sign-ins to your tenant, so that you can take action if necessary.
+The sign-in logs in Azure Active Directory (Azure AD) give you the information that you need to assess the impact of your policies. This article explains how to view applied Conditional Access policies in those logs.
## What you should know

As an Azure AD administrator, you can use the sign-in logs to:
-- Troubleshoot sign in problems
-- Check on feature performance
-- Evaluate security of a tenant
-Some scenarios require you to get an understanding for how your conditional access policies were applied to a sign-in event. Common examples include:
--- **Helpdesk administrators** who need to look at applied conditional access policies to understand if a policy is the root cause of a ticket opened by a user. --- **Tenant administrators** who need to verify that conditional access policies have the intended impact on the users of a tenant.
+- Troubleshoot sign-in problems.
+- Check on feature performance.
+- Evaluate the security of a tenant.
+Some scenarios require you to get an understanding of how your Conditional Access policies were applied to a sign-in event. Common examples include:
-You can access the sign-in logs using the Azure portal, MS Graph, and PowerShell.
+- *Helpdesk administrators* who need to look at applied Conditional Access policies to understand if a policy is the root cause of a ticket that a user opened.
+- *Tenant administrators* who need to verify that Conditional Access policies have the intended impact on the users of a tenant.
+You can access the sign-in logs by using the Azure portal, Microsoft Graph, and PowerShell.
## Required administrator roles
+To see applied Conditional Access policies in the sign-in logs, administrators must have permissions to view both the logs and the policies.
-To see applied conditional access policies in the sign-in logs, administrators must have permissions to:
--- View sign-in logs -- View conditional access policies-
-The least privileged built-in role that grants both permissions is the **Security Reader**. As a best practice, your global administrator should add the **Security Reader** role to the related administrator accounts.
-
+The least privileged built-in role that grants both permissions is *Security Reader*. As a best practice, your global administrator should add the Security Reader role to the related administrator accounts.
-The following built in roles grant permissions to read conditional access policies:
+The following built-in roles grant permissions to read Conditional Access policies:
- Global Administrator
The following built in roles grant permissions to read conditional access policies:
- Conditional Access Administrator
-The following built in roles grant permission to view sign-in logs:
+The following built-in roles grant permission to view sign-in logs:
- Global Administrator
The following built in roles grant permission to view sign-in logs:
- Reports Reader

## Permissions for client apps
-If you use a client app to pull sign-in logs from Graph, your app needs permissions to receive the **appliedConditionalAccessPolicy** resource from Graph. As a best practice, assign **Policy.Read.ConditionalAccess** because it's the least privileged permission. Any of the following permissions is sufficient for a client app to access applied CA policies in sign-in logs through Graph:
+If you use a client app to pull sign-in logs from Microsoft Graph, your app needs permissions to receive the `appliedConditionalAccessPolicy` resource from Microsoft Graph. As a best practice, assign `Policy.Read.ConditionalAccess` because it's the least privileged permission.
-- Policy.Read.ConditionalAccess
+Any of the following permissions is sufficient for a client app to access applied Conditional Access policies in sign-in logs through Microsoft Graph:
-- Policy.ReadWrite.ConditionalAccess
+- `Policy.Read.ConditionalAccess`
-- Policy.Read.All
+- `Policy.ReadWrite.ConditionalAccess`
-
+- `Policy.Read.All`
## Permissions for PowerShell
-Like any other client app, the Microsoft Graph PowerShell module needs client permissions to access applied conditional access policies in the sign-in logs. To successfully pull applied conditional access in the sign-in logs, you must consent to the necessary permissions with your administrator account for MS Graph PowerShell. As a best practice, consent to:
+Like any other client app, the Microsoft Graph PowerShell module needs client permissions to access applied Conditional Access policies in the sign-in logs. To successfully pull applied Conditional Access policies in the sign-in logs, you must consent to the necessary permissions with your administrator account for Microsoft Graph PowerShell. As a best practice, consent to:
-- Policy.Read.ConditionalAccess-- AuditLog.Read.All -- Directory.Read.All
+- `Policy.Read.ConditionalAccess`
+- `AuditLog.Read.All`
+- `Directory.Read.All`
These permissions are the least privileged permissions with the necessary access. To consent to the necessary permissions, use:
-` Connect-MgGraph -Scopes Policy.Read.ConditionalAccess, AuditLog.Read.All, Directory.Read.All `
+`Connect-MgGraph -Scopes Policy.Read.ConditionalAccess, AuditLog.Read.All, Directory.Read.All`
To view the sign-in logs, use:
-`Get-MgAuditLogSignIn `
-
-The output of this cmdlet contains a **AppliedConditionalAccessPolicies** property that shows all the conditional access policies applied to the sign-in.
+`Get-MgAuditLogSignIn`
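As a hedged illustration (not part of the article), reading the applied policies for recent sign-ins with the Microsoft Graph PowerShell SDK might look like this; the `-Top` value is arbitrary.

```powershell
# A minimal sketch (assumes the Microsoft Graph PowerShell SDK and that the
# scopes listed above have been consented to).
Connect-MgGraph -Scopes Policy.Read.ConditionalAccess, AuditLog.Read.All, Directory.Read.All

Get-MgAuditLogSignIn -Top 5 | ForEach-Object {
    # Each sign-in carries the Conditional Access policies evaluated for it.
    $_.AppliedConditionalAccessPolicies | Select-Object DisplayName, Result
}
```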
For more information about this cmdlet, see [Get-MgAuditLogSignIn](https://learn.microsoft.com/powershell/module/microsoft.graph.reports/get-mgauditlogsignin?view=graph-powershell-1.0).
-The AzureAD Graph PowerShell module doesn't support viewing applied conditional access policies; only the Microsoft Graph PowerShell module returns applied conditional access policies.
+The Azure AD Graph PowerShell module doesn't support viewing applied Conditional Access policies. Only the Microsoft Graph PowerShell module returns applied Conditional Access policies.
## Confirming access
-In the **Conditional Access** tab, you see a list of conditional access policies applied to that sign-in event.
-
+On the **Conditional Access** tab, you see a list of Conditional Access policies applied to that sign-in event.
-To confirm that you have admin access to view applied conditional access policies in the sign-ins logs, do:
+To confirm that you have admin access to view applied Conditional Access policies in the sign-in logs:
-1. Navigate to the Azure portal.
+1. Go to the Azure portal.
-2. In the top-right corner, select your directory, and then select **Azure Active Directory** in the left navigation pane.
+2. In the upper-right corner, select your directory, and then select **Azure Active Directory** on the left pane.
3. In the **Monitoring** section, select **Sign-in logs**.
-4. Click an item in the sign-in row table to bring up the Activity Details: Sign-ins context pane.
-
-5. Click on the Conditional Access tab in the context pane. If your screen is small, you may need to click the ellipsis […] to see all context pane tabs.
--
+4. Select an item in the sign-in table to open the **Activity Details: Sign-ins context** pane.
+5. Select the **Conditional Access** tab on the context pane. If your screen is small, you might need to select the ellipsis (**...**) to see all tabs on the context pane.
## Next steps
-* [Sign-ins error codes reference](./concept-sign-ins.md)
-* [Sign-ins report overview](concept-sign-ins.md)
+* [Sign-in error code reference](./concept-sign-ins.md)
+* [Sign-in report overview](concept-sign-ins.md)
active-directory Panorama9 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/panorama9-tutorial.md
Previously updated : 06/07/2021 Last updated : 09/19/2022

# Tutorial: Azure Active Directory integration with Panorama9
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to your Panorama9 company site as an administrator.
-2. In the toolbar on the top, click **Manage**, and then click **Extensions**.
-
- ![Extensions](./media/panorama9-tutorial/toolbar.png "Extensions")
-
-3. On the **Extensions** dialog, click **Single Sign-On**.
-
- ![Single Sign-On](./media/panorama9-tutorial/extension.png "Single Sign-On")
+2. Navigate to **Manage** -> **Extensions** -> **Single Sign-On**.
4. In the **Settings** section, perform the following steps:

    ![Settings](./media/panorama9-tutorial/configuration.png "Settings")
- a. In **Identity provider URL** textbox, paste the value of **Login URL**, which you have copied from Azure portal.
+    a. Enable Single Sign-On.
+
+    b. In the **Identity URL** textbox, paste the value of **Identifier (Entity ID)**, which you have copied from the Azure portal.
- b. In **Certificate fingerprint** textbox, paste the **Thumbprint** value of certificate, which you have copied from Azure portal.
+    c. In the **Certificate fingerprint** textbox, paste the **Thumbprint** value of the certificate, which you have copied from the Azure portal.
-5. Click **Save**.
+5. Click **Save Changes**.
### Create Panorama9 test user
In the case of Panorama9, provisioning is a manual task.
1. Sign in to your **Panorama9** company site as an administrator.
-2. In the menu on the top, click **Manage**, and then click **Users**.
-
- ![Screenshot that shows the "Manage" and "Users" tabs selected.](./media/panorama9-tutorial/user.png "Users")
-
-3. In the Users section, Click **+** to add new user.
+1. In the Users section, type the email address of a valid Azure Active Directory user you want to provision into the **Email** textbox and give a valid **Name**.
![Users](./media/panorama9-tutorial/new-user.png "Users")
-4. Go to the User data section, type the email address of a valid Azure Active Directory user you want to provision into the **Email** textbox.
-
-5. Come to the Users section, Click **Save**.
-
- > [!NOTE]
- > The Azure Active Directory account holder receives an email and follows a link to confirm their account before it becomes active.
+5. Click **Create user**.
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to Panorama9 Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Panorama9 Sign on URL where you can initiate the login flow.
-* Go to Panorama9 Sign-on URL directly and initiate the login flow from there.
+* Go to Panorama9 Sign on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Panorama9 tile in the My Apps, this will redirect to Panorama9 Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Panorama9 tile in the My Apps, this will redirect to Panorama9 Sign on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Learn more about [Front Door Profile - UpgradeCDNToLatestSDKLanguage (Upgrade SD
## Cognitive Services
+### 429 Throttling Detected on this resource
+
+We observed that there have been 1,000 or more 429 throttling errors on this resource in a one day timeframe. Consider enabling autoscale to better handle higher call volumes and reduce the number of 429 errors.
+
+Learn more about [Cognitive Service - AzureAdvisor429LimitHit (429 Throttling Detected on this resource)](/azure/cognitive-services/autoscale?tabs=portal).
+
### Upgrade to the latest Cognitive Service Text Analytics API version

Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability. There are also new features available as new endpoints starting from V3.0, such as personally identifiable information recognition, entity recognition, and entity linking available as separate endpoints. In terms of changes in preview endpoints, we have opinion mining in the SA endpoint and a redacted text property in the personally identifiable information endpoint.
Learn more about [Communication service - UpgradeTurnSdk (Use recommended versio
## Compute
-### Improve user experience and connectivity by deploying VMs closer to user's location.
+### Update Automanage to the latest API Version
-We have determined that your VMs are located in a region different or far from where your users are connecting from, using Azure Virtual Desktop. This may lead to prolonged connection response times and will impact overall user experience on Azure Virtual Desktop.
+We have identified sdk calls from outdated API for resources under this subscription. We recommend switching to the latest sdk versions. This ensures you receive the latest features and performance improvements.
-Learn more about [Virtual machine - RegionProximitySessionHosts (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md).
+Learn more about [Virtual machine - UpdateToLatestApi (Update Automanage to the latest API Version)](/azure/automanage/reference-sdk).
-### Consider increasing the size of your NVA to address persistent high CPU
+### Improve user experience and connectivity by deploying VMs closer to user's location.
-When NVAs run at high CPU, packets can get dropped resulting in connection failures or high latency due to network retransmits. Your NVA is running at high CPU, so you should consider increasing the VM size as allowed by the NVA vendor's licensing requirements.
+We have determined that your VMs are located in a region different or far from where your users are connecting from, using Azure Virtual Desktop. This may lead to prolonged connection response times and will impact overall user experience on Azure Virtual Desktop.
-Learn more about [Virtual machine - NVAHighCPU (Consider increasing the size of your NVA to address persistent high CPU)](https://aka.ms/NVAHighCPU).
+Learn more about [Virtual machine - RegionProximitySessionHosts (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md).
### Use Managed disks to prevent disk I/O throttling
We noticed that you are using SSD disks while also using Standard HDD disks on t
Learn more about [Virtual machine - MixedDiskTypeToSSDPublic (Use SSD Disks for your production workloads)](/azure/virtual-machines/windows/disks-types#disk-comparison).
-### Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.
-
-We have identified that your Virtual Machine might be running a version of Barracuda Networks NextGen Firewall Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Barracuda Networks for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - BarracudaNVAAccelNet (Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.
-
-We have identified that your Virtual Machine might be running a version of Arista Networks vEOS Router Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Arista Networks for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - AristaNVAAccelNet (Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.
-
-We have identified that your Virtual Machine might be running a version of Cisco Cloud Services Router 1000V Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Cisco for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - CiscoCSRNVAAccelNet (Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.
-
-We have identified that your Virtual Machine might be running a version of Palo Alto Networks VM-Series Firewall Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Palo Alto Networks for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - PaloAltoNVAAccelNet (Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.
-
-We have identified that your Virtual Machine might be running a version of NetApp Cloud Volumes ONTAP Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact NetApp for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - NetAppNVAAccelNet (NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Match production Virtual Machines with Production Disk for consistent performance and better latency

Production virtual machines need production disks if you want to get the best performance. We see that you are running a production level virtual machine, however, you are using a low performing disk with standard HDD. Upgrading your disks that are attached to your production disks, either Standard SSD or Premium SSD, will benefit you with a more consistent experience and improvements in latency.

Learn more about [Virtual machine - MatchProdVMProdDisks (Match production Virtual Machines with Production Disk for consistent performance and better latency)](/azure/virtual-machines/windows/disks-types#disk-comparison).
-### Update to the latest version of your Arista VEOS product for Accelerated Networking support.
-
-We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface which, either, is not AN capable or is not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - AristaVeosANUpgradeRecommendation (Update to the latest version of your Arista VEOS product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.
-
-We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface which, either, is not AN capable or is not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - BarracudaNgANUpgradeRecommendation (Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.
-
-We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface which, either, is not AN capable or is not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - Cisco1000vANUpgradeRecommendation (Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### Update to the latest version of your F5 BigIp product for Accelerated Networking support.
-
-We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface which, either, is not AN capable or is not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - F5BigIpANUpgradeRecommendation (Update to the latest version of your F5 BigIp product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### Update to the latest version of your NetApp product for Accelerated Networking support.
-
-We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface which, either, is not AN capable or is not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - NetAppANUpgradeRecommendation (Update to the latest version of your NetApp product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.
-
-We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface which, either, is not AN capable or is not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - PaloAltoFWANUpgradeRecommendation (Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-
-### Update to the latest version of your Check Point product for Accelerated Networking support.
-
-We have identified that your Virtual Machine (VM) might be running a version of software image that is running older drivers for Accelerated Networking (AN). Your VM has a synthetic network interface that is either not AN capable or is not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
-
-Learn more about [Virtual machine - CheckPointCGANUpgradeRecommendation (Update to the latest version of your Check Point product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Accelerated Networking may require stopping and starting the VM

We have detected that Accelerated Networking is not engaged on VM resources in your existing deployment even though the feature has been requested. In rare cases like this, it may be necessary to stop and start your VM, at your convenience, to re-engage AccelNet.

Learn more about [Virtual machine - AccelNetDisengaged (Accelerated Networking may require stopping and starting the VM)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
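As a hedged illustration (not part of the recommendation text), a deallocate-and-start cycle with the Az.Compute module might look like this; the resource group and VM names are placeholders.

```powershell
# A minimal sketch (assumes the Az.Compute module and an authenticated Az
# session; names are placeholders). Stop-AzVM deallocates the VM by default,
# which lets Accelerated Networking re-engage on the next start.
$rg = "myResourceGroup"
$vm = "myVm"

Stop-AzVM  -ResourceGroupName $rg -Name $vm -Force
Start-AzVM -ResourceGroupName $rg -Name $vm
```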
-### NVA may see traffic loss due to hitting the maximum number of flows.
-
-Packet loss has been observed for this Virtual Machine due to hitting or exceeding the maximum number of flows for a VM instance of this size on Azure
-
-Learn more about [Virtual machine - NvaMaxFlowLimit (NVA may see traffic loss due to hitting the maximum number of flows.)](../virtual-network/virtual-machine-network-throughput.md).
### Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.

Ultra disk is available in the same region as your database workload. Ultra disk offers high throughput, high IOPS, and consistent low latency disk storage for your database workloads: For Oracle DBs, you can now use either 4k or 512E sector sizes with Ultra disk depending on your Oracle DB version. For SQL server, leveraging Ultra disk for your log disk might offer more performance for your database. See instructions here for migrating your log disk to Ultra disk.
Unsupported Kubernetes version is detected. Ensure Kubernetes cluster runs with
Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions).
-## Data Factory
+## DataFactory
### Review your throttled Data Factory Triggers
Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlReadR
## PostgreSQL
-### Scale the storage limit for PostgreSQL server
-
-Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlStorageLimit (Scale the storage limit for PostgreSQL server)](https://aka.ms/postgresqlstoragelimits).
### Increase the work_mem to avoid excessive disk spilling from sort and hash

Our internal telemetry shows that the configuration work_mem is too small for your PostgreSQL server, which is resulting in disk spilling and degraded query performance. To improve this, we recommend increasing the work_mem limit for the server, which will help to reduce the scenarios when the sort or hash happens on disk, thereby improving the overall query performance.

Learn more about [PostgreSQL server - OrcasPostgreSqlWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration).
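As a hedged illustration (not part of the recommendation text), the server parameter could be raised with the Az.PostgreSql module as sketched below; the names are placeholders and the value shown (in kB) is only an example, not a recommendation.

```powershell
# A minimal sketch (assumes the Az.PostgreSql module; the server and resource
# group names are placeholders, and 32768 kB is only an example value).
Update-AzPostgreSqlConfiguration -Name work_mem `
    -ResourceGroupName "myResourceGroup" `
    -ServerName "mypgserver" `
    -Value 32768
```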
+### Scale the storage limit for PostgreSQL server
+
+Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlStorageLimit (Scale the storage limit for PostgreSQL server)](https://aka.ms/postgresqlstoragelimits).
+
### Distribute data in server group to distribute workload among nodes

It looks like the data has not been distributed in this server group but stays on the coordinator. For full Hyperscale (Citus) benefits, distribute data on worker nodes in this server group.
Our internal telemetry shows that there is high churn in the buffer pool for thi
Learn more about [PostgreSQL server - OrcasMeruMemoryUpsell (Move your PostgreSQL Flexible Server to Memory Optimized SKU)](https://aka.ms/azure_postgresql_flexible_server_pricing).
-## Desktop Virtualization
+## DesktopVirtualization
### Improve user experience and connectivity by deploying VMs closer to user's location.
Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change
## Cosmos DB
-### Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK
-
-We noticed that your Azure Cosmos DB applications are using Gateway mode via the Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability.
-
-Learn more about [Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking).
### Configure your Azure Cosmos DB query page size (MaxItemCount) to -1

You are using the query page size of 100 for queries for your Azure Cosmos container. We recommend using a page size of -1 for faster scans.
This account has a custom setting that allows the logical partition size in a co
Learn more about [Cosmos DB account - CosmosDBHierarchicalPartitionKey (Use hierarchical partition keys for optimal data distribution)](https://devblogs.microsoft.com/cosmosdb/hierarchical-partition-keys-private-preview/).
+### Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK
+
+We noticed that your Azure Cosmos DB applications are using Gateway mode via the Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability.
+
+Learn more about [Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking).
+ ## HDInsight
+### Unsupported Kubernetes version is detected
+
+Unsupported Kubernetes version is detected. Ensure Kubernetes cluster runs with a supported version.
+
+Learn more about [HDInsight Cluster Pool - UnsupportedHiloAKSVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions).
+
### Reads happen on most recent data

More than 75% of your read requests are landing on the memstore. That indicates that the reads are primarily on recent data. This suggests that even if a flush happens on the memstore, the recent file needs to be accessed and that file needs to be in the cache.
You are seeing this advisor recommendation because HDInsight team's system log s
These conditions are indicators that your cluster is suffering from high write latencies. This could be due to heavy workload performed on your cluster. To improve the performance of your cluster, you may want to consider utilizing the Accelerated Writes feature provided by Azure HDInsight HBase. The Accelerated Writes feature for HDInsight Apache HBase clusters attaches premium SSD-managed disks to every RegionServer (worker node) instead of using cloud storage. As a result, provides low write-latency and better resiliency for your applications.
+To read more on this feature, see the following link:
Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](../hdinsight/hbase/apache-hbase-accelerated-writes.md).
Learn more about [HDInsight cluster - FlushQueueCandidate (Consider increasing t
The compaction queue in your region servers is more than 2000 suggesting that more data requires compaction. Slower compactions can impact read performance as the number of files to read are more. More files without compaction can also impact the heap usage related to how files interact with Azure file system.
-Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](../hdinsight/hbase/apache-hbase-advisor.md).
+Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](/azure/hdinsight/hbase/apache-hbase-advisor).
-## Key Vault
+## Automanage
+
+### Update Automanage to the latest API Version
+
+We have identified sdk calls from outdated API for resources under this subscription. We recommend switching to the latest sdk versions. This ensures you receive the latest features and performance improvements.
+
+Learn more about [Machine - Azure Arc - UpdateToLatestApiHci (Update Automanage to the latest API Version)](/azure/automanage/reference-sdk).
+
+## KeyVault
### Update Key Vault SDK Version
Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables
## Networking
+### Configure DNS Time to Live to 60 seconds
+
+Time to Live (TTL) affects how recent of a response a client will get when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 60 seconds to route traffic to a healthy endpoint as quickly as possible.
+
+Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
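As a hedged illustration (not part of the recommendation text), the TTL could be set with the Az.TrafficManager module as sketched below; the profile and resource group names are placeholders.

```powershell
# A minimal sketch (assumes the Az.TrafficManager module; names are placeholders).
$tmProfile = Get-AzTrafficManagerProfile -Name "myProfile" -ResourceGroupName "myResourceGroup"
$tmProfile.Ttl = 60
Set-AzTrafficManagerProfile -TrafficManagerProfile $tmProfile
```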
### Configure DNS Time to Live to 20 seconds

Time to Live (TTL) affects how recent of a response a client will get when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 20 seconds to route traffic to a healthy endpoint as quickly as possible.
Because you are running IaaS virtual machine workloads on Standard HDD managed d
Learn more about [Storage Account - StandardSSDForNonPremVM (Upgrade to Standard SSD Disks for consistent and improved performance)](/azure/virtual-machines/windows/disks-types#standard-ssd).
-### Upgrade your Storage Client Library to the latest version for better reliability and performance
-
-The latest version of Storage Client Library/ SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Storage.
### Use premium performance block blob storage

One or more of your storage accounts has a high transaction rate per GB of block blob data stored. Use premium performance block blob storage instead of standard performance storage for your workloads that require fast storage response times and/or high transaction rates and potentially save on storage costs.
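As a hedged illustration (not part of the recommendation text), a premium block blob storage account could be created with the Az.Storage module as sketched below; all names and the location are placeholders.

```powershell
# A minimal sketch (assumes the Az.Storage module; names and location are
# placeholders). BlockBlobStorage + Premium_LRS gives premium performance
# block blob storage.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mypremiumblobacct" `
    -Location "eastus" `
    -SkuName Premium_LRS `
    -Kind BlockBlobStorage
```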
Clustered columnstore tables organize data into segments. Having high se
Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](https://aka.ms/AzureSynapseCCIGuidance).
-### CCI Tables with Deleted Records Over the Recommended Threshold
-
-Deleting a row from a compressed row group only logically marks the row as deleted. The row remains in the compressed row group until the partition or table is rebuilt.
-
-Learn more about [Synapse workspace - SynapseCCIHealthDeletedRowgroups (CCI Tables with Deleted Records Over the Recommended Threshold)](https://aka.ms/AzureSynapseCCIDeletedRowGroups).
### Update SynapseManagementClient SDK Version

New SynapseManagementClient is using .NET SDK 4.0 or above.
Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/
Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](https://aka.ms/antbc-socket).
+## SAP on Azure Workloads
+
+### For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter
+
+The parameter net.ipv4.tcp_wmem specifies minimum, default, and maximum send buffer sizes that are used for a TCP socket. Set the parameter as per SAP note: 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value should not exceed the net.core.wmem_max parameter.
+
+Learn more about [Database Instance - WriteBuffersAllocated (For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+
+### For improved file system performance in HANA DB with ANF, optimize wmem_max OS parameter
+
+In HANA DB with the ANF storage type, the maximum write socket buffer, defined by the parameter net.core.wmem_max, must be set large enough to handle outgoing network packets. This configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note 3024346.
+
+Learn more about [Database Instance - MaxWriteBuffer (For improved file system performance in HANA DB with ANF, optimize wmem_max OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+
+### For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter
+
+The parameter net.ipv4.tcp_rmem specifies the minimum, default, and maximum receive buffer sizes used for a TCP socket. Set the parameter as per SAP note 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value should not exceed the net.core.rmem_max parameter.
+
+Learn more about [Database Instance - OptimizeReadTcp (For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+
+### For improved file system performance in HANA DB with ANF, optimize rmem_max OS parameter
+
+In HANA DB with the ANF storage type, the maximum read socket buffer, defined by the parameter net.core.rmem_max, must be set large enough to handle incoming network packets. This configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note 3024346.
+
+Learn more about [Database Instance - MaxReadBuffer (For improved file system performance in HANA DB with ANF, optimize rmem_max OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+
+### For improved file system performance in HANA DB with ANF, set receiver backlog queue size to 300000
+
+The parameter net.core.netdev_max_backlog specifies the size of the receiver backlog queue, which is used if a network interface receives packets faster than the kernel can process them. Set the parameter as per SAP note 3024346. This configuration certifies HANA DB to run with ANF and improves file system performance.
+
+Learn more about [Database Instance - BacklogQueueSize (For improved file system performance in HANA DB with ANF, set receiver backlog queue size to 300000)](https://launchpad.support.sap.com/#/notes/3024346).
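The buffer-size recommendations above and the backlog queue setting are all kernel parameters, so they are typically applied together through a sysctl drop-in file. The following is a minimal sketch only; the file name is arbitrary and the values shown are illustrations that must be confirmed against SAP note 3024346 for your landscape.

```bash
# Illustrative values - confirm against SAP note 3024346 before applying.
cat <<'EOF' | sudo tee /etc/sysctl.d/91-hana-anf.conf
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.core.netdev_max_backlog = 300000
EOF
# Reload all sysctl configuration files so the settings take effect.
sudo sysctl --system
```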
+
+### To improve file system performance in HANA DB with ANF, enable the TCP window scaling OS parameter
+
+Enable the TCP window scaling parameter as per SAP note 3024346. This configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads.
+
+Learn more about [Database Instance - EnableTCPWindowScaling (To improve file system performance in HANA DB with ANF, enable the TCP window scaling OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+
+### For improved file system performance in HANA DB with ANF, disable IPv6 protocol in OS
+
+Disable IPv6 as per the recommendation for SAP on Azure for HANA DB with ANF to improve file system performance.
+
+Learn more about [Database Instance - DisableIPv6Protocol (For improved file system performance in HANA DB with ANF, disable IPv6 protocol in OS)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+
+### To improve file system performance in HANA DB with ANF, disable parameter for slow start after idle
+
+The parameter net.ipv4.tcp_slow_start_after_idle disables the need to incrementally scale up the TCP window size for TCP connections that were idle for some time. By setting this parameter to zero as per SAP note 3024346, the maximum speed is used from the beginning for previously idle TCP connections.
+
+Learn more about [Database Instance - ParamterSlowStart (To improve file system performance in HANA DB with ANF, disable parameter for slow start after idle)](https://launchpad.support.sap.com/#/notes/3024346).
+
+### For improved file system performance in HANA DB with ANF optimize tcp_max_syn_backlog OS parameter
+
+To prevent the kernel from using SYN cookies in a situation where lots of connection requests are sent in a short timeframe, and to prevent a warning about a potential SYN flooding attack in the system log, the size of the SYN backlog should be set to a reasonably high value. See SAP note 2382421.
+
+Learn more about [Database Instance - TCPMaxSynBacklog (For improved file system performance in HANA DB with ANF optimize tcp_max_syn_backlog OS parameter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+
+### For improved file system performance in HANA DB with ANF, enable the tcp_sack OS parameter
+
+Enable the tcp_sack parameter as per SAP note 3024346. This configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads.
+
+Learn more about [Database Instance - TCPSackParameter (For improved file system performance in HANA DB with ANF, enable the tcp_sack OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+
+### In high-availability scenarios for HANA DB with ANF, disable the tcp_timestamps OS parameter
+
+Disable the tcp_timestamps parameter as per SAP note 3024346. This configuration certifies HANA DB to run with ANF and improves file system performance in high-availability scenarios for HANA DB with ANF in SAP workloads.
+
+Learn more about [Database Instance - DisableTCPTimestamps (In high-availability scenarios for HANA DB with ANF, disable the tcp_timestamps OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+
+### For improved file system performance in HANA DB with ANF, enable the tcp_timestamps OS parameter
+
+Enable the tcp_timestamps parameter as per SAP note 3024346. This configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads.
+
+Learn more about [Database Instance - EnableTCPTimestamps (For improved file system performance in HANA DB with ANF, enable the tcp_timestamps OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+
+### To improve file system performance in HANA DB with ANF, enable auto-tuning TCP receive buffer size
+
+The parameter net.ipv4.tcp_moderate_rcvbuf enables TCP to perform receive buffer auto-tuning, automatically sizing the buffer (no greater than tcp_rmem) to match the size required by the path for full throughput. Enable this parameter as per SAP note 3024346 for improved file system performance.
+
+Learn more about [Database Instance - EnableAutoTuning (To improve file system performance in HANA DB with ANF, enable auto-tuning TCP receive buffer size)](https://launchpad.support.sap.com/#/notes/3024346).
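Several of the preceding TCP behavior settings (window scaling, slow start after idle, selective acknowledgement, timestamps, receive buffer auto-tuning, and the IPv6 setting) are also sysctl parameters and can be captured the same way. The sketch below assumes the non-high-availability case where tcp_timestamps stays enabled; the file name is arbitrary and the values should be validated against SAP note 3024346.

```bash
# Illustrative values - set tcp_timestamps to 0 instead in the high-availability scenario above.
cat <<'EOF' | sudo tee /etc/sysctl.d/92-hana-tcp.conf
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv6.conf.all.disable_ipv6 = 1
EOF
sudo sysctl --system
```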
+
+### For improved file system performance in HANA DB with ANF, optimize net.ipv4.ip_local_port_range
+
+As HANA uses a considerable number of connections for its internal communication, it makes sense to have as many client ports available as possible for this purpose. Set the OS parameter net.ipv4.ip_local_port_range as per SAP note 2382421 to ensure optimal internal HANA communication.
+
+Learn more about [Database Instance - IPV4LocalPortRange (For improved file system performance in HANA DB with ANF, optimize net.ipv4.ip_local_port_range)](https://launchpad.support.sap.com/#/notes/2382421).
+
+### To improve file system performance in HANA DB with ANF, optimize sunrpc.tcp_slot_table_entries
+
+Set the parameter sunrpc.tcp_slot_table_entries to 128 as per the recommendation for improved file system performance in HANA DB with ANF in SAP workloads.
+
+Learn more about [Database Instance - TCPSlotTableEntries (To improve file system performance in HANA DB with ANF, optimize sunrpc.tcp_slot_table_entries)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
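Both of the settings above can be tested at runtime with `sysctl -w` before being persisted to a file under `/etc/sysctl.d/`. The port range below is only an example value; take the actual range from SAP note 2382421 for your HANA release.

```bash
# Example values only - verify the port range against SAP note 2382421.
sudo sysctl -w net.ipv4.ip_local_port_range="9000 65499"
# Requires the sunrpc kernel module to be loaded; persist both settings under /etc/sysctl.d/ to survive reboots.
sudo sysctl -w sunrpc.tcp_slot_table_entries=128
```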
+
+### All disks in LVM for /hana/data volume should be of the same type to ensure high performance in HANA DB
+
+If multiple disk types are selected for the /hana/data volume, performance of HANA DB in SAP workloads might be constrained. Ensure all HANA data volume disks are of the same type and are configured as per the recommendation for SAP on Azure.
+
+Learn more about [Database Instance - HanaDataDiskTypeSame (All disks in LVM for /hana/data volume should be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Configuration%20for%20SAP%20/hana/data%20volume).
+
+### Stripe size for /hana/data should be 256 kb for improved performance of HANA DB in SAP workloads
+
+If you are using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. Based on experience with recent Linux versions, Azure recommends using a stripe size of 256 KB for the /hana/data file system for better performance of HANA DB.
+
+Learn more about [Database Instance - HanaDataStripeSize (Stripe size for /hana/data should be 256 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=As%20stripe%20sizes%20the%20recommendation%20is%20to%20use).
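With LVM, the stripe size is fixed when the logical volume is created. The following sketch uses hypothetical Azure data-disk device paths, a hypothetical volume group name, and XFS purely as an example file system; only the `-I 256` stripe size reflects the recommendation above.

```bash
# Hypothetical devices and names - adjust to your environment.
sudo pvcreate /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
sudo vgcreate vg_hana_data /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
# -i: number of stripes (one per disk); -I: stripe size in KB (256 KB for /hana/data).
sudo lvcreate -i 2 -I 256 -l 100%FREE -n lv_hana_data vg_hana_data
sudo mkfs.xfs /dev/vg_hana_data/lv_hana_data
```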
+
+### To improve file system performance in HANA DB with ANF, optimize the parameter vm.swappiness
+
+Set the OS parameter vm.swappiness to 10 as per the recommendation for improved file system performance in HANA DB with ANF in SAP workloads.
+
+Learn more about [Database Instance - VmSwappiness (To improve file system performance in HANA DB with ANF, optimize the parameter vm.swappiness)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+
+### To improve file system performance in HANA DB with ANF, disable net.ipv4.conf.all.rp_filter
+
+Disable the reverse path filter Linux OS parameter, net.ipv4.conf.all.rp_filter, as per the recommendation for improved file system performance in HANA DB with ANF in SAP workloads.
+
+Learn more about [Database Instance - DisableIPV4Conf (To improve file system performance in HANA DB with ANF, disable net.ipv4.conf.all.rp_filter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
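Both vm.swappiness and the reverse path filter are part of the `/etc/sysctl.d/ms-az.conf` file described in the linked guidance. The following minimal sketch appends only these two settings; the full file in the guidance defines additional parameters.

```bash
# Partial sketch - the linked guidance defines more settings in the same file.
cat <<'EOF' | sudo tee -a /etc/sysctl.d/ms-az.conf
vm.swappiness = 10
net.ipv4.conf.all.rp_filter = 0
EOF
sudo sysctl --system
```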
+
+### If using Ultradisk, the IOPS for /hana/data volume should be >=7000 for better HANA DB performance
+
+IOPS of at least 7000 on the /hana/data volume is recommended for SAP workloads when using Ultra disk. Select the disk type for the /hana/data volume as per this requirement to ensure high performance of the DB.
+
+Learn more about [Database Instance - HanaDataIOPS (If using Ultradisk, the IOPS for /hana/data volume should be >=7000 for better HANA DB performance)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana:~:text=1%20x%20P6-,Azure%20Ultra%20disk%20storage%20configuration%20for%20SAP%20HANA,-Another%20Azure%20storage).
+
+### To improve file system performance in HANA DB with ANF, change parameter tcp_max_slot_table_entries
+
+Set the OS parameter tcp_max_slot_table_entries to 128 as per SAP note 3024346 for improved file transfer performance in HANA DB with ANF in SAP workloads.
+
+Learn more about [Database Instance - OptimizeTCPMaxSlotTableEntries (To improve file system performance in HANA DB with ANF, change parameter tcp_max_slot_table_entries)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
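Module options such as sunrpc tcp_max_slot_table_entries are commonly persisted through a modprobe configuration file. The file name below is an assumption, not a prescribed path; follow the linked guidance for the exact file and value to use.

```bash
# Assumed file name - standard "options <module> <parameter>=<value>" modprobe syntax.
echo "options sunrpc tcp_max_slot_table_entries=128" | sudo tee /etc/modprobe.d/sunrpc-local.conf
# Takes effect the next time the sunrpc module is loaded (for example, after a reboot).
```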
+
+### Ensure the read performance of /hana/data volume is >=400 MB/sec for better performance in HANA DB
+
+Read activity of at least 400 MB/sec for /hana/data for 16 MB and 64 MB I/O sizes is recommended for SAP workloads on Azure. Select the disk type for /hana/data as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA.
+
+Learn more about [Database Instance - HanaDataVolumePerformance (Ensure the read performance of /hana/data volume is >=400 MB/sec for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Read%20activity%20of%20at%20least%20400%20MB/sec%20for%20/hana/data).
+
+### Read/write performance of /hana/log volume should be >=250 MB/sec for better performance in HANA DB
+
+Read/write activity of at least 250 MB/sec for /hana/log for 1 MB I/O size is recommended for SAP workloads on Azure. Select the disk type for the /hana/log volume as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA.
+
+Learn more about [Database Instance - HanaLogReadWriteVolume (Read/write performance of /hana/log volume should be >=250 MB/sec for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Read/write%20on%20/hana/log%20of%20250%20MB/sec%20with%201%20MB%20I/O%20sizes).
+
+### If using Ultradisk, the IOPS for /hana/log volume should be >=2000 for better performance in HANA DB
+
+IOPS of at least 2000 on the /hana/log volume is recommended for SAP workloads when using Ultra disk. Select the disk type for the /hana/log volume as per this requirement to ensure high performance of the DB.
+
+Learn more about [Database Instance - HanaLogIOPS (If using Ultradisk, the IOPS for /hana/log volume should be >=2000 for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana:~:text=1%20x%20P6-,Azure%20Ultra%20disk%20storage%20configuration%20for%20SAP%20HANA,-Another%20Azure%20storage).
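On Azure Ultra disks, provisioned IOPS and throughput can be adjusted independently of disk size. A minimal sketch with placeholder resource group and disk names, using the minimum values called out in the preceding recommendations:

```azurecli-interactive
# Placeholder names; --disk-iops-read-write and --disk-mbps-read-write apply to Ultra disks only.
az disk update --resource-group MyResourceGroup --name hana-data-disk --disk-iops-read-write 7000 --disk-mbps-read-write 400
az disk update --resource-group MyResourceGroup --name hana-log-disk --disk-iops-read-write 2000 --disk-mbps-read-write 250
```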
+
+### All disks in LVM for /hana/log volume should be of the same type to ensure high performance in HANA DB
+
+If multiple disk types are selected for the /hana/log volume, performance of HANA DB in SAP workloads might be constrained. Ensure all HANA log volume disks are of the same type and are configured as per the recommendation for SAP on Azure.
+
+Learn more about [Database Instance - HanaDiskLogVolumeSameType (All disks in LVM for /hana/log volume should be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=For%20the%20/hana/log%20volume.%20the%20configuration%20would%20look%20like).
+
+### Enable Write Accelerator on /hana/log volume with Premium disk for improved write latency in HANA DB
+
+Azure Write Accelerator is a functionality for Azure M-Series VMs. It improves I/O latency of writes against the Azure premium storage. For SAP HANA, Write Accelerator is to be used against the /hana/log volume only.
+
+Learn more about [Database Instance - WriteAcceleratorEnabled (Enable Write Accelerator on /hana/log volume with Premium disk for improved write latency in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=different%20SAP%20applications.-,Solutions%20with%20premium%20storage%20and%20Azure%20Write%20Accelerator%20for%20Azure%20M%2DSeries%20virtual%20machines,-Azure%20Write%20Accelerator).
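Write Accelerator is enabled per disk LUN on M-series VMs. A minimal sketch with a placeholder VM name and LUN; enable it only on the Premium disk(s) that back the /hana/log volume:

```azurecli-interactive
# Placeholder names and LUN - map the LUN to the disk(s) behind /hana/log.
az vm update --resource-group MyResourceGroup --name MyHanaVm --write-accelerator 2=true
```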
+
+### Stripe size for /hana/log should be 64 kb for improved performance of HANA DB in SAP workloads
+
+If you are using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. To get enough throughput with larger I/O sizes, Azure recommends using a stripe size of 64 KB for the /hana/log file system for better performance of HANA DB.
+
+Learn more about [Database Instance - HanaLogStripeSize (Stripe size for /hana/log should be 64 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=As%20stripe%20sizes%20the%20recommendation%20is%20to%20use).
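The stripe size for /hana/log is set the same way as for /hana/data, only with `-I 64`. A minimal sketch assuming a hypothetical volume group already built from the log disks:

```bash
# Hypothetical volume group name; -I 64 sets a 64 KB stripe size across two disks.
sudo lvcreate -i 2 -I 64 -l 100%FREE -n lv_hana_log vg_hana_log
```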
+ ## Next steps

Learn more about [Performance Efficiency - Microsoft Azure Well Architected Framework](/azure/architecture/framework/scalability/overview)
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
For more information on core Kubernetes and AKS concepts, see:
- [Kubernetes / AKS scale][aks-concepts-scale] <!-- LINKS - External -->
-[kured]: https://github.com/weaveworks/kured
+[kured]: https://github.com/kubereboot/kured
[kubernetes-network-policies]: https://kubernetes.io/docs/concepts/services-networking/network-policies/ [secret-risks]: https://kubernetes.io/docs/concepts/configuration/secret/#risks [encryption-atrest]: ../security/fundamentals/encryption-atrest.md
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
To deploy the `kured` DaemonSet, install the following official Kured Helm chart
```console
# Add the Kured Helm repository
-helm repo add kured https://weaveworks.github.io/kured
+helm repo add kubereboot https://kubereboot.github.io/charts/
# Update your local Helm chart repository cache
helm repo update
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
For public preview the following limitations exist:
- Maximum configured number of authorization providers per API Management instance: 1,000
- Maximum configured number of authorizations per authorization provider: 10,000
- Maximum configured number of access policies per authorization: 100
-- Maximum requests per minute per authorization: 100
+- Maximum requests per minute per service: 250
- Authorization code PKCE flow with code challenge isn't supported. - Authorizations feature isn't supported on self-hosted gateways. - API documentation is not available yet. Please see [this](https://github.com/Azure/APIManagement-Authorizations) GitHub repository with samples.
app-service App Service Web Tutorial Dotnet Sqldatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-dotnet-sqldatabase.md
The sample project contains a basic [ASP.NET MVC](https://www.asp.net/mvc) creat
1. Open the *dotnet-sqldb-tutorial-master/DotNetAppSqlDb.sln* file in Visual Studio.
-1. Type `Ctrl+F5` to run the app without debugging. The app is displayed in your default browser.
+1. Type `F5` to run the app. The app is displayed in your default browser.
+
+ > [!NOTE]
+ > If you only installed Visual Studio and the prerequisites, you may have to [install missing packages via NuGet](/nuget/consume-packages/install-use-packages-visual-studio).
1. Select the **Create New** link and create a couple *to-do* items.
More resources:
Want to optimize and save on your cloud spending? > [!div class="nextstepaction"]
-> [Start analyzing costs with Cost Management](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+> [Start analyzing costs with Cost Management](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
Other language stacks, likewise, get the app settings as environment variables a
- [ASP.NET Core](configure-language-dotnetcore.md#access-environment-variables)
- [Node.js](configure-language-nodejs.md#access-environment-variables)
- [PHP](configure-language-php.md#access-environment-variables)
-- [Python](configure-language-python.md#access-environment-variables)
+- [Python](configure-language-python.md#access-app-settings-as-environment-variables)
- [Java](configure-language-java.md#configure-data-sources)
- [Ruby](configure-language-ruby.md#access-environment-variables)
- [Custom containers](configure-custom-container.md#configure-environment-variables)
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
In PowerShell:
Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"DB_HOST"="myownserver.mysql.database.azure.com"} ```
-When your app runs, the App Service app settings are injected into the process as environment variables automatically. You can verify container environment variables with the URL `https://<app-name>.scm.azurewebsites.net/Env)`.
+When your app runs, the App Service app settings are injected into the process as environment variables automatically. You can verify container environment variables with the URL `https://<app-name>.scm.azurewebsites.net/Env`.
If your app uses images from a private registry or from Docker Hub, credentials for accessing the repository are saved in environment variables: `DOCKER_REGISTRY_SERVER_URL`, `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD`. Because of security risks, none of these reserved variable names are exposed to the application.
When persistent storage is disabled, then writes to the `C:\home` directory are
The only exception is the `C:\home\LogFiles` directory, which is used to store the container and application logs. This folder will always persist upon app restarts if [application logging is enabled](troubleshoot-diagnostic-logs.md?#enable-application-logging-windows) with the **File System** option, independently of the persistent storage being enabled or disabled. In other words, enabling or disabling the persistent storage will not affect the application logging behavior.
+By default, persistent storage is *disabled* on Windows custom containers. To enable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `true` via the [Cloud Shell](https://shell.azure.com). In Bash:
+
+```azurecli-interactive
+az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
+```
+
+In PowerShell:
+
+```azurepowershell-interactive
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITES_ENABLE_APP_SERVICE_STORAGE"=true}
+```
+ ::: zone-end ::: zone pivot="container-linux"
The only exception is the `/home/LogFiles` directory, which is used to store the
It is recommended to write data to `/home` or a [mounted azure storage path](configure-connect-to-azure-storage.md?tabs=portal&pivots=container-linux). Data written outside these paths will not be persistent during restarts and will be saved to platform-managed host disk space separate from the App Service Plans file storage quota. -
-By default, persistent storage is **enabled** on custom containers, you can disable this through app settings. To disable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `false` via the [Cloud Shell](https://shell.azure.com). In Bash:
+By default, persistent storage is *enabled* on Linux custom containers. To disable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `false` via the [Cloud Shell](https://shell.azure.com). In Bash:
```azurecli-interactive az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=false
In PowerShell:
Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITES_ENABLE_APP_SERVICE_STORAGE"=false} ``` + > [!NOTE] > You can also [configure your own persistent storage](configure-connect-to-azure-storage.md).
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
If your certificate authority gives you multiple certificates in the certificate
Now, export your merged TLS/SSL certificate with the private key that was used to generate your certificate request. If you generated your certificate request using OpenSSL, then you created a private key file.
+> [!NOTE]
+> OpenSSL v3 creates certificate serials with 20 octets (40 chars) as the X.509 specification allows. Currently only 10 octets (20 chars) is supported when uploading certificate PFX files.
+> OpenSSL v3 also changed default cipher from 3DES to AES256, but this can be overridden on the command line.
+> OpenSSL v1 uses 3DES as default and only uses 8 octets (16 chars) in the serial, so the PFX files generated are supported without any special modifications.
+ 1. To export your certificate to a PFX file, run the following command, but replace the placeholders _&lt;private-key-file>_ and _&lt;merged-certificate-file>_ with the paths to your private key and your merged certificate file. ```bash
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli). ```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/--assignee-principal-type ServicePrincipal
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/ --assignee-principal-type ServicePrincipal
``` 1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
You can find our set of Actions grouped into different repositories on GitHub, e
- [K8s deploy](https://github.com/Azure/k8s-deploy) -- [Starter Workflows](https://github.com/actions/starter-workflows)
+- [Starter Workflows](https://github.com/actions/starter-workflows)
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-local-git.md
When you push commits to your App Service repository, App Service deploys the fi
git push azure main ```
+ You can also change the `DEPLOYMENT_BRANCH` app setting in the Azure Portal, by selecting **Configuration** under **Settings** and adding a new Application Setting with a name of `DEPLOYMENT_BRANCH` and value of `main`.
++ ## Troubleshoot deployment You may see the following common error messages when you use Git to publish to an App Service app in Azure:
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md
The app must be running in the **Standard**, **Premium**, or **Isolated** tier i
You can clone a configuration from any existing slot. Settings that can be cloned include app settings, connection strings, language framework versions, web sockets, HTTP version, and platform bitness. > [!NOTE]
- > Currently, VNET and the Private Endpoint are not cloned across slots.
+ > Currently, a Private Endpoint isn't cloned across slots.
> 4. After the slot is added, select **Close** to close the dialog box. The new slot is now shown on the **Deployment slots** page. By default, **Traffic %** is set to 0 for the new slot, with all customer traffic routed to the production slot.
To route production traffic automatically:
After the setting is saved, the specified percentage of clients is randomly routed to the non-production slot.
-After a client is automatically routed to a specific slot, it's "pinned" to that slot for the life of that client session. On the client browser, you can see which slot your session is pinned to by looking at the `x-ms-routing-name` cookie in your HTTP headers. A request that's routed to the "staging" slot has the cookie `x-ms-routing-name=staging`. A request that's routed to the production slot has the cookie `x-ms-routing-name=self`.
+After a client is automatically routed to a specific slot, it's "pinned" to that slot for one hour or until the cookies are deleted. On the client browser, you can see which slot your session is pinned to by looking at the `x-ms-routing-name` cookie in your HTTP headers. A request that's routed to the "staging" slot has the cookie `x-ms-routing-name=staging`. A request that's routed to the production slot has the cookie `x-ms-routing-name=self`.
> [!NOTE] > You can also use the [`az webapp traffic-routing set`](/cli/azure/webapp/traffic-routing#az-webapp-traffic-routing-set) command in the Azure CLI to set the routing percentages from CI/CD tools like GitHub Actions, DevOps pipelines, or other automation systems.
app-service Overview Patch Os Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-patch-os-runtime.md
This article shows you how to get certain version information regarding the OS or software in [App Service](overview.md).
-App Service is a Platform-as-a-Service, which means that the OS and application stack are managed for you by Azure; you only manage your application and its data. More control over the OS and application stack is available you in [Azure Virtual Machines](../virtual-machines/index.yml). With that in mind, it is nevertheless helpful for you as an App Service user to know more information, such as:
+App Service is a Platform-as-a-Service, which means that the OS and application stack are managed for you by Azure; you only manage your application and its data. More control over the OS and application stack is available for you in [Azure Virtual Machines](../virtual-machines/index.yml). With that in mind, it is nevertheless helpful for you as an App Service user to know more information, such as:
- How and when are OS updates applied? - How is App Service patched against significant vulnerabilities (such as zero-day)?
app-service Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md
You can deploy changes to this app by making edits in Visual Studio Code, saving
:::zone target="docs" pivot="development-environment-cli"
-2. Save your changes, then redeploy the app using the [az webapp up](/cli/azure/webapp#az-webapp-up) command again with no arguments:
+2. Save your changes, then redeploy the app using the [az webapp up](/cli/azure/webapp#az-webapp-up) command again with no arguments for Linux. Add `--os-type Windows` for Windows:
```azurecli az webapp up
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
az webapp up --runtime "PHP:8.0" --os-type=linux
``` - If the `az` command isn't recognized, be sure you have [Azure CLI](/cli/azure/install-azure-cli) installed.-- The `--runtime "php|8.0"` argument creates the web app with PHP version 8.0.
+- The `--runtime "PHP:8.0"` argument creates the web app with PHP version 8.0.
- The `--os-type=linux` argument creates the web app on App Service on Linux. - You can optionally specify a name with the argument `--name <app-name>`. If you don't provide one, then a name will be automatically generated. - You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az_appservice_list_locations) command.
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
Title: 'Quickstart: Create a WordPress site' description: Create your first WordPress site on Azure App Service in minutes. keywords: app service, azure app service, wordpress, preview, app service on linux, plugins, mysql flexible server, wordpress on linux, php+ Last updated 06/27/2022 ms.devlang: wordpress
[WordPress](https://www.wordpress.org) is an open source content management system (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure
-In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). This quickstart uses the **Basic** tier and [**incurs a cost**](https://azure.microsoft.com/pricing/details/app-service/linux/) for your Azure subscription. The WordPress installation comes with pre-installed plugins for performance improvements, [W3TC](https://wordpress.org/plugins/w3-total-cache/) for caching and [Smush](https://wordpress.org/plugins/wp-smushit/) for image compression.
+In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) with [Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). This quickstart uses the **Basic** tier for your app and a **Burstable, B1ms** tier for your database, and incurs a cost for your Azure Subscription. For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
To complete this quickstart, you need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs). > [!IMPORTANT]
-> - [After November 28, 2022, PHP will only be supported on App Service on Linux.](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#end-of-life-for-php-74).
-> - The MySQL Flexible Server is created behind a private [Virtual Network](/azure/virtual-network/virtual-networks-overview) and can't be accessed directly. To access the database, use phpMyAdmin that's deployed with the WordPress site. It can be found at the URL : https://`<sitename>`.azurewebsites.net/phpmyadmin
+> After November 28, 2022, [PHP will only be supported on App Service on Linux.](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#end-of-life-for-php-74).
>
-> Additional documentation, including [Migrating to App Service](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_migration_linux_appservices.md), can be found at [WordPress - App Service on Linux](https://github.com/Azure/wordpress-linux-appservice/tree/main/WordPress). If you have feedback to improve this WordPress offering on App Service, submit your ideas at [Web Apps Community](https://feedback.azure.com/d365community/forum/b09330d1-c625-ec11-b6e6-000d3a4f0f1c).
+> Additional documentation, including [Migrating to App Service](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_migration_linux_appservices.md), can be found at [WordPress - App Service on Linux](https://github.com/Azure/wordpress-linux-appservice/tree/main/WordPress).
>+ ## Create WordPress site using Azure portal 1. To start creating the WordPress site, browse to [https://ms.portal.azure.com/#create/WordPress.WordPress](https://ms.portal.azure.com/#create/WordPress.WordPress).
To complete this quickstart, you need an Azure account with an active subscripti
:::image type="content" source="./media/quickstart-wordpress/04-wordpress-basics-project-details.png?text=Azure portal WordPress Project Details" alt-text="Screenshot of WordPress project details.":::
-1. Under **Instance details**, type a globally unique name for your web app and choose **Linux** for **Operating System**. Select **Basic** for **Hosting plan**. See the table below for app and database SKUs for given hosting plans. You can view [hosting plans details in the announcement](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/the-new-and-better-wordpress-on-app-service/ba-p/3202594). For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
+1. Under **Hosting details**, type a globally unique name for your web app and choose **Linux** for **Operating System**. Select **Basic** for **Hosting plan**. Select **Compare plans** to view features and price comparisons. See the table below for app and database SKUs for given hosting plans. You can view [hosting plans details in the announcement](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/announcing-the-general-availability-of-wordpress-on-azure-app/ba-p/3593481). For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
:::image type="content" source="./media/quickstart-wordpress/05-wordpress-basics-instance-details.png?text=WordPress basics instance details" alt-text="Screenshot of WordPress instance details.":::
To complete this quickstart, you need an Azure account with an active subscripti
1. To access the WordPress Admin page, browse to `/wp-admin` and use the credentials you created in the [WordPress settings step](#wordpress-settings). :::image type="content" source="./media/quickstart-wordpress/wordpress-admin-login.png?text=WordPress admin login" alt-text="Screenshot of WordPress admin login.":::
+
+> [!NOTE]
+> If you have feedback to improve this WordPress offering on App Service, submit your ideas at [Web Apps Community](https://feedback.azure.com/d365community/forum/b09330d1-c625-ec11-b6e6-000d3a4f0f1c).
+>
+
## Clean up resources When no longer needed, you can delete the resource group, App service, and all related resources.
When no longer needed, you can delete the resource group, App service, and all r
1. From the *resource group* page, select **Delete resource group**. Confirm the name of the resource group to finish deleting the resources. :::image type="content" source="./media/quickstart-wordpress/delete-resource-group.png" alt-text="Delete resource group.":::
-## Change MySQL password
-The WordPress configuration is modified to use [Application Settings](reference-app-settings.md#wordpress) to connect to the MySQL database. To change the MySQL database password, see [update admin password](../mysql/single-server/how-to-create-manage-server-portal.md#update-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) also need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [Changing MySQL database password](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/changing_mysql_database_password.md).
+## Manage the MySQL flexible server, username, or password
+
+- The MySQL Flexible Server is created behind a private [Virtual Network](/azure/virtual-network/virtual-networks-overview.md) and can't be accessed directly. To access or manage the database, use phpMyAdmin that's deployed with the WordPress site. You can access phpMyAdmin by following these steps:
+ - Navigate to the URL: https://`<sitename>`.azurewebsites.net/phpmyadmin
+ - Log in with the flexible server's username and password
+
+- The database username and password of the MySQL Flexible Server are generated automatically. To retrieve these values after the deployment, go to the Application Settings section of the Configuration page in Azure App Service. The WordPress configuration is modified to use these [Application Settings](reference-app-settings.md#wordpress) to connect to the MySQL database.
+
+- To change the MySQL database password, see [Reset admin password](/azure/mysql/flexible-server/how-to-manage-server-portal#reset-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/changing_mysql_database_password.md).
## Change WordPress admin password
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
Next, you configure your App Service app to connect to SQL Database with a manag
```sql SET aad_validate_oids_in_tenant = off;
- CREATE ROLE <postgresql-user-name> WITH LOGIN PASSWORD '<application-id-of-system-assigned-identity>' IN ROLE azure_ad_user;
+ CREATE ROLE <postgresql-user-name> WITH LOGIN PASSWORD '<application-id-of-user-assigned-identity>' IN ROLE azure_ad_user;
``` Whatever name you choose for *\<postgresql-user-name>*, it's the PostgreSQL user you'll use to connect to the database later from your code in App Service.
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
First, enable Azure Active Directory authentication to SQL Database by assigning
1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az-ad-user-list) and replace *\<user-principal-name>*. The result is saved to a variable. ```azurecli-interactive
- azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].objectId --output tsv)
+ azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query '[].id' --output tsv)
``` > [!TIP]
- > To see the list of all user principal names in Azure AD, run `az ad user list --query [].userPrincipalName`.
+ > To see the list of all user principal names in Azure AD, run `az ad user list --query '[].userPrincipalName'`.
> 1. Add this Azure AD user as an Active Directory admin using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az-sql-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<server-name>* with the server name (without the `.database.windows.net` suffix).
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
For information about scenarios where 502 errors occur, and how to troubleshoot
#### 504 ΓÇô Request timeout
-HTTP 504 errors are presented if a request is sent to application gateways using v2 sku, and the backend response exceeds the time-out value associated to the listener's rule. This value is defined in the HTTP setting.
+HTTP 504 errors are presented if a request is sent to application gateways using v2 sku, and the backend response time exceeds the time-out value associated to the listener's rule. This value is defined in the HTTP setting.
## Next steps
-If the information in this article doesn't help to resolve the issue, [submit a support ticket](https://azure.microsoft.com/support/options/).
+If the information in this article doesn't help to resolve the issue, [submit a support ticket](https://azure.microsoft.com/support/options/).
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
The Private link configuration defines the infrastructure used by Application Ga
**Configure Private Endpoint**
-A private endpoint is a network interface that uses a private IP address from the virtual network containing clients wishing to connect to your gateway. Each of the clients will use the private IP address of the Private Endpoint to tunnel traffic to the Application Gateway. To create a private endpoint, complete the following steps:
+A private endpoint is a network interface that uses a private IP address from the virtual network containing clients wishing to connect to your Application Gateway. Each of the clients will use the private IP address of the Private Endpoint to tunnel traffic to the Application Gateway. To create a private endpoint, complete the following steps:
1. Select the **Private endpoint connections** tab. 1. Select **Create**.
application-gateway Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-powershell.md
$defaultlistener = New-AzApplicationGatewayHttpListener `
$frontendRule = New-AzApplicationGatewayRequestRoutingRule ` -Name rule1 ` -RuleType Basic `
+ -Priority 100 `
-HttpListener $defaultlistener ` -BackendAddressPool $backendPool ` -BackendHttpSettings $poolSettings
applied-ai-services Compose Custom Models V2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-v2-1.md
The [**REST API**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language
* Java | [CustomFormModelInfo Class](/java/api/com.azure.ai.formrecognizer.training.models.customformmodelinfo?view=azure-java-stable&preserve-view=true#methods "Azure SDK for Java")
-* JavaScript | [CustomFormModelInfo interface](/javascript/api/@azure/ai-form-recognizer/customformmodelinfo?view=azure-node-latest&preserve-view=true&branch=main#properties "Azure SDK for JavaScript")
+* JavaScript | CustomFormModelInfo interface
* Python | [CustomFormModelInfo Class](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.customformmodelinfo?view=azure-python&preserve-view=true&branch=main#variables "Azure SDK for Python")
Use the programming language code of your choice to create a composed model that
* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md).
-* [**Java**](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeModel.java).
+* [**Java**](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeDocumentModel.java).
* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/createComposedModel.js).
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
See how data, including name, job title, address, email, and company name, is ex
| Model | LanguageΓÇöLocale code | Default | |--|:-|:|
-|Business card| <ul><li>English (United States)ΓÇöen-US</li><li> English (Australia)ΓÇöen-AU</li><li>English (Canada)ΓÇöen-CA</li><li>English (United Kingdom)ΓÇöen-GB</li><li>English (India)ΓÇöen-IN</li><li>English (Japan)ΓÇöen-JP</li><li>Japanese (Japan)ΓÇöja-JP</li></ul> | Autodetected (en-US or ja-JP) |
+|Business card (v3.0 API)| <ul><li>English (United States)ΓÇöen-US</li><li> English (Australia)ΓÇöen-AU</li><li>English (Canada)ΓÇöen-CA</li><li>English (United Kingdom)ΓÇöen-GB</li><li>English (India)ΓÇöen-IN</li><li>English (Japan)ΓÇöen-JP</li><li>Japanese (Japan)ΓÇöja-JP</li></ul> | Autodetected (en-US or ja-JP) |
+|Business card (v2.1 API)| <ul><li>English (United States)ΓÇöen-US</li><li> English (Australia)ΓÇöen-AU</li><li>English (Canada)ΓÇöen-CA</li><li>English (United Kingdom)ΓÇöen-GB</li><li>English (India)ΓÇöen-IN</li></ul> | Autodetected |
## Field extractions
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
The following resources are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-| |_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-| _**Composed model**_ |<ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</li><li>[JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/begincreatecomposedmodeloptions?view=azure-node-latest&preserve-view=true)</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
+| _**Composed model**_ |<ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</li><li>JavaScript SDK</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
## Next steps
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
This table provides links to the build mode programming language SDK references
|Programming language | SDK reference | Code sample | |||| | C#/.NET | [DocumentBuildMode Struct](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentbuildmode?view=azure-dotnet&preserve-view=true#properties) | [Sample_BuildCustomModelAsync.cs](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/tests/samples/Sample_BuildCustomModelAsync.cs)
-|Java| DocumentBuildMode Class | [BuildModel.java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/BuildModel.java)|
+|Java| DocumentBuildMode Class | [BuildModel.java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/BuildDocumentModel.java)|
|JavaScript | [DocumentBuildMode type](/javascript/api/@azure/ai-form-recognizer/documentbuildmode?view=azure-node-latest&preserve-view=true)| [buildModel.js](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js)| |Python | DocumentBuildMode Enum| [sample_build_model.py](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_build_model.py)|
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
The general document API supports most form types and will analyze your document
The following tools are supported by Form Recognizer v3.0:
-| Feature | Resources |
-|-|-|
-| **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
+| Feature | Resources | Model ID
+|-|-||
+| **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-document**|
### Try Form Recognizer
Keys can also exist in isolation when the model detects that a key exists, with
* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities. > [!div class="nextstepaction"]
-> [Try the Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+> [Try the Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
recommendations: false
# Form Recognizer W-2 model | v3.0
-The Form W-2, Wage and Tax Statement, is a [US Internal Revenue Service (IRS) tax form](https://www.irs.gov/forms-pubs/about-form-w-2). It's used to report employees' salary, wages, compensation, and taxes withheld. Employers send a W-2 form to each employee on or before January 31 each year and employees use the form to prepare their tax returns. W-2 is a key document used in employee's federal and state taxes filing, as well as other processes like mortgage loan and Social Security Administration (SSA).
-
-A W-2 is a multipart form divided into state and federal sections and consisting of more than 14 boxes that details an employee's income from the previous year. The Form Recognizer W-2 model, combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present. Both [single and multiple forms](https://en.wikipedia.org/wiki/Form_W-2#Filing_requirements) are also supported.
+The Form Recognizer W-2 model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported on [US Internal Revenue Service (IRS) tax forms](https://www.irs.gov/forms-pubs/about-form-w-2). A W-2 tax form is a multipart form divided into state and federal sections consisting of more than 14 boxes detailing an employee's income from the previous year. The W-2 tax form is a key document used in employees' federal and state tax filings, as well as other processes like mortgage loans and Social Security Administration (SSA) benefits. The Form Recognizer W-2 model supports both single and multiple standard and customized forms from 2018 to the present.
***Sample W-2 tax form processed using Form Recognizer Studio***
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
The following lists include the currently GA languages for the v2.1 version a
> > Form Recognizer's deep learning based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
-To use the v3.0-supported languages, refer to the [v3.0 REST API migration guide](/rest/api/medi).
+To use the v3.0-supported languages, refer to the [v3.0 REST API migration guide](v3-migration-guide.md) to understand the differences from the v2.1 GA API and explore the [v3.0 SDK and REST API quickstarts](quickstarts/get-started-v3-sdk-rest-api.md).
### Handwritten text (v3.0 and v2.1)
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
The patch addresses invoices that don't have subline item fields detected such a
> [!NOTE] > There are no updates to JavaScript SDK v3.1.0.
-| [Reference documentation](/javascript/api/@azure/ai-form-recognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
+| [Reference documentation](/javascript/api/@azure/cognitiveservices-formrecognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
### [**Python**](#tab/python)
The updated Layout API table feature adds header recognition with column headers
### [**JavaScript**](#tab/javascript)
-| [Reference documentation](/javascript/api/@azure/ai-form-recognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
+| [Reference documentation](/javascript/api/@azure/cognitiveservices-formrecognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
#### **Non-breaking changes**
npm package version 3.1.0-beta.3
* **New methods to analyze data from identity documents**:
- **[azure-ai-form-recognizer-formrecognizerclient-beginrecognizeidentitydocumentsfromurl](/javascript/api/@azure/ai-form-recognizer/formrecognizerclient?view=azure-node-preview&preserve-view=true&branch=main#beginRecognizeIdDocumentsFromUrl_string__BeginRecognizeIdDocumentsOptions_)**
+ **azure-ai-form-recognizer-formrecognizerclient-beginrecognizeidentitydocumentsfromurl**
- **[beginRecognizeIdDocuments](/javascript/api/@azure/ai-form-recognizer/formrecognizerclient?view=azure-node-preview&preserve-view=true&branch=main#@azure-ai-form-recognizer-formrecognizerclient-beginrecognizeidentitydocuments)**
+ **beginRecognizeIdDocuments**
For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Form Recognizer documentation.
npm package version 3.1.0-beta.3
* Added support for a **[ReadingOrder](/javascript/api/@azure/ai-form-recognizer/formreadingorder?view=azure-node-latest&preserve-view=true)** type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithm (`basic` or `natural`) should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
-* Split **[FormField](/javascript/api/@azure/ai-form-recognizer/formfield?view=azure-node-preview&preserve-view=true)** type into several different interfaces. This update shouldn't cause any API compatibility issues except in certain edge cases (undefined valueType).
+* Split **FormField** type into several different interfaces. This update shouldn't cause any API compatibility issues except in certain edge cases (undefined valueType).
* Migrated to the **2.1-preview.3** Form Recognizer service endpoint for all REST API calls.
availability-zones Migrate Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-storage.md
description: Learn how to migrate your Azure storage accounts to availability zo
Previously updated : 05/09/2022 Last updated : 09/21/2022 # Migrate Azure Storage accounts to availability zone support
-
-This guide describes how to migrate Azure Storage accounts from non-availability zone support to availability support. We'll take you through the different options for migration.
+
+This guide describes how to migrate or convert Azure Storage accounts to add availability zone support.
Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets the Service-Level Agreement (SLA) for Azure Storage even in the face of failures.
+By default, data in a storage account is replicated in a single data center in the primary region. If your application must be highly available, you can convert the data in the primary region to zone-redundant storage (ZRS). ZRS takes advantage of Azure availability zones to replicate data in the primary region across three separate data centers.
+ Azure Storage offers the following types of replication: - Locally redundant storage (LRS)
Azure Storage offers the following types of replication:
For an overview of each of these options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
-You can switch a storage account from one type of replication to any other type, but some scenarios are more straightforward than others. This article describes two basic options for migration. The first is a manual migration and the second is a live migration that you must initiate by contacting Microsoft support.
+This article describes two basic options for adding availability zone support to a storage account:
+
+- [Conversion](#option-1-conversion): If your application must be highly available, you can convert the data in the primary region to zone-redundant storage (ZRS). ZRS takes advantage of Azure availability zones to replicate data in the primary region across three separate data centers.
+- [Manual migration](#option-2-manual-migration): Manual migration gives you complete control over the migration process by allowing you to use tools such as AzCopy to move your data to a new storage account with the desired replication settings at a time of your choosing.
+
+> [!NOTE]
+> For complete details on how to change how your storage account is replicated, see [Change how a storage account is replicated](../storage/common/redundancy-migration.md).
## Prerequisites -- Make sure your storage account(s) are in a region that supports ZRS. To determine whether or not the region supports ZRS, see [Zone-redundant storage](../storage/common/storage-redundancy.md#zone-redundant-storage).
+Before making any changes, review the [limitations for changing replication types](../storage/common/redundancy-migration.md#limitations-for-changing-replication-types) to make sure your storage account is eligible for migration or conversion, and to understand the options available to you. Many storage accounts can be converted directly to ZRS, while others either require a multi-step process or a manual migration. After reviewing the limitations, choose the right option in this article to convert your storage account based on the following properties (a sample command for checking them appears after this list):
-- Confirm that your storage account(s) is a general-purpose v2 account. If your storage account is v1, you'll need to upgrade it to v2. To learn how to upgrade your v1 account, see [Upgrade to a general-purpose v2 storage account](../storage/common/storage-account-upgrade.md).
+- [Storage account type](../storage/common/redundancy-migration.md#storage-account-type)
+- [Region](../storage/common/redundancy-migration.md#region)
+- [Access tier](../storage/common/redundancy-migration.md#access-tier)
+- [Protocols enabled](../storage/common/redundancy-migration.md#protocol-support)
+- [Failover status](../storage/common/redundancy-migration.md#failover-and-failback)
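If you prefer to check these properties from the command line, the following Azure CLI sketch surfaces most of them for an existing account. The account and resource group names are placeholders.

```azurecli
# Sketch: inspect the properties that determine which conversion path is available.
# Replace <storage-account-name> and <resource-group> with your own values.
az storage account show \
    --name <storage-account-name> \
    --resource-group <resource-group> \
    --query "{sku:sku.name, kind:kind, location:primaryLocation, accessTier:accessTier}" \
    --output table
```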
## Downtime requirements
-If you choose manual migration, downtime is required. If you choose live migration, there's no downtime requirement.
+During a conversion to ZRS, you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the conversion process and there is no data loss. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the conversion.
-## Migration option 1: Manual migration
+If you choose manual migration, some downtime is required, but you have more control over when the process starts and completes.
-### When to use a manual migration
+## Option 1: Conversion
-Use a manual migration if:
+During a conversion, you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the migration process and there is no data loss associated with a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
-- You need the migration to be completed by a certain date.
+### When to perform a conversion
-- You want to migrate your data to a ZRS storage account that's in a different region than the source account.
+Perform a conversion if:
-- You want to migrate data from ZRS to LRS, GRS or RA-GRS.
+- You want to convert your storage account from LRS to ZRS in the primary region with no application downtime.
+- You don't need the change to be completed by a certain date. While Microsoft handles your request for conversion promptly, there's no guarantee as to when it will complete. Generally, the more data you have in your account, the longer it takes to replicate that data.
+- You want to minimize the amount of manual effort required to complete the change.
-- Your storage account is a premium page blob or block blob account.
+### Conversion considerations
-- Your storage account includes data that's in the archive tier.
+Conversion can be used in most situations to add availability zone support, but in some cases you will need to use multiple steps or perform a manual migration. For example, if you also want to add or remove geo-redundancy (GRS) or read access (RA) to the secondary region, you will need to perform a two-step process. Perform the conversion to ZRS as one step and the GRS and/or RA change as a separate step. These steps can be performed in any order.
-### How to manually migrate Azure Storage accounts
+A full list of things to consider can be found in [Limitations](../storage/common/redundancy-migration.md#limitations-for-changing-replication-types).
-To manually migration your Azure Storage accounts:
+### How to perform a conversion
-1. Create a new storage account in the primary region with Zone Redundant Storage (ZRS) as the redundancy setting.
+A conversion can be accomplished in one of two ways:
-1. Copy the data from your existing storage account to the new storage account. To perform a copy operation, use one of the following options:
+- [A Customer-initiated conversion (preview)](#customer-initiated-conversion-preview)
+- [Request a conversion by creating a support request](#request-a-conversion-by-creating-a-support-request)
- - **Option 1:** Copy data by using an existing tool such as [AzCopy](../storage/common/storage-use-azcopy-v10.md), [Azure Data factory](../data-factory/connector-azure-blob-storage.md?tabs=data-factory), one of the Azure Storage client libraries, or a reliable third-party tool.
+#### Customer-initiated conversion (preview)
- - **Option 2:** If you're familiar with Hadoop or HDInsight, you can attach both the source storage account and destination storage account to your cluster. Then, parallelize the data copy process with a tool like [DistCp](https://hadoop.apache.org/docs/r1.2.1/distcp.html).
+> [!IMPORTANT]
+> Customer-initiated conversion is currently in preview, but is not available in the following regions:
+>
+> - (Europe) West Europe
+> - (Europe) UK South
+> - (North America) Canada Central
+> - (North America) East US
+> - (North America) East US 2
+>
+> This preview version is provided without a service level agreement, and might not be suitable for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Customer-initiated conversion adds a new option for customers to start a conversion. Now, instead of needing to open a support request, customers can request the conversion directly from within the Azure portal. Once initiated, the conversion could still take up to 72 hours to actually begin, but potential delays related to opening and managing a support request are eliminated.
+
+Customer-initiated conversion is only available from the Azure portal, not from PowerShell or the Azure CLI. To initiate the conversion, follow these steps:
+
+1. Navigate to your storage account in the Azure portal.
+1. Under **Data management** select **Redundancy**.
+1. Update the **Redundancy** setting.
+1. Select **Save**.
+
+ :::image type="content" source="../storage/common/media/redundancy-migration/change-replication-option.png" alt-text="Screenshot showing how to change replication option in portal." lightbox="../storage/common/media/redundancy-migration/change-replication-option.png":::
+
+#### Request a conversion by creating a support request
+
+Customers can still request a conversion by opening a support request with Microsoft.
+
+> [!IMPORTANT]
+> If you need to convert more than one storage account, create a single support ticket and specify the names of the accounts to convert on the **Additional details** tab.
-1. Determine which type of replication you need and follow the directions in [Switch between types of replication](../storage/common/redundancy-migration.md#switch-between-types-of-replication).
+Follow these steps to request a conversion from Microsoft:
-## Migration option 2: Request a live migration
+1. In the Azure portal, navigate to a storage account that you want to convert.
+1. Under **Support + troubleshooting**, select **New Support Request**.
+1. Complete the **Problem description** tab based on your account information:
+ - **Summary**: (some descriptive text).
+ - **Issue type**: Select **Technical**.
+ - **Subscription**: Select your subscription from the drop-down.
+ - **Service**: Select **My Services**, then **Storage Account Management** for the **Service type**.
+ - **Resource**: Select a storage account to convert. If you need to specify multiple storage accounts, you can do so on the **Additional details** tab.
+ - **Problem type**: Choose **Data Migration**.
+ - **Problem subtype**: Choose **Migrate to ZRS, GZRS, or RA-GZRS**.
-### When to request a live migration
+ :::image type="content" source="../storage/common/media/redundancy-migration/request-live-migration-problem-desc-portal.png" alt-text="Screenshot showing how to request a conversion - Problem description tab.":::
-Request a live migration if:
+1. Select **Next**. The **Recommended solution** tab might be displayed briefly before it switches to the **Solutions** page. On the **Solutions** page, you can check the eligibility of your storage account(s) for conversion:
+ - **Target replication type**: (choose the desired option from the drop-down)
+ - **Storage accounts from**: (enter a single storage account name or a list of accounts separated by semicolons)
+ - Select **Submit**.
-- You want to migrate your storage account from LRS to ZRS in the primary region with no application downtime.
+ :::image type="content" source="../storage/common/media/redundancy-migration/request-live-migration-solutions-portal.png" alt-text="Screenshot showing how to check the eligibility of your storage account(s) for conversion - Solutions page.":::
-- You want to migrate your storage account from ZRS to GZRS or RA-GZRS.
+1. Take the appropriate action if the results indicate your storage account is not eligible for conversion. If it is eligible, select **Return to support request**.
-- You don't need the migration to be completed by a certain date. While Microsoft handles your request for live migration promptly, there's no guarantee as to when a live migration will complete. Generally, the more data you have in your account, the longer it takes to migrate that data.
+1. Select **Next**. If you have more than one storage account to migrate, then on the **Details** tab, specify the name for each account, separated by a semicolon.
-### Live migration considerations
+ :::image type="content" source="../storage/common/media/redundancy-migration/request-live-migration-details-portal.png" alt-text="Screenshot showing how to request a conversion - Additional details tab.":::
-During a live migration, you can access data in your storage account with no loss of durability or availability. The Azure Storage SLA is maintained during the migration process. There's no data loss associated with a live migration. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
+1. Fill out the additional required information on the **Additional details** tab, then select **Review + create** to review and submit your support ticket. A support person will contact you to provide any assistance you may need.
-However, be aware of the following limitations:
+## Option 2: Manual migration
-- The archive tier is not currently supported for ZRS accounts.
+A manual migration provides more flexibility and control than a conversion. You can use this option if you need the migration to complete by a certain date, or if conversion is [not supported for your scenario](../storage/common/redundancy-migration.md#limitations-for-changing-replication-types). Manual migration is also useful when moving a storage account to another region. See [Move an Azure Storage account to another region](../storage/common/storage-account-move.md) for more details.
-- Unmanaged disks don't support ZRS or GZRS.
+### When to use a manual migration
+
+Use a manual migration if:
-- Only general-purpose v2 storage accounts support GZRS and RA-GZRS. GZRS and RA-GZRS support block blobs, page blobs (except for VHD disks), files, tables, and queues.
+- You need the migration to be completed by a certain date.
-- Live migration from LRS to ZRS isn't supported if the storage account contains Azure Files NFSv4.1 shares.
+- You want to migrate your data to a ZRS storage account that's in a different region than the source account.
-- For premium performance, live migration is supported for premium file share accounts, but not for premium block blob or premium page blob accounts.
+- You want to add or remove zone-redundancy and you don't want to use the customer-initiated migration feature in preview.
-### How to request a live migration
+- Your storage account is a premium page blob or block blob account.
-[Request a live migration](../storage/common/redundancy-migration.md) by creating a new support request from Azure portal.
+- Your storage account includes data that's in the archive tier.
+
+### How to manually migrate Azure Storage accounts
+
+To manually migrate your Azure Storage accounts:
+
+1. Create a new storage account in the primary region with zone redundant storage (ZRS) as the redundancy setting.
+
+1. Copy the data from your existing storage account to the new storage account. To perform a copy operation, use one of the following options:
+
+ - **Option 1:** Copy data by using an existing tool such as [AzCopy](../storage/common/storage-use-azcopy-v10.md), [Azure Data Factory](../data-factory/connector-azure-blob-storage.md?tabs=data-factory), one of the Azure Storage client libraries, or a reliable third-party tool. A sample AzCopy command is shown after these steps.
+
+ - **Option 2:** If you're familiar with Hadoop or HDInsight, you can attach both the source storage account and destination storage account to your cluster. Then, parallelize the data copy process with a tool like [DistCp](https://hadoop.apache.org/docs/r1.2.1/distcp.html).
+
+1. Determine which type of replication you need and follow the directions in [Change how a storage account is replicated](../storage/common/redundancy-migration.md).
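If you use AzCopy for the copy step, a command along the following lines performs a server-side copy of all blob containers from the source account to the new ZRS account. The account names and SAS tokens are placeholders, and blob data is only one part of the job: file shares, tables, and queues need their own copy step.

```bash
# Sketch: copy all blob containers from the existing account to the new ZRS account.
# <source-account>, <destination-account>, and the SAS tokens are placeholders.
azcopy copy \
  "https://<source-account>.blob.core.windows.net/?<source-sas-token>" \
  "https://<destination-account>.blob.core.windows.net/?<destination-sas-token>" \
  --recursive
```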
## Next steps
+For detailed guidance on changing the replication configuration for an Azure Storage account from any type to any other type, see:
+
+> [!div class="nextstepaction"]
+> [Change how a storage account is replicated](../storage/common/redundancy-migration.md)
+ For more guidance on moving an Azure Storage account to another region, see: > [!div class="nextstepaction"]
-> [Move an Azure Storage account to another region](../storage/common/storage-account-move.md).
+> [Move an Azure Storage account to another region](../storage/common/storage-account-move.md)
Learn more about:
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
editor: ''
ms.assetid: Previously updated : 05/02/2019 Last updated : 09/21/2022
configBuilder.AddAzureAppConfiguration(options => {
## References to external data
-App Configuration is designed to store any configuration data that you would normally save in configuration files or environment variables. However, some types of data may better suited to reside in other sources. For example, store secrets in Key Vault, files in Azure Storage, membership information in Azure AD groups, or customer lists in a database.
+App Configuration is designed to store any configuration data that you would normally save in configuration files or environment variables. However, some types of data may be better suited to reside in other sources. For example, store secrets in Key Vault, files in Azure Storage, membership information in Azure AD groups, or customer lists in a database.
You can still take advantage of App Configuration by saving a reference to external data in a key-value. You can [use content type](./concept-key-value.md#use-content-type) to differentiate each data source. When your application reads a reference, it loads the actual data from the referenced source, assuming it has the necessary permission to the source. If you change the location of your external data, you only need to update the reference in App Configuration instead of updating and redeploying your entire application.
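As a sketch of this pattern, the snippet below stores a reference to a Key Vault secret as an App Configuration key-value and tags it with the content type App Configuration uses for Key Vault references, so the application knows how to resolve it. The connection string, key name, and vault/secret names are placeholders.

```csharp
using Azure.Data.AppConfiguration;

// Sketch: store a reference to external data (a Key Vault secret URI) in App Configuration.
// The content type tells the reading application how to interpret the value.
// The connection string, key name, and vault/secret names are placeholders.
var client = new ConfigurationClient("<app-configuration-connection-string>");

var setting = new ConfigurationSetting(
    "App:Secrets:DatabasePassword",
    "{\"uri\":\"https://<vault-name>.vault.azure.net/secrets/<secret-name>\"}")
{
    ContentType = "application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8"
};

client.SetConfigurationSetting(setting);
```

When the application reads this key-value, it can detect the Key Vault content type and fetch the actual secret from the referenced vault using its own Key Vault credentials.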
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
Title: Private connectivity for Arc enabled Kubernetes clusters using private link (preview) Previously updated : 08/28/2021
+ Title: Private connectivity for Azure Arc-enabled Kubernetes clusters using private link (preview)
Last updated : 09/21/2021 description: With Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to use a single private endpoint.
-# Private connectivity for Arc enabled Kubernetes clusters using private link (preview)
+# Private connectivity for Arc-enabled Kubernetes clusters using private link (preview)
[Azure Private Link](/azure/private-link/private-link-overview) allows you to securely link Azure services to your virtual network using private endpoints. This means you can connect your on-premises Kubernetes clusters with Azure Arc and send all traffic over an Azure ExpressRoute or site-to-site VPN connection instead of using public networks. In Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to communicate with their Azure Arc resources using a single private endpoint. This document covers when to use and how to set up Azure Arc Private Link (preview). > [!IMPORTANT]
-> The Azure Arc Private Link feature is currently in PREVIEW in all regions where Azure Arc enabled Kubernetes is present, except South East Asia.
+> The Azure Arc Private Link feature is currently in PREVIEW in all regions where Azure Arc-enabled Kubernetes is present, except South East Asia.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Advantages
To connect your Kubernetes cluster to Azure Arc over a private link, you need to
1. Update the DNS configuration on your local network to resolve the private endpoint addresses. 1. Configure your local firewall to allow access to Azure Active Directory, Azure Resource Manager and Microsoft Container Registry. 1. Associate the Azure Arc-enabled Kubernetes clusters with the Azure Arc Private Link Scope.
-1. Optionally, deploy private endpoints for other Azure services your Azure Arc enabled Kubernetes cluster is managed by, such as Azure Monitor.
+1. Optionally, deploy private endpoints for other Azure services your Azure Arc-enabled Kubernetes cluster is managed by, such as Azure Monitor.
The rest of this document assumes you have already set up your ExpressRoute circuit or site-to-site VPN connection. ## Network configuration
There are two ways you can achieve this:
* If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, Azure Front Door and Microsoft Container Registry using [service tags](/azure/virtual-network/service-tags-overview). The NSG rules should look like the following:
- | Setting | Azure AD rule | Azure Resource Manager rule | AzureFrontDoorFirstParty rule | Microsoft Container Registry rule |
+ | Setting | Azure AD rule | Azure Resource Manager rule | AzureFrontDoorFirstParty rule | Microsoft Container Registry rule |
|-|||| | Source | Virtual Network | Virtual Network | Virtual Network | Virtual Network | Source Port ranges | * | * | * | *
There are two ways you can achieve this:
| Protocol | TCP | TCP | TCP | TCP | Action | Allow | Allow | Allow (Both inbound and outbound) | Allow | Priority | 150 (must be lower than any rules that block internet access) | 151 (must be lower than any rules that block internet access) | 152 (must be lower than any rules that block internet access) | 153 (must be lower than any rules that block internet access) |
- | Name | AllowAADOutboundAccess | AllowAzOutboundAccess | AllowAzureFrontDoorFirstPartyAccess | AllowMCROutboundAccess
+ | Name | AllowAADOutboundAccess | AllowAzOutboundAccess | AllowAzureFrontDoorFirstPartyAccess | AllowMCROutboundAccess
* Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, and Microsoft Container Registry, and inbound & outbound access to AzureFrontDoor.FirstParty using the downloadable service tag files. The JSON file contains all the public IP address ranges used by Azure AD, Azure Resource Manager, AzureFrontDoor.FirstParty, and Microsoft Container Registry and is updated monthly to reflect any changes. Azure Active Directory's service tag is AzureActiveDirectory, Azure Resource Manager's service tag is AzureResourceManager, Microsoft Container Registry's service tag is MicrosoftContainerRegistry, and Azure Front Door's service tag is AzureFrontDoor.FirstParty. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
The Private Endpoint on your virtual network allows it to reach Azure Arc-enable
1. Let validation pass. 1. Select **Create**. - ## Configure on-premises DNS forwarding Your on-premises Kubernetes clusters need to be able to resolve the private link DNS records to the private endpoint IP addresses. How you configure this depends on whether you are using Azure private DNS zones to maintain DNS records or using your own DNS server on-premises, along with how many clusters you are configuring.
If you opted out of using Azure private DNS zones during private endpoint creati
## Configure private links > [!NOTE]
-> Configuring private links for Azure Arc enabled Kubernetes clusters is supported starting from version 1.3.0 of connectedk8s CLI extension. Ensure that you are using connectedk8s CLI extension version greater than or equal to 1.2.9.
+> Configuring private links for Azure Arc-enabled Kubernetes clusters is supported starting from version 1.3.0 of the `connectedk8s` CLI extension, but requires Azure CLI version greater than 2.3.0. Versions of the `connectedk8s` extension later than 1.3.0 include validations that connect the cluster to Azure Arc only if you're running an Azure CLI version greater than 2.3.0.
You can configure private links for an existing Azure Arc-enabled Kubernetes cluster or when onboarding a Kubernetes cluster to Azure Arc for the first time using the command below:
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
You can delete the Azure Arc-enabled Kubernetes resource, any associated configu
az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest ```
+If the deletion process hangs, use the following command to force deletion (adding `-y` if you want to bypass the confirmation prompt):
+
+```azurecli
+az connectedk8s delete -g AzureArcTest -n AzureArcTest1 --force
+```
+
+This command can also be used if you experience issues when creating a new cluster deployment (due to previously created resources not being completely removed).
+ >[!NOTE] > Deleting the Azure Arc-enabled Kubernetes resource using the Azure portal removes any associated configuration resources, but *does not* remove any agents running on the cluster. Best practice is to delete the Azure Arc-enabled Kubernetes resource using `az connectedk8s delete` rather than deleting the resource in the Azure portal.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
GitOps with Flux v2 can be enabled in Azure Kubernetes Service (AKS) managed clu
This tutorial describes how to use GitOps in a Kubernetes cluster. Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md).
-> [!IMPORTANT]
-> Add-on Azure management services, like Kubernetes Configuration, are charged when enabled. Costs related to use of Flux v2 will start to be billed on July 1, 2022. For more information, see [Azure Arc pricing](https://azure.microsoft.com/pricing/details/azure-arc/).
- >[!IMPORTANT] > The `microsoft.flux` extension released major version 1.0.0. This includes the [multi-tenancy feature](#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension you can upgrade to the latest extension manually using the Azure CLI: "az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>" (use "-t connectedClusters" for Arc clusters and "-t managedClusters" for AKS clusters).
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
### Common to both cluster types
+* Read and write permissions on these resource types:
+ * `Microsoft.KubernetesConfiguration/extensions`
+ * `Microsoft.KubernetesConfiguration/fluxConfigurations`
+ * Azure CLI version 2.15 or later. [Install the Azure CLI](/cli/azure/install-azure-cli) or use the following commands to update to the latest version: ```console
azure-fluid-relay Container Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-recovery.md
Fluid framework periodically saves state, called summary, without any explicit b
We've added following methods to AzureClient that will enable developers to recover data from corrupted containers.
-[`getContainerVersions(ID, options)`](https://fluidframework.com/docs/apis/azure-client/azureclient#getcontainerversions-Method)
+[`getContainerVersions(ID, options)`](https://fluidframework.com/docs/apis/azure-client/azureclient-class#getcontainerversions-method)
`getContainerVersions` allows developers to view the previously generated versions of the container.
-[`copyContainer(ID, containerSchema)`](https://fluidframework.com/docs/apis/azure-client/azureclient#copycontainer-Method)
+[`copyContainer(ID, containerSchema)`](https://fluidframework.com/docs/apis/azure-client/azureclient-class#copycontainer-method)
`copyContainer` allows developers to generate a new detached container from a specific version of another container.
azure-fluid-relay Test Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/test-automation.md
fluid.url: https://fluidframework.com/docs/testing/testing/
Testing and automation are crucial to maintaining the quality and longevity of your code. Internally, Fluid uses a range of unit and integration tests powered by [Mocha](https://mochajs.org/), [Jest](https://jestjs.io/), [Puppeteer](https://github.com/puppeteer/puppeteer), and [Webpack](https://webpack.js.org/).
-You can run tests using the local [@fluidframework/azure-local-service](https://www.npmjs.com/package/@fluidframework/azure-local-service) or using a test tenant in Azure Fluid Relay service. [AzureClient](https://fluidframework.com/docs/apis/azure-client/azureclient) can be configured to connect to both a remote service and a local service, which enables you to use a single client type between tests against live and local service instances. The only difference is the configuration used to create the client.
+You can run tests using the local [@fluidframework/azure-local-service](https://www.npmjs.com/package/@fluidframework/azure-local-service) or using a test tenant in Azure Fluid Relay service. [AzureClient](https://fluidframework.com/docs/apis/azure-client/azureclient-class) can be configured to connect to both a remote service and a local service, which enables you to use a single client type between tests against live and local service instances. The only difference is the configuration used to create the client.
## Automation against Azure Fluid Relay
azure-fluid-relay Validate Document Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/validate-document-creator.md
fluid.url: https://fluidframework.com/docs/apis/azure-client/itokenprovider/
# How to: Validate a User Created a Document
-When you create a document in Azure Fluid Relay, the JWT provided by the [ITokenProvider](https://fluidframework.com/docs/apis/azure-client/itokenprovider/) for the creation request can only be used once. After creating a document, the client must generate a new JWT that contains the document ID provided by the service at creation time. If an application has an authorization service that manages document access control, it will need to know who created a document with a given ID in order to authorize the generation of a new JWT for access to that document.
+When you create a document in Azure Fluid Relay, the JWT provided by the [ITokenProvider](https://fluidframework.com/docs/apis/azure-client/itokenprovider-interface) for the creation request can only be used once. After creating a document, the client must generate a new JWT that contains the document ID provided by the service at creation time. If an application has an authorization service that manages document access control, it will need to know who created a document with a given ID in order to authorize the generation of a new JWT for access to that document.
## Inform an Authorization Service when a document is Created
-An application can tie into the document creation lifecycle by implementing a public [documentPostCreateCallback()](https://fluidframework.com/docs/apis/azure-client/itokenprovider#documentpostcreatecallback-MethodSignature) method in its `TokenProvider`. This callback will be triggered directly after creating the document, before a client requests the new JWT it needs to gain read/write permissions to the document that was created.
+An application can tie into the document creation lifecycle by implementing a public [documentPostCreateCallback()](https://fluidframework.com/docs/apis/azure-client/itokenprovider-interface#documentpostcreatecallback-methodsignature) method in its `TokenProvider`. This callback will be triggered directly after creating the document, before a client requests the new JWT it needs to gain read/write permissions to the document that was created.
The `documentPostCreateCallback()` receives two parameters: 1) the ID of the document that was created and 2) a JWT signed by the service with no permission scopes. The authorization service can verify the given JWT and use the information in the JWT to grant the correct user permissions for the newly created document.
export default httpTrigger;
### Implement the `documentPostCreateCallback`
-This example implementation below extends the [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider/) and uses the [axios](https://www.npmjs.com/package/axios) library to make a HTTP request to the Azure Function used for generating tokens.
+This example implementation below extends the [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider-class) and uses the [axios](https://www.npmjs.com/package/axios) library to make a HTTP request to the Azure Function used for generating tokens.
```typescript import { AzureFunctionTokenProvider, AzureMember } from "@fluidframework/azure-client";
azure-functions Functions Bindings Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md
public static void Run(
[ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] Message email, [SendGrid(ApiKey = "CustomSendGridKeyAppSettingName")] out SendGridMessage message) {
-var emailObject = JsonSerializer.Deserialize<OutgoingEmail>(Encoding.UTF8.GetString(email.Body));
+ var emailObject = JsonSerializer.Deserialize<OutgoingEmail>(Encoding.UTF8.GetString(email.Body));
-message = new SendGridMessage();
-message.AddTo(emailObject.To);
-message.AddContent("text/html", emailObject.Body);
-message.SetFrom(new EmailAddress(emailObject.From));
-message.SetSubject(emailObject.Subject);
+ message = new SendGridMessage();
+ message.AddTo(emailObject.To);
+ message.AddContent("text/html", emailObject.Body);
+ message.SetFrom(new EmailAddress(emailObject.From));
+ message.SetSubject(emailObject.Subject);
} public class OutgoingEmail
public static async Task Run(
[ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] Message email, [SendGrid(ApiKey = "CustomSendGridKeyAppSettingName")] IAsyncCollector<SendGridMessage> messageCollector) {
- var emailObject = JsonSerializer.Deserialize<OutgoingEmail>(Encoding.UTF8.GetString(email.Body));
+ var emailObject = JsonSerializer.Deserialize<OutgoingEmail>(Encoding.UTF8.GetString(email.Body));
- var message = new SendGridMessage();
- message.AddTo(emailObject.To);
- message.AddContent("text/html", emailObject.Body);
- message.SetFrom(new EmailAddress(emailObject.From));
- message.SetSubject(emailObject.Subject);
+ var message = new SendGridMessage();
+ message.AddTo(emailObject.To);
+ message.AddContent("text/html", emailObject.Body);
+ message.SetFrom(new EmailAddress(emailObject.From));
+ message.SetSubject(emailObject.Subject);
- await messageCollector.AddAsync(message);
+ await messageCollector.AddAsync(message);
} public class OutgoingEmail {
- public string To { get; set; }
- public string From { get; set; }
- public string Subject { get; set; }
- public string Body { get; set; }
+ public string To { get; set; }
+ public string From { get; set; }
+ public string Subject { get; set; }
+ public string Body { get; set; }
} ```
Optional properties may have default values defined in the binding and either ad
> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) [extension bundle]: ./functions-bindings-register.md#extension-bundles
-[Update your extensions]: ./functions-bindings-register.md
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
Poison message handling can't be controlled or configured in Azure Functions. Se
The Functions runtime receives a message in [PeekLock mode](../service-bus-messaging/service-bus-performance-improvements.md#receive-mode). It calls `Complete` on the message if the function finishes successfully, or calls `Abandon` if the function fails. If the function runs longer than the `PeekLock` timeout, the lock is automatically renewed as long as the function is running.
-The `maxAutoRenewDuration` is configurable in *host.json*, which maps to [OnMessageOptions.MaxAutoRenewDuration](/dotnet/api/microsoft.azure.servicebus.messagehandleroptions.maxautorenewduration). The maximum allowed for this setting is 5 minutes according to the Service Bus documentation, whereas you can increase the Functions time limit from the default of 5 minutes to 10 minutes. For Service Bus functions you wouldnΓÇÖt want to do that then, because youΓÇÖd exceed the Service Bus renewal limit.
+The `maxAutoRenewDuration` is configurable in *host.json*, which maps to [OnMessageOptions.MaxAutoRenewDuration](/dotnet/api/microsoft.azure.servicebus.messagehandleroptions.maxautorenewduration). The default value of this setting is 5 minutes.
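For reference, the setting lives under the Service Bus extension section of *host.json*. The sketch below shows where it sits for the in-process extension; the neighboring values are illustrative defaults, not recommendations.

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "autoComplete": true,
        "maxConcurrentCalls": 16,
        "maxAutoRenewDuration": "00:05:00"
      }
    }
  }
}
```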
::: zone pivot="programming-language-csharp" ## Message metadata
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
The following example shows a warmup trigger that runs when each new instance is
```java @FunctionName("Warmup")
-public void run( ExecutionContext context) {
- context.getLogger().info("Function App instance is warm 🌞🌞🌞");
+public void warmup( @WarmupTrigger Object warmupContext, ExecutionContext context) {
+ context.getLogger().info("Function App instance is warm 🌞🌞🌞");
} ```
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
Title: Develop and run Azure Functions locally
description: Learn how to code and test Azure Functions on your local computer before you run them on Azure Functions. Previously updated : 05/19/2022 Last updated : 09/22/2022 # Code and test Azure Functions locally
The way in which you develop functions on your local computer depends on your [l
|Environment |Languages |Description| |--|||
-|[Visual Studio Code](functions-develop-vs-code.md)| [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
-| [Command prompt or terminal](functions-run-local.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
-| [Visual Studio 2019](functions-develop-vs.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio 2019](https://www.visualstudio.com/vs/) and later versions. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
+|[Visual Studio Code](functions-develop-vs-code.md)| [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
+| [Command prompt or terminal](functions-run-local.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
+| [Visual Studio](functions-develop-vs.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated process)](dotnet-isolated-process-guide.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio](https://www.visualstudio.com/vs/), starting with Visual Studio 2019. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Maven archetype supports Core Tools to enable development of Java functions. Version 2.x supports development on Linux, macOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md). | [!INCLUDE [Don't mix development environments](../../includes/functions-mixed-dev-environments.md)]
Each of these local development environments lets you create function app projec
## Local settings file
-The local.settings.json file stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally.
+The local.settings.json file stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally. When you publish your project to Azure, be sure to also add any required settings to the app settings for the function app.
> [!IMPORTANT] > Because the local.settings.json may contain secrets, such as connection strings, you should never store it in a remote repository. Tools that support Functions provide ways to synchronize settings in the local.settings.json file with the [app settings](functions-how-to-use-azure-function-app-settings.md#settings) in the function app to which your project is deployed.
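A minimal local.settings.json looks something like the following. The storage value here points at the local storage emulator and the worker runtime is just one example; the file generated for your project will differ.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
```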
The following application settings can be included in the **`Values`** array whe
|**`FUNCTIONS_WORKER_RUNTIME`** | `dotnet`<br/>`dotnet-isolated`<br/>`node`<br/>`java`<br/>`powershell`<br/>`python`| Indicates the targeted language of the Functions runtime. Required for version 2.x and higher of the Functions runtime. This setting is generated for your project by Core Tools. To learn more, see the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) reference.| | **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` |Indicates to use PowerShell 7 when running locally. If not set, then PowerShell Core 6 is used. This setting is only used when running locally. The PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, when it runs in Azure, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). |
+## Synchronize settings
+
+When you develop your functions locally, any local settings required by your app must also be present in app settings of the function app to which your code is deployed. You may also need to download current settings from the function app to your local project. While you can [manually configure app settings in the Azure portal](functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings), the following tools also let you synchronize app settings with local settings in your project:
+++ [Visual Studio Code](functions-develop-vs-code.md#application-settings-in-azure)++ [Visual Studio](functions-develop-vs.md#function-app-settings)++ [Azure Functions Core Tools](functions-run-local.md#local-settings)+ ## Next steps
-+ To learn more about local development of compiled C# functions using Visual Studio 2019, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md).
++ To learn more about local development of compiled C# functions (both in-process and isolated process) using Visual Studio, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). + To learn more about local development of functions using VS Code on a Mac, Linux, or Windows computer, see the Visual Studio Code getting started article for your preferred language:
- + [C# class library](create-first-function-vs-code-csharp.md)
- + [C# isolated process (.NET 5.0)](create-first-function-vs-code-csharp.md?tabs=isolated-process)
+ + [C# (in-process)](create-first-function-vs-code-csharp.md)
+ + [C# (isolated process)](create-first-function-vs-code-csharp.md?tabs=isolated-process)
+ [Java](create-first-function-vs-code-java.md) + [JavaScript](create-first-function-vs-code-node.md) + [PowerShell](create-first-function-vs-code-powershell.md)
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
There are few exceptions to the retirement policy outlined above. Here is a list
|Language Versions |EOL Date |Retirement Date| |--|--|-|
+|Node 12|30 Apr 2022|13 December 2022|
|PowerShell Core 6| 4 September 2020|30 September 2022| |Python 3.6 |23 December 2021|30 September 2022|
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md
To learn more about how Azure Monitor metric alerts work and how to configure th
"EnableClassic": false, "AutoStop_MetricName": "Percentage CPU", "AutoStop_Condition": "LessThan",
- "AutoStop_Description": "Alert to stop the VM if the CPU % exceed the threshold",
+ "AutoStop_Description": "Alert to stop the VM if the CPU % falls below the threshold",
"AutoStop_Frequency": "00:05:00", "AutoStop_Severity": "2", "AutoStop_Threshold": "5",
To learn more about how Azure Monitor metric alerts work and how to configure th
{ "Action": "stop", "AutoStop_Condition": "LessThan",
- "AutoStop_Description": "Alert to stop the VM if the CPU % exceed the threshold",
+ "AutoStop_Description": "Alert to stop the VM if the CPU % falls below the threshold",
"AutoStop_Frequency": "00:05:00", "AutoStop_MetricName": "Percentage CPU", "AutoStop_Severity": "2",
To learn more about how Azure Monitor metric alerts work and how to configure th
{ "Action": "stop", "AutoStop_Condition": "LessThan",
- "AutoStop_Description": "Alert to stop the VM if the CPU % exceed the threshold",
+ "AutoStop_Description": "Alert to stop the VM if the CPU % falls below the threshold",
"AutoStop_Frequency": "00:05:00", "AutoStop_MetricName": "Percentage CPU", "AutoStop_Severity": "2",
azure-maps Power Bi Visual Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-heat-map-layer.md
In this article, you will learn how to add a heat map layer to an Azure Maps Pow
:::image type="content" source="media/power-bi-visual/heat-map.png" alt-text="Heat map layer in an Azure Maps Power BI visual.":::
-Heat maps, also known as density maps, are a type of overlay on a map used to represent the density of data using different colors. Heat maps are often used to show the data ΓÇ£hot spotsΓÇ¥ on a map. Heat maps are a great way to render datasets with large number of points. Displaying a large number of data points on a map will result in a degradation in performance and can cover it with overlapping symbols, making it unusable. Rendering the data as a heat map results not only in better performance, it helps you make better sense of the data by making it easy to see the relative density of each data point.
+Heat maps, also known as density maps, are a type of overlay on a map used to represent the density of data using different colors. Heat maps are often used to show the data "hot spots" on a map. Heat maps are a great way to render datasets with a large number of points. Displaying a large number of data points on a map results in degraded performance and can cover the map with overlapping symbols, making it unusable. Rendering the data as a heat map not only results in better performance, but also helps you make better sense of the data by making it easy to see the relative density of each data point.
A heat map is useful when users want to visualize vast comparative data:
The following table shows the primary settings that are available in the **Heat
| Setting | Description | |-|| | Radius | The radius of each data point in the heat map.<br /><br />Valid values when Unit = ΓÇÿpixelsΓÇÖ: 1 - 200. Default: **20**<br />Valid values when Unit = ΓÇÿmetersΓÇÖ: 1 - 4,000,000|
-| Units | The distance units of the radius. Possible values are:<br /><br />**pixels**. When set to pixels the size of each data point will always be the same, regardless of zoom level.<br />**meters**. When set to meters, the size of the data points will scale based on zoom level, ensuring the radius is spatially accurate.<br /><br /> Default: **pixels** |
+| Units | The distance units of the radius. Possible values are:<br /><br />**pixels**. When set to pixels the size of each data point will always be the same, regardless of zoom level.<br />**meters**. When set to meters, the size of the data points will scale with zoom level based on the equivalent pixel distance at the equator, providing better relativity between neighboring points. However, due to the Mercator projection, the actual radius of coverage in meters at a given latitude will be smaller than the specified radius.<br /><br /> Default: **pixels** |
| Transparency | Sets the Transparency of the heat map layer. Default: **1**<br/>Value should be from 0% to 100%. | | Intensity | The intensity of each heat point. Intensity is a decimal value between 0 and 1, used to specify how "hot" a single data point should be. Default: **0.5** | | Use size as weight | A boolean value that determines if the size field value should be used as the weight of each data point. If on, this causes the layer to render as a weighted heat map. Default: **Off** |
azure-monitor Activity Log Alerts Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/activity-log-alerts-webhook.md
Title: Configure the webhook to get activity log alerts
-description: Learn about the schema of the JSON that is posted to a webhook URL when an activity log alert activates.
+description: Learn about the schema of the JSON that's posted to a webhook URL when an activity log alert activates.
Last updated 03/31/2017 # Webhooks for activity log alerts+ As part of the definition of an action group, you can configure webhook endpoints to receive activity log alert notifications. With webhooks, you can route these notifications to other systems for post-processing or custom actions. This article shows what the payload for the HTTP POST to a webhook looks like. For more information on activity log alerts, see how to [create Azure activity log alerts](./activity-log-alerts.md).
For more information on activity log alerts, see how to [create Azure activity l
For information on action groups, see how to [create action groups](./action-groups.md). > [!NOTE]
-> You can also use the [common alert schema](./alerts-common-schema.md), which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor, for your webhook integrations. [Learn about the common alert schema definitions.](./alerts-common-schema-definitions.md)ΓÇï
-
+> You can also use the [common alert schema](./alerts-common-schema.md) for your webhook integrations. It provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor. [Learn about the common alert schema definitions](./alerts-common-schema-definitions.md).
## Authenticate the webhook+ The webhook can optionally use token-based authorization for authentication. The webhook URI is saved with a token ID, for example, `https://mysamplealert/webcallback?tokenid=sometokenid&someparameter=somevalue`. ## Payload schema
-The JSON payload contained in the POST operation differs based on the payload's data.context.activityLog.eventSource field.
+
+The JSON payload contained in the POST operation differs based on the payload's `data.context.activityLog.eventSource` field.
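The following sketch shows one way a receiving endpoint might branch on that field. It's a minimal, hypothetical example (for instance, inside an Azure Functions PowerShell HTTP trigger). The `$requestBody` variable, the handler messages, and the `properties.title` lookup are illustrative assumptions, not part of the documented schema.

```powershell
# Hypothetical handler: $requestBody holds the raw JSON that the activity log alert posted.
$payload     = $requestBody | ConvertFrom-Json
$activityLog = $payload.data.context.activityLog

switch ($activityLog.eventSource) {
    'ServiceHealth'  { Write-Output "Service health event: $($activityLog.properties.title)" }
    'ResourceHealth' { Write-Output "Resource health change for $($activityLog.resourceId)" }
    default          { Write-Output "Administrative event $($activityLog.operationName), status: $($payload.data.status)" }
}
```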
> [!NOTE]
-> Currently, the description that is part of the Activity log event is copied to the fired **"Alert Description"** property.
+> Currently, the description that's part of the activity log event is copied to the fired `Alert Description` property.
>
-> In order to align the Activity Log payload with other alert types, Starting April 1, 2021 the fired alert property **"Description"** will contain the alert rule description instead.
+> To align the activity log payload with other alert types, as of April 1, 2021, the fired alert property `Description` contains the alert rule description instead.
>
-> In preparation for this change, we created a new property **"Activity Log Event Description"** to the Activity Log fired Alert. This new property will be filled with the **"Description"** property that is already available for use. This means that the new field **"Activity Log Event Description"** will contain the description that is part of the Activity log event.
+> In preparation for that change, we added a new property, `Activity Log Event Description`, to the activity log fired alert. This new property is filled with the `Description` property that's already available for use. So, the new field `Activity Log Event Description` contains the description that's part of the activity log event.
>
-> Please review your alert rules, action rules, webhooks, logic app or any other configurations where you might be using the **"Description"** property from the fired alert and replace it with **"Activity Log Event Description"** property.
+> Review your alert rules, action rules, webhooks, logic app, or any other configurations where you might be using the `Description` property from the fired alert. Replace the `Description` property with the `Activity Log Event Description` property.
>
-> if your condition (in your action rules, webhooks, logic app or any other configurations) is currently based on the **"Description"** property for activity log alerts, you may need to modify it to be based on the **"Activity Log Event Description"** property instead.
+> If your condition in your action rules, webhooks, logic app, or any other configurations is currently based on the `Description` property for activity log alerts, you might need to modify it to be based on the `Activity Log Event Description` property instead.
>
-> In order to fill the new **"Description"** property, you can add a description in the alert rule definition.
-> ![Fired Activity Log Alerts](media/activity-log-alerts-webhook/activity-log-alert-fired.png)
+> To fill the new `Description` property, you can add a description in the alert rule definition.
+
+> ![Screenshot that shows fired activity log alerts.](media/activity-log-alerts-webhook/activity-log-alert-fired.png)
### Common
The JSON payload contained in the POST operation differs based on the payload's
} ```
-For specific schema details on service health notification activity log alerts, see [Service health notifications](../../service-health/service-notifications.md). Additionally, learn how to [configure service health webhook notifications with your existing problem management solutions](../../service-health/service-health-alert-webhook-guide.md).
+For specific schema details on service health notification activity log alerts, see [Service health notifications](../../service-health/service-notifications.md). You can also learn how to [configure service health webhook notifications with your existing problem management solutions](../../service-health/service-health-alert-webhook-guide.md).
### ResourceHealth
For specific schema details on service health notification activity log alerts,
| Element name | Description | | | |
-| status |Used for metric alerts. Always set to "activated" for activity log alerts. |
+| status |Used for metric alerts. Always set to `activated` for activity log alerts. |
| context |Context of the event. |
-| resourceProviderName |The resource provider of the impacted resource. |
-| conditionType |Always "Event." |
+| resourceProviderName |The resource provider of the affected resource. |
+| conditionType |Always `Event`. |
| name |Name of the alert rule. | | id |Resource ID of the alert. | | description |Alert description set when the alert is created. | | subscriptionId |Azure subscription ID. | | timestamp |Time at which the event was generated by the Azure service that processed the request. |
-| resourceId |Resource ID of the impacted resource. |
-| resourceGroupName |Name of the resource group for the impacted resource. |
+| resourceId |Resource ID of the affected resource. |
+| resourceGroupName |Name of the resource group for the affected resource. |
| properties |Set of `<Key, Value>` pairs (that is, `Dictionary<String, String>`) that includes details about the event. | | event |Element that contains metadata about the event. | | authorization |The Azure role-based access control properties of the event. These properties usually include the action, the role, and the scope. |
-| category |Category of the event. Supported values include Administrative, Alert, Security, ServiceHealth, and Recommendation. |
+| category |Category of the event. Supported values include `Administrative`, `Alert`, `Security`, `ServiceHealth`, and `Recommendation`. |
| caller |Email address of the user who performed the operation, UPN claim, or SPN claim based on availability. Can be null for certain system calls. |
-| correlationId |Usually a GUID in string format. Events with correlationId belong to the same larger action and usually share a correlationId. |
+| correlationId |Usually a GUID in string format. Events with `correlationId` belong to the same larger action and usually share a `correlationId`. |
| eventDescription |Static text description of the event. | | eventDataId |Unique identifier for the event. | | eventSource |Name of the Azure service or infrastructure that generated the event. |
-| httpRequest |The request usually includes the clientRequestId, clientIpAddress, and HTTP method (for example, PUT). |
-| level |One of the following values: Critical, Error, Warning and Informational. |
-| operationId |Usually a GUID shared among the events corresponding to single operation. |
+| httpRequest |The request usually includes the `clientRequestId`, `clientIpAddress`, and HTTP method (for example, PUT). |
+| level |One of the following values: `Critical`, `Error`, `Warning`, and `Informational`. |
+| operationId |Usually a GUID shared among the events corresponding to a single operation. |
| operationName |Name of the operation. | | properties |Properties of the event. |
-| status |String. Status of the operation. Common values include Started, In Progress, Succeeded, Failed, Active, and Resolved. |
-| subStatus |Usually includes the HTTP status code of the corresponding REST call. It might also include other strings that describe a substatus. Common substatus values include OK (HTTP Status Code: 200), Created (HTTP Status Code: 201), Accepted (HTTP Status Code: 202), No Content (HTTP Status Code: 204), Bad Request (HTTP Status Code: 400), Not Found (HTTP Status Code: 404), Conflict (HTTP Status Code: 409), Internal Server Error (HTTP Status Code: 500), Service Unavailable (HTTP Status Code: 503), and Gateway Timeout (HTTP Status Code: 504). |
+| status |String. Status of the operation. Common values include `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, and `Resolved`. |
+| subStatus |Usually includes the HTTP status code of the corresponding REST call. It might also include other strings that describe a substatus. Common substatus values include `OK` (HTTP Status Code: 200), `Created` (HTTP Status Code: 201), `Accepted` (HTTP Status Code: 202), `No Content` (HTTP Status Code: 204), `Bad Request` (HTTP Status Code: 400), `Not Found` (HTTP Status Code: 404), `Conflict` (HTTP Status Code: 409), `Internal Server Error` (HTTP Status Code: 500), `Service Unavailable` (HTTP Status Code: 503), and `Gateway Timeout` (HTTP Status Code: 504). |
For specific schema details on all other activity log alerts, see [Overview of the Azure activity log](../essentials/platform-logs-overview.md). ## Next steps+ * [Learn more about the activity log](../essentials/platform-logs-overview.md).
-* [Execute Azure automation scripts (Runbooks) on Azure alerts](https://go.microsoft.com/fwlink/?LinkId=627081).
+* [Execute Azure Automation scripts (Runbooks) on Azure alerts](https://go.microsoft.com/fwlink/?LinkId=627081).
* [Use a logic app to send an SMS via Twilio from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-text-message-with-logic-app). This example is for metric alerts, but it can be modified to work with an activity log alert. * [Use a logic app to send a Slack message from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-slack-with-logic-app). This example is for metric alerts, but it can be modified to work with an activity log alert. * [Use a logic app to send a message to an Azure queue from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-queue-with-logic-app). This example is for metric alerts, but it can be modified to work with an activity log alert.
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Title: Alert schema definitions in Azure Monitor
-description: Understanding the common alert schema definitions for Azure Monitor
+description: This article explains the common alert schema definitions for Azure Monitor.
Last updated 07/20/2021
# Common alert schema definitions
-This article describes the [common alert schema definitions](./alerts-common-schema.md) for Azure Monitor, including those for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks.
+This article describes the [common alert schema definitions](./alerts-common-schema.md) for Azure Monitor. It includes the definitions for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks.
Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections:
-* **Essentials**: A set of standardized fields, common across all alert types, which describe what resource the alert is on, along with additional common alert metadata (for example, severity or description).
-* **Alert context**: A set of fields that describes the cause of the alert, with fields that vary based on the alert type. For example, a metric alert includes fields like the metric name and metric value in the alert context, whereas an activity log alert has information about the event that generated the alert.
+
+* **Essentials**: A set of standardized fields that are common across all alert types. Fields describe what resource the alert is on, along with other common alert metadata. Examples are severity or description.
+* **Alert context**: A set of fields that describe the cause of the alert. Fields vary based on the alert type. For example, a metric alert includes fields like the metric name and metric value in the alert context. An activity log alert has information about the event that generated the alert.
**Sample alert payload**+ ```json { "schemaId": "azureMonitorCommonAlertSchema",
Any alert instance describes the resource that was affected and the cause of the
| Field | Description| |:|:|
-| alertId | The unique resource ID identifying the alert instance. |
+| alertId | The unique resource ID that identifies the alert instance. |
| alertRule | The name of the alert rule that generated the alert instance. |
-| Severity | The severity of the alert. Possible values: Sev0, Sev1, Sev2, Sev3, or Sev4. |
-| signalType | Identifies the signal on which the alert rule was defined. Possible values: Metric, Log, or Activity Log. |
+| Severity | The severity of the alert. Possible values are Sev0, Sev1, Sev2, Sev3, or Sev4. |
+| signalType | Identifies the signal on which the alert rule was defined. Possible values are Metric, Log, or Activity Log. |
| monitorCondition | When an alert fires, the alert's monitor condition is set to **Fired**. When the underlying condition that caused the alert to fire clears, the monitor condition is set to **Resolved**. | | monitoringService | The monitoring service or solution that generated the alert. The fields for the alert context are dictated by the monitoring service. | | alertTargetIds | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. |
-| configurationItems |The list of affected resources of an alert.<br>In some cases, the configuration items can be different from the alert targets. For example, in metric-for-log or log alerts defined on a Log Analytics workspace, the configuration items are the actual resources sending the telemetry, and not the workspace.<br><ul><li>In the log alerts API (Scheduled Query Rules) v2021-08-01, the configurationItem values are taken from explicitly defined dimensions in this priority: 'Computer', '_ResourceId', 'ResourceId', 'Resource'.</li><li>In earlier versions of the log alerts API, the configurationItem values are taken implicitly from the results in this priority: 'Computer', '_ResourceId', 'ResourceId', 'Resource'.</li></ul>In ITSM systems, the configurationItems field is used to correlate alerts to resources in a CMDB. |
+| configurationItems |The list of affected resources of an alert.<br>In some cases, the configuration items can be different from the alert targets. For example, in metric-for-log or log alerts defined on a Log Analytics workspace, the configuration items are the actual resources sending the telemetry and not the workspace.<br><ul><li>In the log alerts API (Scheduled Query Rules) v2021-08-01, the `configurationItem` values are taken from explicitly defined dimensions in this priority: `Computer`, `_ResourceId`, `ResourceId`, `Resource`.</li><li>In earlier versions of the log alerts API, the `configurationItem` values are taken implicitly from the results in this priority: `Computer`, `_ResourceId`, `ResourceId`, `Resource`.</li></ul>In ITSM systems, the `configurationItems` field is used to correlate alerts to resources in a configuration management database. |
| originAlertId | The ID of the alert instance, as generated by the monitoring service generating it. | | firedDateTime | The date and time when the alert instance was fired in Coordinated Universal Time (UTC). | | resolvedDateTime | The date and time when the monitor condition for the alert instance is set to **Resolved** in UTC. Currently only applicable for metric alerts.|
Any alert instance describes the resource that was affected and the cause of the
|alertContextVersion | The version number for the `alertContext` section. | **Sample values**+ ```json { "essentials": {
Any alert instance describes the resource that was affected and the cause of the
## Alert context
+The following fields describe the cause of an alert.
+ ### Metric alerts - Static threshold #### `monitoringService` = `Platform` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
#### `monitoringService` = `Platform` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
#### `monitoringService` = `Platform` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
### Log alerts > [!NOTE]
-> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook cannot use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. You can determine this by checking the flag `IncludedSearchResults`. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
+> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook can't use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. You can determine this by checking the flag `IncludedSearchResults`. When the search results aren't included, use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
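As a rough sketch of that pattern, the snippet below checks the flag and, when results aren't embedded, retrieves them from the filtered-results link. It assumes the Az PowerShell module is installed and signed in, and that the alert context exposes `IncludedSearchResults` and `LinkToFilteredSearchResultsAPI` as top-level fields, as in the Log Analytics sample that follows; adjust the property path for other payload versions.

```powershell
# Hedged example: fetch search results only when they weren't embedded in the alert payload.
$payload = $requestBody | ConvertFrom-Json
$context = $payload.data.alertContext

if ($context.IncludedSearchResults -ne 'True') {
    # The link already encodes the query and time range; it targets the Log Analytics query API.
    $token   = (Get-AzAccessToken -ResourceUrl 'https://api.loganalytics.io').Token
    $results = Invoke-RestMethod -Uri $context.LinkToFilteredSearchResultsAPI `
                                 -Headers @{ Authorization = "Bearer $token" }
    $results.tables[0].rows | Select-Object -First 10
}
```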
#### `monitoringService` = `Log Analytics` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
#### `monitoringService` = `Application Insights` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
#### `monitoringService` = `Log Alerts V2` > [!NOTE]
-> Log alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when using this version. You should use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links the generate a custom payload.
+> Log alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
**Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
#### `monitoringService` = `Activity Log - Administrative` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
#### `monitoringService` = `Activity Log - Policy` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
#### `monitoringService` = `Activity Log - Security` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
#### `monitoringService` = `ServiceHealth` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
} } ```+ #### `monitoringService` = `Resource Health` **Sample values**+ ```json { "alertContext": {
Any alert instance describes the resource that was affected and the cause of the
} ``` - ## Next steps - Learn more about the [common alert schema](./alerts-common-schema.md).
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
The *sampleActivityLogAlert.parameters.json* file contains the values provided f
If you're creating a new log alert rule, note that the current alert rule wizard is a little different from the earlier experience: - Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results, while the webhook payload contained 1,000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action:
- - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
+ - We recommend using [Dimensions](alerts-types.md#narrow-the-target-by-using-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
- When you need to investigate in the logs, use the link in the alert to the search results in Logs. - If you need the raw search results or for any other advanced customizations, use Logic Apps. - The new alert rule wizard doesn't support customization of the JSON payload.
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
Title: Types of Azure Monitor Alerts
+ Title: Types of Azure Monitor alerts
description: This article explains the different types of Azure Monitor alerts and when to use each type.
# Types of Azure Monitor alerts
-This article describes the kinds of Azure Monitor alerts you can create, and helps you understand when to use each type of alert.
+This article describes the kinds of Azure Monitor alerts you can create and helps you understand when to use each type of alert.
+
+Azure Monitor has four types of alerts:
-There are four types of alerts:
- [Metric alerts](#metric-alerts) - [Log alerts](#log-alerts) - [Activity log alerts](#activity-log-alerts) - [Smart detection alerts](#smart-detection-alerts)
-## Choosing the right alert type
+## Choose the right alert type
-This table can help you decide when to use what type of alert. For more detailed information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+This table can help you decide when to use each type of alert. For more information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-|Alert Type |When to Use |Pricing Information|
+|Alert type |When to use |Pricing information|
||||
-|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed, so metric alerts are less expensive than log alerts. If the data you want to monitor is available in metric data, using metric alerts is recommended.|Each metrics alert rule is charged based on the number of time-series that are monitored. |
-|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts. Log alerts are more expensive than metric alerts.|Each Log Alert rule is billed based the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for Log Alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
-|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
+|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed, so metric alerts are less expensive than log alerts. If the data you want to monitor is available in metric data, we recommend that you use metric alerts.|Each metric alert rule is charged based on the number of time series that are monitored. |
+|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation by using log alerts. Log alerts are more expensive than metric alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log alerts configured for [at-scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
+|Activity log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to receive an alert when a resource experiences a specific event. Examples are a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
## Metric alerts A metric alert rule monitors a resource by evaluating conditions on the resource metrics at regular intervals. If the conditions are met, an alert is fired. A metric time-series is a series of metric values captured over a period of time.
-You can create rules using these metrics:
+You can create rules by using these metrics:
+ - [Platform metrics](alerts-metric-near-real-time.md#metrics-and-dimensions-supported) - [Custom metrics](../essentials/metrics-custom-overview.md) - [Application Insights custom metrics](../app/api-custom-events-metrics.md) - [Selected logs from a Log Analytics workspace converted to metrics](alerts-metric-logs.md) Metric alert rules include these features:+ - You can use multiple conditions on an alert rule for a single resource.-- You can add granularity by [monitoring multiple metric dimensions](#narrow-the-target-using-dimensions). -- You can use [Dynamic thresholds](#dynamic-thresholds) driven by machine learning.
+- You can add granularity by [monitoring multiple metric dimensions](#narrow-the-target-by-using-dimensions).
+- You can use [dynamic thresholds](#dynamic-thresholds) driven by machine learning.
- You can configure if metric alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). Metric alerts are stateful by default. The target of the metric alert rule can be:-- A single resource, such as a VM. See [this article](alerts-metric-near-real-time.md) for supported resource types.+
+- A single resource, such as a VM. For supported resource types, see [Supported resources for metric alerts in Azure Monitor](alerts-metric-near-real-time.md).
- [Multiple resources](#monitor-multiple-resources) of the same type in the same Azure region, such as a resource group. ### Multiple conditions
-When you create an alert rule for a single resource, you can apply multiple conditions. For example, you could create an alert rule to monitor an Azure virtual machine and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items". When an alert rule has multiple conditions, the alert fires when all the conditions in the alert rule are true and is resolved when at least one of the conditions is no longer true for three consecutive checks.
-### Narrow the target using Dimensions
+When you create an alert rule for a single resource, you can apply multiple conditions. For example, you could create an alert rule to monitor an Azure virtual machine and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items." When an alert rule has multiple conditions, the alert fires when all the conditions in the alert rule are true. The alert resolves when at least one of the conditions is no longer true for three consecutive checks.
+
+### Narrow the target by using dimensions
-Dimensions are name-value pairs that contain more data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
-For example, the Transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). You can choose to have an alert fired when there's a high number of transactions in any API name (which is the aggregated data), or you can use dimensions to further break it down to alert only when the number of transactions is high for specific API names.
-If you use more than one dimension, the metric alert rule can monitor multiple dimension values from different dimensions of a metric.
-The alert rule separately monitors all the dimensions value combinations.
-See [this article](alerts-metric-multiple-time-series-single-rule.md) for detailed instructions on using dimensions in metric alert rules.
+Dimensions are name-value pairs that contain more data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
-### Create resource-centric alerts using splitting by dimensions
+For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction. Examples are GetBlob, DeleteBlob, and PutPage. You can choose to have an alert fired when there's a high number of transactions in any API name, which is the aggregated data. Or you can use dimensions to further break it down to alert only when the number of transactions is high for specific API names.
-To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations. Splitting on Azure resource ID column makes the specified resource into the alert target.
+If you use more than one dimension, the metric alert rule can monitor multiple dimension values from different dimensions of a metric. The alert rule separately monitors all the dimension value combinations.
+For instructions on how to use dimensions in metric alert rules, see [Monitor multiple time series in a single metric alert rule](alerts-metric-multiple-time-series-single-rule.md).
-You may also decide not to split when you want a condition applied to multiple resources in the scope. For example, if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+### Create resource-centric alerts by splitting by dimensions
+
+To monitor for the same condition on multiple Azure resources, you can use the technique of splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations. Splitting on the Azure resource ID column makes the specified resource into the alert target.
+
+You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
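As a hedged illustration of the storage account Transactions scenario described earlier in this section, the following PowerShell sketch creates a metric alert rule that filters on an API name dimension. It assumes the Az.Monitor module; the rule name, resource IDs, and action group ID are placeholders, and parameter names can vary by module version.

```powershell
# Illustrative only: alert when GetBlob or PutPage transactions exceed a threshold.
$dimension = New-AzMetricAlertRuleV2DimensionSelection -DimensionName 'ApiName' -ValuesToInclude 'GetBlob', 'PutPage'

$criteria = New-AzMetricAlertRuleV2Criteria -MetricName 'Transactions' `
    -MetricNamespace 'Microsoft.Storage/storageAccounts' `
    -DimensionSelection $dimension -TimeAggregation Total -Operator GreaterThan -Threshold 5

Add-AzMetricAlertRuleV2 -Name 'high-transactions-by-api' -ResourceGroupName 'contoso-rg' `
    -TargetResourceId '/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.Storage/storageAccounts/contosostorage' `
    -WindowSize 00:05:00 -Frequency 00:05:00 -Condition $criteria -Severity 3 `
    -ActionGroupId '/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/microsoft.insights/actionGroups/contoso-action-group'
```

For multi-resource scopes, the rule targets a subscription or resource group instead of a single resource ID, as described in the next section.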
### Monitor multiple resources You can monitor at scale by applying the same metric alert rule to multiple resources of the same type for resources that exist in the same Azure region. Individual notifications are sent for each monitored resource.
-The platform metrics for these services in the following Azure clouds are supported:
+Platform metrics are supported in the Azure cloud for the following
| Service | Global Azure | Government | China | |:--|:-|:--|:--|
-| Virtual machines* | Yes |Yes | Yes |
-| SQL server databases | Yes | Yes | Yes |
-| SQL server elastic pools | Yes | Yes | Yes |
-| NetApp files capacity pools | Yes | Yes | Yes |
-| NetApp files volumes | Yes | Yes | Yes |
-| Key vaults | Yes | Yes | Yes |
+| Azure Virtual Machines | Yes |Yes | Yes |
+| SQL Server databases | Yes | Yes | Yes |
+| SQL Server elastic pools | Yes | Yes | Yes |
+| Azure NetApp Files capacity pools | Yes | Yes | Yes |
+| Azure NetApp Files volumes | Yes | Yes | Yes |
+| Azure Key Vault | Yes | Yes | Yes |
| Azure Cache for Redis | Yes | Yes | Yes | | Azure Stack Edge devices | Yes | Yes | Yes | | Recovery Services vaults | Yes | No | No |
-| Azure Database for PostgreSQL - Flexible Servers | Yes | Yes | Yes |
+| Azure Database for PostgreSQL - Flexible servers | Yes | Yes | Yes |
> [!NOTE]
- > Multi-resource metric alerts are not supported for the following scenarios:
- > - Alerting on virtual machines' guest metrics
- > - Alerting on virtual machines' network metrics (Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, Outbound Flows Maximum Creation Rate).
+ > Multi-resource metric alerts aren't supported for the following scenarios:
+ >
+ > - Alerting on virtual machines' guest metrics.
+ > - Alerting on virtual machines' network metrics. These metrics include Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, and Outbound Flows Maximum Creation Rate.
-You can specify the scope of monitoring with a single metric alert rule in one of three ways. For example, with virtual machines you can specify the scope as:
+You can specify the scope of monitoring with a single metric alert rule in one of three ways. For example, with virtual machines you can specify the scope as:
-- a list of virtual machines (in one Azure region) within a subscription-- all virtual machines (in one Azure region) in one or more resource groups in a subscription-- all virtual machines (in one Azure region) in a subscription
+- A list of virtual machines in one Azure region within a subscription.
+- All virtual machines in one Azure region in one or more resource groups in a subscription.
+- All virtual machines in one Azure region in a subscription.
### Dynamic thresholds
-Dynamic thresholds use advanced machine learning (ML) to:
-- Learn the historical behavior of metrics-- Identify patterns and adapt to metric changes over time, such as hourly, daily or weekly patterns. -- Recognize anomalies that indicate possible service issues-- Calculate the most appropriate threshold for the metric
+Dynamic thresholds use advanced machine learning to:
+
+- Learn the historical behavior of metrics.
+- Identify patterns and adapt to metric changes over time, such as hourly, daily, or weekly patterns.
+- Recognize anomalies that indicate possible service issues.
+- Calculate the most appropriate threshold for the metric.
-Machine Learning continuously uses new data to learn more and make the threshold more accurate. Because the system adapts to the metricsΓÇÖ behavior over time, and alerts based on deviations from its pattern, you don't have to know the "right" threshold for each metric.
+Machine learning continuously uses new data to learn more and make the threshold more accurate. The system adapts to the metrics' behavior over time and alerts based on deviations from its pattern. For this reason, you don't have to know the "right" threshold for each metric.
Dynamic thresholds help you:+ - Create scalable alerts for hundreds of metric series with one alert rule. If you have fewer alert rules, you spend less time creating and managing alert rules.-- Create rules without having to know what threshold to configure-- Configure up metric alerts using high-level concepts without extensive domain knowledge about the metric-- Prevent noisy (low precision) or wide (low recall) thresholds that donΓÇÖt have an expected pattern
+- Create rules without having to know what threshold to configure.
+- Configure metric alerts by using high-level concepts without extensive domain knowledge about the metric.
+- Prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern.
- Handle noisy metrics (such as machine CPU or memory) and metrics with low dispersion (such as availability and error rate).
-See [this article](alerts-dynamic-thresholds.md) for detailed instructions on using dynamic thresholds in metric alert rules.
+For instructions on how to use dynamic thresholds in metric alert rules, see [Dynamic thresholds in metric alerts](alerts-dynamic-thresholds.md).
## Log alerts A log alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired. Because you can use Log Analytics queries, you can perform advanced logic operations on your data and use the robust KQL features to manipulate log data. The target of the log alert rule can be:-- A single resource, such as a VM. -- Multiple resources of the same type in the same Azure region, such as a resource group. This is currently available for selected resource types.-- Multiple resources using [cross-resource query](../logs/cross-workspace-query.md#querying-across-log-analytics-workspaces-and-from-application-insights). +
+- A single resource, such as a VM.
+- Multiple resources of the same type in the same Azure region, such as a resource group. This capability is currently available for selected resource types.
+- Multiple resources using [cross-resource query](../logs/cross-workspace-query.md#querying-across-log-analytics-workspaces-and-from-application-insights).
Log alerts can measure two different things, which can be used for different monitoring scenarios:-- Table rows: The number of rows returned can be used to work with events such as Windows event logs, syslog, application exceptions.-- Calculation of a numeric column: Calculations based on any numeric column can be used to include any number of resources. For example, CPU percentage.+
+- **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions.
+- **Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage.
You can configure if log alerts are [stateful or stateless](alerts-overview.md#alerts-and-state) (currently in preview). > [!NOTE]
-> Log alerts work best when you are trying to detect specific data in the logs, as opposed to when you are trying to detect a **lack** of data in the logs. Since logs are semi-structured data, they are inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you are trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
+> Log alerts work best when you're trying to detect specific data in the logs, as opposed to when you're trying to detect a lack of data in the logs. Because logs are semi-structured data, they're inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you're trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs by using [metric alerts for logs](alerts-metric-logs.md).
### Dimensions in log alert rules
-You can use dimensions when creating log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually notifications are sent for each instance.
+You can use dimensions when you create log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually and notifications are sent for each instance.
### Splitting by dimensions in log alert rules
-To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource into the alert target.
-You may also decide not to split when you want a condition applied to multiple resources in the scope. For example, if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+To monitor for the same condition on multiple Azure resources, you can use the technique of splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource into the alert target.
+
+You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
-### Using the API
+### Use the API
-Manage new rules in your workspaces using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API.
+Manage new rules in your workspaces by using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API.
> [!NOTE]
-> Log alerts for Log Analytics used to be managed using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md).
+> Log alerts for Log Analytics were previously managed by using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current scheduledQueryRules API](alerts-log-api-switch.md).
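As a rough sketch of what a call to that API looks like, the following example creates a rule with `armclient`, the same tool used elsewhere in this document. The subscription, resource group, workspace, and action group values are placeholders, and the rule body is a minimal illustration of the 2021-08-01 schema rather than a complete reference.

```powershell
$ruleJson = @'
{
  "location": "eastus",
  "properties": {
    "displayName": "High error count",
    "enabled": true,
    "severity": 2,
    "evaluationFrequency": "PT5M",
    "windowSize": "PT15M",
    "scopes": [ "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" ],
    "criteria": {
      "allOf": [
        {
          "query": "Event | where EventLevelName == 'Error'",
          "timeAggregation": "Count",
          "operator": "GreaterThan",
          "threshold": 10,
          "failingPeriods": { "numberOfEvaluationPeriods": 1, "minFailingPeriodsToAlert": 1 }
        }
      ]
    },
    "actions": { "actionGroups": [ "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group>" ] }
  }
}
'@
armclient put "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/scheduledQueryRules/high-error-count?api-version=2021-08-01" $ruleJson
```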
+ ## Log alerts on your Azure bill
-Log Alerts are listed under resource provider microsoft.insights/scheduledqueryrules with:
-- Log Alerts on Application Insights shown with exact resource name along with resource group and alert properties.-- Log Alerts on Log Analytics shown with exact resource name along with resource group and alert properties; when created using scheduledQueryRules API.-- Log alerts created from [legacy Log Analytics API](./api-alerts.md) aren't tracked [Azure Resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have this resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log Alerts on legacy API are shown with above hidden resource name along with resource group and alert properties.
+Log alerts are listed under the resource provider `microsoft.insights/scheduledqueryrules` with:
+
+- Log alerts on Application Insights shown with the exact resource name along with resource group and alert properties.
+- Log alerts on Log Analytics shown with the exact resource name along with resource group and alert properties when they're created by using the scheduledQueryRules API.
+- Log alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked in [Azure Resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources. They have the resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log alerts on the legacy API are shown with the preceding hidden resource name along with resource group and alert properties.
+
+> [!NOTE]
+>
+> Unsupported resource characters such as <, >, %, &, \, ?, and / are replaced with an underscore character (_) in the hidden resource names. This change also appears in the billing information.
-> [!Note]
-> Unsupported resource characters such as <, >, %, &, \, ?, / are replaced with _ in the hidden resource names and this will also reflect in the billing information.
## Activity log alerts
-An activity log alert monitors a resource by checking the activity logs for a new activity log event that matches the defined conditions.
+An activity log alert monitors a resource by checking the activity logs for a new activity log event that matches the defined conditions.
+
+You might want to use activity log alerts for these types of scenarios:
-You may want to use activity log alerts for these types of scenarios:
-- When a specific operation occurs on resources in a specific resource group or subscription. For example, you may want to be notified when:
- - Any virtual machine in a production resource group is deleted.
+- When a specific operation occurs on resources in a specific resource group or subscription. For example, you might want to be notified when:
+ - Any virtual machine in a production resource group is deleted.
- Any new roles are assigned to a user in your subscription.-- A service health event occurs. Service health events include notifications of incidents and maintenance events that apply to resources in your subscription.
+- When a service health event occurs. Service health events include notifications of incidents and maintenance events that apply to resources in your subscription.
You can create an activity log alert on:-- Any of the activity log [event categories](../essentials/activity-log-schema.md), other than on alert events. -- Any activity log event in top-level property in the JSON object.
-Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They also can be created, updated, or deleted in the Azure portal.
+- Any of the activity log [event categories](../essentials/activity-log-schema.md), other than on alert events.
+- Any activity log event in a top-level property in the JSON object.
+
+Activity log alert rules are Azure resources, so you can use an Azure Resource Manager template to create them. You can also create, update, or delete activity log alert rules in the Azure portal.
An activity log alert only monitors events in the subscription in which the alert is created.
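For example, because activity log alert rules are Azure resources, a rule for the "virtual machine deleted in a production resource group" scenario can be created directly against Resource Manager. The following `armclient` sketch is illustrative only; the subscription, resource group, and action group IDs are placeholders, and the API version shown is an assumption.

```powershell
$activityAlertJson = @'
{
  "location": "Global",
  "properties": {
    "enabled": true,
    "scopes": [ "/subscriptions/<subscription-id>/resourceGroups/production-rg" ],
    "condition": {
      "allOf": [
        { "field": "category", "equals": "Administrative" },
        { "field": "operationName", "equals": "Microsoft.Compute/virtualMachines/delete" }
      ]
    },
    "actions": {
      "actionGroups": [
        { "actionGroupId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group>" }
      ]
    }
  }
}
'@
armclient put "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/activityLogAlerts/vm-delete-alert?api-version=2020-10-01" $activityAlertJson
```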
-## Smart Detection alerts
+## Smart detection alerts
-After setting up Application Insights for your project, when your app generates a certain minimum amount of data, Smart Detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies. Smart Detection monitors the data received from your app, and in particular the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
+After you set up Application Insights for your project, your app begins to generate data. Based on this data, Smart Detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others. The overall failure rate might go up as load increases.
-As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If there's an abnormal rise in failure rate compared to previous performance, an analysis is triggered. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature needs no set-up nor configuration, as it uses machine learning algorithms to predict the normal failure rate.
+Smart Detection uses machine learning to find these anomalies. Smart Detection monitors the data received from your app, and especially the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
-While metric alerts tell you there might be a problem, Smart Detection starts the diagnostic work for you, performing much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, helping you to get quickly to the root of the problem.
+As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If there's an abnormal rise in failure rate compared to previous performance, an analysis is triggered. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature doesn't need setup or configuration because it uses machine learning algorithms to predict the normal failure rate.
-Smart detection works for web apps hosted in the cloud or on your own servers that generate application requests or dependency data.
+Metric alerts tell you there might be a problem, but Smart Detection starts the diagnostic work for you. It performs much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, which helps you get to the root of the problem quickly.
+
+Smart Detection works for web apps hosted in the cloud or on your own servers that generate application requests or dependency data.
## Next steps+ - Get an [overview of alerts](alerts-overview.md). - [Create an alert rule](alerts-log.md). - Learn more about [Smart Detection](proactive-failure-diagnostics.md).
azure-monitor Api Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/api-alerts.md
Title: Using Log Analytics Alert REST API
-description: The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics, which is part of Log Analytics. This article provides details of the API and several examples for performing different operations.
+ Title: Use the Log Analytics Alert REST API
+description: The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details about the API and examples for performing different operations.
Last updated 2/23/2022
-# Create and manage alert rules in Log Analytics with REST API
+# Create and manage alert rules in Log Analytics with REST API
> [!IMPORTANT]
-> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), log analytics workspace(s) created after *June 1, 2019* manage alert rules using the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). Customers are encouraged to [switch to the current API](./alerts-log-api-switch.md) in older workspaces to leverage Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits). This article describes management of alert rules using the legacy API.
+> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), Log Analytics workspaces created after *June 1, 2019* manage alert rules by using the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). Customers are encouraged to [switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits). This article describes management of alert rules by using the legacy API.
-The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details of the API and several examples for performing different operations.
+The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details about the API and several examples for performing different operations.
-The Log Analytics Search REST API is RESTful and can be accessed via the Azure Resource Manager REST API. In this document, you will find examples where the API is accessed from a PowerShell command line using [ARMClient](https://github.com/projectkudu/ARMClient), an open-source command-line tool that simplifies invoking the Azure Resource Manager API. The use of ARMClient and PowerShell is one of many options to access the Log Analytics Search API. With these tools, you can utilize the RESTful Azure Resource Manager API to make calls to Log Analytics workspaces and perform search commands within them. The API will output search results to you in JSON format, allowing you to use the search results in many different ways programmatically.
+The Log Analytics Search REST API is RESTful and can be accessed via the Azure Resource Manager REST API. In this article, you'll find examples where the API is accessed from a PowerShell command line by using [ARMClient](https://github.com/projectkudu/ARMClient). This open-source command-line tool simplifies invoking the Azure Resource Manager API.
+
+The use of ARMClient and PowerShell is one of many options you can use to access the Log Analytics Search API. With these tools, you can utilize the RESTful Azure Resource Manager API to make calls to Log Analytics workspaces and perform search commands within them. The API outputs search results in JSON format so that you can use the search results in many different ways programmatically.
## Prerequisites
-Currently, alerts can only be created with a saved search in Log Analytics. You can refer to the [Log Search REST API](../logs/log-query-overview.md) for more information.
+
+Currently, alerts can only be created with a saved search in Log Analytics. For more information, see the [Log Search REST API](../logs/log-query-overview.md).
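+
+If you don't have a saved search yet, the following sketch shows one way to create one by using the same ARMClient pattern used throughout this article. The `mysearch` ID, the property values, and the query are illustrative assumptions; see the Log Search REST API reference for the exact contract:
+
+```powershell
+# Define a simple saved search; the schedules and actions in this article
+# are attached to a saved search like this one.
+$searchJson = "{'properties': { 'Category': 'Samples', 'DisplayName': 'All events count', 'Query': 'Event | count', 'Version': 1 } }"
+armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/mysearch?api-version=2015-03-20 $searchJson
+```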
## Schedules
-A saved search can have one or more schedules. The schedule defines how often the search is run and the time interval over which the criteria is identified.
-Schedules have the properties in the following table.
+
+A saved search can have one or more schedules. The schedule defines how often the search is run and the time interval over which the criteria are identified. Schedules have the properties described in the following table:
| Property | Description | |: |: |
-| Interval |How often the search is run. Measured in minutes. |
-| QueryTimeSpan |The time interval over which the criteria is evaluated. Must be equal to or greater than Interval. Measured in minutes. |
-| Version |The API version being used. Currently, this should always be set to 1. |
+| `Interval` |How often the search is run. Measured in minutes. |
+| `QueryTimeSpan` |The time interval over which the criteria are evaluated. Must be equal to or greater than `Interval`. Measured in minutes. |
+| `Version` |The API version being used. Currently, this setting should always be `1`. |
-For example, consider an event query with an Interval of 15 minutes and a Timespan of 30 minutes. In this case, the query would be run every 15 minutes, and an alert would be triggered if the criteria continued to resolve to true over a 30-minute span.
+For example, consider an event query with an `Interval` of 15 minutes and a `Timespan` of 30 minutes. In this case, the query would be run every 15 minutes. An alert would be triggered if the criteria continued to resolve to `true` over a 30-minute span.
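+
+As a minimal sketch, the schedule in this example could be created with the same armclient pattern that's shown in the following sections. The `myexampleschedule` ID is an arbitrary placeholder:
+
+```powershell
+# Run the saved search every 15 minutes and evaluate the criteria over the
+# trailing 30 minutes.
+$scheduleJson = "{'properties': { 'Interval': 15, 'QueryTimeSpan': 30, 'Enabled': 'true' } }"
+armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/myexampleschedule?api-version=2015-03-20 $scheduleJson
+```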
+
+### Retrieve schedules
-### Retrieving schedules
Use the Get method to retrieve all schedules for a saved search. ```powershell
Use the Get method with a schedule ID to retrieve a particular schedule for a sa
armclient get /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Subscription ID}/schedules/{Schedule ID}?api-version=2015-03-20 ```
-Following is a sample response for a schedule.
+The following sample response is for a schedule:
```json {
Following is a sample response for a schedule.
} ```
-### Creating a schedule
-Use the Put method with a unique schedule ID to create a new schedule. Two schedules cannot have the same ID even if they are associated with different saved searches. When you create a schedule in the Log Analytics console, a GUID is created for the schedule ID.
+### Create a schedule
+
+Use the Put method with a unique schedule ID to create a new schedule. Two schedules can't have the same ID even if they're associated with different saved searches. When you create a schedule in the Log Analytics console, a GUID is created for the schedule ID.
> [!NOTE] > The name for all saved searches, schedules, and actions created with the Log Analytics API must be in lowercase.
$scheduleJson = "{'properties': { 'Interval': 15, 'QueryTimeSpan':15, 'Enabled':
armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/mynewschedule?api-version=2015-03-20 $scheduleJson ```
-### Editing a schedule
-Use the Put method with an existing schedule ID for the same saved search to modify that schedule; in example below the schedule is disabled. The body of the request must include the *etag* of the schedule.
+### Edit a schedule
+
+Use the Put method with an existing schedule ID for the same saved search to modify that schedule. In the following example, the schedule is disabled. The body of the request must include the *etag* of the schedule.
```powershell $scheduleJson = "{'etag': 'W/\"datetime'2016-02-25T20%3A54%3A49.8074679Z'\""','properties': { 'Interval': 15, 'QueryTimeSpan':15, 'Enabled':'false' } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/mynewschedule?api-version=2015-03-20 $scheduleJson ```
-### Deleting schedules
+### Delete schedules
+ Use the Delete method with a schedule ID to delete a schedule. ```powershell
armclient delete /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupN
``` ## Actions
-A schedule can have multiple actions. An action may define one or more processes to perform such as sending a mail or starting a runbook, or it may define a threshold that determines when the results of a search match some criteria. Some actions will define both so that the processes are performed when the threshold is met.
-All actions have the properties in the following table. Different types of alerts have different additional properties, which are described below.
+A schedule can have multiple actions. An action might define one or more processes to perform, such as sending an email or starting a runbook. An action also might define a threshold that determines when the results of a search match some criteria. Some actions will define both so that the processes are performed when the threshold is met.
+
+All actions have the properties described in the following table. Different types of alerts have additional properties, which are described in later sections:
| Property | Description | |: |: |
-| `Type` |Type of the action. Currently the possible values are Alert and Webhook. |
+| `Type` |Type of the action. Currently, the possible values are `Alert` and `Webhook`. |
| `Name` |Display name for the alert. |
-| `Version` |The API version being used. Currently, this should always be set to 1. |
+| `Version` |The API version being used. Currently, this setting should always be `1`. |
-### Retrieving actions
+### Retrieve actions
Use the Get method to retrieve all actions for a schedule.
Use the Get method with the action ID to retrieve a particular action for a sche
armclient get /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Subscription ID}/schedules/{Schedule ID}/actions/{Action ID}?api-version=2015-03-20 ```
-### Creating or editing actions
-Use the Put method with an action ID that is unique to the schedule to create a new action. When you create an action in the Log Analytics console, a GUID is for the action ID.
+### Create or edit actions
+
+Use the Put method with an action ID that's unique to the schedule to create a new action. When you create an action in the Log Analytics console, a GUID is created for the action ID.
> [!NOTE] > The name for all saved searches, schedules, and actions created with the Log Analytics API must be in lowercase.
-Use the Put method with an existing action ID for the same saved search to modify that schedule. The body of the request must include the etag of the schedule.
+Use the Put method with an existing action ID for the same saved search to modify that schedule. The body of the request must include the etag of the schedule.
-The request format for creating a new action varies by action type so these examples are provided in the sections below.
+The request format for creating a new action varies by action type, so these examples are provided in the following sections.
-### Deleting actions
+### Delete actions
Use the Delete method with the action ID to delete an action.
Use the Delete method with the action ID to delete an action.
armclient delete /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Subscription ID}/schedules/{Schedule ID}/Actions/{Action ID}?api-version=2015-03-20 ```
-### Alert Actions
-A Schedule should have one and only one Alert action. Alert actions have one or more of the sections in the following table. Each is described in further detail below.
+### Alert actions
+
+A schedule should have one and only one Alert action. Alert actions have one or more of the sections described in the following table:
| Section | Description | Usage | |: |: |: |
-| Threshold |Criteria for when the action is run.| Required for every alert, before or after they are extended to Azure. |
-| Severity |Label used to classify alert when triggered.| Required for every alert, before or after they are extended to Azure. |
-| Suppress |Option to stop notifications from alert. | Optional for every alert, before or after they are extended to Azure. |
-| Action Groups |IDs of Azure ActionGroup where actions required are specified, like - E-Mails, SMSs, Voice Calls, Webhooks, Automation Runbooks, ITSM Connectors, etc.| Required once alerts are extended to Azure|
-| Customize Actions|Modify the standard output for select actions from ActionGroup| Optional for every alert, can be used after alerts are extended to Azure. |
+| Threshold |Criteria for when the action is run.| Required for every alert, before or after they're extended to Azure. |
+| Severity |Label used to classify the alert when triggered.| Required for every alert, before or after they're extended to Azure. |
+| Suppress |Option to stop notifications from alerts. | Optional for every alert, before or after they're extended to Azure. |
+| Action groups |IDs of the Azure `ActionGroup` resources where the required actions are specified, like email, SMS, voice call, webhook, automation runbook, and ITSM connector.| Required after alerts are extended to Azure.|
+| Customize actions|Modify the standard output for select actions from `ActionGroup`.| Optional for every alert and can be used after alerts are extended to Azure. |
### Thresholds
-An Alert action should have one and only one threshold. When the results of a saved search match the threshold in an action associated with that search, then any other processes in that action are run. An action can also contain only a threshold so that it can be used with actions of other types that donΓÇÖt contain thresholds.
-Thresholds have the properties in the following table.
+An Alert action should have one and only one threshold. When the results of a saved search match the threshold in an action associated with that search, any other processes in that action are run. An action can also contain only a threshold so that it can be used with actions of other types that don't contain thresholds.
+
+Thresholds have the properties described in the following table:
| Property | Description | |: |: |
-| `Operator` |Operator for the threshold comparison. <br> gt = Greater Than <br> lt = Less Than |
+| `Operator` |Operator for the threshold comparison. <br> gt = Greater than <br> lt = Less than |
| `Value` |Value for the threshold. |
-For example, consider an event query with an Interval of 15 minutes, a Timespan of 30 minutes, and a Threshold of greater than 10. In this case, the query would be run every 15 minutes, and an alert would be triggered if it returned 10 events that were created over a 30-minute span.
+For example, consider an event query with an `Interval` of 15 minutes, a `Timespan` of 30 minutes, and a `Threshold` of greater than 10. In this case, the query would be run every 15 minutes. An alert would be triggered if it returned 10 events that were created over a 30-minute span.
-Following is a sample response for an action with only a threshold.
+The following sample response is for an action with only a `Threshold`:
```json "etag": "W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"",
Following is a sample response for an action with only a threshold.
} ```
-Use the Put method with a unique action ID to create a new threshold action for a schedule.
+Use the Put method with a unique action ID to create a new threshold action for a schedule.
```powershell $thresholdJson = "{'properties': { 'Name': 'My Threshold', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 10 } } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/mythreshold?api-version=2015-03-20 $thresholdJson ```
-Use the Put method with an existing action ID to modify a threshold action for a schedule. The body of the request must include the etag of the action.
+Use the Put method with an existing action ID to modify a threshold action for a schedule. The body of the request must include the etag of the action.
```powershell $thresholdJson = "{'etag': 'W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"','properties': { 'Name': 'My Threshold', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 10 } } }"
armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName
``` #### Severity
-Log Analytics allows you to classify your alerts into categories, to allow easier management and triage. The Alert severity defined is: informational, warning, and critical. These are mapped to the normalized severity scale of Azure Alerts as:
-|Log Analytics Severity Level |Azure Alerts Severity Level |
+Log Analytics allows you to classify your alerts into categories for easier management and triage. The alert severity levels are `informational`, `warning`, and `critical`. These categories are mapped to the normalized severity scale of Azure Alerts as shown in the following table:
+
+|Log Analytics severity level |Azure Alerts severity level |
|||
|`critical` |Sev 0|
|`warning` |Sev 1|
|`informational` | Sev 2|
-Following is a sample response for an action with only a threshold and severity.
+The following sample response is for an action with only `Threshold` and `Severity`:
```json "etag": "W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"",
Following is a sample response for an action with only a threshold and severity.
} ```
-Use the Put method with a unique action ID to create a new action for a schedule with severity.
+Use the Put method with a unique action ID to create a new action for a schedule with `Severity`.
```powershell $thresholdWithSevJson = "{'properties': { 'Name': 'My Threshold', 'Version':'1','Severity': 'critical', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 10 } } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/mythreshold?api-version=2015-03-20 $thresholdWithSevJson ```
-Use the Put method with an existing action ID to modify a severity action for a schedule. The body of the request must include the etag of the action.
+Use the Put method with an existing action ID to modify a severity action for a schedule. The body of the request must include the etag of the action.
```powershell $thresholdWithSevJson = "{'etag': 'W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"','properties': { 'Name': 'My Threshold', 'Version':'1','Severity': 'critical', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 10 } } }"
armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName
``` #### Suppress
-Log Analytics based query alerts will fire every time threshold is met or exceeded. Based on the logic implied in the query, this may result in alert getting fired for a series of intervals and hence notifications also being sent constantly. To prevent such scenario, a user can set Suppress option instructing Log Analytics to wait for a stipulated amount of time before notification is fired the second time for the alert rule. So if suppress is set for 30 minutes; then alert will fire the first time and send notifications configured. But then wait for 30 minutes, before notification for the alert rule is again used. In the interim period, alert rule will continue to run - only notification is suppressed by Log Analytics for specified time, regardless of how many times the alert rule fired in this period.
-Suppress property of Log Analytics alert rule is specified using the *Throttling* value and the suppression period using *DurationInMinutes* value.
+Log Analytics-based query alerts fire every time the threshold is met or exceeded. Based on the logic implied in the query, an alert might get fired for a series of intervals. The result is that notifications are sent constantly. To prevent such a scenario, you can set the `Suppress` option that instructs Log Analytics to wait for a stipulated amount of time before notification is fired the second time for the alert rule.
+
+For example, if `Suppress` is set for 30 minutes, the alert will fire the first time and send the configured notifications. It will then wait 30 minutes before notification for the alert rule is used again. In the interim period, the alert rule continues to run. Only the notification is suppressed by Log Analytics for the specified time, regardless of how many times the alert rule fires in this period.
+
+The `Suppress` property of a Log Analytics alert rule is specified by using the `Throttling` value. The suppression period is specified by using the `DurationInMinutes` value.
-Following is a sample response for an action with only a threshold, severity, and suppress property
+The following sample response is for an action with only `Threshold`, `Severity`, and `Suppress` properties.
```json "etag": "W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"",
Following is a sample response for an action with only a threshold, severity, an
} ```
-Use the Put method with a unique action ID to create a new action for a schedule with severity.
+Use the Put method with a unique action ID to create a new action for a schedule with `Severity` and `Throttling` (suppression).
```powershell $AlertSuppressJson = "{'properties': { 'Name': 'My Threshold', 'Version':'1','Severity': 'critical', 'Type':'Alert', 'Throttling': { 'DurationInMinutes': 30 },'Threshold': { 'Operator': 'gt', 'Value': 10 } } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myalert?api-version=2015-03-20 $AlertSuppressJson ```
-Use the Put method with an existing action ID to modify a severity action for a schedule. The body of the request must include the etag of the action.
+Use the Put method with an existing action ID to modify a severity action for a schedule. The body of the request must include the etag of the action.
```powershell $AlertSuppressJson = "{'etag': 'W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"','properties': { 'Name': 'My Threshold', 'Version':'1','Severity': 'critical', 'Type':'Alert', 'Throttling': { 'DurationInMinutes': 30 },'Threshold': { 'Operator': 'gt', 'Value': 10 } } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myalert?api-version=2015-03-20 $AlertSuppressJson ```
-#### Action Groups
-All alerts in Azure, use Action Group as the default mechanism for handling actions. With Action Group, you can specify your actions once and then associate the action group to multiple alerts - across Azure. Without the need, to repeatedly declare the same actions over and over again. Action Groups support multiple actions - including email, SMS, Voice Call, ITSM Connection, Automation Runbook, Webhook URI and more.
+#### Action groups
-For users who have extended their alerts into Azure - a schedule should now have Action Group details passed along with threshold, to be able to create an alert. E-mail details, Webhook URLs, Runbook Automation details, and other Actions, need to be defined in side an Action Group first before creating an alert; one can create [Action Group from Azure Monitor](./action-groups.md) in Portal or use [Action Group API](/rest/api/monitor/actiongroups).
+All alerts in Azure use action groups as the default mechanism for handling actions. With an action group, you can specify your actions once and then associate the action group with multiple alerts across Azure without the need to declare the same actions repeatedly. Action groups support multiple actions like email, SMS, voice call, ITSM connection, automation runbook, and webhook URI.
-To add association of action group to an alert, specify the unique Azure Resource Manager ID of the action group in the alert definition. A sample illustration is provided below:
+For users who have extended their alerts into Azure, a schedule should now have action group details passed along with `Threshold` to be able to create an alert. Email details, webhook URLs, runbook automation details, and other actions need to be defined inside an action group before you create an alert. You can create an [action group from Azure Monitor](./action-groups.md) in the Azure portal or use the [Action Group API](/rest/api/monitor/actiongroups).
+
+To associate an action group with an alert, specify the unique Azure Resource Manager ID of the action group in the alert definition. The following sample illustrates the usage:
```json "etag": "W/\"datetime'2017-12-13T10%3A52%3A21.1697364Z'\"",
To add association of action group to an alert, specify the unique Azure Resourc
} ```
-Use the Put method with a unique action ID to associate already existing Action Group for a schedule. The following is a sample illustration of usage.
+Use the Put method with a unique action ID to associate an existing action group with a schedule. The following sample illustrates the usage:
```powershell $AzNsJson = "{'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup']} } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson ```
-Use the Put method with an existing action ID to modify an Action Group associated for a schedule. The body of the request must include the etag of the action.
+Use the Put method with an existing action ID to modify an action group associated with a schedule. The body of the request must include the etag of the action.
```powershell $AzNsJson = "{'etag': 'datetime'2017-12-13T10%3A52%3A21.1697364Z'\"', 'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': { 'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup'] } } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson ```
-#### Customize Actions
-By default actions, follow standard template and format for notifications. But user can customize some actions, even if they are controlled by Action Groups. Currently, customization is possible for Email Subject and Webhook Payload.
+#### Customize actions
+
+By default, actions follow standard templates and format for notifications. But you can customize some actions, even if they're controlled by action groups. Currently, customization is possible for `EmailSubject` and `WebhookPayload`.
-##### Customize E-Mail Subject for Action Group
-By default, the email subject for alerts is: Alert Notification `<AlertName>` for `<WorkspaceName>`. But this can be customized, so that you can specific words or tags - to allow you to easily employ filter rules in your Inbox.
-The customize email header details need to send along with ActionGroup details, as in sample below.
+##### Customize EmailSubject for an action group
+
+By default, the email subject for alerts is Alert Notification `<AlertName>` for `<WorkspaceName>`. But the subject can be customized so that you can specify words or tags that make it easy to apply filter rules in your inbox. The customized email header details need to be sent along with `ActionGroup` details, as in the following sample:
```json "etag": "W/\"datetime'2017-12-13T10%3A52%3A21.1697364Z'\"",
The customize email header details need to send along with ActionGroup details,
} ```
-Use the Put method with a unique action ID to associate already existing Action Group with customization for a schedule. The following is a sample illustration of usage.
+Use the Put method with a unique action ID to associate an existing action group with customization for a schedule. The following sample illustrates the usage:
```powershell $AzNsJson = "{'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup'], 'CustomEmailSubject': 'Azure Alert fired'} } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson ```
-Use the Put method with an existing action ID to modify an Action Group associated for a schedule. The body of the request must include the etag of the action.
+Use the Put method with an existing action ID to modify an action group associated with a schedule. The body of the request must include the etag of the action.
```powershell $AzNsJson = "{'etag': 'datetime'2017-12-13T10%3A52%3A21.1697364Z'\"', 'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup']}, 'CustomEmailSubject': 'Azure Alert fired' } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson ```
-##### Customize Webhook Payload for Action Group
-By default, the webhook sent via Action Group for log analytics has a fixed structure. But one can customize the JSON payload by using specific variables supported, to meet requirements of the webhook endpoint. For more information, see [Webhook action for log alert rules](./alerts-log-webhook.md).
+##### Customize WebhookPayload for an action group
+
+By default, the webhook sent via an action group for Log Analytics has a fixed structure. But you can customize the JSON payload by using specific supported variables to meet the requirements of the webhook endpoint. For more information, see [Webhook action for log alert rules](./alerts-log-webhook.md).
-The customize webhook details need to send along with ActionGroup details and will be applied to all Webhook URI specified inside the action group; as in sample below.
+The customized webhook details must be sent along with `ActionGroup` details. They're applied to all webhook URIs specified inside the action group. The following sample illustrates the usage:
```json "etag": "W/\"datetime'2017-12-13T10%3A52%3A21.1697364Z'\"",
The customize webhook details need to send along with ActionGroup details and wi
}, ```
-Use the Put method with a unique action ID to associate already existing Action Group with customization for a schedule. The following is a sample illustration of usage.
+Use the Put method with a unique action ID to associate an existing action group with customization for a schedule. The following sample illustrates the usage:
```powershell $AzNsJson = "{'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup'], 'CustomEmailSubject': 'Azure Alert fired','CustomWebhookPayload': '{\"field1\":\"value1\",\"field2\":\"value2\"}'} } }" armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson ```
-Use the Put method with an existing action ID to modify an Action Group associated for a schedule. The body of the request must include the etag of the action.
+Use the Put method with an existing action ID to modify an action group associated with a schedule. The body of the request must include the etag of the action.
```powershell $AzNsJson = "{'etag': 'datetime'2017-12-13T10%3A52%3A21.1697364Z'\"', 'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup']}, 'CustomEmailSubject': 'Azure Alert fired','CustomWebhookPayload': '{\"field1\":\"value1\",\"field2\":\"value2\"}' } }"
armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Na
## Next steps * Use the [REST API to perform log searches](../logs/log-query-overview.md) in Log Analytics.
-* Learn about [log alerts in Azure monitor](./alerts-unified-log.md)
-* How to [create, edit or manage log alert rules in Azure monitor](./alerts-log.md)
+* Learn about [log alerts in Azure Monitor](./alerts-unified-log.md).
+* Learn how to [create, edit, or manage log alert rules in Azure Monitor](./alerts-log.md).
azure-monitor Itsm Convert Servicenow To Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-convert-servicenow-to-webhook.md
+
+ Title: Convert ITSM actions that send events to ServiceNow to secure webhook actions
+description: Learn how to convert ITSM actions that send events to ServiceNow to secure webhook actions.
+ Last updated : 09/20/2022
+# Convert ITSM actions that send events to ServiceNow to secure webhook actions
+
+> [!NOTE]
+> As of September 2022, we're starting the three-year process of deprecating support for using ITSM actions to send events to ServiceNow.
+
+To migrate your ITSM connector to the new secure webhook integration, follow the [secure webhook configuration instructions](itsmc-secure-webhook-connections-servicenow.md).
+
+If you're syncing work items bi-directionally between ServiceNow and an Azure Log Analytics workspace, follow these steps to pull data from ServiceNow into your Log Analytics workspace.
+
+## Pull data from your ServiceNow instance into a Log Analytics workspace
+
+1. [Create a logic app](../../logic-apps/quickstart-create-first-logic-app-workflow.md) in the Azure portal.
+1. Create an HTTP GET request that uses the [ServiceNow **Table** API](https://developer.servicenow.com/dev.do#!/reference/api/sandiego/rest/c_TableAPI) to retrieve data from the ServiceNow instance. [See an example](https://docs.servicenow.com/bundle/sandiego-application-development/page/integrate/inbound-rest/concept/use-REST-API-Explorer.html#t_GetStartedRetrieveExisting) of how to use the Table call to retrieve incidents. A sample request is also shown after these steps.
+1. To see a list of tables in your ServiceNow instance, in ServiceNow, go to **System definitions**, then **Tables**. Example table names include: `change_request`, `em_alert`, `incident`, `em_event`.
+
+ :::image type="content" source="media/itsmc-convert-servicenow-to-webhook/alerts-itsmc-service-now-tables.png" alt-text="Screenshot of the Service Now tables.":::
+
+1. In Logic Apps, add a `Parse JSON` action on the results of the GET request you created in step 2.
+1. Add a schema for the retrieved payload. You can use the **Use sample payload to generate schema** feature. A sample schema for a `change_request` table appears later in this article.
+
+ :::image type="content" source="media/itsmc-convert-servicenow-to-webhook/alerts-itsmc-service-now-parse-json.png" alt-text="Screenshot of a sample schema. ":::
+
+1. Create a [Log Analytics workspace](../logs/quick-create-workspace.md#create-a-workspace).
+1. Create a `for each` loop to insert each row of the data returned from the API into the Log Analytics workspace.
+ - In the **Select an output from previous steps** section, enter the data set returned by the JSON parse action you created in step 4.
+ - Construct each row from the set that enters the loop.
+ - In the last step of the loop, use `Send data` to send the data to the Log Analytics workspace with these values.
+      - **Custom log name**: The name of the custom log you're using to save the data to the Log Analytics workspace.
+      - A connection to the Log Analytics workspace that you created in step 6.
+
+ :::image type="content" source="media/itsmc-convert-servicenow-to-webhook/alerts-itsmc-service-now-for-loop.png" alt-text="Screenshot showing loop that imports data into a Log Analytics workspace.":::
+
+The data is visible in the **Custom logs** section of your Log Analytics workspace.
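+
+For reference, the HTTP action in step 2 issues a request like the following PowerShell sketch. The instance name, credentials, and row limit are placeholders; the `/api/now/table/{tableName}` path comes from the ServiceNow Table API:
+
+```powershell
+# Query the ServiceNow Table API for recent change requests by using basic authentication.
+$instance = "your-instance"   # Placeholder ServiceNow instance name
+$cred     = Get-Credential    # ServiceNow account with read access to the table
+$uri      = "https://$instance.service-now.com/api/now/table/change_request?sysparm_limit=10"
+
+$response = Invoke-RestMethod -Uri $uri -Method Get -Credential $cred -Headers @{ Accept = "application/json" }
+
+# The Table API wraps the returned rows in a "result" array.
+$response.result | Select-Object number, short_description, state
+```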
+
+## Sample JSON schema for a change_request table
+
+```json
+{
+ "properties": {
+ "content": {
+ "properties": {
+ "result": {
+ "items": {
+ "properties": {
+ "active": {
+ "type": "string"
+ },
+ "activity_due": {
+ "type": "string"
+ },
+ "additional_assignee_list": {
+ "type": "string"
+ },
+ "approval": {
+ "type": "string"
+ },
+ "approval_history": {
+ "type": "string"
+ },
+ "approval_set": {
+ "type": "string"
+ },
+ "assigned_to": {
+ "properties": {
+ "link": {
+ "type": "string"
+ },
+ "value": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "assignment_group": {
+ "type": "string"
+ },
+ "backout_plan": {
+ "type": "string"
+ },
+ "business_duration": {
+ "type": "string"
+ },
+ "business_service": {
+ "type": "string"
+ },
+ "cab_date": {
+ "type": "string"
+ },
+ "cab_delegate": {
+ "type": "string"
+ },
+ "cab_recommendation": {
+ "type": "string"
+ },
+ "cab_required": {
+ "type": "string"
+ },
+ "calendar_duration": {
+ "type": "string"
+ },
+ "category": {
+ "type": "string"
+ },
+ "change_plan": {
+ "type": "string"
+ },
+ "chg_model": {
+ "type": "string"
+ },
+ "close_code": {
+ "type": "string"
+ },
+ "close_notes": {
+ "type": "string"
+ },
+ "closed_at": {
+ "type": "string"
+ },
+ "closed_by": {
+ "properties": {
+ "link": {
+ "type": "string"
+ },
+ "value": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "cmdb_ci": {
+ "properties": {
+ "link": {
+ "type": "string"
+ },
+ "value": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "comments": {
+ "type": "string"
+ },
+ "comments_and_work_notes": {
+ "type": "string"
+ },
+ "company": {
+ "type": "string"
+ },
+ "conflict_last_run": {
+ "type": "string"
+ },
+ "conflict_status": {
+ "type": "string"
+ },
+ "contact_type": {
+ "type": "string"
+ },
+ "correlation_display": {
+ "type": "string"
+ },
+ "correlation_id": {
+ "type": "string"
+ },
+ "delivery_plan": {
+ "type": "string"
+ },
+ "delivery_task": {
+ "type": "string"
+ },
+ "description": {
+ "type": "string"
+ },
+ "due_date": {
+ "type": "string"
+ },
+ "end_date": {
+ "type": "string"
+ },
+ "escalation": {
+ "type": "string"
+ },
+ "expected_start": {
+ "type": "string"
+ },
+ "follow_up": {
+ "type": "string"
+ },
+ "group_list": {
+ "type": "string"
+ },
+ "impact": {
+ "type": "string"
+ },
+ "implementation_plan": {
+ "type": "string"
+ },
+ "justification": {
+ "type": "string"
+ },
+ "knowledge": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ },
+ "made_sla": {
+ "type": "string"
+ },
+ "number": {
+ "type": "string"
+ },
+ "on_hold": {
+ "type": "string"
+ },
+ "on_hold_reason": {
+ "type": "string"
+ },
+ "on_hold_task": {
+ "type": "string"
+ },
+ "opened_at": {
+ "type": "string"
+ },
+ "opened_by": {
+ "properties": {
+ "link": {
+ "type": "string"
+ },
+ "value": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "order": {
+ "type": "string"
+ },
+ "outside_maintenance_schedule": {
+ "type": "string"
+ },
+ "parent": {
+ "type": "string"
+ },
+ "phase": {
+ "type": "string"
+ },
+ "phase_state": {
+ "type": "string"
+ },
+ "priority": {
+ "type": "string"
+ },
+ "production_system": {
+ "type": "string"
+ },
+ "reason": {
+ "type": "string"
+ },
+ "reassignment_count": {
+ "type": "string"
+ },
+ "requested_by": {
+ "properties": {
+ "link": {
+ "type": "string"
+ },
+ "value": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "requested_by_date": {
+ "type": "string"
+ },
+ "review_comments": {
+ "type": "string"
+ },
+ "review_date": {
+ "type": "string"
+ },
+ "review_status": {
+ "type": "string"
+ },
+ "risk": {
+ "type": "string"
+ },
+ "risk_impact_analysis": {
+ "type": "string"
+ },
+ "route_reason": {
+ "type": "string"
+ },
+ "scope": {
+ "type": "string"
+ },
+ "service_offering": {
+ "type": "string"
+ },
+ "short_description": {
+ "type": "string"
+ },
+ "sla_due": {
+ "type": "string"
+ },
+ "start_date": {
+ "type": "string"
+ },
+ "state": {
+ "type": "string"
+ },
+ "std_change_producer_version": {
+ "type": "string"
+ },
+ "sys_class_name": {
+ "type": "string"
+ },
+ "sys_created_by": {
+ "type": "string"
+ },
+ "sys_created_on": {
+ "type": "string"
+ },
+ "sys_domain": {
+ "properties": {
+ "link": {
+ "type": "string"
+ },
+ "value": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_domain_path": {
+ "type": "string"
+ },
+ "sys_id": {
+ "type": "string"
+ },
+ "sys_mod_count": {
+ "type": "string"
+ },
+ "sys_tags": {
+ "type": "string"
+ },
+ "sys_updated_by": {
+ "type": "string"
+ },
+ "sys_updated_on": {
+ "type": "string"
+ },
+ "task_effective_number": {
+ "type": "string"
+ },
+ "test_plan": {
+ "type": "string"
+ },
+ "time_worked": {
+ "type": "string"
+ },
+ "type": {
+ "type": "string"
+ },
+ "unauthorized": {
+ "type": "string"
+ },
+ "universal_request": {
+ "type": "string"
+ },
+ "upon_approval": {
+ "type": "string"
+ },
+ "upon_reject": {
+ "type": "string"
+ },
+ "urgency": {
+ "type": "string"
+ },
+ "user_input": {
+ "type": "string"
+ },
+ "watch_list": {
+ "type": "string"
+ },
+ "work_end": {
+ "type": "string"
+ },
+ "work_notes": {
+ "type": "string"
+ },
+ "work_notes_list": {
+ "type": "string"
+ },
+ "work_start": {
+ "type": "string"
+ }
+ },
+ "required": [
+ "parent",
+ "reason",
+ "watch_list",
+ "upon_reject",
+ "sys_updated_on",
+ "type",
+ "approval_history",
+ "number",
+ "test_plan",
+ "cab_delegate",
+ "requested_by_date",
+ "state",
+ "sys_created_by",
+ "knowledge",
+ "order",
+ "phase",
+ "cmdb_ci",
+ "delivery_plan",
+ "impact",
+ "active",
+ "work_notes_list",
+ "priority",
+ "sys_domain_path",
+ "cab_recommendation",
+ "production_system",
+ "review_date",
+ "business_duration",
+ "group_list",
+ "requested_by",
+ "change_plan",
+ "approval_set",
+ "implementation_plan",
+ "universal_request",
+ "end_date",
+ "short_description",
+ "correlation_display",
+ "delivery_task",
+ "work_start",
+ "additional_assignee_list",
+ "outside_maintenance_schedule",
+ "std_change_producer_version",
+ "service_offering",
+ "sys_class_name",
+ "closed_by",
+ "follow_up",
+ "reassignment_count",
+ "review_status",
+ "assigned_to",
+ "start_date",
+ "sla_due",
+ "comments_and_work_notes",
+ "escalation",
+ "upon_approval",
+ "correlation_id",
+ "made_sla",
+ "backout_plan",
+ "conflict_status",
+ "task_effective_number",
+ "sys_updated_by",
+ "opened_by",
+ "user_input",
+ "sys_created_on",
+ "on_hold_task",
+ "sys_domain",
+ "route_reason",
+ "closed_at",
+ "review_comments",
+ "business_service",
+ "time_worked",
+ "chg_model",
+ "expected_start",
+ "opened_at",
+ "work_end",
+ "phase_state",
+ "cab_date",
+ "work_notes",
+ "close_code",
+ "assignment_group",
+ "description",
+ "on_hold_reason",
+ "calendar_duration",
+ "close_notes",
+ "sys_id",
+ "contact_type",
+ "cab_required",
+ "urgency",
+ "scope",
+ "company",
+ "justification",
+ "activity_due",
+ "comments",
+ "approval",
+ "due_date",
+ "sys_mod_count",
+ "on_hold",
+ "sys_tags",
+ "conflict_last_run",
+ "unauthorized",
+ "location",
+ "risk",
+ "category",
+ "risk_impact_analysis"
+ ],
+ "type": "object"
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+ },
+ "schema": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "result": {
+ "properties": {
+ "items": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "active": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "activity_due": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "additional_assignee_list": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "approval": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "approval_history": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "approval_set": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "assigned_to": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "link": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "value": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "assignment_group": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "backout_plan": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "business_duration": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "business_service": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "cab_date": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "cab_delegate": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "cab_recommendation": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "cab_required": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "calendar_duration": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "category": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "change_plan": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "chg_model": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "close_code": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "close_notes": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "closed_at": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "closed_by": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "link": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "value": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "cmdb_ci": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "link": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "value": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "comments": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "comments_and_work_notes": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "company": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "conflict_last_run": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "conflict_status": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "contact_type": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "correlation_display": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "correlation_id": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "delivery_plan": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "delivery_task": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "description": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "due_date": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "end_date": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "escalation": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "expected_start": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "follow_up": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "group_list": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "impact": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "implementation_plan": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "justification": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "knowledge": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "location": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "made_sla": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "number": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "on_hold": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "on_hold_reason": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "on_hold_task": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "opened_at": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "opened_by": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "link": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "value": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "order": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "outside_maintenance_schedule": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "parent": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "phase": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "phase_state": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "priority": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "production_system": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "reason": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "reassignment_count": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "requested_by": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "link": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "value": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "requested_by_date": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "review_comments": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "review_date": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "review_status": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "risk": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "risk_impact_analysis": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "route_reason": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "scope": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "service_offering": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "short_description": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sla_due": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "start_date": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "state": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "std_change_producer_version": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_class_name": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_created_by": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_created_on": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_domain": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "link": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "value": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_domain_path": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_id": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_mod_count": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_tags": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_updated_by": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "sys_updated_on": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "task_effective_number": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "test_plan": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "time_worked": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "type": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "unauthorized": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "universal_request": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "upon_approval": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "upon_reject": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "urgency": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "user_input": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "watch_list": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "work_end": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "work_notes": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "work_notes_list": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "work_start": {
+ "properties": {
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+ },
+ "required": {
+ "items": {
+ "type": "string"
+ },
+ "type": "array"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+}
+
+```
+
+## Next steps
+
+* [ITSM Connector overview](itsmc-overview.md)
+* [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts)
+* [Troubleshooting problems in the ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
-# Connect Azure to ITSM tools by using IT Service Management Solution
+# Connect Azure to ITSM tools by using IT Service Management solution
:::image type="icon" source="media/itsmc-overview/itsmc-symbol.png":::
This article provides information about how to configure IT Service Management C
## Add IT Service Management Connector
-Before you can create a connection, you need to install ITSMC.
+Before you can create a connection, install ITSMC.
-1. In the Azure portal, select **Create a resource**:
+1. In the Azure portal, select **Create a resource**.
- ![Screenshot of the menu item for creating a resource.](media/itsmc-overview/azure-add-new-resource.png)
+ ![Screenshot that shows the menu item for creating a resource.](media/itsmc-overview/azure-add-new-resource.png)
-1. Search for **IT Service Management Connector** in Azure Marketplace. Then select **Create**:
+1. Search for **IT Service Management Connector** in Azure Marketplace. Then select **Create**.
![Screenshot that shows the Create button in Azure Marketplace.](media/itsmc-overview/add-itsmc-solution.png)
-1. In the **LA Workspace** section, select the Log Analytics workspace where you want to install ITSMC.
+1. In the **Azure Log Analytics Workspace** section, select the Log Analytics workspace where you want to install ITSMC.
> [!NOTE] > You can install ITSMC in Log Analytics workspaces only in the following regions: East US, West US 2, South Central US, West Central US, US Gov Arizona, US Gov Virginia, Canada Central, West Europe, South UK, Southeast Asia, Japan East, Central India, and Australia Southeast.
-1. In the **Log Analytics workspace** section, select the resource group where you want to create the ITSMC resource:
+1. In the **Azure Log Analytics Workspace** section, select the resource group where you want to create the ITSMC resource.
+
+ ![Screenshot that shows the Azure Log Analytics Workspace section.](media/itsmc-overview/itsmc-solution-workspace.png)
- ![Screenshot that shows the Log Analytics workspace section.](media/itsmc-overview/itsmc-solution-workspace.png)
-
> [!NOTE]
- > As part of the ongoing transition from Microsoft Operations Management Suite (OMS) to Azure Monitor, OMS workspaces are now called *Log Analytics workspaces*.
+ > As part of the ongoing transition from Microsoft Operations Management Suite to Azure Monitor, Operations Management Suite workspaces are now called *Log Analytics workspaces*.
-5. Select **OK**.
+1. Select **OK**.
-When the ITSMC resource is deployed, a notification appears at the upper-right corner of the window.
+When the ITSMC resource is deployed, a notification appears in the upper-right corner of the screen.
## Create an ITSM connection
-After you've installed ITSMC, follow these steps to create the ITSM connection.
+After you've installed ITSMC, create an ITSM connection.
-After you've prepped your ITSM tool, complete these steps to create a connection:
+After you've prepped your ITSM tool, follow these steps to create a connection.
1. [Configure ServiceNow](./itsmc-connections-servicenow.md) to allow the connection from ITSMC.
-1. In **All resources**, look for **ServiceDesk(*your workspace name*)**:
+1. In **All resources**, look for **ServiceDesk(*your workspace name*)**.
![Screenshot that shows recent resources in the Azure portal.](media/itsmc-definition/create-new-connection-from-resource.png)
-1. Under **Workspace Data Sources** on the left pane, select **ITSM Connections**:
+1. Under **Workspace Data Sources** on the left pane, select **ITSM Connections**.
![Screenshot that shows the ITSM Connections menu item.](media/itsmc-overview/add-new-itsm-connection.png)
After you've prepped your ITSM tool, complete these steps to create a connection
- [System Center Service Manager](./itsmc-connections.md) > [!NOTE]
- > By default, ITSMC refreshes the connection's configuration data once every 24 hours. To refresh your connection's data instantly to reflect any edits or template updates that you make, select the **Sync** button on your connection's pane:
+ > By default, ITSMC refreshes the connection's configuration data once every 24 hours. To refresh your connection's data instantly to reflect any edits or template updates that you make, select the **Sync** button on your connection's pane.
> > ![Screenshot that shows the Sync button on the connection's pane.](media/itsmc-overview/itsmc-connections-refresh.png) ## Create ITSM work items from Azure alerts
-After you create your ITSM connection, you can use ITMC to create work items in your ITSM tool based on Azure alerts. To create the work items, you'll use the ITSM action in action groups.
+After you create your ITSM connection, you can use ITSMC to create work items in your ITSM tool based on Azure alerts. To create the work items, you'll use the ITSM action in action groups.
Action groups provide a modular and reusable way to trigger actions for your Azure alerts. You can use action groups with metric alerts, activity log alerts, and Log Analytics alerts in the Azure portal. > [!NOTE]
-> After you create the ITSM connection, you need to wait 30 minutes for the sync process to finish.
+> After you create the ITSM connection, you must wait 30 minutes for the sync process to finish.
### Define a template
-Certain work item types can use templates that you define in the ServiceNow. Using templates, you can define fields that will be automatically populated using constant values that is defined in ServiceNow (not values from the payload). The templates synced with Azure and you can define which template you want to use as a part of the definition of an action group. Find information about how to create templates [here](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html).
+Certain work item types can use templates that you define in ServiceNow. When you use templates, you can define fields that will be automatically populated by using constant values defined in ServiceNow (not values from the payload). The templates are synced with Azure. You can define which template you want to use as a part of the definition of an action group. For information about how to create templates, see the [ServiceNow documentation](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html).
To create an action group:
-1. In the Azure portal, select **Monitor** and then **Alerts**.
-1. On the menu at the top of the screen, select **Manage actions**:
+1. In the Azure portal, select **Monitor** > **Alerts**.
+1. On the menu at the top of the screen, select **Manage actions**.
- ![Screenshot that shows the Manage actions menu item.](media/itsmc-overview/action-groups-selection-big.png)
+ ![Screenshot that shows selecting Action groups.](media/itsmc-overview/action-groups-selection-big.png)
-1. In the **Action groups** window, select **+Create**.
- The **Create action group** window appears.
+1. On the **Action groups** screen, select **+Create**.
+ The **Create action group** screen appears.
-1. Select the **Subscription** and **Resource group** where you want to create your action group. Provide values in **Action group name** and **Display name** for your action group. Then select **Next: Notifications**.
+1. Select the **Subscription** and **Resource group** where you want to create your action group. Enter values in **Action group name** and **Display name** for your action group. Then select **Next: Notifications**.
- ![Screenshot that shows the Create action group window.](media/itsmc-overview/action-groups-details.png)
+ ![Screenshot that shows the Create an action group screen.](media/itsmc-overview/action-groups-details.png)
-1. In the **Notifications** tab, select **Next: Actions**.
-1. In the **Actions** tab, select **ITSM** in the **Action Type** list. For **Name**, provide a name for the action. Then select the pen button that represents **Edit details**.
+1. On the **Notifications** tab, select **Next: Actions**.
+1. On the **Actions** tab, select **ITSM** in the **Action type** list. For **Name**, provide a name for the action. Then select the pen button that represents **Edit details**.
![Screenshot that shows selections for creating an action group.](media/itsmc-definition/action-group-pen.png)
-1. In the **Subscription** list, select the subscription that contains your Log Analytics workspace. In the **Connection** list, select your ITSM connector name. It will be followed by your workspace name. An example is *MyITSMConnector(MyWorkspace)*.
+1. In the **Subscription** list, select the subscription that contains your Log Analytics workspace. In the **Connection** list, select your ITSM Connector name. It will be followed by your workspace name. An example is *MyITSMConnector(MyWorkspace)*.
1. Select a **Work Item** type.
To create an action group:
> [!NOTE] > This section is relevant only for log search alerts. For all other alert types, you'll create one work item per alert.
- * If you selected **Incident** or **Alert** in the **Work Item** drop-down list, you have the option to create individual work items for each configuration item.
+ * **Incident** or **Alert**: If you select one of these options from the **Work Item** dropdown list, you can create individual work items for each configuration item.
![Screenshot that shows the I T S M Ticket area with Incident selected as a work item.](media/itsmc-overview/itsm-action-configuration.png)
- * If you select the **Create individual work items for each Configuration Item** check box, every configuration item in every alert will create a new work item. Because several alerts will occur for the same affected configuration items, there will be more than one work item for each configuration item.
+ * **Create individual work items for each Configuration Item**: If you select this checkbox, every configuration item in every alert will create a new work item. Because several alerts will occur for the same affected configuration items, there will be more than one work item for each configuration item.
For example, an alert that has three configuration items will create three work items. An alert that has one configuration item will create one work item.
- * If you clear the **Create individual work items for each Configuration Item** check box, ITSMC will create a single work item for each alert rule and append to it all affected configuration items. A new work item will be created if the previous one is closed.
+ * **Create individual work items for each Configuration Item**: If you clear this checkbox, ITSMC will create a single work item for each alert rule and append to it all affected configuration items. A new work item will be created if the previous one is closed.
>[!NOTE] > In this case, some of the fired alerts won't generate new work items in the ITSM tool. For example, an alert that has three configuration items will create one work item. If an alert for the same alert rule as the previous example has one configuration item, that configuration item will be attached to the list of affected configuration items in the created work item. An alert for a different alert rule that has one configuration item will create one work item.
- * If you selected **Event** in the **Work Item** drop-down list, you can choose to create individual work items for each log entry or for each configuration item.
+ * **Event**: If you select this option in the **Work Item** dropdown list, you can create individual work items for each log entry or for each configuration item.
![Screenshot that shows the I T S M Ticket area with Event selected as a work item.](media/itsmc-overview/itsm-action-configuration-event.png)
- * If you select **Create individual work items for each Log Entry (Configuration item field is not filled. Can result in large number of work items.)**, a work item will be created for each row in the search results of the log search alert query. The description property in the payload of the work item will contain the row from the search results.
+ * **Create individual work items for each Log Entry (Configuration item field is not filled. Can result in large number of work items.)**: If you select this option, a work item will be created for each row in the search results of the log search alert query. The description property in the payload of the work item will contain the row from the search results.
- * If you select **Create individual work items for each Configuration Item**, every configuration item in every alert will create a new work item. Each configuration item can have more than one work item in the ITSM system. This option is the same as the selecting the check box that appears after you select **Incident** as the work item type.
-9. As a part of the action definition you can define predefined fields that will contain constant values as a part of the payload. According to the work item type there are 3 options that can be used as a part of the payload:
+ * **Create individual work items for each Configuration Item**: If you select this option, every configuration item in every alert will create a new work item. Each configuration item can have more than one work item in the ITSM system. This option is the same as selecting the checkbox that appears after you select **Incident** as the work item type.
+1. As a part of the action definition, you can define predefined fields that will contain constant values as a part of the payload. According to the work item type, three options can be used as a part of the payload:
* **None**: Use a regular payload to ServiceNow without any extra predefined fields and values.
- * **Use default fields**: Using a set of fields and values that will be sent automatically as a part of the payload to ServiceNow. Those fields are not flexible and the values are defined in ServiceNow lists.
- * **Use saved templates from ServiceNow**: Using a predefine set of fields and values that was defined as a part of a template definition in ServiceNow. If you already defined the template in ServiceNow you can use it from the **Template** list otherwise you can define it in ServiceNow, for more [details](#define-a-template).
+ * **Use default fields**: Use a set of fields and values that will be sent automatically as a part of the payload to ServiceNow. Those fields aren't flexible, and the values are defined in ServiceNow lists.
+ * **Use saved templates from ServiceNow**: Use a predefined set of fields and values that were defined as a part of a template definition in ServiceNow. If you already defined the template in ServiceNow, you can use it from the **Template** list. Otherwise, you can define it in ServiceNow. For more information, see the preceding section, [Define a template](#define-a-template).
1. Select **OK**.
When you create or edit an Azure alert rule, use an action group, which has an I
## Next steps
-* [Troubleshoot problems in ITSMC](./itsmc-resync-servicenow.md)
+[Troubleshoot problems in ITSMC](./itsmc-resync-servicenow.md)
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 01/10/2022- Last updated : 09/20/2022 # Application Insights overview
-Application Insights is a feature of [Azure Monitor](../overview.md) that provides extensible application performance management (APM) and monitoring for live web apps. Developers and DevOps professionals can use Application Insights to:
+Application Insights is an extension of [Azure Monitor](../overview.md) and provides Application Performance Monitoring (also known as "APM") features. APM tools are useful to monitor applications from development, through test, and into production in the following ways:
-- Automatically detect performance anomalies.-- Help diagnose issues by using powerful analytics tools.-- See what users actually do with apps.-- Help continuously improve app performance and usability.
+1. *Proactively* understand how an application is performing.
+1. *Reactively* review application execution data to determine the cause of an incident.
-Application Insights:
+In addition to collecting [Metrics](standard-metrics.md) and application [Telemetry](data-model.md) data, which describe application activities and health, Application Insights can also be used to collect and store application [trace logging data](asp-net-trace-logs.md).
-- Supports a wide variety of platforms, including .NET, Node.js, Java, and Python.-- Works for apps hosted on-premises, hybrid, or on any public cloud.-- Integrates with DevOps processes.-- Has connection points to many development tools.-- Can monitor and analyze telemetry from mobile apps by integrating with Visual Studio [App Center](https://appcenter.ms/).
+The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs; the logging framework rarely needs to be changed.
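As a minimal sketch of that idea in Python (not taken from the article), assuming the `opencensus-ext-azure` package and a placeholder connection string, the only change to an existing app is adding a destination (handler) for its logs:

```python
import logging

# Requires the opencensus-ext-azure package; the connection string is a placeholder.
from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Add a destination for the logs; the rest of the logging code stays unchanged.
logger.addHandler(
    AzureLogHandler(connection_string="InstrumentationKey=<your-instrumentation-key>")
)

logger.warning("Order processing took longer than expected")
```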
-<a name="how-does-application-insights-work"></a>
-## How Application Insights works
+Application Insights provides other features including, but not limited to:
-To use Application Insights, you either install a small instrumentation package (SDK) in your app, or enable Application Insights by using the Application Insights agent. For languages and platforms that support the Application Insights agent, see [Supported languages](./platforms.md).
+- [Live Metrics](live-stream.md): Observe activity from your deployed application in real time with no effect on the host environment.
+- [Availability](availability-overview.md): Also known as "Synthetic Transaction Monitoring"; probe your application's external endpoints to test the overall availability and responsiveness over time.
+- [GitHub or Azure DevOps integration](work-item-integration.md): Create GitHub or Azure DevOps work items in the context of Application Insights data.
+- [Usage](usage-overview.md): Understand which features are popular with users and how users interact with and use your application.
+- [Smart Detection](proactive-diagnostics.md): Automatic failure and anomaly detection through proactive telemetry analysis.
-You can instrument the web app, any background components, and the JavaScript in the web pages themselves. The app and its components don't have to be hosted in Azure.
+In addition, Application Insights supports [Distributed Tracing](distributed-tracing.md), also known as "distributed component correlation". This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a given execution or transaction. The ability to trace activity end-to-end is increasingly important for applications that have been built as distributed components or [microservices](https://learn.microsoft.com/azure/architecture/guide/architecture-styles/microservices).
-The instrumentation monitors your app and directs the telemetry data to an Application Insights resource by using a unique instrumentation key. The impact on your app's performance is small. Tracking calls are non-blocking, and are batched and sent in a separate thread.
+The [Application Map](app-map.md) provides a high-level, top-down view of the application architecture and at-a-glance visual references to component health and responsiveness.
-You can pull in telemetry like performance counters, Azure diagnostics, or Docker logs from host environments. You can also set up web tests that periodically send synthetic requests to your web service. All these telemetry streams are integrated into Azure Monitor. In the Azure portal, you can apply powerful analytics and search tools to the raw data.
+To understand the number of Application Insights resources required to cover your application or components across environments, see the [Application Insights deployment planning guide](separate-resources.md).
-The following diagram shows how Application Insights instrumentation in an app sends telemetry to an Application Insights resource.
+## How do I use Application Insights?
-![Diagram that shows Application Insights instrumentation in an app sending telemetry to an Application Insights resource.](./media/app-insights-overview/diagram.png)
+Application Insights is enabled through either [Auto-Instrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](platforms.md) are supported, and the applications can be hosted on Azure, on-premises, or in another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application).
-## How to use Application Insights
+The Application Insights agent or SDK pre-processes telemetry and metrics before sending the data to Azure, where it's ingested and processed further before being stored in Azure Monitor Logs (Log Analytics). For this reason, an Azure account is required to use Application Insights.
-There are several ways to get started with Application Insights. Begin with whatever works best for you, and you can add others later.
+The easiest way to get started consuming Application Insights is through the Azure portal and the built-in visual experiences. Advanced users can [query the underlying data](../logs/log-query-overview.md) directly to [build custom visualizations](tutorial-app-dashboards.md) through Azure Monitor [Dashboards](overview-dashboard.md) and [Workbooks](../visualize/workbooks-overview.md).
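For instance, here's a rough sketch of querying that underlying data from Python with the `azure-monitor-query` library; the workspace ID is a placeholder, and the `AppRequests` table assumes a workspace-based Application Insights resource:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Placeholder workspace ID; AppRequests holds request telemetry for
# workspace-based Application Insights resources.
response = client.query_workspace(
    workspace_id="<your-log-analytics-workspace-id>",
    query="AppRequests | summarize count() by bin(TimeGenerated, 1h)",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```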
-### Prerequisites
+Consider starting with the [Application Map](app-map.md) for a high-level view. Use the [Search](diagnostic-search.md) experience to quickly narrow down telemetry and data by type and date-time, or search within data (for example, log traces) and filter to a given correlated operation of interest.
-- You need an Azure account. Application Insights is hosted in Azure, and sends its telemetry to Azure for analysis and presentation. If you don't have an Azure subscription, you can [sign up for free](https://azure.microsoft.com/free). If your organization already has an Azure subscription, an administrator can [add you to it](../../active-directory/fundamentals/add-users-azure-active-directory.md).
+Jump into analytics with the [Performance view](tutorial-performance.md): get deep insights into how your application or API and downstream dependencies are performing, and find a representative sample to [explore end to end](transaction-diagnostics.md). Be proactive with the [Failure view](tutorial-runtime-exceptions.md): understand which components or actions are generating failures, and triage errors and exceptions. The built-in views help you track application health proactively and perform reactive root-cause analysis.
-- The basic [Application Insights pricing plan](https://azure.microsoft.com/pricing/details/application-insights/) has no charge until your app has substantial usage.
+[Create Azure Monitor Alerts](tutorial-alert.md) to signal potential issues should your application or component parts deviate from the established baseline.
-### Get started
+Application Insights pricing is consumption-based; you pay for only what you use. For more information on pricing, see the [Azure Monitor Pricing page](https://azure.microsoft.com/pricing/details/monitor/) and [how to optimize costs](../best-practices-cost.md).
-To use Application Insights at run time, you can instrument your web app on the server. This approach is ideal for apps that are already deployed, because it avoids any updates to the app code.
+## How do I instrument an application?
-See the following articles for details and instructions:
+[Auto-Instrumentation](codeless-overview.md) is the preferred instrumentation method. It requires no developer investment and eliminates future overhead related to [updating the SDK](sdk-support-guidance.md). It's the only way to instrument an application for which you don't have access to the source code.
-- [Application monitoring for Azure App Service overview](./azure-web-apps.md)-- [Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets](./azure-vm-vmss-apps.md)-- [Deploy Azure Monitor Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md)-- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md)-
-You can also add Application Insights to your app code at development time. This approach lets you customize and add to telemetry collection.
-
-See the following articles for details and instructions:
--- [Configure Application Insights for your ASP.NET website](./asp-net.md)-- [Application Insights for ASP.NET Core applications](./asp-net-core.md)-- [Application Insights for .NET console applications](./console.md)-- [Application Insights for web pages](./javascript.md)-- [Monitor your Node.js services and apps with Application Insights](./nodejs.md)-- [Set up Azure Monitor for your Python application](./opencensus-python.md)-
-For all supported languages, platforms, and frameworks, see [Supported languages](./platforms.md).
-
-### Monitor
-
-After you set up Application Insights, monitor your app.
--- Set up [availability web tests](./monitor-web-app-availability.md).-- Use the default [application dashboard](./overview-dashboard.md) for your team room, to track load, responsiveness, and performance. Monitor your dependencies, page loads, and AJAX calls.-- Discover which requests are the slowest and fail most often.-- Watch [Live Stream](./live-stream.md) when you deploy a new release, to know immediately about any degradation.-
-### Detect and diagnose
-
-When you receive an alert or discover a problem:
--- Assess how many users are affected.-- Correlate failures with exceptions, dependency calls, and traces.-- Examine profiler, snapshots, stack dumps, and trace logs.-
-### Measure, learn, and build
--- Plan to measure how customers use new user experience or business features.-- Write custom telemetry into your code.-- [Measure the effectiveness](./usage-overview.md) of each new feature that you deploy.-- Base the next development cycle on evidence from your telemetry.-
-## What Application Insights monitors
-
-Application Insights helps development teams understand app performance and usage. Application Insights monitors:
--- Request rates, response times, and failure rates-
- Find out which pages are most popular, at what times of day, and where users are. See which pages perform best. If response times and failure rates are high when there are more requests, there might be a resourcing problem.
--- Dependency rates, response times, and failure rates, to show whether external services are slowing down performance--- Exceptions-
- Analyze the aggregated statistics, or pick specific instances and drill into the stack trace and related requests. Application Insights reports both server and browser exceptions.
+You only need to install the Application Insights SDK in the following circumstances:
-- Page views and load performance reported by users' browsers
+- You require [custom events and metrics](api-custom-events-metrics.md)
+- You require control over the flow of telemetry
+- [Auto-Instrumentation](codeless-overview.md) isn't available (typically due to language or platform limitations)
-- AJAX calls from web pages, including rates, response times, and failure rates
+To use the SDK, you install a small instrumentation package in your app and then instrument the web app, any background components, and JavaScript within the web pages. The app and its components don't have to be hosted in Azure. The instrumentation monitors your app and directs the telemetry data to an Application Insights resource by using a unique token. The effect on your app's performance is small; tracking calls are non-blocking and batched to be sent in a separate thread.
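As an illustrative sketch of that kind of manual instrumentation in Python, using the OpenCensus Azure trace exporter with a placeholder connection string and an arbitrary span name:

```python
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

# Placeholder connection string; sample 100% of traces for the illustration.
tracer = Tracer(
    exporter=AzureExporter(connection_string="InstrumentationKey=<your-instrumentation-key>"),
    sampler=ProbabilitySampler(1.0),
)

# Telemetry for this span is batched and sent on a background thread,
# so the tracked operation itself isn't blocked.
with tracer.span(name="process-order"):
    pass  # replace with the operation you want to track
```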
-- User and session counts
+Refer to the decision tree below to see what is available to instrument your app.
-- Performance counters from Windows or Linux server machines, such as CPU, memory, and network usage
+### [.NET](#tab/net)
-- Host diagnostics from Docker or Azure -- Diagnostic trace logs from apps, so you can correlate trace events with requests
+- [Auto-Instrumentation](codeless-overview.md)
+- [Azure Application Insights libraries for .NET](https://docs.microsoft.com/dotnet/api/overview/azure/insights)
+- [Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets](azure-vm-vmss-apps.md)
+- [Deploy Azure Monitor Application Insights Agent for on-premises servers](status-monitor-v2-overview.md)
-- Custom events and metrics in client or server code that track business events, like items sold
-<a name="where-do-i-see-my-telemetry"></a>
-## Where to see Application Insights data
-There are many ways to explore Application Insights telemetry. For more information, see the following articles:
+### [Java](#tab/java)
-- [Smart detection in Application Insights](../alerts/proactive-diagnostics.md)
- Set up automatic alerts that adapt to your app's normal telemetry patterns and trigger when something is outside the usual pattern. You can also set alerts on specified levels of custom or standard metrics. For more information, see [Create, view, and manage log alerts using Azure Monitor](../alerts/alerts-log.md).
--- [Application Map: Triage distributed applications](./app-map.md)-
- Explore the components of your app, with key metrics and alerts.
--- [Profile live Azure App Service apps with Application Insights](./profiler.md)-
- Inspect the execution profiles of sampled requests.
--- [Usage analysis with Application Insights](./usage-overview.md)-
- Analyze user segmentation and retention.
--- [Use Search in Application Insights](./diagnostic-search.md)-
- Apply transaction search for instance data. Search and filter events such as requests, exceptions, dependency calls, log traces, and page views.
--- [Advanced features of the Azure metrics explorer](../essentials/metrics-charts.md)-
- Explore, filter, and segment aggregated data such as request, failure, and exception rates, response times, and page load times.
--- [Application Insights overview dashboard](./overview-dashboard.md)-
- Combine data from multiple resources and share with others. Use the dashboard for multi-component apps and for continuous display in the team room.
--- [Live Metrics Stream: Monitor and diagnose with one-second latency](./live-stream.md)-
- When you deploy a new build, watch these near-realtime performance indicators to make sure everything works as expected.
--- [Log queries in Azure Monitor](../logs/log-query-overview.md)-
- Ask questions about your app's performance and usage by using the powerful Kusto query language (KQL).
+Links:
+- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md)
-- [Debug your applications with Application Insights in Visual Studio](./visual-studio.md)
+### [Node.js](#tab/nodejs)
- See performance data in the code, and go to code from stack traces.
-- [Debug snapshots on exceptions in .NET apps](./snapshot-debugger.md)
+Links:
+- [Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications](opentelemetry-enable.md)
+- [Monitor your Node.js services and apps with Application Insights](nodejs.md)
- Use the Snapshot Debugger to debug snapshots sampled from live operations, with parameter values.
+### [JavaScript](#tab/javascript)
-- [Feed Power BI from Application Insights](./export-power-bi.md)
- Integrate usage metrics with other business intelligence.
+Links:
+- [Application Insights for webpages](javascript.md)
-- [Use the Application Insights REST API to build custom solutions](https://dev.applicationinsights.io/)
+### [Python](#tab/python)
- Write code to run queries over your metrics and raw data.
-- [Export telemetry from Application Insights](./export-telemetry.md)
+Links:
+- [Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications](opentelemetry-enable.md)
+- [Set up Azure Monitor for your Python application](opencensus-python.md)
- Use continuous export to bulk export raw data to storage as soon as it arrives.
+ ## Next steps -- [Instrument your web pages](./javascript.md) for page view, AJAX, and other client-side telemetry.-- [Analyze mobile app usage](../app/mobile-center-quickstart.md) by integrating with Visual Studio App Center.-- [Monitor availability with URL ping tests](./monitor-web-app-availability.md) to your website from Application Insights servers.
+- [Create a resource](create-workspace-resource.md)
+- [Application Map](app-map.md)
+- [Transaction search](diagnostic-search.md)
## Troubleshooting
Post coding questions to [Stack Overflow]() using an Application Insights tag.
### User Voice Leave product feedback for the engineering team on [UserVoice](https://feedback.azure.com/d365community/forum/8849e04d-1325-ec11-b6e6-000d3a4f09d0).-
-<!-- ## Support and feedback
-* Questions and Issues:
- * [Troubleshooting][qna]
- * [Microsoft Q&A question page](/answers/topics/azure-monitor.html)
- * [StackOverflow](https://stackoverflow.com/questions/tagged/ms-application-insights)
-* Your suggestions:
- * [UserVoice](https://feedback.azure.com/d365community/forum/8849e04d-1325-ec11-b6e6-000d3a4f09d0)
-* Blog:
- * [Application Insights blog](https://azure.microsoft.com/blog/tag/application-insights) -->
-
-<!--Link references-->
-
-[android]: ../app/mobile-center-quickstart.md
-[azure]: ../../insights-perf-analytics.md
-[client]: ./javascript.md
-[desktop]: ./windows-desktop.md
-[greenbrown]: ./asp-net.md
-[ios]: ../app/mobile-center-quickstart.md
-[java]: ./java-in-process-agent.md
-[knowUsers]: app-insights-web-track-usage.md
-[platforms]: ./platforms.md
-[portal]: https://portal.azure.com/
-[qna]: ../faq.yml
-[redfield]: ./status-monitor-v2-overview.md
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
ms.contributor: cawa Previously updated : 07/28/2022 Last updated : 09/21/2022
## Trouble registering Microsoft.ChangeAnalysis resource provider from Change history tab.
-If you're viewing Change history after its first integration with Azure Monitor's Change Analysis, you will see it automatically registering the **Microsoft.ChangeAnalysis** resource provider. The resource may fail and incur the following error messages:
+If you're viewing Change history after its first integration with Azure Monitor's Change Analysis, you'll see it automatically registering the **Microsoft.ChangeAnalysis** resource provider. The resource may fail and incur the following error messages:
### You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider.
-You're receiving this error message because your role in the current subscription is not associated with the **Microsoft.Support/register/action** scope. For example, you are not the owner of your subscription and instead received shared access permissions through a coworker (like view access to a resource group).
+You're receiving this error message because your role in the current subscription isn't associated with the **Microsoft.Support/register/action** scope. For example, you aren't the owner of your subscription and instead received shared access permissions through a coworker (like view access to a resource group).
To resolve the issue, contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. 1. In the Azure portal, search for **Subscriptions**.
When changes can't be loaded, Azure Monitor's Change Analysis service presents t
Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+## Only partial data loaded.
+
+This error message may occur in the Azure portal when loading change data via the Change Analysis home page. Typically, the Change Analysis service calculates and returns all change data. However, if there's a network failure or a temporary service outage, you might receive an error message indicating that only partial data was loaded.
+
+To load all change data, try waiting a few minutes and refreshing the page. If you're still receiving only partial data, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+ ## You don't have enough permissions to view some changes. Contact your Azure subscription administrator.
-This general unauthorized error message occurs when the current user does not have sufficient permissions to view the change. At minimum,
+This general unauthorized error message occurs when the current user doesn't have sufficient permissions to view the change. At minimum,
* To view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager, reader access is required. * For web app in-guest file changes and configuration changes, contributor role is required.
azure-monitor Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/partners.md
Customers can deploy the BMC Helix platform with the cloud deployment of their c
## Botmetric
-See the [Botmetric introduction for Azure](https://www.botmetric.com/blog/announcing-botmetric-cost-governance-beta-microsoft-azure/).
+See the [Botmetric introduction for Azure](https://nutanix.medium.com/announcing-botmetric-cost-governance-beta-in-microsoft-azure-ee6b361c303e).
## Circonus
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Use `'Full'` when you need resource values that aren't part of the properties sc
### Valid uses
-The `reference` function can only be used in the properties of a resource definition and the outputs section of a template or deployment. When used with [property iteration](copy-properties.md), you can use the `reference` function for `input` because the expression is assigned to the resource property.
+The `reference` function can only be used in the outputs section of a template or deployment and in the properties object of a resource definition. It can't be used for resource properties such as `type`, `name`, `location`, and other top-level properties of the resource definition. When used with [property iteration](copy-properties.md), you can use the `reference` function for `input` because the expression is assigned to the resource property.
You can't use the `reference` function to set the value of the `count` property in a copy loop. You can use to set other properties in the loop. Reference is blocked for the count property because that property must be determined before the `reference` function is resolved.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
For more information, see [supported languages](language-support.md).
### Expanded the supported languages in LID and MLID through the API
-We expand the list of the languages to be supported in LID (language identification) and MLID (multi language Identification) using APIs.
+We expanded the list of languages supported in LID (language identification) and MLID (multi-language identification) through the APIs.
For more information, see [supported languages](language-support.md).
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
vSAN datastores use data-at-rest encryption by default using keys stored in Azur
## Azure storage integration
-You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads. You can also connect Azure disk pools or [Azure NetApp Files datastores](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) to expand the storage capacity.
+You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads.
+You can expand the datastore capacity by connecting Azure disk pools or [Azure NetApp Files datastores](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md). Azure NetApp Files is available in Ultra, [Premium, and Standard performance tiers](/azure/azure-netapp-files/azure-netapp-files-service-levels), so you can adjust performance and cost to the requirements of your workloads.
## Alerts and monitoring
bastion Bastion Vm Copy Paste https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-copy-paste.md
description: Learn how copy and paste to and from a Windows VM using Bastion.
Previously updated : 04/19/2022 Last updated : 09/20/2022 # Customer intent: I want to copy and paste to and from VMs using Azure Bastion.
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
description: Learn how to deploy Bastion using settings that you specify - Azure
Previously updated : 08/17/2022 Last updated : 09/21/2022
batch Batch Pool No Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-no-public-ip-address.md
> [!IMPORTANT] > - Support for pools without public IP addresses in Azure Batch is currently in public preview for the following regions: France Central, East Asia, West Central US, South Central US, West US 2, East US, North Europe, East US 2, Central US, West Europe, North Central US, West US, Australia East, Japan East, Japan West.
-> - This preview version will be replaced by [Simplified node communication pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md).
+> - This preview version will be retired on **31 March 2023**, and will be replaced by [Simplified node communication pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md). For more details, please refer to [Retirement Migration Guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md).
> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
batch Batch Tls 101 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-tls-101-migration-guide.md
Customers must update client code before the TLS 1.0/1.1 retirement.
- Customers using native WinHTTP for client code can follow this [guide](https://support.microsoft.com/topic/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-winhttp-in-windows-c4bd73d2-31d7-761e-0178-11268bb10392). -- Customers using .NET framework for their client code should upgrade to .NET > 4.7, that which enforces TLS 1.2 by default.
+- Customers using .NET Framework for their client code should upgrade to .NET > 4.7, which enforces TLS 1.2 by default.
-- For customers on .NET framework who are unable to upgrade to > 4.7, please follow this [guide](https://docs.microsoft.com/dotnet/framework/network-programming/tls) to enforce TLS 1.2.
+- For customers using .NET Framework who are unable to upgrade to > 4.7, please follow this [guide](/dotnet/framework/network-programming/tls) to enforce TLS 1.2.
-For TLS best practices, refer to [TLS best practices for .NET framework](https://docs.microsoft.com/dotnet/framework/network-programming/tls).
+For TLS best practices, refer to [TLS best practices for .NET Framework](/dotnet/framework/network-programming/tls).
## FAQ
For TLS best practices, refer to [TLS best practices for .NET framework](https:/
## Next steps
-For more information, see [How to enable TLS 1.2 on clients](https://docs.microsoft.com/mem/configmgr/core/plan-design/security/enable-tls-1-2-client).
+For more information, see [How to enable TLS 1.2 on clients](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client).
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/quickstarts/client-libraries.md
Last updated 09/22/2020
keywords: anomaly detection, algorithms ms.devlang: csharp, javascript, python
+recommendations: false
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+## September 2022
+
+### Computer Vision 3.0/3.1 Read previews deprecation
+
+The preview versions of the Computer Vision 3.0 and 3.1 Read API are scheduled to be retired on January 31, 2023. Customers are encouraged to refer to the [How-To](./how-to/call-read-api.md) and [QuickStarts](./quickstarts-sdk/client-library.md?tabs=visual-studio&pivots=programming-language-csharp) to get started with the generally available (GA) version of the Read API instead. The latest GA versions provide the following benefits:
+* 2022 latest generally available OCR model
+* Significant expansion of OCR language coverage including support for handwritten text
+* Significantly improved OCR quality
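For reference, a rough sketch of calling the GA Read API with the Python client library (`azure-cognitiveservices-vision-computervision`); the endpoint, key, and image URL are placeholders:

```python
import time

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

# Placeholder endpoint and key for your Computer Vision resource.
client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Submit the image, then poll the asynchronous Read operation for results.
read_response = client.read("https://<your-image-url>", raw=True)
operation_id = read_response.headers["Operation-Location"].split("/")[-1]

while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.not_started, OperationStatusCodes.running):
        break
    time.sleep(1)

if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)
```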
+ ## June 2022 ### Vision Studio launch
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
When preparing your text file, make sure it:
The rest of this page focuses on Python, but sample code for the Long Audio API is available on GitHub for the following programming languages: * [Sample code: Python](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/Python)
-* [Sample code: C#](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/CSharp)
+* [Sample code: C#](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/LongAudioAPI/CSharp/LongAudioAPISample)
* [Sample code: Java](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/) ## Python example
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Container | Features | Latest | Release status | |--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.5.0 | Generally available |
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.5.0 | Generally available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.6.0 | Generally available |
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.6.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.4.0 | Generally available |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.5.0 | Generally available |
## Prerequisites
cognitive-services Dynamic Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/dynamic-dictionary.md
If you already know the translation you want to apply to a word or a phrase, you
**Example: en-de:**
-Source input: `The word <mstrans:dictionary translation=\"wordomatic\">word or phrase</mstrans:dictionary> is a dictionary entry.`
+Source input: `The word <mstrans:dictionary translation=\"wordomatic\">wordomatic</mstrans:dictionary> is a dictionary entry.`
Target output: `Das Wort "wordomatic" ist ein Wörterbucheintrag.` This feature works the same way with and without HTML mode.
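A quick sketch of sending the same request from Python with the `requests` library, mirroring the curl call in the Translator v3 reference; the subscription key and region are placeholders:

```python
import requests

# Placeholder key and region; the region header is required for regional Translator resources.
endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": "de"}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Ocp-Apim-Subscription-Region": "<your-region>",
    "Content-Type": "application/json",
}
body = [{
    "Text": (
        'The word <mstrans:dictionary translation="wordomatic">wordomatic'
        "</mstrans:dictionary> is a dictionary entry."
    )
}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
print(response.json())
```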
-Use the feature sparingly. A better way to customize translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you have or can create training data that shows your work or phrase in context, you get much better results. You can find more information about Custom Translator at [https://aka.ms/CustomTranslator](https://aka.ms/CustomTranslator).
+Use the feature sparingly. A better way to customize translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you have or can create training data that shows your word or phrase in context, you get much better results. You can find more information about Custom Translator at [https://aka.ms/CustomTranslator](https://aka.ms/CustomTranslator).
cognitive-services V3 0 Translate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-translate.md
The markup to supply uses the following syntax.
For example, consider the English sentence "The word wordomatic is a dictionary entry." To preserve the word _wordomatic_ in the translation, send the request: ```
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The word <mstrans:dictionary translation=\"wordomatic\">word or phrase</mstrans:dictionary> is a dictionary entry.'}]"
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The word <mstrans:dictionary translation=\"wordomatic\">wordomatic</mstrans:dictionary> is a dictionary entry.'}]"
``` The result is:
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
The following services are Limited Access:
- [Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context): Pro features - [Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context): All features -- [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context): Identify and Verify features
+- [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context): Identify and Verify features, face ID property
- [Computer Vision](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context): Celebrity Recognition feature - [Azure Video Indexer](../azure-video-indexer/limited-access-features.md): Celebrity Recognition and Face Identify features
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)-
-Release note for `3.5.0-amd64`:
+Release note for `3.6.0-amd64`:
**Features** * Security upgrade. - | Image Tags | Notes | Digest | |-|:|:-|
-| `latest` | | `sha256:4900337eb93408064502dcaf2e5bdb16c0724ec6b4daacf140701f8d7e0e5061`|
-| `3.5.0-amd64` | | `sha256:4900337eb93408064502dcaf2e5bdb16c0724ec6b4daacf140701f8d7e0e5061`|
+| `latest` | | `sha256:9a1ef0bcb5616ff9d1c70551d4634acae50ff4f7ed04b0ad514a75f2e6fa1241`|
+| `3.6.0-amd64` | | `sha256:9a1ef0bcb5616ff9d1c70551d4634acae50ff4f7ed04b0ad514a75f2e6fa1241`|
# [Previous version](#tab/previous)
+Release note for `3.5.0-amd64`:
+
+**Features**
+* Security upgrade.
+ Release note for `3.4.0-amd64`: **Features**
The [Speech-to-text][sp-stt] container image can be found on the `mcr.microsoft.
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia* region. Please use the *US Government Virginia* billing endpoint and API keys when using this region. # [Latest version](#tab/current)+
+Release note for `3.6.0-amd64-<locale>`:
+
+**Features**
+* Security upgrade.
+* Support for latest model versions.
+* Support for the following new locales:
+ * az-az
+ * bn-in
+ * bs-ba
+ * cy-gb
+ * eu-es
+ * fa-ir
+ * gl-es
+ * he-il
+ * hy-am
+ * it-ch
+ * ka-ge
+ * kk-kz
+ * mk-mk
+ * mn-mn
+ * ne-np
+ * ps-af
+ * so-so
+ * sq-al
+ * wuu-cn
+ * yue-cn
+ * zh-cn-sichuan
+
+| Image Tags | Notes |
+|-|:--|
+| `latest` | Container image with the `en-US` locale. |
+| `3.6.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.6.0-amd64-en-us`. |
+
+This container has the following locales available.
+
+| Locale for v3.6.0 | Notes | Digest |
+|--|:--|:--|
+| `ar-ae`| Container image with the `ar-AE` locale. | `sha256:0ac3cdd0e8e5f6658e350a8ce3b530b3b6b2159b6fd53fff16a383d086b02597` |
+| `ar-bh`| Container image with the `ar-BH` locale. | `sha256:9f4764e76c5773099c44b0e23269272450c3b63036f65be77d2040cee00e9eee` |
+| `ar-eg`| Container image with the `ar-EG` locale. | `sha256:ade7c6c4ba490b176688deb598460b87abd68075808d63ca32fbdd905fad6ca5` |
+| `ar-iq`| Container image with the `ar-IQ` locale. | `sha256:29cf1e678c634b90dfb835e1162871aa94232769aed209c162f58a81c8af0b1f` |
+| `ar-jo`| Container image with the `ar-JO` locale. | `sha256:0325b8cf7cd211e298b05afd9ed63cd5da09953b8ca248901314c8611d8035c3` |
+| `ar-kw`| Container image with the `ar-KW` locale. | `sha256:76de8d6d422c98320271b8fd0fe09c7bdc10f89cab3af166f7e8016d663ff727` |
+| `ar-lb`| Container image with the `ar-LB` locale. | `sha256:036f653152301c7ea623b71c993fc9bc7dadda91116fbaf143eda408c4c9b144` |
+| `ar-om`| Container image with the `ar-OM` locale. | `sha256:d8e78d68b0ce0164c2858595059a05a0175d4c1f8cdb0029ca11eb441f886ecd` |
+| `ar-qa`| Container image with the `ar-QA` locale. | `sha256:3ff3a0ad3fc3f6eb53c511add3b0cb6bea725c32f7aaca130200c0e9e1150d0e` |
+| `ar-sa`| Container image with the `ar-SA` locale. | `sha256:f2f98cbd28fc109f2e94c7f7b777279b350730a8a5704a3411ae14791806a8c3` |
+| `ar-sy`| Container image with the `ar-SY` locale. | `sha256:0875d6f781e3d68f380325fc42b9d57ea13d4ad28c9cd7b3b65ab25ea86a9a0d` |
+| `az-az`| Container image with the `az-AZ` locale. | `sha256:f6edd0c3c967ebee3abd69f92e5d0999b0ea217add8998afaef4e075e40ff4c1` |
+| `bg-bg`| Container image with the `bg-BG` locale. | `sha256:1e92ebeccc7297beb7396cd20ee61152dc5ab9b3d3d55b54f1a9b56e7b244fe4` |
+| `bn-in`| Container image with the `bn-IN` locale. | `sha256:f38ed3ac2e483ce03a7e0f48ad6b4d0075f1218e4378ec983009a60bd789f266` |
+| `bs-ba`| Container image with the `bs-BA` locale. | `sha256:a7f8fba6eff9116892500edbe8e834e793964b735da5e4681a174c9294301fc7` |
+| `ca-es`| Container image with the `ca-ES` locale. | `sha256:60f70e429126ffe1528dbd50118d7c69b2df8dbc7983e62d1c8619418f21e7bb` |
+| `cs-cz`| Container image with the `cs-CZ` locale. | `sha256:1c6fa33844322229a91422c0156db6e5d47abfff83d156b6f3551a1a9b5f0f6b` |
+| `cy-gb`| Container image with the `cy-GB` locale. | `sha256:928511854d3606296bcb6988a2d2946af33677c85fb1013193f8bd45eb0ecf1c` |
+| `da-dk`| Container image with the `da-DK` locale. | `sha256:b09ce63cfaa06a0421bfa371d6609618b408f7bcc19ac19856bb8e03cbff0c4f` |
+| `de-at`| Container image with the `de-AT` locale. | `sha256:d9c48ec5990b8f34cc567246a1a829e9ea8ec21ad7a5b7568add7fd16a2e40ec` |
+| `de-ch`| Container image with the `de-CH` locale. | `sha256:5a606d180e2a3268f2bf465666c5458b00fdef5ceedc02bffd1093170adfb795` |
+| `de-de`| Container image with the `de-DE` locale. | `sha256:7049be561512dc39d54eeb16403fe21e4db1bd63d6f54b025084faa4695790bd` |
+| `el-gr`| Container image with the `el-GR` locale. | `sha256:adc2e74f77ec91b5aa88a5df3850122da832a89ef0468034f1372bc41da57f9e` |
+| `en-au`| Container image with the `en-AU` locale. | `sha256:5b186c1a72d570c9af4db8bb1469dd0efc32a027ee9e9e1695a2c6c73947bcf7` |
+| `en-ca`| Container image with the `en-CA` locale. | `sha256:200f0fe8f8dc86d245af88d4f353d1d1fbe174ad37249b96fba0da0678bbea59` |
+| `en-gb`| Container image with the `en-GB` locale. | `sha256:f7d0168bb31e03f807aa37d88e5d4d4271386850d0087d602e64323d37c16da9` |
+| `en-gh`| Container image with the `en-GH` locale. | `sha256:5df19e03d2370feed607d621a863989bcdbb09870613b6d52f8e260f1d914025` |
+| `en-hk`| Container image with the `en-HK` locale. | `sha256:ff546494dd3dd77b4848e1fd4e64693f9ad324999e42d4c935991163f639d95b` |
+| `en-ie`| Container image with the `en-IE` locale. | `sha256:3efedf83e8ee94582bd3fc8d627e627f05868c34eb1eab5639da8eb4680a6fab` |
+| `en-in`| Container image with the `en-IN` locale. | `sha256:390e00fcf00a68f92930caed9f4e26ffbe8161e8c9c48e6cc987227ea79d8d14` |
+| `en-ke`| Container image with the `en-KE` locale. | `sha256:d79eb726839572782d4f7cd038bc4f0480dd61a1b1f7bad7770e8994c8a357fe` |
+| `en-nz`| Container image with the `en-NZ` locale. | `sha256:c1ae4664dc6c0d8897d5a3c5eddc873c9df272f07840da7ed2fb6578cce72e8d` |
+| `en-ph`| Container image with the `en-PH` locale. | `sha256:90d51d1eeb6abadebef08ef75b4d3744d0ba168b7d66972ef0a0fdc925628d29` |
+| `en-sg`| Container image with the `en-SG` locale. | `sha256:b9969b7c9b5bf950f349e6d9ef343a3f2733fc9ffb47c624426aa993c2b13dc2` |
+| `en-tz`| Container image with the `en-TZ` locale. | `sha256:a745795f69c510819be586ac99772c513ade3d746f15fa6dd1109da456d1b2c8` |
+| `en-us`| Container image with the `en-US` locale. | `sha256:c738a20eb5316f198a0f6878d811f64ae510bf695d2a2e2290f3bb9c9ae72a14` |
+| `en-za`| Container image with the `en-ZA` locale. | `sha256:a6e6a8dc17d0dea22d15b9b93a96337446fce25c7b736cb836d29c146d17240b` |
+| `es-ar`| Container image with the `es-AR` locale. | `sha256:496959cd040b29ffdde582f7b89528a1bd5fadaf430cf9bbbfb19a8651d2b175` |
+| `es-bo`| Container image with the `es-BO` locale. | `sha256:3a60cacf3b6a992e83fc972ff7a89fb944410a1d58657850f50d0582fb3e7abb` |
+| `es-cl`| Container image with the `es-CL` locale. | `sha256:93be7ea20dafd4e135b494f09bc16fe3dec64b9fa248e89443a713e68c7b0a0e` |
+| `es-co`| Container image with the `es-CO` locale. | `sha256:829df4976d7cbc6eb8bbfc19889adaac822081a9000619711fff1022653672b7` |
+| `es-cr`| Container image with the `es-CR` locale. | `sha256:0996232b7aa110072902a78d078eba790bfb7ff9d5d3a7028a37627bc4521f35` |
+| `es-cu`| Container image with the `es-CU` locale. | `sha256:2b195d500dbaf407ca4b7b2796191cf3de695f24e7433d82b0835a8c25974a34` |
+| `es-do`| Container image with the `es-DO` locale. | `sha256:dd9e879e1b30fdd2a9b19e9764c75453a7a09d24070ab72b2c83ec0dafc98306` |
+| `es-ec`| Container image with the `es-EC` locale. | `sha256:4afed9f72fb52d9ab40302acc0b43ec2b85541a7366d612490f2c40a5d7255fc` |
+| `es-es`| Container image with the `es-ES` locale. | `sha256:035a82d74df784bc3e74284e19fe1fa3cfe78e90af595ff476740dd7020e36a4` |
+| `es-gt`| Container image with the `es-GT` locale. | `sha256:a6d01b7bce9ad5de8ecbbef72ded5bea7cf53d322db31bacea46c4104aeeba98` |
+| `es-hn`| Container image with the `es-HN` locale. | `sha256:ca8488abde8af4a0755b24019df5c65170b4a2a9eb12d4ff079d9244ab910564` |
+| `es-mx`| Container image with the `es-MX` locale. | `sha256:447bb1f2ad0daef9339f1822d70d203a667d4e20c28165e17169110d6e197e20` |
+| `es-ni`| Container image with the `es-NI` locale. | `sha256:388f5597583ce4433041667c2d0c13aec3236c933b9d562ec9fbe5d694a51112` |
+| `es-pa`| Container image with the `es-PA` locale. | `sha256:f52fe174e4b355cd904194dc9052a18d5556d7b1e84d3107ecf3af6c3aeced65` |
+| `es-pe`| Container image with the `es-PE` locale. | `sha256:469c56fe5acbe7e3f86e8cb3ffbcd6cc55af1e9ab6ac947090d84b13067dd79a` |
+| `es-pr`| Container image with the `es-PR` locale. | `sha256:f675406550e1e5fb364c8cb2da8b130aad87f7a631d39f08e85e04daf5990791` |
+| `es-py`| Container image with the `es-PY` locale. | `sha256:b326051aaf0f806b31a6af2214a087dbb019cad4b8c519e32c13217401d53550` |
+| `es-sv`| Container image with the `es-SV` locale. | `sha256:5cca0df8fc9e391e1434c0263a1d72d71baf6d8acb6b5b1402c165dbf7ee4b4a` |
+| `es-us`| Container image with the `es-US` locale. | `sha256:e1310293e2ccb46096c8dbdce0120e47fdacaad26061682323bc5f3950103434` |
+| `es-uy`| Container image with the `es-UY` locale. | `sha256:fc10377355dd5aa853060b17fbad3537c1908ca98467d7c3bbfe5f6c30cf0998` |
+| `es-ve`| Container image with the `es-VE` locale. | `sha256:fae45ff835672a3a8ad8306448421e8d5d07e337ced7e040b35c10a97ae114b7` |
+| `et-ee`| Container image with the `et-EE` locale. | `sha256:8dddc254b4a5869ddd0180043dda858e35cf78e7fe8426c6be7802514970964f` |
+| `eu-es`| Container image with the `eu-ES` locale. | `sha256:8e0eb2e1cd3288e127149cd92d7f64343aef58d1ff030f1676c3f538eb1c7363` |
+| `fa-ir`| Container image with the `fa-IR` locale. | `sha256:3dfc28549f9035ac3ef3a8cad4d7560e0ee774627ff1a3d625f33a26f5f77efe` |
+| `fi-fi`| Container image with the `fi-FI` locale. | `sha256:5ec66af3a5259c55023d62fad82cb7d6267a2ef90fe40a9ce3458bb30f8a1d75` |
+| `fil-ph`| Container image with the `fil-PH` locale. | `sha256:647575b5c131537dd010a8442f35f64707320f2a70c8aaee764db9a4b5da52be` |
+| `fr-ca`| Container image with the `fr-CA` locale. | `sha256:4b8b8078112b869fe16c7bc4da5e4814d5a4c122ff029f472d8abdeb90bcfc7f` |
+| `fr-ch`| Container image with the `fr-CH` locale. | `sha256:cf28529f299505269709a0d515b16420243eaacf388a9832c7676777a4b50780` |
+| `fr-fr`| Container image with the `fr-FR` locale. | `sha256:758546a5287ef6df8261784f7e1c5567b9ceb3b9625692469d4c07685aa5bd4e` |
+| `ga-ie`| Container image with the `ga-IE` locale. | `sha256:04b109c4c9fa97a5177d4340035c63f12e27e5909414e84495a1007b8d7ca58c` |
+| `gl-es`| Container image with the `gl-ES` locale. | `sha256:7e9e414df651824399cdb9344c0b9ea92c290804ebce0f33599dff163a12baef` |
+| `gu-in`| Container image with the `gu-IN` locale. | `sha256:1c1c1aceeaf4647b2fc87615a171790b8a5a90ceb93778a0f286fbaf5ff79f81` |
+| `he-il`| Container image with the `he-IL` locale. | `sha256:ece7d40a5d3afed7565607cfaef0d6023e77b87e3b00fb1d954f705b74c71bd5` |
+| `hi-in`| Container image with the `hi-IN` locale. | `sha256:e5584a4119771fd0a8c6787666f3c83a1d02f0908c2bb42152c60f27cc17a41c` |
+| `hr-hr`| Container image with the `hr-HR` locale. | `sha256:012e138bb4ddc27804ecd5f1388ff98835087945c131f91ad41d50c7891b2a5b` |
+| `hu-hu`| Container image with the `hu-HU` locale. | `sha256:92ef3f57fb20500579a402e95743cd48defb0abd2e65f8267d5e8024fa7da907` |
+| `hy-am`| Container image with the `hy-AM` locale. | `sha256:11b136d61bfb0e26cf589f0ad3e4ffedbb58eeeb29e98779926c4d956150ebe4` |
+| `id-id`| Container image with the `id-ID` locale. | `sha256:b6876e71c9780282184139866c17f7199dcbf224ff8ef27e1bfdf135dd5f4145` |
+| `it-ch`| Container image with the `it-CH` locale. | `sha256:84c194b5d362b16b7cef0fa3fe50ea6d3581bca5d8610231b16d9e15691d8df1` |
+| `it-it`| Container image with the `it-IT` locale. | `sha256:7466c7cc5fe67064bedae3a38d8fb130b46f5d1e59fb1c4f61d8d82f81a0885e` |
+| `ja-jp`| Container image with the `ja-JP` locale. | `sha256:cf0d088b51f75aaaa517758fef083119f9818e83764f1fe465e6fca487cda00f` |
+| `ka-ge`| Container image with the `ka-GE` locale. | `sha256:4d7d73c8478757b9f30196253f3f9d3af4f1d39d4bb6e3dd1672a70e48ea7b80` |
+| `kk-kz`| Container image with the `kk-KZ` locale. | `sha256:1aa1cd8eed78b447d7d70b8dfadbe52e71090652bf89c6bfffbfc3c357df5b4f` |
+| `ko-kr`| Container image with the `ko-KR` locale. | `sha256:02c59457471cd14286f48d659515074d451609c7869099ecb128b06e444c72f4` |
+| `lt-lt`| Container image with the `lt-LT` locale. | `sha256:78fbaa5f52d440b40da1f63f82de27d18ae4d8e7d44bf327945c4e095a769ee0` |
+| `lv-lv`| Container image with the `lv-LV` locale. | `sha256:3e81453e1191ba30b944102de3b1a5d382c90c7031a6dea08fffc5d876e6d6d3` |
+| `mk-mk`| Container image with the `mk-MK` locale. | `sha256:35a45b35660ff3304cc5a8cfc0c4f813238e900f6fd05d7560c5199d4b89acd9` |
+| `mn-mn`| Container image with the `mn-MN` locale. | `sha256:2833f2e6d5de15d25e30f0e60dbff01aae74aba62c4783e9831851cc07dc07c2` |
+| `mr-in`| Container image with the `mr-IN` locale. | `sha256:358506bba4abdaf5fd8d0f02f6088cc196cb8c6c2926f9a2feb4674d177c77cf` |
+| `ms-my`| Container image with the `ms-MY` locale. | `sha256:d51552edd37a9373f6ac97d69a460a84e6b74a8a9d96f6ad8639aba9921bc515` |
+| `mt-mt`| Container image with the `mt-MT` locale. | `sha256:09dc4e22015e63934b2863169cdd6135d4594632b631373a7a88720d46200a51` |
+| `nb-no`| Container image with the `nb-NO` locale. | `sha256:d51d1dcfa4eca08312fd34f108a92508f37f6174b4080980f571567c29cc72da` |
+| `ne-np`| Container image with the `ne-NP` locale. | `sha256:2d43914d6dda8316f528a1592d9d92f7b64cdab9a91155d52e14c0914c21a32f` |
+| `nl-nl`| Container image with the `nl-NL` locale. | `sha256:4777d0bd9ec06e07b1e85cc789de0dafd1875c802258eea5e571b1a6025f394b` |
+| `pl-pl`| Container image with the `pl-PL` locale. | `sha256:5c29ceb4f38ac691047af387e2fb554a150c5d9f0b99d7c8814bc52458c1ce26` |
+| `ps-af`| Container image with the `ps-AF` locale. | `sha256:1b7a114348c1ddda5ab2fa5b8741cc3b13d79e15d89ac35f064bca284f47ab30` |
+| `pt-br`| Container image with the `pt-BR` locale. | `sha256:25374a351401f530c6614044387c37cd7bee8f6760f4205784a426aa2722142a` |
+| `pt-pt`| Container image with the `pt-PT` locale. | `sha256:8535549150e2ae742bd2ba0624953ffd61546b5768c31ac29251093f65430276` |
+| `ro-ro`| Container image with the `ro-RO` locale. | `sha256:197581fa179fd751f037e3fd00c1a9f32e49138f176d8cd97541bb34c6984731` |
+| `ru-ru`| Container image with the `ru-RU` locale. | `sha256:31cab611d45fc9fb634b8cdbf52a8aabd7ec964a7dbc22997da36d51bbc827fb` |
+| `sk-sk`| Container image with the `sk-SK` locale. | `sha256:f2ee4ce886b7d6a18c0077cddbe965f926c05e3d808029d5c1517254272263a5` |
+| `sl-si`| Container image with the `sl-SI` locale. | `sha256:18d36de8592663782c59b5b5a9db93dedcd9b33cc70988dee3e42e50f6ef7d95` |
+| `so-so`| Container image with the `so-SO` locale. | `sha256:0d6f4c458725dc2ffa716179c5682da22780793379ab88f2cde7b609cc1f599d` |
+| `sq-al`| Container image with the `sq-AL` locale. | `sha256:e5c548e65e677a6bb472c6c4defdbdb6db0eac5b9049410d47d0eb59d104a298` |
+| `sv-se`| Container image with the `sv-SE` locale. | `sha256:088368e644a4c8d749b61a40360a7ae20e7f61c259cc39063a9ec8b637994f5d` |
+| `ta-in`| Container image with the `ta-IN` locale. | `sha256:82f2114b53cd98516ec0bef65c6c1dc24d721222535ffe1cb6dd567c851c2680` |
+| `te-in`| Container image with the `te-IN` locale. | `sha256:1e990ec464201ca88fc5204f923f76286b2a1259214cb943184969c7218a470b` |
+| `th-th`| Container image with the `th-TH` locale. | `sha256:8f37ed7f5386f37512e2f9423222f4780d946eb1549b729ffd17a1b26172ed86` |
+| `tr-tr`| Container image with the `tr-TR` locale. | `sha256:81578af72978c035ffea9007be87572af03730e9814e7a61fdab0073103b64ac` |
+| `uk-ua`| Container image with the `uk-UA` locale. | `sha256:380c8b8e3c842a189fa657c65402c72bdf9990badf3a79652ea067efd108c467` |
+| `vi-vn`| Container image with the `vi-VN` locale. | `sha256:8070d738a7dee389fa4d378e4edcd52b29e9902cc49a1f001eed682824c6c59c` |
+| `wuu-cn`| Container image with the `wuu-CN` locale. | `sha256:62ed0704ddd3b62ceab50dcd1f699159bbaa1e559f77b0eaa88bd37c10f8dc5f` |
+| `yue-cn`| Container image with the `yue-CN` locale. | `sha256:1c60aa9cc39b10206e0c56c808a6a95ada9aff8acc7eeb1a97095f3abe39671f` |
+| `zh-cn`| Container image with the `zh-CN` locale. | `sha256:b3258ef54b0bf4be7e178594d14dda7558489f43632898df674a0b2f94dbbad8` |
+| `zh-cn-sichuan`| Container image with the `zh-CN-sichuan` locale. | `sha256:c9537c454b24e8d70e44705498193fddaf262cb988efcdac526740cf8cb2249e` |
+| `zh-hk`| Container image with the `zh-HK` locale. | `sha256:5d21febbb1e8710b01ad1a5727c33080e6853d3a4bfbf5365b059630b76a9901` |
+| `zh-tw`| Container image with the `zh-TW` locale. | `sha256:15dbadcd92e335705e07a8ecefbe621e3c97b723bdf1c5b0c322a5b9965ea47d` |
+
+# [Previous version](#tab/previous)
+ Release notes for `3.5.0-amd64-<locale>`: **Features**
This container has the following locales available.
| `zh-hk`| Container image with the `zh-HK` locale. | `sha256:71104ab83fb6d750eecfc050fa705f7520b673c83d30c57b88f66d88d030f2f4` | | `zh-tw`| Container image with the `zh-TW` locale. | `sha256:2f5d720242f64354f769c26b58538bab40f2e860ca21a542b0c1b78a5c7e7419` |
-# [Previous version](#tab/previous)
Release note for `3.4.0-amd64-<locale>`: **Features**
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current) +
+Release notes for `v2.5.0`:
+
+**Features**
+* Security upgrade.
+* Added support for
+ * `az-az-babekneural`
+ * `az-az-banuneural`
+ * `fa-ir-dilaraneural`
+ * `fa-ir-faridneural`
+ * `fil-ph-angeloneural`
+ * `fil-ph-blessicaneural`
+ * `he-il-avrineural`
+ * `he-il-hilaneural`
+ * `id-id-ardineural`
+ * `id-id-gadisneural`
+ * `ka-ge-ekaneural`
+ * `ka-ge-giorgineural`
+
+| Image Tags | Notes |
+|--|:--|
+| `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
+| `2.5.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations listed below. For example, `2.5.0-amd64-en-us-arianeural`. |
+
+| v2.5.0 Locales and voices | Notes |
+|--|:--|
+| `am-et-amehaneural`| Container image with the `am-ET` locale and `am-ET-amehaneural` voice.|
+| `am-et-mekdesneural`| Container image with the `am-ET` locale and `am-ET-mekdesneural` voice.|
+| `ar-bh-lailaneural`| Container image with the `ar-BH` locale and `ar-BH-lailaneural` voice.|
+| `ar-eg-salmaneural`| Container image with the `ar-EG` locale and `ar-EG-salmaneural` voice.|
+| `ar-eg-shakirneural`| Container image with the `ar-EG` locale and `ar-EG-shakirneural` voice.|
+| `ar-sa-hamedneural`| Container image with the `ar-SA` locale and `ar-SA-hamedneural` voice.|
+| `ar-sa-zariyahneural`| Container image with the `ar-SA` locale and `ar-SA-zariyahneural` voice.|
+| `az-az-babekneural`| Container image with the `az-AZ` locale and `az-AZ-babekneural` voice.|
+| `az-az-banuneural`| Container image with the `az-AZ` locale and `az-AZ-banuneural` voice.|
+| `cs-cz-antoninneural`| Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice.|
+| `cs-cz-vlastaneural`| Container image with the `cs-CZ` locale and `cs-CZ-vlastaneural` voice.|
+| `de-ch-janneural`| Container image with the `de-CH` locale and `de-CH-janneural` voice.|
+| `de-ch-lenineural`| Container image with the `de-CH` locale and `de-CH-lenineural` voice.|
+| `de-de-conradneural`| Container image with the `de-DE` locale and `de-DE-conradneural` voice.|
+| `de-de-katjaneural`| Container image with the `de-DE` locale and `de-DE-katjaneural` voice.|
+| `en-au-natashaneural`| Container image with the `en-AU` locale and `en-AU-natashaneural` voice.|
+| `en-au-williamneural`| Container image with the `en-AU` locale and `en-AU-williamneural` voice.|
+| `en-ca-claraneural`| Container image with the `en-CA` locale and `en-CA-claraneural` voice.|
+| `en-ca-liamneural`| Container image with the `en-CA` locale and `en-CA-liamneural` voice.|
+| `en-gb-libbyneural`| Container image with the `en-GB` locale and `en-GB-libbyneural` voice.|
+| `en-gb-ryanneural`| Container image with the `en-GB` locale and `en-GB-ryanneural` voice.|
+| `en-gb-sonianeural`| Container image with the `en-GB` locale and `en-GB-sonianeural` voice.|
+| `en-us-arianeural`| Container image with the `en-US` locale and `en-US-arianeural` voice.|
+| `en-us-guyneural`| Container image with the `en-US` locale and `en-US-guyneural` voice.|
+| `en-us-jennyneural`| Container image with the `en-US` locale and `en-US-jennyneural` voice.|
+| `es-es-alvaroneural`| Container image with the `es-ES` locale and `es-ES-alvaroneural` voice.|
+| `es-es-elviraneural`| Container image with the `es-ES` locale and `es-ES-elviraneural` voice.|
+| `es-mx-dalianeural`| Container image with the `es-MX` locale and `es-MX-dalianeural` voice.|
+| `es-mx-jorgeneural`| Container image with the `es-MX` locale and `es-MX-jorgeneural` voice.|
+| `fa-ir-dilaraneural`| Container image with the `fa-IR` locale and `fa-IR-dilaraneural` voice.|
+| `fa-ir-faridneural`| Container image with the `fa-IR` locale and `fa-IR-faridneural` voice.|
+| `fil-ph-angeloneural`| Container image with the `fil-PH` locale and `fil-PH-angeloneural` voice.|
+| `fil-ph-blessicaneural`| Container image with the `fil-PH` locale and `fil-PH-blessicaneural` voice.|
+| `fr-ca-antoineneural`| Container image with the `fr-CA` locale and `fr-CA-antoineneural` voice.|
+| `fr-ca-jeanneural`| Container image with the `fr-CA` locale and `fr-CA-jeanneural` voice.|
+| `fr-ca-sylvieneural`| Container image with the `fr-CA` locale and `fr-CA-sylvieneural` voice.|
+| `fr-fr-deniseneural`| Container image with the `fr-FR` locale and `fr-FR-deniseneural` voice.|
+| `fr-fr-henrineural`| Container image with the `fr-FR` locale and `fr-FR-henrineural` voice.|
+| `he-il-avrineural`| Container image with the `he-IL` locale and `he-IL-avrineural` voice.|
+| `he-il-hilaneural`| Container image with the `he-IL` locale and `he-IL-hilaneural` voice.|
+| `hi-in-madhurneural`| Container image with the `hi-IN` locale and `hi-IN-madhurneural` voice.|
+| `hi-in-swaraneural`| Container image with the `hi-IN` locale and `hi-IN-swaraneural` voice.|
+| `id-id-ardineural`| Container image with the `id-ID` locale and `id-ID-ardineural` voice.|
+| `id-id-gadisneural`| Container image with the `id-ID` locale and `id-ID-gadisneural` voice.|
+| `it-it-diegoneural`| Container image with the `it-IT` locale and `it-IT-diegoneural` voice.|
+| `it-it-elsaneural`| Container image with the `it-IT` locale and `it-IT-elsaneural` voice.|
+| `it-it-isabellaneural`| Container image with the `it-IT` locale and `it-IT-isabellaneural` voice.|
+| `ja-jp-keitaneural`| Container image with the `ja-JP` locale and `ja-JP-keitaneural` voice.|
+| `ja-jp-nanamineural`| Container image with the `ja-JP` locale and `ja-JP-nanamineural` voice.|
+| `ka-ge-ekaneural`| Container image with the `ka-GE` locale and `ka-GE-ekaneural` voice.|
+| `ka-ge-giorgineural`| Container image with the `ka-GE` locale and `ka-GE-giorgineural` voice.|
+| `ko-kr-injoonneural`| Container image with the `ko-KR` locale and `ko-KR-injoonneural` voice.|
+| `ko-kr-sunhineural`| Container image with the `ko-KR` locale and `ko-KR-sunhineural` voice.|
+| `pt-br-antonioneural`| Container image with the `pt-BR` locale and `pt-BR-antonioneural` voice.|
+| `pt-br-franciscaneural`| Container image with the `pt-BR` locale and `pt-BR-franciscaneural` voice.|
+| `so-so-muuseneural`| Container image with the `so-SO` locale and `so-SO-muuseneural` voice.|
+| `so-so-ubaxneural`| Container image with the `so-SO` locale and `so-SO-ubaxneural` voice.|
+| `sv-se-hillevineural`| Container image with the `sv-SE` locale and `sv-SE-hillevineural` voice.|
+| `sv-se-mattiasneural`| Container image with the `sv-SE` locale and `sv-SE-mattiasneural` voice.|
+| `sv-se-sofieneural`| Container image with the `sv-SE` locale and `sv-SE-sofieneural` voice.|
+| `tr-tr-ahmetneural`| Container image with the `tr-TR` locale and `tr-TR-ahmetneural` voice.|
+| `tr-tr-emelneural`| Container image with the `tr-TR` locale and `tr-TR-emelneural` voice.|
+| `zh-cn-xiaochenneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaochenneural` voice.|
+| `zh-cn-xiaohanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaohanneural` voice.|
+| `zh-cn-xiaomoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaomoneural` voice.|
+| `zh-cn-xiaoqiuneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoqiuneural` voice.|
+| `zh-cn-xiaoruineural`| Container image with the `zh-CN` locale and `zh-CN-xiaoruineural` voice.|
+| `zh-cn-xiaoshuangneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoshuangneural` voice.|
+| `zh-cn-xiaoxiaoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxiaoneural` voice.|
+| `zh-cn-xiaoxuanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxuanneural` voice.|
+| `zh-cn-xiaoyanneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoyanneural` voice.|
+| `zh-cn-xiaoyouneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoyouneural` voice.|
+| `zh-cn-yunxineural`| Container image with the `zh-CN` locale and `zh-CN-yunxineural` voice.|
+| `zh-cn-yunyangneural`| Container image with the `zh-CN` locale and `zh-CN-yunyangneural` voice.|
+| `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.|
+
+# [Previous version](#tab/previous)
+ Release notes for `v2.4.0`: **Features**
Release notes for `v2.4.0`:
| `zh-cn-yunyangneural`| Container image with the `zh-CN` locale and `zh-CN-yunyangneural` voice.| | `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.| -
-# [Previous version](#tab/previous)
Release notes for `v2.3.0`: **Features**
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
JSON objects can include nested JSON objects and simple property/values. An arra
Personalizer can help you to understand which features are the most and least influential when determining the best action. When enabled, inference explainability includes feature scores from the underlying model into the Rank API response, so your application receives this information at the time of inference. Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.
-Setting the service configuration flag IsInferenceExplainabilityEnabled in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration - Update API](https://docs.microsoft.com/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the additional entry: "IsInferenceExplainabilityEnabled": true. If you don't know your current service configuration, you can obtain it from the [Service Configuration - Get API](https://docs.microsoft.com/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP)
+Setting the service configuration flag `IsInferenceExplainabilityEnabled` in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration - Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the entry `"IsInferenceExplainabilityEnabled": true` (a sketch of this call is shown after the example below). If you don't know your current service configuration, you can obtain it from the [Service Configuration - Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
```JSON {
Enabling inference explainability will add a collection to the JSON response fro
} ```
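As a rough illustration, here is a Node.js (18+) sketch of that update. The `/personalizer/v1.1-preview.1/configurations/service` path, the `Ocp-Apim-Subscription-Key` header, and the property casing are assumptions to confirm against the linked API reference:

```javascript
// Sketch: read the current service configuration, add the flag, and write it back.
// The URL path and header names below are assumptions based on the linked REST reference.
const endpoint = process.env.PERSONALIZER_ENDPOINT; // e.g. https://<resource>.cognitiveservices.azure.com
const headers = {
  "Ocp-Apim-Subscription-Key": process.env.PERSONALIZER_KEY,
  "Content-Type": "application/json"
};
const configUrl = `${endpoint}/personalizer/v1.1-preview.1/configurations/service`;

// Fetch the current configuration so no existing settings are lost.
const current = await (await fetch(configUrl, { headers })).json();

// Add the inference explainability flag and send the updated configuration back.
const updated = { ...current, IsInferenceExplainabilityEnabled: true };
await fetch(configUrl, { method: "PUT", headers, body: JSON.stringify(updated) });
```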
-Recall that Personalizer will either return the _best action_ as determined by the model or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](https://docs.microsoft.com/azure/cognitive-services/personalizer/concepts-exploration).
+Recall that Personalizer will either return the _best action_ as determined by the model or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](/azure/cognitive-services/personalizer/concepts-exploration).
For the best actions returned by Personalizer, the feature scores can provide general insight where: * Larger positive scores provide more support for the model choosing the best action.
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Short codes do not fall under E.164 formatting guidelines and do not have a coun
Once you have submitted the short code program brief application in the Azure portal, the service desk works with the aggregators to get your application approved by each wireless carrier. This process generally takes 8-12 weeks. We will let you know any updates and the status of your applications via the email you provide in the application. For more questions about your submitted application, please email acstnrequest@microsoft.com. ## Toll-Free Verification
-### What is toll free verification and why is it mandatory?
-The toll-free verification process ensures that your services running on toll-free numbers (TFNs) comply with carrier policies and [industry best practices](./messaging-policy.md). This also provides relevant service information to reduce the likelihood of false positive filtering and wrongful spam blocks.
+### What is toll free verification?
+The toll-free verification process ensures that your services running on toll-free numbers (TFNs) comply with carrier policies and [industry best practices](./messaging-policy.md). It also provides relevant service information to the downstream carriers and reduces the likelihood of false-positive filtering and wrongful spam blocks.
-September 30, 2022 onwards, all new TFNs must complete a toll-free verification process. All existing TFNs must complete a toll-free verification process by September 30, 2022. If unverified, the TFNs may face SMS service interruptions. Verification can take 3-4 weeks.
-
-This decision has been made to ensure that the toll-free messaging channel is aligned with both short code and 10 DLC, whereby all services are reviewed. It also ensures that the sending brand and the type of traffic your messaging channels deliver is known, documented, and verified. This verification requirement is applicable to toll-free numbers in United States and Canada.
+This verification is **required** for TFNs sending messages to **Canadian recipients** and is **not required** for TFNs sending [low-throughput messages](#sms-to-us-phone-numbers) to **US recipients**. Verifying TFNs is free of charge.
+
+### What happens if I don't verify my toll-free numbers?
+What happens to the unverified toll-free number depends on the destination of SMS traffic.
+#### SMS to US phone numbers
+Effective **October 1, 2022**, unverified toll-free numbers sending messages to US phone numbers will be subjected to stricter filtering and the following thresholds for messaging:
+
+- **Daily Limit:** 2,000 messages
+- **Weekly limit:** 12,000 messages
+- **Monthly limit:** 25,000 messages
+
+This does not apply to TFNs in a pending or verified status.
+
+#### SMS to Canadian phone numbers
+Effective **October 1, 2022**, unverified toll-free numbers sending messages to Canadian destinations will have their traffic **blocked**. To be unblocked, TFNs have to be in pending or verified status.
+
+### What is a pending status? What can I do in a pending status?
+After you submit the toll-free verification application, we process it and send it to the toll-free messaging aggregator. This step usually takes 4-6 business days. Once the application reaches the toll-free messaging aggregator, the application status changes to pending until it is verified or rejected.
+
+Once in the pending state, you can send SMS to US numbers without the thresholds mentioned above, and sending SMS to Canadian destinations is unblocked. TFNs in the pending state are also less likely to be filtered.
+
+### What happens after I submit the toll-free verification form?
+Updates and the status of your application will be communicated via the email address you provide in the application. The outcome of the application can be approved, denied, or a request for further clarification. For more questions about your submitted application, please email acstnrequest@microsoft.com.
+
+The whole toll-free verification process takes about **5-6 weeks** but is subject to change depending on the volume of applications to the toll-free messaging aggregator and how detailed the application is.
### How do I submit a toll-free verification?
-Existing Azure Communications Service customers with toll-free numbers will have received an email with the toll-free verification form that can be filled out and submitted. If you have not received an email, please email acstnrequest@microsoft.com.
+To submit a toll-free verification application, go to the Azure Communication Services resource that your toll-free number is associated with in the Azure portal, and open the Phone numbers blade. Select the toll-free verification application link displayed as "Submit Application" in the info box at the top of the Phone numbers blade, and complete the form.
### How is my data being used? Toll-free verification (TFV) involves an integration between Microsoft and the Toll-Free messaging aggregator. The toll-free messaging aggregator is the final reviewer and approver of the TFV application. Microsoft must share the TFV application information with the toll-free messaging aggregator for them to confirm that the program details meet the CTIA guidelines and standards set by carriers. By submitting a TFV form, you agree that Microsoft may share the TFV application details as necessary for provisioning the toll-free number.
-### What happens if I don't verify my toll-free numbers?
-Unverified numbers may face SMS service interruptions and are subject to carrier filtering and throttling.
-
-### What happens after I submit the toll-free verification form?
-Once we receive your toll-free verification form, we will relay it to the toll-free messaging aggregator for them to review and approve it. This process takes 3-4 weeks. We will let you know any updates and the status of your applications via the email you provide in the application. For more questions about your submitted application, please email acstnrequest@microsoft.com.
+### What are common reasons for toll-free verification delays?
+Your wait time increases when the application has missing or unclear information.
-### Can I send messages while I wait for approval?
-You will be able to send messages while you wait for approval but the traffic will be subject to carrier filtering and throttling if it's flagged as spam.
+- **Missing required information like the opt-in image URL** - If there is no opt-in option, provide a good justification.
+- **Opt-in image URL isn't publicly accessible** - When you host your image on an image hosting service (for example, OneDrive, Google Drive, iCloud, or Dropbox), make sure the public can view it. Test the URL by checking whether it can be viewed from a personal account.
+- **Incorrect toll-free numbers** - Phone numbers have to be toll-free numbers, not local numbers, 10DLC, or short codes.
## Character and rate limits ### What is the SMS character limit?
As with similar Azure services, customers will be notified at least 30 days prio
## Emergency support ### Can a customer use Azure Communication Services for emergency purposes?
-Azure Communication Services does not support text-to-911 functionality in the United States, but it's possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCC's text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you'll be responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the user's mobile device to deliver 911 texts through the underlying mobile carrier.
+Azure Communication Services does not support text-to-911 functionality in the United States, but it's possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCC's text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you'll be responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the user's mobile device to deliver 911 texts through the underlying mobile carrier.
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
The following list presents the set of features that are currently available in
| | Reject an incoming call | ✔️ | ✔️ | | Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | | | Play Audio from an audio file | ✔️ | ✔️ |
+| | Recognize user input through DTMF | ✔️ | ✔️ |
| | Remove one or more endpoints from an existing call| ✔️ | ✔️ | | | Blind Transfer* a call to another endpoint | ✔️ | ✔️ | | | Hang up a call (remove the call leg) | ✔️ | ✔️ |
These actions can be performed on the calls that are answered or placed using Ca
**Add/Remove participant(s)** - One or more participants can be added in a single request with each participant being a variation of supported destination endpoints. A web hook callback is sent for every participant successfully added to the call.
-**Play** - When your application answers a call or places an outbound call, you can play an audio prompt for the caller. This audio can be looped if needed in scenarios like playing hold music. To learn more, view our [quickstart](../../quickstarts/voice-video-calling/play-action.md)
+**Play** - When your application answers a call or places an outbound call, you can play an audio prompt for the caller. This audio can be looped if needed in scenarios like playing hold music. To learn more, view our [concepts](./play-action.md) and [quickstart](../../quickstarts/voice-video-calling/play-action.md).
+
+**Recognize input** - After your application has played an audio prompt, you can request user input to drive business logic and navigation in your application. To learn more, view our [concepts](./recognize-action.md) and [quickstart](../../quickstarts/voice-video-calling/Recognize-Action.md).
**Transfer** - When your application answers a call or places an outbound call to an endpoint, that endpoint can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application's ability to control the call using the Call Automation SDKs.
The Call Automation events are sent to the web hook callback URI specified when
| ParticipantUpdated | The status of a participant changed while your application's call leg was connected to a call | | PlayCompleted| Your application successfully played the audio file provided | | PlayFailed| Your application failed to play audio |
+| RecognizeCompleted | Recognition of user input was successfully completed |
+| RecognizeFailed | Recognition of user input was unsuccessful <br/><br/>*to learn more about recognize action events view our [quickstart](../../quickstarts/voice-video-calling/Recognize-Action.md)*|
## Known Issues
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/recognize-action.md
+
+ Title: Recognize action
+description: Conceptual information about using Recognize action with Call Automation.
+++ Last updated : 09/16/2022++++
+# Recognize action overview
+
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+With the Recognize action, developers can enhance their IVR or contact center applications to recognize user input. One of the most common recognition scenarios is to play a message and request user input. This input is received in the form of DTMF (input via the digits on the caller's device), which allows the application to navigate the user to the next action.
+
+**DTMF**
+Dual-tone multifrequency (DTMF) recognition is the process of understanding the tones generated by a telephone when a key is pressed. Equipment at the receiving end listens for the specific tones and converts them into commands. These commands generally signal user intent when navigating a menu in an IVR scenario, or in some cases can capture important information that the user provides via their phone's keypad. (The standard keypad tone frequencies behind these events are sketched after the table below.)
+
+**DTMF events and their associated tones**
+
+|Event|Tone|
+| |--|
+|0|Zero|
+|1|One|
+|2|Two|
+|3|Three|
+|4|Four|
+|5|Five|
+|6|Six|
+|7|Seven|
+|8|Eight|
+|9|Nine|
+|A|A|
+|B|B|
+|C|C|
+|D|D|
+|*|Asterisk|
+|#|Pound|
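For reference, each of these events corresponds to a keypad tone made of one low-frequency and one high-frequency component. A small sketch of that standard DTMF mapping, independent of the Call Automation SDK:

```javascript
// Standard DTMF keypad: each key plays one row (low) and one column (high) frequency, in Hz.
const rows = [697, 770, 852, 941];
const cols = [1209, 1336, 1477, 1633];
const keypad = [
  ["1", "2", "3", "A"],
  ["4", "5", "6", "B"],
  ["7", "8", "9", "C"],
  ["*", "0", "#", "D"]
];

// Look up the frequency pair for a key, for example "5" -> { low: 770, high: 1336 }.
function dtmfFrequencies(key) {
  for (let r = 0; r < keypad.length; r++) {
    const c = keypad[r].indexOf(key);
    if (c !== -1) {
      return { low: rows[r], high: cols[c] };
    }
  }
  return null;
}
```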
+
+## Common use cases
+
+The recognize action can be used for many reasons. Below are a few examples of how developers can use the recognize action in their applications.
+
+### Improve user journey with self-service prompts
+
+- **Users can control the call** - By enabling input recognition, you allow the caller to navigate your IVR menu and provide information that can be used to resolve their query.
+- **Gather user information** - By enabling input recognition, your application can gather input from callers, such as account numbers or credit card details.
+
+### Interrupt audio prompts
+
+**Users can exit an IVR menu and speak to a human agent** - With DTMF interruption, your application can allow users to interrupt the flow of the IVR menu and speak to a human agent.
++
+## How the Recognize action workflow looks
+
+![Recognize Action](./media/recognize-action-flow.png)
+
+## What's coming up next for Recognize action
+
+As we invest more in this functionality, we recommend that developers sign up for our TAP program, which gives you early access to the newest feature releases. Over the coming months, the recognize action will add new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as speech-to-text. With these, you can improve customer interactions and recognize voice inputs from participants on the call.
+
+## Next steps
+
+- Check out the [Recognize action quickstart](../../quickstarts/voice-video-calling/recognize-action.md) to learn more.
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/get-phone-number.md
In this quickstart you learned how to:
> * Purchase a phone number > * Manage your phone number > * Release a phone number
+> * Submit toll-free verification application [(see if required)](../../concepts/sms/sms-faq.md#toll-free-verification)
> [!div class="nextstepaction"] > [Send an SMS](../sms/send.md)
communication-services Callflows For Customer Interactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions.md
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps - Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features. -- Learn how to [manage inbound telephony calls](../telephony/Manage-Inbound-Calls.md) with Call Automation.-- Learn more about [Play action](../../concepts/voice-video-calling/Play-Action.md).
+- Learn how to [manage inbound telephony calls](../telephony/manage-inbound-calls.md) with Call Automation.
+- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md).
+- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md).
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/play-action.md
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps -- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md)
+- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md)
+- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md)
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/recognize-action.md
+
+ Title: Recognize Action
+
+description: Provides a quick start for recognizing user input from participants on a call.
+++ Last updated : 09/16/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Quickstart: Recognize action
+
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+This quickstart will help you get started with recognizing DTMF input provided by participants through the Azure Communication Services Call Automation SDK.
+++
+## Event codes
+
+|Status|Code|Subcode|Message|
+|-|--|--|--|
+|RecognizeCompleted|200|8531|Action completed, max digits received.|
+|RecognizeCompleted|200|8514|Action completed as stop tone was detected.|
+|RecognizeCompleted|400|8508|Action failed, the operation was canceled.|
+|RecognizeFailed|400|8510|Action failed, initial silence timeout reached.|
+|RecognizeFailed|400|8532|Action failed, inter-digit silence timeout reached.|
+|RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.|
+|RecognizeFailed|500|8512|Unknown internal server error.|
++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next Steps
+
+- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md)
+- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md)
+- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md)
communication-services Hmac Header Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/hmac-header-tutorial.md
Last updated 06/30/2021
+zone_pivot_groups: acs-programming-languages-csharp-python
# Sign an HTTP request In this tutorial, you'll learn how to sign an HTTP request with an HMAC signature.
+>[!NOTE]
+>We strongly encourage you to use the [Azure SDKs](https://github.com/Azure/azure-sdk). The approach described here is a fallback option for cases where the Azure SDKs can't be used for any reason.
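As a rough illustration of what this tutorial covers, here is a minimal Node.js sketch of HMAC-SHA256 request signing. The string-to-sign layout and header names (`x-ms-date`, `x-ms-content-sha256`, `Authorization`) are assumptions modeled on the Communication Services pattern; confirm them against the full tutorial before relying on them:

```javascript
// Sketch: hash the request body and build an HMAC-SHA256 Authorization header.
// The exact string-to-sign and header names are assumptions; verify against the tutorial.
import { createHash, createHmac } from "node:crypto";

function signRequest({ method, url, body, accessKey }) {
  const { host, pathname, search } = new URL(url);
  const date = new Date().toUTCString();

  // Base64-encoded SHA-256 hash of the request body.
  const contentHash = createHash("sha256").update(body ?? "").digest("base64");

  // Assumed layout: VERB, path+query, then date;host;contentHash.
  const stringToSign = `${method.toUpperCase()}\n${pathname}${search}\n${date};${host};${contentHash}`;

  // HMAC-SHA256 over the string to sign, keyed with the base64-decoded access key.
  const signature = createHmac("sha256", Buffer.from(accessKey, "base64"))
    .update(stringToSign)
    .digest("base64");

  return {
    "x-ms-date": date,
    "x-ms-content-sha256": contentHash,
    Authorization: `HMAC-SHA256 SignedHeaders=x-ms-date;host;x-ms-content-sha256&Signature=${signature}`
  };
}
```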
++ ## Clean up resources
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
As you begin to design the network around your container app, refer to [Plan vir
:::image type="content" source="media/networking/azure-container-apps-virtual-network.png" alt-text="Diagram of how Azure Container Apps environments use an existing V NET, or you can provide your own.":::
+> [!NOTE]
+> Moving VNETs among different resource groups or subscriptions is not supported if the VNET is in use by a Container Apps environment.
+ <!-- https://learn.microsoft.com/azure/azure-functions/functions-networking-options
The second URL grants access to the log streaming service and the console. If ne
## Ports and IP addresses >[!NOTE]
-> The subnet associated with a Container App Environment requires a CIDR prefix of /23.
+> The subnet associated with a Container Apps environment requires a CIDR prefix of /23 or larger (for example, /23 or /22).
The following ports are exposed for inbound connections.
If you're using the Azure CLI and the [platformReservedCidr](vnet-custom-interna
There's no forced tunneling in Container Apps routes.
+## DNS
+- **Custom DNS**: If your VNET uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. The [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) use this IP address to resolve requests. If you don't use the Azure recursive resolvers, the Container Apps environment won't function.
+
+- **VNET-scope ingress**: If you plan to use VNET-scope [ingress](./ingress.md#configuration) in an internal Container Apps environment, configure your domains in one of the following ways:
+
+ 1. **Non-custom domains**: If you do not plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container Apps environment's default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record that points to the static IP address of the Container Apps environment.
+
+ 1. **Custom domains**: If you plan to use custom domains, use a publicly resolvable domain to [add a custom domain and certificate](./custom-domains-certificates.md#add-a-custom-domain-and-certificate) to the container app. Additionally, create a private DNS zone that resolves the apex domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the apex domain, with an `A` record that points to the static IP address of the Container Apps environment.
+ ## Managed resources When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you will be billed for the following:
cosmos-db Feature Support 36 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-36.md
Azure Cosmos DB's API for MongoDB supports the following database commands:
||| | TTL | Yes | | Unique | Yes |
-| Partial | Only supported with unique indexes |
+| Partial | No |
| Case Insensitive | No | | Sparse | No | | Background | Yes |
cosmos-db Feature Support 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-40.md
In an [upgrade scenario](upgrade-mongodb-version.md), documents written prior to
||| | TTL | Yes | | Unique | Yes |
-| Partial | Only supported with unique indexes |
+| Partial | No |
| Case Insensitive | No | | Sparse | No | | Background | Yes |
cosmos-db Feature Support 42 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md
In an [upgrade scenario](upgrade-mongodb-version.md), documents written prior to
||| | TTL | Yes | | Unique | Yes |
-| Partial | Only supported with unique indexes |
+| Partial | No |
| Case Insensitive | No | | Sparse | No | | Background | Yes |
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-nodejs.md
ms.devlang: javascript Previously updated : 08/26/2021 Last updated : 09/21/2022
> * [Spark v3](create-sql-api-spark.md) > * [Go](create-sql-api-go.md) >
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. Without a credit card or an Azure subscription, you can set up a free 30 day [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
-## Walkthrough video
+Get started with the Azure Cosmos DB client library for JavaScript to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
-Watch this video for a complete walkthrough of the content in this article.
-
-> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player]
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-sql-api-javascript-samples) are available on GitHub as a Node.js project.
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.-- [Node.js 6.0.0+](https://nodejs.org/).-- [Git](https://www.git-scm.com/downloads).
+* In a terminal or command window, run ``node --version`` to check that the Node.js version is one of the current long term support (LTS) versions.
+* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
-## Create an Azure Cosmos account
+## Setting up
-For this quickstart purpose, you can use the [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) option to create an Azure Cosmos account.
+This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for JavaScript to manage resources.
-1. Navigate to the [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) page.
+### Create an Azure Cosmos DB account
-1. Choose the **SQL** API account and select **Create**. Sign-in using your Microsoft account.
-1. After the sign-in is successful, your Azure Cosmos account should be ready. Select **Open in the Azure portal** to open the newly created account.
+### Configure environment variables
-The "try Azure Cosmos DB for free" option doesn't require an Azure subscription and it offers you an Azure Cosmos account for a limited period of 30 days. If you want to use the Azure Cosmos account for a longer period, you should instead [create the account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) within your Azure subscription.
-## Add a container
+### Create a new JavaScript project
+1. Create a new Node.js application in an empty folder using your preferred terminal.
-## Add sample data
+ ```bash
+ npm init -y
+ ```
+2. Edit the `package.json` file to use ES6 modules by adding the `"type": "module",` entry. This allows your code to use modern ES module syntax, including top-level `await`.
-## Query your data
+ :::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/package.json" highlight="6":::
+### Install the package
-## Clone the sample application
-Now let's clone a Node.js app from GitHub, set the connection string, and run it.
+1. Add the [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) npm package to the Node.js project.
-1. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+ ```bash
+ npm install @azure/cosmos
+ ```
+
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-getting-started.git
- ```
+1. Add the [dotenv](https://www.npmjs.com/package/dotenv) npm package to read environment variables from a `.env` file.
+
+ ```bash
+ npm install dotenv
+ ```
-## Review the code
+### Create local development environment files
-This step is optional. If you're interested in learning how the Azure Cosmos database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+1. Create a `.gitignore` file and add the following value to ignore your environment file and your node_modules. This ensures that only the secure and relevant information can be checked into source code.
-If you're familiar with the previous version of the SQL JavaScript SDK, you may be used to seeing the terms _collection_ and _document_. Because Azure Cosmos DB supports [multiple API models](../introduction.md), [version 2.0+ of the JavaScript SDK](https://www.npmjs.com/package/@azure/cosmos) uses the generic terms _container_, which may be a collection, graph, or table, and _item_ to describe the content of the container.
+ ```text
+ .env
+ node_modules
+ ```
-The Cosmos DB JavaScript SDK is called "@azure/cosmos" and can be installed from npm...
+1. Create a `.env` file with the following variables:
-```bash
-npm install @azure/cosmos
-```
+ ```text
+ COSMOS_ENDPOINT=
+ COSMOS_KEY=
+ ```
-The following snippets are all taken from the _app.js_ file.
+### Create a code file
-- The `CosmosClient` is imported from the `@azure/cosmos` npm package.
+Create an `index.js` and add the following boilerplate code to the file to read environment variables:
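A minimal sketch of that boilerplate, assuming the `dotenv` package installed earlier is what loads the `.env` file:

```javascript
// index.js - load the .env file so the endpoint and key are available on process.env.
import * as dotenv from "dotenv";
dotenv.config();
```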
- ```javascript
- const CosmosClient = require("@azure/cosmos").CosmosClient;
- ```
-- A new `CosmosClient` object is initialized.
+### Add dependency to client library
- ```javascript
- const client = new CosmosClient({ endpoint, key });
- ```
+Add the following code at the end of the `index.js` file to include the required dependency to programmatically access Cosmos DB.
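For example, a one-line sketch using the named export from `@azure/cosmos`:

```javascript
// Import the Cosmos DB client from the SDK package installed earlier.
import { CosmosClient } from "@azure/cosmos";
```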
-- Select the "Tasks" database.
- ```javascript
- const database = client.database(databaseId);
- ```
+### Add environment variables to code file
-- Select the "Items" container/collection.
+Add the following code at the end of the `index.js` file to include the required environment variables. The endpoint and key were found at the end of the [account creation steps](#create-an-azure-cosmos-db-account).
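A sketch of those variables, matching the names defined in the `.env` file above:

```javascript
// Cosmos DB account endpoint and key, read from the .env file.
const endpoint = process.env.COSMOS_ENDPOINT;
const key = process.env.COSMOS_KEY;
```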
- ```javascript
- const container = database.container(containerId);
- ```
-- Select all the items in the "Items" container.
+### Add variables for names
- ```javascript
- // query to return all items
- const querySpec = {
- query: "SELECT * from c"
- };
+Add the following variables to manage unique database and container names as well as the [partition key (pk)](/azure/cosmos-db/partitioning-overview).
- const { resources: items } = await container.items
- .query(querySpec)
- .fetchAll();
- ```
-- Create a new item
+In this example, we chose to append a timestamp to the database and container names in case you run this sample code more than once.
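A sketch of those variables; the exact names and the `/categoryName` partition key path are illustrative assumptions:

```javascript
// Unique-ish database and container names so repeated runs don't collide,
// plus the partition key path (assumed to be the item's categoryName field).
const timeStamp = Date.now();
const databaseName = `adventureworks-${timeStamp}`;
const containerName = `products-${timeStamp}`;
const partitionKeyPath = ["/categoryName"];
```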
- ```javascript
- const { resource: createdItem } = await container.items.create(newItem);
- ```
+## Object model
-- Update an item
- ```javascript
- const { id, category } = createdItem;
- createdItem.isComplete = true;
- const { resource: updatedItem } = await container
- .item(id, category)
- .replace(createdItem);
- ```
+You'll use the following JavaScript classes to interact with Azure Cosmos DB resources:
-- Delete an item
+* [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+* [``Database``](/javascript/api/@azure/cosmos/database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+* [``Container``](/javascript/api/@azure/cosmos/container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
+* [``SqlQuerySpec``](/javascript/api/@azure/cosmos/sqlqueryspec) - This interface represents a SQL query and any query parameters.
+* [``QueryIterator<>``](/javascript/api/@azure/cosmos/queryiterator) - This class represents an iterator that can track the current page of results and get a new page of results.
+* [``FeedResponse<>``](/javascript/api/@azure/cosmos/feedresponse) - This class represents a single page of responses from the iterator.
- ```javascript
- const { resource: result } = await container.item(id, category).delete();
- ```
+## Code examples
-> [!NOTE]
-> In both the "update" and "delete" methods, the item has to be selected from the database by calling `container.item()`. The two parameters passed in are the id of the item and the item's partition key. In this case, the parition key is the value of the "category" field.
+* [Authenticate the client](#authenticate-the-client)
+* [Create a database](#create-a-database)
+* [Create a container](#create-a-container)
+* [Create an item](#create-an-item)
+* [Get an item](#get-an-item)
+* [Query items](#query-items)
+* [Delete an item](#delete-an-item)
-## Update your connection string
+The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-Now go back to the Azure portal to get the connection string details of your Azure Cosmos account. Copy the connection string into the app so that it can connect to your database.
+For this sample code, the container will use the category as a logical partition key.
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys** from the left navigation, and then select **Read-write Keys**. Use the copy buttons on the right side of the screen to copy the URI and Primary Key into the _app.js_ file in the next step.
+### Authenticate the client
- :::image type="content" source="./media/create-sql-api-dotnet/keys.png" alt-text="View and copy an access key in the Azure portal, Keys blade":::
+In the `index.js`, add the following code to use the resource **endpoint** and **key** to authenticate to Cosmos DB. Define a new instance of the [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) class.
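A minimal sketch, assuming the `endpoint` and `key` variables defined earlier:

```javascript
// Create a client instance that authenticates with the account endpoint and key.
const client = new CosmosClient({ endpoint, key });
```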
-2. In Open the _config.js_ file.
-3. Copy your URI value from the portal (using the copy button) and make it the value of the endpoint key in _config.js_.
- `endpoint: "<Your Azure Cosmos account URI>"`
+### Create a database
-4. Then copy your PRIMARY KEY value from the portal and make it the value of the `config.key` in _config.js_. You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+Add the following code to use the [``CosmosClient.databases.createIfNotExists``](/javascript/api/@azure/cosmos/databases#@azure-cosmos-databases-createifnotexists) method to create a new database if it doesn't already exist. This method returns a reference to the existing or newly created database.
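A sketch of that call, assuming the `client` and `databaseName` variables from the previous steps and code running inside an `async` function:

```javascript
// Create the database if it doesn't exist, then keep a reference to it.
const { database } = await client.databases.createIfNotExists({ id: databaseName });
console.log(`${database.id} database ready`);
```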
- `key: "<Your Azure Cosmos account key>"`
-## Run the app
+### Create a container
-1. Run `npm install` in a terminal to install the "@azure/cosmos" npm package
+Add the following code to create a container with the [``Database.containers.createIfNotExists``](/javascript/api/@azure/cosmos/containers#@azure-cosmos-containers-createifnotexists) method. The method returns a reference to the container.
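A sketch, assuming the `database` reference and the name variables defined earlier:

```javascript
// Create the container if it doesn't exist, partitioned on the category name.
const { container } = await database.containers.createIfNotExists({
  id: containerName,
  partitionKey: { paths: [partitionKeyPath] }
});
console.log(`${container.id} container ready`);
```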
-2. Run `node app.js` in a terminal to start your node application.
-3. The two items that you created earlier in this quickstart are listed out. A new item is created. The "isComplete" flag on that item is updated to "true" and then finally, the item is deleted.
+### Create an item
-You can continue to experiment with this sample application or go back to Data Explorer, modify, and work with your data.
+Add the following code to provide your data set. Each _product_ has a unique ID, a name, a category name (used as the partition key), and other fields.
-## Review SLAs in the Azure portal
+Create a few items in the container by calling [``Container.Items.create``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-create) in a loop.
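A sketch of a small data set and the create loop. The IDs and product names mirror the sample output shown later in this article; the remaining product fields and category values are assumptions for illustration.

```javascript
// Sample products; categoryName is the partition key value for each item.
const products = [
  { id: "08225A9E-F2B3-4FA3-AB08-8C70ADD6C3C2", name: "Touring-1000 Blue, 50", categoryName: "Bikes, Touring Bikes", sku: "BK-T79U-50", quantity: 50, sale: false },
  { id: "2C981511-AC73-4A65-9DA3-A0577E386394", name: "Touring-1000 Blue, 46", categoryName: "Bikes, Touring Bikes", sku: "BK-T79U-46", quantity: 46, sale: false },
  { id: "0F124781-C991-48A9-ACF2-249771D44029", name: "Mountain-200 Black, 42", categoryName: "Bikes, Mountain Bikes", sku: "BK-M68B-42", quantity: 42, sale: true }
];

// Insert each product into the container.
for (const product of products) {
  const { resource: createdItem } = await container.items.create(product);
  console.log(`'${createdItem.name}' inserted`);
}
```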
-## Next steps
+
+### Get an item
+
+In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.item().read``](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read) passing in both values to return an item.
+
+The partition key is specific to a container. In this Contoso Products container, the category name, `categoryName`, is used as the partition key.
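A sketch of a point read for one of the items created above, using its ``id`` and ``categoryName`` value:

```javascript
// Point read: look up a single item by its id and partition key value.
const { resource: readItem } = await container
  .item("08225A9E-F2B3-4FA3-AB08-8C70ADD6C3C2", "Bikes, Touring Bikes")
  .read();
console.log(`${readItem.name} read`);
```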
++
+### Query items
+
+Add the following code to query for all items that match a specific filter. Create a [parameterized query expression](/javascript/api/@azure/cosmos/sqlqueryspec), then call the [``Container.Items.query``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-query) method. This method returns a [``QueryIterator``](/javascript/api/@azure/cosmos/queryiterator) that manages the pages of results. Then, use a combination of ``while`` and ``for`` loops to fetch each page of results with [``fetchNext``](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-fetchnext) as a [``FeedResponse``](/javascript/api/@azure/cosmos/feedresponse), and iterate over the individual data objects.
+
+The query is programmatically composed to `SELECT * FROM products p WHERE p.categoryName = 'Bikes, Touring Bikes'`.
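A sketch of a parameterized query and the result iteration, assuming the same container and partition key values used above:

```javascript
// Parameterized query: only items in the given category are returned.
const querySpec = {
  query: "SELECT * FROM products p WHERE p.categoryName = @categoryName",
  parameters: [{ name: "@categoryName", value: "Bikes, Touring Bikes" }]
};

// Iterate page by page; each page is a FeedResponse of matching items.
const queryIterator = container.items.query(querySpec);
while (queryIterator.hasMoreResults()) {
  const { resources: page } = await queryIterator.fetchNext();
  for (const item of page) {
    console.log(`${item.id}: ${item.name}, ${item.sku}`);
  }
}
```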
++
+If you want to use an item returned in the ``FeedResponse`` as a specific [``Item``](/javascript/api/@azure/cosmos/item), read it individually as shown in [Get an item](#get-an-item).
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a Node.js app. You can now import additional data to your Azure Cosmos DB account.
+### Delete an item
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+Add the following code to delete an item. To delete an item, you use its ID and partition key to get the item, and then delete it. This example uses the [``Container.item().delete``](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-delete) method to delete the item.
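A sketch of deleting one of the items created earlier, again using its ``id`` and partition key value:

```javascript
// Delete a single item by id and partition key value.
const itemId = "0F124781-C991-48A9-ACF2-249771D44029";
await container.item(itemId, "Bikes, Mountain Bikes").delete();
console.log(`${itemId} Item deleted`);
```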
++
+## Run the code
+
+This app creates an Azure Cosmos DB SQL API database and container. The example then creates items and reads one item back. Finally, the example issues a query that should only return items matching a specific category. At each step, the example outputs metadata to the console about what it has done.
+
+To run the app, use a terminal to navigate to the application directory and run the application.
+
+```bash
+node index.js
+```
+
+The output of the app should be similar to this example:
+
+```output
+contoso_1663276732626 database ready
+products_1663276732626 container ready
+'Touring-1000 Blue, 50' inserted
+'Touring-1000 Blue, 46' inserted
+'Mountain-200 Black, 42' inserted
+Touring-1000 Blue, 50 read
+08225A9E-F2B3-4FA3-AB08-8C70ADD6C3C2: Touring-1000 Blue, 50, BK-T79U-50
+2C981511-AC73-4A65-9DA3-A0577E386394: Touring-1000 Blue, 46, BK-T79U-46
+0F124781-C991-48A9-ACF2-249771D44029 Item deleted
+```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB SQL API account, create a database, and create a container using the JavaScript SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources.
> [!div class="nextstepaction"]
-> [import data into azure cosmos db](../import-data.md)
+> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md)
+
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
This section walks you through creating an Azure Cosmos account and setting up a
### Create an Azure Cosmos DB account
-This quickstart will create a single Azure Cosmos DB account using the SQL API.
-
-#### [Azure CLI](#tab/azure-cli)
-
-1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
-
- ```azurecli-interactive
- # Variable for resource group name
- resourceGroupName="msdocs-cosmos-quickstart-rg"
- location="westus"
-
- # Variable for account name with a randomnly generated suffix
- let suffix=$RANDOM*$RANDOM
- accountName="msdocs-$suffix"
- ```
-
-1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
-
-1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
-
- ```azurecli-interactive
- az group create \
- --name $resourceGroupName \
- --location $location
- ```
-
-1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB SQL API account with default settings.
-
- ```azurecli-interactive
- az cosmosdb create \
- --resource-group $resourceGroupName \
- --name $accountName \
- --locations regionName=$location
- ```
-
-1. Get the SQL API endpoint *URI* for the account using the [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show) command.
-
- ```azurecli-interactive
- az cosmosdb show \
- --resource-group $resourceGroupName \
- --name $accountName \
- --query "documentEndpoint"
- ```
-
-1. Find the *PRIMARY KEY* from the list of keys for the account with the [`az-cosmosdb-keys-list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
-
- ```azurecli-interactive
- az cosmosdb keys list \
- --resource-group $resourceGroupName \
- --name $accountName \
- --type "keys" \
- --query "primaryMasterKey"
- ```
-
-1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
-
-#### [PowerShell](#tab/azure-powershell)
-
-1. Create shell variables for *ACCOUNT_NAME*, *RESOURCE_GROUP_NAME*, and **LOCATION**.
-
- ```azurepowershell-interactive
- # Variable for resource group name
- $RESOURCE_GROUP_NAME = "msdocs-cosmos-quickstart-rg"
- $LOCATION = "West US"
-
- # Variable for account name with a randomnly generated suffix
- $SUFFIX = Get-Random
- $ACCOUNT_NAME = "msdocs-$SUFFIX"
- ```
-
-1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
-
-1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
-
- ```azurepowershell-interactive
- $parameters = @{
- Name = $RESOURCE_GROUP_NAME
- Location = $LOCATION
- }
- New-AzResourceGroup @parameters
- ```
-
-1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB SQL API account with default settings.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- Location = $LOCATION
- }
- New-AzCosmosDBAccount @parameters
- ```
-
-1. Get the SQL API endpoint *URI* for the account using the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- }
- Get-AzCosmosDBAccount @parameters |
- Select-Object -Property "DocumentEndpoint"
- ```
-
-1. Find the *PRIMARY KEY* from the list of keys for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- Type = "Keys"
- }
- Get-AzCosmosDBAccountKey @parameters |
- Select-Object -Property "PrimaryMasterKey"
- ```
-
-1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
-
-#### [Portal](#tab/azure-portal)
-
-> [!TIP]
-> For this quickstart, we recommend using the resource group name ``msdocs-cosmos-quickstart-rg``.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. From the Azure portal menu or the **Home page**, select **Create a resource**.
-
-1. On the **New** page, search for and select **Azure Cosmos DB**.
-
-1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](../index.yml).
-
- :::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos DB.":::
-
-1. On the **Create Azure Cosmos DB Account** page, enter the following information:
-
- | Setting | Value | Description |
- | | | |
- | Subscription | Subscription name | Select the Azure subscription that you wish to use for this Azure Cosmos account. |
- | Resource Group | Resource group name | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
- | Account Name | A unique name | Enter a name to identify your Azure Cosmos account. The name will be used as part of a fully qualified domain name (FQDN) with a suffix of *documents.azure.com*, so the name must be globally unique. The name can only contain lowercase letters, numbers, and the hyphen (-) character. The name must also be between 3-44 characters in length. |
- | Location | The region closest to your users | Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data. |
- | Capacity mode |Provisioned throughput or Serverless|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode. |
- | Apply Azure Cosmos DB free tier discount | **Apply** or **Do not apply** |With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/). |
-
- > [!NOTE]
- > You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
-
- :::image type="content" source="media/create-account-portal/new-cosmos-account-page.png" lightbox="media/create-account-portal/new-cosmos-account-page.png" alt-text="Screenshot of new account page for Azure Cosmos D B SQL A P I.":::
-
-1. Select **Review + create**.
-
-1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete** before moving on.
-
-1. Select **Go to resource** to go to the Azure Cosmos DB account page.
-
- :::image type="content" source="media/create-account-portal/cosmos-deployment-complete.png" lightbox="media/create-account-portal/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos DB SQL A P I resource.":::
-
-1. From the Azure Cosmos DB SQL API account page, select the **Keys** navigation menu option.
-
- :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos DB SQL A P I account page. The Keys option is highlighted in the navigation menu.":::
-
-1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values in a later step.
-
- :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos DB SQL A P I account.":::
-
-#### [Resource Manager template](#tab/azure-resource-manager)
-
-> [!NOTE]
-> Azure Resource Manager templates are written in two syntaxes, JSON and Bicep. This sample uses the [Bicep](../../azure-resource-manager/bicep/overview.md) syntax. To learn more about the two syntaxes, see [comparing JSON and Bicep for templates](../../azure-resource-manager/bicep/compare-template-syntax.md).
-
-1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
-
- ```azurecli-interactive
- # Variable for resource group name
- resourceGroupName="msdocs-cosmos"
-
- # Variable for location
- location="westus"
-
- # Variable for account name with a randomnly generated suffix
- let suffix=$RANDOM*$RANDOM
- accountName="msdocs-$suffix"
- ```
-
-1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
-
-1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
-
- ```azurecli-interactive
- az group create \
- --name $resourceGroupName \
- --location $location
- ```
-
-1. Create a new ``.bicep`` file with the deployment template in the Bicep syntax.
-
- :::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-sql-minimal/main.bicep":::
-
-1. Deploy the Azure Resource Manager (ARM) template with [``az deployment group create``](/cli/azure/deployment/group#az-deployment-group-create)
-specifying the filename using the **template-file** parameter and the name ``initial-bicep-deploy`` using the **name** parameter.
-
- ```azurecli-interactive
- az deployment group create \
- --resource-group $resourceGroupName \
- --name initial-bicep-deploy \
- --template-file main.bicep \
- --parameters accountName=$accountName
- ```
-
- > [!NOTE]
- > In this example, we assume that the name of the Bicep file is **main.bicep**.
-
-1. Validate the deployment by showing metadata from the newly created account using [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show).
-
- ```azurecli-interactive
- az cosmosdb show \
- --resource-group $resourceGroupName \
- --name $accountName
- ```
-- ### Create a new .NET app
Build succeeded.
### Configure environment variables
-To use the **URI** and **PRIMARY KEY** values within your .NET code, persist them to new environment variables on the local machine running the application. To set the environment variable, use your preferred terminal to run the following commands:
-
-#### [Windows](#tab/windows)
-
-```powershell
-$env:COSMOS_ENDPOINT = "<cosmos-account-URI>"
-$env:COSMOS_KEY = "<cosmos-account-PRIMARY-KEY>"
-```
-
-#### [Linux / macOS](#tab/linux+macos)
-
-```bash
-export COSMOS_ENDPOINT="<cosmos-account-URI>"
-export COSMOS_KEY="<cosmos-account-PRIMARY-KEY>"
-```
-- ## Object model
-Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB. Azure Cosmos DB has a specific object model used to create and access resources. The Azure Cosmos DB creates resources in a hierarchy that consists of accounts, databases, containers, and items.
-
- Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child database nodes. One of the database nodes includes two child container nodes. The other database node includes a single child container node. That single container node has three child item nodes.
-
-For more information about the hierarchy of different resources, see [working with databases, containers, and items in Azure Cosmos DB](../account-databases-containers-items.md).
You'll use the following .NET classes to interact with these resources:
Created item: 68719518391 [gear-surf-surfboards]
## Clean up resources
-When you no longer need the Azure Cosmos DB SQL API account, you can delete the corresponding resource group.
-
-### [Azure CLI / Resource Manager template](#tab/azure-cli+azure-resource-manager)
-
-Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
-
-```azurecli-interactive
-az group delete --name $resourceGroupName
-```
-
-### [PowerShell](#tab/azure-powershell)
-
-Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
-
-```azurepowershell-interactive
-$parameters = @{
- Name = $RESOURCE_GROUP_NAME
-}
-Remove-AzResourceGroup @parameters
-```
-
-### [Portal](#tab/azure-portal)
-
-1. Navigate to the resource group you previously created in the Azure portal.
-
- > [!TIP]
- > In this quickstart, we recommended the name ``msdocs-cosmos-quickstart-rg``.
-1. Select **Delete resource group**.
-
- :::image type="content" source="media/delete-account-portal/delete-resource-group-option.png" lightbox="media/delete-account-portal/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
-
-1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
-
- :::image type="content" source="media/delete-account-portal/delete-confirmation.png" lightbox="media/delete-account-portal/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
-- ## Next steps
cost-management-billing Migrate Consumption Usage Details Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-consumption-usage-details-api.md
A full example legacy Usage Details record is shown at [Usage Details - List - R
The following table provides a mapping between the old and new fields. New properties are available in the CSV files produced by Exports and the Cost Details API. To learn more about the fields, see [Understand usage details fields](understand-usage-details-fields.md).
+Property names shown in bold are unchanged between the old and new datasets.
+ | **Old Property** | **New Property** | | | | | accountName | AccountName |
-| | AccountOwnerId |
+| **AccountOwnerId** | AccountOwnerId |
| additionalInfo | AdditionalInfo |
-| | AvailabilityZone |
+| **AvailabilityZone** | AvailabilityZone |
| billingAccountId | BillingAccountId | | billingAccountName | BillingAccountName | | billingCurrency | BillingCurrencyCode |
The following table provides a mapping between the old and new fields. New prope
| effectivePrice | EffectivePrice | | frequency | Frequency | | invoiceSection | InvoiceSectionName |
-| | InvoiceSectionId |
+| **InvoiceSectionId** | InvoiceSectionId |
| isAzureCreditEligible | IsAzureCreditEligible | | meterCategory | MeterCategory | | meterId | MeterId | | meterName | MeterName |
-| | MeterRegion |
+| **MeterRegion** | MeterRegion |
| meterSubCategory | MeterSubCategory | | offerId | OfferId | | partNumber | PartNumber |
-| | PayGPrice |
-| | PlanName |
-| | PricingModel |
+| **PayGPrice** | PayGPrice |
+| **PlanName** | PlanName |
+| **PricingModel** | PricingModel |
| product | ProductName |
-| | ProductOrderId |
-| | ProductOrderName |
-| | PublisherName |
-| | PublisherType |
+| **ProductOrderId** | ProductOrderId |
+| **ProductOrderName** | ProductOrderName |
+| **PublisherName** | PublisherName |
+| **PublisherType** | PublisherType |
| quantity | Quantity |
-| | ReservationId |
-| | ReservationName |
+| **ReservationId** | ReservationId |
+| **ReservationName** | ReservationName |
| resourceGroup | ResourceGroup | | resourceId | ResourceId | | resourceLocation | ResourceLocation | | resourceName | ResourceName | | serviceFamily | ServiceFamily |
-| | ServiceInfo1 |
-| | ServiceInfo2 |
+| **ServiceInfo1** | ServiceInfo1 |
+| **ServiceInfo2** | ServiceInfo2 |
| subscriptionId | SubscriptionId | | subscriptionName | SubscriptionName |
-| | Tags |
-| | Term |
+| **Tags** | Tags |
+| **Term** | Term |
| unitOfMeasure | UnitOfMeasure | | unitPrice | UnitPrice |
-| | CostAllocationRuleName |
+| **CostAllocationRuleName** | CostAllocationRuleName |
## Microsoft Customer Agreement field mapping
cost-management-billing Migrate Cost Management Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/migrate-cost-management-api.md
This article helps you understand the data structure, API, and other system integration differences between Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) accounts. Cost Management supports APIs for both account types. Review the [Setup billing account for](../manage/mca-setup-account.md) Microsoft Customer Agreement article before continuing.
-Organizations with an existing EA account should review this article in conjunction with setting up an MCA account. Previously, renewing an EA account required some minimal work to move from an old enrollment to a new one. However, migrating to an MCA account requires additional effort. Additional effort is because of changes in the underlying billing subsystem, which affect all cost-related APIs and service offerings.
+Organizations with an existing EA account should review this article when they set up an MCA account. Previously, renewing an EA account required some minimal work to move from an old enrollment to a new one. However, migrating to an MCA account requires extra effort. The extra effort is needed because of changes in the underlying billing subsystem, which affect all cost-related APIs and service offerings.
## MCA APIs and integration
The following items help you transition to MCA APIs.
- Update any programming code to [use Azure AD authentication](/rest/api/azure/#create-the-request). - Update any programming code to replace EA API calls with MCA API calls. - Update error handling to use new error codes.-- Review additional integration offerings like Power BI for other needed action.
+- Review other integration offerings, like Power BI, for any other needed actions.
## EA APIs replaced with MCA APIs
EA APIs use an API key for authentication and authorization. MCA APIs use Azure
| Purpose | EA API | MCA API | | | | | Balance and credits | [/balancesummary](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary) | Microsoft.Billing/billingAccounts/billingProfiles/availableBalances |
-| Usage (JSON) | [/usagedetails](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#json-format)[/usagedetailsbycustomdate](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#json-format) | [Microsoft.Consumption/usageDetails](/rest/api/consumption/usagedetails)┬╣ |
-| Usage (CSV) | [/usagedetails/download](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#csv-format)[/usagedetails/submit](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#csv-format) | [Microsoft.Consumption/usageDetails/download](/rest/api/consumption/usagedetails)┬╣ |
-| Marketplace Usage (CSV) | [/marketplacecharges](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge)[/marketplacechargesbycustomdate](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge) | [Microsoft.Consumption/usageDetails/download](/rest/api/consumption/usagedetails)┬╣ |
+| Usage (JSON) | [/usagedetails](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#json-format)[/usagedetailsbycustomdate](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#json-format) | [Choose a cost details solution](../automate/usage-details-best-practices.md) |
+| Usage (CSV) | [/usagedetails/download](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#csv-format)[/usagedetails/submit](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#csv-format) | [Choose a cost details solution](../automate/usage-details-best-practices.md) |
+| Marketplace Usage (CSV) | [/marketplacecharges](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge)[/marketplacechargesbycustomdate](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge) | [Choose a cost details solution](../automate/usage-details-best-practices.md) |
| Billing periods | [/billingperiods](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods) | Microsoft.Billing/billingAccounts/billingProfiles/invoices | | Price sheet | [/pricesheet](/rest/api/billing/enterprise/billing-enterprise-api-pricesheet) | Microsoft.Billing/billingAccounts/billingProfiles/pricesheet/default/download format=json\|csv Microsoft.Billing/billingAccounts/…/billingProfiles/…/invoices/… /pricesheet/default/download format=json\|csv Microsoft.Billing/billingAccounts/../billingProfiles/../providers/Microsoft.Consumption/pricesheets/download | | Reservation purchases | [/reservationcharges](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-charges) | Microsoft.Billing/billingAccounts/billingProfiles/transactions |
To get available balances with the Available Balance API:
## APIs to get cost and usage
-Get a daily breakdown of costs from Azure service usage, third-party Marketplace usage, and other Marketplace purchases with the following APIs. The following separate APIs were merged for Azure services and third-party Marketplace usage. The old APIs are replaced by the [Microsoft.Consumption/usageDetails](/rest/api/consumption/usagedetails) API. It adds Marketplace purchases, which were previously only shown in the balance summary to date.
+Get a daily breakdown of costs from Azure service usage, third-party Marketplace usage, and other Marketplace purchases with the following APIs. The following separate APIs were merged for Azure services and third-party Marketplace usage. The old APIs are replaced by either [Exports](ingest-azure-usage-at-scale.md) or the [Cost Details API](/rest/api/cost-management/generate-cost-details-report/create-operation). To choose the solution that's right for you, see [Choose a cost details solution](../automate/usage-details-best-practices.md). Both solutions provide the same Cost Details file and have marketplace purchases in the data, which were previously only shown in the balance summary to date.
- [Get usage detail/download](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#csv-format) - [Get usage detail/submit](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail#csv-format)
Get a daily breakdown of costs from Azure service usage, third-party Marketplace
- [Get marketplace store charge/marketplacecharges](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge) - [Get marketplace store charge/marketplacechargesbycustomdate](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge)
-All Consumption APIs are replaced by native Azure APIs that use Azure AD for authentication and authorization. For more information about calling Azure REST APIs, see [Getting started with REST](/rest/api/azure/#create-the-request).
-
-All the preceding APIs are replaced by the Consumption/Usage Details API.
-
-To get usage details with the Usage Details API:
-
-| Method | Request URI |
-| | |
-| GET | `https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?api-version=2019-01-01` |
+Exports and the Cost Details API, as with all Cost Management APIs, are available at multiple scopes. For invoiced costs, as you would traditionally receive at an enrollment level, use the billing profile scope. For more information about Cost Management scopes, see [Understand and work with scopes](understand-work-scopes.md).
-The Usage Details API, as with all Cost Management APIs, is available at multiple scopes. For invoiced costs, as you would traditionally receive at an enrollment level, use the billing profile scope. For more information about Cost Management scopes, see [Understand and work with scopes](understand-work-scopes.md).
-
-| Type | ID format |
-| | |
-| Billing account | `/Microsoft.Billing/billingAccounts/{billingAccountId}` |
-| Billing profile | `/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}` |
-| Subscription | `/subscriptions/{subscriptionId}` |
-| Resource group | `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}` |
-
-Use the following querystring parameters to update any programming code.
-
-| Old parameters | New parameters |
+| **Type** | **ID format** |
| | |
-| `billingPeriod={billingPeriod}` | Not supported |
-| `endTime=yyyy-MM-dd` | `endDate=yyyy-MM-dd` |
-| `startTime=yyyy-MM-dd` | `startDate=yyyy-MM-dd` |
-
-The body of the response also changed.
-
-Old response body:
-
-```
-{
- "id": "string",
- "data": [{...}, ...],
- "nextLink": "string"
-}
-```
-
-New response body:
-
-```
-{
- "value": [{
- "id": "{scope}/providers/Microsoft.Consumption/usageDetails/###",
- "name": "###",
- "type": "Microsoft.Consumption/usageDetails",
- "tags": {...},
- "properties": [{...}, ...],
- "nextLink": "string"
- }, ...]
-}
-```
+| Billing account | /Microsoft.Billing/billingAccounts/{billingAccountId} |
+| Billing profile | /Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId} |
+| Subscription | /subscriptions/{subscriptionId} |
+| Resource group | /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName} |
-The property name containing the array of usage records changed from data to _values_. Each record used to have a flat list of detailed properties. However, each record now all details are now in a nested property named _properties_, except for tags. The new structure is consistent with other Azure APIs. Some property names have changed. The following table shows corresponding properties.
+Some property names have changed in the new Cost Details dataset available through Exports and the Cost Details API. The following table shows corresponding properties.
| Old property | New property | Notes | | | | | | AccountId | N/A | The subscription creator isn't tracked. Use invoiceSectionId (same as departmentId). | | AccountNameAccountOwnerId and AccountOwnerEmail | N/A | The subscription creator isn't tracked. Use invoiceSectionName (same as departmentName). | | AdditionalInfo | additionalInfo | |
-| ChargesBilledSeparately | isAzureCreditEligible | Note that these properties are opposites. If isAzureCreditEnabled is true, ChargesBilledSeparately would be false. |
+| ChargesBilledSeparately | isAzureCreditEligible | The properties are opposites. If isAzureCreditEligible is true, ChargesBilledSeparately would be false. |
| ConsumedQuantity | quantity | | | ConsumedService | consumedService | Exact string values might differ. | | ConsumedServiceId | None | |
The property name containing the array of usage records changed from data to _va
| SubscriptionGuid | subscriptionId | | | SubscriptionId | subscriptionId | | | SubscriptionName | subscriptionName | |
-| Tags | tags | The tags property applies to root object, not to the nested properties property. |
+| Tags | tags | The tags property applies to the root object, not to the nested _properties_ property. |
| UnitOfMeasure | unitOfMeasure | Exact string values differ. | | usageEndDate | date | | | Year | None | Parses year from date. |
OData-EntityId: {operationId}
```
-Make another GET call to the location. The response to the GET call is the same until the operation reaches a completion or failure state. When completed, the response to the GET call location returns the download URL. Just as if the operation was executed at the same time. Here's an example:
+Make another GET call to the location. The response to the GET call is the same until the operation reaches a completion or failure state. When completed, the response to the GET call location returns the download URL, just as if the operation had been executed synchronously. Here's an example:
``` HTTP Status 200
Instead of the above API endpoints, use the following ones for Microsoft Custome
**Price Sheet API for Microsoft Customer Agreements (asynchronous REST API)**
-This API is for Microsoft Customer Agreements and it provides additional attributes.
+This API is for Microsoft Customer Agreements and it provides extra attributes.
**Price Sheet for a Billing Profile scope in a Billing Account**
The following fields are either not available in Microsoft Customer Agreement Pr
| unit | Not applicable. Can be parsed from unitOfMeasure. | | currencyCode | Same as the pricingCurrency in MCA. | | meterLocation | Same as the meterRegion in MCA. |
-| partNumber partnumber | Not applicable because part number isn't listed in MCA invoices. Instead of part number, use the meterId and productOrderName combination to uniquely identify prices. |
+| partNumber | Not applicable because part number isn't listed in MCA invoices. Instead of part number, use the meterId and productOrderName combination to uniquely identify prices. |
| totalIncludedQuantity | Not applicable. | | pretaxStandardRate | Not applicable. |
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
# Pay your Microsoft Customer Agreement Azure or Microsoft Online Subscription Program Azure bill
-This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website (for an Microsoft Online Services Program account also called pay-as-you-go account).
+This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website (for a Microsoft Online Services Program account also called pay-as-you-go account).
[Check your access to a Microsoft Customer Agreement](#check-access-to-a-microsoft-customer-agreement).
If you have Azure credits, they automatically apply to your invoice each billing
**The Reserve Bank of India has issued new directives.**
-On 1 October 2021, automatic payments in India may block some credit card transactions, especially transactions exceeding 5,000 INR. Because of this you may need to make payments manually in the Azure portal. This directive will not affect the total amount you will be charged for your Azure usage.
+Starting 1 October 2021, automatic payments in India might be blocked for some credit card transactions, especially transactions exceeding 5,000 INR. Because of this situation, you may need to make payments manually in the Azure portal. This directive won't affect the total amount you'll be charged for your Azure usage.
[Learn more about the Reserve Bank of India directive; Processing of e-mandate on cards for recurring transactions](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=11668&Mode=0)
-On 30 September 2022, Microsoft and other online merchants will no longer be storing credit card information. To comply with this regulation Microsoft will be removing all stored card details from Microsoft Azure. To avoid service interruption, you will need to add and verify your payment method to make a payment in the Azure portal for all invoices.
+On 30 September 2022, Microsoft and other online merchants will no longer be storing credit card information. To comply with this regulation, Microsoft will be removing all stored card details from Microsoft Azure. To avoid service interruption, you'll need to add and verify your payment method to make a payment in the Azure portal for all invoices.
[Learn about the Reserve Bank of India directive; Restriction on storage of actual card data ](https://rbidocs.rbi.org.in/rdocs/notification/PDFs/DPSSC09B09841EF3746A0A7DC4783AC90C8F3.PDF)
+### UPI and NetBanking payment options
+
+Azure supports two alternate payment options for customers in India:
+
+- UPI (Unified Payments Interface) payment is a real-time payment method.
+- NetBanking (Internet Banking) facilitates customers with access to banking services on an online platform.
+
+#### How do I make a payment with UPI or NetBanking?
+
+UPI and NetBanking are only supported for one-time transactions.
+
+To make a payment with UPI or NetBanking:
+
+1. Select **Add a new payment method** when you're making a payment.
+2. Select UPI / NetBanking.
+3. You're redirected to a payment partner, like Billdesk, where you can choose your payment method.
+4. You're redirected to your bank's website where you can process the payment.
+
+After you submit the payment, allow time for the payment to appear in the Azure portal.
+
+#### How am I refunded if I made a payment with UPI or NetBanking?
+
+Refunds are treated as a regular charge. They're refunded to your bank account.
+ ## Pay by default payment method The default payment method of your billing profile can either be a credit card, debit card, or check wire transfer.
If the default payment method for your billing profile is a credit or debit card
If your automatic credit or debit card charge gets declined for any reason, you can make a one-time payment with a credit or debit card in the Azure portal using **Pay now**.
-If have an Microsoft Online Services Program (pay-as-you-go) account and you have a bill due, you'll see the **Pay now** banner on your subscription property page.
+If you have a Microsoft Online Services Program (pay-as-you-go) account and you have a bill due, you'll see the **Pay now** banner on your subscription property page.
If you want to learn how to change your default payment method to check or wire transfer, see [How to pay by invoice](../manage/pay-by-invoice.md).
The invoice status shows *paid* within 24 hours.
## Pay now might be unavailable
-If you have an Microsoft Online Services Program account (pay-as-you-go account), the **Pay now** option might be unavailable. Instead, you might see a **Settle balance** banner. If so, see [Resolve past due balance](../manage/resolve-past-due-balance.md#resolve-past-due-balance-in-the-azure-portal).
+If you have a Microsoft Online Services Program account (pay-as-you-go account), the **Pay now** option might be unavailable. Instead, you might see a **Settle balance** banner. If so, see [Resolve past due balance](../manage/resolve-past-due-balance.md#resolve-past-due-balance-in-the-azure-portal).
## Check access to a Microsoft Customer Agreement [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 08/23/2022 Last updated : 09/16/2022
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
If the HDI activity is stuck in preparing for cluster, follow the guidelines bel
- **Cause**: The execution output is greater than 4 MB in size but the maximum supported output response payload size is 4 MB. -- **Recommendation**: Make sure the execution output size does not exceed 4 MB. For more information, see [How to scale out the size of data moving using Azure Data Factory](https://docs.microsoft.com/answers/questions/700102/how-to-scale-out-the-size-of-data-moving-using-azu.html).
+- **Recommendation**: Make sure the execution output size does not exceed 4 MB. For more information, see [How to scale out the size of data moving using Azure Data Factory](/answers/questions/700102/how-to-scale-out-the-size-of-data-moving-using-azu.html).
### Error Code: 2002
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Dataverse](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ | | [Dynamics 365](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ | | [Dynamics CRM](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Google Sheets (Preview)](connector-google-sheets.md#mapping-data-flow-properties) | | -/✓ |
| [Hive](connector-hive.md#mapping-data-flow-properties) | | -/✓ | | [Quickbase (Preview)](connector-quickbase.md#mapping-data-flow-properties) | | -/✓ | | [SFTP](connector-sftp.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties) <br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓<br/>✓/✓ |
data-factory How To Use Azure Key Vault Secrets Pipeline Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-azure-key-vault-secrets-pipeline-activities.md
This feature relies on the data factory managed identity. Learn how it works fr
|Secure Output |True | |URL |[Your secret URI value]?api-version=7.0 | |Method |GET |
- |Authentication |MSI |
+ |Authentication |System Assigned Managed Identity |
|Resource |https://vault.azure.net | :::image type="content" source="media/how-to-use-azure-key-vault-secrets-pipeline-activities/webactivity.png" alt-text="Web activity":::
databox-online Azure Stack Edge Mini R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-technical-specifications-compliance.md
The following routers and switches are compatible with the 10 Gbps SPF+ network
|Router/Switch |Notes | |||
-|[VoyagerESR 2.0](https://klastelecom.com/products/voyageresr2-0/) |Cisco ESS3300 Switch component |
+|[VoyagerESR 2.0](https://www.klasgroup.com/products-gov/voyager-tdc/) |Cisco ESS3300 Switch component |
|[VoyagerSW26G](https://klastelecom.com/products/voyagersw26g/) | | |[VoyagerVM 3.0](https://klastelecom.com/products/voyager-vm-3-0/) | | |[TDC Switch](https://klastelecom.com/voyager-tdc/) | |
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
The native cloud connector requires:
(Optional) Select **Management account** to create a connector to a management account. Connectors will be created for each member account discovered under the provided management account. Auto-provisioning will be enabled for all of the newly onboarded accounts.
-1. Select **Next: Select plans**.
+1. Select **Next: Select plans**.<a name="cloudtrail-implications-note"></a>
> [!NOTE] > Each plan has its own requirements for permissions, and might incur charges.
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
| Version | Date released | End support date | |--|--|--|
+| 22.2.6 | 09/2022 | 04/2023 |
| 22.2.5 | 08/2022 | 04/2023 | | 22.2.4 | 07/2022 | 04/2023 | | 22.2.3 | 07/2022 | 04/2023 |
For more information, see the [Microsoft Security Development Lifecycle practice
| 10.5.3 | 10/2021 | 07/2022 | | 10.5.2 | 10/2021 | 07/2022 |
+## September 2022
+
+|Service area |Updates |
+|||
+|**OT networks** |**Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm |
+ ## August 2022 |Service area |Updates |
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
Run the below curl command in Azure Cloud Bash to add user(s) to the "Users" gro
``` > [!NOTE] > The value to be sent for the param "email" is the Object ID of the user and not the user's email+ **Sample request** ```bash
event-grid Communication Services Voice Video Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md
Azure Communication Services emits the following voice and video calling event t
| Microsoft.Communication.CallEnded | Published when call is ended | | Microsoft.Communication.CallParticipantAdded | Published when participant is added | | Microsoft.Communication.CallParticipantRemoved | Published when participant is removed |
+| Microsoft.Communication.IncomingCall | Published when there is an incoming call |
## Event responses
This section contains an example of what that data would look like for each even
] ```
+### Microsoft.Communication.IncomingCall
+
+```json
+[
+ {
+ "id": "d5546be8-227a-4db8-b2c3-4f06fd675fd6",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "/caller/4:+16041234567/recipient/8:acs:ff4181e1-324c-4cd1-9c4f-bda3e5d348f5_00000000-0000-0000-0000-000000000000",
+ "data": {
+ "to": {
+ "kind": "PhoneNumber",
+ "rawId": "4:+18331234567",
+ "phoneNumber": {
+ "value": "+18331234567"
+ }
+ },
+ "from": {
+ "kind": "PhoneNumber",
+ "rawId": "4:+16041234567",
+ "phoneNumber": {
+ "value": "+16041234567"
+ }
+ },
+ "callerDisplayName": "",
+ "incomingCallContext": "eyJhbGciOiJub25lIiwidHliSldUIn0.eyJjYyI6Ikg0c0lBQi9iT0JiOUs0SVhtQS9UMGhJbFVaUUlHQVBIc1J1M1RlbzgyNW4xcmtHJNa2hCNVVTQkNUbjFKTVo1NCt3ZDk1WFY0ZnNENUg0VDV2dk5VQ001NWxpRkpJb0pDUWlXS0F3OTJRSEVwUWo4aFFleDl4ZmxjRi9lMTlaODNEUmN6QUpvMVRWVXoxK1dWYm1lNW5zNmF5cFRyVGJ1KzMxU3FMY3E1SFhHWHZpc3FWd2kwcUJWSEhta0xjVFJEQ0hlSjNhdzA5MHE2T0pOaFNqS0pFdXpCcVdidzRoSmJGMGtxUkNaOFA4T3VUMTF0MzVHN0kvS0w3aVQyc09aS2F0NHQ2cFV5d0UwSUlEYm4wQStjcGtiVjlUK0E4SUhLZ2JKUjc1Vm8vZ0hFZGtRT3RCYXl1akc4cUt2U1dITFFCR3JFYjJNY3RuRVF0TEZQV1JEUzJHMDk3TGU5VnhhTktob2JIV0wzOHdab3dWcGVWZmsrL2QxYVZnQ2U1bVVLQTh1T056YmpvdXdnQjNzZTlnTEhjNFlYem5BVU9nRGY5dUFQMndsMXA0WU5nK1cySVRxSEtZUzJDV25IcEUySkhVZzd2UnVHOTBsZ081cU81MngvekR0OElYWHBFSi9peUxtNkdibmR1eEdZREozRXNWWXh4ZzZPd1hqc0pCUjZvR1U3NDIrYTR4M1RpQXFaV245UVIrMHNaVDg3YXpRQzbDNUR3BuZFhST1FTMVRTRzVVTkRGeU5UVjNORTFHU2kxck1UTk9VMUF0TWtWNVNreFRUVVI0YlMxRk1VdEVabnBRTjFsQ1EwWkVlVTQxZURCc1IyaHljVTVYTFROeWVTMVJNVjgyVFhrdGRFNUJZV3hrZW5SSVUwMTFVVE5GWkRKUkluMTlmUS5hMTZ0eXdzTDhuVHNPY1RWa2JnV3FPbTRncktHZmVMaC1KNjZUZXoza0JWQVJmYWYwOTRDWDFJSE5tUXRJeDN1TWk2aXZ3QXFFQWV1UlNGTjhlS3gzWV8yZXppZUN5WDlaSHp6Q1ZKemdZUVprc0RjYnprMGJoR09laWkydkpEMnlBMFdyUW1SeGFxOGZUM25EOUQ1Z1ZSUVczMGRheGQ5V001X1ZuNFNENmxtLVR5TUSVEifQ.",
+ "correlationId": "d732db64-4803-462d-be9c-518943ea2b7a"
+ },
+ "eventType": "Microsoft.Communication.IncomingCall",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-08-25T19:27:24.2415391Z"
+ }
+]
+```
+ ## Limitations Calling events are only available for ACS VoIP users. PSTN, bots, echo bot and Teams users events are excluded. No calling events will be available for ACS - Teams meeting interop call.
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-ip-filtering.md
To deploy the template, follow the instructions for [Azure Resource Manager][lnk
> [!IMPORTANT] > If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
-## default action and public network access
+## Default action and public network access
### REST API
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
Title: Overview of Event Hubs Premium description: This article provides an overview of Azure Event Hubs Premium, which offers multi-tenant deployments of Event Hubs for high-end streaming needs. Previously updated : 02/02/2022 Last updated : 09/20/2022
In comparison to the dedicated offering, the premium tier provides the following
Therefore, the premium tier is often a more cost effective option for event streaming workloads up to 160 MB/sec (per namespace), especially with changing loads throughout the day or week, when compared to the dedicated tier.
-For the extra robustness gained by availability-zone support, the minimal deployment scale for the dedicated tier is 8 capacity units (CU), but you'll have availability zone support in the premium tier from the first PU in all availability zone regions.
+> [!NOTE]
+> For the extra robustness gained by **availability-zone** support, the minimal deployment scale for the dedicated tier is **8 capacity units (CU)**, but you'll have availability zone support in the premium tier from the first PU in all availability zone regions.
You can purchase 1, 2, 4, 8 and 16 processing units for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it is in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU will depend on various factors, including:
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
Title: About ExpressRoute virtual network gateways - Azure| Microsoft Docs
description: Learn about virtual network gateways for ExpressRoute. This article includes information about gateway SKUs and types. - Previously updated : 04/23/2021 Last updated : 09/20/2022 # About ExpressRoute virtual network gateways
-To connect your Azure virtual network and your on-premises network via ExpressRoute, you must create a virtual network gateway first. A virtual network gateway serves two purposes: exchange IP routes between the networks and route network traffic. This article explains gateway types, gateway SKUs, and estimated performance by SKU. This article also explains ExpressRoute [FastPath](#fastpath), a feature that enables the network traffic from your on-premises network to bypass the virtual network gateway to improve performance.
+To connect your Azure virtual network and your on-premises network using ExpressRoute, you must first create a virtual network gateway. A virtual network gateway serves two purposes: exchange IP routes between the networks and route network traffic. This article explains different gateway types, gateway SKUs, and estimated performance by SKU. This article also explains ExpressRoute [FastPath](#fastpath), a feature that enables the network traffic from your on-premises network to bypass the virtual network gateway to improve performance.
## Gateway types When you create a virtual network gateway, you need to specify several settings. One of the required settings, `-GatewayType`, specifies whether the gateway is used for ExpressRoute, or VPN traffic. The two gateway types are:
-* **Vpn** - To send encrypted traffic across the public Internet, you use the gateway type 'Vpn'. This is also referred to as a VPN gateway. Site-to-Site, Point-to-Site, and VNet-to-VNet connections all use a VPN gateway.
+* **Vpn** - To send encrypted traffic across the public Internet, you use the gateway type 'Vpn'. This type of gateway is also referred to as a VPN gateway. Site-to-Site, Point-to-Site, and VNet-to-VNet connections all use a VPN gateway.
-* **ExpressRoute** - To send network traffic on a private connection, you use the gateway type 'ExpressRoute'. This is also referred to as an ExpressRoute gateway and is the type of gateway used when configuring ExpressRoute.
+* **ExpressRoute** - To send network traffic on a private connection, you use the gateway type 'ExpressRoute'. This type of gateway is also referred to as an ExpressRoute gateway and is used when configuring ExpressRoute.
Each virtual network can have only one virtual network gateway per gateway type. For example, you can have one virtual network gateway that uses `-GatewayType` Vpn, and one that uses `-GatewayType` ExpressRoute. ## <a name="gwsku"></a>Gateway SKUs+ [!INCLUDE [expressroute-gwsku-include](../../includes/expressroute-gwsku-include.md)]
-If you want to upgrade your gateway to a more powerful gateway SKU, you can use the `Resize-AzVirtualNetworkGateway` PowerShell cmdlet or perform the upgrade directly in the ExpressRoute virtual network gateway configuration blade in the Azure portal. The following upgrades are supported:
+If you want to upgrade your gateway to a higher capacity gateway SKU, you can use the `Resize-AzVirtualNetworkGateway` PowerShell cmdlet or perform the upgrade directly in the ExpressRoute virtual network gateway configuration page in the Azure portal. The following upgrades are supported:
- Standard to High Performance - Standard to Ultra Performance
Additionally, you can downgrade the virtual network gateway SKU. The following d
- High Performance to Standard - ErGw2Az to ErGw1Az
-For all other downgrade scenarios, you will need to delete and recreate the gateway. Recreating a gateway incurs downtime.
+For all other downgrade scenarios, you'll need to delete and recreate the gateway. Recreating a gateway incurs downtime.
### <a name="gatewayfeaturesupport"></a>Feature support by gateway SKU+ The following table shows the features supported across each gateway type. |**Gateway SKU**|**VPN Gateway and ExpressRoute coexistence**|**FastPath**|**Max Number of Circuit Connections**|
The following table shows the features supported across each gateway type.
|**Ultra Performance SKU/ErGw3Az**|Yes|Yes|16 ### <a name="aggthroughput"></a>Estimated performances by gateway SKU+ The following table shows the gateway types and the estimated performance scale numbers. These numbers are derived from the following testing conditions and represent the max support limits. Actual performance may vary, depending on how closely traffic replicates the testing conditions. ### Testing conditions
The following table shows the gateway types and the estimated performance scale
|**Ultra Performance/ErGw3Az**|16,000|10,000|1,000,000|11,000| > [!IMPORTANT]
-> Application performance depends on multiple factors, such as the end-to-end latency, and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment. Additionally, Microsoft performs routine host and OS maintenance on the ExpressRoute Virtual Network Gateway, to maintain reliability of the service. During a maintenance period, control plane and data path capacity of the gateway is reduced.
+> * Application performance depends on multiple factors, such as end-to-end latency and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment. Additionally, Microsoft performs routine host and OS maintenance on the ExpressRoute Virtual Network Gateway to maintain reliability of the service. During a maintenance period, control plane and data path capacity of the gateway is reduced.
+> * During a maintenance period, you may experience intermittent connectivity issues to private endpoint resources.
>[!NOTE] > The maximum number of ExpressRoute circuits from the same peering location that can connect to the same virtual network is 4 for all gateways.
The following table shows the gateway types and the estimated performance scale
## <a name="gwsub"></a>Gateway subnet
-Before you create an ExpressRoute gateway, you must create a gateway subnet. The gateway subnet contains the IP addresses that the virtual network gateway VMs and services use. When you create your virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the required ExpressRoute gateway settings. Never deploy anything else (for example, additional VMs) to the gateway subnet. The gateway subnet must be named 'GatewaySubnet' to work properly. Naming the gateway subnet 'GatewaySubnet' lets Azure know that this is the subnet to deploy the virtual network gateway VMs and services to.
+Before you create an ExpressRoute gateway, you must create a gateway subnet. The gateway subnet contains the IP addresses that the virtual network gateway VMs and services use. When you create your virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the required ExpressRoute gateway settings. Never deploy anything else into the gateway subnet. The gateway subnet must be named 'GatewaySubnet' to work properly. Naming the gateway subnet 'GatewaySubnet' lets Azure know to deploy the virtual network gateway VMs and services into this subnet.
>[!NOTE] >[!INCLUDE [vpn-gateway-gwudr-warning.md](../../includes/vpn-gateway-gwudr-warning.md)]
Before you create an ExpressRoute gateway, you must create a gateway subnet. The
When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others.
-When you are planning your gateway subnet size, refer to the documentation for the configuration that you are planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Additionally, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future additional configurations. While you can create a gateway subnet as small as /29, we recommend that you create a gateway subnet of /27 or larger (/27, /26 etc.) if you have the available address space to do so. If you plan on connecting 16 ExpressRoute circuits to your gateway, you **must** create a gateway subnet of /26 or larger. If you are creating a dual stack gateway subnet, we recommend that you also use an IPv6 range of /64 or larger. This will accommodate most configurations.
+When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Furthermore, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future configurations. While you can create a gateway subnet as small as /29, we recommend that you create a gateway subnet of /27 or larger (/27, /26, and so on). If you plan on connecting 16 ExpressRoute circuits to your gateway, you **must** create a gateway subnet of /26 or larger. If you're creating a dual stack gateway subnet, we recommend that you also use an IPv6 range of /64 or larger. This setup will accommodate most configurations.
The following Resource Manager PowerShell example shows a gateway subnet named GatewaySubnet. You can see the CIDR notation specifies a /27, which allows for enough IP addresses for most configurations that currently exist.
Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.0.3.0/2
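For context, a fuller version of that command sequence might look like the following sketch; the virtual network name and resource group are illustrative placeholders rather than values from this article, and the /27 prefix follows the recommendation above.

```azurepowershell-interactive
# Minimal sketch (placeholder names): add a /27 gateway subnet to an existing virtual network.
$vnet = Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myResourceGroup"
Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '10.0.3.0/27' -VirtualNetwork $vnet

# Persist the subnet change back to Azure.
Set-AzVirtualNetwork -VirtualNetwork $vnet
```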
### <a name="zrgw"></a>Zone-redundant gateway SKUs
-You can also deploy ExpressRoute gateways in Azure Availability Zones. This physically and logically separates them into different Availability Zones, protecting your on-premises network connectivity to Azure from zone-level failures.
+You can also deploy ExpressRoute gateways in Azure Availability Zones. This configuration physically and logically separates them into different Availability Zones, protecting your on-premises network connectivity to Azure from zone-level failures.
![Zone-redundant ExpressRoute gateway](./media/expressroute-about-virtual-network-gateways/zone-redundant.png)
Zone-redundant gateways use specific new gateway SKUs for ExpressRoute gateway.
* ErGw2AZ * ErGw3AZ
-The new gateway SKUs also support other deployment options to best match your needs. When creating a virtual network gateway using the new gateway SKUs, you also have the option to deploy the gateway in a specific zone. This is referred to as a zonal gateway. When you deploy a zonal gateway, all the instances of the gateway are deployed in the same Availability Zone.
+The new gateway SKUs also support other deployment options to best match your needs. When creating a virtual network gateway using the new gateway SKUs, you can deploy the gateway in a specific zone. This type of gateway is referred to as a zonal gateway. When you deploy a zonal gateway, all the instances of the gateway are deployed in the same Availability Zone.
## <a name="fastpath"></a>FastPath
ExpressRoute virtual network gateway is designed to exchange network routes and
For more information about FastPath, including limitations and requirements, see [About FastPath](about-fastpath.md).
+## Route Server
+
+When you create or delete an Azure Route Server in a virtual network that contains a Virtual Network Gateway (ExpressRoute or VPN), expect downtime until the operation completes.
+ ## <a name="resources"></a>REST APIs and PowerShell cmdlets
-For additional technical resources and specific syntax requirements when using REST APIs and PowerShell cmdlets for virtual network gateway configurations, see the following pages:
+
+For more technical resources and specific syntax requirements when using REST APIs and PowerShell cmdlets for virtual network gateway configurations, see the following pages:
| **Classic** | **Resource Manager** | | | |
For additional technical resources and specific syntax requirements when using R
## VNet-to-VNet connectivity
-By default, connectivity between virtual networks are enabled when you link multiple virtual networks to the same ExpressRoute circuit. However, Microsoft advises against using your ExpressRoute circuit for communication between virtual networks and instead use [VNet peering](../virtual-network/virtual-network-peering-overview.md). For more information about why VNet-to-VNet connectivity is not recommended over ExpressRoute, see [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
+By default, connectivity between virtual networks is enabled when you link multiple virtual networks to the same ExpressRoute circuit. However, Microsoft advises against using your ExpressRoute circuit for communication between virtual networks and recommends using [VNet peering](../virtual-network/virtual-network-peering-overview.md) instead. For more information about why VNet-to-VNet connectivity isn't recommended over ExpressRoute, see [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
+
+### Virtual network peering
+
+A virtual network with an ExpressRoute gateway can have virtual network peering with up to 500 other virtual networks. Virtual networks without an ExpressRoute gateway may have a higher peering limit.
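As a hedged illustration of the peering recommendation above (not part of the original article), the following PowerShell sketch peers two virtual networks directly instead of routing between them over the ExpressRoute circuit; all names are placeholders.

```azurepowershell-interactive
# Minimal sketch (placeholder names): peer two virtual networks in both directions.
$vnet1 = Get-AzVirtualNetwork -Name "hub-vnet" -ResourceGroupName "myResourceGroup"
$vnet2 = Get-AzVirtualNetwork -Name "spoke-vnet" -ResourceGroupName "myResourceGroup"

Add-AzVirtualNetworkPeering -Name "hub-to-spoke" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name "spoke-to-hub" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id
```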
## Next steps
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
From a browser, navigate to the [Azure portal](https://portal.azure.com) and sig
1. On the **Create ExpressRoute** page. Provide the **Resource Group**, **Region**, and **Name** for the circuit. Then select **Next: Configuration >**.
-| Setting | Value |
-| | |
-| Resource group | Select **Create new**. Enter **ExpressRouteResourceGroup** </br> Select **OK**. |
-| Region | West US 2 |
-| Name | TestERCircuit |
+ | Setting | Value |
+ | | |
+ | Resource group | Select **Create new**. Enter **ExpressRouteResourceGroup** </br> Select **OK**. |
+ | Region | West US 2 |
+ | Name | TestERCircuit |
- :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-create-basic.png" alt-text=" Screenshot of how to configure the resource group and region.":::
+ :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-create-basic.png" alt-text=" Screenshot of how to configure the resource group and region.":::
1. When you're filling in the values on this page, make sure that you specify the correct SKU tier (Local, Standard, or Premium) and data metering billing model (Unlimited or Metered).
firewall Firewall Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-performance.md
Previously updated : 08/03/2022 Last updated : 09/21/2022
Azure Firewall also supports the following throughput for single connections:
||| |Standard<br>Max bandwidth for single TCP connection |1.3| |Premium<br>Max bandwidth for single TCP connection |9.5|
-|Premium max bandwidth with TLS/IDS|100|
|Premium single TCP connection with IDPS on *Alert and Deny* mode|up to 300 Mbps| Performance values are calculated with Azure Firewall at full scale. Actual performance may vary depending on your rule complexity and network configuration. These metrics are updated periodically as performance continuously evolves with each release.
firewall Long Running Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/long-running-sessions.md
+
+ Title: Long running TCP sessions with Azure Firewall
+description: There are a few scenarios where Azure Firewall can potentially drop long running TCP sessions.
++++ Last updated : 09/21/2022+++
+# Long running TCP sessions with Azure Firewall
+
+Azure Firewall is designed to be available and redundant. Every effort is made to avoid service disruptions. However, there are a few scenarios where Azure Firewall can potentially drop long running TCP sessions.
+
+## Scenarios impacting long running connections
+
+The following scenarios can potentially drop long running TCP sessions:
+- Scale down
+- Firewall maintenance
+- Idle timeout
+- Auto-recovery
+
+### Scale down
+
+Azure Firewall scales up or down based on throughput and CPU usage. Scale down is performed by putting a VM instance in drain mode for 90 seconds before recycling it. Any long running connections remaining on the VM instance after 90 seconds will be disconnected.
+
+### Firewall maintenance
+
+The Azure Firewall engineering team updates the firewall on an as-needed basis (usually every month), generally during nighttime hours in the local time zone for that region. Updates include security patches, bug fixes, and new feature rollouts that are applied by configuring the firewall in a [rolling update mode](https://azure.microsoft.com/blog/deployment-strategies-defined/). The firewall instances are put in a drain mode before reimaging them to give short-lived sessions time to drain. Long running sessions remaining on an instance after the drain period are dropped during the restart.
+
+### Idle timeout
+
+An idle timer is in place to recycle idle sessions. The default value is four minutes. Applications that maintain keepalives don't idle out. If the application needs more than four minutes of idle time (typical of IoT devices), you can contact support to extend the timeout to 30 minutes in the backend.
+
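As an illustrative sketch only (not from the article), an application running on PowerShell 7 / .NET Core could enable TCP keepalives on its own socket so probes are sent well inside the idle window; the endpoint and intervals below are assumptions, and the `TcpKeepAliveTime`/`TcpKeepAliveInterval` options require .NET Core 3.0 or later.

```azurepowershell-interactive
using namespace System.Net.Sockets

# Minimal sketch (assumes PowerShell 7+ / .NET Core): enable TCP keepalives so the
# connection isn't recycled by the firewall's four-minute idle timer.
$client = [TcpClient]::new()
$client.Client.SetSocketOption([SocketOptionLevel]::Socket, [SocketOptionName]::KeepAlive, $true)
# Start probing after 120 seconds of inactivity, then every 30 seconds (values in seconds).
$client.Client.SetSocketOption([SocketOptionLevel]::Tcp, [SocketOptionName]::TcpKeepAliveTime, 120)
$client.Client.SetSocketOption([SocketOptionLevel]::Tcp, [SocketOptionName]::TcpKeepAliveInterval, 30)
# Placeholder endpoint for illustration only.
$client.Connect('server.contoso.com', 443)
```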
+### Auto-recovery
+
+Azure Firewall constantly monitors VM instances and recovers them automatically if an instance becomes unresponsive. In general, there's a 1 in 100 chance for a firewall instance to be auto-recovered over a 30-day period.
+
+## Applications sensitive to TCP session resets
+
+Session disconnection isn't an issue for resilient applications that can handle session resets gracefully. However, there are a few applications (like traditional SAP GUI and SAP RFC based apps) that are sensitive to session resets. Secure such sensitive applications with Network Security Groups (NSGs).
+
+## Network security groups
+
+You can deploy [network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md#security) (NSGs) to protect against unsolicited traffic into Azure subnets. Network security groups are simple, stateful packet inspection devices that use the 5-tuple approach (source IP, source port, destination IP, destination port, and layer 4 protocol) to create allow/deny rules for network traffic. You allow or deny traffic to and from a single IP address, to and from multiple IP addresses, or to and from entire subnets. NSG flow logs help with auditing by logging information about IP traffic flowing through an NSG. To learn more about NSG flow logging, see [Introduction to flow logging for network security groups](../network-watcher/network-watcher-nsg-flow-logging-overview.md).
+
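As a minimal, hypothetical sketch of that approach (all names, prefixes, and ports below are assumptions for illustration), you could create an NSG with a single 5-tuple allow rule in PowerShell:

```azurepowershell-interactive
# Minimal sketch (placeholder values): allow inbound TCP 3200 (a typical SAP dispatcher port)
# from an application subnet; the NSG's default rules then deny other unsolicited inbound traffic.
$rule = New-AzNetworkSecurityRuleConfig -Name "allow-sap-dispatcher" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "10.1.0.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.2.0.0/24" -DestinationPortRange "3200"

New-AzNetworkSecurityGroup -Name "sap-subnet-nsg" -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -SecurityRules $rule
```

You would then associate the NSG with the subnet that hosts the sensitive application.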
+## Next steps
+
+To learn more about Azure Firewall performance, see [Azure Firewall performance](firewall-performance.md).
frontdoor Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/routing-methods.md
The lifetime of the cookie is the same as the user's session, as Front Door curr
> > Public proxies may interfere with session affinity. This is because establishing a session requires Front Door to add a session affinity cookie to the response, which cannot be done if the response is cacheable as it would disrupt the cookies of other clients requesting the same resource. To protect against this, session affinity will **not** be established if the origin sends a cacheable response when this is attempted. If the session has already been established, it does not matter if the response from the origin is cacheable. >
-> Session affinity will be established in the following circumstances:
+> Session affinity will be established in the following circumstances beyond the standard non-cacheable scenarios:
> - The response must include the `Cache-Control` header of *no-store*. > - If the response contains an `Authorization` header, it must not be expired. > - The response is an HTTP 302 status code.
frontdoor Rules Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rules-match-conditions.md
In this example, we match all requests with a server port of 443.
For rules that accept values from the standard operator list, the following operators are valid: +
+| Operator | Description | ARM template support |
+|-|--|--|
+| Any | Matches when there's any value, regardless of what it is. | `operator`: `Any` |
+| Equal | Matches when the value exactly matches the specified string. | `operator`: `Equal` |
+| Contains | Matches when the value contains the specified string. | `operator`: `Contains` |
+| Less Than | Matches when the length of the value is less than the specified integer. | `operator`: `LessThan` |
+| Greater Than | Matches when the length of the value is greater than the specified integer. | `operator`: `GreaterThan` |
+| Less Than or Equal | Matches when the length of the value is less than or equal to the specified integer. | `operator`: `LessThanOrEqual` |
+| Greater Than or Equal | Matches when the length of the value is greater than or equal to the specified integer. | `operator`: `GreaterThanOrEqual` |
+| Begins With | Matches when the value begins with the specified string. | `operator`: `BeginsWith` |
+| Ends With | Matches when the value ends with the specified string. | `operator`: `EndsWith` |
+| Not Any | Matches when there's no value. | `operator`: `Any` and `negateCondition` : `true` |
+| Not Equal | Matches when the value doesn't match the specified string. | `operator`: `Equal` and `negateCondition` : `true` |
+| Not Contains | Matches when the value doesn't contain the specified string. | `operator`: `Contains` and `negateCondition` : `true` |
+| Not Less Than | Matches when the length of the value isn't less than the specified integer. | `operator`: `LessThan` and `negateCondition` : `true` |
+| Not Greater Than | Matches when the length of the value isn't greater than the specified integer. | `operator`: `GreaterThan` and `negateCondition` : `true` |
+| Not Less Than or Equal | Matches when the length of the value isn't less than or equal to the specified integer. | `operator`: `LessThanOrEqual` and `negateCondition` : `true` |
+| Not Greater Than or Equals | Matches when the length of the value isn't greater than or equal to the specified integer. | `operator`: `GreaterThanOrEqual` and `negateCondition` : `true` |
+| Not Begins With | Matches when the value doesn't begin with the specified string. | `operator`: `BeginsWith` and `negateCondition` : `true` |
+| Not Ends With | Matches when the value doesn't end with the specified string. | `operator`: `EndsWith` and `negateCondition` : `true` |
+++ | Operator | Description | ARM template support | |-|--|--| | Any | Matches when there's any value, regardless of what it is. | `operator`: `Any` |
For rules that accept values from the standard operator list, the following oper
| Not Ends With | Matches when the value doesn't end with the specified string. | `operator`: `EndsWith` and `negateCondition` : `true` | | Not RegEx | Matches when the value doesn't match the specified regular expression. [See below for further details.](#regular-expressions) | `operator`: `RegEx` and `negateCondition` : `true` | + > [!TIP] > For numeric operators like *Less than* and *Greater than or equals*, the comparison used is based on length. The value in the match condition should be an integer that specifies the length you want to compare.
frontdoor How To Cache Purge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-cache-purge-cli.md
+
+ Title: 'Cache purging - Azure Front Door - Azure CLI'
+description: This article helps you understand how to purge cache on an Azure Front Door Standard and Premium profile using Azure CLI.
++++++ Last updated : 09/20/2022+++
+# Cache purging in Azure Front Door with Azure CLI
+
+Azure Front Door caches assets until the asset's time-to-live (TTL) expires. Whenever a client requests an asset with expired TTL, the Azure Front Door environment retrieves a new updated copy of the asset to serve the request and then stores the refreshed cache.
+
+Best practice is to make sure your users always obtain the latest copy of your assets. The way to do that is to version your assets for each update and publish them as new URLs. Azure Front Door Standard/Premium will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached content from all edge nodes and force them all to retrieve updated assets, for example because you've made new updates to your application or you need to correct assets that contain incorrect information.
+++
+* Review [Caching with Azure Front Door](../front-door-caching.md) to understand how caching works.
+* Have a functioning Azure Front Door profile. See [Create a Front Door - CLI](../create-front-door-cli.md) to learn how to create one.
+
+## Configure cache purge
+
+Run [az afd endpoint purge](/cli/azure/afd/endpoint#az-afd-endpoint-purge) to purge the cache after providing the necessary parameters, such as:
+ * Name of resource group
+ * Name of the Azure Front Door profile within the resource group with assets you want to purge
+ * Endpoints with assets you want to purge
+ * Domains/Subdomains with assets you want to purge
+
+ > [!IMPORTANT]
+ > Cache purge for wildcard domains isn't supported; you have to specify a subdomain of the wildcard domain for cache purge. You can add as many single-level subdomains of the wildcard domain as you need. For example, for the wildcard domain `*.afdxgatest.azfdtest.xyz`, you can add subdomains in the form of `contoso.afdxgatest.azfdtest.xyz` or `cart.afdxgatest.azfdtest.xyz` and so on. For more information, see [Wildcard domains in Azure Front Door](../front-door-wildcard-domain.md).
+
+ * The path to the content to be purged.
+ * These formats are supported in the lists of paths to purge:
+ * **Single path purge**: Purge individual assets by specifying the full path of the asset (without the protocol and domain), with the file extension, for example, /pictures/strasbourg.png.
+ * **Root domain purge**: Purge the root of the endpoint with "/*" in the path.
+
+```azurecli-interactive
+az afd endpoint purge \
+ --resource-group myRGFD \
+ --profile-name contosoafd \
+ --endpoint-name myendpoint \
+ --domains www.contoso.com \
+ --content-paths '/scripts/*'
+```
+Cache purges on the Azure Front Door profile are case-insensitive. Additionally, they're query string agnostic, which means purging a URL purges all of its query-string variations.
+
+## Next steps
+
+Learn how to [create an Azure Front Door profile](../create-front-door-portal.md).
frontdoor How To Cache Purge Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-cache-purge-powershell.md
+
+ Title: 'Cache purging - Azure Front Door - Azure PowerShell'
+description: This article helps you understand how to purge cache on an Azure Front Door Standard and Premium profile using Azure PowerShell.
++++++ Last updated : 09/20/2022+++
+# Cache purging in Azure Front Door with Azure PowerShell
+
+Azure Front Door caches assets until the asset's time-to-live (TTL) expires. Whenever a client requests an asset with expired TTL, the Azure Front Door environment retrieves a new updated copy of the asset to serve the request and then stores the refreshed cache.
+
+Best practice is to make sure your users always obtain the latest copy of your assets. The way to do that is to version your assets for each update and publish them as new URLs. Azure Front Door Standard/Premium will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached content from all edge nodes and force them all to retrieve updated assets, for example because you've made new updates to your application or you need to correct assets that contain incorrect information.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell installed locally or Azure Cloud Shell
+++
+* Review [Caching with Azure Front Door](../front-door-caching.md) to understand how caching works.
+* Have a functioning Azure Front Door profile. See [Create a Front Door - PowerShell](../create-front-door-powershell.md) to learn how to create one.
+
+## Configure cache purge
+
+Run [Clear-AzFrontDoorCdnEndpointContent](/powershell/module/az.cdn/clear-azfrontdoorcdnendpointcontent) to purge the cache after providing the necessary parameters, such as:
+ * Name of resource group
+ * Name of the Azure Front Door profile within the resource group with assets you want to purge
+ * Endpoints with assets you want to purge
+ * Domains/Subdomains with assets you want to purge
+
+ > [!IMPORTANT]
+ > Cache purge for wildcard domains isn't supported; you have to specify a subdomain of the wildcard domain for cache purge. You can add as many single-level subdomains of the wildcard domain as you need. For example, for the wildcard domain `*.afdxgatest.azfdtest.xyz`, you can add subdomains in the form of `contoso.afdxgatest.azfdtest.xyz` or `cart.afdxgatest.azfdtest.xyz` and so on. For more information, see [Wildcard domains in Azure Front Door](../front-door-wildcard-domain.md).
+
+ * The path to the content to be purged.
+ * These formats are supported in the lists of paths to purge:
+ * **Single path purge**: Purge individual assets by specifying the full path of the asset (without the protocol and domain), with the file extension, for example, /pictures/strasbourg.png.
+ * **Root domain purge**: Purge the root of the endpoint with "/*" in the path.
+
+```azurepowershell-interactive
+Clear-AzFrontDoorCdnEndpointContent `
+ -ResourceGroupName myRGFD `
+ -ProfileName contosoafd `
+ -EndpointName myendpoint `
+ -Domain www.contoso.com `
+ -ContentPath /scripts/*
+```
+Cache purges on the Azure Front Door profile are case-insensitive. Additionally, they're query string agnostic, which means purging a URL purges all of its query-string variations.
+
+## Next steps
+
+Learn how to [create an Azure Front Door profile](../create-front-door-portal.md).
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-linux.md
Title: Reference - Azure Policy guest configuration baseline for Linux description: Details of the Linux baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 08/02/2022 Last updated : 09/21/2022 -- # Linux security baseline This article details the configuration settings for Linux guests as applicable in the following
-Azure Policy definitions:
+implementations:
-- Linux machines should meet requirements for the Azure compute security baseline-- Vulnerabilities in security configuration on your machines should be remediated
+- **\[Preview\]: Linux machines should meet requirements for the Azure compute security baseline**
+ Azure Policy guest configuration definition
+- **Vulnerabilities in security configuration on your machines should be remediated** in Azure
+ Security Center
-For more information, see [Azure Policy guest configuration](../../machine-configuration/overview.md) and
-[Overview of the Azure Security Benchmark (V3)](../../../security/benchmarks/overview.md).
+For more information, see [Azure Policy guest configuration](../concepts/guest-configuration.md) and
+[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).
## General security controls
For more information, see [Azure Policy guest configuration](../../machine-confi
|Ensure nodev option set on /home partition.<br /><sub>(1.1.4)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /home partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /home partition. For more information, see the fstab(5) manual pages. | |Ensure nodev option set on /tmp partition.<br /><sub>(1.1.5)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /tmp partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. | |Ensure nodev option set on /var/tmp partition.<br /><sub>(1.1.6)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /var/tmp partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure nosuid option set on /tmp partition.<br /><sub>(1.1.7)</sub> |Description: Since the /tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create setuid files in /var/tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure nosuid option set on /var/tmp partition.<br /><sub>(1.1.8)</sub> |Description: Since the /var/tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create setuid files in /var/tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure noexec option set on /var/tmp partition.<br /><sub>(1.1.9)</sub> |Description: Since the `/var/tmp` filesystem is only intended for temporary file storage, set this option to ensure that users cannot run executable binaries from `/var/tmp` . |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure noexec option set on /dev/shm partition.<br /><sub>(1.1.16)</sub> |Description: Setting this option on a file system prevents users from executing programs from shared memory. This option deters users from introducing potentially malicious software on the system. |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /dev/shm partition. For more information, see the fstab(5) manual pages. |
+|Ensure nosuid option set on /tmp partition.<br /><sub>(1.1.7)</sub> |Description: Since the /tmp filesystem is only intended for temporary file storage, set this option to ensure that users can't create setuid files in /var/tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure nosuid option set on /var/tmp partition.<br /><sub>(1.1.8)</sub> |Description: Since the /var/tmp filesystem is only intended for temporary file storage, set this option to ensure that users can't create setuid files in /var/tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure noexec option set on /var/tmp partition.<br /><sub>(1.1.9)</sub> |Description: Since the `/var/tmp` filesystem is only intended for temporary file storage, set this option to ensure that users can't run executable binaries from `/var/tmp` . |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure noexec option set on /dev/shm partition.<br /><sub>(1.1.16)</sub> |Description: Setting this option on a file system prevents users from executing programs from shared memory. This control deters users from introducing potentially malicious software on the system. |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /dev/shm partition. For more information, see the fstab(5) manual pages. |
|Disable automounting<br /><sub>(1.1.21)</sub> |Description: With automounting enabled, anyone with physical access could attach a USB drive or disc and have its contents available in system even if they lack permissions to mount it themselves. |Disable the autofs service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-autofs' | |Ensure mounting of USB storage devices is disabled<br /><sub>(1.1.21.1)</sub> |Description: Removing support for USB storage devices reduces the local attack surface of the server. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install usb-storage /bin/true` then unload the usb-storage module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' | |Ensure core dumps are restricted.<br /><sub>(1.5.1)</sub> |Description: Setting a hard limit on core dumps prevents users from overriding the soft variable. If core dumps are required, consider setting limits for user groups (see `limits.conf(5)` ). In addition, setting the `fs.suid_dumpable` variable to 0 will prevent setuid programs from dumping core. |Add `hard core 0` to /etc/security/limits.conf or a file in the limits.d directory and set `fs.suid_dumpable = 0` in sysctl or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-core-dumps' | |Ensure prelink is disabled.<br /><sub>(1.5.4)</sub> |Description: The prelinking feature can interfere with the operation of AIDE, because it changes binaries. Prelinking can also increase the vulnerability of the system if a malicious user is able to compromise a common library such as libc. |uninstall `prelink` using your package manager or run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-prelink' |
-|Ensure permissions on /etc/motd are configured.<br /><sub>(1.7.1.4)</sub> |Description: If the `/etc/motd` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/motd to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
-|Ensure permissions on /etc/issue are configured.<br /><sub>(1.7.1.5)</sub> |Description: If the `/etc/issue` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
-|Ensure permissions on /etc/issue.net are configured.<br /><sub>(1.7.1.6)</sub> |Description: If the `/etc/issue.net` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue.net to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|Ensure permissions on /etc/motd are configured.<br /><sub>(1.7.1.4)</sub> |Description: If the `/etc/motd` file doesn't have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/motd to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|Ensure permissions on /etc/issue are configured.<br /><sub>(1.7.1.5)</sub> |Description: If the `/etc/issue` file doesn't have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|Ensure permissions on /etc/issue.net are configured.<br /><sub>(1.7.1.6)</sub> |Description: If the `/etc/issue.net` file doesn't have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue.net to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
|The nodev option should be enabled for all removable media.<br /><sub>(2.1)</sub> |Description: An attacker could mount a special device (for example, block or character device) via removable media |Add the nodev option to the fourth field (mounting options) in /etc/fstab. For more information, see the fstab(5) manual pages. | |The noexec option should be enabled for all removable media.<br /><sub>(2.2)</sub> |Description: An attacker could load executable file via removable media |Add the noexec option to the fourth field (mounting options) in /etc/fstab. For more information, see the fstab(5) manual pages. | |The nosuid option should be enabled for all removable media.<br /><sub>(2.3)</sub> |Description: An attacker could load files that run with an elevated security context via removable media |Add the nosuid option to the fourth field (mounting options) in /etc/fstab. For more information, see the fstab(5) manual pages. | |Ensure talk client is not installed.<br /><sub>(2.3.3)</sub> |Description: The software presents a security risk as it uses unencrypted protocols for communication. |Uninstall `talk` or run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-talk' |
-|Ensure permissions on /etc/hosts.allow are configured.<br /><sub>(3.4.4)</sub> |Description: It is critical to ensure that the `/etc/hosts.allow` file is protected from unauthorized write access. Although it is protected by default, the file permissions could be changed either inadvertently or through malicious actions. |Set the owner and group of /etc/hosts.allow to root and the permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
-|Ensure permissions on /etc/hosts.deny are configured.<br /><sub>(3.4.5)</sub> |Description: It is critical to ensure that the `/etc/hosts.deny` file is protected from unauthorized write access. Although it is protected by default, the file permissions could be changed either inadvertently or through malicious actions. |Set the owner and group of /etc/hosts.deny to root and the permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
-|Ensure default deny firewall policy<br /><sub>(3.6.2)</sub> |Description: With a default accept policy, the firewall will accept any packet that is not explicitly denied. It is easier to maintain a secure firewall with a default DROP policy than it is with a default ALLOW policy. |Set the default policy for incoming, outgoing, and routed traffic to `deny` or `reject` as appropriate using your firewall software |
+|Ensure permissions on /etc/hosts.allow are configured.<br /><sub>(3.4.4)</sub> |Description: It's critical to ensure that the `/etc/hosts.allow` file is protected from unauthorized write access. Although it's protected by default, the file permissions could be changed either inadvertently or through malicious actions. |Set the owner and group of /etc/hosts.allow to root and the permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|Ensure permissions on /etc/hosts.deny are configured.<br /><sub>(3.4.5)</sub> |Description: It's critical to ensure that the `/etc/hosts.deny` file is protected from unauthorized write access. Although it's protected by default, the file permissions could be changed either inadvertently or through malicious actions. |Set the owner and group of /etc/hosts.deny to root and the permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|Ensure default deny firewall policy<br /><sub>(3.6.2)</sub> |Description: With a default accept policy, the firewall will accept any packet that is not explicitly denied. It is easier to maintain a secure firewall with a default DROP policy than it is with a default Allow policy. |Set the default policy for incoming, outgoing, and routed traffic to `deny` or `reject` as appropriate using your firewall software |
|The nodev/nosuid option should be enabled for all NFS mounts.<br /><sub>(5)</sub> |Description: An attacker could load files that run with an elevated security context or special devices via remote file system |Add the nosuid and nodev options to the fourth field (mounting options) in /etc/fstab. For more information, see the fstab(5) manual pages. | |Ensure permissions on /etc/ssh/sshd_config are configured.<br /><sub>(5.2.1)</sub> |Description: The `/etc/ssh/sshd_config` file needs to be protected from unauthorized changes by non-privileged users. |Set the owner and group of /etc/ssh/sshd_config to root and set the permissions to 0600 or run '/opt/microsoft/omsagent/plugin/omsremediate -r sshd-config-file-permissions' | |Ensure password creation requirements are configured.<br /><sub>(5.3.1)</sub> |Description: Strong passwords protect systems from being hacked through brute force methods. |Set the following key/value pairs in the appropriate PAM for your distro: minlen=14, minclass = 4, dcredit = -1, ucredit = -1, ocredit = -1, lcredit = -1, or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-password-requirements' | |Ensure lockout for failed password attempts is configured.<br /><sub>(5.3.2)</sub> |Description: Locking out user IDs after `n` unsuccessful consecutive login attempts mitigates brute force password attacks against your systems. |for Ubuntu and Debian, add the pam_tally and pam_deny modules as appropriate. For all other distros, refer to your distro's documentation |
-|Disable the installation and use of file systems that are not required (cramfs)<br /><sub>(6.1)</sub> |Description: An attacker could use a vulnerability in cramfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables cramfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
-|Disable the installation and use of file systems that are not required (freevxfs)<br /><sub>(6.2)</sub> |Description: An attacker could use a vulnerability in freevxfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables freevxfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
-|Ensure all users' home directories exist<br /><sub>(6.2.7)</sub> |Description: If the user's home directory does not exist or is unassigned, the user will be placed at the directory root. Moreover, the user will be unable to write any files or set local environment variables. |If any users' home directories do not exist, create them and make sure the respective user owns the directory. Users without an assigned home directory should be removed or assigned a home directory as appropriate. |
-|Ensure users own their home directories<br /><sub>(6.2.9)</sub> |Description: Since the user is accountable for files stored in the user home directory, the user must be the owner of the directory. |Change the ownership of any home directories that are not owned by the defined user to the correct user. |
-|Ensure users' dot files are not group or world writable.<br /><sub>(6.2.10)</sub> |Description: Group or world-writable user configuration files may enable malicious users to steal or modify other users' data or to gain another user's system privileges. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, you should implement a monitoring policy to report user file permissions and determine a remediation action. |
-|Ensure no users have .forward files<br /><sub>(6.2.11)</sub> |Description: Use of the `.forward` file poses a security risk in that sensitive data may be inadvertently transferred outside the organization. The `.forward` file also poses a risk as it can be used to execute commands that may perform unintended actions. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.forward` files and determine the action to be taken in accordance with site policy. |
-|Ensure no users have .netrc files<br /><sub>(6.2.12)</sub> |Description: The `.netrc` file presents a significant security risk since it stores passwords in unencrypted form. Even if FTP is user accounts may have brought over `.netrc` files from other systems which could pose a risk to those systems |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.netrc` files and determine the action to be taken in accordance with site policy. |
-|Ensure no users have .rhosts files<br /><sub>(6.2.14)</sub> |Description: This action is only meaningful if `.rhosts` support is permitted in the file `/etc/pam.conf` . Even though the `.rhosts` files are ineffective if support is disabled in `/etc/pam.conf` , they may have been brought over from other systems and could contain information useful to an attacker for those other systems. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.rhosts` files and determine the action to be taken in accordance with site policy. |
-|Ensure all groups in /etc/passwd exist in /etc/group<br /><sub>(6.2.15)</sub> |Description: Groups which are defined in the /etc/passwd file but not in the /etc/group file poses a threat to system security since group permissions are not properly managed. |For each group defined in /etc/passwd, ensure there is a corresponding group in /etc/group |
+|Disable the installation and use of file systems that aren't required (cramfs)<br /><sub>(6.1)</sub> |Description: An attacker could use a vulnerability in cramfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables cramfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Disable the installation and use of file systems that aren't required (freevxfs)<br /><sub>(6.2)</sub> |Description: An attacker could use a vulnerability in freevxfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables freevxfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Ensure all users' home directories exist<br /><sub>(6.2.7)</sub> |Description: If the user's home directory does not exist or is unassigned, the user will be placed in the volume root. Moreover, the user will be unable either to write any files or set environment variables. |If any users' home directories don't exist, create them and make sure the respective user owns the directory. Users without an assigned home directory should be removed or assigned a home directory as appropriate. |
+|Ensure users own their home directories<br /><sub>(6.2.9)</sub> |Description: Since the user is accountable for files stored in the user home directory, the user must be the owner of the directory. |Change the ownership of any home directories that aren't owned by the defined user to the correct user. |
+|Ensure users' dot files aren't group or world writable.<br /><sub>(6.2.10)</sub> |Description: Group or world-writable user configuration files may enable malicious users to steal or modify other users' data or to gain another user's system privileges. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, we recommended you establish a monitoring policy to report user dot file permissions and determine site policy remediation actions. |
+|Ensure no users have .forward files<br /><sub>(6.2.11)</sub> |Description: Use of the `.forward` file poses a security risk in that sensitive data may be inadvertently transferred outside the organization. The `.forward` file also poses a risk as it can be used to execute commands that may perform unintended actions. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it's recommended that a monitoring policy be established to report user `.forward` files and determine the action to be taken in accordance with site policy. |
+|Ensure no users have .netrc files<br /><sub>(6.2.12)</sub> |Description: The `.netrc` file presents a significant security risk since it stores passwords in unencrypted form. Even if FTP is disabled, user accounts may have brought over `.netrc` files from other systems that could pose a risk to those systems |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it's recommended that a monitoring policy be established to report user `.netrc` files and determine the action to be taken in accordance with site policy. |
+|Ensure no users have .rhosts files<br /><sub>(6.2.14)</sub> |Description: This action is only meaningful if `.rhosts` support is permitted in the file `/etc/pam.conf` . Even though the `.rhosts` files are ineffective if support is disabled in `/etc/pam.conf` , they may have been brought over from other systems and could contain information useful to an attacker for those other systems. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it's recommended that a monitoring policy be established to report user `.rhosts` files and determine the action to be taken in accordance with site policy. |
+|Ensure all groups in /etc/passwd exist in /etc/group<br /><sub>(6.2.15)</sub> |Description: Groups which are defined in the /etc/passwd file but not in the /etc/group file poses a threat to system security since group permissions aren't properly managed. |For each group defined in /etc/passwd, ensure there is a corresponding group in /etc/group |
|Ensure no duplicate UIDs exist<br /><sub>(6.2.16)</sub> |Description: Users must be assigned unique UIDs for accountability and to ensure appropriate access protections. |Establish unique UIDs and review all files owned by the shared UIDs to determine which UID they are supposed to belong to. | |Ensure no duplicate GIDs exist<br /><sub>(6.2.17)</sub> |Description: Groups must be assigned unique GIDs for accountability and to ensure appropriate access protections. |Establish unique GIDs and review all files owned by the shared GIDs to determine which GID they are supposed to belong to. | |Ensure no duplicate user names exist<br /><sub>(6.2.18)</sub> |Description: If a user is assigned a duplicate user name, it will create and have access to files with the first UID for that username in `/etc/passwd` . For example, if 'test4' has a UID of 1000 and a subsequent 'test4' entry has a UID of 2000, logging in as 'test4' will use UID 1000. Effectively, the UID is shared, which is a security problem. |Establish unique user names for all users. File ownerships will automatically reflect the change as long as the users have unique UIDs. | |Ensure no duplicate groups exist<br /><sub>(6.2.19)</sub> |Description: If a group is assigned a duplicate group name, it will create and have access to files with the first GID for that group in `/etc/group` . Effectively, the GID is shared, which is a security problem. |Establish unique names for all user groups. File group ownerships will automatically reflect the change as long as the groups have unique GIDs. |
-|Ensure shadow group is empty<br /><sub>(6.2.20)</sub> |Description: Any users assigned to the shadow group would be granted read access to the /etc/shadow file. If attackers can gain read access to the `/etc/shadow` file, they can easily run a password cracking program against the hashed passwords to break them. Other security information that is stored in the `/etc/shadow` file (such as expiration) could also be useful to subvert additional user accounts. |Remove all users form the shadow group |
-|Disable the installation and use of file systems that are not required (hfs)<br /><sub>(6.3)</sub> |Description: An attacker could use a vulnerability in hfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables hfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
-|Disable the installation and use of file systems that are not required (hfsplus)<br /><sub>(6.4)</sub> |Description: An attacker could use a vulnerability in hfsplus to elevate privileges |Add a file to the /etc/modprob.d directory that disables hfsplus or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
-|Disable the installation and use of file systems that are not required (jffs2)<br /><sub>(6.5)</sub> |Description: An attacker could use a vulnerability in jffs2 to elevate privileges |Add a file to the /etc/modprob.d directory that disables jffs2 or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Ensure shadow group is empty<br /><sub>(6.2.20)</sub> |Description: Any users assigned to the shadow group would be granted read access to the /etc/shadow file. If attackers can gain read access to the `/etc/shadow` file, they can easily run a password cracking program against the hashed passwords to break them. Other security information that is stored in the `/etc/shadow` file (such as expiration) could also be useful to subvert other user accounts. |Remove all users form the shadow group |
+|Disable the installation and use of file systems that aren't required (hfs)<br /><sub>(6.3)</sub> |Description: An attacker could use a vulnerability in hfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables hfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Disable the installation and use of file systems that aren't required (hfsplus)<br /><sub>(6.4)</sub> |Description: An attacker could use a vulnerability in hfsplus to elevate privileges |Add a file to the /etc/modprob.d directory that disables hfsplus or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Disable the installation and use of file systems that aren't required (jffs2)<br /><sub>(6.5)</sub> |Description: An attacker could use a vulnerability in jffs2 to elevate privileges |Add a file to the /etc/modprob.d directory that disables jffs2 or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
|Kernels should only be compiled from approved sources.<br /><sub>(10)</sub> |Description: A kernel from an unapproved source could contain vulnerabilities or backdoors to grant access to an attacker. |Install the kernel that is provided by your distro vendor. |
-|/etc/shadow file permissions should be set to 0400<br /><sub>(11.1)</sub> |Description: An attacker can retrieve or manipulate hashed passwords from /etc/shadow if it is not correctly secured. |Set the permissions and ownership of /etc/shadow* or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-shadow-perms' |
-|/etc/shadow- file permissions should be set to 0400<br /><sub>(11.2)</sub> |Description: An attacker can retrieve or manipulate hashed passwords from /etc/shadow- if it is not correctly secured. |Set the permissions and ownership of /etc/shadow* or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-shadow-perms' |
-|/etc/gshadow file permissions should be set to 0400<br /><sub>(11.3)</sub> |Description: An attacker could join security groups if this file is not properly secured |Set the permissions and ownership of /etc/gshadow- or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-gshadow-perms' |
-|/etc/gshadow- file permissions should be set to 0400<br /><sub>(11.4)</sub> |Description: An attacker could join security groups if this file is not properly secured |Set the permissions and ownership of /etc/gshadow or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-gshadow-perms' |
+|/etc/shadow file permissions should be set to 0400<br /><sub>(11.1)</sub> |Description: An attacker can retrieve or manipulate hashed passwords from /etc/shadow if it's not correctly secured. |Set the permissions and ownership of /etc/shadow* or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-shadow-perms' |
+|/etc/shadow- file permissions should be set to 0400<br /><sub>(11.2)</sub> |Description: An attacker can retrieve or manipulate hashed passwords from /etc/shadow- if it's not correctly secured. |Set the permissions and ownership of /etc/shadow* or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-shadow-perms' |
+|/etc/gshadow file permissions should be set to 0400<br /><sub>(11.3)</sub> |Description: An attacker could join security groups if this file isn't properly secured |Set the permissions and ownership of /etc/gshadow- or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-gshadow-perms' |
+|/etc/gshadow- file permissions should be set to 0400<br /><sub>(11.4)</sub> |Description: An attacker could join security groups if this file isn't properly secured |Set the permissions and ownership of /etc/gshadow or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-gshadow-perms' |
|/etc/passwd file permissions should be 0644<br /><sub>(12.1)</sub> |Description: An attacker could modify userIDs and login shells |Set the permissions and ownership of /etc/passwd or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-passwd-perms' | |/etc/group file permissions should be 0644<br /><sub>(12.2)</sub> |Description: An attacker could elevate privileges by modifying group membership |Set the permissions and ownership of /etc/group or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-group-perms |
-|/etc/passwd- file permissions should be set to 0600<br /><sub>(12.3)</sub> |Description: An attacker could join security groups if this file is not properly secured |Set the permissions and ownership of /etc/passwd- or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-passwd-perms |
+|/etc/passwd- file permissions should be set to 0600<br /><sub>(12.3)</sub> |Description: An attacker could join security groups if this file isn't properly secured |Set the permissions and ownership of /etc/passwd- or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-passwd-perms |
|/etc/group- file permissions should be 0644<br /><sub>(12.4)</sub> |Description: An attacker could elevate privileges by modifying group membership |Set the permissions and ownership of /etc/group- or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-group-perms |
-|Access to the root account via su should be restricted to the 'root' group<br /><sub>(21)</sub> |Description: An attacker could escalate permissions by password guessing if su is not restricted to users in the root group. |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r fix-su-permissions'. This will add the line 'auth required pam_wheel.so use_uid' to the file '/etc/pam.d/su' |
+|Access to the root account via su should be restricted to the 'root' group<br /><sub>(21)</sub> |Description: An attacker could escalate permissions by password guessing if su is not restricted to users in the root group. |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r fix-su-permissions'. This control adds the line 'auth required pam_wheel.so use_uid' to the file '/etc/pam.d/su' |
|The 'root' group should exist, and contain all members who can su to root<br /><sub>(22)</sub> |Description: An attacker could escalate permissions by password guessing if su is not restricted to users in the root group. |Create the root group via the command 'groupadd -g 0 root' | |All accounts should have a password<br /><sub>(23.2)</sub> |Description: An attacker can log in to accounts with no password and execute arbitrary commands. |Use the passwd command to set passwords for all accounts | |Accounts other than root must have unique UIDs greater than zero (0)<br /><sub>(24)</sub> |Description: If an account other than root has uid zero, an attacker could compromise the account and gain root privileges. |Assign unique, non-zero uids to all non-root accounts using 'usermod -u' | |Randomized placement of virtual memory regions should be enabled<br /><sub>(25)</sub> |Description: An attacker could write executable code to known regions in memory resulting in elevation of privilege |Add the value '1' or '2' to the file '/proc/sys/kernel/randomize_va_space' | |Kernel support for the XD/NX processor feature should be enabled<br /><sub>(26)</sub> |Description: An attacker could cause a system to execute code from data regions in memory, resulting in elevation of privilege. |Confirm the file '/proc/cpuinfo' contains the flag 'nx' |
-|The '.' should not appear in root's $PATH<br /><sub>(27.1)</sub> |Description: An attacker could elevate privileges by placing a malicious file in root's $PATH |Modify the 'export PATH=' line in /root/.profile |
+|The '.' shouldn't appear in root's $PATH<br /><sub>(27.1)</sub> |Description: An attacker could elevate privileges by placing a malicious file in root's $PATH |Modify the 'export PATH=' line in /root/.profile |
|User home directories should be mode 750 or more restrictive<br /><sub>(28)</sub> |Description: An attacker could retrieve sensitive information from the home folders of other users. |Set home folder permissions to 750 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-home-dir-permissions | |The default umask for all users should be set to 077 in login.defs<br /><sub>(29)</sub> |Description: An attacker could retrieve sensitive information from files owned by other users. |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r set-default-user-umask'. This will add the line 'UMASK 077' to the file '/etc/login.defs' | |All bootloaders should have password protection enabled.<br /><sub>(31)</sub> |Description: An attacker with physical access could modify bootloader options, yielding unrestricted system access |Add a boot loader password to the file '/boot/grub/grub.cfg' |
For more information, see [Azure Policy guest configuration](../../machine-confi
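The filesystem and permission controls above (6.3-6.5 and 11.1-11.4) can also be remediated by hand. The following is a minimal sketch of that manual remediation, assuming a root shell; the `.conf` file name is illustrative, and any file name under `/etc/modprobe.d/` works.

```bash
# Illustrative file name; any file ending in .conf under /etc/modprobe.d/ works.
cat <<'EOF' > /etc/modprobe.d/unneeded-filesystems.conf
install hfs /bin/true
install hfsplus /bin/true
install jffs2 /bin/true
EOF

# Tighten ownership and permissions on the shadow and gshadow files (controls 11.1-11.4).
chown root:root /etc/shadow /etc/shadow- /etc/gshadow /etc/gshadow-
chmod 0400 /etc/shadow /etc/shadow- /etc/gshadow /etc/gshadow-
```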
|Performing source validation by reverse path should be enabled for all interfaces. (net.ipv4.conf.all.rp_filter = 1)<br /><sub>(46.1)</sub> |Description: The system will accept traffic from addresses that are unroutable. |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-rp-filter' | |Performing source validation by reverse path should be enabled for all interfaces. (net.ipv4.conf.default.rp_filter = 1)<br /><sub>(46.2)</sub> |Description: The system will accept traffic from addresses that are unroutable. |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-rp-filter' | |TCP SYN cookies should be enabled. (net.ipv4.tcp_syncookies = 1)<br /><sub>(47)</sub> |Description: An attacker could perform a DoS over TCP |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-tcp-syncookies' |
-|The system should not act as a network sniffer.<br /><sub>(48)</sub> |Description: An attacker may use promiscuous interfaces to sniff network traffic |Promiscuous mode is enabled via a 'promisc' entry in '/etc/network/interfaces' or '/etc/rc.local.' Check both files and remove this entry. |
+|The system shouldn't act as a network sniffer.<br /><sub>(48)</sub> |Description: An attacker may use promiscuous interfaces to sniff network traffic |Promiscuous mode is enabled via a 'promisc' entry in '/etc/network/interfaces' or '/etc/rc.local.' Check both files and remove this entry. |
|All wireless interfaces should be disabled.<br /><sub>(49)</sub> |Description: An attacker could create a fake AP to intercept transmissions. |Confirm all wireless interfaces are disabled in '/etc/network/interfaces' |
-|The IPv6 protocol should be enabled.<br /><sub>(50)</sub> |Description: IPv6 is necessary for communication on many modern networks. |Open /etc/sysctl.conf and confirm that 'net.ipv6.conf.all.disable_ipv6' and 'net.ipv6.conf.default.disable_ipv6' are set to 0 |
-|Ensure DCCP is disabled<br /><sub>(54)</sub> |Description: If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install dccp /bin/true` then unload the dccp module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
-|Ensure SCTP is disabled<br /><sub>(55)</sub> |Description: If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install sctp /bin/true` then unload the sctp module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|The IPv6 protocol should be enabled.<br /><sub>(50)</sub> |Description: This is necessary for communication on modern networks. |Open /etc/sysctl.conf and confirm that 'net.ipv6.conf.all.disable_ipv6' and 'net.ipv6.conf.default.disable_ipv6' are set to 0 |
+|Ensure DCCP is disabled<br /><sub>(54)</sub> |Description: If the protocol is not required, it's recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install dccp /bin/true` then unload the dccp module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Ensure SCTP is disabled<br /><sub>(55)</sub> |Description: If the protocol is not required, it's recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install sctp /bin/true` then unload the sctp module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
|Disable support for RDS.<br /><sub>(56)</sub> |Description: An attacker could use a vulnerability in RDS to compromise the system |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install rds /bin/true` then unload the rds module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
-|Ensure TIPC is disabled<br /><sub>(57)</sub> |Description: If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install tipc /bin/true` then unload the tipc module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Ensure TIPC is disabled<br /><sub>(57)</sub> |Description: If the protocol is not required, it's recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install tipc /bin/true` then unload the tipc module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
|Ensure logging is configured<br /><sub>(60)</sub> |Description: A great deal of important security-related information is sent via `rsyslog` (for example, successful and failed su attempts, failed login attempts, root login attempts, etc.). |Configure syslog, rsyslog or syslog-ng as appropriate | |The syslog, rsyslog, or syslog-ng package should be installed.<br /><sub>(61)</sub> |Description: Reliability and security issues will not be logged, preventing proper diagnosis. |Install the rsyslog package, or run '/opt/microsoft/omsagent/plugin/omsremediate -r install-rsyslog' | |The systemd-journald service should be configured to persist log messages<br /><sub>(61.1)</sub> |Description: Reliability and security issues will not be logged, preventing proper diagnosis. |Create /var/log/journal and ensure that Storage in journald.conf is auto or persistent |
-|Ensure a logging service is enabled<br /><sub>(62)</sub> |Description: It is imperative to have the ability to log events on a node. |Enable the rsyslog package or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-rsyslog' |
+|Ensure a logging service is enabled<br /><sub>(62)</sub> |Description: It's imperative to have the ability to log events on a node. |Enable the rsyslog package or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-rsyslog' |
|File permissions for all rsyslog log files should be set to 640 or 600.<br /><sub>(63)</sub> |Description: An attacker could hide activity by manipulating logs |Add the line '$FileCreateMode 0640' to the file '/etc/rsyslog.conf' |
-|Ensure logger configuration files are restricted.<br /><sub>(63.1)</sub> |Description: It is important to ensure that log files exist and have the correct permissions to ensure that sensitive syslog data is archived and protected. |Set your logger's configuration files to 0640 or run '/opt/microsoft/omsagent/plugin/omsremediate -r logger-config-file-permissions' |
+|Ensure logger configuration files are restricted.<br /><sub>(63.1)</sub> |Description: It's important to ensure that log files exist and have the correct permissions to ensure that sensitive syslog data is archived and protected. |Set your logger's configuration files to 0640 or run '/opt/microsoft/omsagent/plugin/omsremediate -r logger-config-file-permissions' |
|All rsyslog log files should be owned by the adm group.<br /><sub>(64)</sub> |Description: An attacker could hide activity by manipulating logs |Add the line '$FileGroup adm' to the file '/etc/rsyslog.conf' | |All rsyslog log files should be owned by the syslog user.<br /><sub>(65)</sub> |Description: An attacker could hide activity by manipulating logs |Add the line '$FileOwner syslog' to the file '/etc/rsyslog.conf' or run '/opt/microsoft/omsagent/plugin/omsremediate -r syslog-owner |
-|Rsyslog should not accept remote messages.<br /><sub>(67)</sub> |Description: An attacker could inject messages into syslog, causing a DoS or a distraction from other activity |Remove the lines '$ModLoad imudp' and '$ModLoad imtcp' from the file '/etc/rsyslog.conf' |
+|Rsyslog shouldn't accept remote messages.<br /><sub>(67)</sub> |Description: An attacker could inject messages into syslog, causing a DoS or a distraction from other activity |Remove the lines '$ModLoad imudp' and '$ModLoad imtcp' from the file '/etc/rsyslog.conf' |
|The logrotate (syslog rotater) service should be enabled.<br /><sub>(68)</sub> |Description: Logfiles could grow unbounded and consume all disk space |Install the logrotate package and confirm the logrotate cron entry is active (chmod 755 /etc/cron.daily/logrotate; chown root:root /etc/cron.daily/logrotate) | |The rlogin service should be disabled.<br /><sub>(69)</sub> |Description: An attacker could gain access, bypassing strict authentication requirements |Remove the inetd service. | |Disable inetd unless required. (inetd)<br /><sub>(70.1)</sub> |Description: An attacker could exploit a vulnerability in an inetd service to gain access |Uninstall the inetd service (apt-get remove inetd) |
For more information, see [Azure Policy guest configuration](../../machine-confi
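For the network and logging rows above, the settings can also be applied directly. Here's a minimal sketch, assuming a systemd-based distribution; the sysctl drop-in file name is illustrative.

```bash
# Illustrative drop-in file name; persists the reverse-path-filter and SYN-cookie settings.
cat <<'EOF' > /etc/sysctl.d/60-network-hardening.conf
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
EOF
sysctl --system    # reload all sysctl configuration files

# Restrict permissions on new rsyslog log files (control 63), then restart rsyslog.
echo '$FileCreateMode 0640' >> /etc/rsyslog.conf
systemctl restart rsyslog
```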
|Ensure permissions on /etc/cron.hourly are configured.<br /><sub>(95)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight into how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.hourly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' | |Ensure permissions on /etc/cron.monthly are configured.<br /><sub>(96)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight into how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.monthly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' | |Ensure permissions on /etc/cron.weekly are configured.<br /><sub>(97)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight into how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.weekly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' |
-|Ensure at/cron is restricted to authorized users<br /><sub>(98)</sub> |Description: On many systems, only the system administrator is authorized to schedule `cron` jobs. Using the `cron.allow` file to control who can run `cron` jobs enforces this policy. It is easier to manage an allowlist than a denylist. In a denylist, you could potentially add a user ID to the system and forget to add it to the deny files. |Replace /etc/cron.deny and /etc/at.deny with their respective `allow` files or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-job-allow' |
+|Ensure at/cron is restricted to authorized users<br /><sub>(98)</sub> |Description: On many systems, only the system administrator is authorized to schedule `cron` jobs. Using the `cron.allow` file to control who can run `cron` jobs enforces this policy. It's easier to manage an allowlist than a denylist. In a denylist, you could potentially add a user ID to the system and forget to add it to the deny files. |Replace /etc/cron.deny and /etc/at.deny with their respective `allow` files or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-job-allow' |
|SSH must be configured and managed to meet best practices. - '/etc/ssh/sshd_config Protocol = 2'<br /><sub>(106.1)</sub> |Description: An attacker could use flaws in an earlier version of the SSH protocol to gain access |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r configure-ssh-protocol'. This will set 'Protocol 2' in the file '/etc/ssh/sshd_config' | |SSH must be configured and managed to meet best practices. - '/etc/ssh/sshd_config IgnoreRhosts = yes'<br /><sub>(106.3)</sub> |Description: An attacker could use flaws in the Rhosts protocol to gain access |Run the command '/usr/local/bin/azsecd remediate (/opt/microsoft/omsagent/plugin/omsremediate) -r enable-ssh-ignore-rhosts'. This will add the line 'IgnoreRhosts yes' to the file '/etc/ssh/sshd_config' |
-|Ensure SSH LogLevel is set to INFO<br /><sub>(106.5)</sub> |Description: SSH provides several logging levels with varying amounts of verbosity. `DEBUG `is specifically _not_ recommended other than strictly for debugging SSH communications since it provides so much data that it is difficult to identify important security information. `INFO `level is the basic level that only records login activity of SSH users. In many situations, such as Incident Response, it is important to determine when a particular user was active on a system. The logout record can eliminate those users who disconnected, which helps narrow the field. |Edit the `/etc/ssh/sshd_config` file to set the parameter as follows: ``` LogLevel INFO ``` |
+|Ensure SSH LogLevel is set to INFO<br /><sub>(106.5)</sub> |Description: SSH provides several logging levels with varying amounts of verbosity. `DEBUG` is specifically _not_ recommended other than strictly for debugging SSH communications since it provides so much data that it's difficult to identify important security information. `INFO` level is the basic level that only records login activity of SSH users. In many situations, such as Incident Response, it's important to determine when a particular user was active on a system. The logout record can eliminate those users who disconnected, which helps narrow the field. |Edit the `/etc/ssh/sshd_config` file to set the parameter as follows: ``` LogLevel INFO ``` |
|Ensure SSH MaxAuthTries is set to 6 or less<br /><sub>(106.7)</sub> |Description: Setting the `MaxAuthTries` parameter to a low number will minimize the risk of successful brute force attacks against the SSH server. While the recommended setting is 4, set the number based on site policy. |Ensure SSH MaxAuthTries is set to 6 or less. Edit the `/etc/ssh/sshd_config` file to set the parameter as follows: ``` MaxAuthTries 6 ``` | |Ensure SSH access is limited<br /><sub>(106.11)</sub> |Description: Restricting which users can remotely access the system via SSH will help ensure that only authorized users access the system. |Ensure SSH access is limited. Edit the `/etc/ssh/sshd_config` file to set one or more of the parameters as follows: ``` AllowUsers AllowGroups DenyUsers DenyGroups ``` | |Emulation of the rsh command through the ssh server should be disabled. - '/etc/ssh/sshd_config RhostsRSAAuthentication = no'<br /><sub>(107)</sub> |Description: An attacker could use flaws in the RHosts protocol to gain access |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r disable-ssh-rhost-rsa-auth'. This will add the line 'RhostsRSAAuthentication no' to the file '/etc/ssh/sshd_config' |
For more information, see [Azure Policy guest configuration](../../machine-confi
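As a rough illustration of the cron and at controls above (95-98), the sketch below tightens the cron directories and switches from deny files to allow files. The exact set of `/etc/cron.*` directories varies by distribution, so treat this as a sketch rather than a complete remediation.

```bash
# Restrict the periodic cron directories to root (controls 95-97); this directory list
# is typical but may differ on your distribution.
for d in /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly; do
    chown root:root "$d"
    chmod 0700 "$d"
done

# Prefer allowlists over denylists for cron and at (control 98).
rm -f /etc/cron.deny /etc/at.deny
touch /etc/cron.allow /etc/at.allow
chown root:root /etc/cron.allow /etc/at.allow
chmod 0600 /etc/cron.allow /etc/at.allow
```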
|Root login via SSH should be disabled. - '/etc/ssh/sshd_config PermitRootLogin = no'<br /><sub>(109)</sub> |Description: An attacker could brute force the root password, or hide their command history by logging in directly as root |Run the command '/usr/local/bin/azsecd remediate -r disable-ssh-root-login'. This will add the line 'PermitRootLogin no' to the file '/etc/ssh/sshd_config' | |Remote connections from accounts with empty passwords should be disabled. - '/etc/ssh/sshd_config PermitEmptyPasswords = no'<br /><sub>(110)</sub> |Description: An attacker could gain access through password guessing |Run the command '/usr/local/bin/azsecd remediate (/opt/microsoft/omsagent/plugin/omsremediate) -r disable-ssh-empty-passwords'. This will add the line 'PermitEmptyPasswords no' to the file '/etc/ssh/sshd_config' | |Ensure SSH Idle Timeout Interval is configured.<br /><sub>(110.1)</sub> |Description: Having no timeout value associated with a connection could allow an unauthorized user access to another user's ssh session. Setting a timeout value at least reduces the risk of this happening. While the recommended setting is 300 seconds (5 minutes), set this timeout value based on site policy. The recommended setting for `ClientAliveCountMax` is 0. In this case, the client session will be terminated after 5 minutes of idle time and no keepalive messages will be sent. |Edit the /etc/ssh/sshd_config file to set the parameters according to the policy |
-|Ensure SSH LoginGraceTime is set to one minute or less.<br /><sub>(110.2)</sub> |Description: Setting the `LoginGraceTime` parameter to a low number will minimize the risk of successful brute force attacks to the SSH server. This setting also limits the number of concurrent unauthenticated connections. While the recommended setting is 60 seconds, you should set the number based on your site policy. |Edit the /etc/ssh/sshd_config file to set the parameters according to the policy or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-login-grace-time' |
+|Ensure SSH LoginGraceTime is set to one minute or less.<br /><sub>(110.2)</sub> |Description: Setting the `LoginGraceTime` parameter to a low number will minimize the risk of successful brute force attacks against the SSH server. It will also limit the number of concurrent unauthenticated connections. While the recommended setting is 60 seconds (1 minute), set the number based on site policy. |Edit the /etc/ssh/sshd_config file to set the parameters according to the policy or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-login-grace-time' |
|Ensure only approved MAC algorithms are used<br /><sub>(110.3)</sub> |Description: MD5 and 96-bit MAC algorithms are considered weak and have been shown to increase exploitability in SSH downgrade attacks. Weak algorithms continue to have a great deal of attention as a weak spot that can be exploited with expanded computing power. An attacker that breaks the algorithm could take advantage of a MiTM position to decrypt the SSH tunnel and capture credentials and information |Edit the /etc/sshd_config file and add/modify the MACs line to contain a comma separated list of the approved MACs or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-macs' |
-|Ensure remote login warning banner is configured properly.<br /><sub>(111)</sub> |Description: Warning messages inform users signing into the system of their legal status. The system must include the name of the owning organization as well as any active monitoring policies. Displaying OS and patch level information in login banners also has the side effect of providing detailed system information to attackers attempting to target specific exploits of a system. Authorized users can easily get this information by running the `uname -a`command once they have logged in. |Remove any instances of \m \r \s and \v from the /etc/issue.net file |
+|Ensure remote login warning banner is configured properly.<br /><sub>(111)</sub> |Description: Warning messages inform users who are attempting to log in to the system of their legal status regarding the system and must include the name of the organization that owns the system and any monitoring policies that are in place. Displaying OS and patch level information in login banners also has the side effect of providing detailed system information to attackers attempting to target specific exploits of a system. Authorized users can easily get this information by running the `uname -a` command once they have logged in. |Remove any instances of \m \r \s and \v from the /etc/issue.net file |
|Ensure local login warning banner is configured properly.<br /><sub>(111.1)</sub> |Description: Warning messages inform users who are attempting to log in to the system of their legal status regarding the system and must include the name of the organization that owns the system and any monitoring policies that are in place. Displaying OS and patch level information in login banners also has the side effect of providing detailed system information to attackers attempting to target specific exploits of a system. Authorized users can easily get this information by running the `uname -a` command once they have logged in. |Remove any instances of \m \r \s and \v from the /etc/issue file | |SSH warning banner should be enabled. - '/etc/ssh/sshd_config Banner = /etc/issue.net'<br /><sub>(111.2)</sub> |Description: Users will not be warned that their actions on the system are monitored |Run the command '/usr/local/bin/azsecd remediate -r configure-ssh-banner'. This will add the line 'Banner /etc/azsec/banner.txt' to the file '/etc/ssh/sshd_config' |
-|Users are not allowed to set environment options for SSH.<br /><sub>(112)</sub> |Description: An attacker may be able to bypass some access restrictions over SSH |Remove the line 'PermitUserEnvironment yes' from the file '/etc/ssh/sshd_config' |
+|Users aren't allowed to set environment options for SSH.<br /><sub>(112)</sub> |Description: An attacker may be able to bypass some access restrictions over SSH |Remove the line 'PermitUserEnvironment yes' from the file '/etc/ssh/sshd_config' |
|Appropriate ciphers should be used for SSH. (Ciphers aes128-ctr,aes192-ctr,aes256-ctr)<br /><sub>(113)</sub> |Description: An attacker could compromise a weakly secured SSH connection |Run the command '/usr/local/bin/azsecd remediate -r configure-ssh-ciphers'. This will add the line 'Ciphers aes128-ctr,aes192-ctr,aes256-ctr' to the file '/etc/ssh/sshd_config' | |The avahi-daemon service should be disabled.<br /><sub>(114)</sub> |Description: An attacker could use a vulnerability in the avahi daemon to gain access |Disable the avahi-daemon service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-avahi-daemon' | |The cups service should be disabled.<br /><sub>(115)</sub> |Description: An attacker could use a flaw in the cups service to elevate privileges |Disable the cups service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-cups' |
For more information, see [Azure Policy guest configuration](../../machine-confi
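The SSH rows above all map to keywords in `/etc/ssh/sshd_config`. The fragment below is a minimal sketch of those settings; the banner path is an assumption (the omsremediate tool uses its own banner file), and you should merge the keywords into the existing file rather than appending duplicates.

```bash
# Sketch only: merge these keywords into /etc/ssh/sshd_config instead of appending
# blindly if they already exist. The Banner path is an illustrative choice.
cat <<'EOF' >> /etc/ssh/sshd_config
PermitRootLogin no
PermitEmptyPasswords no
ClientAliveInterval 300
ClientAliveCountMax 0
LoginGraceTime 60
PermitUserEnvironment no
Ciphers aes128-ctr,aes192-ctr,aes256-ctr
Banner /etc/issue.net
EOF

# Validate before restarting; the service may be named 'ssh' on Debian/Ubuntu.
sshd -t && systemctl restart sshd
```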
|Ensure no legacy `+` entries exist in /etc/group<br /><sub>(156.3)</sub> |Description: An attacker could gain access by using the username '+' with no password |Remove any entries in /etc/group that begin with '+:' | |Ensure password expiration is 365 days or less.<br /><sub>(157.1)</sub> |Description: Reducing the maximum age of a password also reduces an attacker's window of opportunity to leverage compromised credentials or successfully compromise credentials via an online brute force attack. |Set the `PASS_MAX_DAYS` parameter to no more than 365 in `/etc/login.defs` or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-password-policy-max-days' | |Ensure password expiration warning days is 7 or more.<br /><sub>(157.2)</sub> |Description: Providing an advance warning that a password will be expiring gives users time to think of a secure password. Users caught unaware may choose a simple password or write it down where it may be discovered. |Set the `PASS_WARN_AGE` parameter to 7 in `/etc/login.defs` or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-password-policy-warn-age' |
-|Ensure password reuse is limited.<br /><sub>(157.5)</sub> |Description: Forcing users not to reuse their past 5 passwords makes it less likely that an attacker will be able to guess the password. |Ensure the 'remember' option is set to at least 5 in either /etc/pam.d/common-password or both /etc/pam.d/password_auth and /etc/pam.d/system_auth or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-password-policy-history' |
+|Ensure password reuse is limited.<br /><sub>(157.5)</sub> |Description: Forcing users not to reuse their past five passwords makes it less likely that an attacker will be able to guess the password. |Ensure the 'remember' option is set to at least 5 in either /etc/pam.d/common-password or both /etc/pam.d/password_auth and /etc/pam.d/system_auth or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-password-policy-history' |
|Ensure password hashing algorithm is SHA-512<br /><sub>(157.11)</sub> |Description: The SHA-512 algorithm provides much stronger hashing than MD5, thus providing additional protection to the system by increasing the level of effort for an attacker to successfully determine passwords. Note: These changes only apply to accounts configured on the local system. |Set password hashing algorithm to sha512. Many distributions provide tools for updating PAM configuration; consult your documentation for details. If no tooling is provided, edit the appropriate `/etc/pam.d/` configuration file and add or modify the `pam_unix.so` lines to include the sha512 option: ``` password sufficient pam_unix.so sha512 ``` | |Ensure minimum days between password changes is 7 or more.<br /><sub>(157.12)</sub> |Description: By restricting the frequency of password changes, an administrator can prevent users from repeatedly changing their password in an attempt to circumvent password reuse controls. |Set the `PASS_MIN_DAYS` parameter to 7 in `/etc/login.defs`: `PASS_MIN_DAYS 7`. Modify user parameters for all users with a password set to match: `chage --mindays 7` or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-pass-min-days' | |Ensure all users' last password change date is in the past<br /><sub>(157.14)</sub> |Description: If a user's recorded password change date is in the future, then they could bypass any set password expiration. |Ensure inactive password lock is 30 days or less. Run the following command to set the default password inactivity period to 30 days: ``` # useradd -D -f 30 ``` Modify user parameters for all users with a password set to match: ``` # chage --inactive 30 ``` |
-|Ensure system accounts are non-login<br /><sub>(157.15)</sub> |Description: It is important to make sure that accounts that are not being used by regular users are prevented from being used to provide an interactive shell. By default, Ubuntu sets the password field for these accounts to an invalid string, but it is also recommended that the shell field in the password file be set to `/usr/sbin/nologin`. This prevents the account from potentially being used to run any commands. |Set the shell for any accounts returned by the audit script to `/sbin/nologin` |
+|Ensure system accounts are non-login<br /><sub>(157.15)</sub> |Description: It's important to make sure that accounts that aren't being used by regular users are prevented from being used to provide an interactive shell. By default, Ubuntu sets the password field for these accounts to an invalid string, but it's also recommended that the shell field in the password file be set to `/usr/sbin/nologin`. This prevents the account from potentially being used to run any commands. |Set the shell for any accounts returned by the audit script to `/sbin/nologin` |
|Ensure default group for the root account is GID 0<br /><sub>(157.16)</sub> |Description: Using GID 0 for the `root` account helps prevent `root`-owned files from accidentally becoming accessible to non-privileged users. |Run the following command to set the `root` user default group to GID `0`: ``` # usermod -g 0 root ``` | |Ensure root is the only UID 0 account<br /><sub>(157.18)</sub> |Description: This access must be limited to only the default `root` account and only from the system console. Administrative access must be through an unprivileged account using an approved mechanism. |Remove any users other than `root` with UID `0` or assign them a new UID if appropriate. | |Remove unnecessary accounts<br /><sub>(159)</sub> |Description: For compliance |Remove the unnecessary accounts | |Ensure auditd service is enabled<br /><sub>(162)</sub> |Description: The capturing of system events provides system administrators with information to allow them to determine if unauthorized access to their system is occurring. |Install audit package (systemctl enable auditd) | |Run AuditD service<br /><sub>(163)</sub> |Description: The capturing of system events provides system administrators with information to allow them to determine if unauthorized access to their system is occurring. |Run AuditD service (systemctl start auditd) |
-|Ensure SNMP Server is not enabled<br /><sub>(179)</sub> |Description: The SNMP server can communicate using SNMP v1, which transmits data in the clear and does not require authentication to execute commands. Unless absolutely necessary, it is recommended that the SNMP service not be used. If SNMP is required the server should be configured to disallow SNMP v1. |Run one of the following commands to disable `snmpd`: ``` # chkconfig snmpd off ``` ``` # systemctl disable snmpd ``` ``` # update-rc.d snmpd disable ``` |
+|Ensure SNMP Server is not enabled<br /><sub>(179)</sub> |Description: The SNMP server can communicate using SNMP v1, which transmits data in the clear and does not require authentication to execute commands. Unless absolutely necessary, it's recommended that the SNMP service not be used. If SNMP is required the server should be configured to disallow SNMP v1. |Run one of the following commands to disable `snmpd`: ``` # chkconfig snmpd off ``` ``` # systemctl disable snmpd ``` ``` # update-rc.d snmpd disable ``` |
|Ensure rsync service is not enabled<br /><sub>(181)</sub> |Description: The `rsyncd` service presents a security risk as it uses unencrypted protocols for communication. |Run one of the following commands to disable `rsyncd` : `chkconfig rsyncd off`, `systemctl disable rsyncd`, `update-rc.d rsyncd disable` or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rsysnc' |
-|Ensure NIS server is not enabled<br /><sub>(182)</sub> |Description: The NIS service is an inherently insecure system that has been vulnerable to DOS attacks, buffer overflows and has poor authentication for querying NIS maps. NIS is generally replaced by protocols like Lightweight Directory Access Protocol (LDAP). It is recommended that the service be disabled and more secure services be used |Run one of the following commands to disable `ypserv` : ``` # chkconfig ypserv off ``` ``` # systemctl disable ypserv ``` ``` # update-rc.d ypserv disable ``` |
-|Ensure rsh client is not installed<br /><sub>(183)</sub> |Description: These legacy clients contain numerous security exposures and have been replaced with the more secure SSH package. Even if the server is removed, it is best to ensure the clients are also removed to prevent users from inadvertently attempting to use these commands and therefore exposing their credentials. Note that removing the `rsh `package removes the clients for `rsh`, `rcp `and `rlogin`. |Uninstall `rsh` using the appropriate package manager or manual installation: ``` yum remove rsh ``` ``` apt-get remove rsh ``` ``` zypper remove rsh ``` |
-|Disable SMB V1 with Samba<br /><sub>(185)</sub> |Description: SMB v1 has well-known, serious vulnerabilities and does not encrypt data in transit. If it must be used for business reasons, it is strongly recommended that additional steps be taken to mitigate the risks inherent to this protocol. |If Samba is not running, remove package, otherwise there should be a line in the [global] section of /etc/samba/smb.conf: min protocol = SMB2 or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-smb-min-version |
+|Ensure NIS server is not enabled<br /><sub>(182)</sub> |Description: The NIS service is an inherently insecure system that has been vulnerable to DOS attacks, buffer overflows and has poor authentication for querying NIS maps. NIS is generally replaced by protocols like Lightweight Directory Access Protocol (LDAP). It's recommended that the service be disabled and more secure services be used |Run one of the following commands to disable `ypserv` : ``` # chkconfig ypserv off ``` ``` # systemctl disable ypserv ``` ``` # update-rc.d ypserv disable ``` |
+|Ensure rsh client is not installed<br /><sub>(183)</sub> |Description: These legacy clients contain numerous security exposures and have been replaced with the more secure SSH package. Even if the server is removed, it's best to ensure the clients are also removed to prevent users from inadvertently attempting to use these commands and therefore exposing their credentials. Note that removing the `rsh `package removes the clients for `rsh`, `rcp `and `rlogin`. |Uninstall `rsh` using the appropriate package manager or manual installation: ``` yum remove rsh ``` ``` apt-get remove rsh ``` ``` zypper remove rsh ``` |
+|Disable SMB V1 with Samba<br /><sub>(185)</sub> |Description: SMB v1 has well-known, serious vulnerabilities and does not encrypt data in transit. If it must be used for business reasons, it's strongly recommended that additional steps be taken to mitigate the risks inherent to this protocol. |If Samba is not running, remove the package; otherwise, ensure there's a line in the [global] section of /etc/samba/smb.conf: min protocol = SMB2, or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-smb-min-version' |
> [!NOTE] > Availability of specific Azure Policy guest configuration settings may vary in Azure Government
For more information, see [Azure Policy guest configuration](../../machine-confi
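To give a feel for how a few of the account and service rows above translate into commands, here's a hedged sketch for a systemd-based distribution; the service unit names (particularly for rsync) differ between distributions, and the values should follow your site policy.

```bash
# Password aging policy in /etc/login.defs (controls 157.1, 157.2, 157.12).
sed -i 's/^PASS_MAX_DAYS.*/PASS_MAX_DAYS 365/' /etc/login.defs
sed -i 's/^PASS_MIN_DAYS.*/PASS_MIN_DAYS 7/'   /etc/login.defs
sed -i 's/^PASS_WARN_AGE.*/PASS_WARN_AGE 7/'   /etc/login.defs

# Disable services flagged above if they aren't needed; unit names vary by distribution.
systemctl disable --now snmpd rsync ypserv 2>/dev/null || true
```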
Additional articles about Azure Policy and guest configuration: -- [Azure Policy guest configuration](../../machine-configuration/overview.md).
+- [Azure Policy guest configuration](../concepts/guest-configuration.md).
- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview. - Review other examples at [Azure Policy samples](./index.md). - Review [Understanding policy effects](../concepts/effects.md).
hdinsight Hdinsight Hadoop Optimize Hive Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-optimize-hive-query.md
description: This article describes how to optimize your Apache Hive queries in
Previously updated : 04/29/2022 Last updated : 09/21/2022 # Optimize Apache Hive queries in Azure HDInsight
Choose the appropriate cluster type to help optimize performance for your worklo
* Choose **Interactive Query** cluster type to optimize for `ad hoc`, interactive queries. * Choose Apache **Hadoop** cluster type to optimize for Hive queries used as a batch process.
-* **Spark** and **HBase** cluster types can also run Hive queries, and might be appropriate if you are running those workloads.
+* **Spark** and **HBase** cluster types can also run Hive queries, and might be appropriate if you're running those workloads.
For more information on running Hive queries on various HDInsight cluster types, see [What is Apache Hive and HiveQL on Azure HDInsight?](hadoop/hdinsight-use-hive.md).
For more information about scaling HDInsight, see [Scale HDInsight clusters](hdi
[Apache Tez](https://tez.apache.org/) is an alternative execution engine to the MapReduce engine. Linux-based HDInsight clusters have Tez enabled by default. Tez is faster because:
Some partitioning considerations:
* **Don't under partition** - Partitioning on columns with only a few values results in only a few partitions. For example, partitioning on gender creates only two partitions (male and female), so it reduces latency by at most half. * **Don't over partition** - On the other extreme, creating a partition on a column with a unique value (for example, userid) creates many partitions. Over partitioning puts stress on the cluster namenode because it has to handle the large number of directories.
-* **Avoid data skew** - Choose your partitioning key wisely so that all partitions are even size. For example, partitioning on *State* column may skew the distribution of data. Since the state of California has a population almost 30x that of Vermont, the partition size is potentially skewed and performance may vary tremendously.
+* **Avoid data skew** - Choose your partitioning key wisely so that all partitions are even size. For example, partitioning on *State* column may skew the distribution of data. Since the state of California has a population almost 30x that of Vermont, the partition size is potentially skewed, and performance may vary tremendously.
To create a partitioned table, use the *Partitioned By* clause:
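For example, a partitioned table might look like the following; the table, column, and connection details are illustrative, and on HDInsight you can submit the statement through Beeline from the cluster head node.

```bash
# Hypothetical table and column names; the JDBC connection string varies by cluster type.
beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -e "
CREATE TABLE lineitem_part (l_orderkey INT, l_quantity DOUBLE)
PARTITIONED BY (l_shipdate STRING)
STORED AS ORC;
"
```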
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
The DICOM service is secured by Azure Active Directory (Azure AD) that can't be
### Register a client application
-You can create or register a client application from the [Azure portal](../register-application.md), or using PowerShell and Azure CLI scripts. This client application can be used for one or more DICOM service instances. It can also be used for other services in Azure Health Data Services.
+You can create or register a client application from the [Azure portal](dicom-register-application.md), or using PowerShell and Azure CLI scripts. This client application can be used for one or more DICOM service instances. It can also be used for other services in Azure Health Data Services.
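For instance, a minimal Azure CLI sketch for registering a confidential client looks like the following; the display name is hypothetical, and the application (client) ID returned by the first command is what you configure in your DICOM client.

```bash
# Hypothetical display name; creates the app registration, then adds a client secret.
az ad app create --display-name my-dicom-client
az ad app credential reset --id <application-client-id> --append
```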
If the client application is created with a certificate or client secret, ensure that you renew the certificate or client secret before expiration and replace the client credentials in your applications.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
Previously updated : 08/01/2022 Last updated : 09/20/2022 # What is the FHIR service in Azure Health Data Services?
-The FHIR service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. Offered as a managed Platform-as-a-Service (PaaS), the FHIR service makes it easy for anyone working with health data to securely store and exchange Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/https://docsupdatetracker.net/index.html)) in the cloud.
+The FHIR service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. As part of a managed Platform-as-a-Service (PaaS), the FHIR service makes it easy for anyone working with health data to securely store and exchange Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html)) in the cloud.
The FHIR service offers the following:
The FHIR service offers the following:
- High performance, low latency - Secure management of Protected Health Information (PHI) in a compliant cloud environment - SMART on FHIR for mobile and web clients-- Controlled access to FHIR data at scale with Azure Active Directory-backed Role-Based Access Control (RBAC)-- Audit log tracking for access, creation, and modification within the FHIR service data store
+- Controlled access to FHIR data at scale with Azure Active Directory Role-Based Access Control (RBAC)
+- Audit log tracking for access, creation, and modification events within the FHIR service data store
-The FHIR service allows you to quickly create and deploy a FHIR server in just minutes to leverage the elastic scale of the cloud for ingesting, persisting, and querying FHIR data. The Azure services that power the FHIR service are designed for high performance no matter how much data you're working with.
+The FHIR service allows you to quickly create and deploy a FHIR server to leverage the elastic scale of the cloud for ingesting, persisting, and querying FHIR data. The Azure services that power the FHIR service are designed for high performance no matter how much data you're working with.
The FHIR API provisioned in the FHIR service enables any FHIR-compliant system to securely connect and interact with FHIR data. As a PaaS offering, Microsoft takes on the operations, maintenance, update, and compliance requirements for the FHIR service so you can free up your own operational and development resources.
industrial-iot Overview What Is Industrial Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/overview-what-is-industrial-iot.md
Azure IIoT solutions are built from specific components:
The [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) acts as a central message hub for secure, bi-directional communications between any IoT application and the devices it manages. It's an open and flexible cloud platform as a service (PaaS) that supports open-source SDKs and multiple protocols.
-Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can also apply Microsoft Azure services and tools, such as [Power BI](https://powerbi.microsoft.com), on your combined data.
+Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can process your combined data with Microsoft Azure services and tools, for example, [Azure Stream Analytics](https://docs.microsoft.com/azure/stream-analytics), or visualize it in your business intelligence platform of choice, such as [Power BI](https://powerbi.microsoft.com).
### IoT Edge devices
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
az iot hub device-identity create -n {iothub_name} -d {device_id} --ee
The result is a JSON printout that includes your keys and other information.
-Alternatively, there are several options to register a device using different kinds of authorization. To explore the options, see [Examples](/device-identity?view=azure-cli-latest#az-iot-hub-device-identity-create-examples&preserve-view=true) on the **az iot hub device-identity** reference page.
+Alternatively, there are several options to register a device using different kinds of authorization. To explore the options, see [Examples](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create-examples) on the **az iot hub device-identity** reference page.
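As a quick illustration (the hub and device names are made up), the first command below uses the default symmetric-key authorization, while the second registers a device that authenticates with a self-signed X.509 certificate:

```bash
# Illustrative hub and device names; --am selects the authorization method.
az iot hub device-identity create -n MyIotHub -d sym-key-device-01
az iot hub device-identity create -n MyIotHub -d x509-device-01 \
    --am x509_thumbprint --valid-days 10
```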
## Remove an IoT Hub
kinect-dk About Azure Kinect Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/about-azure-kinect-dk.md
Title: About Azure Kinect DK
description: Overview of the Azure Kinect developer kit (DK) tools and integrated services. + Last updated 06/26/2019 keywords: azure, kinect, overview, dev kit, DK, device, depth, body tracking, speech, cognitive services, SDKs, SDK, firmware
kinect-dk About Sensor Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/about-sensor-sdk.md
Title: About Azure Kinect Sensor SDK
description: Overview of the Azure Kinect Sensor software development kit (SDK), its features, and tools. + Last updated 06/26/2019 keywords: azure, kinect, rgb, IR, recording, sensor, sdk, access, depth, video, camera, imu, motion, sensor, audio, microphone, matroska, sensor sdk, download
kinect-dk Access Mics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/access-mics.md
Title: Access Azure Kinect DK microphone input data
description: Understand how to get microphone data using the Azure Kinect DK microphone array. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, microphone, access mics, mic data
kinect-dk Add Library To Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/add-library-to-project.md
Title: Add Azure Kinect library to your Visual Studio project
description: Learn how to add the Azure Kinect NuGet package to your Visual Studio Project. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, visual studio 2017, visual studio 2019, nuget
kinect-dk Azure Kinect Firmware Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/azure-kinect-firmware-tool.md
Title: Azure Kinect firmware tool
description: Understand how to query and update device firmware using the Azure Kinect firmware tool. + Last updated 06/26/2019 keywords: kinect, firmware, update
kinect-dk Azure Kinect Recorder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/azure-kinect-recorder.md
Title: Azure Kinect DK recorder
description: Understand how to record data streams from the sensor SDK to a file using the Azure Kinect recorder. + Last updated 06/26/2019 keywords: kinect, record, playback, reader, matroska, mkv, streams, depth, rgb, camera, color, imu, audio
kinect-dk Azure Kinect Viewer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/azure-kinect-viewer.md
Title: Azure Kinect Viewer
description: Understand how to visualize all device data streams using the Azure Kinect viewer. + Last updated 06/26/2019 keywords: azure, kinect, sensor, viewer, visualization, depth, rgb, color, imu, audio, microphone, point cloud
kinect-dk Build First App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/build-first-app.md
Title: Quickstart- Build your first Azure Kinect application
description: This quickstart guides the Azure Kinect DK user through the process of creating a new application. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, microphone, access mics, mic data
kinect-dk Capture Device Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/capture-device-synchronization.md
Title: Capture Azure Kinect device synchronization
description: Learn how to synchronize Azure Kinect capture devices using the Azure Kinect Sensor SDK. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, depth, rgb, internal, external, synchronization, daisy chain, phase offset
kinect-dk Coordinate Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/coordinate-systems.md
Title: Azure Kinect DK coordinate systems
description: Azure Kinect DK coordinate systems description associated with Azure DK sensors + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, depth camera, tof, principles, performance, invalidation
kinect-dk Depth Camera https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/depth-camera.md
Title: Azure Kinect DK depth camera
description: Understand the operating principles and key features of the depth camera in your Azure Kinect DK. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, depth camera, tof, principles, performance, invalidation
kinect-dk Find Then Open Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/find-then-open-device.md
Title: Find then open the Azure Kinect device
description: Learn how to find and open an Azure Kinect device using the Azure Kinect Senor SDK. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, depth, rgb, device, find, open
kinect-dk Multi Camera Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/multi-camera-sync.md
Title: Synchronize multiple Azure Kinect DK devices
description: This article explores the benefits of multi-device synchronization as well as how to set up the devices to synchronize. + Last updated 02/20/2020 keywords: azure, kinect, specs, hardware, DK, capabilities, depth, color, RGB, IMU, array, depth, multi, synchronization
kinect-dk Record External Synchronized Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/record-external-synchronized-units.md
description: Learn how to record data from devices configured for external synch
+ Last updated 06/26/2019 keywords: Kinect, sensor, viewer, external sync, phase delay, depth, RGB, camera, audio cable, recorder
kinect-dk Record File Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/record-file-format.md
Title: Use Azure Kinect Sensor SDK to record file format
description: Understand how to use the Azure Kinect Sensor SDK recorded file format. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, depth, rgb, record, playback, matroska, mkv
kinect-dk Record Playback Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/record-playback-api.md
Title: Azure Kinect playback API
description: Learn how to use the Azure Kinect Sensor SDK to open a recording file using the playback API. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, depth, rgb, record, playback, matroska, mkv
kinect-dk Record Sensor Streams File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/record-sensor-streams-file.md
Title: Quickstart- Record Azure Kinect sensor streams to a file
description: In this quickstart, you will learn how to record data streams from the Sensor SDK to a file. + Last updated 06/26/2019 keywords: azure, kinect, record, play back, reader, matroska, mkv, streams, depth, rgb, camera, color, imu, audio, sensor
kinect-dk Reset Azure Kinect Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/reset-azure-kinect-dk.md
description: Describes how to reset an Azure Kinect DK device to its factory ima
+ Last updated 03/15/2022 keywords: kinect, reset
kinect-dk Retrieve Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/retrieve-images.md
Title: Retrieve Azure Kinect image data
description: Learn how to retrieve Azure Kinect image data using the Kinect Sensor SDK. + Last updated 06/26/2019 keywords: kinect, azure, retrieve, sensor, camera, sdk, depth, rgb, images, color, capture, resolution, buffer
kinect-dk Retrieve Imu Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/retrieve-imu-samples.md
description: Learn how to retrieve Azure Kinect IMU samples using the Azure Kine
Last updated 06/26/2019+ keywords: kinect, azure, configure, depth, color, RBG, camera, sensor, sdk, IMU, motion sensor, motion, gyroscope, gyro, accelerometer, FPS
kinect-dk Sensor Sdk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/sensor-sdk-download.md
Title: Azure Kinect Sensor SDK download
description: Learn how to download and install the Azure Kinect Sensor SDK on Windows and Linux. + Last updated 06/26/2019 keywords: azure, kinect,sdk, download update, latest, available, install
kinect-dk Set Up Azure Kinect Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/set-up-azure-kinect-dk.md
Title: Quickstart: Set up Azure Kinect DK
description: This quickstart provides instructions about how to set up Azure Kinect DK hardware + Last updated 02/12/2020 keywords: azure, kinect, dev kit, azure dk, set up, hardware, quick, usb, power, viewer, sensor, streaming, setup, SDK, firmware
kinect-dk Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/support.md
Title: Azure Kinect support options and resources
description: Understand the different support options and resources for the Azure Kinect. + Last updated 06/26/2019 keywords: azure, kinect, rgb, IR, recording, sensor, sdk, access, depth, video, camera, imu, motion, sensor, audio, microphone, matroska, sensor sdk, download, body, tracking, support
kinect-dk Update Device Firmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/update-device-firmware.md
Title: Update Azure Kinect DK firmware
description: Learn how to update the Azure Kinect DK device firmware using the Azure Kinect firmware tool. + Last updated 06/26/2019 keywords: kinect, firmware, update, recovery
kinect-dk Use Calibration Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/use-calibration-functions.md
-
+ Title: Use Azure Kinect calibration functions description: Learn how to use the calibration functions for Azure Kinect DK. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, coordinate system, calibration, functions, camera, intrinsic, extrinsic, project, unproject, transformation, rgb-d, point cloud
kinect-dk Use Image Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/use-image-transformation.md
Title: Use Azure Kinect Sensor SDK image transformations
description: Learn how to use the Azure Kinect Sensor SDK image transformation functions. + Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, coordinate system, calibration, project, unproject, transformation, rgb-d, point cloud
kinect-dk Windows Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/windows-comparison.md
Title: Azure Kinect DK Windows comparison
description: Hardware and software differences between Azure Kinect DK and Kinect for Windows v2 + Last updated 06/26/2019 keywords: Kinect, Windows, v2, Azure Kinect, comparison, SDK, differences, hardware, software
logic-apps Logic Apps Http Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-http-endpoint.md
For more information about security, authorization, and encryption for inbound c
1. On your logic app's menu, select **Overview**.
- 1. In the **Summary** section, select **See trigger history**.
+ 1. On the **Overview** pane, select **Trigger history**. Under **Callback url [POST]**, copy the URL:
- ![Get endpoint URL from Azure portal](./media/logic-apps-http-endpoint/find-manual-trigger-url.png)
-
- 1. Under **Callback url [POST]**, copy the URL:
-
- ![Copy endpoint URL from Azure portal](./media/logic-apps-http-endpoint/copy-manual-trigger-callback-url-post.png)
+ ![Screenshot showing logic app 'Overview' pane with 'Trigger history' selected.](./media/logic-apps-http-endpoint/find-manual-trigger-url.png)
<a name="select-method"></a>
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
For Azure Logic Apps to receive incoming communication through your firewall, yo
| Jio India West | 20.193.206.48,20.193.206.49,20.193.206.50,20.193.206.51 | | Korea Central | 52.231.14.182, 52.231.103.142, 52.231.39.29, 52.231.14.42, 20.200.207.29, 20.200.231.229 | | Korea South | 52.231.166.168, 52.231.163.55, 52.231.163.150, 52.231.192.64 |
-| North Central US | 168.62.249.81, 157.56.12.202, 65.52.211.164, 65.52.9.64, 20.94.151.41, 20.88.209.113 |
+| North Central US | 168.62.249.81, 157.56.12.202, 65.52.211.164, 65.52.9.64, 52.162.177.104, 23.101.174.98 |
| North Europe | 13.79.173.49, 52.169.218.253, 52.169.220.174, 40.112.90.39, 40.127.242.203, 51.138.227.94, 40.127.145.51 | | Norway East | 51.120.88.93, 51.13.66.86, 51.120.89.182, 51.120.88.77, 20.100.27.17, 20.100.36.102 | | South Africa North | 102.133.228.4, 102.133.224.125, 102.133.226.199, 102.133.228.9, 20.87.92.64, 20.87.91.171 |
This section lists the outbound IP addresses that Azure Logic Apps requires in y
| Jio India West | 20.193.206.128, 20.193.206.129, 20.193.206.130, 20.193.206.131, 20.193.206.132, 20.193.206.133, 20.193.206.134, 20.193.206.135 | | Korea Central | 52.231.14.11, 52.231.14.219, 52.231.15.6, 52.231.10.111, 52.231.14.223, 52.231.77.107, 52.231.8.175, 52.231.9.39, 20.200.206.170, 20.200.202.75, 20.200.231.222, 20.200.231.139 | | Korea South | 52.231.204.74, 52.231.188.115, 52.231.189.221, 52.231.203.118, 52.231.166.28, 52.231.153.89, 52.231.155.206, 52.231.164.23 |
-| North Central US | 168.62.248.37, 157.55.210.61, 157.55.212.238, 52.162.208.216, 52.162.213.231, 65.52.10.183, 65.52.9.96, 65.52.8.225, 20.94.150.220, 20.94.149.199, 20.88.209.97, 20.88.209.88 |
+| North Central US | 168.62.248.37, 157.55.210.61, 157.55.212.238, 52.162.208.216, 52.162.213.231, 65.52.10.183, 65.52.9.96, 65.52.8.225, 52.162.177.90, 52.162.177.30, 23.101.160.111, 23.101.167.207 |
| North Europe | 40.113.12.95, 52.178.165.215, 52.178.166.21, 40.112.92.104, 40.112.95.216, 40.113.4.18, 40.113.3.202, 40.113.1.181, 40.127.242.159, 40.127.240.183, 51.138.226.19, 51.138.227.160, 40.127.144.251, 40.127.144.121 | | Norway East | 51.120.88.52, 51.120.88.51, 51.13.65.206, 51.13.66.248, 51.13.65.90, 51.13.65.63, 51.13.68.140, 51.120.91.248, 20.100.26.148, 20.100.26.52, 20.100.36.49, 20.100.36.10 | | South Africa North | 102.133.231.188, 102.133.231.117, 102.133.230.4, 102.133.227.103, 102.133.228.6, 102.133.230.82, 102.133.231.9, 102.133.231.51, 20.87.92.40, 20.87.91.122, 20.87.91.169, 20.87.88.47 |
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
To enable sending SAP telemetry to Application insights, follow these steps:
1. In your on-premises data gateway installation directory, check that the **Microsoft.ApplicationInsights.dll** file has the same version number as the **Microsoft.ApplicationInsights.EventSourceListener.dll** file that you added. The gateway currently uses version 2.14.0.
-1. In the **ApplicationInsights.config** file, add your [Application Insights instrumentation key](../azure-monitor/app/app-insights-overview.md#how-does-application-insights-work) by uncommenting the line with the `<InstrumentationKey></Instrumentation>` element. Replace the placeholder, *your-Application-Insights-instrumentation-key*, with your key, for example:
+1. In the **ApplicationInsights.config** file, add your [Application Insights instrumentation key](../azure-monitor/app/sdk-connection-string.md) by uncommenting the line with the `<InstrumentationKey></InstrumentationKey>` element. Replace the placeholder, *your-Application-Insights-instrumentation-key*, with your key, for example:
```xml <?xml version="1.0" encoding="utf-8"?>
logic-apps Quickstart Create Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-logic-apps-with-visual-studio.md
In this quickstart, you create the same logic app workflow with Visual Studio as
* Download and install these tools, if you don't have them already:
- * [Visual Studio 2019, 2017, or 2015 - Community edition](https://aka.ms/download-visual-studio). This quickstart uses Visual Studio Community 2017. Currently, Visual Studio 2022 doesn't include support for the Azure Logic Apps extension.
+ * [Visual Studio 2019, 2017, or 2015 - Community edition](https://aka.ms/download-visual-studio), which is free. The Azure Logic Apps extension is currently unavailable for Visual Studio 2022. This quickstart uses Visual Studio Community 2017.
> [!IMPORTANT] > If you use Visual Studio 2019 or 2017, make sure that you select the **Azure development** workload.
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
| **Host name** | **Purpose** | | - | - |
- | **graph.windows.net** | Used by Azure Machine Learning compute instance/cluster. |
| **anaconda.com**</br>**\*.anaconda.com** | Used to install default packages. | | **\*.anaconda.org** | Used to get repo data. | | **pypi.org** | Used to list dependencies from the default index, if any, and the index isn't overwritten by user settings. If the index is overwritten, you must also allow **\*.pythonhosted.org**. |
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
No&nbsp;criteria | If you do not define any exit parameters the experiment conti
`max_concurrent_trials`| Represents the maximum number of trials (children jobs) that would be executed in parallel. It's a good practice to match this number with the number of nodes your cluster ## Run experiment
-> [!WARNING]
+> [!NOTE]
> If you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiment's final metrics score and generated models. The algorithms automated ML employs have inherent randomness that can cause slight variation in the models output by the experiment and the recommended model's final metrics score, like accuracy. You'll likely also see results with the same model name, but different hyperparameters used.
+> [!WARNING]
+> If you have configured firewall and/or Network Security Group rules for your workspace, verify that the required inbound and outbound network traffic is allowed, as defined in [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
+ Submit the experiment to run and generate a model. With the MLClient created in the prerequisites, you can run the following command in the workspace. ```python
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Previously updated : 08/05/2022 Last updated : 09/21/2022 # Create and manage an Azure Machine Learning compute instance
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
-> * [CLI v1](v1/how-to-create-manage-compute-instance.md)
-> * [CLI v2 (current version)](how-to-create-manage-compute-instance.md)
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
+> * [v1](v1/how-to-create-manage-compute-instance.md)
+> * [v2 (current version)](how-to-create-manage-compute-instance.md)
Learn how to create and manage a [compute instance](concept-compute-instance.md) in your Azure Machine Learning workspace.
Compute instances can run jobs securely in a [virtual network environment](how-t
* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
-* The [Azure CLI extension for Machine Learning service (v2)](https://aka.ms/sdk-v2-install), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
+* The [Azure CLI extension for Machine Learning service (v2)](https://aka.ms/sdk-v2-install), [Azure Machine Learning Python SDK (v2)](https://aka.ms/sdk-v2-install), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
+
+* If using the Python SDK, [set up your development environment with a workspace](how-to-configure-environment.md). Once your environment is set up, attach to the workspace in your Python script:
+
+ [!INCLUDE [connect ws v2](../../includes/machine-learning-connect-ws-v2.md)]
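    As a minimal sketch of that workspace connection step (the included snippet in the published article is authoritative; the subscription ID, resource group, and workspace name below are placeholders):

    ```python
    from azure.ai.ml import MLClient
    from azure.identity import DefaultAzureCredential

    # Placeholder identifiers; substitute your own subscription, resource group, and workspace.
    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
        resource_group_name="<resource-group>",
        workspace_name="<workspace-name>",
    )
    ```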
+ ## Create
The following example demonstrates how to create a compute instance:
# [Python SDK](#tab/python) -
-```python
-import datetime
-import time
-
-from azureml.core.compute import ComputeTarget, ComputeInstance
-from azureml.core.compute_target import ComputeTargetException
-
-# Choose a name for your instance
-# Compute instance name should be unique across the azure region
-compute_name = "ci{}".format(ws._workspace_id)[:10]
-
-# Verify that instance does not exist already
-try:
- instance = ComputeInstance(workspace=ws, name=compute_name)
- print('Found existing instance, use it.')
-except ComputeTargetException:
- compute_config = ComputeInstance.provisioning_configuration(
- vm_size='STANDARD_D3_V2',
- ssh_public_access=False,
- # vnet_resourcegroup_name='<my-resource-group>',
- # vnet_name='<my-vnet-name>',
- # subnet_name='default',
- # admin_user_ssh_public_key='<my-sshkey>'
- )
- instance = ComputeInstance.create(ws, compute_name, compute_config)
- instance.wait_for_completion(show_output=True)
-```
-For more information on the classes, methods, and parameters used in this example, see the following reference documents:
+[!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=ci_basic)]
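The referenced notebook cell isn't reproduced in this digest. As a rough sketch of what it does, assuming an `MLClient` named `ml_client` attached to the workspace, and an illustrative instance name and VM size:

```python
import datetime

from azure.ai.ml.entities import ComputeInstance

# Compute instance names must be unique within an Azure region, so append a timestamp.
ci_basic_name = "ci-basic-" + datetime.datetime.now().strftime("%Y%m%d%H%M")
ci_basic = ComputeInstance(name=ci_basic_name, size="STANDARD_DS3_v2")

# begin_create_or_update returns a poller; result() blocks until provisioning completes.
ml_client.begin_create_or_update(ci_basic).result()
```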
-* [ComputeInstance class](/python/api/azureml-core/azureml.core.compute.computeinstance.computeinstance)
-* [ComputeTarget.create](/python/api/azureml-core/azureml.core.compute.computetarget#create-workspace--name--provisioning-configuration-)
-* [ComputeInstance.wait_for_completion](/python/api/azureml-core/azureml.core.compute.computeinstance(class)#wait-for-completion-show-output-false--is-delete-operation-false-)
+For more information on the classes, methods, and parameters used in this example, see the following reference documents:
+* [`AmlCompute` class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute)
+* [`ComputeInstance` class](/python/api/azure-ai-ml/azure.ai.ml.entities.computeinstance)
# [Azure CLI](#tab/azure-cli)
Where the file *create-instance.yml* is:
* Assign the computer to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of-preview) * Provision with a setup script (preview) - for more information about how to create and use a setup script, see [Customize the compute instance with a script](how-to-customize-compute-instance.md). * Add schedule (preview). Schedule times for the compute instance to automatically start and/or shutdown. See [schedule details](#schedule-automatic-start-and-stop-preview) below.
- * Enable auto-stop (preview). Configure a compute instance to automatically shutdown if it is inactive. See [configure auto-stop](#configure-auto-stop-preview) for more details.
+ * Enable auto-stop (preview). Configure a compute instance to automatically shut down if it's inactive. For more information, see [configure auto-stop](#configure-auto-stop-preview).
SSH access is disabled by default. SSH access can't be changed after creation.
To avoid getting charged for a compute instance that is switched on but inactive, you can configure auto-stop. A compute instance is considered inactive if the below conditions are met:
-* No active Jupyter Kernel sessions (this translates to no Notebooks usage via Jupyter, JupyterLab or Interactive notebooks)
+* No active Jupyter Kernel sessions (which translates to no Notebooks usage via Jupyter, JupyterLab or Interactive notebooks)
* No active Jupyter terminal sessions * No active AzureML runs or experiments * No SSH connections * No VS code connections; you must close your VS Code connection for your compute instance to be considered inactive. Sessions are auto-terminated if VS code detects no activity for 3 hours.
-Note that activity on custom applications installed on the compute instance is not considered. There are also some basic bounds around inactivity time periods; CI must be inactive for a minimum of 15 mins and a maximum of 3 days.
+Activity on custom applications installed on the compute instance isn't considered. There are also some basic bounds around inactivity time periods; CI must be inactive for a minimum of 15 mins and a maximum of three days.
This setting can be configured during CI creation or for existing CIs via the following interfaces: * AzureML Studio
This setting can be configured during CI creation or for existing CIs via the fo
} ```
-* CLIv2 (YAML) -- only configurable during new CI creation
+* CLIv2 (YAML): only configurable during new CI creation
```YAML # Note that this is just a snippet for the idle shutdown property. Refer to the "Create" Azure CLI section for more information. idle_time_before_shutdown_minutes: 30 ```
-* Python SDKv2 -- only configurable during new CI creation
+* Python SDKv2: only configurable during new CI creation
```Python ComputeInstance(name=ci_basic_name, size="STANDARD_DS3_v2", idle_time_before_shutdown_minutes="30") ```
-* ARM Templates -- only configurable during new CI creation
+* ARM Templates: only configurable during new CI creation
```JSON // Note that this is just a snippet for the idle shutdown property in an ARM template {
This setting can be configured during CI creation or for existing CIs via the fo
``` ### Azure policy support
-Administrators can use a built-in [Azure Policy](./../governance/policy/overview.md) definition to enfore auto-stop on all compute instances in a given subscription/resource-group.
+Administrators can use a built-in [Azure Policy](./../governance/policy/overview.md) definition to enforce auto-stop on all compute instances in a given subscription/resource-group.
1. Navigate to Azure Policy in the Azure portal. 2. Under "Definitions", look for the idle shutdown policy.
- :::image type="content" source="media/how-to-create-attach-studio/idle-shutdown-policy.png" alt-text="Screenshot for the idle shutdown policy in Azure Portal.":::
+ :::image type="content" source="media/how-to-create-attach-studio/idle-shutdown-policy.png" alt-text="Screenshot for the idle shutdown policy in Azure portal.":::
3. Assign policy to the necessary scope.
-You can also create your own custom Azure policy. For example, if the below policy is assigned, all new compute instances will have auto-stop configured with a 60 minute inactivity period.
+You can also create your own custom Azure policy. For example, if the below policy is assigned, all new compute instances will have auto-stop configured with a 60-minute inactivity period.
```json {
You can [create a schedule](#schedule-automatic-start-and-stop-preview) for the
# [Python SDK](#tab/python)
-In the examples below, the name of the compute instance is **instance**
+In the examples below, the name of the compute instance is stored in the variable `ci_basic_name`. A consolidated sketch of these operations appears after the list.
* Get status
- ```python
- # get_status() gets the latest status of the ComputeInstance target
- instance.get_status()
- ```
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=ci_basic_state)]
+ * Stop
- ```python
- # stop() is used to stop the ComputeInstance
- # Stopping ComputeInstance will stop the billing meter and persist the state on the disk.
- # Available Quota will not be changed with this operation.
- instance.stop(wait_for_completion=True, show_output=True)
- ```
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=stop_compute)]
+ * Start
- ```python
- # start() is used to start the ComputeInstance if it is in stopped state
- instance.start(wait_for_completion=True, show_output=True)
- ```
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=start_compute)]
+ * Restart
- ```python
- # restart() is used to restart the ComputeInstance
- instance.restart(wait_for_completion=True, show_output=True)
- ```
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=restart_compute)]
+ * Delete
- ```python
- # delete() is used to delete the ComputeInstance target. Useful if you want to re-use the compute name
- instance.delete(wait_for_completion=True, show_output=True)
- ```
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=delete_compute)]
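The referenced notebook cells aren't shown in this digest. As a consolidated sketch of the equivalent v2 SDK calls, assuming the same `ml_client` and `ci_basic_name` as in the creation example:

```python
# Get the latest state of the compute instance.
ci_state = ml_client.compute.get(ci_basic_name)
print(ci_state.state)

# Stop, start, and restart return pollers; result() waits for each operation to finish.
ml_client.compute.begin_stop(ci_basic_name).result()
ml_client.compute.begin_start(ci_basic_name).result()
ml_client.compute.begin_restart(ci_basic_name).result()

# Delete the compute instance, which frees the name for reuse.
ml_client.compute.begin_delete(ci_basic_name).result()
```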
+ # [Azure CLI](#tab/azure-cli)
For each compute instance in a workspace that you created (or that was created f
-[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can terminal in through Jupyter/JupyterLab/RStudio. Compute instance will have single-user sign-in and all actions will use that userΓÇÖs identity for Azure RBAC and attribution of experiment jobs. SSH access is controlled through public/private key mechanism.
+[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, and restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access. That user has access to Jupyter/JupyterLab/RStudio running on the instance. A compute instance has single-user sign-in, and all actions use that user's identity for Azure RBAC and attribution of experiment jobs. SSH access is controlled through a public/private key mechanism.
These actions can be controlled by Azure RBAC: * *Microsoft.MachineLearningServices/workspaces/computes/read*
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
Previously updated : 06/01/2022 Last updated : 08/29/2022
Along with managing quotas, you can learn how to [plan and manage costs for Azur
In this section, you learn about the default and maximum quota limits for the following resources: + Azure Machine Learning assets
- + Azure Machine Learning compute
- + Azure Machine Learning pipelines
+ + Azure Machine Learning compute
+ + Azure Machine Learning managed online endpoints
+ + Azure Machine Learning pipelines
+ Virtual machines + Azure Container Instances + Azure Storage
The following limits on assets apply on a per-workspace basis.
In addition, the maximum **run time** is 30 days and the maximum number of **metrics logged per run** is 1 million. ### Azure Machine Learning Compute
-[Azure Machine Learning Compute](concept-compute-target.md#azure-machine-learning-compute-managed) has a default quota limit on both the number of cores (split by each VM Family and cumulative total cores) as well as the number of unique compute resources allowed per region in a subscription. This quota is separate from the VM core quota listed in the previous section as it applies only to the managed compute resources of Azure Machine Learning.
+[Azure Machine Learning Compute](concept-compute-target.md#azure-machine-learning-compute-managed) has a default quota limit on both the number of cores (split by each VM Family and cumulative total cores) and the number of unique compute resources allowed per region in a subscription. This quota is separate from the VM core quota listed in the previous section as it applies only to the managed compute resources of Azure Machine Learning.
[Request a quota increase](#request-quota-increases) to raise the limits for various VM family core quotas, total subscription core quotas, cluster quota and resources in this section.
Available resources:
+ **Low-priority cores per region** have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families.
-+ **Clusters per region** have a default limit of 200. These are shared between training clusters, compute instances and MIR endpoint deployments. (A compute instance is considered a single-node cluster for quota purposes.) Cluster quota can be increased up to a value of 500 per region within a given subscription.
++ **Clusters per region** have a default limit of 200. This limit is shared between training clusters, compute instances, and MIR endpoint deployments. (A compute instance is considered a single-node cluster for quota purposes.) Cluster quota can be increased up to a value of 500 per region within a given subscription. > [!TIP] > To learn more about which VM family to request a quota increase for, check out [virtual machine sizes in Azure](../virtual-machines/sizes.md). For instance, GPU VM families start with an "N" in their family name (for example, the NCv3 series).
-The following table shows additional limits in the platform. Please reach out to the AzureML product team through a **technical** support ticket to request an exception.
+The following table shows more limits in the platform. Reach out to the AzureML product team through a **technical** support ticket to request an exception.
| **Resource or Action** | **Maximum limit** | | | | | Workspaces per resource group | 800 |
-| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a non communication-enabled pool (i.e. cannot run MPI jobs) | 100 nodes but configurable up to 65000 nodes |
-| Nodes in a single Parallel Run Step **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but configurable up to 65000 nodes if your cluster is set up to scale per above |
+| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a non communication-enabled pool (that is, can't run MPI jobs) | 100 nodes but configurable up to 65,000 nodes |
+| Nodes in a single Parallel Run Step **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but configurable up to 65,000 nodes if your cluster is set up to scale per above |
| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a communication-enabled pool | 300 nodes but configurable up to 4000 nodes | | Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a communication-enabled pool on an RDMA enabled VM Family | 100 nodes | | Nodes in a single MPI **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but can be increased to 300 nodes |
The following table shows additional limits in the platform. Please reach out to
| Job lifetime on a low-priority node | 7 days<sup>2</sup> | | Parameter servers per node | 1 |
-<sup>1</sup> Maximum lifetime is the duration between when a job starts and when it finishes. Completed jobs persist indefinitely. Data for jobs not completed within the maximum lifetime is not accessible.
+<sup>1</sup> Maximum lifetime is the duration between when a job starts and when it finishes. Completed jobs persist indefinitely. Data for jobs not completed within the maximum lifetime isn't accessible.
<sup>2</sup> Jobs on a low-priority node can be preempted whenever there's a capacity constraint. We recommend that you implement checkpoints in your job.
Azure Machine Learning managed online endpoints have limits described in the fol
| Number of deployments per subscription | 200 | | Number of deployments per endpoint | 20 | | Number of instances per deployment | 20 <sup>2</sup> |
-| Max request time out at endpoint level | 90 seconds |
+| Max request time-out at endpoint level | 90 seconds |
| Total requests per second at endpoint level for all deployments | 500 <sup>3</sup> | | Total connections per second at endpoint level for all deployments | 500 <sup>3</sup> | | Total connections active at endpoint level for all deployments | 500 <sup>3</sup> |
Azure Machine Learning managed online endpoints have limits described in the fol
<sup>1</sup> Single dashes like, `my-endpoint-name`, are accepted in endpoint and deployment names.
-<sup>2</sup> We reserve 20% extra compute resources for performing upgrades. For example, if you request 10 instances in a deployment, you must have a quota for 12. Otherwise, you will receive an error.
+<sup>2</sup> We reserve 20% extra compute resources for performing upgrades. For example, if you request 10 instances in a deployment, you must have a quota for 12. Otherwise, you'll receive an error.
<sup>3</sup> If you request a limit increase, be sure to calculate related limit increases you might need. For example, if you request a limit increase for requests per second, you might also want to compute the required connections and bandwidth limits and include these limit increases in the same request. To determine the current usage for an endpoint, [view the metrics](how-to-monitor-online-endpoints.md#metrics).
-To request an exception from the Azure Machine Learning product team, use the steps in the [Request quota increases](#request-quota-increases) section and provide the following information:
+To request an exception from the Azure Machine Learning product team, use the steps in the [Request quota increases](#request-quota-increases) section.
-1. When opening the support request, __do not select Service and subscription limits (quotas)__. Instead, select __Technical__ as the issue type.
-1. Provide the Azure __subscriptions__ and __regions__ where you want to increase the quota.
-1. Provide the __tenant ID__ and __customer name__.
-1. Provide the __quota type__ and __new limit__. Use the following table as a guide:
-
- | Quota Type | New Limit |
- | -- | -- |
- | MaxEndpointsPerSub (Number of endpoints per subscription) | ? |
- | MaxDeploymentsPerSub (Number of deployments per subscription) | ? |
- | MaxDeploymentsPerEndpoint (Number of deployments per endpoint) | ? |
- | MaxInstancesPerDeployment (Number of instances per deployment) | ? |
- | EndpointRequestRateLimitPerSec (Total requests per second at endpoint level for all deployments) | ? |
- | EndpointConnectionRateLimitPerSec (Total connections per second at endpoint level for all deployments) | ? |
- | EndpointConnectionLimit (Total connections active at endpoint level for all deployments) | ? |
- | EndpointBandwidthLimitKBps (Total bandwidth at endpoint level for all deployments (MBPS)) | ? |
### Azure Machine Learning pipelines [Azure Machine Learning pipelines](concept-ml-pipelines.md) have the following limits.
To request an exception from the Azure Machine Learning product team, use the st
### Virtual machines Each Azure subscription has a limit on the number of virtual machines across all services. Virtual machine cores have a regional total limit and a regional limit per size series. Both limits are separately enforced.
-For example, consider a subscription with a US East total VM core limit of 30, an A series core limit of 30, and a D series core limit of 30. This subscription would be allowed to deploy 30 A1 VMs, or 30 D1 VMs, or a combination of the two that does not exceed a total of 30 cores.
+For example, consider a subscription with a US East total VM core limit of 30, an A series core limit of 30, and a D series core limit of 30. This subscription would be allowed to deploy 30 A1 VMs, or 30 D1 VMs, or a combination of the two that doesn't exceed a total of 30 cores.
You can't raise limits for virtual machines above the values shown in the following table.
You can't set a negative value or a value higher than the subscription-level quo
:::image type="content" source="media/how-to-manage-quotas/select-all-options.png" alt-text="Screenshot shows select all options to see compute resources that need more quota":::
-1. Scroll down until you see the list of VM sizes you do not have quota for.
+1. Scroll down until you see the list of VM sizes you don't have quota for.
:::image type="content" source="media/how-to-manage-quotas/scroll-to-zero-quota.png" alt-text="Screenshot shows list of zero quota":::
When you're requesting a quota increase, select the service that you have in min
> [!NOTE] > [Free trial subscriptions](https://azure.microsoft.com/offers/ms-azr-0044p) are not eligible for limit or quota increases. If you have a free trial subscription, you can upgrade to a [pay-as-you-go](https://azure.microsoft.com/offers/ms-azr-0003p/) subscription. For more information, see [Upgrade Azure free trial to pay-as-you-go](../cost-management-billing/manage/upgrade-azure-subscription.md) and [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq).
+### Endpoint quota increases
+
+When requesting the quota increase, provide the following information:
+
+1. When opening the support request, select __Machine Learning Service: Endpoint Limits__ as the __Quota type__.
+1. On the __Additional details__ tab, select __Enter details__ and then provide the quota you'd like to increase, the new value, the reason for the request, and the __location(s)__ where you need the increase. Finally, select __Save and continue__.
+
+ :::image type="content" source="./media/how-to-manage-quotas/quota-details.png" lightbox="./media/how-to-manage-quotas/quota-details.png" alt-text="Screenshot of the quota details form.":::
+ ## Next steps + [Plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Azure Container Registry can be configured to use a private endpoint. Use the fo
This code returns a value similar to `"/subscriptions/{GUID}/resourceGroups/{resourcegroupname}/providers/Microsoft.ContainerRegistry/registries/{ACRname}"`. The last part of the string is the name of the Azure Container Registry for the workspace.
- # [Azure portal](#tab/portal)
+ # [Portal](#tab/portal)
From the overview section of your workspace, the __Registry__ value links to the Azure Container Registry.
Azure Container Registry can be configured to use a private endpoint. Use the fo
For more information, see the [update()](/python/api/azureml-core/azureml.core.workspace.workspace#update-friendly-name-none--description-none--tags-none--image-build-compute-none--enable-data-actions-none-) method reference.
- # [Azure portal](#tab/portal)
+ # [Portal](#tab/portal)
Currently there isn't a way to set the image build compute from the Azure portal.
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
It's best to use newer, non-deprecated versions.
"conda-forge" ], "dependencies": [
- "python=3.6.2"
+ "python=3.8"
], }, "condaDependenciesFile": null,
It's best to use newer, non-deprecated versions.
- See [PythonSection class](https://aka.ms/azureml/environment/environment-python-section) #### **"Python version missing"**
+*V1*
+ - A Python version must be specified in the environment definition -- A Python version can be added by adding Python as a conda package, specifying the version:
+- A Python version can be added by adding Python as a conda package, specifying the version (this is specific to SDK V1):
```python from azureml.core.environment import CondaDependencies
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
Title: Authentication secrets in training
+ Title: Authentication secrets
-description: Learn how to pass secrets to training jobs in secure fashion using the Azure Key Vault for your workspace.
+description: Learn how to pass secrets to training jobs in secure fashion using Azure Key Vault.
Previously updated : 10/21/2021 Last updated : 09/16/2022 -+
-# Use authentication credential secrets in Azure Machine Learning training jobs
+# Use authentication credential secrets in Azure Machine Learning jobs
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
+> * [v1](v1/how-to-use-secrets-in-runs.md)
+> * [v2 (current version)](how-to-use-secrets-in-runs.md)
-In this article, you learn how to use secrets in training jobs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote job context. Coding such values into training scripts in cleartext is insecure as it would expose the secret.
+Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote job context. Coding such values into training scripts in clear text is insecure as it would potentially expose the secret.
-Instead, your Azure Machine Learning workspace has an associated resource called a [Azure Key Vault](../key-vault/general/overview.md). Use this Key Vault to pass secrets to remote jobs securely through a set of APIs in the Azure Machine Learning Python SDK.
+The Azure Key Vault allows you to securely store and retrieve secrets. In this article, learn how you can retrieve secrets stored in a key vault from a training job running on a compute cluster.
-The standard flow for using secrets is:
- 1. On local computer, log in to Azure and connect to your workspace.
- 2. On local computer, set a secret in Workspace Key Vault.
- 3. Submit a remote job.
- 4. Within the remote job, get the secret from Key Vault and use it.
+> [!IMPORTANT]
+> The Azure Machine Learning Python SDK v2 and Azure CLI extension v2 for machine learning do not provide the capability to set or get secrets. Instead, the information in this article uses the [Azure Key Vault Secrets client library for Python](/python/api/overview/azure/keyvault-secrets-readme).
-## Set secrets
+## Prerequisites
-In the Azure Machine Learning, the [Keyvault](/python/api/azureml-core/azureml.core.keyvault.keyvault) class contains methods for setting secrets. In your local Python session, first obtain a reference to your workspace Key Vault, and then use the [`set_secret()`](/python/api/azureml-core/azureml.core.keyvault.keyvault#set-secret-name--value-) method to set a secret by name and value. The __set_secret__ method updates the secret value if the name already exists.
+Before following the steps in this article, make sure you have the following prerequisites:
-```python
-from azureml.core import Workspace
-from azureml.core import Keyvault
-import os
+> [!TIP]
+> Many of the prerequisites in this section require __Contributor__, __Owner__, or equivalent access to your Azure subscription, or the Azure Resource Group that contains the resources. You may need to contact your Azure administrator and have them perform these actions.
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.
-ws = Workspace.from_config()
-my_secret = os.environ.get("MY_SECRET")
-keyvault = ws.get_default_keyvault()
-keyvault.set_secret(name="mysecret", value = my_secret)
-```
+* An Azure Key Vault. If you used the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create your workspace, a key vault was created for you. You can also create a separate key vault instance using the information in the [Quickstart: Create a key vault](/azure/key-vault/general/quick-create-portal) article.
-Do not put the secret value in your Python code as it is insecure to store it in file as cleartext. Instead, obtain the secret value from an environment variable, for example Azure DevOps build secret, or from interactive user input.
+ > [!TIP]
+ > You do not have to use the same key vault as the workspace.
-You can list secret names using the [`list_secrets()`](/python/api/azureml-core/azureml.core.keyvault.keyvault#list-secrets--) method and there is also a batch version,[set_secrets()](/python/api/azureml-core/azureml.core.keyvault.keyvault#set-secrets-secrets-batch-) that allows you to set multiple secrets at a time.
+* An Azure Machine Learning compute cluster configured to use a [managed identity](how-to-create-attach-compute-cluster.md?tabs=azure-studio#set-up-managed-identity). The cluster can be configured for either a system-assigned or user-assigned managed identity.
-> [!IMPORTANT]
-> Using `list_secrets()` will only list secrets created through `set_secret()` or `set_secrets()` using the Azure ML SDK. It will not list secrets created by something other than the SDK. For example, a secret created using the Azure portal or Azure PowerShell will not be listed.
->
-> You can use [`get_secret()`](#get-secrets) to get a secret value from the key vault, regardless of how it was created. So you can retrieve secrets that are not listed by `list_secrets()`.
+* Grant the managed identity for the compute cluster access to the secrets stored in key vault. The method used to grant access depends on how your key vault is configured:
+
+ * [Azure role-based access control (Azure RBAC)](/azure/key-vault/general/rbac-guide): When configured for Azure RBAC, add the managed identity to the __Key Vault Secrets User__ role on your key vault.
+ * [Azure Key Vault access policy](/azure/key-vault/general/assign-access-policy): When configured to use access policies, add a new policy that grants the __get__ operation for secrets and assign it to the managed identity.
+
+* A stored secret value in the key vault. This value can then be retrieved using a key. For more information, see [Quickstart: Set and retrieve a secret from Azure Key Vault](/azure/key-vault/secrets/quick-create-python). A minimal sketch of setting a secret appears after this list.
+
+ > [!TIP]
+ > The quickstart link is to the steps for using the Azure Key Vault Python SDK. In the table of contents in the left navigation area are links to other ways to set a key.
+
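As a minimal sketch of storing a test secret from your local machine with the Azure Key Vault Secrets client library (the vault URL, secret name, and value are placeholders; this mirrors the linked quickstart rather than an Azure Machine Learning API):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL; use the URI of your own key vault.
secret_client = SecretClient(
    vault_url="https://my-key-vault.vault.azure.net/",
    credential=DefaultAzureCredential(),
)

# Store (or update) the secret that the training job will later read.
secret_client.set_secret("secret-name", "secret-value")
```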
+## Getting secrets
+
+1. Add the `azure-keyvault-secrets` and `azure-identity` packages to the [Azure Machine Learning environment](concept-environments.md) used when training the model. For example, by adding them to the conda file used to build the environment.
-## Get secrets
+ The environment is used to build the Docker image that the training job runs in on the compute cluster.
-In your local code, you can use the [`get_secret()`](/python/api/azureml-core/azureml.core.keyvault.keyvault#get-secret-name-) method to get the secret value by name.
+1. From your training code, use the [Azure Identity SDK](/python/api/overview/azure/identity-readme) and [Key Vault client library](/python/api/overview/azure/keyvault-secrets-readme) to get the managed identity credentials and authenticate to key vault:
-For jobs submitted the [`Experiment.submit`](/python/api/azureml-core/azureml.core.experiment.experiment#submit-config--tags-none-kwargs-) , use the [`get_secret()`](/python/api/azureml-core/azureml.core.run.run#get-secret-name-) method with the [`Run`](/python/api/azureml-core/azureml.core.run%28class%29) class. Because a submitted run is aware of its workspace, this method shortcuts the Workspace instantiation and returns the secret value directly.
+ ```python
+ from azure.identity import DefaultAzureCredential
+ from azure.keyvault.secrets import SecretClient
-```python
-# Code in submitted job
-from azureml.core import Experiment, Run
+ credential = DefaultAzureCredential()
-run = Run.get_context()
-secret_value = run.get_secret(name="mysecret")
-```
+ secret_client = SecretClient(vault_url="https://my-key-vault.vault.azure.net/", credential=credential)
+ ```
-Be careful not to expose the secret value by writing or printing it out.
+1. After authenticating, use the Key Vault client library to retrieve a secret by providing the associated key:
-There is also a batch version, [get_secrets()](/python/api/azureml-core/azureml.core.run.run#get-secrets-secrets-) for accessing multiple secrets at once.
+ ```python
+ secret = secret_client.get_secret("secret-name")
+ print(secret.value)
+ ```
## Next steps
- * [View example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.ipynb)
- * [Learn about enterprise security with Azure Machine Learning](concept-enterprise-security.md)
+For an example of submitting a training job using the Azure Machine Learning Python SDK v2 (preview), see [Train models with the Python SDK v2](how-to-train-sdk.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-manage-compute-instance.md
Last updated 05/02/2022
# Create and manage an Azure Machine Learning compute instance with CLI v1
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
-> * [CLI v1](how-to-create-manage-compute-instance.md)
-> * [CLI v2 (current version)](../how-to-create-manage-compute-instance.md)
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
+> * [v1](how-to-create-manage-compute-instance.md)
+> * [v2 (current version)](../how-to-create-manage-compute-instance.md)
Learn how to create and manage a [compute instance](../concept-compute-instance.md) in your Azure Machine Learning workspace with CLI v1.
In this article, you learn how to:
* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
-* The [Azure CLI extension for Machine Learning service (v1)](reference-azure-machine-learning-cli.md)
+* The [Azure CLI extension for Machine Learning service (v1)](reference-azure-machine-learning-cli.md) or [Azure Machine Learning Python SDK (v1)](/python/api/overview/azure/ml/intro).
[!INCLUDE [cli v1 deprecation](../../../includes/machine-learning-cli-v1-deprecation.md)] + ## Create > [!IMPORTANT]
The dedicated cores per region per VM family quota and total regional quota, whi
The following example demonstrates how to create a compute instance:
+# [Python SDK](#tab/python)
++
+```python
+import datetime
+import time
+
+from azureml.core.compute import ComputeTarget, ComputeInstance
+from azureml.core.compute_target import ComputeTargetException
+
+# Choose a name for your instance
+# Compute instance name should be unique across the azure region
+compute_name = "ci{}".format(ws._workspace_id)[:10]
+
+# Verify that instance does not exist already
+try:
+ instance = ComputeInstance(workspace=ws, name=compute_name)
+ print('Found existing instance, use it.')
+except ComputeTargetException:
+ compute_config = ComputeInstance.provisioning_configuration(
+ vm_size='STANDARD_D3_V2',
+ ssh_public_access=False,
+ # vnet_resourcegroup_name='<my-resource-group>',
+ # vnet_name='<my-vnet-name>',
+ # subnet_name='default',
+ # admin_user_ssh_public_key='<my-sshkey>'
+ )
+ instance = ComputeInstance.create(ws, compute_name, compute_config)
+ instance.wait_for_completion(show_output=True)
+```
+
+For more information on the classes, methods, and parameters used in this example, see the following reference documents:
+
+* [ComputeInstance class](/python/api/azureml-core/azureml.core.compute.computeinstance.computeinstance)
+* [ComputeTarget.create](/python/api/azureml-core/azureml.core.compute.computetarget#create-workspace--name--provisioning-configuration-)
+* [ComputeInstance.wait_for_completion](/python/api/azureml-core/azureml.core.compute.computeinstance(class)#wait-for-completion-show-output-false--is-delete-operation-false-)
++
+# [Azure CLI](#tab/azure-cli)
+ [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)] ```azurecli-interactive
az ml computetarget create computeinstance -n instance -s "STANDARD_D3_V2" -v
For more information, see [Az PowerShell module `az ml computetarget create computeinstance`](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-computeinstance) reference. + ## Manage
Start, stop, restart, and delete a compute instance. A compute instance doesn't
> [!TIP] > The compute instance has 120GB OS disk. If you run out of disk space, [use the terminal](../how-to-access-terminal.md) to clear at least 1-2 GB before you stop or restart the compute instance. Please do not stop the compute instance by issuing sudo shutdown from the terminal. The temp disk size on compute instance depends on the VM size chosen and is mounted on /mnt.
+# [Python SDK](#tab/python)
++
+In the examples below, the name of the compute instance is **instance**.
++
+* Get status
+
+ ```python
+ # get_status() gets the latest status of the ComputeInstance target
+ instance.get_status()
+ ```
+
+* Stop
+
+ ```python
+ # stop() is used to stop the ComputeInstance
+ # Stopping ComputeInstance will stop the billing meter and persist the state on the disk.
+ # Available Quota will not be changed with this operation.
+ instance.stop(wait_for_completion=True, show_output=True)
+ ```
+
+* Start
+
+ ```python
+ # start() is used to start the ComputeInstance if it is in stopped state
+ instance.start(wait_for_completion=True, show_output=True)
+ ```
+
+* Restart
+
+ ```python
+ # restart() is used to restart the ComputeInstance
+ instance.restart(wait_for_completion=True, show_output=True)
+ ```
+
+* Delete
+
+ ```python
+ # delete() is used to delete the ComputeInstance target. Useful if you want to re-use the compute name
+ instance.delete(wait_for_completion=True, show_output=True)
+ ```
+
+# [Azure CLI](#tab/azure-cli)
+ [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)] In the examples below, the name of the compute instance is **instance**
In the examples below, the name of the compute instance is **instance**
For more information, see [Az PowerShell module `az ml computetarget delete computeinstance`](/cli/azure/ml(v1)/computetarget#az-ml-computetarget-delete).
-[Azure RBAC](../../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can terminal in through Jupyter/JupyterLab/RStudio. Compute instance will have single-user sign in and all actions will use that userΓÇÖs identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through public/private key mechanism.
++
+[Azure RBAC](../../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, and restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access. That user has access to Jupyter/JupyterLab/RStudio running on the instance. A compute instance has single-user sign-in, and all actions use that user's identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through a public/private key mechanism.
These actions can be controlled by Azure RBAC: * *Microsoft.MachineLearningServices/workspaces/computes/read*
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-secrets-in-runs.md
+
+ Title: Authentication secrets in training
+
+description: Learn how to pass secrets to training jobs in secure fashion using the Azure Key Vault for your workspace.
++++++ Last updated : 10/21/2021++++
+# Use authentication credential secrets in Azure Machine Learning training jobs
+
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
+> * [v1](how-to-use-secrets-in-runs.md)
+> * [v2 (current version)](../how-to-use-secrets-in-runs.md)
+
+In this article, you learn how to use secrets in training jobs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote job context. Coding such values into training scripts in cleartext is insecure as it would expose the secret.
+
+Instead, your Azure Machine Learning workspace has an associated resource called an [Azure Key Vault](/azure/key-vault/general/overview). Use this Key Vault to pass secrets to remote jobs securely through a set of APIs in the Azure Machine Learning Python SDK.
+
+The standard flow for using secrets is:
+ 1. On local computer, log in to Azure and connect to your workspace.
+ 2. On local computer, set a secret in Workspace Key Vault.
+ 3. Submit a remote job.
+ 4. Within the remote job, get the secret from Key Vault and use it.
+
+## Set secrets
+
+In Azure Machine Learning, the [Keyvault](/python/api/azureml-core/azureml.core.keyvault.keyvault) class contains methods for setting secrets. In your local Python session, first obtain a reference to your workspace Key Vault, and then use the [`set_secret()`](/python/api/azureml-core/azureml.core.keyvault.keyvault#set-secret-name--value-) method to set a secret by name and value. The __set_secret__ method updates the secret value if the name already exists.
+
+```python
+from azureml.core import Workspace
+from azureml.core import Keyvault
+import os
++
+ws = Workspace.from_config()
+my_secret = os.environ.get("MY_SECRET")
+keyvault = ws.get_default_keyvault()
+keyvault.set_secret(name="mysecret", value = my_secret)
+```
+
+Do not put the secret value in your Python code as it is insecure to store it in a file as cleartext. Instead, obtain the secret value from an environment variable, for example an Azure DevOps build secret, or from interactive user input.
+
+You can list secret names using the [`list_secrets()`](/python/api/azureml-core/azureml.core.keyvault.keyvault#list-secrets--) method. There is also a batch version, [set_secrets()](/python/api/azureml-core/azureml.core.keyvault.keyvault#set-secrets-secrets-batch-), that allows you to set multiple secrets at a time.
+
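As a minimal sketch of the batch API, assuming a workspace configuration file is present locally and that the `DB_USERNAME` and `DB_PASSWORD` environment variables are placeholders you have already set:

```python
import os

from azureml.core import Workspace

ws = Workspace.from_config()
keyvault = ws.get_default_keyvault()

# set_secrets() accepts a dictionary of secret names and values; read the values
# from environment variables rather than hard-coding them.
keyvault.set_secrets({
    "db-username": os.environ.get("DB_USERNAME"),
    "db-password": os.environ.get("DB_PASSWORD"),
})

# list_secrets() returns the secrets that were set through the SDK.
for secret in keyvault.list_secrets():
    print(secret)
```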
+> [!IMPORTANT]
+> Using `list_secrets()` will only list secrets created through `set_secret()` or `set_secrets()` using the Azure ML SDK. It will not list secrets created by something other than the SDK. For example, a secret created using the Azure portal or Azure PowerShell will not be listed.
+>
+> You can use [`get_secret()`](#get-secrets) to get a secret value from the key vault, regardless of how it was created. So you can retrieve secrets that are not listed by `list_secrets()`.
+
+## Get secrets
+
+In your local code, you can use the [`get_secret()`](/python/api/azureml-core/azureml.core.keyvault.keyvault#get-secret-name-) method to get the secret value by name.
+
+For jobs submitted using [`Experiment.submit`](/python/api/azureml-core/azureml.core.experiment.experiment#submit-config--tags-none-kwargs-), use the [`get_secret()`](/python/api/azureml-core/azureml.core.run.run#get-secret-name-) method with the [`Run`](/python/api/azureml-core/azureml.core.run%28class%29) class. Because a submitted run is aware of its workspace, this method shortcuts the Workspace instantiation and returns the secret value directly.
+
+```python
+# Code in submitted job
+from azureml.core import Experiment, Run
+
+run = Run.get_context()
+secret_value = run.get_secret(name="mysecret")
+```
+
+Be careful not to expose the secret value by writing or printing it out.
+
+There is also a batch version, [`get_secrets()`](/python/api/azureml-core/azureml.core.run.run#get-secrets-secrets-), for accessing multiple secrets at once.
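+
+As a sketch, inside a submitted job you might retrieve several secrets at once (the secret names here are assumed to have been set earlier with `set_secret()` or `set_secrets()`):
+
+```python
+# Code in submitted job
+from azureml.core import Run
+
+run = Run.get_context()
+secrets = run.get_secrets(secrets=["dbusername", "dbpassword"])
+db_user = secrets["dbusername"]
+db_password = secrets["dbpassword"]
+```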
+
+## Next steps
+
+ * [View example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.ipynb)
+ * [Learn about enterprise security with Azure Machine Learning](../concept-enterprise-security.md)
marketplace Co Sell Solution Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/co-sell-solution-migration.md
Previously updated : 09/27/2021 Last updated : 09/21/2022 # Migration of co-sell solutions from OCP GTM to the commercial marketplace
migrate Discover And Assess Using Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md
Last updated 04/27/2022
-# Discover and assess servers for migration using Private Link
+# Discover and assess servers for migration using Private Link (Preview)
This article describes how to create an Azure Migrate project, set up the Azure Migrate appliance, and use it to discover and assess servers for migration using [Azure Private Link](../private-link/private-endpoint-overview.md). You can use the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
Last updated 4/5/2022
-# Support requirements and considerations
+# Support requirements and considerations for Private endpoint connectivity (Preview)
The article series describes how to use Azure Migrate to discover, assess, and migrate servers over a private network by using [Azure Private Link](../private-link/private-endpoint-overview.md). You can use the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) and [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tools to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
migrate Migrate Servers To Azure Using Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md
Last updated 12/29/2021
-# Migrate servers to Azure using Private Link
+# Migrate servers to Azure using Private Link (Preview)
This article describes how to use Azure Migrate to migrate servers over a private network by using [Azure Private Link](../private-link/private-endpoint-overview.md). You can use the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-azure-ad-authentication.md
+
+ Title: Active Directory authentication - Azure Database for MySQL - Flexible Server Preview
+description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for MySQL flexible server
+++ Last updated : 09/21/2022+++++
+# Active Directory authentication - Azure Database for MySQL - Flexible Server Preview
++
+Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for MySQL Flexible server using identities defined in Azure AD. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
+
+## Benefits
+
+- Authentication of users across Azure Services in a uniform way
+- Management of password policies and password rotation in a single place
+- Multiple forms of authentication supported by Azure Active Directory, which can eliminate the need to store passwords
+- Customers can manage database permissions using external (Azure AD) groups.
+- Azure AD authentication uses MySQL database users to authenticate identities at the database level
+- Support of token-based authentication for applications connecting to Azure Database for MySQL Flexible server
+
+## Use the steps below to configure and use Azure AD authentication
+
+1. Select your preferred authentication method for accessing the MySQL flexible server. By default, the selected authentication is MySQL authentication only. Select Azure Active Directory authentication only or MySQL and Azure Active Directory authentication to enable Azure AD authentication.
+2. Select the user managed identity (UMI) with the following privileges: _User.Read.All, GroupMember.Read.All_ and _Application.Read.ALL_, which can be used to configure Azure AD authentication.
+3. Add an Azure AD Admin. It can be Azure AD users, groups, or security principals, which will have access to Azure Database for MySQL flexible server.
+4. Create database users in your database mapped to Azure AD identities.
+5. Connect to your database by retrieving a token for an Azure AD identity and logging in.
+
+> [!Note]
+> For detailed, step-by-step instructions about how to configure Azure AD authentication with Azure Database for MySQL flexible server, see [Learn how to set up Azure Active Directory authentication for Azure Database for MySQL flexible Server](how-to-azure-ad.md)
+
+## Architecture
+
+User-managed identities are required for Azure Active Directory authentication. When a User-Assigned Identity is linked to the flexible server, the Managed Identity Resource Provider (MSRP) issues a certificate internally to that identity, and when the managed identity is deleted, the corresponding service principal is automatically removed. The service then uses the managed identity to request access tokens for services that support Azure AD authentication. Only a User-assigned Managed Identity (UMI) is currently supported by Azure Database for MySQL-Flexible Server. For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure. Azure takes care of rolling the credentials that are used by the service instance.
+
+The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for MySQL. The arrows indicate communication pathways.
++
+1. Your application can request a token from the Azure Instance Metadata Service identity endpoint.
+2. Using the client ID and certificate, a call is made to Azure AD to request an access token.
+3. A JSON Web Token (JWT) access token is returned by Azure AD.
+4. Your application sends the access token on a call to Azure Database for MySQL flexible server.
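+
+As a minimal sketch of this flow from application code (assuming the `azure-identity` Python package and a user-assigned managed identity whose client ID you substitute for the placeholder below):
+
+```python
+from azure.identity import ManagedIdentityCredential
+
+# Steps 1-3: request an access token for the Azure Database for MySQL resource
+# through the Azure Instance Metadata Service, using the user-assigned identity.
+credential = ManagedIdentityCredential(client_id="<client-id-of-user-assigned-identity>")
+token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")
+
+# Step 4: pass token.token as the password when connecting to the flexible server.
+print("Token acquired, expires at:", token.expires_on)
+```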
+
+## Administrator structure
+
+When using Azure AD authentication, there are two Administrator accounts for the MySQL server; the original MySQL administrator and the Azure AD administrator. Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL Flexible server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL Flexible server. Only one Azure AD administrator (a user or group) can be configured at a time.
++
+Methods of authentication for accessing the MySQL flexible server include:
+- MySQL Authentication only - Create a MySQL admin login and password to access your MySQL server with MySQL authentication.
+- Only Azure AD authentication - Authenticate as an Azure AD admin using an existing Azure AD user or group; the server parameter **aad_auth_only** will be _enabled_.
+- Authentication with MySQL and Azure AD - Authenticate using MySQL admin credentials or as an Azure AD admin using an existing Azure AD user or group; the server parameter **aad_auth_only** will be _disabled_.
+
+## Permissions
+
+To allow the UMI to read from Microsoft Graph as the server identity, the following permissions are required. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role.
+
+These permissions should be granted before you provision your flexible server. After you grant the permissions to the UMI, they're enabled for all servers that are created with the UMI assigned as a server identity.
+
+> [!IMPORTANT]
+> Only a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) or [Privileged Role Administrator](/azure/active-directory/roles/permissions-reference#privileged-role-administrator) can grant these permissions.
+
+- [User.Read.All](/graph/permissions-reference#user-permissions): Allows access to Azure AD user information.
+- [GroupMember.Read.All](/graph/permissions-reference#group-permissions): Allows access to Azure AD group information.
+- [Application.Read.ALL](/graph/permissions-reference#application-resource-permissions): Allows access to Azure AD service principal (application) information.
+
+To create a new Azure AD database user, you must connect as the Azure AD administrator. This is demonstrated in Configure and Login with Azure AD for Azure Database for MySQL.
+
+Any Azure AD authentication is only possible if the Azure AD admin was created for Azure Database for MySQL Flexible server. If the Azure Active Directory admin was removed from the server, existing Azure Active Directory users created previously can no longer connect to the database using their Azure Active Directory credentials.
+
+## Token Validation
+
+Azure AD authentication in Azure Database for MySQL flexible server ensures that the user exists in the MySQL server, and it checks the validity of the token by validating the contents of the token. The following token validation steps are performed:
+
+- Token is signed by Azure AD and has not been tampered with.
+- Token was issued by Azure AD for the tenant associated with the server.
+- Token has not expired.
+- Token is for the Azure Database for MySQL flexible server resource (and not another Azure resource).
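+
+As a non-authoritative sketch, you can inspect (not verify) these claims locally before using a token, for example with the PyJWT package:
+
+```python
+import jwt  # PyJWT; used here only to decode claims, not to verify the signature
+
+def inspect_token(access_token: str) -> None:
+    claims = jwt.decode(access_token, options={"verify_signature": False})
+    print("audience:", claims.get("aud"))  # expected: the ossrdbms-aad resource
+    print("tenant:  ", claims.get("tid"))  # expected: the tenant associated with the server
+    print("expires: ", claims.get("exp"))  # Unix timestamp; the token must not be expired
+```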
+
+## Connecting using Azure AD identities
+
+Azure Active Directory authentication supports the following methods of connecting to a database using Azure AD identities:
+
+- Azure Active Directory Password
+- Azure Active Directory Integrated
+- Azure Active Directory Universal with MFA
+- Using Active Directory Application certificates or client secrets
+- Managed Identity
+
+Once you have authenticated against the Active Directory, you then retrieve a token. This token is your password for logging in.
+
+Please note that management operations, such as adding new users, are only supported for Azure AD user roles at this point.
+
+> [!NOTE]
+> For more details on how to connect with an Active Directory token, see [Configure and sign in with Azure AD for Azure Database for MySQL flexible server](how-to-azure-ad.md).
+
+## Additional considerations
+
+- Only one Azure AD administrator can be configured for an Azure Database for MySQL Flexible server at any time.
+- Only an Azure AD administrator for MySQL can initially connect to the Azure Database for MySQL Flexible server using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL Flexible server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL Flexible server.
+- If a user is deleted from Azure AD, that user will no longer be able to authenticate with Azure AD, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching user will still be in the database, it will not be possible to connect to the server with that user.
+
+> [!NOTE]
+> Sign-in with the deleted Azure AD user is still possible until the token expires (up to 60 minutes from token issuance). If you also remove the user from Azure Database for MySQL, this access is revoked immediately.
+
+- If the Azure AD admin is removed from the server, the server will no longer be associated with an Azure AD tenant, and therefore all Azure AD logins will be disabled for the server. Adding a new Azure AD admin from the same tenant will re-enable Azure AD logins.
+- Azure Database for MySQL Flexible server matches access tokens to the Azure Database for MySQL user using the user's unique Azure AD user ID, as opposed to using the username. This means that if an Azure AD user is deleted in Azure AD and a new user created with the same name, Azure Database for MySQL considers that a different user. Therefore, if a user is deleted from Azure AD and then a new user with the same name added, the new user will not be able to connect with the existing user.
+
+## Next steps
+
+- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for MySQL, see [Set up Azure Active Directory authentication for Azure Database for MySQL flexible server](how-to-azure-ad.md)
mysql How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md
+
+ Title: Set up Azure Active Directory authentication for Azure Database for MySQL flexible server Preview
+description: Learn how to set up Azure Active Directory authentication for Azure Database for MySQL flexible Server
+++ Last updated : 09/21/2022+++++
+# Set up Azure Active Directory authentication for Azure Database for MySQL - Flexible Server Preview
++
+This tutorial shows you how to set up Azure Active Directory authentication for Azure Database for MySQL flexible server.
+
+In this tutorial, you learn how to:
+
+- Configure the Azure AD Admin
+- Connect to Azure Database for MySQL flexible server using Azure AD
+
+## Configure the Azure AD Admin
+
+Only an Azure AD Admin user can create or enable users for Azure AD-based authentication. To create an Azure AD Admin user, follow these steps.
+
+- In the Azure portal, select the instance of Azure Database for MySQL Flexible server that you want to enable for Azure AD.
+
+- Under the Security pane, select Authentication:
+
+- There are three types of authentication available:
+
+    - MySQL authentication only – By default, MySQL uses the built-in mysql_native_password authentication plugin, which performs authentication using the native password hashing method
+
+    - Azure Active Directory authentication only – Only allows authentication with an Azure AD account. Disables mysql_native_password authentication and turns _ON_ the server parameter **aad_auth_only**
+
+    - MySQL and Azure Active Directory authentication – Allows authentication using a native MySQL password or an Azure AD account. Turns _OFF_ the server parameter **aad_auth_only**
+
+- Select Identity – Select/Add the user-assigned managed identity. To allow the UMI to read from Microsoft Graph as the server identity, the following permissions are required. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role.
+
+ - [User.Read.All](/graph/permissions-reference#user-permissions): Allows access to Azure AD user information.
+ - [GroupMember.Read.All](/graph/permissions-reference#group-permissions): Allows access to Azure AD group information.
+ - [Application.Read.ALL](/graph/permissions-reference#application-resource-permissions): Allows access to Azure AD service principal (application) information.
+
+These permissions should be granted before you provision your flexible server. After you grant the permissions to the UMI, they're enabled for all servers that are created with the UMI assigned as a server identity.
+
+> [!IMPORTANT]
+> Only a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) or [Privileged Role Administrator](/azure/active-directory/roles/permissions-reference#privileged-role-administrator) can grant these permissions.
+
+- Select a valid Azure AD user or an Azure AD group in the customer tenant to be Azure AD administrator. Once Azure AD authentication support has been enabled, Azure AD Admins can be added as security principals with permissions to add Azure AD Users to the MySQL server.
+
+ > [!NOTE]
+ > Only one Azure AD admin can be created per MySQL server and selection of another one will overwrite the existing Azure AD admin configured for the server.
+
+## Connect to Azure Database for MySQL flexible server using Azure AD
+
+#### Prerequisites
+
+- An Azure account with an active subscription.
+
+- If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free) before you begin.
+
+ > [!Note]
+ > With an Azure free account, you can now try Azure Database for MySQL - Flexible Server for free for 12 months. For more information, see [Try Flexible Server for free](how-to-deploy-on-azure-free-account.md).
+
+- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
+
+**Step 1: Authenticate with Azure AD**
+
+Start by authenticating with Azure AD using the Azure CLI tool.
+_(This step is not required in Azure Cloud Shell.)_
+
+- Log in to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the ID property, which refers to Subscription ID for your Azure account:
+
+ ```azurecli-interactive
+ az login
+ ```
+
+The command launches a browser window to the Azure AD authentication page, where you enter your Azure AD user ID and password.
+
+- If you have multiple subscriptions, choose the appropriate subscription using the az account set command:
+
+ ```azurecli-interactive
+    az account set --subscription <subscription id>
+ ```
+
+**Step 2: Retrieve Azure AD access token**
+
+Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 1 to access Azure Database for MySQL flexible server.
+
+- Example (for Public Cloud):
+
+ ```azurecli-interactive
+ az account get-access-token --resource https://ossrdbms-aad.database.windows.net
+ ```
+
+- The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
+
+ ```azurecli-interactive
+ az cloud show
+ ```
+
+- For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
+
+ ```azurecli-interactive
+ az account get-access-token --resource-type oss-rdbms
+ ```
+
+- Using PowerShell, you can use the following command to acquire access token:
+
+ ```powershell
+ $accessToken = Get-AzAccessToken -ResourceUrl https://ossrdbms-aad.database.windows.net
+ $accessToken.Token | out-file C:\temp\MySQLAccessToken.txt
+ ```
+
+After authentication is successful, Azure AD will return an access token:
+
+```json
+{
+ "accessToken": "TOKEN",
+ "expiresOn": "...",
+ "subscription": "...",
+ "tenant": "...",
+ "tokenType": "Bearer"
+}
+```
+
+The token is a Base64-encoded string that encodes all the information about the authenticated user and is targeted to the Azure Database for MySQL service.
+
+The access token is valid for between 5 and 60 minutes. We recommend you get the access token just before initiating the login to Azure Database for MySQL Flexible server.
+
+- You can use the following PowerShell command to see the token validity.
+
+ ```powershell
+ $accessToken.ExpiresOn.DateTime
+ ```
+
+**Step 3: Use token as password for logging in with MySQL**
+
+When connecting, you need to use the access token as the MySQL user password. When using GUI clients such as MySQL Workbench, you can use the method described above to retrieve the token.
+
+#### Using MySQL CLI
+When using the CLI, you can use this short-hand to connect:
+
+**Example (Linux/macOS):**
+
+```
+mysql -h mydb.mysql.database.azure.com \
+ --user user@tenant.onmicrosoft.com \
+ --enable-cleartext-plugin \
+ --password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken`
+```
+
+#### Using MySQL Workbench
+
+* Launch MySQL Workbench, select the Database option, and then select "Connect to Database"
+* In the hostname field, enter the MySQL FQDN, for example, mydb.mysql.database.azure.com
+* In the username field, enter the MySQL Azure Active Directory administrator name, appended with the MySQL server name (not the FQDN), for example: user@tenant.onmicrosoft.com@
+* In the password field, select "Store in Vault" and paste in the access token from the file, for example, C:\temp\MySQLAccessToken.txt
+* Select the Advanced tab and ensure that "Enable Cleartext Authentication Plugin" is checked
+* Select OK to connect to the database
+
+#### Important considerations when connecting
+
+* `user@tenant.onmicrosoft.com` is the name of the Azure AD user or group you are trying to connect as
+* Make sure to use the exact way the Azure AD user or group name is spelled
+* Azure AD user and group names are case sensitive
+* When connecting as a group, use only the group name (e.g. `GroupName`)
+* If the name contains spaces, use `\` before each space to escape it
+
+> [!Note]
+> The `enable-cleartext-plugin` setting is required – use a similar configuration with other clients to make sure the token is sent to the server without being hashed.
+
+You are now authenticated to your MySQL flexible server using Azure AD authentication.
+
+## Additional Azure AD Admin commands
+
+- Manage server Active Directory administrator
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin
+ ```
+
+- Create an Active Directory administrator
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin create
+ ```
+
+ _Example: Create Active Directory administrator with user 'john@contoso.com', administrator ID '00000000-0000-0000-0000-000000000000' and identity 'test-identity'_
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin create -g testgroup -s testsvr -u john@contoso.com -i 00000000-0000-0000-0000-000000000000 --identity test-identity
+ ```
+
+- Delete an Active Directory administrator
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin delete
+ ```
+ _Example: Delete Active Directory administrator_
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin delete -g testgroup -s testsvr
+ ```
+
+- List all Active Directory administrators
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin list
+ ```
+ _Example: List Active Directory administrators_
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin list -g testgroup -s testsvr
+ ```
+
+- Get an Active Directory administrator
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin show
+ ```
+
+ _Example: Get Active Directory administrator_
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin show -g testgroup -s testsvr
+ ```
+
+- Wait for the Active Directory administrator to satisfy certain conditions
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin wait
+ ```
+
+ _Examples:_
+ - _Wait until the Active Directory administrator exists_
+
+ ```azurecli-interactive
+ az mysql flexible-server ad-admin wait -g testgroup -s testsvr --exists
+ ```
+
+ - _Wait for the Active Directory administrator to be deleted_
+
+ ```azurecli-interactive
+      az mysql flexible-server ad-admin wait -g testgroup -s testsvr --deleted
+ ```
+
+## Creating Azure AD users in Azure Database for MySQL
+
+To add an Azure AD user to your Azure Database for MySQL database, perform the following steps after connecting:
+
+1. First ensure that the Azure AD user `<user>@yourtenant.onmicrosoft.com` is a valid user in Azure AD tenant.
+2. Sign in to your Azure Database for MySQL instance as the Azure AD Admin user.
+3. Create user `<user>@yourtenant.onmicrosoft.com` in Azure Database for MySQL.
+
+_Example:_
+```sql
+CREATE AADUSER 'user1@yourtenant.onmicrosoft.com';
+```
+
+For user names that exceed 32 characters, it is recommended you use an alias instead, to be used when connecting:
+
+_Example:_
+```sql
+CREATE AADUSER 'userWithLongName@yourtenant.onmicrosoft.com' as 'userDefinedShortName';
+```
+> [!NOTE]
+> 1. MySQL ignores leading and trailing spaces so user name should not have any leading or trailing spaces.
+> 2. Authenticating a user through Azure AD does not give the user any permissions to access objects within the Azure Database for MySQL database. You must grant the user the required permissions manually.
+
+## Creating Azure AD groups in Azure Database for MySQL
+
+To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
+
+_Example:_
+
+```sql
+CREATE AADUSER 'Prod_DB_Readonly';
+```
+
+When logging in, members of the group will use their personal access tokens, but sign in with the group name specified as the username.
+
+## Compatibility with application drivers
+
+Most drivers are supported; however, make sure to use the settings that send the password in clear text, so the token gets sent to the server without modification.
+
+- C/C++
+ - libmysqlclient: Supported
+ - mysql-connector-c++: Supported
+
+- Java
+ - Connector/J (mysql-connector-java): Supported, must utilize `useSSL` setting
+
+- Python
+ - Connector/Python: Supported
+
+- Ruby
+ - mysql2: Supported
+
+- .NET
+ - mysql-connector-net: Supported, need to add plugin for mysql_clear_password
+ - mysql-net/MySqlConnector: Supported
+
+- Node.js
+ - mysqljs: Not supported (does not send token in cleartext without patch)
+ - node-mysql2: Supported
+
+- Perl
+ - DBD::mysql: Supported
+ - Net::MySQL: Not supported
+
+- Go
+ - go-sql-driver: Supported, add `?tls=true&allowCleartextPasswords=true` to connection string
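+
+For example, with Connector/Python the cleartext plugin can be enabled on the connection. This is a sketch only; the host and user values below are placeholders, and the token is acquired as shown earlier in this article:
+
+```python
+import mysql.connector
+from azure.identity import DefaultAzureCredential
+
+# Acquire an Azure AD access token for the Azure Database for MySQL resource.
+token = DefaultAzureCredential().get_token(
+    "https://ossrdbms-aad.database.windows.net/.default"
+)
+
+conn = mysql.connector.connect(
+    host="mydb.mysql.database.azure.com",   # placeholder server name
+    user="user@tenant.onmicrosoft.com",     # Azure AD user or group name
+    password=token.token,                   # the access token is the password
+    auth_plugin="mysql_clear_password",     # send the token without hashing
+    ssl_disabled=False,                     # cleartext authentication requires TLS
+)
+```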
+
+## Next steps
+
+- Review the concepts for [Azure Active Directory authentication with Azure Database for MySQL flexible server](concepts-azure-ad-authentication.md)
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-import-export.md
You can use the **Data Export** pane to export your MySQL data.
You can use the **Data Import** pane to import or restore exported data from the data export operation or from the mysqldump command.
-1. In MySQL Workbench, on the **Navigator** pane, select **Data Export/Restore**.
+1. In MySQL Workbench, on the **Navigator** pane, select **Data Import/Restore**.
1. Select the project folder or self-contained SQL file, select the schema to import into, or select the **New** button to define a new schema. 1. Select **Start Import** to begin the import process.
mysql How To Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-portal.md
Server-level firewall rules can be used to manage access to an Azure Database fo
Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md).
+> [!NOTE]
+> Virtual Network (VNet) rules can only be used on General Purpose or Memory Optimized tiers.
+ ## Create a server-level firewall rule in the Azure portal 1. On the MySQL server page, under Settings heading, click **Connection Security** to open the Connection Security page for the Azure Database for MySQL.
notification-hubs Create Notification Hub Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-template.md
Previously updated : 08/04/2020 Last updated : 09/21/2022 ms.lastreviewed: 05/15/2020
-# Quickstart: Create a notification hub using an ARM template
+# Quickstart: Create a notification hub using a Resource Manager template
Azure Notification Hubs provides an easy-to-use and scaled-out push engine that enables you to send notifications to any platform (iOS, Android, Windows, Kindle, etc.) from any backend (cloud or on-premises). For more information about the service, see [What is Azure Notification Hubs](notification-hubs-push-notification-overview.md). [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-This quickstart uses an Azure Resource Manager template (ARM template) to create an Azure Notification Hubs namespace, and a notification hub named **MyHub** within that namespace.
+This quickstart uses an Azure Resource Manager template to create an Azure Notification Hubs namespace, and a notification hub named **MyHub** within that namespace.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
openshift Configure Azure Ad Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/configure-azure-ad-ui.md
You can use optional claims to:
We'll configure OpenShift to use the `email` claim and fall back to `upn` to set the Preferred Username by adding the `upn` as part of the ID token returned by Azure Active Directory.
-Navigate to **Token configuration (preview)** and click on **Add optional claim**. Select **ID** then check the **email** and **upn** claims.
+Navigate to **Token configuration** and click on **Add optional claim**. Select **ID** then check the **email** and **upn** claims.
![Screenshot that shows the email and upn claims that were added.](media/aro4-ad-tokens.png)
Navigate to **Administration**, click on **Cluster Settings**, then select the *
Scroll down to select **Add** under **Identity Providers** and select **OpenID Connect**. ![Select OpenID Connect from the Identity Providers dropdown](media/aro4-oauth-idpdrop.png)
-Fill in the name as **AAD**, the **Client ID** as the **Application ID** and the **Client Secret**. The **Issuer URL** is formatted as such: `https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. Replace the placeholder with the Tenant ID you retrieved earlier.
+Fill in the name as **AAD**, the **Client ID** as the **Application ID** and the **Client Secret**. The **Issuer URL** is formatted as such: `https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/v2.0`. Replace the placeholder with the Tenant ID you retrieved earlier.
![Fill in OAuth details](media/aro4-oauth-idp-1.png)
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Last updated 08/25/2022
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL.
+## Release: September 2022
+
+* Support for [Fast Restore](./concepts-backup-restore.md)
+* General availability of [Geo-Redundant Backups](./concepts-backup-restore.md)
+
+Please see the [regions](overview.md#azure-regions) where Geo-redundant backup is currently available.
++ ## Release: August 2022 * Support for [PostgreSQL minor version](./concepts-supported-versions.md) 14.4. <sup>$</sup>
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-lineage-user-guide.md
Previously updated : 01/20/2022 Last updated : 09/20/2022 # Microsoft Purview Data Catalog lineage user guide
Data integration and ETL tools can push lineage into Microsoft Purview at execut
| Azure Data Share | [Share snapshot](how-to-link-azure-data-share.md) | ### Data storage systems
-Databases & storage solutions such as Oracle, Teradata, and SAP have query engines to transform data using scripting language. Data lineage from views/stored procedures/etc are collected into Microsoft Purview and stitched with lineage from other systems. Lineage is supported for the following data sources via Microsoft Purview data scan. Learn more about the supported lineage scenarios from the respective article.
+Databases & storage solutions such as Oracle, Teradata, and SAP have query engines to transform data using scripting language. Data lineage information from views/stored procedures/etc is collected into Microsoft Purview and stitched with lineage from other systems. Lineage is supported for the following data sources via Microsoft Purview data scan. Learn more about the supported lineage scenarios from the respective article.
|**Category**| **Data source** | |||
To access lineage information for an asset in Microsoft Purview, follow the step
Microsoft Purview supports asset level lineage for the datasets and processes. To see the asset level lineage go to the **Lineage** tab of the current asset in the catalog. Select the current dataset asset node. By default the list of columns belonging to the data appears in the left pane.
- :::image type="content" source="./media/catalog-lineage-user-guide/view-columns-from-lineage.png" alt-text="Screenshot showing how to select View columns in the lineage page" border="true":::
+ :::image type="content" source="./media/catalog-lineage-user-guide/view-columns-from-lineage-inline.png" alt-text="Screenshot showing how to select View columns in the lineage page." lightbox="./media/catalog-lineage-user-guide/view-columns-from-lineage.png" border="true":::
## Dataset column lineage
To see column-level lineage of a dataset, go to the **Lineage** tab of the curre
1. Once you are in the lineage tab, in the left pane, select the check box next to each column you want to display in the data lineage.
- :::image type="content" source="./media/catalog-lineage-user-guide/select-columns-to-show-in-lineage.png" alt-text="Screenshot showing how to select columns to display in the lineage page." lightbox="./media/catalog-lineage-user-guide/select-columns-to-show-in-lineage.png":::
+ :::image type="content" source="./media/catalog-lineage-user-guide/select-columns-to-show-in-lineage-inline.png" alt-text="Screenshot showing how to select columns to display in the lineage page." lightbox="./media/catalog-lineage-user-guide/select-columns-to-show-in-lineage.png":::
-2. Hover over a selected column on the left pane or in the dataset of the lineage canvas to see the column mapping. All the column instances are highlighted.
+1. Hover over a selected column on the left pane or in the dataset of the lineage canvas to see the column mapping. All the column instances are highlighted.
- :::image type="content" source="./media/catalog-lineage-user-guide/show-column-flow-in-lineage.png" alt-text="Screenshot showing how to hover over a column name to highlight the column flow in a data lineage path." lightbox="./media/catalog-lineage-user-guide/show-column-flow-in-lineage.png":::
+ :::image type="content" source="./media/catalog-lineage-user-guide/show-column-flow-in-lineage-inline.png" alt-text="Screenshot showing how to hover over a column name to highlight the column flow in a data lineage path." lightbox="./media/catalog-lineage-user-guide/show-column-flow-in-lineage.png":::
-3. If the number of columns is larger than what can be displayed in the left pane, use the filter option to select a specific column by name. Alternatively, you can use your mouse to scroll through the list.
+1. If the number of columns is larger than what can be displayed in the left pane, use the filter option to select a specific column by name. Alternatively, you can use your mouse to scroll through the list.
:::image type="content" source="./media/catalog-lineage-user-guide/filter-columns-by-name.png" alt-text="Screenshot showing how to filter columns by column name on the lineage page." lightbox="./media/catalog-lineage-user-guide/filter-columns-by-name.png":::
-4. If the lineage canvas contains more nodes and edges, use the filter to select data asset or process nodes by name. Alternatively, you can use your mouse to pan around the lineage window.
+1. If the lineage canvas contains more nodes and edges, use the filter to select data asset or process nodes by name. Alternatively, you can use your mouse to pan around the lineage window.
:::image type="content" source="./media/catalog-lineage-user-guide/filter-assets-by-name.png" alt-text="Screenshot showing data asset nodes by name on the lineage page." lightbox="./media/catalog-lineage-user-guide/filter-assets-by-name.png":::
-5. Use the toggle in the left pane to highlight the list of datasets in the lineage canvas. If you turn off the toggle, any asset that contains at least one of the selected columns is displayed. If you turn on the toggle, only datasets that contain all of the columns are displayed.
+1. Use the toggle in the left pane to highlight the list of datasets in the lineage canvas. If you turn off the toggle, any asset that contains at least one of the selected columns is displayed. If you turn on the toggle, only datasets that contain all of the columns are displayed.
:::image type="content" source="./media/catalog-lineage-user-guide/use-toggle-to-filter-nodes.png" alt-text="Screenshot showing how to use the toggle to filter the list of nodes on the lineage page." lightbox="./media/catalog-lineage-user-guide/use-toggle-to-filter-nodes.png"::: ## Process column lineage
-Data process can take one or more input datasets to produce one or more outputs. In Microsoft Purview, column level lineage is available for process nodes.
-1. Switch between input and output datasets from a drop down in the columns panel.
-2. Select columns from one or more tables to see the lineage flowing from input dataset to corresponding output dataset.
- :::image type="content" source="./media/catalog-lineage-user-guide/process-column-lineage.png" alt-text="Screenshot showing columns lineage of a process node." lightbox="./media/catalog-lineage-user-guide/process-column-lineage.png":::
+You can also view data processes, like copy activities, in the data catalog. For example, in this lineage flow, select the copy activity:
++
+The copy activity will expand, and then you can select the **Switch to asset** button, which will give you more details about the process itself.
++
+Data process can take one or more input datasets to produce one or more outputs. In Microsoft Purview, column level lineage is available for process nodes.
+
+1. Switch between input and output datasets from a drop-down in the columns panel.
+1. Select columns from one or more tables to see the lineage flowing from input dataset to corresponding output dataset.
+
+ :::image type="content" source="./media/catalog-lineage-user-guide/process-column-lineage-inline.png" alt-text="Screenshot showing columns lineage of a process node." lightbox="./media/catalog-lineage-user-guide/process-column-lineage.png":::
## Browse assets in lineage+ 1. Select **Switch to asset** on any asset to view its corresponding metadata from the lineage view. Doing so is an effective way to browse to another asset in the catalog from the lineage view.
- :::image type="content" source="./media/catalog-lineage-user-guide/select-switch-to-asset.png" alt-text="Screenshot how to select Switch to asset in a lineage data asset." lightbox="./media/catalog-lineage-user-guide/select-switch-to-asset.png":::
+ :::image type="content" source="./media/catalog-lineage-user-guide/select-switch-to-asset-inline.png" alt-text="Screenshot how to select Switch to asset in a lineage data asset." lightbox="./media/catalog-lineage-user-guide/select-switch-to-asset.png":::
-2. The lineage canvas could become complex for popular datasets. To avoid clutter, the default view will only show five levels of lineage for the asset in focus. The rest of the lineage can be expanded by selecting the bubbles in the lineage canvas. Data consumers can also hide the assets in the canvas that are of no interest. To further reduce the clutter, turn off the toggle **More Lineage** at the top of lineage canvas. This action will hide all the bubbles in lineage canvas.
+1. The lineage canvas could become complex for popular datasets. To avoid clutter, the default view will only show five levels of lineage for the asset in focus. The rest of the lineage can be expanded by selecting the bubbles in the lineage canvas. Data consumers can also hide the assets in the canvas that are of no interest. To further reduce the clutter, turn off the toggle **More Lineage** at the top of lineage canvas. This action will hide all the bubbles in lineage canvas.
- :::image type="content" source="./media/catalog-lineage-user-guide/use-toggle-to-hide-bubbles.png" alt-text="Screenshot showing how to toggle More lineage." lightbox="./media/catalog-lineage-user-guide/use-toggle-to-hide-bubbles.png":::
+ :::image type="content" source="./media/catalog-lineage-user-guide/use-toggle-to-hide-bubbles-inline.png" alt-text="Screenshot showing how to toggle More lineage." lightbox="./media/catalog-lineage-user-guide/use-toggle-to-hide-bubbles.png":::
-3. Use the smart buttons in the lineage canvas to get an optimal view of the lineage. Auto layout, Zoom to fit, Zoom in/out, Full screen, and navigation map are available for an immersive lineage experience in the catalog.
+1. Use the smart buttons in the lineage canvas to get an optimal view of the lineage:
+ 1. Full screen
+ 1. Zoom to fit
+ 1. Zoom in/out
+ 1. Auto align
+ 1. Zoom preview
+ 1. And more options:
+ 1. Center the current asset
+ 1. Reset to default view
- :::image type="content" source="./media/catalog-lineage-user-guide/use-lineage-smart-buttons.png" alt-text="Screenshot showing how to select the lineage smart buttons." lightbox="./media/catalog-lineage-user-guide/use-lineage-smart-buttons.png":::
+ :::image type="content" source="./media/catalog-lineage-user-guide/use-lineage-smart-buttons-inline.png" alt-text="Screenshot showing how to select the lineage smart buttons." lightbox="./media/catalog-lineage-user-guide/use-lineage-smart-buttons.png":::
## Next steps
purview Concept Data Lineage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-data-lineage.md
Last updated 09/27/2021
This article provides an overview of data lineage in Microsoft Purview Data Catalog. It also details how data systems can integrate with the catalog to capture lineage of data. Microsoft Purview can capture lineage for data in different parts of your organization's data estate, and at different levels of preparation including: -- Completely raw data staged from various platforms
+- Raw data staged from various platforms
- Transformed and prepared data - Data used by visualization platforms.
-## Use Cases
+## Use cases
-Data lineage is broadly understood as the lifecycle that spans the data's origin, and where it moves over time across the data estate. It is used for different kinds of backwards-looking scenarios such as troubleshooting, tracing root cause in data pipelines and debugging. Lineage is also used for data quality analysis, compliance and "what if" scenarios often referred to as impact analysis. Lineage is represented visually to show data moving from source to destination including how the data was transformed. Given the complexity of most enterprise data environments, these views can be hard to understand without doing some consolidation or masking of peripheral data points.
+Data lineage is broadly understood as the lifecycle that spans the data's origin, and where it moves over time across the data estate. It's used for different kinds of backwards-looking scenarios such as troubleshooting, tracing root cause in data pipelines and debugging. Lineage is also used for data quality analysis, compliance and "what if" scenarios often referred to as impact analysis. Lineage is represented visually to show data moving from source to destination including how the data was transformed. Given the complexity of most enterprise data environments, these views can be hard to understand without doing some consolidation or masking of peripheral data points.
## Lineage experience in Microsoft Purview Data Catalog Microsoft Purview Data Catalog will connect with other data processing, storage, and analytics systems to extract lineage information. The information is combined to represent a generic, scenario-specific lineage experience in the Catalog. Your data estate may include systems doing data extraction, transformation (ETL/ELT systems), analytics, and visualization systems. Each of the systems captures rich static and operational metadata that describes the state and quality of the data within the systems boundary. The goal of lineage in a data catalog is to extract the movement, transformation, and operational metadata from each data system at the lowest grain possible.
The following section covers the details about the granularity of which the line
- Lineage is represented as a graph, typically it contains source and target entities in Data storage systems that are connected by a process invoked by a compute system. - Data systems connect to the data catalog to generate and report a unique object referencing the physical object of the underlying data system for example: SQL Stored procedure, notebooks, and so on.-- High fidelity lineage with additional metadata like ownership is captured to show the lineage in a human readable format for source & target entities. for example: lineage at a hive table level instead of partitions or file level.
+- High fidelity lineage with other metadata like ownership is captured to show the lineage in a human readable format for source & target entities. For example: lineage at a Hive table level instead of partitions or file level.
### Column or attribute level lineage
purview How To Link Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-link-azure-data-factory.md
Follow the steps below to connect an existing data factory to your Microsoft Pur
:::image type="content" source="./media/how-to-link-azure-data-factory/warning-for-disconnect-factory.png" alt-text="Screenshot showing warning to disconnect Azure Data Factory."::: >[!Note]
->We now support adding no more than 10 data factories at once. If you want to add more than 10 data factories at once, please file a support ticket.
+>We support adding up to 10 data factories at once. If you want to add more than 10 data factories, do so in multiple batches of 10.
### How authentication works
In the following example, an Azure Data Lake Gen2 resource set is produced from
[Catalog lineage user guide](catalog-lineage-user-guide.md)
-[Link to Azure Data Share for lineage](how-to-link-azure-data-share.md)
+[Link to Azure Data Share for lineage](how-to-link-azure-data-share.md)
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
Previously updated : 04/29/2022 Last updated : 09/21/2022
For a list of metadata available for Power BI, see our [available metadata docum
|**Scenario** |**Microsoft Purview public access** |**Power BI public access** | **Runtime option** | **Authentication option** | **Deployment checklist** | ||||||| |Public access with Azure integration runtime |Allowed |Allowed |Azure runtime |Delegated authentication | [Deployment checklist](#deployment-checklist) |
-|Public access with self-hosted integration runtime |Allowed |Allowed |Self-hosted runtime |Delegated authentication | [Deployment checklist](#deployment-checklist) |
+|Public access with self-hosted integration runtime |Allowed |Allowed |Self-hosted runtime |Delegated authentication / service principal | [Deployment checklist](#deployment-checklist) |
### Known limitations -- For the cross-tenant scenario, delegated authentication is the only supported option for scanning.
+- For the cross-tenant scenario, delegated authentication and service principal are the only supported authentication options for scanning.
- You can create only one scan for a Power BI data source that is registered in your Microsoft Purview account. - If the Power BI dataset schema isn't shown after the scan, it's due to one of the current limitations with the [Power BI metadata scanner](/power-bi/admin/service-admin-metadata-scanning). - Empty workspaces are skipped.
Use either of the following deployment checklists during the setup, or for troub
1. From the Power BI tenant admin portal, make sure the Power BI tenant is configured to allow a public network. 1. Check your instance of Azure Key Vault to make sure:
- 1. There are no typos in the password.
+ 1. There are no typos in the password or secret.
2. Microsoft Purview managed identity has **get** and **list** access to secrets. 1. Review your credential to validate that the: 1. Client ID matches the _Application (Client) ID_ of the app registration.
- 2. Username includes the user principal name, such as `johndoe@contoso.com`.
+ 2. For **delegated auth**, username includes the user principal name, such as `johndoe@contoso.com`.
1. In the Power BI Azure AD tenant, validate the following Power BI admin user settings: 1. The user is assigned to the Power BI administrator role.
Use either of the following deployment checklists during the setup, or for troub
2. **Implicit grant and hybrid flows** > **ID tokens (used for implicit and hybrid flows)** is selected. 3. **Allow public client flows** is enabled.
+1. In the Power BI tenant's Azure Active Directory, create a security group.
+1. In the Power BI tenant's Azure Active Directory, make sure the [service principal is a member of the new security group](#authenticate-to-power-bi-tenant).
+1. In the Power BI tenant admin portal, validate that [Allow service principals to use read-only Power BI admin APIs](#associate-the-security-group-with-power-bi-tenant) is enabled for the new security group.
+ # [Public access with self-hosted integration runtime](#tab/Scenario2) ### Scan cross-tenant Power BI by using delegated authentication in a public network
Use either of the following deployment checklists during the setup, or for troub
1. Client ID matches the _Application (Client) ID_ of the app registration. 2. Username includes the user principal name, such as `johndoe@contoso.com`.
-1. In the Power BI Azure AD tenant, validate the following Power BI admin user settings:
- 1. The user is assigned to the Power BI administrator role.
- 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
- 3. If the user is recently created, sign in with the user at least once, to make sure that the password is reset successfully, and the user can successfully initiate the session.
- 4. There are no multifactor authentication or conditional access policies enforced on the user.
- 1. In the Power BI Azure AD tenant, validate the following app registration settings: 1. The app registration exists in your Azure AD tenant where the Power BI tenant is located. 2. Under **API permissions**, the following APIs are set up with **read** for **delegated permissions** and **grant admin consent for the tenant**:
Use either of the following deployment checklists during the setup, or for troub
2. **Implicit grant and hybrid flows** > **ID tokens (used for implicit and hybrid flows)** is selected. 3. **Allow public client flows** is enabled.
+1. If delegated authentication is used, in the Power BI Azure AD tenant validate the following Power BI admin user settings:
+ 1. The user is assigned to the Power BI administrator role.
+ 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
+ 3. If the user is recently created, sign in with the user at least once, to make sure that the password is reset successfully, and the user can successfully initiate the session.
+ 4. There are no multifactor authentication or conditional access policies enforced on the user.
+ 1. Validate the following self-hosted runtime settings: 1. The latest version of the [self-hosted runtime](https://www.microsoft.com/download/details.aspx?id=39717) is installed on the VM. 1. Network connectivity from the self-hosted runtime to the Power BI tenant is enabled. 1. Network connectivity from the self-hosted runtime to Microsoft services is enabled. 1. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed.-
+1. In the Power BI tenant's Azure Active Directory, create a security group.
+1. In the Power BI tenant's Azure Active Directory, make sure the [service principal is a member of the new security group](#authenticate-to-power-bi-tenant).
+1. In the Power BI tenant admin portal, validate that [Allow service principals to use read-only Power BI admin APIs](#associate-the-security-group-with-power-bi-tenant) is enabled for the new security group.
## Register the Power BI tenant
Delegated authentication and service principal are the supported options for cross-tenant scanning.
> 1. Confirm you have completed the [deployment checklist for your scenario](#deployment-checklist). > 1. Review the [scan troubleshooting documentation](register-scan-power-bi-tenant-troubleshoot.md).
+### Authenticate to Power BI tenant
+
+In Azure Active Directory Tenant, where Power BI tenant is located:
+
+1. In the [Azure portal](https://portal.azure.com), search for **Azure Active Directory**.
+
+2. Create a new security group in your Azure Active Directory, by following [Create a basic group and add members using Azure Active Directory](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+
+ > [!Tip]
+ > You can skip this step if you already have a security group you want to use.
+
+3. Select **Security** as the **Group Type**.
+
+ :::image type="content" source="./media/setup-power-bi-scan-PowerShell/security-group.png" alt-text="Screenshot of security group type.":::
+
+4. Add your **service principal** to this security group. Select **Members**, then select **+ Add members**.
+
+5. Search for your Microsoft Purview managed identity or service principal and select it.
+
+ :::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-catalog-to-group-by-search.png" alt-text="Screenshot showing how to add catalog by searching for its name.":::
+
+ You should see a success notification showing you that it was added.
+
+ :::image type="content" source="./media/setup-power-bi-scan-PowerShell/success-add-catalog-msi.png" alt-text="Screenshot showing successful addition of catalog managed identity.":::
+
+### Associate the security group with Power BI tenant
+
+1. Log into the [Power BI admin portal](https://app.powerbi.com/admin-portal/tenantSettings).
+
+2. Select the **Tenant settings** page.
+
+ > [!Important]
+ > You need to be a Power BI Admin to see the tenant settings page.
+
+3. Select **Admin API settings** > **Allow service principals to use read-only Power BI admin APIs (Preview)**.
+
+4. Select **Specific security groups**.
+
+ :::image type="content" source="./media/setup-power-bi-scan-PowerShell/allow-service-principals-power-bi-admin.png" alt-text="Image showing how to allow service principals to get read-only Power BI admin API permissions.":::
+
+5. Select **Admin API settings** > **Enhance admin APIs responses with detailed metadata** > Enable the toggle to allow Microsoft Purview Data Map to automatically discover the detailed metadata of Power BI datasets as part of its scans.
+
+ > [!IMPORTANT]
+    > After you update the Admin API settings on your Power BI tenant, wait around 15 minutes before registering a scan and testing the connection.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-sub-artifacts.png" alt-text="Image showing the Power BI admin portal config to enable subartifact scan.":::
+
+ > [!Caution]
+   > When you allow the security group you created (that has your Microsoft Purview managed identity as a member) to use read-only Power BI admin APIs, you also allow it to access the metadata (for example, dashboard and report names, owners, and descriptions) for all of your Power BI artifacts in this tenant. Once the metadata has been pulled into Microsoft Purview, Microsoft Purview's permissions, not Power BI permissions, determine who can see that metadata.
+
+ > [!Note]
+ > You can remove the security group from your developer settings, but the metadata previously extracted won't be removed from the Microsoft Purview account. You can delete it separately, if you wish.
+
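Before creating a scan, you can optionally confirm that the tenant setting has taken effect for your service principal by calling one of the read-only admin APIs directly. This is an illustrative sketch only; the tenant ID, client ID, and client secret are placeholders, and the setting can take roughly 15 minutes to apply.

```powershell
$tenantId = "<Power BI tenant ID>"
$body = @{
    grant_type    = "client_credentials"
    client_id     = "<service principal client (app) ID>"
    client_secret = "<service principal client secret>"
    scope         = "https://analysis.windows.net/powerbi/api/.default"
}

# Acquire an app-only token for the Power BI service.
$token = (Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body).access_token

# A successful response indicates the security group is allowed to call read-only admin APIs.
Invoke-RestMethod -Method Get `
    -Uri "https://api.powerbi.com/v1.0/myorg/admin/groups?`$top=1" `
    -Headers @{ Authorization = "Bearer $token" }
```
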
+### Create scan for cross-tenant using Azure IR with delegated authentication
To create and run a new scan by using the Azure runtime, perform the following steps:

1. Create a user account in the Azure AD tenant where the Power BI tenant is located, and assign the user to the Azure AD role, **Power BI Administrator**. Take note of the username and sign in to change the password.
To create and run a new scan by using the Azure runtime, perform the following s
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan.png" alt-text="Screenshot that shows how to save and run the Power BI source.":::
+### Create scan for cross-tenant using self-hosted IR with service principal
+
+To create and run a new scan by using the self-hosted integration runtime, perform the following steps:
+
+1. Create an app registration in your Azure AD tenant where Power BI is located. Provide a web URL in the **Redirect URI**. Take note of the client ID (app ID).
+
+    :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
+
+1. From the Azure AD dashboard, select the newly created application, and then select **App permissions**. Assign the application the following delegated permissions, and grant admin consent for the tenant:
+
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI and Microsoft Graph.":::
+
+1. From the Azure AD dashboard, select the newly created application, and then select **Authentication**. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-multitenant.png" alt-text="Screenshot of account type support multitenant.":::
+
+1. Under **Implicit grant and hybrid flows**, select **ID tokens (used for implicit and hybrid flows)**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-id-token-hybrid-flows.png" alt-text="Screenshot of ID token hybrid flows.":::
+
+1. Under **Advanced settings**, enable **Allow Public client flows**.
+
+1. In the tenant where Microsoft Purview is created, go to the instance of Azure Key Vault.
+
+1. Select **Settings** > **Secrets**, and then select **+ Generate/Import**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault.png" alt-text="Screenshot of the instance of Azure Key Vault.":::
+
+1. Enter a name for the secret. For **Value**, type the newly created password for the Azure AD user. Select **Create** to complete.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault-secret.png" alt-text="Screenshot that shows how to generate a secret in Azure Key Vault.":::
+
+1. If your key vault isn't connected to Microsoft Purview yet, you need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account).
+
+1. In the Microsoft Purview Studio, go to the **Data map** in the left menu. Go to **Sources**.
+
+1. Select the registered Power BI source from cross-tenant.
+
+1. Select **+ New scan**.
+
+1. Give your scan a name. Then select the option to include or exclude the personal workspaces.
+
+ > [!Note]
+ > If you switch the configuration of a scan to include or exclude a personal workspace, you trigger a full scan of the Power BI source.
+
+1. Select your self-hosted integration runtime from the drop-down list.
+
+1. For the **Credential**, select **Service Principal**, and then select **+ New** to create a new credential.
+
+1. Create a new credential and provide the following required parameters:
+
+ - **Name**: Provide a unique name for credential
+ - **Authentication method**: Service principal
+ - **Tenant ID**: Your Power BI tenant ID
+ - **Client ID**: Use Service Principal Client ID (App ID) you created earlier
+
+1. Select **Test connection** before continuing to the next steps.
+
+ If the test fails, select **View Report** to see the detailed status and troubleshoot the problem:
+
+   1. *Access - Failed* status means that the user authentication failed. Validate that the App ID and secret are correct, and review whether the credential contains the correct client (app) ID from the app registration.
+ 2. *Assets (+ lineage) - Failed* status means that the authorization between Microsoft Purview and Power BI has failed. Make sure that the user is added to the Power BI administrator role, and has the proper Power BI license assigned.
+   3. *Detailed metadata (Enhanced) - Failed* status means that the following setting is disabled in the Power BI admin portal: **Enhance admin APIs responses with detailed metadata**.
+
+1. Set up a scan trigger. Your options are **Recurring** or **Once**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Microsoft Purview scan scheduler.":::
+
+1. On **Review new scan**, select **Save and run** to launch your scan.
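The Key Vault secret referenced in the steps above can also be created from Az PowerShell instead of the portal. This is a minimal sketch assuming the Az.KeyVault module; the vault name, secret name, and secret value are placeholders, and the secret name must match the **Secret name** you pick in the scan credential.

```powershell
# Store the secret used by the scan credential (a client secret or account password).
$secretValue = Read-Host -Prompt "Secret value" -AsSecureString
Set-AzKeyVaultSecret -VaultName "<your key vault name>" `
    -Name "powerbi-scan-secret" `
    -SecretValue $secretValue
```
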
## Next steps

Now that you've registered your source, see the following guides to learn more about Microsoft Purview and your data.
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 07/07/2022 Last updated : 09/21/2022
For a list of metadata available for Power BI, see our [available metadata docum
|**Scenarios** |**Microsoft Purview public access allowed/denied** |**Power BI public access allowed /denied** | **Runtime option** | **Authentication option** | **Deployment checklist** |
|---|---|---|---|---|---|
-|Public access with Azure IR |Allowed |Allowed |Azure Runtime | Microsoft Purview Managed Identity | [Review deployment checklist](#deployment-checklist) |
-|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-|Private access |Allowed |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-|Private access |Denied |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-|Private access |Denied |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
+|Public access with Azure IR |Allowed |Allowed |Azure Runtime | Microsoft Purview Managed Identity | [Review deployment checklist](#deployment-checklist) |
+|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated authentication / Service principal| [Review deployment checklist](#deployment-checklist) |
+|Private access |Allowed |Denied |Self-hosted runtime |Delegated authentication / Service principal| [Review deployment checklist](#deployment-checklist) |
+|Private access |Denied |Allowed |Self-hosted runtime |Delegated authentication / Service principal| [Review deployment checklist](#deployment-checklist) |
+|Private access |Denied |Denied |Self-hosted runtime |Delegated authentication / Service principal| [Review deployment checklist](#deployment-checklist) |
### Known limitations

- If Microsoft Purview or Power BI tenant is protected behind a private endpoint, Self-hosted runtime is the only option to scan.
-- Delegated authentication is the only supported authentication option if self-hosted integration runtime is used during the scan.
+- Delegated authentication and service principal are the only supported authentication options when self-hosted integration runtime is used during the scan.
- You can create only one scan for a Power BI data source that is registered in your Microsoft Purview account.
- If Power BI dataset schema isn't shown after scan, it's due to one of the current limitations with [Power BI Metadata scanner](/power-bi/admin/service-admin-metadata-scanning).
- Empty workspaces are skipped.
Before you start, make sure you have the following prerequisites:
- Managed Identity
- Delegated Authentication
+- Service Principal
## Deployment checklist
Use any of the following deployment checklists during the setup or for troublesh
### Scan same-tenant Power BI using Azure IR and Managed Identity in public network

1. Make sure Power BI and Microsoft Purview accounts are in the same tenant.
1. Make sure Power BI tenant ID is entered correctly during the registration.
1. Make sure your [Power BI Metadata model is up to date by enabling metadata scanning.](/power-bi/admin/service-admin-metadata-scanning-setup#enable-tenant-settings-for-metadata-scanning)
1. From Azure portal, validate if Microsoft Purview account Network is set to public access.
1. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
1. In Azure Active Directory tenant, create a security group.
-1. From Azure Active Directory tenant, make sure [Microsoft Purview account MSI is member of the new security group](#authenticate-to-power-bi-tenant-managed-identity-only).
+
+1. From Azure Active Directory tenant, make sure the [Microsoft Purview account MSI is a member of the new security group](#authenticate-to-power-bi-tenant).
1. On the Power BI Tenant Admin portal, validate if [Allow service principals to use read-only Power BI admin APIs](#associate-the-security-group-with-power-bi-tenant) is enabled for the new security group.

# [Public access with Self-hosted IR](#tab/Scenario2)
-### Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in public network
+### Scan same-tenant Power BI using self-hosted IR with Delegated Authentication or Service Principal in public network
1. Make sure Power BI and Microsoft Purview accounts are in the same tenant.
1. Make sure Power BI tenant ID is entered correctly during the registration.
1. Make sure your [Power BI Metadata model is up to date by enabling metadata scanning.](/power-bi/admin/service-admin-metadata-scanning-setup#enable-tenant-settings-for-metadata-scanning)
1. From Azure portal, validate if Microsoft Purview account Network is set to public access.
1. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
1. Check your Azure Key Vault to make sure:
- 1. There are no typos in the password.
+ 1. There are no typos in the password or secret.
    2. Microsoft Purview Managed Identity has get/list access to secrets.
1. Review your credential to validate:
    1. Client ID matches _Application (Client) ID_ of the app registration.
    2. Username includes the user principal name such as `johndoe@contoso.com`.
-1. Validate Power BI admin user settings to make sure:
- 1. User is assigned to Power BI Administrator role.
- 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
- 3. If user is recently created, sign in with the user at least once to make sure password is reset successfully and user can successfully initiate the session.
- 4. There's no MFA or Conditional Access Policies are enforced on the user.
1. Validate App registration settings to make sure:
    1. App registration exists in your Azure Active Directory tenant.
    2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** are set up with read for the following APIs:
Use any of the following deployment checklists during the setup or for troublesh
        2. Microsoft Graph openid
        3. Microsoft Graph User.Read
    3. Under **Authentication**, **Allow public client flows** is enabled.
-2. Validate Self-hosted runtime settings:
+
+2. If delegated authentication is used, validate Power BI admin user settings to make sure:
+ 1. User is assigned to Power BI Administrator role.
+ 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
+    3. If the user was recently created, sign in with the user at least once to make sure the password is reset successfully and the user can successfully initiate a session.
+    4. No MFA or Conditional Access Policies are enforced on the user.
+
+3. Validate Self-hosted runtime settings:
    1. Latest version of [Self-hosted runtime](https://www.microsoft.com/download/details.aspx?id=39717) is installed on the VM.
    2. Network connectivity from Self-hosted runtime to Power BI tenant is enabled.
    3. Network connectivity from Self-hosted runtime to Microsoft services is enabled.
    4. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed.
+1. In Azure Active Directory tenant, create a security group.
+
+1. From Azure Active Directory tenant, make sure the [Service Principal is a member of the new security group](#authenticate-to-power-bi-tenant).
+
+1. On the Power BI Tenant Admin portal, validate if [Allow service principals to use read-only Power BI admin APIs](#associate-the-security-group-with-power-bi-tenant) is enabled for the new security group.
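For the Key Vault item in this checklist, one way to grant the Microsoft Purview managed identity get/list access to secrets is an access policy. A minimal sketch, assuming the vault uses access policies (not Azure RBAC) and that the Purview account's managed identity appears as a service principal with the account's name; vault and account names are placeholders:

```powershell
# Grant the Purview managed identity read access to secrets in the vault.
$purviewIdentity = Get-AzADServicePrincipal -DisplayName "<your Microsoft Purview account name>"
Set-AzKeyVaultAccessPolicy -VaultName "<your key vault name>" `
    -ObjectId $purviewIdentity.Id `
    -PermissionsToSecrets get,list
```
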
+ # [Private access](#tab/Scenario3)
-### Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in a private network
+### Scan same-tenant Power BI using self-hosted IR with Delegated Authentication or Service Principal in a private network
1. Make sure Power BI and Microsoft Purview accounts are in the same tenant.
1. Make sure Power BI tenant ID is entered correctly during the registration.
1. Make sure your [Power BI Metadata model is up to date by enabling metadata scanning.](/power-bi/admin/service-admin-metadata-scanning-setup#enable-tenant-settings-for-metadata-scanning)
1. Check your Azure Key Vault to make sure:
    1. There are no typos in the password.
    2. Microsoft Purview Managed Identity has get/list access to secrets.
1. Review your credential to validate:
    1. Client ID matches _Application (Client) ID_ of the app registration.
    2. Username includes the user principal name such as `johndoe@contoso.com`.
-1. Validate Power BI admin user to make sure:
- 1. User is assigned to Power BI Administrator role.
- 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
- 3. If user is recently created, sign in with the user at least once to make sure password is reset successfully and user can successfully initiate the session.
- 4. There's no MFA or Conditional Access Policies are enforced on the user.
+
+1. If Delegated Authentication is used, validate Power BI admin user settings to make sure:
+ 1. User is assigned to Power BI Administrator role.
+ 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
+    3. If the user was recently created, sign in with the user at least once to make sure the password is reset successfully and the user can successfully initiate a session.
+    4. No MFA or Conditional Access Policies are enforced on the user.
1. Validate Self-hosted runtime settings:
    1. Latest version of [Self-hosted runtime](https://www.microsoft.com/download/details.aspx?id=39717) is installed on the VM.
    2. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed.
1. Validate App registration settings to make sure:
    1. App registration exists in your Azure Active Directory tenant.
    2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** are set up with read for the following APIs:
Use any of the following deployment checklists during the setup or for troublesh
        2. Microsoft Graph openid
        3. Microsoft Graph User.Read
    3. Under **Authentication**, **Allow public client flows** is enabled.
2. Review network configuration and validate if:
    1. A [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links) is deployed. (Optional)
    2. All required [private endpoints for Microsoft Purview](./catalog-private-link-end-to-end.md) are deployed.
    3. Network connectivity from Self-hosted runtime to Power BI tenant is enabled.
3. Network connectivity from Self-hosted runtime to Microsoft services is enabled through private network.
+1. In Azure Active Directory tenant, create a security group.
+
+1. From Azure Active Directory tenant, make sure the [Service Principal is a member of the new security group](#authenticate-to-power-bi-tenant).
+
+1. On the Power BI Tenant Admin portal, validate if [Allow service principals to use read-only Power BI admin APIs](#associate-the-security-group-with-power-bi-tenant) is enabled for the new security group.
## Register Power BI tenant
This section describes how to register a Power BI tenant in Microsoft Purview fo
> 1. Confirm you have completed the [**deployment checklist for your scenario**](#deployment-checklist).
> 1. Review our [**scan troubleshooting documentation**](register-scan-power-bi-tenant-troubleshoot.md).
-### Scan same-tenant Power BI using Azure IR and Managed Identity
-This is a suitable scenario, if both Microsoft Purview and Power BI tenant are configured to allow public access in the network settings.
-
-#### Authenticate to Power BI tenant-managed identity only
-
-> [!Note]
-> Follow steps in this section, only if you are planning to use **Managed Identity** as authentication option.
+### Authenticate to Power BI tenant
In Azure Active Directory Tenant, where Power BI tenant is located:
In Azure Active Directory Tenant, where Power BI tenant is located:
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/security-group.png" alt-text="Screenshot of security group type.":::
-4. Add your Microsoft Purview managed identity to this security group. Select **Members**, then select **+ Add members**.
+4. Add the relevant identity to the security group:
+
+ - If you are using **Managed Identity** as authentication method, add your Microsoft Purview managed identity to this security group. Select **Members**, then select **+ Add members**.
- :::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-group-member.png" alt-text="Screenshot of how to add the catalog's managed instance to group.":::
+ :::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-group-member.png" alt-text="Screenshot of how to add the catalog's managed instance to group.":::
-5. Search for your Microsoft Purview managed identity and select it.
+   - If you are using **delegated authentication** or **service principal** as authentication method, add your **service principal** to this security group. Select **Members**, then select **+ Add members**.
- :::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-catalog-to-group-by-search.png" alt-text="Screenshot showing how to add catalog by searching for its name.":::
+5. Search for your Microsoft Purview managed identity or service principal and select it.
+
+ :::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-catalog-to-group-by-search.png" alt-text="Screenshot showing how to add catalog by searching for its name.":::
   You should see a success notification showing you that it was added.

   :::image type="content" source="./media/setup-power-bi-scan-PowerShell/success-add-catalog-msi.png" alt-text="Screenshot showing successful addition of catalog managed identity.":::
-#### Associate the security group with Power BI tenant
+### Associate the security group with Power BI tenant
1. Log into the [Power BI admin portal](https://app.powerbi.com/admin-portal/tenantSettings).
In Azure Active Directory Tenant, where Power BI tenant is located:
   > [!Note]
   > You can remove the security group from your developer settings, but the metadata previously extracted won't be removed from the Microsoft Purview account. You can delete it separately, if you wish.
-### Create scan
+### Create scan for same-tenant Power BI using Azure IR and Managed Identity
+This scenario is suitable if both Microsoft Purview and the Power BI tenant are configured to allow public access in the network settings.
To create and run a new scan, do the following:
To create and run a new scan, do the following:
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan-managed-identity.png" alt-text="Screenshot of Save and run Power BI source using Managed Identity.":::
-### Scan same tenant using Self-hosted IR and Delegated authentication
+### Create scan for same-tenant using self-hosted IR with service principal
+
+This scenario can be used when Microsoft Purview, the Power BI tenant, or both are configured to use a private endpoint and deny public access. This option is also applicable if Microsoft Purview and the Power BI tenant are configured to allow public access.
+
+For more information related to Power BI network, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links).
+
+For more information about Microsoft Purview network settings, see [Use private endpoints for your Microsoft Purview account](catalog-private-link.md).
+
+To create and run a new scan, do the following:
+
+1. Create an App Registration in your Azure Active Directory tenant. Provide a web URL in the **Redirect URI**. Take note of the client ID (app ID).
+
+    :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
+
+1. From the Azure Active Directory dashboard, select the newly created application, and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions, and grant admin consent for the tenant:
+
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
+
+1. Under **Advanced settings**, enable **Allow Public client flows**.
+
+2. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu.
+
+1. Navigate to **Sources**.
+
+1. Select the registered Power BI source.
+
+1. Select **+ New scan**.
+
+1. Give your scan a name. Then select the option to include or exclude the personal workspaces.
+
+ >[!Note]
+   > Switching the configuration of a scan to include or exclude a personal workspace will trigger a full scan of the Power BI source.
+
+1. Select your self-hosted integration runtime from the drop-down list.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-shir.png" alt-text="Image showing Power BI scan setup using SHIR for same tenant.":::
+
+1. For the **Credential**, select **Service Principal**, and then select **+ New** to create a new credential.
+
+1. Create a new credential and provide required parameters:
+
+ - **Name**: Provide a unique name for credential
+ - **Authentication method**: Service principal
+ - **Tenant ID**: Your Power BI tenant ID
+ - **Client ID**: Use Service Principal Client ID (App ID) you created earlier
+
+1. Select **Test connection** before continuing to the next steps. If the test fails, select **View Report** to see the detailed status and troubleshoot the problem:
+    1. *Access - Failed* status means that the user authentication failed. Scans using managed identity will always pass because no user authentication is required.
+    2. *Assets (+ lineage) - Failed* status means that the authorization between Microsoft Purview and Power BI has failed. Make sure the service principal is added to the security group associated in the Power BI admin portal.
+    3. *Detailed metadata (Enhanced) - Failed* status means that the following setting is disabled in the Power BI admin portal: **Enhance admin APIs responses with detailed metadata**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-test-connection-status-report.png" alt-text="Screenshot of test connection status report page.":::
+
+1. Set up a scan trigger. Your options are **Recurring** and **Once**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Microsoft Purview scan scheduler.":::
+
+1. On **Review new scan**, select **Save and run** to launch your scan.
+
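The app registration created in the first step of this section can also be scripted. The following sketch assumes the Az.Resources module; the display name is a placeholder, and the redirect URI, the delegated API permissions, and admin consent still need to be configured in the portal as described above.

```powershell
# Create the app registration and a service principal for it.
$app = New-AzADApplication -DisplayName "purview-powerbi-scan"
New-AzADServicePrincipal -ApplicationId $app.AppId

# Generate a client secret for the scan credential.
# The returned object carries the generated secret value (for example, in SecretText on recent Az versions).
$credential = New-AzADAppCredential -ObjectId $app.Id
Write-Host "Client ID (App ID): $($app.AppId)"
```
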
+### Create scan for same-tenant using self-hosted IR with delegated authentication
This scenario can be used when Microsoft Purview and Power BI tenant or both, are configured to use private endpoint and deny public access. Additionally, this option is also applicable if Microsoft Purview and Power BI tenant are configured to allow public access.
To create and run a new scan, do the following:
1. Create a new credential and provide required parameters:

   - **Name**: Provide a unique name for credential
+ - **Authentication method**: Delegated auth
   - **Client ID**: Use Service Principal Client ID (App ID) you created earlier
   - **User name**: Provide the username of Power BI Administrator you created earlier
   - **Password**: Select the appropriate Key vault connection and the **Secret name** where the Power BI account password was saved earlier.
Now that you've registered your source, follow the below guides to learn more ab
- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
-- [Search Data Catalog](how-to-search-catalog.md)
+- [Search Data Catalog](how-to-search-catalog.md)
remote-rendering Convert Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/quickstarts/convert-model.md
You need:
## Azure setup
-If you do not have an account yet, go to [https://azure.microsoft.com/get-started/](https://azure.microsoft.com/get-started/), click on the free account option, and follow the instructions.
+If you don't have an account yet, go to [https://azure.microsoft.com/get-started/](https://azure.microsoft.com/get-started/), select the free account option, and follow the instructions.
Once you have an Azure account, go to [https://portal.azure.com/#home](https://portal.azure.com/#home).

### Storage account creation

To create blob storage, you first need a storage account.
-To create one, click on the "Create a resource" button:
+1. To create one, select "Create a resource":
![Azure - add resource](media/azure-add-a-resource.png)
-From the new screen, choose **Storage** on the left side and then **Storage account - blob, file, table, queue** from the next column:
+2. From the new screen, choose **Storage** on the left side and then **Storage account - blob, file, table, queue** from the next column:
![Azure - add storage](media/azure-add-storage.png)
-Clicking this button will bring up the following screen with storage properties to fill out:
+3. Clicking this button will bring up the following screen with storage properties to fill out:
![Azure Setup](media/azure-setup1.png)
-Fill out the form in the following manner:
+4. Fill out the form in the following manner:
* Create a new Resource Group from the link below the drop-down box and name this **ARR_Tutorial**
* For the **Storage account name**, enter a unique name here. **This name must be globally unique**, otherwise there will be a prompt that informs you that the name is already taken. In the scope of this quickstart, we name it **arrtutorialstorage**. Accordingly, you need to replace it with your name for any occurrence in this quickstart.
-* Select a **location** close to you. Ideally use the same location as used for setting up the rendering in the other quickstart.
+* Select a **Region** close to you. Ideally use the same [region](../reference/regions.md) as used for setting up the rendering in the other quickstart.
* **Performance** set to 'Premium'. 'Standard' works as well, but has lower loading time characteristics when a model is loaded by the runtime.
-* **Account kind** set to 'StorageV2 (general purpose v2)'
-* **Replication** set to 'Read-access geo-redundant storage (RA-GRS)'
-* **Access tier** set to 'Hot'
+* **Premium account type** set to 'Block blobs'
+* **Redundancy** set to 'Zone-redundant storage (ZRS)'
-None of the properties in other tabs have to be changed, so you can proceed with **"Review + create"** and then follow the steps to complete the setup.
+5. None of the properties in other tabs have to be changed, so you can proceed with **"Review + create"** and then follow the steps to complete the setup.
-The website now informs you about the progress of your deployment and reports "Your deployment is complete" eventually. Click on the **"Go to resource"** button for the next steps:
+6. The website now informs you about the progress of your deployment and reports "Your deployment is complete" eventually. Select **"Go to resource"** for the next steps:
![Azure Storage creation complete](./media/storage-creation-complete.png)
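If you'd rather create the storage account from the command line, the portal settings above map roughly to the following Az PowerShell sketch; the resource group, account name, and region follow this quickstart but are placeholders you should adjust.

```powershell
# Create the resource group and a premium block blob storage account with ZRS redundancy.
New-AzResourceGroup -Name "ARR_Tutorial" -Location "southcentralus"
New-AzStorageAccount -ResourceGroupName "ARR_Tutorial" `
    -Name "arrtutorialstorage" `
    -Location "southcentralus" `
    -SkuName Premium_ZRS `
    -Kind BlockBlobStorage
```
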
The website now informs you about the progress of your deployment and reports "Y
Next we need two blob containers, one for input and one for output.
-From the **"Go to resource"** button above, you get to a page with a panel on the left that contains a list menu. In that list under the **"Blob service"** category, click on the **"Containers"** button:
+1. From the **"Go to resource"** button above, you get to a page with a panel on the left that contains a list menu. In that list under the **"Blob service"** category, select **"Containers"**:
![Azure - add Containers](./media/azure-add-containers.png)
-Press the **"+ Container"** button to create the **input** blob storage container.
+2. Press the **"+ Container"** button to create the **input** blob storage container.
Use the following settings when creating it:
* Name = arrinput
* Public access level = Private
-After the container has been created, click **+ Container** again and repeat with these settings for the **output** container:
+3. After the container has been created, select **+ Container** again and repeat with these settings for the **output** container:
* Name = arroutput
* Public access level = Private
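Equivalently, both private containers can be created with Az PowerShell, assuming the storage account created in the previous section:

```powershell
# Create the input and output containers with private access.
$account = Get-AzStorageAccount -ResourceGroupName "ARR_Tutorial" -Name "arrtutorialstorage"
New-AzStorageContainer -Name "arrinput" -Context $account.Context -Permission Off
New-AzStorageContainer -Name "arroutput" -Context $account.Context -Permission Off
```
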
There are three distinct ways to trigger a model conversion:
### 1. Conversion via the ARRT tool
-There is a [UI-based tool called ARRT](./../samples/azure-remote-rendering-asset-tool.md) to start conversions and interact with the rendered result.
+There's a [UI-based tool called ARRT](./../samples/azure-remote-rendering-asset-tool.md) to start conversions and interact with the rendered result.
![ARRT](./../samples/media/azure-remote-rendering-asset-tool.png "ARRT screenshot")

### 2. Conversion via a PowerShell script
-To make it easier to call the asset conversion service, we provide a utility script. It is located in the *Scripts* folder and is called **Conversion.ps1**.
+To make it easier to call the asset conversion service, we provide a utility script. It's located in the *Scripts* folder and is called **Conversion.ps1**.
In particular, this script
-1. uploads all files in a given directory from local disk to the input storage container
-1. calls the [the asset conversion REST API](../how-tos/conversion/conversion-rest-api.md), which will retrieve the data from the input storage container and start a conversion, which will return a conversion ID
-1. poll the conversion status API with the retrieved conversion ID until the conversion process terminates with success or failure
-1. retrieves a link to the converted asset in the output storage
+* uploads all files in a given directory from local disk to the input storage container,
+* calls the [the asset conversion REST API](../how-tos/conversion/conversion-rest-api.md), which will retrieve the data from the input storage container and start a conversion, which will return a conversion ID,
+* polls the conversion status API with the retrieved conversion ID until the conversion process terminates with success or failure,
+* retrieves a link to the converted asset in the output storage.
The script reads its configuration from the file *Scripts\arrconfig.json*. Open that JSON file in a text editor.
The script reads its configuration from the file *Scripts\arrconfig.json*. Open
The configuration within the **accountSettings** group (account ID and key) should be filled out analogously to the credentials in the [Render a model with Unity quickstart](render-model.md). Inside the **assetConversionSettings** group, make sure to change **resourceGroup**, **blobInputContainerName**, and **blobOutputContainerName** as seen above.
-Note that the value for **arrtutorialstorage** needs to be replaced with the unique name you picked during storage account creation.
+The value for **arrtutorialstorage** needs to be replaced with the unique name you picked during storage account creation.
Change **localAssetDirectoryPath** to point to the directory on your disk, which contains the model you intend to convert. Be careful to properly escape backslashes ("\\") in the path using double backslashes ("\\\\").
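As an illustration of the shape of this configuration, the snippet below builds an equivalent file from a PowerShell hashtable. Treat it as a sketch only: keep the exact key names from the *arrconfig.json* that ships with the scripts, since the account-setting key names and the path shown here are placeholders.

```powershell
# Illustrative only - the arrconfig.json in the Scripts folder is the source of truth for key names.
# ConvertTo-Json takes care of escaping the backslashes in the local path.
$config = @{
    accountSettings = @{
        arrAccountId  = "<your Remote Rendering account ID>"    # placeholder key/value
        arrAccountKey = "<your Remote Rendering account key>"   # placeholder key/value
    }
    assetConversionSettings = @{
        resourceGroup           = "ARR_Tutorial"
        blobInputContainerName  = "arrinput"
        blobOutputContainerName = "arroutput"
        localAssetDirectoryPath = "C:\models\robot"             # placeholder path
    }
}
$config | ConvertTo-Json -Depth 4 | Set-Content ".\Scripts\arrconfig.json"
```
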
The conversion script generates a *Shared Access Signature (SAS)* URI for the co
## Optional: Re-creating a SAS URI
-The SAS URI created by the conversion script will only be valid for 24 hours. However, after it expired you do not need to convert your model again. Instead, you can create a new SAS in the portal as described in the next steps:
+The SAS URI created by the conversion script will only be valid for 24 hours. However, after it expires you don't need to convert your model again. Instead, you can create a new SAS in the portal as described in the next steps:
1. Go to the [Azure portal](https://www.portal.azure.com)
-2. Click on your **Storage account** resource:
+2. Select your **Storage account** resource:
![Screenshot that highlights the selected Storage account resource.](./media/portal-storage-accounts.png)
-3. In the following screen, click on **Storage explorer** in the left panel and find your output model (*.arrAsset* file) in the *arroutput* blob storage container. Right-click on the file and select **Get Shared Access Signature** from the context menu:
+3. In the following screen, select **Storage explorer** in the left panel and find your output model (*.arrAsset* file) in the *arroutput* blob storage container. Right-click on the file and select **Get Shared Access Signature** from the context menu:
![Signature Access](./media/portal-storage-explorer.png)
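Alternatively, a new read-only SAS URI can be generated with Az PowerShell instead of the portal. A sketch, assuming the storage account from earlier and a converted blob named *robot.arrAsset* (a placeholder):

```powershell
# Generate a fresh 24-hour, read-only SAS URI for the converted model.
$account = Get-AzStorageAccount -ResourceGroupName "ARR_Tutorial" -Name "arrtutorialstorage"
New-AzStorageBlobSASToken -Container "arroutput" `
    -Blob "robot.arrAsset" `
    -Permission r `
    -ExpiryTime (Get-Date).AddHours(24) `
    -Context $account.Context `
    -FullUri
```
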
security Azure Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-domains.md
This page is a partial list of the Azure domains in use. Some of them are REST A
|[Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md)|*.graph.windows.net / *.onmicrosoft.com|
|[Azure API Management](https://azure.microsoft.com/services/api-management/)|*.azure-api.net|
|[Azure BizTalk Services](https://azure.microsoft.com/pricing/details/biztalk-services/) (retired)|*.biztalk.windows.net|
-|[Azure Blob storage](../../storage/blobs/index.yml)|*.blob.core.windows.net|
+|[Azure Blob storage](../../storage/blobs/storage-blobs-introduction.md)|*.blob.core.windows.net|
|[Azure Cloud Services](../../cloud-services/cloud-services-choose-me.md) and [Azure Virtual Machines](../../virtual-machines/index.yml)|*.cloudapp.net|
|[Azure Cloud Services](../../cloud-services/cloud-services-choose-me.md) and [Azure Virtual Machines](../../virtual-machines/index.yml)|*.cloudapp.azure.com|
|[Azure Container Registry](https://azure.microsoft.com/services/container-registry/)|*.azurecr.io|
|Azure Container Service (ACS) (deprecated)|*.azurecontainer.io|
|[Azure Content Delivery Network (CDN)](https://azure.microsoft.com/services/cdn/)|*.vo.msecnd.net|
+|[Azure Cosmos DB](/azure/cosmos-db/)|*.cosmos.azure.com|
+|[Azure Cosmos DB](/azure/cosmos-db/)|*.documents.azure.com|
|[Azure Files](../../storage/files/storage-files-introduction.md)|*.file.core.windows.net|
|[Azure Front Door](https://azure.microsoft.com/services/frontdoor/)|*.azurefd.net|
+|[Azure Key Vault](../../key-vault/general/overview.md)| *.vault.azure.net|
|Azure Management Services|*.management.core.windows.net|
|[Azure Media Services](https://azure.microsoft.com/services/media-services/)|*.origin.mediaservices.windows.net|
|[Azure Mobile Apps](https://azure.microsoft.com/services/app-service/mobile/)|*.azure-mobile.net|
security Services Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/services-technologies.md
Over time, this list will change and grow, just as Azure does. Make sure to chec
| [Azure SQL Always Encryption](/sql/relational-databases/security/encryption/always-encrypted-database-engine)|Protects sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database or SQL Server databases. |
| [Azure&nbsp;SQL&nbsp;Transparent Data Encryption](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)| A database security feature that encrypts the storage of an entire database. |
| [Azure SQL Database Auditing](/azure/azure-sql/database/auditing-overview)|A database auditing feature that tracks database events and writes them to an audit log in your Azure storage account. |
+| [Virtual network rules](/azure/azure-sql/database/vnet-service-endpoint-rule-overview)|A firewall security feature that controls whether the server for your databases and elastic pools in Azure SQL Database or for your dedicated SQL pool (formerly SQL DW) databases in Azure Synapse Analytics accepts communications that are sent from particular subnets in virtual networks. |
## Identity and access management
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
Previously updated : 07/11/2022 Last updated : 08/23/2022

# Deploy a Service Fabric managed cluster across availability zones

Availability Zones in Azure are a high-availability offering that protects your applications and data from datacenter failures. An Availability Zone is a unique physical location equipped with independent power, cooling, and networking within an Azure region.
Requirements:
If the Public IP resource is not zone resilient, migration of the cluster will cause a brief loss of external connectivity. This is because the migration sets up a new Public IP and updates the cluster FQDN to the new IP. If the Public IP resource is zone resilient, migration will not modify the Public IP resource or FQDN, and there will be no external connectivity impact.
-2) Initiate migration of the underlying storage account created for managed cluster from LRS to ZRS using [live migration](../storage/common/redundancy-migration.md#request-a-live-migration-to-zrs-gzrs-or-ra-gzrs). The resource group of storage account that needs to be migrated would be of the form "SFC_ClusterId"(ex SFC_9240df2f-71ab-4733-a641-53a8464d992d) under the same subscription as the managed cluster resource.
+2) Initiate conversion of the underlying storage account created for managed cluster from LRS to ZRS using [customer-initiated conversion](../storage/common/redundancy-migration.md#customer-initiated-conversion-preview). The resource group of storage account that needs to be migrated would be of the form "SFC_ClusterId"(ex SFC_9240df2f-71ab-4733-a641-53a8464d992d) under the same subscription as the managed cluster resource.
3) Add a new primary node type which spans across availability zones
service-fabric Service Fabric Scale Up Non Primary Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-scale-up-non-primary-node-type.md
+
+ Title: Scale up an Azure Service Fabric non-primary node type
+description: Vertically scale your Service Fabric cluster by adding a new non-primary node type and removing the previous one.
+Last updated : 08/26/2022
+# Scale up a Service Fabric cluster non-primary node type
+
+This article describes how to scale up a Service Fabric cluster non-primary node type with minimal downtime. In-place SKU upgrades aren't supported on Service Fabric cluster nodes, as such operations potentially involve data and availability loss. The safest, most reliable, and recommended method for scaling up a Service Fabric node type is to:
+
+1. Add a new node type to your Service Fabric cluster, backed by your upgraded (or modified) virtual machine scale set SKU and configuration. This step also involves setting up a new load balancer, subnet, and public IP for the scale set.
+
+1. Once both the original and upgraded scale sets are running side by side, migrate the workload by setting placement constraints for applications to the new node type.
+
+1. Verify the cluster is healthy, then remove the original scale set (and related resources) and node state for the deleted nodes.
+
+The following will walk you through the process for updating the VM size and operating system of non-primary node type VMs of a sample cluster with [Silver durability](service-fabric-cluster-capacity.md#durability-characteristics-of-the-cluster), backed by a single scale set with five nodes used as a secondary node type. The primary node type with Service Fabric system services will remain untouched. We'll be upgrading the non-primary node type:
+
+- From VM size *Standard_D2_V2* to *Standard D4_V2*, and
+- From VM operating system *Windows Server 2019 Datacenter* to *Windows Server 2022 Datacenter*.
+
+> [!WARNING]
+> Before attempting this procedure on a production cluster, we recommend that you study the sample templates and verify the process against a test cluster.
+>
+> Do not attempt a non-primary node type scale up procedure if the cluster status is unhealthy, as this will only destabilize the cluster further.
+We'll make use of the step-by-step Azure deployment templates used in the [Scale up a Service Fabric cluster primary node type](service-fabric-scale-up-primary-node-type.md) guide. However, we'll modify them so they aren't specific to primary node types. The templates are [available on GitHub](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary).
+
+## Set up the test cluster
+
+Let's set up the initial Service Fabric test cluster. First, [download](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary) the Azure Resource Manager sample templates that we'll use to complete this scenario.
+
+Next, sign in to your Azure account.
+
+```powershell
+# Sign in to your Azure account
+Login-AzAccount -SubscriptionId "<subscription ID>"
+```
+
+Next open the [*parameters.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary/parameters.json) file and update the value for `clusterName` to something unique (within Azure).
+
+The following commands will guide you through generating a new self-signed certificate and deploying the test cluster. If you already have a certificate you'd like to use, skip to [Use an existing certificate to deploy the cluster](#use-an-existing-certificate-to-deploy-the-cluster).
+
+### Generate a self-signed certificate and deploy the cluster
+
+First, assign the variables you'll need for Service Fabric cluster deployment. Adjust the values for `resourceGroupName`, `certSubjectName`, `parameterFilePath`, and `templateFilePath` for your specific account and environment:
+
+```powershell
+# Assign deployment variables
+$resourceGroupName = "sftestupgradegroup"
+$certOutputFolder = "c:\certificates"
+$certPassword = "Password!1" | ConvertTo-SecureString -AsPlainText -Force
+$certSubjectName = "sftestupgrade.southcentralus.cloudapp.azure.com"
+$parameterFilePath = "C:\parameters.json"
+$templateFilePath = "C:\Initial-TestClusterSetup.json"
+```
+
+> [!NOTE]
+> Ensure that the `certOutputFolder` location exists on your local machine before running the command to deploy a new Service Fabric cluster.
+Then deploy the Service Fabric test cluster:
+
+```powershell
+# Deploy the initial test cluster
+New-AzServiceFabricCluster `
+ -ResourceGroupName $resourceGroupName `
+ -CertificateOutputFolder $certOutputFolder `
+ -CertificatePassword $certPassword `
+ -CertificateSubjectName $certSubjectName `
+ -TemplateFile $templateFilePath `
+ -ParameterFile $parameterFilePath
+```
+
+Once the deployment is complete, locate the *.pfx* file (`$certPfx`) on your local machine and import it to your certificate store:
+
+```powershell
+cd c:\certificates
+$certPfx = ".\sftestupgradegroup20200312121003.pfx"
+Import-PfxCertificate `
+ -FilePath $certPfx `
+ -CertStoreLocation Cert:\CurrentUser\My `
+ -Password (ConvertTo-SecureString Password!1 -AsPlainText -Force)
+```
+
+The operation will return the certificate thumbprint, which you can now use to [connect to the new cluster](#connect-to-the-new-cluster-and-check-health-status) and check its health status. (Skip the following section, which is an alternate approach to cluster deployment.)
+
+### Use an existing certificate to deploy the cluster
+
+Alternately, you can use an existing Azure Key Vault certificate to deploy the test cluster. To do this, you'll need to [obtain references to your Key Vault](#obtain-your-key-vault-references) and certificate thumbprint.
+
+```powershell
+# Key Vault variables
+$certUrlValue = "https://sftestupgradegroup.vault.azure.net/secrets/sftestupgradegroup20200309235308/dac0e7b7f9d4414984ccaa72bfb2ea39"
+$sourceVaultValue = "/subscriptions/########-####-####-####-############/resourceGroups/sftestupgradegroup/providers/Microsoft.KeyVault/vaults/sftestupgradegroup"
+$thumb = "BB796AA33BD9767E7DA27FE5182CF8FDEE714A70"
+```
+
+Next, designate a resource group name for the cluster and set the `templateFilePath` and `parameterFilePath` locations:
+
+> [!NOTE]
+> The designated resource group must already exist and be located in the same region as your Key Vault.
+```powershell
+$resourceGroupName = "sftestupgradegroup"
+$templateFilePath = "C:\Initial-TestClusterSetup.json"
+$parameterFilePath = "C:\parameters.json"
+```
+
+Finally, run the following command to deploy the initial test cluster:
+
+```powershell
+# Deploy the initial test cluster
+New-AzResourceGroupDeployment `
+ -ResourceGroupName $resourceGroupName `
+ -TemplateFile $templateFilePath `
+ -TemplateParameterFile $parameterFilePath `
+ -CertificateThumbprint $thumb `
+ -CertificateUrlValue $certUrlValue `
+ -SourceVaultValue $sourceVaultValue `
+ -Verbose
+```
+
+### Connect to the new cluster and check health status
+
+Connect to the cluster and ensure that all five of its nodes are healthy (substitute the `clusterName` and `thumb` variables with your own values):
+
+```powershell
+# Connect to the cluster
+$clusterName = "sftestupgrade.southcentralus.cloudapp.azure.com:19000"
+$thumb = "BB796AA33BD9767E7DA27FE5182CF8FDEE714A70"
+Connect-ServiceFabricCluster `
+ -ConnectionEndpoint $clusterName `
+ -KeepAliveIntervalInSec 10 `
+ -X509Credential `
+ -ServerCertThumbprint $thumb `
+ -FindType FindByThumbprint `
+ -FindValue $thumb `
+ -StoreLocation CurrentUser `
+ -StoreName My
+# Check cluster health
+Get-ServiceFabricClusterHealth
+```
+
+With that, we're ready to begin the upgrade procedure.
+
+## Deploy a new non-primary node type with an upgraded scale set
+
+In order to upgrade (vertically scale) a node type, we'll first need to deploy a new node type backed by a new scale set and supporting resources. The new scale set will be marked as non-primary (`isPrimary: false`), just like the original scale set. If you want to scale up a primary node type, see [Scale up a Service Fabric cluster primary node type](service-fabric-scale-up-primary-node-type.md). The resources created in the following section will ultimately become the new node type in your cluster, and the original node type resources will be deleted.
+
+### Update the cluster template with the upgraded scale set
+
+Here are the section-by-section modifications of the original cluster deployment template for adding a new node type and supporting resources.
+
+Most of the required changes for this step have already been made for you in the [*Step1-AddPrimaryNodeType.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary/Step1-AddPrimaryNodeType.json) template file. However, an additional change must be made so the template file works for non-primary node types. The following sections will explain these changes in detail, and call outs will be made when you must make a change.
+
+> [!Note]
+> Ensure that you use names that are unique from the original node type, scale set, load balancer, public IP, and subnet of the original non-primary node type, as these resources will be deleted at a later step in the process.
+
+#### Create a new subnet in the existing virtual network
+
+```json
+{
+ "name": "[variables('subnet1Name')]",
+ "properties": {
+ "addressPrefix": "[variables('subnet1Prefix')]"
+ }
+}
+```
+
+#### Create a new public IP with a unique domainNameLabel
+
+```json
+{
+ "apiVersion": "[variables('publicIPApiVersion')]",
+ "type": "Microsoft.Network/publicIPAddresses",
+ "name": "[concat(variables('lbIPName'),'-',variables('vmNodeType1Name'))]",
+ "location": "[variables('computeLocation')]",
+ "properties": {
+ "dnsSettings": {
+ "domainNameLabel": "[concat(variables('dnsName'),'-','nt1')]"
+ },
+ "publicIPAllocationMethod": "Dynamic"
+ },
+ "tags": {
+ "resourceType": "Service Fabric",
+ "clusterName": "[parameters('clusterName')]"
+ }
+}
+```
+
+#### Create a new load balancer for the public IP
+
+```json
+"dependsOn": [
+ "[concat('Microsoft.Network/publicIPAddresses/',concat(variables('lbIPName'),'-',variables('vmNodeType1Name')))]"
+]
+```
+
+#### Create a new virtual machine scale set (with upgraded VM and OS SKUs)
+
+Node Type Ref
+
+```json
+"nodeTypeRef": "[variables('vmNodeType1Name')]"
+```
+
+VM SKU
+
+```json
+"sku": {
+ "name": "[parameters('vmNodeType1Size')]",
+ "capacity": "[parameters('nt1InstanceCount')]",
+ "tier": "Standard"
+}
+```
+
+OS SKU
+
+```json
+"imageReference": {
+ "publisher": "[parameters('vmImagePublisher1')]",
+ "offer": "[parameters('vmImageOffer1')]",
+ "sku": "[parameters('vmImageSku1')]",
+ "version": "[parameters('vmImageVersion1')]"
+}
+```
+
+Also, ensure you include any additional extensions that are required for your workload.
+
+#### Add a new non-primary node type to the cluster
+
+Now that the new node type (vmNodeType1Name) has its own name, subnet, IP, load balancer, and scale set, it can reuse all other variables from the original node type (such as `nt0applicationEndPort`, `nt0applicationStartPort`, and `nt0fabricTcpGatewayPort`).
+
+In the existing template file, the `isPrimary` parameter is set to `true` for the [Scale up a Service Fabric cluster primary node type](service-fabric-scale-up-primary-node-type.md) guide. Change `isPrimary` to `false` for your non-primary node type:
+
+```json
+"name": "[variables('vmNodeType1Name')]",
+"applicationPorts": {
+ "endPort": "[variables('nt0applicationEndPort')]",
+ "startPort": "[variables('nt0applicationStartPort')]"
+},
+"clientConnectionEndpointPort": "[variables('nt0fabricTcpGatewayPort')]",
+"durabilityLevel": "Bronze",
+"ephemeralPorts": {
+ "endPort": "[variables('nt0ephemeralEndPort')]",
+ "startPort": "[variables('nt0ephemeralStartPort')]"
+},
+"httpGatewayEndpointPort": "[variables('nt0fabricHttpGatewayPort')]",
+"isPrimary": false,
+"reverseProxyEndpointPort": "[variables('nt0reverseProxyEndpointPort')]",
+"vmInstanceCount": "[parameters('nt1InstanceCount')]"
+```
+
+Once you've implemented all the changes in your template and parameters files, proceed to the next section to acquire your Key Vault references and deploy the updates to your cluster.
+
+### Obtain your Key Vault references
+
+To deploy the updated configuration, you'll need several references to the cluster certificate stored in your Key Vault. The easiest way to find these values is through Azure portal. You'll need:
+
+* **The Key Vault URL of your cluster certificate.** From your Key Vault in Azure portal, select **Certificates** > *Your desired certificate* > **Secret Identifier**:
+
+ ```powershell
+ $certUrlValue="https://sftestupgradegroup.vault.azure.net/secrets/sftestupgradegroup20200309235308/dac0e7b7f9d4414984ccaa72bfb2ea39"
+ ```
+
+* **The thumbprint of your cluster certificate.** (You probably already have this if you [connected to the initial cluster](#connect-to-the-new-cluster-and-check-health-status) to check its health status.) From the same certificate blade (**Certificates** > *Your desired certificate*) in Azure portal, copy **X.509 SHA-1 Thumbprint (in hex)**:
+
+ ```powershell
+ $thumb = "BB796AA33BD9767E7DA27FE5182CF8FDEE714A70"
+ ```
+
+* **The Resource ID of your Key Vault.** From your Key Vault in Azure portal, select **Properties** > **Resource ID**:
+
+ ```powershell
+ $sourceVaultValue = "/subscriptions/########-####-####-####-############/resourceGroups/sftestupgradegroup/providers/Microsoft.KeyVault/vaults/sftestupgradegroup"
+ ```
+
+### Deploy the updated template
+
+Adjust the `templateFilePath` as needed and run the following command:
+
+```powershell
+# Deploy the new node type and its resources
+$templateFilePath = "C:\Step1-AddPrimaryNodeType.json"
+New-AzResourceGroupDeployment `
+ -ResourceGroupName $resourceGroupName `
+ -TemplateFile $templateFilePath `
+ -TemplateParameterFile $parameterFilePath `
+ -CertificateThumbprint $thumb `
+ -CertificateUrlValue $certUrlValue `
+ -SourceVaultValue $sourceVaultValue `
+ -Verbose
+```
+
+When the deployment completes, check the cluster health again and ensure all nodes on both node types are healthy.
+
+```powershell
+Get-ServiceFabricClusterHealth
+```
+
+### Migrate workloads to the new node type
+++
+Wait until all applications have moved to the new node type and are healthy.
++
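As mentioned at the start of this article, the workload is moved by setting placement constraints that pin application services to the new node type. A minimal sketch follows; the service name is a placeholder, the constraint assumes the new node type is named *nt1vm* as in this walkthrough, and you would repeat the call per service (using `-Stateful` instead of `-Stateless` for stateful services).

```powershell
# Constrain a service to the new node type so Service Fabric moves its instances off nt0vm.
Update-ServiceFabricService -Stateless `
    -ServiceName fabric:/MyApp/MyStatelessService `
    -PlacementConstraints "NodeType == nt1vm" `
    -Force
```
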
+### Disable the nodes in the original node type scale set
+
+Once your workloads have migrated to the new node type, you can disable the nodes of the original scale set.
+
+```powershell
+# Disable the nodes in the original scale set.
+$nodeType = "nt0vm"
+$nodes = Get-ServiceFabricNode
+Write-Host "Disabling nodes..."
+foreach($node in $nodes)
+{
+ if ($node.NodeType -eq $nodeType)
+ {
+ $node.NodeName
+ Disable-ServiceFabricNode -Intent RemoveNode -NodeName $node.NodeName -Force
+ }
+}
+```
+
+Use Service Fabric Explorer to monitor the progression of nodes in the original scale set from *Disabling* to *Disabled* status.
+Wait for all nodes to reach *Disabled* state.
+
+### Stop data on the disabled nodes
+
+Now you can stop data on the disabled nodes.
+
+```powershell
+# Stop data on the disabled nodes.
+foreach($node in $nodes)
+{
+ if ($node.NodeType -eq $nodeType)
+ {
+ $node.NodeName
+ Start-ServiceFabricNodeTransition -Stop -OperationId (New-Guid) -NodeInstanceId $node.NodeInstanceId -NodeName $node.NodeName -StopDurationInSeconds 10000
+ }
+}
+```
+
+## Remove the original node type and clean up its resources
+
+We're ready to remove the original node type and its associated resources to conclude the vertical scaling procedure.
+
+### Remove the original scale set
+
+First remove the node type's backing scale set.
+
+```powershell
+$scaleSetName = "nt0vm"
+$scaleSetResourceType = "Microsoft.Compute/virtualMachineScaleSets"
+Remove-AzResource -ResourceName $scaleSetName -ResourceType $scaleSetResourceType -ResourceGroupName $resourceGroupName -Force
+```
+
+### Delete the original IP and load balancer resources
+
+You can now delete the original IP, and load balancer resources. In this step, you'll also update the DNS name.
+
> [!NOTE]
+> This step is optional if you're already using a *Standard* SKU public IP and load balancer. In this case you could have multiple scale sets / node types under the same load balancer.
Run the following commands, modifying the `$lbName` value as needed.
+
+```powershell
+# Delete the original IP and load balancer resources
+$lbName = "LB-sftestupgrade-nt0vm"
+$lbResourceType = "Microsoft.Network/loadBalancers"
+$ipResourceType = "Microsoft.Network/publicIPAddresses"
+$oldPublicIpName = "PublicIP-LB-FE-nt0vm"
+$newPublicIpName = "PublicIP-LB-FE-nt1vm"
+$oldPublicIP = Get-AzPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $resourceGroupName
+$nonPrimaryDNSName = $oldPublicIP.DnsSettings.DomainNameLabel
+$nonPrimaryDNSFqdn = $oldPublicIP.DnsSettings.Fqdn
+Remove-AzResource -ResourceName $lbName -ResourceType $lbResourceType -ResourceGroupName $resourceGroupName -Force
+Remove-AzResource -ResourceName $oldPublicIpName -ResourceType $ipResourceType -ResourceGroupName $resourceGroupName -Force
+$PublicIP = Get-AzPublicIpAddress -Name $newPublicIpName -ResourceGroupName $resourceGroupName
+$PublicIP.DnsSettings.DomainNameLabel = $nonPrimaryDNSName
+$PublicIP.DnsSettings.Fqdn = $nonPrimaryDNSFqdn
+Set-AzPublicIpAddress -PublicIpAddress $PublicIP
+```
+
+### Remove node state from the original node type
+
+The original node type nodes will now show *Error* for their **Health State**. Remove their node state from the cluster.
+
+```powershell
+# Remove state of the obsolete nodes from the cluster
+$nodeType = "nt0vm"
+$nodes = Get-ServiceFabricNode
+Write-Host "Removing node state..."
+foreach($node in $nodes)
+{
+ if ($node.NodeType -eq $nodeType)
+ {
+ $node.NodeName
+ Remove-ServiceFabricNodeState -NodeName $node.NodeName -Force
+ }
+}
+```
+
+Service Fabric Explorer should now reflect only the five nodes of the new node type (nt1vm), all with Health State values of *OK*. The Cluster Health State, however, will still show *Error*, because the cluster resource still references the original node type. We'll remediate that next by updating the template to reflect the latest changes and redeploying.
+
+### Update the deployment template to reflect the newly scaled-up non-primary node type
+
+Most of the required changes for this step have already been made for you in the [*Step3-CleanupOriginalPrimaryNodeType.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary/Step3-CleanupOriginalPrimaryNodeType.json) template file. However, an additional change must be made so the template file works for non-primary node types. The following sections explain these changes in detail and call out where you must make a change yourself.
+
+#### Update the cluster management endpoint
+
+Update the cluster `managementEndpoint` in the deployment template to reference the new IP address (by replacing *vmNodeType0Name* with *vmNodeType1Name*).
+
+```json
+ "managementEndpoint": "[concat('https://',reference(concat(variables('lbIPName'),'-',variables('vmNodeType1Name'))).dnsSettings.fqdn,':',variables('nt0fabricHttpGatewayPort'))]",
+```
+
+#### Remove the original node type reference
+
+Remove the original node type reference from the Service Fabric resource in the deployment template.
+
+In the existing template file, the `isPrimary` parameter is set to `true` for the [Scale up a Service Fabric cluster primary node type](service-fabric-scale-up-primary-node-type.md) guide. Change `isPrimary` to `false` for your non-primary node type:
+
+```json
+"name": "[variables('vmNodeType0Name')]",
+"applicationPorts": {
+ "endPort": "[variables('nt0applicationEndPort')]",
+ "startPort": "[variables('nt0applicationStartPort')]"
+},
+"clientConnectionEndpointPort": "[variables('nt0fabricTcpGatewayPort')]",
+"durabilityLevel": "Bronze",
+"ephemeralPorts": {
+ "endPort": "[variables('nt0ephemeralEndPort')]",
+ "startPort": "[variables('nt0ephemeralStartPort')]"
+},
+"httpGatewayEndpointPort": "[variables('nt0fabricHttpGatewayPort')]",
+"isPrimary": false,
+"reverseProxyEndpointPort": "[variables('nt0reverseProxyEndpointPort')]",
+"vmInstanceCount": "[parameters('nt0InstanceCount')]"
+```
+
+#### Configure health policies to ignore existing errors
+
+For Silver and higher durability clusters only, update the cluster resource in the template to configure health policies that ignore `fabric:/System` application health. Do this by adding *applicationDeltaHealthPolicies* under the cluster resource properties, as shown below. The policy below ignores existing errors but doesn't allow new health errors.
+
+```json
+"upgradeDescription":
+{
+ "forceRestart": false,
+ "upgradeReplicaSetCheckTimeout": "10675199.02:48:05.4775807",
+ "healthCheckWaitDuration": "00:05:00",
+ "healthCheckStableDuration": "00:05:00",
+ "healthCheckRetryTimeout": "00:45:00",
+ "upgradeTimeout": "12:00:00",
+ "upgradeDomainTimeout": "02:00:00",
+ "healthPolicy": {
+ "maxPercentUnhealthyNodes": 100,
+ "maxPercentUnhealthyApplications": 100
+ },
+ "deltaHealthPolicy":
+ {
+ "maxPercentDeltaUnhealthyNodes": 0,
+ "maxPercentUpgradeDomainDeltaUnhealthyNodes": 0,
+ "maxPercentDeltaUnhealthyApplications": 0,
+ "applicationDeltaHealthPolicies":
+ {
+ "fabric:/System":
+ {
+ "defaultServiceTypeDeltaHealthPolicy":
+ {
+ "maxPercentDeltaUnhealthyServices": 0
+ }
+ }
+ }
+ }
+}
+```
+
+#### Remove supporting resources for the original node type
+
+Remove all other resources related to the original node type from the ARM template and the parameters file. Delete the following:
+
+```json
+ "vmImagePublisher": {
+ "value": "MicrosoftWindowsServer"
+ },
+ "vmImageOffer": {
+ "value": "WindowsServer"
+ },
+ "vmImageSku": {
+ "value": "2019-Datacenter"
+ },
+ "vmImageVersion": {
+ "value": "latest"
+ },
+```
+
+#### Deploy the finalized template
+
+Finally, deploy the modified Azure Resource Manager template.
+
+```powershell
+# Deploy the updated template file
+$templateFilePath = "C:\Step3-CleanupOriginalPrimaryNodeType.json"
+New-AzResourceGroupDeployment `
+ -ResourceGroupName $resourceGroupName `
+ -TemplateFile $templateFilePath `
+ -TemplateParameterFile $parameterFilePath `
+ -CertificateThumbprint $thumb `
+ -CertificateUrlValue $certUrlValue `
+ -SourceVaultValue $sourceVaultValue `
+ -Verbose
+```
+
+> [!NOTE]
+> This step will take a while, usually up to two hours.
+The upgrade changes settings of the *InfrastructureService*; therefore, a node restart is needed. In this case, *forceRestart* is ignored. The `upgradeReplicaSetCheckTimeout` parameter specifies the maximum time that Service Fabric waits for a partition to reach a safe state, if it isn't already in one. Once safety checks pass for all partitions on a node, Service Fabric proceeds with the upgrade on that node. The `upgradeTimeout` parameter can be reduced to 6 hours, but for maximal safety 12 hours should be used.
+
+Once the deployment has completed, verify in Azure portal that the Service Fabric resource Status is *Ready*. Verify you can reach the new Service Fabric Explorer endpoint, the **Cluster Health State** is *OK*, and any deployed applications function properly.
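+
+As a quick supplementary check, you can confirm the new endpoint responds from PowerShell. The FQDN below is a placeholder; substitute the DNS name of your new public IP, and adjust the port if your cluster doesn't use the default HTTP gateway port (19080):
+
+```powershell
+# Sketch: verify the new Service Fabric Explorer endpoint answers on the HTTP gateway port.
+Test-NetConnection -ComputerName "<your-new-dns-name>.<region>.cloudapp.azure.com" -Port 19080
+```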
+
+With that, you've vertically scaled a cluster non-primary node type!
+
+## Next steps
+
+* Learn how to [add a node type to a cluster](virtual-machine-scale-set-scale-node-type-scale-out.md)
+* Learn about [application scalability](service-fabric-concepts-scalability.md).
+* [Scale an Azure cluster in or out](service-fabric-tutorial-scale-cluster.md).
+* [Scale an Azure cluster programmatically](service-fabric-cluster-programmatic-scaling.md) using the fluent Azure compute SDK.
+* [Scale a standalone cluster in or out](service-fabric-cluster-windows-server-add-remove-nodes.md).
service-fabric Service Fabric Scale Up Primary Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-scale-up-primary-node-type.md
Title: Scale up an Azure Service Fabric primary node type description: Vertically scale your Service Fabric cluster by adding a new node type and removing the previous one. --++ Previously updated : 07/11/2022 Last updated : 09/20/2022 # Scale up a Service Fabric cluster primary node type
This article describes how to scale up a Service Fabric cluster primary node typ
The following will walk you through the process for updating the VM size and operating system of primary node type VMs of a sample cluster with [Silver durability](service-fabric-cluster-capacity.md#durability-characteristics-of-the-cluster), backed by a single scale set with five nodes. We'll be upgrading the primary node type: - From VM size *Standard_D2_V2* to *Standard_D4_V2*, and--
+- From VM operating system *Windows Server 2019 Datacenter* to *Windows Server 2022 Datacenter*.
> [!WARNING] > Before attempting this procedure on a production cluster, we recommend that you study the sample templates and verify the process against a test cluster. The cluster may also be unavailable for a short period of time. > > Do not attempt a primary node type scale up procedure if the cluster status is unhealthy, as this will only destabilize the cluster further.
-Here are the step-by-step Azure deployment templates that we'll use to complete this sample upgrade scenario: https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade
+The step-by-step Azure deployment templates that we'll use to complete this sample upgrade scenario are [available on GitHub](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade).
## Set up the test cluster
With that, we're ready to begin the upgrade procedure.
## Deploy a new primary node type with upgraded scale set
-In order to upgrade (vertically scale) a node type, we'll first need to deploy a new node type backed by a new scale set and supporting resources. The new scale set will be marked as primary (`isPrimary: true`), just like the original scale set (unless you're doing a non-primary node type upgrade). The resources created in the following section will ultimately become the new primary node type in your cluster, and the original primary node type resources will be deleted.
+In order to upgrade (vertically scale) a node type, we'll first need to deploy a new node type backed by a new scale set and supporting resources. The new scale set will be marked as primary (`isPrimary: true`), just like the original scale set. If you want to scale up a non-primary node type, see [Scale up a Service Fabric cluster non-primary node type](service-fabric-scale-up-non-primary-node-type.md). The resources created in the following section will ultimately become the new primary node type in your cluster, and the original primary node type resources will be deleted.
### Update the cluster template with the upgraded scale set
The required changes for this step have already been made for you in the [*Step1
"name": "[concat(variables('lbIPName'),'-',variables('vmNodeType1Name'))]", "location": "[variables('computeLocation')]", "properties": {
- "dnsSettings": {
- "domainNameLabel": "[concat(variables('dnsName'),'-','nt1')]"
- },
- "publicIPAllocationMethod": "Dynamic"
+ "dnsSettings": {
+ "domainNameLabel": "[concat(variables('dnsName'),'-','nt1')]"
+ },
+ "publicIPAllocationMethod": "Dynamic"
}, "tags": {
- "resourceType": "Service Fabric",
- "clusterName": "[parameters('clusterName')]"
+ "resourceType": "Service Fabric",
+ "clusterName": "[parameters('clusterName')]"
} } ```
Remove all other resources related to the original node type from the ARM templa
"value": "WindowsServer" }, "vmImageSku": {
- "value": "2016-Datacenter-with-Containers"
+ "value": "2019-Datacenter"
}, "vmImageVersion": { "value": "latest"
static-web-apps Front End Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/front-end-frameworks.md
The intent of the table columns is explained by the following items:
| [Marko](https://markojs.com/) | `public` | n/a | | [Meteor](https://www.meteor.com/) | `bundle` | n/a | | [Mithril](https://mithril.js.org/) | `dist` | n/a |
+| [Next.js](https://nextjs.org/) | `/` | n/a |
| [Polymer](https://www.polymer-project.org/) | `build/default` | n/a | | [Preact](https://preactjs.com/) | `build` | n/a | | [React](https://reactjs.org/) | `build` | n/a |
-| [RedwoodJS](https://redwoodjs.com/) | `web/dist` | `yarn rw build` |
+| [RedwoodJS](https://redwoodjs.com/) | `web/dist` | `yarn rw build web` |
| [Stencil](https://stenciljs.com/) | `www` | n/a | | [Svelte](https://svelte.dev/) | `public` | n/a | | [Three.js](https://threejs.org/) | `/` | n/a |
static-web-apps Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/monitor.md
Use the following steps to add Application Insights monitoring to your static we
The following highlights a few locations in the portal used to inspect aspects of your application's API endpoints. > [!NOTE]
-> For more detail on Application Insights usage, refer to [Where do I see my telemetry?](../azure-monitor/app/app-insights-overview.md#where-do-i-see-my-telemetry).
+> For more detail on Application Insights usage, refer to [Application Insights overview](../azure-monitor/app/app-insights-overview.md).
| Type | Menu location | Description | | | | |
storage Storage Quickstart Blobs Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-ruby.md
Learn how to use Ruby to create, download, and list blobs in a container in Micr
Make sure you have the following additional prerequisites installed: - [Ruby](https://www.ruby-lang.org/en/downloads/)-- [Azure Storage library for Ruby](https://github.com/azure/azure-storage-ruby), using the [RubyGem package](https://rubygems.org/gems/azure-storage-blob):
+- [Azure Storage client library for Ruby](https://github.com/azure/azure-storage-ruby), using the [RubyGem package](https://rubygems.org/gems/azure-storage-blob):
```console gem install azure-storage-blob
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Previously updated : 06/14/2022 Last updated : 09/21/2022
Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets the [Service-Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/) even in the face of failures.
-Azure Storage offers the following types of replication:
+A combination of three factors determines how your storage account is replicated and accessible:
-- Locally redundant storage (LRS)-- Zone-redundant storage (ZRS)-- Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)-- Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
+- **Zone redundancy** - whether data is replicated between different zones within the primary region (LRS vs. ZRS)
+- **Geo-redundancy** - replication within a single "local" region or between different regions (LRS vs. GRS)
+- **Read access (RA)** - read access to the secondary region in the event of a failover when geo-redundancy is used (GRS vs. RA-GRS)
-For an overview of each of these options, see [Azure Storage redundancy](storage-redundancy.md).
+For an overview of all of the redundancy options, see [Azure Storage redundancy](storage-redundancy.md).
-## Switch between types of replication
+In this article, you will learn how to change the replication setting(s) for an existing storage account.
-You can switch a storage account from one type of replication to any other type, but some scenarios are more straightforward than others. If you want to add or remove geo-replication or read access to the secondary region, you can use the Azure portal, PowerShell, or Azure CLI to update the replication setting in some scenarios; other scenarios require a manual or live migration. If you want to change how data is replicated in the primary region, by moving from LRS to ZRS or vice versa, then you must either perform a manual migration or request a live migration. And if you want to move from ZRS to GZRS or RA-GZRS, then you must perform a live migration, unless you are performing a failback operation after failover.
+## Options for changing the replication type
-The following table provides an overview of how to switch from each type of replication to another:
+You can change how your storage account is replicated from any type to any other. There are four basic ways to change the settings:
-| Switching | …to LRS | …to GRS/RA-GRS | …to ZRS | …to GZRS/RA-GZRS |
+- [Use the Azure portal, Azure PowerShell, or the Azure CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli)
+- [Initiate a conversion from within the Azure portal (preview)](#customer-initiated-conversion-preview)
+- [Request a conversion by creating a support request with Microsoft](#support-requested-conversion)
+- [Perform a manual migration](#manual-migration)
+
+To add or remove geo-replication or read access to the secondary region, you can simply [change the replication setting using the portal, PowerShell, or the CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli).
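+
+For example, here's a minimal Azure PowerShell sketch that switches an existing account from LRS to RA-GRS; the resource group and storage account names are placeholders:
+
+```powershell
+# Sketch: enable geo-redundancy with read access to the secondary region.
+Set-AzStorageAccount `
+    -ResourceGroupName "<resource_group>" `
+    -Name "<storage_account>" `
+    -SkuName "Standard_RAGRS"
+```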
+
+To add or remove zone-redundancy requires using either [customer-initiated conversion (preview)](#customer-initiated-conversion-preview), [support-requested conversion](#support-requested-conversion), or [a manual migration](#manual-migration).
+
+During a conversion, you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the conversion process and there is no data loss. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the conversion.
+
+Performing a manual migration involves downtime and requires the most manual effort, but you have more control over the timing of the process.
+
+If you want to change how data is replicated in the primary region and also configure geo-replication or read-access, a two-step process is required. Geo-redundancy and read-access can be changed at the same time, but zone-redundancy must be changed separately. It doesn't matter which is done first.
+
+> [!NOTE]
+> While Microsoft handles your request for a conversion promptly, there's no guarantee as to when it will complete. If you need your data converted by a certain date, Microsoft recommends that you perform a manual migration instead.
+>
+> Generally, the more data you have in your account, the longer it takes to replicate that data to other zones in the region.
+
+### Replication change table
+
+The following table provides an overview of how to switch from each type of replication to another.
+
+> [!NOTE]
+> Manual migration is an option for any scenario in which you want to change the replication setting within the [limitations for changing replication types](#limitations-for-changing-replication-types), so that option has been omitted from the table below to simplify it.
+>
+> Also, some changes noted in the table involve a two-step process such as switching from LRS to GRS/RA-GRS first, then converting to GZRS/RA-GZRS. The order of the steps doesn't matter. You could also convert from LRS to ZRS first, then switch to GZRS/RA-GZRS. The switch is listed first in the table because it appears to occur almost instantaneously, while the conversion typically takes much longer. Performing the faster change first allows you to initiate both required changes around the same time and not have to wait for the longer change to complete before proceeding with the other one.
+
+| Switching | …to LRS | …to GRS/RA-GRS <sup>6</sup> | …to ZRS | …to GZRS/RA-GZRS <sup>6</sup> |
|--|-||-||
-| <b>…from LRS</b> | N/A | Use Azure portal, PowerShell, or CLI to change the replication setting<sup>1,2</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Request a live migration<sup>5</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Switch to GRS/RA-GRS first and then request a live migration<sup>3</sup> |
-| <b>…from GRS/RA-GRS</b> | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A | Perform a manual migration <br /><br /> OR <br /><br /> Switch to LRS first and then request a live migration<sup>3</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Request a live migration<sup>3</sup> |
-| <b>…from ZRS</b> | Perform a manual migration | Perform a manual migration | N/A | Request a live migration<sup>3</sup> <br /><br /> OR <br /><br /> Use Azure Portal, PowerShell or Azure CLI to change the replication setting as part of a failback operation only<sup>4</sup> |
-| <b>…from GZRS/RA-GZRS</b> | Perform a manual migration | Perform a manual migration | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A |
+| **…from LRS** | **N/A** | [Use Azure portal, PowerShell, or CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli)<sup>1,2</sup> | [Customer-initiated conversion](#customer-initiated-conversion-preview)<sup>3,5</sup><br>**- or -**</br>[Support-requested conversion](#support-requested-conversion)<sup>3,5</sup> | [Switch to GRS/RA-GRS first](#change-the-replication-setting-using-the-portal-powershell-or-the-cli)<sup>1,2</sup>, then perform a conversion to GZRS/RA-GZRS using:<br><br>[Customer-initiated conversion](#customer-initiated-conversion-preview)<sup>3,5</sup><br>**- or -**</br>[Support-requested conversion](#support-requested-conversion)<sup>3,5</sup> |
+| **…from GRS/RA-GRS** | [Use Azure portal, PowerShell, or CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli) | **N/A** | [Switch to LRS first](#change-the-replication-setting-using-the-portal-powershell-or-the-cli), then perform a conversion to ZRS using:<br><br>[Customer-initiated conversion](#customer-initiated-conversion-preview)<sup>3,5</sup><br>**- or -**</br>[Support-requested conversion](#support-requested-conversion)<sup>3,5</sup> | [Customer-initiated conversion](#customer-initiated-conversion-preview)<sup>3,5</sup><br>**- or -**</br>[Support-requested conversion](#support-requested-conversion)<sup>3,5</sup> |
+| **…from ZRS** | [Customer-initiated conversion](#customer-initiated-conversion-preview)<sup>3</sup> | [Switch to GZRS/RA-GZRS first](#change-the-replication-setting-using-the-portal-powershell-or-the-cli)<sup>1,2</sup>, then perform a conversion to GRS/RA-GRS using:<br><br>[Customer-initiated conversion](#customer-initiated-conversion-preview)<sup>3</sup> | **N/A** | [Use Azure portal, PowerShell, or CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli)<sup>2</sup> |
+| **…from GZRS/RA-GZRS** | [Switch to ZRS first](#change-the-replication-setting-using-the-portal-powershell-or-the-cli), then perform a conversion to LRS using:<br><br>[Customer-initiated conversion](#customer-initiated-conversion-preview)<sup>3</sup> | [Customer-initiated conversion](#customer-initiated-conversion-preview)<sup>3</sup> | [Use Azure portal, PowerShell, or CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli)| **N/A** |
<sup>1</sup> Incurs a one-time egress charge.<br />
-<sup>2</sup> Migrating from LRS to GRS is not supported if the storage account contains blobs in the archive tier.<br />
-<sup>3</sup> Live migration is supported for standard general-purpose v2 and premium file share storage accounts. Live migration is not supported for premium block blob or page blob storage accounts.<br />
+<sup>2</sup> Switching to geo-redundancy is not supported if the storage account contains blobs in the archive tier.<br />
+<sup>3</sup> Conversion is supported for standard general-purpose v2 and premium file share storage accounts. It is not supported for premium block blob or page blob storage accounts.<br />
<sup>4</sup> After an account failover to the secondary region, it's possible to initiate a fail back from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [Use caution when failing back to the original primary](storage-disaster-recovery-guidance.md#use-caution-when-failing-back-to-the-original-primary). <br />
-<sup>5</sup> Migrating from LRS to ZRS is not supported if the NFSv3 protocol support is enabled for Azure Blob Storage or if the storage account contains Azure Files NFSv4.1 shares. <br />
+<sup>5</sup> Converting from LRS to ZRS is not supported if the NFSv3 protocol support is enabled for Azure Blob Storage or if the storage account contains Azure Files NFSv4.1 shares. <br />
+<sup>6</sup> Even though enabling geo-redundancy appears to occur instantaneously, failover to the secondary region cannot be initiated until data synchronization between the two regions has completed.
-> [!CAUTION]
-> If you performed an [account failover](storage-disaster-recovery-guidance.md) for your (RA-)GRS or (RA-)GZRS account, the account is locally redundant (LRS) in the new primary region after the failover. Live migration to ZRS or GZRS for an LRS account resulting from a failover is not supported. This is true even in the case of so-called failback operations. For example, if you perform an account failover from RA-GZRS to the LRS in the secondary region, and then configure it again to RA-GRS and perform another account failover to the original primary region, you can't contact support for the original live migration to RA-GZRS in the primary region. Instead, you'll need to perform a manual migration to ZRS or GZRS.
+## Change the replication setting
-To change the redundancy configuration for a storage account that contains blobs in the Archive tier, you must first rehydrate all archived blobs to the Hot or Cool tier. Microsoft recommends that you avoid changing the redundancy configuration for a storage account that contains archived blobs if at all possible, because rehydration operations can be costly and time-consuming.
+Depending on your scenario from the table above, use one of the methods below to change your replication settings.
-## Change the replication setting
+### Change the replication setting using the portal, PowerShell, or the CLI
-You can use the Azure portal, PowerShell, or Azure CLI to change the replication setting for a storage account, as long as you are not changing how data is replicated in the primary region. If you are migrating from LRS in the primary region to ZRS in the primary region or vice versa, then you must perform either a manual migration or a live migration.
+In most cases you can use the Azure portal, PowerShell, or the Azure CLI to change the geo-redundant or read access (RA) replication setting for a storage account. If you are initiating a zone redundancy conversion, you can change the setting from within the Azure portal, but not from PowerShell or the Azure CLI.
-Changing how your storage account is replicated does not result in down time for your applications.
+Changing how your storage account is replicated in the portal does not result in downtime for your applications. This includes changes that require a conversion.
# [Portal](#tab/portal) To change the redundancy option for your storage account in the Azure portal, follow these steps: 1. Navigate to your storage account in the Azure portal.
-1. Under **Settings** select **Configuration**.
-1. Update the **Replication** setting.
+1. Under **Data management** select **Redundancy**.
+1. Update the **Redundancy** setting.
+1. Select **Save**.
:::image type="content" source="media/redundancy-migration/change-replication-option.png" alt-text="Screenshot showing how to change replication option in portal." lightbox="media/redundancy-migration/change-replication-option.png":::
az storage account update \
-## Perform a manual migration to ZRS, GZRS, or RA-GZRS
+### Perform a conversion
-If you want to change how data in your storage account is replicated in the primary region, by moving from LRS to ZRS or vice versa, then you may opt to perform a manual migration. A manual migration provides more flexibility than a live migration. You control the timing of a manual migration, so use this option if you need the migration to complete by a certain date.
+Converting your storage account to add or remove zone-redundancy makes the change without incurring any downtime.
-When you perform a manual migration from LRS to ZRS in the primary region or vice versa, the destination storage account can be geo-redundant and can also be configured for read access to the secondary region. For example, you can migrate an LRS account to a GZRS or RA-GZRS account in one step.
+During a conversion, you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the process and there is no data loss associated with a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the conversion.
-You cannot use a manual migration to migrate from ZRS to GZRS or RA-GZRS. You must request a live migration.
+There are two ways to initiate a conversion:
-A manual migration can result in application downtime. If your application requires high availability, Microsoft also provides a live migration option. A live migration is an in-place migration with no downtime.
+- [Customer-initiated](#customer-initiated-conversion-preview)
+- [Support-requested](#support-requested-conversion)
-With a manual migration, you copy the data from your existing storage account to a new storage account that uses ZRS in the primary region. To perform a manual migration, you can use one of the following options:
+#### Customer-initiated conversion (preview)
-- Copy data by using an existing tool such as AzCopy, one of the Azure Storage client libraries, or a reliable third-party tool.-- If you're familiar with Hadoop or HDInsight, you can attach both the source storage account and destination storage account account to your cluster. Then, parallelize the data copy process with a tool like DistCp.
+> [!IMPORTANT]
+> Customer initiated conversion is currently in preview and available in all public ZRS regions except for the following:
+>
+> - (Europe) West Europe
+> - (Europe) UK South
+> - (North America) Canada Central
+> - (North America) East US
+> - (North America) East US 2
+>
+> This preview version is provided without a service level agreement, and might not be suitable for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Request a live migration to ZRS, GZRS, or RA-GZRS
+Customer-initiated conversion adds a new option for customers to start a conversion. Now, instead of needing to open a support request, customers can start the conversion directly from within the Azure portal. Once initiated, the conversion could still take up to 72 hours to actually begin, but potential delays related to opening and managing a support request are eliminated.
-If you need to migrate your storage account from LRS to ZRS in the primary region with no application downtime, you can request a live migration from Microsoft. To migrate from LRS to GZRS or RA-GZRS, first switch to GRS or RA-GRS and then request a live migration. Similarly, you can request a live migration from ZRS, GRS, or RA-GRS to GZRS or RA-GZRS. To migrate from GRS or RA-GRS to ZRS, first switch to LRS, then request a live migration.
+Customer-initiated conversion is only available from the Azure portal, not from PowerShell or the Azure CLI. To initiate the conversion, perform the same steps used for changing other replication settings in the Azure portal as described in [Change the replication setting using the portal, PowerShell, or the CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli).
-During a live migration, you can access data in your storage account with no loss of durability or availability. The Azure Storage SLA is maintained during the migration process. There is no data loss associated with a live migration. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
+#### Support-requested conversion
-For standard performance, ZRS supports general-purpose v2 accounts only, so make sure to upgrade your storage account if it is a general-purpose v1 account prior to submitting a request for a live migration to ZRS. For more information, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md). A storage account must contain data to be migrated via live migration.
+Customers can still request a conversion by opening a support request with Microsoft.
-For premium performance, live migration is supported for premium file share accounts, but not for premium block blob or premium page blob accounts.
+> [!IMPORTANT]
+> If you need to convert more than one storage account, create a single support ticket and specify the names of the accounts to convert on the **Additional details** tab.
-If your account uses RA-GRS, then you need to first change your account's replication type to either LRS or GRS before proceeding with a live migration. This intermediary step removes the secondary read-only endpoint provided by RA-GRS.
+Follow these steps to request a conversion from Microsoft:
-While Microsoft handles your request for live migration promptly, there's no guarantee as to when a live migration will complete. If you need your data migrated to ZRS by a certain date, then Microsoft recommends that you perform a manual migration instead. Generally, the more data you have in your account, the longer it takes to migrate that data.
+1. In the Azure portal, navigate to a storage account that you want to convert.
+1. Under **Support + troubleshooting**, select **New Support Request**.
+1. Complete the **Problem description** tab based on your account information:
+ - **Summary**: (some descriptive text).
+ - **Issue type**: Select **Technical**.
+ - **Subscription**: Select your subscription from the drop-down.
+ - **Service**: Select **My Services**, then **Storage Account Management** for the **Service type**.
+ - **Resource**: Select a storage account to convert. If you need to specify multiple storage accounts, you can do so on the **Additional details** tab.
+ - **Problem type**: Choose **Data Migration**.
+ - **Problem subtype**: Choose **Migrate to ZRS, GZRS, or RA-GZRS**.
-You must perform a manual migration if:
+ :::image type="content" source="media/redundancy-migration/request-live-migration-problem-desc-portal.png" alt-text="Screenshot showing how to request a conversion - Problem description tab.":::
+
+1. Select **Next**. The **Recommended solution** tab might be displayed briefly before it switches to the **Solutions** page. On the **Solutions** page, you can check the eligibility of your storage account(s) for conversion:
+ - **Target replication type**: (choose the desired option from the drop-down)
+ - **Storage accounts from**: (enter a single storage account name or a list of accounts separated by semicolons)
+ - Select **Submit**.
-- You want to migrate your data into a ZRS storage account that is located in a region different than the source account.-- Your storage account is a premium page blob or block blob account.-- You want to migrate data from ZRS to LRS, GRS or RA-GRS.-- Your storage account includes data in the archive tier.
+ :::image type="content" source="media/redundancy-migration/request-live-migration-solutions-portal.png" alt-text="Screenshot showing how to check the eligibility of your storage account(s) for conversion - Solutions page.":::
+
+1. Take the appropriate action if the results indicate your storage account is not eligible for conversion. If it is eligible, select **Return to support request**.
+
+1. Select **Next**. If you have more than one storage account to migrate, then on the **Details** tab, specify the name for each account, separated by a semicolon.
+
+ :::image type="content" source="media/redundancy-migration/request-live-migration-details-portal.png" alt-text="Screenshot showing how to request a conversion - Additional details tab.":::
+
+1. Fill out the additional required information on the **Additional details** tab, then select **Review + create** to review and submit your support ticket. A support person will contact you to provide any assistance you may need.
+
+### Manual migration
+
+A manual migration provides more flexibility and control than a conversion. You can use this option if you need the migration to complete by a certain date, or if conversion is [not supported for your scenario](#limitations-for-changing-replication-types). Manual migration is also useful when moving a storage account to another region. See [Move an Azure Storage account to another region](storage-account-move.md) for more details.
+
+You must perform a manual migration if:
-You can request live migration through the [Azure Support portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+- You want to migrate your storage account to a different region.
+- Your storage account is a block blob account.
+- Your storage account includes data in the archive tier and rehydrating the data is not desired.
> [!IMPORTANT]
-> If you need to migrate more than one storage account, create a single support ticket and specify the names of the accounts to convert on the **Details** tab.
+> A manual migration can result in application downtime. If your application requires high availability, Microsoft also provides a [conversion](#perform-a-conversion) option. A conversion is an in-place migration with no downtime.
-Follow these steps to request a live migration:
+With a manual migration, you copy the data from your existing storage account to a new storage account. To perform a manual migration, you can use one of the following options:
-1. In the Azure portal, navigate to a storage account that you want to migrate.
-1. Under **Support + troubleshooting**, select **New Support Request**.
-1. Complete the **Basics** tab based on your account information:
- - **Issue type**: Select **Technical**.
- - **Service**: Select **My Services**, then **Storage Account Management**.
- - **Resource**: Select a storage account to migrate. If you need to specify multiple storage accounts, you can do so in the **Details** section.
- - **Problem type**: Choose **Data Migration**.
- - **Problem subtype**: Choose **Migrate to ZRS, GZRS, or RA-GZRS**.
+- Copy data by using an existing tool such as AzCopy, one of the Azure Storage client libraries, or a reliable third-party tool.
+- If you're familiar with Hadoop or HDInsight, you can attach both the source storage account and destination storage account to your cluster. Then, parallelize the data copy process with a tool like DistCp.
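+
+For example, here's a minimal AzCopy sketch for a server-side copy of a blob container between two accounts; the account names, container name, and SAS tokens are placeholders:
+
+```powershell
+# Sketch: copy a container from the source account to the destination account.
+azcopy copy "https://<source_account>.blob.core.windows.net/<container>?<SAS>" `
+            "https://<destination_account>.blob.core.windows.net/<container>?<SAS>" `
+            --recursive
+```
+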
- :::image type="content" source="media/redundancy-migration/request-live-migration-basics-portal.png" alt-text="Screenshot showing how to request a live migration - Basics tab":::
+For more detailed guidance on how to perform a manual migration, see [Move an Azure Storage account to another region](storage-account-move.md).
-1. Select **Next**. On the **Solutions** tab, you can check the eligibility of your storage accounts for migration.
-1. Select **Next**. If you have more than one storage account to migrate, then on the **Details** tab, specify the name for each account, separated by a semicolon.
+## Limitations for changing replication types
- :::image type="content" source="media/redundancy-migration/request-live-migration-details-portal.png" alt-text="Screenshot showing how to request a live migration - Details tab":::
+Limitations apply to some replication change scenarios depending on:
-1. Fill out the additional required information on the **Details** tab, then select **Review + create** to review and submit your support ticket. A support person will contact you to provide any assistance you may need.
+- [Storage account type](#storage-account-type)
+- [Region](#region)
+- [Access tier](#access-tier)
+- [Protocol support](#protocol-support)
+- [Failover and failback](#failover-and-failback)
-> [!NOTE]
-> Premium file shares are available only for LRS and ZRS.
->
-> GZRS storage accounts do not currently support the archive tier. See [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md) for more details.
->
-> Managed disks are only available for LRS and cannot be migrated to ZRS. You can store snapshots and images for standard SSD managed disks on standard HDD storage and [choose between LRS and ZRS options](https://azure.microsoft.com/pricing/details/managed-disks/). For information about integration with availability sets, see [Introduction to Azure managed disks](../../virtual-machines/managed-disks-overview.md#integration-with-availability-sets).
+### Storage account type
-## Switch from ZRS Classic
+When planning to change your replication settings, consider the following limitations related to the storage account type.
+
+Some storage account types only support certain redundancy configurations, which affects whether they can be converted or migrated and, if so, how. For more details on Azure storage account types and the supported redundancy options, see [the storage account overview](storage-account-overview.md#types-of-storage-accounts).
+
+The following table provides an overview of redundancy options available for storage account types and whether conversion and manual migration are supported:
+
+| Storage account type | Supports LRS | Supports ZRS | Supports conversion<br>(from the portal) | Supports conversion<br>(by support request) | Supports manual migration |
+|:-|::|::|:--:|:-:|:-:|
+| Standard general purpose v2 | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| Premium file shares | &#x2705; | &#x2705; | | &#x2705; <sup>1</sup> | &#x2705; |
+| Premium block blob | &#x2705; | &#x2705; | | | &#x2705; |
+| Premium page blob | &#x2705; | | | | |
+| Managed disks<sup>2</sup> | &#x2705; | | | | |
+| Standard general purpose v1 | &#x2705; | | <sup>3</sup> | | &#x2705; |
+| ZRS Classic<sup>4</sup><br /><sub>(available in standard general purpose v1 accounts)</sub> | &#x2705; | | | | &#x2705; |
+
+<sup>1</sup> Conversion for premium file shares is only available by [opening a support request](#support-requested-conversion); [Customer-initiated conversion (preview)](#customer-initiated-conversion-preview) is not currently supported.<br />
+<sup>2</sup> Managed disks are only available for LRS and cannot be migrated to ZRS. You can store snapshots and images for standard SSD managed disks on standard HDD storage and [choose between LRS and ZRS options](https://azure.microsoft.com/pricing/details/managed-disks/). For information about integration with availability sets, see [Introduction to Azure managed disks](../../virtual-machines/managed-disks-overview.md#integration-with-availability-sets).<br />
+<sup>3</sup> If your storage account is v1, you'll need to upgrade it to v2 before performing a conversion. To learn how to upgrade your v1 account, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md).<br />
+<sup>4</sup> ZRS Classic storage accounts have been deprecated. For information about converting ZRS Classic accounts, see [Converting ZRS Classic accounts](#converting-zrs-classic-accounts).<br />
+
+#### Converting ZRS Classic accounts
> [!IMPORTANT]
-> Microsoft will deprecate and migrate ZRS Classic accounts on March 31, 2021. More details will be provided to ZRS Classic customers before deprecation.
->
-> After ZRS becomes generally available in a given region, customers will no longer be able to create ZRS Classic accounts from the Azure portal in that region. Using Microsoft PowerShell and Azure CLI to create ZRS Classic accounts is an option until ZRS Classic is deprecated. For information about where ZRS is available, see [Azure Storage redundancy](storage-redundancy.md).
+> ZRS Classic accounts were deprecated on March 31, 2021. Customers can no longer create ZRS Classic accounts. If you still have some, you should upgrade them to general purpose v2 accounts.
+
+ZRS Classic was available only for **block blobs** in general-purpose V1 (GPv1) storage accounts. For more information about storage accounts, see [Azure storage account overview](storage-account-overview.md).
-ZRS Classic asynchronously replicates data across data centers within one to two regions. Replicated data may not be available unless Microsoft initiates failover to the secondary. A ZRS Classic account can't be converted to or from LRS, GRS, or RA-GRS. ZRS Classic accounts also don't support metrics or logging.
+ZRS Classic accounts asynchronously replicated data across data centers within one to two regions. Replicated data was not available unless Microsoft initiated a failover to the secondary. A ZRS Classic account can't be converted to or from LRS, GRS, or RA-GRS. ZRS Classic accounts also don't support metrics or logging.
-ZRS Classic is available only for **block blobs** in general-purpose V1 (GPv1) storage accounts. For more information about storage accounts, see [Azure storage account overview](storage-account-overview.md).
+To change ZRS Classic to another replication type, use one of the following methods:
-To manually migrate ZRS account data to or from an LRS, GRS, RA-GRS, or ZRS Classic account, use one of the following tools: AzCopy, Azure Storage Explorer, PowerShell, or Azure CLI. You can also build your own migration solution with one of the Azure Storage client libraries.
+- Upgrade it to ZRS first
+- [Manually migrate the data directly to another replication type](#manual-migration)
-You can also upgrade your ZRS Classic storage account to ZRS by using the Azure portal, PowerShell, or Azure CLI in regions where ZRS is available.
+To upgrade your ZRS Classic storage account to ZRS, use the Azure portal, PowerShell, or Azure CLI in regions where ZRS is available:
# [Portal](#tab/portal)
az storage account update -g <resource_group> -n <storage_account> --set kind=St
+To manually migrate your ZRS Classic account data to another type of replication, follow the steps to [perform a manual migration](#manual-migration).
+
+### Region
+
+Make sure the region where your storage account is located supports all of the desired replication settings. For example, if you are converting your account to zone-redundant (ZRS, GZRS, or RA-GZRS), make sure your storage account is in a region that supports it. See the lists of supported regions for [Zone-redundant storage](storage-redundancy.md#zone-redundant-storage) and [Geo-zone-redundant storage](storage-redundancy.md#geo-zone-redundant-storage).
+
+The [customer-initiated conversion (preview)](#customer-initiated-conversion-preview) to ZRS is available in all public ZRS regions except for the following:
+
+- (Europe) West Europe
+- (Europe) UK South
+- (North America) Canada Central
+- (North America) East US
+- (North America) East US 2
+
+If you want to migrate your data into a zone-redundant storage account located in a region different from the source account, you must perform a manual migration. For more details, see [Move an Azure Storage account to another region](storage-account-move.md).
+
+### Access tier
+
+Ensure the desired replication option supports the access tier currently used in the storage account. For example, GZRS storage accounts do not currently support the archive tier. See [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md) for more details.
+
+To change the redundancy configuration for a storage account that contains blobs in the Archive tier, you must first rehydrate all archived blobs to the Hot or Cool tier. Microsoft recommends that you avoid changing the redundancy configuration for a storage account that contains archived blobs if at all possible, because rehydration operations can be costly and time-consuming. An option that avoids the rehydration time and expense is a [manual migration](#manual-migration).
+
+### Protocol support
+
+Converting your storage account to zone-redundancy (ZRS, GZRS or RA-GZRS) is not supported if the NFSv3 protocol support is enabled for Azure Blob Storage, or if the storage account contains Azure Files NFSv4.1 shares.
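+
+A quick way to check whether NFSv3 is enabled on an account before attempting a conversion is the following sketch (assuming a recent Az.Storage module, which exposes the `EnableNfsV3` property; the resource group and account names are placeholders):
+
+```powershell
+# Sketch: returns True when NFSv3 protocol support is enabled on the account.
+(Get-AzStorageAccount -ResourceGroupName "<resource_group>" -Name "<storage_account>").EnableNfsV3
+```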
+
+### Failover and failback
+
+After an account failover to the secondary region, it's possible to initiate a failback from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [use caution when failing back to the original primary](storage-disaster-recovery-guidance.md#use-caution-when-failing-back-to-the-original-primary).
+
+If you performed an [account failover](storage-disaster-recovery-guidance.md) for your (RA-)GRS or (RA-)GZRS account, the account is locally redundant (LRS) in the new primary region after the failover. Live migration to ZRS or GZRS for an LRS account resulting from a failover is not supported. This is true even in the case of so-called failback operations. For example, if you perform an account failover from RA-GZRS to the LRS in the secondary region, and then configure it again to RA-GRS and perform another account failover to the original primary region, you can't perform a conversion to RA-GZRS in the primary region. Instead, you'll need to perform a manual migration to ZRS or GZRS.
+
+## Downtime requirements
+
+During a conversion, you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the migration process and there is no data loss associated with a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
+
+If you initiate a conversion from the Azure portal, the migration process could take up to 72 hours to begin, and possibly longer if requested by opening a support request.
+
+If you choose to perform a manual migration, downtime is required but you have more control over the timing of the migration process.
+ ## Costs associated with changing how data is replicated The costs associated with changing how data is replicated depend on your conversion path. Ordering from least to the most expensive, Azure Storage redundancy offerings include LRS, ZRS, GRS, RA-GRS, GZRS, and RA-GZRS.
-For example, going *from* LRS to any other type of replication will incur additional charges because you are moving to a more sophisticated redundancy level. Migrating *to* GRS or RA-GRS will incur an egress bandwidth charge at the time of migration because your entire storage account is being replicated to the secondary region. All subsequent writes to the primary region also incur egress bandwidth charges to replicate the write to the secondary region. For details on bandwidth charges, see [Azure Storage Pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
+For example, going *from* LRS to any other type of replication will incur additional charges because you are moving to a more sophisticated redundancy level. Migrating *to* GRS or RA-GRS will incur an egress bandwidth charge at the time of the conversion or migration because your entire storage account is being replicated to the secondary region. All subsequent writes to the primary region also incur egress bandwidth charges to replicate the write to the secondary region. For details on bandwidth charges, see [Azure Storage Pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
If you migrate your storage account from GRS to LRS, there is no additional cost, but your replicated data is deleted from the secondary location.
If you migrate your storage account from GRS to LRS, there is no additional cost
## See also - [Azure Storage redundancy](storage-redundancy.md)-- [Check the Last Sync Time property for a storage account](last-sync-time-get.md) - [Use geo-redundancy to design highly available applications](geo-redundant-design.md)
+- [Move an Azure Storage account to another region](storage-account-move.md)
+- [Check the Last Sync Time property for a storage account](last-sync-time-get.md)
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
The status of items that appear in this table may change over time as support co
| Encryption at rest| ✔️ | | Encryption in transit| ⛔ | | [LRS or ZRS redundancy types](storage-files-planning.md#redundancy)| ✔️ |
-| [LRS to ZRS conversion](../common/redundancy-migration.md?tabs=portal#switch-between-types-of-replication)| ⛔ |
+| [LRS to ZRS conversion](../common/redundancy-migration.md?tabs=portal#limitations-for-changing-replication-types)| ⛔ |
| [Private endpoints](storage-files-networking-overview.md#private-endpoints) | ✔️ | | Subdirectory mounts| ✔️ | | [Grant network access to specific Azure virtual networks](storage-files-networking-endpoints.md#restrict-access-to-the-public-endpoint-to-specific-virtual-networks)| ✔️ |
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
Title: What's new in Azure Files
-description: Learn more about new features and enhancements in Azure Files.
+ Title: What's new in Azure Files and Azure File Sync
+description: Learn about new features and enhancements in Azure Files and Azure File Sync.
Previously updated : 12/08/2021 Last updated : 09/21/2022 # What's new in Azure Files
-Azure Files is updated regularly to offer new features and enhancements. This article provides detailed information about what's new in Azure Files.
+Azure Files is updated regularly to offer new features and enhancements. This article provides detailed information about what's new in Azure Files and Azure File Sync.
-## 2021 quarter 4 (October, November, December)
-### Increased IOPS for premium file shares
+## What's new in 2022
+
+### 2022 quarter 3 (July, August, September)
+#### Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities on Azure Files (public preview)
+This [preview release](storage-files-identity-auth-azure-active-directory-enable.md) builds on top of [FSLogix profile container support](../../virtual-desktop/create-profile-container-azure-ad.md) released in December 2021 and expands it to support more use cases with an easy, two-step portal experience (SMB only). Azure AD Kerberos allows Kerberos authentication for hybrid identities in Azure AD, reducing the need for customers to configure another domain service and allowing them to authenticate with Azure Files without the need for line-of-sight to domain controllers. While the initial support is limited to hybrid user identities, which are identities created in AD DS and synced to Azure AD, it's a significant milestone as we simplify identity-based authentication for Azure Files customers. [Read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-leverage-azure-active-directory-kerberos-with/ba-p/3612111).
+
+### 2022 quarter 2 (April, May, June)
+#### SUSE Linux support for SAP HANA System Replication (HSR) and Pacemaker
+Azure customers can now [deploy a highly available SAP HANA system in a scale-out configuration](../../virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-suse.md) with HSR and Pacemaker on Azure SUSE Linux Enterprise Server virtual machines (VMs), using NFS Azure file shares for a shared file system.
+
+### 2022 quarter 1 (January, February, March)
+#### Azure File Sync TCO improvements
+To offer sync and tiering, Azure File Sync performs two types of transactions on behalf of the customer:
+- Transactions from churn, including changed files (sync) and recalled files (tiering).
+- Transactions from cloud change enumeration, done to discover changes made directly on the Azure file share. Historically, this was a major component of an Azure File Sync customer's Azure Files bill.
+
+To improve TCO, we markedly decreased the number of transactions needed to fully scan an Azure file share. Prior to this change, most customers were best off in the hot tier. Now most customers are best off in the cool tier.
+
+## What's new in 2021
+
+### 2021 quarter 4 (October, November, December)
+#### Increased IOPS for premium file shares
Premium Azure file shares now have additional included baseline IOPS and a higher minimum burst IOPS. The baseline IOPS included with a provisioned share was increased from 400 to 3,000, meaning that a 100 GiB share (the minimum share size) is guaranteed 3,100 baseline IOPS. Additionally, the floor for burst IOPS was increased from 4,000 to 10,000, meaning that every premium file share will be able to burst up to at least 10,000 IOPS. Formula changes:
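To make the formula changes concrete, the new limits for a given provisioned size can be estimated as in the sketch below. The multipliers and the 100,000 IOPS cap are assumptions taken from the figures above; the provisioned model article has the authoritative formulas.

```azurepowershell-interactive
# Sketch: estimate the new baseline and burst IOPS for a premium file share.
# The multipliers and the 100,000 IOPS cap are assumptions; see the provisioned model article for the authoritative formulas.
$provisionedGiB = 100                                                            # minimum premium share size
$baselineIops   = [Math]::Min(3000 + $provisionedGiB, 100000)                    # 3,000 included + 1 IOPS per GiB
$burstIops      = [Math]::Min([Math]::Max(10000, 3 * $provisionedGiB), 100000)   # burst floor raised to 10,000
Write-Output "Baseline: $baselineIops IOPS, burst: up to $burstIops IOPS"         # 100 GiB -> 3,100 baseline, 10,000 burst
```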
For more information, see:
- [The provisioned model for premium Azure file shares](understanding-billing.md#provisioned-model) - [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/)
-### NFS 4.1 protocol support is generally available
+#### NFS 4.1 protocol support is generally available
Premium Azure file shares now support either the SMB or the NFS 4.1 protocols. NFS 4.1 is available in all regions where Azure Files supports the premium tier, for both locally redundant storage and zone-redundant storage. Azure file shares created with the NFS 4.1 protocol enabled are fully POSIX-compliant, distributed file shares that support a wide variety of Linux and container-based workloads. Some example workloads include: highly available SAP application layer, enterprise messaging, user home directories, custom line-of-business applications, database backups, database replication, and Azure Pipelines.
For more information, see:
- [High availability for SAP NetWeaver on Azure VMs with NFS on Azure Files](../../virtual-machines/workloads/sap/high-availability-guide-suse-nfs-azure-files.md) - [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/)
-### Symmetric throughput for premium file shares
+#### Symmetric throughput for premium file shares
Premium Azure file shares now support symmetric throughput provisioning, which enables the provisioned throughput for an Azure file share to be used for 100% ingress, 100% egress, or some mixture of ingress and egress. Symmetric throughput provides the flexibility to make full utilization of available throughput and aligns premium file shares with standard file shares. Formula changes:
For more information, see:
- [The provisioned model for premium Azure file shares](understanding-billing.md#provisioned-model) - [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/)
-## 2021 quarter 3 (July, August, September)
-### SMB Multichannel is generally available
+### 2021 quarter 3 (July, August, September)
+#### SMB Multichannel is generally available
SMB Multichannel enables SMB clients to establish multiple parallel connections to an Azure file share. This allows SMB clients to take full advantage of all available network bandwidth and makes them resilient to network failures, reducing total cost of ownership and improving performance by roughly 2-3x for reads and 3-4x for writes through a single client. SMB Multichannel is available for premium file shares (file shares deployed in the FileStorage storage account kind) and is disabled by default.
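Outside the portal, the setting can be toggled on the file service properties; a minimal sketch, assuming the `-EnableSmbMultichannel` parameter name from the Az.Storage module:

```azurepowershell-interactive
# Sketch: enable SMB Multichannel on a premium (FileStorage) storage account.
# -EnableSmbMultichannel is assumed to be the current Az.Storage parameter name; verify against the linked article.
Update-AzStorageFileServiceProperty `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mypremiumaccount" `
    -EnableSmbMultichannel $true
```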
For more information, see:
- [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel) - [Overview on SMB Multichannel in the Windows Server documentation](/azure-stack/hci/manage/manage-smb-multichannel)
-### SMB 3.1.1 and SMB security settings
+#### SMB 3.1.1 and SMB security settings
SMB 3.1.1 is the most recent version of the SMB protocol, released with Windows 10, containing important security and performance updates. Azure Files SMB 3.1.1 ships with two additional encryption modes, AES-128-GCM and AES-256-GCM, in addition to AES-128-CCM which was already supported. To maximize performance, AES-128-GCM is negotiated as the default SMB channel encryption option; AES-128-CCM will only be negotiated on older clients that don't support AES-128-GCM. Depending on your organization's regulatory and compliance requirements, AES-256-GCM can be negotiated instead of AES-128-GCM by either restricting allowed SMB channel encryption options on the SMB clients, in Azure Files, or both. Support for AES-256-GCM was added in Windows Server 2022 and Windows 10, version 21H1.
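For example, limiting the storage account to AES-256-GCM could be sketched as follows, assuming the `-SmbChannelEncryption` parameter name from the Az.Storage module; SMB clients can be restricted separately through their own SMB security settings.

```azurepowershell-interactive
# Sketch: allow only AES-256-GCM as the SMB channel encryption option on the storage account.
# -SmbChannelEncryption is assumed to be the Az.Storage parameter name; verify against the linked SMB articles.
Update-AzStorageFileServiceProperty `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -SmbChannelEncryption "AES-256-GCM"
```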
For more information, see:
- [Windows](storage-how-to-use-files-windows.md) and [Linux](storage-how-to-use-files-linux.md) SMB version information - [Overview of SMB features in the Windows Server documentation](/windows-server/storage/file-server/file-server-smb-overview)
-## 2021 quarter 2 (April, May, June)
-### Premium, hot, and cool storage capacity reservations
+### 2021 quarter 2 (April, May, June)
+#### Premium, hot, and cool storage capacity reservations
Azure Files supports storage capacity reservations (also referred to as *reserved instances*). Storage capacity reservations allow you to achieve a discount on storage by pre-committing to storage utilization. Azure Files supports capacity reservations on the premium, hot, and cool tiers. Capacity reservations are sold in units of 10 TiB or 100 TiB, for terms of either one year or three years.
For more information, see:
- [Optimized costs for Azure Files with reserved capacity](files-reserve-capacity.md) - [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/)
-### Improved portal experience for domain joining to Active Directory
+#### Improved portal experience for domain joining to Active Directory
The experience for domain joining an Azure storage account has been improved to help guide first-time Azure file share admins through the process. When you select Active Directory under **File share settings** in the **File shares** section of the Azure portal, you will be guided through the steps required to domain join. :::image type="content" source="media/files-whats-new/ad-domain-join-1.png" alt-text="Screenshot of the new portal experience for domain joining a storage account to Active Directory" lightbox="media/files-whats-new/ad-domain-join-1.png":::
For more information, see:
- [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md) - [Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md)
-## 2021 quarter 1 (January, February, March)
-### Azure Files management now available through the control plane
+### 2021 quarter 1 (January, February, March)
+#### Azure Files management now available through the control plane
Management APIs for Azure Files resources, the file service and file shares, are now available through control plane (`Microsoft.Storage` resource provider). This enables Azure file shares to be created with an Azure Resource Manager or Bicep template, to be fully manageable when the data plane (i.e. the FileREST API) is inaccessible (like when the storage account's public endpoint is disabled), and to support full role-based access control (RBAC) semantics. We recommend you manage Azure Files through the control plane in most cases. To support management of the file service and file shares through the control plane, the Azure portal, Azure storage PowerShell module, and Azure CLI have been updated to support most management actions through the control plane.
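As a minimal sketch of control-plane management (resource names below are placeholders), a file share can be created through the `Microsoft.Storage` resource provider with the Az PowerShell module:

```azurepowershell-interactive
# Sketch: create an Azure file share through the control plane (Microsoft.Storage) rather than the FileREST data plane.
New-AzRmStorageShare `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -Name "myshare" `
    -QuotaGiB 1024
```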
storsimple Storsimple Data Manager Change Default Blob Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-data-manager-change-default-blob-path.md
To create an Azure function, perform the following steps:
1. Paste the following code:
- ```
+ ```csharp
using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage.Blob;
To create an Azure function, perform the following steps:
3. Type **project.json**, and then press **Enter**. In the **project.json** file, paste the following code:
- ```
+ ```json
{
  "frameworks": {
    "net46": {
synapse-analytics Load Data Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/load-data-overview.md
Previously updated : 04/15/2020 Last updated : 09/20/2022
It is best practice to load data into a staging table. Staging tables allow you
To load data with PolyBase, you can use any of these loading options: -- [PolyBase with T-SQL](../sql-data-warehouse/load-data-from-azure-blob-storage-using-copy.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) works well when your data is in Azure Blob storage or Azure Data Lake Store. It gives you the most control over the loading process, but also requires you to define external data objects. The other methods define these objects behind the scenes as you map source tables to destination tables. To orchestrate T-SQL loads, you can use Azure Data Factory, SSIS, or Azure functions.
+- [PolyBase with T-SQL](../sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) works well when your data is in Azure Blob storage or Azure Data Lake Store. It gives you the most control over the loading process, but also requires you to define external data objects. The other methods define these objects behind the scenes as you map source tables to destination tables. To orchestrate T-SQL loads, you can use Azure Data Factory, SSIS, or Azure functions.
- [PolyBase with SSIS](/sql/integration-services/load-data-to-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) works well when your source data is in SQL Server. SSIS defines the source to destination table mappings, and also orchestrates the load. If you already have SSIS packages, you can modify the packages to work with the new data warehouse destination.
- [PolyBase with Azure Data Factory (ADF)](../../data-factory/load-azure-sql-data-warehouse.md) is another orchestration tool. It defines a pipeline and schedules jobs.
- [PolyBase with Azure Databricks](/azure/databricks/scenarios/databricks-extract-load-sql-data-warehouse?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) transfers data from an Azure Synapse Analytics table to a Databricks dataframe and/or writes data from a Databricks dataframe to an Azure Synapse Analytics table using PolyBase.
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
There are some limitations and known issues that you might see in Delta Lake sup
- Serverless SQL pools don't support time travel queries. Use Apache Spark pools in Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel). - Serverless SQL pools don't support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Synapse Analytics to [update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data). - You can't [store query results to storage in Delta Lake format](create-external-table-as-select.md) by using the CETAS command. The CETAS command supports only Parquet and CSV as the output formats.-- Serverless SQL pools in Synapse Analytics don't support the datasets with the [BLOOM filter](/azure/databricks/delta/optimizations/bloom-filters). The serverless SQL pool ignores the BLOOM filters.
+- Serverless SQL pools in Synapse Analytics don't support the datasets with the [BLOOM filter](/azure/databricks/optimizations/bloom-filters). The serverless SQL pool ignores the BLOOM filters.
- Delta Lake support isn't available in dedicated SQL pools. Make sure that you use serverless SQL pools to query Delta Lake files. ### JSON text isn't properly formatted
If you are exporting your [Dataverse table to Azure Data Lake storage](/power-ap
Make sure that your workspace Managed Identity has read access on the ADLS storage that contains the Delta folder. The serverless SQL pool reads the Delta Lake table schema from the Delta logs that are placed in ADLS and uses the workspace Managed Identity to access the Delta transaction logs.
-Try to setup a data source in some SQL Database that references your Azure Data Lake storage using Managed Identity credential, and try to [create external table on top of data source with Managed Identity](/sql/develop-storage-files-storage-access-control.md?tabs=managed-identity#access-a-data-source-using-credentials) to confirm that a table with the Managed Identity can access your storage.
+Try to set up a data source in a SQL database that references your Azure Data Lake storage using the Managed Identity credential, and try to [create an external table on top of the data source with Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#access-a-data-source-using-credentials) to confirm that a table with the Managed Identity can access your storage.
### Delta tables in Lake databases do not have identical schema in Spark and serverless pools
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
The following updates are new to Azure Synapse Analytics this month.
KQL is the query language used to query Synapse Data Explorer big data. KQL has a fast-growing user community, with hundreds of thousands of developers, data engineers, data analysts, and students.
- Check out the newest [KQL Learn Model](/learn/modules/gain-insights-data-kusto-query-language/) and see for yourself how easy it is to become a KQL master.
+ Check out the newest [KQL Learn module](/training/modules/gain-insights-data-kusto-query-language/) and see for yourself how easy it is to become a KQL master.
To learn more about KQL, read [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/).
update-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md
Update management center (preview) has been redesigned and doesn't depend on Azu
- Ability to take immediate action either by installing updates immediately or scheduling them for a later date.
- Check updates automatically or on demand.
- Helps secure machines with new ways of patching such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hotpatching](../automanage/automanage-hotpatch.md) or custom maintenance schedules.
 - Sync patch cycles in relation to patch Tuesday, the unofficial term for Microsoft's scheduled security fix release on every Tuesday.
 + - Sync patch cycles in relation to patch Tuesday, the unofficial term for Microsoft's scheduled security fix release on the second Tuesday of each month.
The following diagram illustrates how update management center (preview) assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux.
For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../v
### VM images
-Update management center (preview) supports Azure VMs created using Azure Marketplace images, where the virtual machine agent is already included in the Azure Marketplace image. If you have created Azure VMs using custom VM images and not an image from the Azure Marketplace, you need to manually install and enable the Azure virtual machine agent. For details, see:
--- [Manual install of Azure Windows VM agent](../virtual-machines/extensions/agent-windows.md#manual-installation)-- [Manual install of Azure Linux VM agent](../virtual-machines/extensions/agent-linux.md#installation)-
+Update management center (preview) supports Azure VMs created using Azure Marketplace images, where the virtual machine agent is already included in the Azure Marketplace image.
## Next steps
virtual-machines Dasv5 Dadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dasv5-dadsv5-series.md
Dasv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
Dadsv5-series virtual machines support Standard SSD, Standard HDD, and Premium S
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Extensions Rmpolicy Howto Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-cli.md
When you're finished, press **Esc**, and then type **:wq** to save and close the
You also need a [parameters](../../governance/policy/concepts/definition-structure.md#parameters) file that creates a structure for you to use for passing in a list of the unauthorized extensions. This example shows you how to create a parameter file for Linux VMs in Cloud Shell.+
+In the bash Cloud Shell you opened earlier, type:
+
+```bash
+vim ~/clouddrive/azurepolicy.parameters.json
```

Copy and paste the following `.json` data into the file.
az policy definition delete --name 'not-allowed-vmextension-linux'
## Next steps
-For more information, see [Azure Policy](../../governance/policy/overview.md).
+For more information, see [Azure Policy](../../governance/policy/overview.md).
virtual-machines Maintenance And Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-and-updates.md
Within a VM, you can get notifications about upcoming maintenance by [using Sche
Most platform updates don't affect customer VMs. When a no-impact update isn't possible, Azure chooses the update mechanism that's least impactful to customer VMs.
-Most nonzero-impact maintenance pauses the VM for less than 10 seconds. In certain cases, Azure uses memory-preserving maintenance mechanisms. These mechanisms pause the VM for typically up to 30 seconds and preserve the memory in RAM. The VM is then resumed, and its clock is automatically synchronized.
+Most nonzero-impact maintenance pauses the VM for less than 10 seconds. In certain cases, Azure uses memory-preserving maintenance mechanisms. These mechanisms pause the VM, typically for about 30 seconds, and preserve the memory in RAM. The VM is then resumed, and its clock is automatically synchronized.
Memory-preserving maintenance works for more than 90 percent of Azure VMs. It doesn't work for G, L, M, N, and H series. Azure increasingly uses live-migration technologies and improves memory-preserving maintenance mechanisms to reduce the pause durations.
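From inside a VM, upcoming pauses like these surface through the Scheduled Events metadata endpoint referenced above; a minimal polling sketch follows (the API version shown is an assumption, so check the Scheduled Events documentation for the current value).

```azurepowershell-interactive
# Sketch: poll the Scheduled Events endpoint from inside an Azure VM to see upcoming maintenance events.
# The api-version shown is an assumption; check the Scheduled Events documentation for the current value.
$uri = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
$events = Invoke-RestMethod -Uri $uri -Headers @{ Metadata = "true" } -Method Get
$events.Events | Select-Object EventId, EventType, EventStatus, NotBefore
```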
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
You can use the following script to do a basic installation of Azure DevOps Serv
Log in to Azure Cloud Shell:

```bash
export ADO_ORGANIZATION=https://dev.azure.com/<yourorganization>
- export ADO_PROJECT=SAP Deployment Automation
+ export ADO_PROJECT=SAP-Deployment-Automation
wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/create_devops_artifacts.sh -O devops.sh
chmod +x ./devops.sh
./devops.sh
```
You can finalize the Azure DevOps configuration by running the following scripts
$Env:YourPrefix="<yourPrefix>"
$Env:ControlPlaneSubscriptionID="<YourControlPlaneSubscriptionID>"
+ $Env:ControlPlaneSubscriptionName="<YourControlPlaneSubscriptionName>"
$Env:DevSubscriptionID="<YourDevSubscriptionID>"
+ $Env:DevSubscriptionName="<YourDevSubscriptionName>"
```

> [!NOTE]
virtual-machines Sap High Availability Guide Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-guide-wsfc-shared-disk.md
_SAP ASCS/SCS HA architecture with shared disk_
There are two options for shared disk in a Windows failover cluster in Azure:
- [Azure shared disks](../../disks-shared.md) - a feature that allows you to attach an Azure managed disk to multiple VMs simultaneously.
-- Using 3rd-party software [SIOS DataKeeper Cluster Edition](https://us.sios.com/products/datakeeper-cluster) to create a mirrored storage that simulates cluster shared storage.
+- Using 3rd-party software [SIOS DataKeeper Cluster Edition](https://us.sios.com/products/sios-datakeeper/) to create a mirrored storage that simulates cluster shared storage.
When selecting the technology for shared disk, keep in mind the following considerations:
To create a shared disk resource for a cluster:
2. Run SIOS DataKeeper Cluster Edition on both virtual machine nodes. 3. Configure SIOS DataKeeper Cluster Edition so that it mirrors the content of the additional disk attached volume from the source virtual machine to the additional disk attached volume of the target virtual machine. SIOS DataKeeper abstracts the source and target local volumes, and then presents them to Windows Server failover clustering as one shared disk.
-Get more information about [SIOS DataKeeper](https://us.sios.com/products/datakeeper-cluster/).
+Get more information about [SIOS DataKeeper](https://us.sios.com/products/sios-datakeeper/).
![Figure 5: Windows Server failover clustering configuration in Azure with SIOS DataKeeper][sap-ha-guide-figure-1002]
virtual-machines Sap High Availability Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-guide.md
You need cluster shared storage for a high-availability SAP ASCS/SCS instance. A
2. Run SIOS DataKeeper Cluster Edition on both virtual machine nodes. 3. Configure SIOS DataKeeper Cluster Edition so that it mirrors the content of the additional disk attached volume from the source virtual machine to the additional disk attached volume of the target virtual machine. SIOS DataKeeper abstracts the source and target local volumes, and then presents them to Windows Server Failover Clustering as one shared disk.
-Get more information about [SIOS DataKeeper](https://us.sios.com/products/datakeeper-cluster/).
+Get more information about [SIOS DataKeeper](https://us.sios.com/products/sios-datakeeper/).
![Figure 3: Windows Server Failover Clustering configuration in Azure with SIOS DataKeeper][sap-ha-guide-figure-1002]
virtual-machines Sap High Availability Infrastructure Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-shared-disk.md
This article describes the steps you take to prepare the Azure infrastructure for installing and configuring a high-availability SAP ASCS/SCS instance on a Windows failover cluster by using a *cluster shared disk* as an option for clustering an SAP ASCS instance. Two alternatives for *cluster shared disk* are presented in the documentation: - [Azure shared disks](../../disks-shared.md)-- Using [SIOS DataKeeper Cluster Edition](https://us.sios.com/products/datakeeper-cluster/) to create mirrored storage, that will simulate clustered shared disk
+- Using [SIOS DataKeeper Cluster Edition](https://us.sios.com/products/sios-datakeeper/) to create mirrored storage that simulates a clustered shared disk
The documentation doesn't cover the database layer.
This section is only applicable if you are using the third-party software SIOS
Now, you have a working Windows Server failover clustering configuration in Azure. To install an SAP ASCS/SCS instance, you need a shared disk resource. One of the options is SIOS DataKeeper Cluster Edition, a third-party solution that you can use to create shared disk resources. Installing SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster shared disk involves these tasks:
-- Add Microsoft .NET Framework, if needed. See the [SIOS documentation](https://us.sios.com/products/datakeeper-cluster/) for the most up-to-date .NET framework requirements
+- Add Microsoft .NET Framework, if needed. See the [SIOS documentation](https://us.sios.com/products/sios-datakeeper/) for the most up-to-date .NET framework requirements
- Install SIOS DataKeeper - Configure SIOS DataKeeper
virtual-machines Sap High Availability Installation Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-installation-wsfc-shared-disk.md
This article describes how to install and configure a high-availability SAP syst
As described in [Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk][sap-high-availability-guide-wsfc-shared-disk], there are two alternatives for *cluster shared disk*: - [Azure shared disks](../../disks-shared.md)-- Using [SIOS DataKeeper Cluster Edition](https://us.sios.com/products/datakeeper-cluster/) to create mirrored storage, that will simulate clustered shared disk
+- Using [SIOS DataKeeper Cluster Edition](https://us.sios.com/products/sios-datakeeper/) to create mirrored storage that simulates a clustered shared disk
## Prerequisites
virtual-network-manager Concept Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-cross-tenant.md
Need help with setting up permissions? Check out how to [add guest users in the
Currently, cross-tenant virtual networks can only be [added to network groups manually](concept-network-groups.md#group-membership). Adding cross-tenant virtual networks to network groups dynamically through Azure Policy is a future capability. ## Next steps --- Learn how to [create a mesh network topology with Azure Virtual Network Manager using the Azure portal](how-to-create-mesh-network.md)--- Check out the [Azure Virtual Network Manager FAQ](faq.md)
+- Learn how to [configure a cross-tenant connection with Azure Virtual Network Manager using the Azure portal](how-to-configure-cross-tenant-portal.md)
+- Check out the [Azure Virtual Network Manager FAQ](faq.md)
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Previously updated : 11/02/2021 Last updated : 09/21/2022
Deploy a virtual network gateway into the hub virtual network. This virtual netw
1. On the *Basics* tab, enter the following information:
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/network-group-basics.png" alt-text="Screenshot of the create a network group basics tab.":::
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/network-group-basics.png" alt-text="Screenshot of the Basics tab on Create a network group page.":::
| Setting | Value | | - | -- |
Deploy a virtual network gateway into the hub virtual network. This virtual netw
1. On the **Overview** page, select **Create Azure Policy** under *Create policy to dynamically add members*.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/define-dynamic-membership.png" alt-text="Screenshot of the define dynamic membership button.":::
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/define-dynamic-membership.png" alt-text="Screenshot of the defined dynamic membership button.":::
1. On the **Create Azure Policy** page, select or enter the following information:
Deploy a virtual network gateway into the hub virtual network. This virtual netw
| Criteria | | | Parameter | Select **Name** from the drop-down.| | Operator | Select **Contains** from the drop-down.|
- | Condition | Enter **VNet-** to dynamically add the three previously created virtual networks into this network group. |
+ | Condition | Enter **-EastUS** to dynamically add the two East US virtual networks into this network group. |
1. Select **Save** to deploy the group membership. 1. Under **Settings**, select **Group Members** to view the membership of the group based on the conditions defined in Azure Policy. ## Create a hub and spoke connectivity configuration 1. Select **Configuration** under *Settings*, then select **+ Add a configuration**. Select **Connectivity** from the drop-down menu.
Deploy a virtual network gateway into the hub virtual network. This virtual netw
| Description | Provide a description about what this connectivity configuration will do. |
-1. Select **Next: Topology >**. Select **Hub and Spoke** under the **Topology** setting. This will reveal additional settings.
+1. Select **Next: Topology >**. Select **Hub and Spoke** under the **Topology** setting. This will reveal other settings.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/hub-configuration.png" alt-text="Screenshot of selecting a hub for the connectivity configuration.":::
-1. Select **Select a hub** under **Hub** setting. Then, select **VNet-A-WestUS** to serve as your network hub and click **Select**.
+1. Select **Select a hub** under **Hub** setting. Then, select **VNet-A-WestUS** to serve as your network hub and select **Select**.
:::image type="content" source="media/tutorial-create-secured-hub-and-spoke/select-hub.png" alt-text="Screenshot of Select a hub configuration.":::
-1. Under **Spoke network groups**, select **+ add**. Then, select **myNetworkGroupB** for the network group and click **Select**.
+1. Under **Spoke network groups**, select **+ add**. Then, select **myNetworkGroupB** for the network group and select **Select**.
:::image type="content" source="media/tutorial-create-secured-hub-and-spoke/select-network-group.png" alt-text="Screenshot of Add network groups page.":::
Deploy a virtual network gateway into the hub virtual network. This virtual netw
| - | -- | | Direct Connectivity | Select the checkbox for **Enable connectivity within network group**. This setting will allow spoke virtual networks in the network group in the same region to communicate with each other directly. | | Hub as gateway | Select the checkbox for **Use hub as a gateway**. |
- | Global Mesh | Leave this option **unchecked**. Since both spokes are in the same region this setting is not required. |
 + | Global Mesh | Leave the **Enable mesh connectivity across regions** option **unchecked**. This setting isn't required because both spokes are in the same region. |
1. Select **Next: Review + create >** and then create the connectivity configuration.
Make sure the virtual network gateway has been successfully deployed before depl
## Create security configuration
-1. Select **Configuration** under *Settings* again, then select **+ Create**, and select **SecurityAdmin** from the menu to begin creating a SecurityAdmin configuration..
+1. Select **Configuration** under *Settings* again, then select **+ Create**, and select **SecurityAdmin** from the menu to begin creating a SecurityAdmin configuration.
1. Enter the name **mySecurityConfig** for the configuration, then select **Next: Rule collections**.
Make sure the virtual network gateway has been successfully deployed before depl
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deploy-security.png" alt-text="Screenshot of deploying a security configuration.":::
-1. Select **Next** and then **Deploy**.You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take about 15-20 minutes to complete.
+1. Select **Next** and then **Deploy**. You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take about 15-20 minutes to complete.
## Verify deployment of configurations
vpn-gateway Site To Site Vpn Private Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/site-to-site-vpn-private-peering.md
Previously updated : 05/26/2022 Last updated : 09/21/2022
You can configure a Site-to-Site VPN to a virtual network gateway over an Expres
* Point-to-site users connecting to a virtual network gateway can use ExpressRoute (via the Site-to-Site tunnel) to access on-premises resources.
-* It is possible to deploy Site-to-Site VPN connections over ExpressRoute private peering at the same time as Site-to-Site VPN connections via the Internet on the same VPN gateway.
+* It's possible to deploy Site-to-Site VPN connections over ExpressRoute private peering at the same time as Site-to-Site VPN connections via the Internet on the same VPN gateway.
->[!NOTE]
->This feature is supported on gateways with a Standard Public IP only.
->
+This feature is available for the following SKUs:
+
+* VpnGw1, VpnGw2, VpnGw3, VpnGw4, and VpnGw5 with a Standard public IP address (no availability zones)
+* VpnGw1AZ, VpnGw2AZ, VpnGw3AZ, VpnGw4AZ, and VpnGw5AZ with a Standard public IP address (one or more availability zones)
+
+ >[!NOTE]
+ >This feature is supported on gateways with a standard public IP only.
+ >
+
+## Prerequisites
To complete this configuration, verify that you meet the following prerequisites:
In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN c
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $Gateway -EnablePrivateIpAddress $true
```
 - You should see a public and a private IP address. Write down the IP address under the "TunnelIpAddresses" section of the output. You will use this information in a later step.
 + You should see a public and a private IP address. Write down the IP address under the "TunnelIpAddresses" section of the output. You'll use this information in a later step.
1. Set the connection to use the private IP address by using the following PowerShell command:

   ```azurepowershell-interactive
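   # A minimal sketch with placeholder names; -UseLocalAzureIpAddress (assumed parameter) points the connection at the gateway's private IP.
   $Connection = Get-AzVirtualNetworkGatewayConnection -Name "myConnection" -ResourceGroupName "myResourceGroup"
   Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $Connection -UseLocalAzureIpAddress:$true
   ```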
web-application-firewall Web Application Firewall Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-troubleshoot.md
description: This article provides troubleshooting information for Web Applicati
Previously updated : 06/09/2022 Last updated : 09/21/2022
In this example, you want to exclude the **Request attribute name** that equals
![WAF exclusion lists](../media/web-application-firewall-troubleshoot/waf-config.png)
+You can create exclusions for WAF in Application Gateway at different scope levels. For more information, see [Web Application Firewall exclusion lists](application-gateway-waf-configuration.md#exclusion-scopes).
+ ### Disabling rules Another way to get around a false positive is to disable the rule that matched on the input the WAF thought was malicious. Since you've parsed the WAF logs and have narrowed the rule down to 942130, you can disable it in the Azure portal. See [Customize web application firewall rules through the Azure portal](application-gateway-customize-waf-rules-portal.md).