Updates from: 11/16/2022 02:07:52
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
With phone call verification during SSPR or Azure AD Multi-Factor Authentication
## Office phone verification
-With phone call verification during SSPR or Azure AD Multi-Factor Authentication, an automated voice call is made to the phone number registered by the user. To complete the sign-in process, the user is prompted to press # on their keypad.
+With office phone call verification during SSPR or Azure AD Multi-Factor Authentication, an automated voice call is made to the phone number registered by the user. To complete the sign-in process, the user is prompted to press # on their keypad.
## Troubleshooting phone options
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
Customers can right-size permissions based on usage, grant new permissions on-de
### Monitor
-Customers can detect anomalous activities with machine language-powered (ML-powered) alerts and generate detailed forensic reports.
+Customers can detect anomalous activities with machine learning-powered (ML-powered) alerts and generate detailed forensic reports.
- ML-powered anomaly detections. - Context-rich forensic reports around identities, actions, and resources to support rapid investigation and remediation.
active-directory Product Permission Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permission-analytics.md
This article describes how you can create and view permission analytics triggers
- The **Status** column displays if the authorization system is online or offline - The **Controller** column displays if the controller is enabled or disabled.-
-1. On the **Configuration** tab, to update the **Time Interval**, select **90 Days**, **60 Days**, or **30 Days** from the **Time range** dropdown.
1. Select **Save**. ## View permission analytics alert triggers
active-directory Product Statistical Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-statistical-anomalies.md
Statistical anomalies can detect outliers in an identity's behavior if recent ac
- The **Controller** column displays if the controller is enabled or disabled.
-1. On the **Configuration** tab, to update the **Time Interval**, from the **Time Range** dropdown, select **90 Days**, **60 Days**, or **30 Days**, and then select **Save**.
+1. Select **Save**.
## View statistical anomaly triggers
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
The AADLoginForWindows extension must be installed successfully for the VM to co
1. View the device state by running `dsregcmd /status`. The goal is for the device state to show as `AzureAdJoined : YES`. > [!NOTE]
- > Azure AD join activity is captured in Event Viewer under the *User Device Registration\Admin* log at *Event Viewer (local)\Applications* and *Services Logs\Windows\Microsoft\User Device Registration\Admin*.
+ > Azure AD join activity is captured in Event Viewer under the *User Device Registration\Admin* log at *Event Viewer (local)\Applications* and *Services Logs\Microsoft\Windows\User Device Registration\Admin*.
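As a quick check, here's a minimal PowerShell sketch (assuming you run it in an elevated session on the VM) that filters the `dsregcmd` output for the join state:

```powershell
# Show only the Azure AD join state from the dsregcmd output
dsregcmd /status | Select-String -Pattern "AzureAdJoined"
```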
If the AADLoginForWindows extension fails with an error code, you can perform the following steps.
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
Previously updated : 08/26/2022 Last updated : 11/14/2022
The most frequent scenarios for application deletion are:
* An administrator intentionally deletes the application, for example, in response to a support request. * An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you might want a process for deleting abandoned applications that are no longer used or managed. In general, create an offboarding process for applications rather than scripting to avoid unintentional deletions.
-### Properties maintained with soft delete
+When you delete an application, the application registration by default enters the soft-delete state. To understand the relationship between application registrations and service principals, see [Apps and service principals in Azure AD - Microsoft identity platform](../develop/app-objects-and-service-principals.md).
-| Object type| Important properties maintained |
-| - | - |
-| Users (including external users)| *All properties are maintained*, including ObjectID, group memberships, roles, licenses, and application assignments. |
-| Microsoft 365 Groups| *All properties are maintained*, including ObjectID, group memberships, licenses, and application assignments. |
-| Application registration| *All properties are maintained.* (See more information after this table.) |
+### Administrative units
-When you delete an application, the application registration by default enters the soft-delete state. To understand the relationship between application registrations and service principals, see [Apps and service principals in Azure AD - Microsoft identity platform](../develop/app-objects-and-service-principals.md).
+The most common scenario for deletions is when administrative units (AUs) are deleted by accident but are still needed.
## Recover from soft deletion
-You can restore soft-deleted items in the Azure portal or with Microsoft Graph.
+You can restore soft-deleted items in the administrative portal or by using Microsoft Graph. Not all object types support soft-delete management in the portal; some can only be listed, viewed, hard deleted, or restored by using the deletedItems Microsoft Graph API.
+
+### Properties maintained with soft delete
+
+|Object type|Important properties maintained|
+|||
+|Users (including external users)|All properties maintained, including ObjectID, group memberships, roles, licenses, and application assignments|
+|Microsoft 365 Groups|All properties maintained, including ObjectID, group memberships, licenses, and application assignments|
+|Application registration | All properties maintained. See more information after this table.|
+|Service principal|All properties maintained|
+|Administrative unit (AU)|All properties maintained|
### Users
For more information on how to restore soft-deleted Microsoft 365 Groups, see th
* To restore from the Azure portal, see [Restore a deleted Microsoft 365 Group](../enterprise-users/groups-restore-deleted.md). * To restore by using Microsoft Graph, see [Restore deleted item – Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http).
-### Applications
+### Applications and service principals
Applications have two objects: the application registration and the service principal. For more information on the differences between the registration and the service principal, see [Apps and service principals in Azure AD](../develop/app-objects-and-service-principals.md).
To restore an application from the Azure portal, select **App registrations** >
[![Screenshot that shows the app registration restore process in the azure portal.](./media/recoverability/deletion-restore-application.png)](./media/recoverability/deletion-restore-application.png#lightbox)
-To restore applications using Microsoft Graph, see [Restore deleted item - Microsoft Graph v1.0.](/graph/api/directory-deleteditems-restore?tabs=http)
+Currently, service principals can be listed, viewed, hard deleted, or restored via the deletedItems Microsoft Graph API. To restore applications using Microsoft Graph, see [Restore deleted item - Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http).
+
+### Administrative units
+
+AUs can be listed, viewed, hard deleted, or restored via the deletedItems Microsoft Graph API. To restore AUs using Microsoft Graph, see [Restore deleted item - Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http).
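For illustration, here's a minimal Microsoft Graph PowerShell sketch (assuming the Microsoft Graph PowerShell SDK is installed, you're connected with `Connect-MgGraph`, and the object ID shown is a placeholder) that lists soft-deleted administrative units and restores one of them:

```powershell
# List soft-deleted administrative units in the tenant
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.administrativeUnit"

# Restore a soft-deleted object by its object ID (placeholder GUID)
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/00000000-0000-0000-0000-000000000000/restore"
```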
## Hard deletions
A hard deletion is the permanent removal of an object from your Azure AD tenant.
### When hard deletes usually occur
-Hard deletes most often occur in the following circumstances.
+Hard deletes might occur in the following circumstances.
Moving from soft to hard delete:
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-o365-certs.md
The token signing and token decrypting certificates are usually self-signed cert
> >
-Azure AD attempts to monitor the federation metadata, and update the token signing certificates as indicated by this metadata. 30 days before the expiration of the token signing certificates, Azure AD checks if new certificates are available by polling the federation metadata.
+Azure AD attempts to monitor the federation metadata, and update the token signing certificates as indicated by this metadata. 35 days before the expiration of the token signing certificates, Azure AD checks if new certificates are available by polling the federation metadata.
* If it can successfully poll the federation metadata and retrieve the new certificates, no email notification is issued to the user. * If it cannot retrieve the new token signing certificates, either because the federation metadata is not reachable or automatic certificate rollover is not enabled, Azure AD issues an email.
Get-MsolFederationProperty -DomainName <domain.name> | FL Source, TokenSigningCe
If the thumbprints in both the outputs match, your certificates are in sync with Azure AD. ### Step 3: Check if your certificate is about to expire
-In the output of either Get-MsolFederationProperty or Get-AdfsCertificate, check for the date under "Not After." If the date is less than 30 days away, you should take action.
+In the output of either Get-MsolFederationProperty or Get-AdfsCertificate, check for the date under "Not After." If the date is less than 35 days away, you should take action.
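For example, a minimal PowerShell sketch of this check (run on the AD FS server; the domain name is a placeholder) before consulting the table below:

```powershell
# Show the expiration date of the AD FS token signing certificates
Get-AdfsCertificate -CertificateType Token-Signing |
    Select-Object IsPrimary, Thumbprint, @{ Name = 'NotAfter'; Expression = { $_.Certificate.NotAfter } }

# Compare with the certificate information Azure AD has for the federated domain
Get-MsolFederationProperty -DomainName contoso.com | Format-List Source, TokenSigningCertificate
```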
| AutoCertificateRollover | Certificates in sync with Azure AD | Federation metadata is publicly accessible | Validity | Action | |::|::|::|::|::| | Yes |Yes |Yes |- |No action needed. See [Renew token signing certificate automatically](#autorenew). | | Yes |No |- |Less than 15 days |Renew immediately. See [Renew token signing certificate manually](#manualrenew). |
-| No |- |- |Less than 30 days |Renew immediately. See [Renew token signing certificate manually](#manualrenew). |
+| No |- |- |Less than 35 days |Renew immediately. See [Renew token signing certificate manually](#manualrenew). |
\[-] Does not matter
Token signing certificates are standard X509 certificates that are used to secur
By default, AD FS is configured to generate token signing and token decryption certificates automatically, both at the initial configuration time and when the certificates are approaching their expiration date.
-Azure AD tries to retrieve a new certificate from your federation service metadata 30 days before the expiry of the current certificate. In case a new certificate is not available at that time, Azure AD will continue to monitor the metadata on regular daily intervals. As soon as the new certificate is available in the metadata, the federation settings for the domain are updated with the new certificate information. You can use `Get-MsolDomainFederationSettings` to verify if you see the new certificate in the NextSigningCertificate / SigningCertificate.
+Azure AD tries to retrieve a new certificate from your federation service metadata 35 days before the expiry of the current certificate. In case a new certificate is not available at that time, Azure AD will continue to monitor the metadata on regular daily intervals. As soon as the new certificate is available in the metadata, the federation settings for the domain are updated with the new certificate information. You can use `Get-MsolDomainFederationSettings` to verify if you see the new certificate in the NextSigningCertificate / SigningCertificate.
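For example (a minimal sketch; the domain name is a placeholder, and a connected MSOnline session via `Connect-MsolService` is assumed):

```powershell
# Check whether Azure AD already has the new token signing certificate for the federated domain
Get-MsolDomainFederationSettings -DomainName contoso.com |
    Format-List SigningCertificate, NextSigningCertificate
```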
For more information on Token Signing certificates in AD FS see [Obtain and Configure Token Signing and Token Decryption Certificates for AD FS](/windows-server/identity/ad-fs/operations/configure-ts-td-certs-ad-fs)
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-topologies.md
The most common topology is a single on-premises forest, with one or multiple do
### Single forest, multiple sync servers to one Azure AD tenant ![Unsupported, filtered topology for a single forest](./media/plan-connect-topologies/singleforestfilteredunsupported.png)
-Having multiple Azure AD Connect sync servers connected to the same Azure AD tenant is not supported, except for a [staging server](#staging-server). It's unsupported even if these servers are configured to synchronize with a mutually exclusive set of objects. You might have considered this topology if you can't reach all domains in the forest from a single server, or if you want to distribute load across several servers.
+Having multiple Azure AD Connect sync servers connected to the same Azure AD tenant is not supported, except for a [staging server](#staging-server). It's unsupported even if these servers are configured to synchronize with a mutually exclusive set of objects. You might have considered this topology if you can't reach all domains in the forest from a single server, or if you want to distribute load across several servers. (No errors occur when a new Azure AD Sync Server is configured for a new Azure AD forest and a new verified child domain.)
## Multiple forests, single Azure AD tenant ![Topology for multiple forests and a single tenant](./media/plan-connect-topologies/multiforestsingledirectory.png)
You can find more details in [Understanding the default configuration](concept-a
Having more than one Azure AD Connect sync server connected to a single Azure AD tenant is not supported. The exception is the use of a [staging server](#staging-server).
-This topology differs from the one below in that **multiple sync servers** connected to a single Azure AD tenant is not supported.
+This topology differs from the one below in that **multiple sync servers** connected to a single Azure AD tenant is not supported. (While not supported, this still works.)
### Multiple forests, single sync server, users are represented in only one directory ![Option for representing users only once across all directories](./media/plan-connect-topologies/multiforestusersonce.png)
We recommend having a single tenant in Azure AD for an organization. Before you
This topology implements the following use cases:
-* AADConnect can synchronize the same users, groups, and contacts from a single Active Directory to multiple Azure AD tenants. These tenants can be in different Azure environments, such as the Azure China environment or the Azure Government environment, but they could also be in the same Azure environment, such as two tenants that are both in Azure Commercial.
-* The same Source Anchor can be used for a single object in separate tenants (but not for multiple objects in the same tenant)
+* AADConnect can synchronize the users, groups, and contacts from a single Active Directory to multiple Azure AD tenants. These tenants can be in different Azure environments, such as the Azure China environment or the Azure Government environment, but they could also be in the same Azure environment, such as two tenants that are both in Azure Commercial. For more details on options, see https://docs.microsoft.com/azure/azure-government/documentation-government-plan-identity.
+* The same Source Anchor can be used for a single object in separate tenants (but not for multiple objects in the same tenant). (The verified domain can't be the same in two tenants. More details are needed to enable the same object to have two UPNs.)
* You will need to deploy an AADConnect server for every Azure AD tenant you want to synchronize to - one AADConnect server cannot synchronize to more than one Azure AD tenant. * It is supported to have different sync scopes and different sync rules for different tenants. * Only one Azure AD tenant sync can be configured to write back to Active Directory for the same object. This includes device and group writeback as well as Hybrid Exchange configurations – these features can only be configured in one tenant. The only exception here is Password Writeback – see below.
active-directory Whatis Phs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-phs.md
# What is password hash synchronization with Azure AD?
-Password hash synchronization is one of the sign-in methods used to accomplish hybrid identity. Azure AD Connect synchronizes a hash, of the hash, of a user's password from an on-premises Active Directory instance to a cloud-based Azure AD instance.
+Password hash synchronization is one of the sign-in methods used to accomplish hybrid identity. Azure AD Connect synchronizes a hash of a user's password from an on-premises Active Directory instance to a cloud-based Azure AD instance.
Password hash synchronization is an extension to the directory synchronization feature implemented by Azure AD Connect sync. You can use this feature to sign in to Azure AD services like Microsoft 365. You sign in to the service by using the same password you use to sign in to your on-premises Active Directory instance.
active-directory How Managed Identities Work Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md
ms.devlang: Previously updated : 10/30/2022 Last updated : 11/15/2022
Your code can use a managed identity to request access tokens for services that
The following diagram shows how managed service identities work with Azure virtual machines (VMs):
-[![Managed service identities and Azure VMs](media/how-managed-identities-work-vm/data-flow.png)](media/how-managed-identities-work-vm/data-flow.png#lightbox)
+[![Diagram that shows how managed service identities are associated with Azure virtual machines, get an access token, and invoke a protected Azure AD resource.](media/how-managed-identities-work-vm/data-flow.png)](media/how-managed-identities-work-vm/data-flow.png#lightbox)
The following table shows the differences between the system-assigned and user-assigned managed identities:
The following table shows the differences between the system-assigned and user-a
3. Azure Resource Manager updates the VM identity using the Azure Instance Metadata Service identity endpoint (for [Windows](/azure/virtual-machines/windows/instance-metadata-service) and [Linux](/azure/virtual-machines/linux/instance-metadata-service)), providing the endpoint with the service principal client ID and certificate.
-4. After the VM has an identity, use the service principal information to grant the VM access to Azure resources. To call Azure Resource Manager, use Azure role-based access control (Azure RBAC) to assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the specific secret or key in Key Vault.
+4. After the VM has an identity, use the service principal information to grant the VM access to Azure resources. To call Azure Resource Manager, use Azure Role-Based Access Control (Azure RBAC) to assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the specific secret or key in Key Vault.
5. Your code that's running on the VM can request a token from the Azure Instance Metadata service endpoint, accessible only from within the VM: `http://169.254.169.254/metadata/identity/oauth2/token` - The resource parameter specifies the service to which the token is sent. To authenticate to Azure Resource Manager, use `resource=https://management.azure.com/`. - API version parameter specifies the IMDS version, use api-version=2018-02-01 or greater.
+ The following example demonstrates how to use cURL to make a request to the local Azure Instance Metadata Service managed identity endpoint to get an access token.
+
+ ```bash
+ curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com%2F' -H Metadata:true
+ ```
+ 6. A call is made to Azure AD to request an access token (as specified in step 5) by using the client ID and certificate configured in step 3. Azure AD returns a JSON Web Token (JWT) access token. 7. Your code sends the access token on a call to a service that supports Azure AD authentication.
The following table shows the differences between the system-assigned and user-a
5. Your code that's running on the VM can request a token from the Azure Instance Metadata Service identity endpoint, accessible only from within the VM: `http://169.254.169.254/metadata/identity/oauth2/token` - The resource parameter specifies the service to which the token is sent. To authenticate to Azure Resource Manager, use `resource=https://management.azure.com/`.
- - The client ID parameter specifies the identity for which the token is requested. This value is required for disambiguation when more than one user-assigned identity is on a single VM.
+ - The `client_id` parameter specifies the identity for which the token is requested. This value is required for disambiguation when more than one user-assigned identity is on a single VM. You can find the **Client ID** in the Managed Identity **Overview**:
+
+ [![Screenshot that shows how to copy the managed identity client ID.](./media/how-managed-identities-work-vm/managed-identity-client-id.png)](./media/how-managed-identities-work-vm/managed-identity-client-id.png#lightbox)
+ - The API version parameter specifies the Azure Instance Metadata Service version. Use `api-version=2018-02-01` or higher.
+ The following example demonstrates how to use cURL to make a request to the local Azure Instance Metadata Service managed identity endpoint to get an access token.
+
+ ```bash
+ curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com%2F&client_id=12345678-0000-0000-0000-000000000000' -H Metadata:true
+ ```
+ 6. A call is made to Azure AD to request an access token (as specified in step 5) by using the client ID and certificate configured in step 3. Azure AD returns a JSON Web Token (JWT) access token. 7. Your code sends the access token on a call to a service that supports Azure AD authentication.
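To illustrate steps 5 through 7 end to end, here's a minimal PowerShell sketch (run from inside the VM; the client ID and the Azure Resource Manager call are placeholders for your own identity and target API):

```powershell
# Request a token from the Azure Instance Metadata Service endpoint for Azure Resource Manager,
# specifying the client ID of the user-assigned managed identity (placeholder GUID)
$uri = 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/&client_id=12345678-0000-0000-0000-000000000000'
$response = Invoke-RestMethod -Uri $uri -Headers @{ Metadata = 'true' }

# Use the returned access token to call a service that supports Azure AD authentication (step 7)
Invoke-RestMethod -Uri 'https://management.azure.com/subscriptions?api-version=2020-01-01' `
    -Headers @{ Authorization = "Bearer $($response.access_token)" }
```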
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 07/15/2022 Last updated : 11/15/2022
The following Azure AD roles can be assigned with administrative unit scope. Add
| [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | Can reset passwords for non-administrators in the assigned administrative unit only. | | [License Administrator](permissions-reference.md#license-administrator) | Can assign, remove, and update license assignments within the administrative unit only. | | [Password Administrator](permissions-reference.md#password-administrator) | Can reset passwords for non-administrators within the assigned administrative unit only. |
+| [Printer Administrator](permissions-reference.md#printer-administrator) | Can manage printers and printer connectors. For more information, see [Delegate administration of printers in Universal Print](/universal-print/portal/delegated-admin#scoped-admin-vs-tenant-printer-admin). |
| [SharePoint Administrator](permissions-reference.md#sharepoint-administrator) | Can manage Microsoft 365 groups in the assigned administrative unit only. For SharePoint sites associated with Microsoft 365 groups in an administrative unit, can also update site properties (site name, URL, and external sharing policy) using the Microsoft 365 admin center. Cannot use the SharePoint admin center or SharePoint APIs to manage sites. | | [Teams Administrator](permissions-reference.md#teams-administrator) | Can manage Microsoft 365 groups in the assigned administrative unit only. Can manage team members in the Microsoft 365 admin center for teams associated with groups in the assigned administrative unit only. Cannot use the Teams admin center. | | [Teams Devices Administrator](permissions-reference.md#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. |
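As an illustration of assigning one of these roles at administrative unit scope, here's a minimal Microsoft Graph PowerShell sketch (all IDs are placeholders; a connected `Connect-MgGraph` session with sufficient permissions is assumed):

```powershell
# Assign a role to a principal, scoped to a single administrative unit (all IDs are placeholders)
$body = @{
    principalId      = "aaaaaaaa-0000-0000-0000-000000000001"                        # user or group object ID
    roleDefinitionId = "bbbbbbbb-0000-0000-0000-000000000002"                        # ID of the role definition to assign
    directoryScopeId = "/administrativeUnits/cccccccc-0000-0000-0000-000000000003"   # restricts the assignment to this AU
}
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" `
    -Body $body
```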
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Previously updated : 09/23/2022 Last updated : 11/11/2022
Use your Microsoft Azure Active Directory account with Atlassian JIRA server to
To configure Azure AD integration with JIRA SAML SSO by Microsoft, you need the following items: - An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-- JIRA Core and Software 6.4 to 9.1.0 or JIRA Service Desk 3.0 to 4.22.1 should installed and configured on Windows 64-bit version.
+- JIRA Core and Software 6.4 to 8.22.1 or JIRA Service Desk 3.0 to 4.22.1 should be installed and configured on Windows 64-bit version.
- JIRA server is HTTPS enabled. - Note that the supported versions for the JIRA Plugin are mentioned in the section below. - JIRA server is reachable on the Internet, particularly to the Azure AD login page for authentication, and should be able to receive the token from Azure AD.
To get started, you need the following items:
## Supported versions of JIRA
-* JIRA Core and Software: 6.4 to 9.1.0
+* JIRA Core and Software: 6.4 to 8.22.1.
* JIRA Service Desk 3.0 to 4.22.1. * JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md).
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<domain:port>/plugins/servlet/saml/auth`
-
- b. In the **Identifier** box, type a URL using the following pattern:
+ a. In the **Identifier** box, type a URL using the following pattern:
`https://<domain:port>/`
- c. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<domain:port>/plugins/servlet/saml/auth`
+
   c. In the **Sign-on URL** text box, type a URL using the following pattern:
`https://<domain:port>/plugins/servlet/saml/auth` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-On URL. Port is optional in case it's a named URL. These values are received during the configuration of Jira plugin, which is explained later in the tutorial.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. Port is optional in case it's a named URL. These values are received during the configuration of Jira plugin, which is explained later in the tutorial.
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. The Name ID attribute in Azure AD can be mapped to any desired user attribute by editing the Attributes & Claims section.
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing how to edit Attributes and Claims.](common/edit-attribute.png)
+ ![Screenshot showing how to edit Attributes and Claims.](common/edit-attribute.png)
a. After clicking on Edit, any desired user attribute can be mapped by clicking on Unique User Identifier (Name ID).
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing the NameID in Attributes and Claims.](common/attribute-nameID.png)
+ ![Screenshot showing the NameID in Attributes and Claims.](common/attribute-nameID.png)
b. On the next screen, the desired attribute name like user.userprincipalname can be selected as an option from the Source Attribute dropdown menu.
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing how to select Attributes and Claims.](common/attribute-select.png)
+ ![Screenshot showing how to select Attributes and Claims.](common/attribute-select.png)
c. The selection can then be saved by clicking on the Save button at the top.
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing how to save Attributes and Claims.](common/attribute-save.png)
+ ![Screenshot showing how to save Attributes and Claims.](common/attribute-save.png)
d. Now, the user.userprincipalname attribute source in Azure AD is mapped to the Name ID attribute name in Azure AD which will be compared with the username attribute in Atlassian by the SSO plugin.
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing how to review Attributes and Claims.](common/attribute-review.png)
+ ![Screenshot showing how to review Attributes and Claims.](common/attribute-review.png)
> [!NOTE] > The SSO service provided by Microsoft Azure supports SAML authentication, which can perform user identification using different attributes such as givenname (first name), surname (last name), email (email address), and user principal name (username). We recommend not using email as an authentication attribute because email addresses are not always verified by Azure AD. The plugin compares the value of the Atlassian username attribute with the NameID attribute in Azure AD to determine valid user authentication.
+1. If your Azure tenant has **guest users**, follow the configuration steps below:
+
+ a. Click on **pencil** icon to go to the Attributes & Claims section.
+
+ ![Screenshot showing how to edit Attributes and Claims.](common/edit-attribute.png)
+
+ b. Click on **NameID** on Attributes & Claims section.
+
+ ![Screenshot showing the NameID in Attributes and Claims.](common/attribute-nameID.png)
+
+ c. Set up the claim conditions based on the User Type.
+
+ ![Screenshot for claim conditions.](./media/jiramicrosoft-tutorial/claim-conditions.png)
+
+ >[!NOTE]
+ > Set the NameID value to `user.userprincipalname` for Members and `user.mail` for External Guests.
+
+ d. **Save** the changes and verify the SSO for external guest users.
+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
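As an alternative to the portal steps, here's a minimal Microsoft Graph PowerShell sketch (the UPN domain and password are placeholders) for creating such a test user:

```powershell
# Create the test user B.Simon (the UPN domain and password below are placeholders)
$passwordProfile = @{ Password = '<strong-password-here>'; ForceChangePasswordNextSignIn = $true }
New-MgUser -DisplayName 'B.Simon' `
    -UserPrincipalName 'B.Simon@contoso.onmicrosoft.com' `
    -MailNickname 'B.Simon' `
    -AccountEnabled:$true `
    -PasswordProfile $passwordProfile
```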
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
> To enable the default login form for admin login on login page when force azure login is enabled, add the query parameter in the browser URL. > `https://<domain:port>/login.jsp?force_azure_login=false`
- k. **Enable Use of Application Proxy** checkbox, if you have configured your on-premise atlassian application in an App Proxy setup.
+ k. Select the **Enable Use of Application Proxy** checkbox if you have configured your on-premises Atlassian application in an App Proxy setup.
  * For App Proxy setup, follow the steps in the [Azure AD App Proxy documentation](../app-proxy/what-is-application-proxy.md).
active-directory Sap Analytics Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input the tenant URL value retrieved earlier in **Tenant URL**. Input the access token value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to InVision. If the connection fails, ensure your SAP Analytics Cloud account has Admin permissions and try again.
+5. Under the **Admin Credentials** section, input the tenant URL value retrieved earlier in **Tenant URL**. Input the access token value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to SAP Analytics Cloud. If the connection fails, ensure your SAP Analytics Cloud account has Admin permissions and try again.
![Screenshot shows the Admin Credentials dialog box, where you can enter your Tenant U R L and Secret Token.](./media/sap-analytics-cloud-provisioning-tutorial/provisioning.png)
api-management Api Management Howto Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-oauth2.md
Previously updated : 04/26/2022 Last updated : 11/09/2022 + # How to authorize test console of developer portal by configuring OAuth 2.0 user authorization
If you haven't yet created an API Management service instance, see [Create an AP
Configuring OAuth 2.0 user authorization in API Management only enables the developer portal's test console as a client to acquire a token from the authorization server. The configuration for each OAuth 2.0 provider is different, although the steps are similar, and the required pieces of information used to configure OAuth 2.0 in your API Management service instance are the same. This article shows an example using Azure Active Directory as an OAuth 2.0 provider.
+The following are the high-level configuration steps:
1. Register an application (backend-app) in Azure AD to represent the API.
Configuring OAuth 2.0 user authorization in API Management only enables the deve
1. Configure an API to use OAuth 2.0 user authorization.
-1. Add the **validate-jwt** policy to pre-authorize the OAuth 2.0 token for every incoming request.
+1. Add a policy to pre-authorize the OAuth 2.0 token for every incoming request. You can use the `validate-jwt` policy for any OAuth 2.0 provider.
+
+This configuration supports the following OAuth flow:
++
+1. The developer portal requests a token from Azure AD using the client-app credentials.
+
+1. After successful validation, Azure AD issues the access/refresh token.
+
+1. A developer (user of the developer portal) makes an API call with the authorization header.
+
+1. The token gets validated by using the `validate-jwt` policy in API Management by Azure AD.
+
+1. Based on the validation result, the developer will receive the response in the developer portal.
+ ## Authorization grant types
Throughout this tutorial you'll be asked to record key information to reference
You'll need to register two applications with your OAuth 2.0 provider: one represents the backend API to be protected, and a second represents the client application that calls the API - in this case, the test console of the developer portal.
-The following are example steps using Azure AD as the OAuth 2.0 provider.
+The following are example steps using Azure AD as the OAuth 2.0 provider. For details about app registration, see [Quickstart: Configure an application to expose a web API](../active-directory/develop/quickstart-configure-app-expose-web-apis.md).
### Register an application in Azure AD to represent the API
-Using the Azure portal, register an application that represents the backend API in Azure AD.
-
-For details about app registration, see [Quickstart: Configure an application to expose a web API](../active-directory/develop/quickstart-configure-app-expose-web-apis.md).
- 1. In the [Azure portal](https://portal.azure.com), search for and select **App registrations**. 1. Select **New registration**.
For details about app registration, see [Quickstart: Configure an application to
1. Under the **Manage** section of the side menu, select **Expose an API** and set the **Application ID URI** with the default value. Record this value for later. 1. Select the **Add a scope** button to display the **Add a scope** page:
- 1. Enter a new **Scope name**, **Admin consent display name**, and **Admin consent description**.
+ 1. Enter a **Scope name** for a scope that's supported by the API (for example, **Files.Read**).
+ 1. In **Who can consent?**, make a selection for your scenario, such as **Admins and users**. Select **Admins only** for higher privileged scenarios.
+ 1. Enter **Admin consent display name** and **Admin consent description**.
1. Make sure the **Enabled** scope state is selected. 1. Select the **Add scope** button to create the scope.
For details about app registration, see [Quickstart: Configure an application to
### Register another application in Azure AD to represent a client application
-Register every client application that calls the API as an application in Azure AD. In this example, the client application is the **test console** in the API Management developer portal.
-
-To register an application in Azure AD to represent the client application:
+Register every client application that calls the API as an application in Azure AD.
1. In the [Azure portal](https://portal.azure.com), search for and select **App registrations**.
To register an application in Azure AD to represent the client application:
1. When the **Register an application page** appears, enter your application's registration information: - In the **Name** section, enter a meaningful application name that will be displayed to users of the app, such as *client-app*.
- - In the **Supported account types** section, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
+ - In the **Supported account types** section, select an option that suits your scenario.
-1. In the **Redirect URI** section, select `Web` and leave the URL field empty for now.
+1. In the **Redirect URI** section, select **Web** and leave the URL field empty for now.
1. Select **Register** to create the application.
To register an application in Azure AD to represent the client application:
1. Create a client secret for this application to use in a subsequent step. 1. Under the **Manage** section of the side menu, select **Certificates & secrets**.
- 1. Under **Client secrets**, select **New client secret**.
+ 1. Under **Client secrets**, select **+ New client secret**.
1. Under **Add a client secret**, provide a **Description** and choose when the key should expire. 1. Select **Add**.
Optionally:
1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
-1. Under the Developer portal section in the side menu, select **OAuth 2.0 + OpenID Connect**.
+1. Under **Developer portal** in the side menu, select **OAuth 2.0 + OpenID Connect**.
-1. Under the **OAuth 2.0 tab**, select **+ Add**.
+1. Under the **OAuth 2.0** tab, select **+ Add**.
:::image type="content" source="media/api-management-howto-oauth2/oauth-01.png" alt-text="OAuth 2.0 menu":::
Optionally:
1. The next section of the form contains the **Authorization grant types**, **Authorization endpoint URL**, and **Authorization request method** settings.
- * Select one or more desired **Authorization grant types**. For this example, select **Authorization code** (the default). [Learn more](#authorization-grant-types).
+ * Select one or more desired **Authorization grant types**. For this example, select **Authorization code** (the default). [Learn more](#authorization-grant-types)
- * Enter the **Authorization endpoint URL**. For Azure AD, this URL will be similar to one of the following URLs, where `<tenant_id>` is replaced with the ID of your Azure AD tenant. You can obtain the endpoint URL from the **Endpoints** page of one of your app registrations.
+ * Enter the **Authorization endpoint URL**. You can obtain the endpoint URL from the **Endpoints** page of one of your app registrations. For a single-tenant app in Azure AD, this URL will be similar to one of the following URLs, where `{aad-tenant}` is replaced with the ID of your Azure AD tenant.
Using the v2 endpoint is recommended; however, API Management supports both v1 and v2 endpoints.
- `https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/authorize` (v2)
+ `https://login.microsoftonline.com/{aad-tenant}/oauth2/v2.0/authorize` (v2)
- `https://login.microsoftonline.com/<tenant_id>/oauth2/authorize` (v1)
+ `https://login.microsoftonline.com/{aad-tenant}/oauth2/authorize` (v1)
* The **Authorization request method** specifies how the authorization request is sent to the OAuth 2.0 server. Select **POST**. :::image type="content" source="media/api-management-howto-oauth2/oauth-03.png" alt-text="Specify authorization settings":::
-1. Specify **Token endpoint URL**, **Client authentication methods**, **Access token sending method** and **Default scope**.
+1. Specify **Token endpoint URL**, **Client authentication methods**, **Access token sending method**, and **Default scope**.
- * Enter the **Token endpoint URL**. For Azure AD, it will be similar to one of the following URLs, where `<tenant_id>` is replaced with the ID of your Azure AD tenant. Use the same endpoint version (v2 or v1) that you chose previously.
+ * Enter the **Token endpoint URL**. For a single tenant app in Azure AD, it will be similar to one of the following URLs, where `{aad-tenant}` is replaced with the ID of your Azure AD tenant. Use the same endpoint version (v2 or v1) that you chose previously.
- `https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token` (v2)
+ `https://login.microsoftonline.com/{aad-tenant}/oauth2/v2.0/token` (v2)
- `https://login.microsoftonline.com/<tenant_id>/oauth2/token` (v1)
+ `https://login.microsoftonline.com/{aad-tenant}/oauth2/token` (v1)
* If you use **v1** endpoints, add a body parameter: * Name: **resource**.
Optionally:
* Accept the default settings for **Client authentication methods** and **Access token sending method**.
-1. The **Client credentials** section contains the **Client ID** and **Client secret**, which you obtained during the creation and configuration process of your client-app.
+1. In **Client credentials**, enter the **Client ID** and **Client secret**, which you obtained during the creation and configuration process of your client-app.
1. After the **Client ID** and **Client secret** are specified, the **Redirect URI** for the **authorization code** is generated. This URI is used to configure the redirect URI in your OAuth 2.0 server configuration.
Optionally:
- `/signin-oauth/code/callback/{authServerName}` for authorization code grant flow - `/signin-oauth/implicit/callback` for implicit grant flow
-
- Copy the appropriate Redirect URI to the **Authentication** page of your client-app registration.
:::image type="content" source="media/api-management-howto-oauth2/oauth-04.png" alt-text="Add client credentials for the OAuth 2.0 service":::
+ Copy the appropriate Redirect URI to the **Authentication** page of your client-app registration. In the app registration, select **Authentication** > **+ Add a platform** > **Web**, and then enter the Redirect URI.
+ 1. If **Authorization grant types** is set to **Resource owner password**, the **Resource owner password credentials** section is used to specify those credentials; otherwise you can leave it blank. 1. Select **Create** to save the API Management OAuth 2.0 authorization server configuration. 1. [Republish](api-management-howto-developer-portal-customize.md#publish) the developer portal.
- > [!NOTE]
- > When making OAuth 2.0-related changes, it is important that you remember to (re-)publish the developer portal after every modification as relevant changes (for example, scope change) otherwise cannot propagate into the portal and subsequently be used in trying out the APIs.
+ > [!IMPORTANT]
+ > When making OAuth 2.0-related changes, be sure to republish the developer portal after every modification; otherwise, relevant changes (for example, a scope change) can't propagate into the portal and be used when trying out the APIs.
-After saving the OAuth 2.0 server configuration, configure APIs to use this configuration, as shown in the next section.
+After saving the OAuth 2.0 server configuration, configure an API or APIs to use this configuration, as shown in the next section.
## Configure an API to use OAuth 2.0 user authorization
After saving the OAuth 2.0 server configuration, configure APIs to use this conf
:::image type="content" source="./media/api-management-howto-oauth2/oauth-07.png" alt-text="Configure OAuth 2.0 authorization server"::: - ## Developer portal - test the OAuth 2.0 user authorization [!INCLUDE [api-management-test-oauth-authorization](../../includes/api-management-test-oauth-authorization.md)]
+## Configure a JWT validation policy to pre-authorize requests
+
+In the configuration so far, API Management doesn't validate the access token. It only passes the token in the authorization header to the backend API.
+
+To pre-authorize requests, configure a [validate-jwt](api-management-access-restriction-policies.md#ValidateJWT) policy to validate the access token of each incoming request. If a request doesn't have a valid token, API Management blocks it.
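Here's a minimal sketch of what such a policy could look like, applied at the API scope with Azure PowerShell (the resource names, tenant ID, and audience are placeholders; the exact policy you need depends on your provider and token claims):

```powershell
# Apply an inbound validate-jwt policy to an API (names, tenant ID, and audience below are placeholders)
$context = New-AzApiManagementContext -ResourceGroupName 'my-resource-group' -ServiceName 'my-apim-instance'
$policy = @"
<policies>
  <inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
      <openid-config url="https://login.microsoftonline.com/{aad-tenant}/v2.0/.well-known/openid-configuration" />
      <audiences>
        <audience>api://{backend-app-client-id}</audience>
      </audiences>
    </validate-jwt>
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
  <on-error><base /></on-error>
</policies>
"@
Set-AzApiManagementPolicy -Context $context -ApiId 'echo-api' -Policy $policy
```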
+++ ## Legacy developer portal - test the OAuth 2.0 user authorization [!INCLUDE [api-management-portal-legacy.md](../../includes/api-management-portal-legacy.md)]
-Once you've configured your OAuth 2.0 authorization server and configured your API to use that server, you can test it by going to the developer portal and calling an API. Click **Developer portal (legacy)** in the top menu from your Azure API Management instance **Overview** page.
+Once you've configured your OAuth 2.0 authorization server and configured your API to use that server, you can test it by going to the developer portal and calling an API. Select **Developer portal (legacy)** in the top menu from your Azure API Management instance **Overview** page.
-Click **APIs** in the top menu and select **Echo API**.
+Select **APIs** in the top menu and select **Echo API**.
![Echo API][api-management-apis-echo-api] > [!NOTE] > If you have only one API configured or visible to your account, then clicking APIs takes you directly to the operations for that API.
-Select the **GET Resource** operation, click **Open Console**, and then select **Authorization code** from the drop-down.
+Select the **GET Resource** operation, select **Open Console**, and then select **Authorization code** from the drop-down.
![Open console][api-management-open-console]
Once you've signed in, the **Request headers** are populated with an `Authorizat
At this point you can configure the desired values for the remaining parameters, and submit the request.
-## Configure a JWT validation policy to pre-authorize requests
-
-In the preceding section, API Management doesn't validate the access token. It only passes the token in the authorization header to the backend API.
-
-To pre-authorize requests, configure a [validate-jwt](api-management-access-restriction-policies.md#ValidateJWT) policy to validate the access token of each incoming request. If a request doesn't have a valid token, API Management blocks it.
-- ## Next steps For more information about using OAuth 2.0 and API Management, see [Protect a web API backend in Azure API Management using OAuth 2.0 authorization with Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
Each backup is a complete offline copy of your app, not an incremental update.
#### Does Azure Functions support automatic backups?
-Automatic backups are available in preview for Azure Functions in [dedicated (App Service)](../azure-functions/dedicated-plan.md) **Standard** or **Premium** tiers. Function apps in the [**Consumption**](../azure-functions/consumption-plan.md) or [**Elastic Premium**](../azure-functions/functions-premium-plan.md) pricing tiers aren't supported for automatic backups.
+Automatic backups are available in preview for Azure Functions in [dedicated (App Service)](../azure-functions/dedicated-plan.md) **Basic**, **Standard**, or **Premium** tiers. Function apps in the [**Consumption**](../azure-functions/consumption-plan.md) or [**Elastic Premium**](../azure-functions/functions-premium-plan.md) pricing tiers aren't supported for automatic backups.
#### What's included in an automatic backup?
application-gateway Application Gateway Configure Listener Specific Ssl Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-listener-specific-ssl-policy.md
Before you proceed, here are some important points related to listener-specific
- We recommend using TLS 1.2 as this version will be mandated in the future. - You don't have to configure client authentication on an SSL profile to associate it to a listener. You can have only client authentication or listener-specific SSL policy configured, or both configured in your SSL profile.-- Using a new Predefined or Customv2 policy enhances SSL security and performance for the entire gateway (SSL Policy and SSL Profile). Therefore, you cannot have different listeners on both old as well as new SSL (predefined or custom) policies.
+- Using a [2022 Predefined](./application-gateway-ssl-policy-overview.md#predefined-tls-policy) or Customv2 policy enhances SSL security and performance for the entire gateway (SSL Policy and SSL Profile). Therefore, you cannot have different listeners on both old as well as new SSL (predefined or custom) policies.
Consider this example: you're currently using SSL Policy and SSL Profile with "older" policies/ciphers. Using a "new" Predefined or Customv2 policy for either one will also require you to upgrade the other configuration. You may use the new predefined policies, the Customv2 policy, or a combination of these across the gateway.
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
The TLS policy includes control of the TLS protocol version as well as the ciphe
- SSL 2.0 and 3.0 are disabled for all application gateways and are not configurable. - A custom TLS policy allows you to select any TLS protocol as the minimum protocol version for your gateway: TLSv1_0, TLSv1_1, TLSv1_2, or TLSv1_3. - If no TLS policy is defined, the minimum protocol version is set to TLSv1_0, and protocol versions v1.0, v1.1, and v1.2 are supported.-- The new **Predefined and Customv2 policies** that support **TLS v1.3** are currently in **Preview** and only available with Application Gateway V2 SKUs (Standard_v2 or WAF_v2).-- Using a new Predefined or Customv2 policy enhances SSL security and performance posture of the entire gateway (for SSL Policy and [SSL Profile](application-gateway-configure-listener-specific-ssl-policy.md#set-up-a-listener-specific-ssl-policy)). Hence, both old and new policies cannot co-exist on a gateway. You must use any of the older predefined or custom policies across the gateway if clients require older TLS versions or ciphers (for example, TLS v1.0).
+- The [**2022 Predefined**](#predefined-tls-policy) and [**Customv2 policies**](#custom-tls-policy) that support **TLS v1.3** are available only with Application Gateway V2 SKUs (Standard_v2 or WAF_v2).
+- Using a 2022 Predefined or Customv2 policy enhances SSL security and performance posture of the entire gateway (for SSL Policy and [SSL Profile](application-gateway-configure-listener-specific-ssl-policy.md#set-up-a-listener-specific-ssl-policy)). Hence, both old and new policies cannot co-exist on a gateway. You must use any of the older predefined or custom policies across the gateway if clients require older TLS versions or ciphers (for example, TLS v1.0).
- TLS cipher suites used for the connection are also based on the type of the certificate being used. The cipher suites used in "client to application gateway connections" are based on the type of listener certificates on the application gateway. Whereas the cipher suites used in establishing "application gateway to backend pool connections" are based on the type of server certificates presented by the backend servers. ## Predefined TLS policy
Application Gateway offers several predefined security policies. You can configu
The following table shows the list of cipher suites and minimum protocol version support for each predefined policy. The ordering of the cipher suites determines the priority order during TLS negotiation. To know the exact ordering of the cipher suites for these predefined policies, you can refer to the PowerShell, CLI, REST API or the Listeners blade in portal.
-| Predefined policy names (AppGwSslPolicy&lt;YYYYMMDD&gt;) | 20150501 | 20170401 | 20170401S | 20220101 <br/> (Preview) | 20220101S <br/> (Preview) |
+| Predefined policy names (AppGwSslPolicy&lt;YYYYMMDD&gt;) | 20150501 | 20170401 | 20170401S | 20220101 | 20220101S |
| - | - | - | - | - | - | | **Minimum Protocol Version** | 1.0 | 1.1 | 1.2 | 1.2 | 1.2 | | **Enabled protocol versions** | 1.0<br/>1.1<br/>1.2 | 1.1<br/>1.2 | 1.2 | 1.2<br/>1.3 | 1.2<br/>1.3 |
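For illustration, here's a minimal Azure PowerShell sketch (gateway and resource group names are placeholders) that applies one of these predefined policies to an existing v2 gateway:

```powershell
# Apply the 2022 predefined TLS policy to an existing Application Gateway (names are placeholders)
$gateway = Get-AzApplicationGateway -Name 'my-appgw' -ResourceGroupName 'my-resource-group'
Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gateway -PolicyType Predefined -PolicyName 'AppGwSslPolicy20220101'
Set-AzApplicationGateway -ApplicationGateway $gateway
```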
The following table shows the list of cipher suites and minimum protocol version
If a TLS policy needs to be configured for your requirements, you can use a Custom TLS policy. With a custom TLS policy, you have complete control over the minimum TLS protocol version to support, as well as the supported cipher suites and their priority order. > [!NOTE]
-> The newer, stronger ciphers and TLSv1.3 support are only available with the **CustomV2 policy (Preview)**. It provides enhanced security and performance benefits.
+> The newer, stronger ciphers and TLSv1.3 support are only available with the **CustomV2 policy**. It provides enhanced security and performance benefits.
> [!IMPORTANT] > - If you're using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU.
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
For example, here's how to calculate the available addressing for a subnet with
- Gateway 3: Maximum of 15 instances; utilizes a private frontend IP configuration - Subnet Size: /24
-Subnet Size /24 = 255 IP addresses - 5 reserved from the platform = 250 available addresses.
-250 - Gateway 1 (10) - 1 private frontend IP configuration = 239
-239 - Gateway 2 (2) = 237
-237 - Gateway 3 (15) - 1 private frontend IP configuration = 221
+Subnet Size /24 = 256 IP addresses - 5 reserved from the platform = 251 available addresses.
+251 - Gateway 1 (10) - 1 private frontend IP configuration = 240
+240 - Gateway 2 (2) = 238
+238 - Gateway 3 (15) - 1 private frontend IP configuration = 222
> [!IMPORTANT] > Although a /24 subnet isn't required per Application Gateway v2 SKU deployment, it is highly recommended. This is to ensure that Application Gateway v2 has sufficient space for autoscaling expansion and maintenance upgrades. You should ensure that the Application Gateway v2 subnet has sufficient address space to accommodate the number of instances required to serve your maximum expected traffic. If you specify the maximum instance count, then the subnet should have capacity for at least that many addresses. For capacity planning around instance count, see [instance count details](understanding-pricing.md#instance-count).
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
Choose the frontend IP address that you plan to associate with this listener. Th
## Frontend port
-Choose the frontend port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners.
+Choose the frontend port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners; however, the same port can't be used for both at the same time.
## Protocol
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Previously updated : 10/14/2022 Last updated : 11/14/2022
-monikerRange: '>=form-recog-2.1.0'
recommendations: false <!-- markdownlint-disable MD033 -->
-# Business card data extraction
+# Azure Form Recognizer business card model
-## How business card data extraction works
-Business cards are a great way of representing a business or a professional. The company logo, fonts and background images found in business cards help the company branding and differentiate it from others. Applying OCR and machine-learning based techniques to automate scanning of business cards is a common image processing scenario. Enterprise systems used by sales and marketing teams typically have business card data extraction capability integrated into them for the benefit of their users.
+The Form Recognizer business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from business card images. The API analyzes printed business cards; extracts key information such as first name, last name, company name, email address, and phone number; and returns a structured JSON data representation.
-## Form Recognizer Business Card model
+## Business card data extraction
-The business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from business card images. The API analyzes printed business cards; extracts key information such as first name, last name, company name, email address, and phone number; and returns a structured JSON data representation.
+Business cards are a great way to represent a business or a professional. The company logo, fonts, and background images found in business cards help promote the company branding and differentiate it from others. Applying OCR and machine-learning based techniques to automate scanning of business cards is a common image processing scenario. Enterprise systems used by sales and marketing teams typically have business card data extraction capability integrated into them for the benefit of their users.
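As a hedged illustration of the model described above, the following sketch uses the `azure-ai-formrecognizer` Python SDK (v3.x) with placeholder endpoint and key values and the sample business card image linked later in this article; it isn't the article's own sample.

```python
# Minimal sketch: analyze a business card with the prebuilt-businessCard model.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient("<endpoint>", AzureKeyCredential("<key>"))

poller = client.begin_analyze_document_from_url(
    "prebuilt-businessCard",
    "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg",
)
result = poller.result()

for card in result.documents:
    contacts = card.fields.get("ContactNames")
    for contact in (contacts.value if contacts else []):
        sub = contact.value or {}  # sub-fields: FirstName, LastName
        first, last = sub.get("FirstName"), sub.get("LastName")
        print((first.value if first else ""), (last.value if last else ""))
    emails = card.fields.get("Emails")
    for email in (emails.value if emails else []):
        print("Email:", email.value)
```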
***Sample business card processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)*** +++
+***Sample business card processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)***
++ ## Development options + The following tools are supported by Form Recognizer v3.0: | Feature | Resources | Model ID | |-|-|--| |**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-businessCard**| ++ The following tools are supported by Form Recognizer v2.1: | Feature | Resources | |-|-| |**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>| + ### Try business card data extraction
-See how data, including name, job title, address, email, and company name, is extracted from business cards using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
+See how data, including name, job title, address, email, and company name, is extracted from business cards. You'll need the following resources:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data, including name, job title, address, email, and company name, is ex
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: + #### Form Recognizer Studio > [!NOTE]
See how data, including name, job title, address, email, and company name, is ex
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
-## Input requirements
++
+## Form Recognizer Sample Labeling tool
+
+1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
+
+ :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
+
+1. Select the **Form Type** to analyze from the dropdown menu.
+
+1. Choose a URL for the file you would like to analyze from the below options:
+
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
+ * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
+ * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
+ * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
+
+1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
+
+1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
+
+ :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot of the select-form-type dropdown menu.":::
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
-* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
-* The total size of the training data is 500 pages or less.
-* If your PDFs are password-locked, you must remove the lock before submission.
+1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
+
+1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
+
+ :::image type="content" source="media/business-card-results.png" alt-text="Screenshot of the business card model analyze results operation.":::
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer Service. +
+## Input requirements
+++++
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
+++ ## Supported languages and locales >[!NOTE]
See how data, including name, job title, address, email, and company name, is ex
| WorkPhones | Array of phone numbers | Work phone number(s) from business card | +1 xxx xxx xxxx | | OtherPhones | Array of phone numbers | Other phone number(s) from business card | +1 xxx xxx xxxx |
-## Form Recognizer v3.0
- Form Recognizer v3.0 introduces several new features and capabilities.
+
+### Fields extracted
+
+|Name| Type | Description | Text |
+|:--|:-|:-|:-|
+| ContactNames | array of objects | Contact name extracted from business card | [{ "FirstName": "John", "LastName": "Doe" }] |
+| FirstName | string | First (given) name of contact | "John" |
+| LastName | string | Last (family) name of contact | "Doe" |
+| CompanyNames | array of strings | Company name extracted from business card | ["Contoso"] |
+| Departments | array of strings | Department or organization of contact | ["R&D"] |
+| JobTitles | array of strings | Listed Job title of contact | ["Software Engineer"] |
+| Emails | array of strings | Contact email extracted from business card | ["johndoe@contoso.com"] |
+| Websites | array of strings | Website extracted from business card | ["https://www.contoso.com"] |
+| Addresses | array of strings | Address extracted from business card | ["123 Main Street, Redmond, WA 98052"] |
+| MobilePhones | array of phone numbers | Mobile phone number extracted from business card | ["+19876543210"] |
+| Faxes | array of phone numbers | Fax phone number extracted from business card | ["+19876543211"] |
+| WorkPhones | array of phone numbers | Work phone number extracted from business card | ["+19876543231"] |
+| OtherPhones | array of phone numbers | Other phone number extracted from business card | ["+19876543233"] |
+
+## Supported locales
+
+**Prebuilt business cards v2.1** supports the following locales:
+
+* **en-us**
+* **en-au**
+* **en-ca**
+* **en-gb**
+* **en-in**
+
+### Migration guide and REST API v3.0
* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
## Next steps
-* Complete a Form Recognizer quickstart:
- > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
+* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-* Explore our REST API:
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
With composed models, you can assign multiple custom models to a composed model
### Composed model compatibility
-|Custom model type|Models trained with v2.1 and v2.0| Custom template models v3.0 (preview)|Custom neural models v3.0 (preview)|Custom neural models 3.0 (GA)|
+|Custom model type|Models trained with v2.1 and v2.0| Custom template models v3.0 |Custom neural models v3.0 |Custom neural models 3.0 (GA)|
|--|--|--|--|--| |**Models trained with version 2.1 and v2.0** |Supported|Supported|Not Supported|Not Supported|
-|**Custom template models v3.0 (preview)** |Supported|Supported|Not Supported|NotSupported|
+|**Custom template models v3.0** |Supported|Supported|Not Supported|Not Supported|
|**Custom template models v3.0 (GA)** |Not Supported|Not Supported|Supported|Not Supported|
-|**Custom neural models v3.0 (preview)**|Not Supported|Not Supported|Supported|Not Supported|
+|**Custom neural models v3.0**|Not Supported|Not Supported|Supported|Not Supported|
|**Custom Neural models v3.0 (GA)**|Not Supported|Not Supported|Not Supported|Supported| * To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition will ensure that the v2.1 model can be composed with other models.
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
recommendations: false
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-Form Recognizer uses advanced machine learning technology to detect and extract information from forms and documents and returns the extracted data in a structured JSON output. With Form Recognizer, you can use pre-built or pre-trained models or you can train standalone custom models. Custom models extract and analyze distinct data and use cases from forms and documents specific to your business. Standalone custom models can be combined to create [composed models](concept-composed-models.md).
+Form Recognizer uses advanced machine learning technology to detect and extract information from forms and documents and returns the extracted data in a structured JSON output. With Form Recognizer, you can use prebuilt or pre-trained models or you can train standalone custom models. Custom models extract and analyze distinct data and use cases from forms and documents specific to your business. Standalone custom models can be combined to create [composed models](concept-composed-models.md).
To create a custom model, you label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.
The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) doesn't support
## Supported languages and locales
+>[!NOTE]
+ > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
+ The Form Recognizer v3.0 version introduces more language support for custom models. For a list of supported handwritten and printed text, see [Language support](language-support.md). ## Form Recognizer v3.0
applied-ai-services Concept Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-form-recognizer-studio.md
The following Form Recognizer service features are available in the Studio.
* **General Documents**: Try out Form Recognizer's General Documents feature to extract key-value pairs and entities. Start with the [Studio General Documents feature](https://formrecognizer.appliedai.azure.com/studio/document). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [General Documents overview](concept-general-document.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model).
-* **Prebuilt models**: Form Recognizer's pre-built models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model).
+* **Prebuilt models**: Form Recognizer's prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model).
* **Custom models**: Form Recognizer's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. Create standalone custom models or combine two or more custom models to create a composed model to extract data from multiple form types. Start with the [Studio Custom models feature](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects). Use the online wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. See the [Custom models overview](concept-custom.md) to learn more and use the [Form Recognizer v3.0 migration guide](v3-migration-guide.md) to start integrating the new models with your applications.
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Title: General key-value extraction - Form Recognizer
-description: Extract key-value paits, tables, selection marks,and text from your documents with Form Recognizer
+description: Extract key-value pairs, tables, selection marks, and text from your documents with Form Recognizer
Previously updated : 10/14/2022 Last updated : 11/14/2022 monikerRange: 'form-recog-3.0.0' recommendations: false <!-- markdownlint-disable MD033 -->
-# General key-value extraction with General Document model
+# Form Recognizer general document model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**. The General document v3.0 model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is only available with the v3.0 API. For more information on using the v3.0 API, see our [migration guide](v3-migration-guide.md).
+### Key-value pair extraction
+ The general document API supports most form types and will analyze your documents and extract keys and associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels. > [!NOTE]
-> The ```2022-06-30``` and later versions of the general document model add support for selection marks.
+> The ```2022-06-30``` and subsequent versions of the general document model add support for selection marks.
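For illustration, here's a minimal sketch of key-value pair extraction with `prebuilt-document`, assuming the v3.0 Python SDK (`azure-ai-formrecognizer`) and placeholder endpoint, key, and file name:

```python
# Minimal sketch: extract key-value pairs with the prebuilt-document (general document) model.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient("<endpoint>", AzureKeyCredential("<key>"))

with open("form.pdf", "rb") as f:  # placeholder document
    poller = client.begin_analyze_document("prebuilt-document", f)
result = poller.result()

for pair in result.key_value_pairs:
    key_text = pair.key.content if pair.key else ""
    value_text = pair.value.content if pair.value else ""  # keys can exist with no value
    print(f"{key_text}: {value_text}")
```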
## General document features
Keys can also exist in isolation when the model detects that a key exists, with
## Supported languages and locales
+>[!NOTE]
+ > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
+ | Model | LanguageΓÇöLocale code | Default | |--|:-|:| |General document| <ul><li>English (United States)ΓÇöen-US</li></ul>| English (United States)ΓÇöen-US|
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Previously updated : 10/27/2022 Last updated : 11/14/2022 recommendations: false <!-- markdownlint-disable MD033 -->
-# Identity document (ID) processing
+# Azure Form Recognizer identity document model
::: moniker range="form-recog-3.0.0" [!INCLUDE [applies to v3.0](includes/applies-to-v3-0.md)]
[!INCLUDE [applies to v2.1](includes/applies-to-v2-1.md)] ::: moniker-end
-## What is identity document (ID) processing
+
+The Form Recognizer identity document (ID) model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from identity documents such as US driver's licenses (all 50 states and District of Columbia), international passport biographical pages, US state IDs, social security cards, permanent resident cards, and more. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
+++
+Azure Form Recognizer can analyze and extract information from government-issued identification documents (IDs) using its prebuilt IDs model. It combines our powerful [Optical Character Recognition (OCR)](../../cognitive-services/computer-vision/overview-ocr.md) capabilities with ID recognition capabilities to extract key information from Worldwide Passports and U.S. Driver's Licenses (all 50 states and D.C.). The IDs API extracts key information from these identity documents, such as first name, last name, date of birth, document number, and more. This API is available in the Form Recognizer v2.1 as a cloud service.
+
-Identity document (ID) processing involves extraction of data from identity documents whether manually or using OCR based techniques. Examples of identity documents include passports, driver licenses, resident cards, and national identity cards like the social security card in the US. It is an important step in any business process that requires some proof of identity. Examples include customer verification in banks and other financial institutions, mortgage applications, medical visits, claim processing, hospitality industry, and more. Individuals provide some proof of their identity via driver licenses, passports, and other similar documents so that the business can efficiently verify them before providing services and benefits.
+## Identity document processing
-## Form Recognizer Identity document (ID) model
+Identity document processing involves extracting data from identity documents either manually or by using OCR-based technology. ID document processing is an important step in any business process that requires some proof of identity. Examples include customer verification in banks and other financial institutions, mortgage applications, medical visits, claim processing, the hospitality industry, and more. Individuals provide some proof of their identity via driver licenses, passports, and other similar documents so that the business can efficiently verify them before providing services and benefits.
-The Form Recognizer Identity document (ID) model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from identity documents: US Drivers Licenses (all 50 states and District of Columbia), international passport biographical pages, US state IDs, social security cards, and permanent resident cards and more. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
***Sample U.S. Driver's License processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)*** :::image type="content" source="media/studio/analyze-drivers-license.png" alt-text="Image of a sample driver's license."::: ++
+## Data extraction
+
+The prebuilt IDs service extracts the key values from worldwide passports and U.S. Driver's Licenses and returns them in an organized structured JSON response.
+
+### **Driver's license example**
+
+![Sample Driver's License](./media/id-example-drivers-license.JPG)
+
+### **Passport example**
+
+![Sample Passport](./media/id-example-passport-result.JPG)
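As a hedged sketch (not the article's own sample), the v3.0 `prebuilt-idDocument` model can also be called directly over REST; the endpoint and key below are placeholders, the document URL is the sample driver's license linked elsewhere in this article, and the polling loop is simplified.

```python
# Minimal sketch: call the v3.0 (2022-08-31) REST API with the prebuilt-idDocument model.
import time
import requests

endpoint = "<endpoint>"  # placeholder
key = "<key>"            # placeholder
analyze_url = f"{endpoint}/formrecognizer/documentModels/prebuilt-idDocument:analyze?api-version=2022-08-31"

resp = requests.post(
    analyze_url,
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"urlSource": "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png"},
)
resp.raise_for_status()
operation_url = resp.headers["Operation-Location"]  # poll this URL for the result

while True:
    result = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)

# Simplified: assumes success; each field carries its extracted content.
for doc in result.get("analyzeResult", {}).get("documents", []):
    for name, field in doc["fields"].items():
        print(name, field.get("content"))
```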
++ ## Development options ::: moniker range="form-recog-3.0.0"
The following tools are supported by Form Recognizer v2.1:
|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>| ::: moniker-end
-Extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio. You'll need the following resources:
+## Input requirements
+++++
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
++
+### Try Form Recognizer
+
+Extract data, including name, birth date, and expiration date, from ID documents. You'll need the following resources:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
Extract data, including name, birth date, machine-readable zone, and expiration
::: moniker range="form-recog-3.0.0"
-#### Form Recognizer Studio
+## Form Recognizer Studio
> [!NOTE] > Form Recognizer studio is available with the v3.0 API (API version 2022-08-31 generally available (GA) release)
Extract data, including name, birth date, machine-readable zone, and expiration
::: moniker range="form-recog-2.1.0"
-#### Form Recognizer sample labeling tool
+## Form Recognizer Sample Labeling tool
1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-1. On the sample tool home page, select **Use prebuilt model to get data**.
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
- :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Analyze results of Form Recognizer Layout":::
+ :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
-1. Select the **Form Type** to analyze from the dropdown window.
+1. Select the **Form Type** to analyze from the dropdown menu.
1. Choose a URL for the file you would like to analyze from the below options:
Extract data, including name, birth date, machine-readable zone, and expiration
1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
- :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot: select form type dropdown window.":::
+ :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot: select form type dropdown menu.":::
1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document. 1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
- :::image type="content" source="media/id-example-drivers-license.jpg" alt-text="Analyze Results of Form Recognizer invoice model":::
+ :::image type="content" source="media/id-example-drivers-license.jpg" alt-text="Screenshot of the identity model analyze results operation.":::
1. Download the JSON output file to view the detailed results.
Extract data, including name, birth date, machine-readable zone, and expiration
* The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected". * The "pageResults" section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted. * The "documentResults" field contains key/value pairs information and line items information for the most relevant parts of the document.
-## Input requirements
+> [!NOTE]
+> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer Service.
+ ## Supported languages and locales
+>[!NOTE]
+ > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
+ | Model | LanguageΓÇöLocale code | Default | |--|:-|:| |ID document| <ul><li>English (United States)ΓÇöen-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li><li>English (United States)ΓÇöen-US (state ID)</li><li>English (United States)ΓÇöen-US (social security card)</li><li>English (United States)ΓÇöen-US (permanent resident card)</li></ul></br>|English (United States)ΓÇöen-US| - ## Field extractions Below are the fields extracted per document type. The Azure Form Recognizer ID model `prebuilt-idDocument` extracts the below fields in the `documents.*.fields`. It also extracts all the text in the documents, words, lines, and styles that are included in the JSON output in the different sections.
-* `pages.*.words`
-* `pages.*.lines`
-* `paragraphs`
-* `styles`
-* `documents`
-* `documents.*.fields`
+>[!NOTE]
+>
+> In addition to specifying the IdDocument model, you can designate the ID type (driver's license, passport, national identity card, residence permit, or US social security card).
+
+### Data extraction (all types)
+
+|**Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** |**Key-Value pairs** | **Fields** |
+|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+|[prebuilt-idDocument](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
### Document type - `idDocument.driverLicense` fields extracted
Below are the fields extracted per document type. The Azure Form Recognizer ID m
|`LastName`|`string`|Surname|TALBOT| |`DateOfIssue`|`date`|Date of issue|08/12/2012| --
-### ID document field extractions
+### Document type - `idDocument` fields extracted
|Name| Type | Description | Standardized output| |:--|:-|:-|:-|
Below are the fields extracted per document type. The Azure Form Recognizer ID m
| Address | String | Extracted address, address is also parsed to its components - address, city, state, country, zip code || | Region | String | Extracted region, state, province, etc. (Driver's License only) | |
-### Migration guide and REST API v3.0
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+
+## Supported document types and locales
+
+ **Prebuilt ID v2.1** extracts key values from worldwide passports, and U.S. Driver's Licenses in the **en-us** locale.
+
+## Fields extracted
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
+|Name| Type | Description | Value |
+|:--|:-|:-|:-|
+| Country | country | Country code compliant with ISO 3166 standard | "USA" |
+| DateOfBirth | date | DOB in YYYY-MM-DD format | "1980-01-01" |
+| DateOfExpiration | date | Expiration date in YYYY-MM-DD format | "2019-05-05" |
+| DocumentNumber | string | Relevant passport number, driver's license number, etc. | "340020013" |
+| FirstName | string | Extracted given name and middle initial if applicable | "JENNIFER" |
+| LastName | string | Extracted surname | "BROOKS" |
+| Nationality | country | Country code compliant with ISO 3166 standard | "USA" |
+| Sex | gender | Possible extracted values include "M", "F" and "X" | "F" |
+| MachineReadableZone | object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
+| DocumentType | string | Document type, for example, Passport, Driver's License | "passport" |
+| Address | string | Extracted address (Driver's License only) | "123 STREET ADDRESS YOUR CITY WA 99999-1234"|
+| Region | string | Extracted region, state, province, etc. (Driver's License only) | "Washington" |
+
+### Migration guide
+
+* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
::: moniker-end
Below are the fields extracted per document type. The Azure Form Recognizer ID m
::: moniker range="form-recog-3.0.0"
-* [Learn how to process your own forms and documents](quickstarts/try-v3-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
Below are the fields extracted per document type. The Azure Form Recognizer ID m
::: moniker range="form-recog-2.1.0"
-* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Form Recognizer sample labeling tool](https://fott-2-1.azurewebsites.net/)
+* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
Title: Invoice data extraction ΓÇô Form Recognizer
-description: Automate invoice data extraction with Form RecognizerΓÇÖs invoice model to extract accounts payable data including invoice line items.
+description: Automate invoice data extraction with Form Recognizer's invoice model to extract accounts payable data including invoice line items.
Previously updated : 10/14/2022 Last updated : 11/14/2022
-monikerRange: '>=form-recog-2.1.0'
recommendations: false <!-- markdownlint-disable MD033 -->
-# Automated invoice processing
+# Azure Form Recognizer invoice model
-## What is automated invoice processing?
-Automated invoice processing is the process of extracting key accounts payable fields from including invoice line items from invoices and integrating it with your accounts payable (AP) workflows for reviews and payments. Historically, the accounts payable process has been very manual and time consuming. Accurate extraction of key data from invoices is typically the first and one of the most critical steps in the invoice automation process.
+The Form Recognizer invoice model combines powerful Optical Character Recognition (OCR) capabilities with invoice understanding models to analyze and extract key fields and line items from sales invoices. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports both English and Spanish invoices.
-## Form Recognizer Invoice model
+## Automated invoice processing
-The machine learning based invoice model combines powerful Optical Character Recognition (OCR) capabilities with invoice understanding models to analyze and extract key fields and line items from sales invoices. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports both English and Spanish invoices.
+Automated invoice processing is the process of extracting key accounts payable fields from billing account documents. Extracted data, including invoice line items, is integrated with your accounts payable (AP) workflows for reviews and payments. Historically, the accounts payable process has been manual and, hence, very time consuming. Accurate extraction of key data from invoices is typically the first and one of the most critical steps in the invoice automation process.
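As a hedged illustration of the invoice model described above, the sketch below uses the `azure-ai-formrecognizer` Python SDK (v3.x) with placeholder endpoint and key values and the sample invoice linked later in this article; the field names follow the tables in this article.

```python
# Minimal sketch: extract key invoice fields and line items with the prebuilt-invoice model.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient("<endpoint>", AzureKeyCredential("<key>"))

poller = client.begin_analyze_document_from_url(
    "prebuilt-invoice",
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg",
)
invoice = poller.result().documents[0]

for name in ("VendorName", "InvoiceId", "InvoiceDate", "InvoiceTotal", "AmountDue"):
    field = invoice.fields.get(name)
    if field:
        print(name, "=", field.content, f"(confidence {field.confidence:.2f})")

items = invoice.fields.get("Items")
for line in (items.value if items else []):
    detail = line.value  # sub-fields: Description, Quantity, UnitPrice, Amount, ...
    description = detail.get("Description")
    amount = detail.get("Amount")
    print(description.value if description else "", amount.content if amount else "")
```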
+ **Sample invoice processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)**: +++
+**Sample invoice processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net)**:
++ ## Development options + The following tools are supported by Form Recognizer v3.0: | Feature | Resources | Model ID | |-|-|--| |**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-invoice**| ++ The following tools are supported by Form Recognizer v2.1: | Feature | Resources | |-|-| |**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>| +
+## Input requirements
+++++
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
+
-### Try invoice data extraction
+## Try invoice data extraction
-See how data, including customer information, vendor details, and line items, is extracted from invoices using the Form Recognizer Studio. You'll need the following resources:
+See how data, including customer information, vendor details, and line items, is extracted from invoices. You'll need the following resources:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data, including customer information, vendor details, and line items, is
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio
+
+## Form Recognizer Studio
1. On the Form Recognizer Studio home page, select **Invoices**
See how data, including customer information, vendor details, and line items, is
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
-## Input requirements
+
+## Form Recognizer Sample Labeling tool
+
+1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
+
+ :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of layout model analyze results process.":::
+
+1. Select the **Form Type** to analyze from the dropdown menu.
+
+1. Choose a URL for the file you would like to analyze from the below options:
+
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
+ * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
+ * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
+ * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
+
+1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
+
+1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
+
+ :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot showing the select-form-type dropdown menu.":::
+
+1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
+
+1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
+
+ :::image type="content" source="media/invoice-example-new.jpg" alt-text="Screenshot of layout model analyze results operation.":::
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer Service. ++ ## Supported languages and locales
+>[!NOTE]
+ > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
+ | Model | LanguageΓÇöLocale code | Default | |--|:-|:| |Invoice| <ul><li>English (United States)ΓÇöen-US</li></ul>| English (United States)ΓÇöen-US|
Following are the line items extracted from an invoice in the JSON output respon
The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
-### Key-value pairs
+### Key-value pairs
The prebuilt invoice **2022-06-30** and later releases return key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field or telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures. Keys can also exist in isolation when the model detects that a key exists, with no associated value, or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are always spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key will be either customer or user (based on context).
-## Form Recognizer v3.0
++
+## Supported locales
+
+**Prebuilt invoice v2.1** supports invoices in the **en-us** locale.
- The Form Recognizer v3.0 introduces several new features, capabilities, and AI quality improvements to underlying technologies.
+## Fields extracted
+
+The Invoice service will extract the text, tables, and 26 invoice fields. Following are the fields extracted from an invoice in the JSON output response (the output below uses this [sample invoice](media/sample-invoice.jpg)).
+
+|Name| Type | Description | Text | Value (standardized output) |
+|:--|:-|:-|:-| :-|
+| CustomerName | string | Customer being invoiced | Microsoft Corp | |
+| CustomerId | string | Reference ID for the customer | CID-12345 | |
+| PurchaseOrder | string | A purchase order reference number | PO-3333 | |
+| InvoiceId | string | ID for this specific invoice (often "Invoice Number") | INV-100 | |
+| InvoiceDate | date | Date the invoice was issued | 11/15/2019 | 2019-11-15 |
+| DueDate | date | Date payment for this invoice is due | 12/15/2019 | 2019-12-15 |
+| VendorName | string | Vendor who has created this invoice | CONTOSO LTD. | |
+| VendorAddress | string | Mailing address for the Vendor | 123 456th St New York, NY, 10001 | |
+| VendorAddressRecipient | string | Name associated with the VendorAddress | Contoso Headquarters | |
+| CustomerAddress | string | Mailing address for the Customer | 123 Other Street, Redmond WA, 98052 | |
+| CustomerAddressRecipient | string | Name associated with the CustomerAddress | Microsoft Corp | |
+| BillingAddress | string | Explicit billing address for the customer | 123 Bill Street, Redmond WA, 98052 | |
+| BillingAddressRecipient | string | Name associated with the BillingAddress | Microsoft Services | |
+| ShippingAddress | string | Explicit shipping address for the customer | 123 Ship Street, Redmond WA, 98052 | |
+| ShippingAddressRecipient | string | Name associated with the ShippingAddress | Microsoft Delivery | |
+| SubTotal | number | Subtotal field identified on this invoice | $100.00 | 100 |
+| TotalTax | number | Total tax field identified on this invoice | $10.00 | 10 |
+| InvoiceTotal | number | Total new charges associated with this invoice | $110.00 | 110 |
+| AmountDue | number | Total Amount Due to the vendor | $610.00 | 610 |
+| ServiceAddress | string | Explicit service address or property address for the customer | 123 Service Street, Redmond WA, 98052 | |
+| ServiceAddressRecipient | string | Name associated with the ServiceAddress | Microsoft Services | |
+| RemittanceAddress | string | Explicit remittance or payment address for the customer | 123 Remit St New York, NY, 10001 | |
+| RemittanceAddressRecipient | string | Name associated with the RemittanceAddress | Contoso Billing | |
+| ServiceStartDate | date | First date for the service period (for example, a utility bill service period) | 10/14/2019 | 2019-10-14 |
+| ServiceEndDate | date | End date for the service period (for example, a utility bill service period) | 11/14/2019 | 2019-11-14 |
+| PreviousUnpaidBalance | number | Explicit previously unpaid balance | $500.00 | 500 |
+
+Following are the line items extracted from an invoice in the JSON output response (the output below uses this [sample invoice](./media/sample-invoice.jpg))
+
+|Name| Type | Description | Text (line item #1) | Value (standardized output) |
+|:--|:-|:-|:-| :-|
+| Items | string | Full string text line of the line item | 3/4/2021 A123 Consulting Services 2 hours $30.00 10% $60.00 | |
| Amount | number | The amount of the line item | $60.00 | 60 |
+| Description | string | The text description for the invoice line item | Consulting service | Consulting service |
+| Quantity | number | The quantity for this invoice line item | 2 | 2 |
+| UnitPrice | number | The net or gross price (depending on the gross invoice setting of the invoice) of one unit of this item | $30.00 | 30 |
+| ProductCode | string| Product code, product number, or SKU associated with the specific line item | A123 | |
| Unit | string| The unit of the line item, for example, kg, lb etc. | hours | |
+| Date | date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 |
+| Tax | number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |
+
+### JSON output
+
+The JSON output has three parts (see the sketch after this list):
+
+* `"readResults"` node contains all of the recognized text and selection marks. Text is organized by page, then by line, then by individual words.
+* `"pageResults"` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in "readResults".
+* `"documentResults"` node contains the invoice-specific values and line items that the model discovered. It's where you'll find all the fields from the invoice such as invoice ID, ship to, bill to, customer, total, line items and lots more.
+
+## Migration guide
* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
## Next steps
-* Complete a Form Recognizer quickstart:
+
+* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
++
- > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
+* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-* Explore our REST API:
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
- > [!div class="nextstepaction"]
- > [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5ed8c9843c2794cbb1a96291)
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Previously updated : 10/14/2022 Last updated : 11/10/2022
-monikerRange: '>=form-recog-2.1.0'
recommendations: false
-# Document layout analysis
+# Azure Form Recognizer layout model
-## What is document layout analysis?
-Document structure and layout analysis is the process of analyzing a document to extract regions of interest and their inter-relationships. The goal is to extract text and structural elements from the page for building better semantic understanding models. For all extracted text, there are two types of roles that text plays in a document layout. Text, tables, and selection marks are examples of geometric roles. Titles, headings, and footers are examples of logical roles. For example. a reading system requires differentiating text regions from non-textual ones along with their reading order.
+Form Recognizer layout model is an advanced machine-learning based document analysis API available in the Form Recognizer cloud. It enables you to take documents in a variety of formats and return structured data representations of the documents. It combines an enhanced version of our powerful [Optical Character Recognition (OCR)](../../cognitive-services/computer-vision/overview-ocr.md) capabilities with deep learning models to extract text, tables, selection marks, and document structure.
+
+## Document layout analysis
+
+Document structure layout analysis is the process of analyzing a document to extract regions of interest and their inter-relationships. The goal is to extract text and structural elements from the page to build better semantic understanding models. There are two types of roles that text plays in a document layout:
+
+* **Geometric roles**: Text, tables, and selection marks are examples of geometric roles.
+* **Logical roles**: Titles, headings, and footers are examples of logical roles.
The following illustration shows the typical components in an image of a sample page. :::image type="content" source="media/document-layout-example.png" alt-text="Illustration of document layout example.":::
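As a hedged sketch (assuming the v3.0 `azure-ai-formrecognizer` Python SDK and placeholder endpoint, key, and file name), here's how those geometric elements and logical roles surface in a layout analysis result:

```python
# Minimal sketch: run prebuilt-layout and separate geometric elements (tables, selection marks)
# from logical roles (title, sectionHeading, pageHeader, pageFooter, pageNumber, footnote).
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient("<endpoint>", AzureKeyCredential("<key>"))

with open("sample-page.pdf", "rb") as f:  # placeholder document
    result = client.begin_analyze_document("prebuilt-layout", f).result()

for table in result.tables:
    print(f"Table: {table.row_count} rows x {table.column_count} columns")

for page in result.pages:
    for mark in page.selection_marks:
        print("Selection mark:", mark.state)  # "selected" or "unselected"

for paragraph in result.paragraphs:
    if paragraph.role:  # logical role, for example "title" or "sectionHeading"
        print(paragraph.role, "->", paragraph.content)
```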
-## Form Recognizer Layout model
-The Form Recognizer Layout is an advanced machine-learning based document layout analysis model available in the Form Recognizer cloud API. In the version v2.1, the document layout model extracted text lines, words, tables, and selection marks.
+***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
-**Starting with v3.0 GA**, it extracts paragraphs and additional structure information like titles, section headings, page header, page footer, page number, and footnote from the document page. These are examples of logical roles described in the previous section. This capability is supported for PDF documents and images (JPG, PNG, BMP, TIFF).
-***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
+## Development options
+The following tools are supported by Form Recognizer v3.0:
-## Supported document types
+| Feature | Resources | Model ID |
+|-|||
+|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-layout**|
-| **Model** | **Images** | **PDF** | **TIFF** |
-| | | | |
-| Layout | ✓ | ✓ | ✓ |
-### Data extraction
-| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Logical roles** |
-| | | | | | |
-| Layout | ✓ | ✓ | ✓ | ✓ | ✓ |
+**Sample document processed with [Form Recognizer Sample Labeling tool layout model](https://fott-2-1.azurewebsites.net/layout-analyze)**:
-**Supported logical roles for paragraphs**:
-The paragraph roles are best used with unstructured documents. Paragraph roles help analyze the structure of the extracted content for better semantic search and analysis.
-* title
-* sectionHeading
-* footnote
-* pageHeader
-* pageFooter
-* pageNumber
-## Development options
+## Input requirements
-The following tools are supported by Form Recognizer v3.0:
-| Feature | Resources | Model ID |
-|-|||
-|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-layout**|
-The following tools are supported by Form Recognizer v2.1:
-| Feature | Resources |
-|-|-|
-|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* The file size must be less than 50 MB, and image dimensions must be at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
+
-## Try document layout analysis
+### Try layout extraction
-Try extracting data from forms and documents using the Form Recognizer Studio. You'll need the following resources:
+See how data, including text, tables, table headers, selection marks, and structure information is extracted from documents using Form Recognizer. You'll need the following resources:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
Try extracting data from forms and documents using the Form Recognizer Studio. Y
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-### Form Recognizer Studio
+
+## Form Recognizer Studio
> [!NOTE] > Form Recognizer Studio is available with the v3.0 API.
Try extracting data from forms and documents using the Form Recognizer Studio. Y
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
-## Input requirements
+
+## Form Recognizer Sample Labeling tool
+
+1. Navigate to the [Form Recognizer sample tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select **Use Layout to get text, tables and selection marks**.
+
+ :::image type="content" source="media/label-tool/layout-1.jpg" alt-text="Screenshot of connection settings for the Form Recognizer layout process.":::
+
+1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
+
+1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
+
+1. In the **Source** field, select **URL** from the dropdown menu. You can use our sample document:
+
+ * [**Sample document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg)
+
+ * Select the **Fetch** button.
+
+1. Select **Run Layout**. The Form Recognizer Sample Labeling tool will call the Analyze Layout API and analyze the document.
+
+ :::image type="content" source="media/fott-layout.png" alt-text="Screenshot: Layout dropdown window.":::
+
+1. View the results - see the highlighted text extracted, selection marks detected, and tables detected.
+
+ :::image type="content" source="media/label-tool/layout-3.jpg" alt-text="Screenshot of connection settings for the Form Recognizer Sample Labeling tool.":::
++
+## Supported document types
+
+| **Model** | **Images** | **PDF** | **TIFF** |
+| | | | |
+| Layout | ✓ | ✓ | ✓ |
## Supported languages and locales *See* [Language Support](language-support.md) for a complete list of supported handwritten and printed languages. +
+### Data extraction
+
+**Starting with v3.0 GA**, the Layout model extracts paragraphs and more structure information like titles, section headings, page headers, page footers, page numbers, and footnotes from the document page. These structural elements are examples of the logical roles described in the previous section. This capability is supported for PDF documents and images (JPG, PNG, BMP, TIFF).
+
+| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Logical roles** |
+| | | | | | |
+| Layout | ✓ | ✓ | ✓ | ✓ | ✓ |
+
+**Supported logical roles for paragraphs**:
+The paragraph roles are best used with unstructured documents. Paragraph roles help analyze the structure of the extracted content for better semantic search and analysis.
+
+* title
+* sectionHeading
+* footnote
+* pageHeader
+* pageFooter
+* pageNumber
+++
+### Data extraction
+
+| **Model** | **Text** | **Tables** | **Selection marks** |
+| | | | |
+| Layout | ✓ | ✓ | ✓ |
+
+The following tools are supported by Form Recognizer v2.1:
+
+| Feature | Resources |
+|-|-|
+|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+++ ## Model extraction The layout model extracts text, selection marks, tables, paragraphs, and paragraph types (`roles`) from your documents.
-### Paragraph extraction <sup>🆕</sup>
+### Paragraph extraction
The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
The Layout model extracts all identified blocks of text in the `paragraphs` coll
] ```
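
As a rough illustration, the `paragraphs` collection can be inspected with the Python SDK like this. The sketch assumes `result` is the `AnalyzeResult` returned by the `prebuilt-layout` call shown earlier; paragraphs without a predicted role have `role` set to `None`.

```python
# Iterate the top-level paragraphs collection returned by the layout model.
for paragraph in result.paragraphs:
    role = paragraph.role or "body"  # role is None for plain body text
    print(f"[{role}] {paragraph.content}")
```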
-### Paragraph roles<sup> 🆕</sup>
+### Paragraph roles
+The new machine-learning-based page object detection extracts logical roles like titles, section headings, page headers, page footers, and more. The Form Recognizer Layout model assigns certain text blocks in the `paragraphs` collection a specialized role or type predicted by the model. Roles are best used with unstructured documents to help understand the layout of the extracted content for richer semantic analysis. The following paragraph roles are supported:
The new machine-learning based page object detection extracts logical roles like
### Pages extraction
-The pages collection is the very first object you see in the service response.
+The pages collection is the first object you see in the service response.
```json "pages": [
The document layout model in Form Recognizer extracts print and handwritten styl
} ] ```+ ### Selection marks extraction The Layout model also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text, if extracted, is also included as the starting index (`offset`) and `length` that reference the top-level `content` property that contains the full text from the document.
Extracting tables is a key requirement for processing documents containing large
} ```+ ### Handwritten style for text lines (Latin languages only) The response includes a classification of whether each text line is written in handwriting style, along with a confidence score. This feature is only supported for Latin languages. The following JSON snippet shows an example.
The response includes classifying whether each text line is of handwriting style
```json "styles": [ {
- "confidence": 0.95,
- "spans": [
- {
- "offset": 509,
- "length": 24
- }
- "isHandwritten": true
- ]
+      "confidence": 0.95,
+      "spans": [
+        {
+          "offset": 509,
+          "length": 24
+        }
+      ],
+      "isHandwritten": true
} ```
The response includes classifying whether each text line is of handwriting style
For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction. ++
+### Natural reading order output (Latin only)
+
+You can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
++
+### Select page numbers or ranges for text extraction
+
+For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
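
For illustration, a minimal Python sketch that passes both the `readingOrder` and `pages` query parameters to the v2.1 Analyze Layout REST endpoint. It uses the `requests` package; the endpoint, key, and document URL are placeholders.

```python
import requests

# Placeholder values - replace with your own Form Recognizer resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

analyze_url = f"{endpoint}/formrecognizer/v2.1/layout/analyze"
params = {"readingOrder": "natural", "pages": "3-6"}  # natural reading order, pages 3 through 6
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {"source": "https://<path-to-your-document>.pdf"}

response = requests.post(analyze_url, params=params, headers=headers, json=body)
response.raise_for_status()

# The URL to poll for results is returned in the Operation-Location response header.
operation_url = response.headers["Operation-Location"]
print(operation_url)
```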
++
+## The Get Analyze Layout Result operation
+
+The second step is to call the [Get Analyze Layout Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeLayoutResult) operation. This operation takes as input the Result ID that was created by the Analyze Layout operation. It returns a JSON response that contains a **status** field with the following possible values.
+
+|Field| Type | Possible values |
+|:--|:-:|:-|
+|status | string | `notStarted`: The analysis operation hasn't started.<br /><br />`running`: The analysis operation is in progress.<br /><br />`failed`: The analysis operation has failed.<br /><br />`succeeded`: The analysis operation has succeeded.|
+
+Call this operation iteratively until it returns the `succeeded` value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
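
A minimal polling sketch in Python, assuming `operation_url` is the `Operation-Location` URL returned by the Analyze Layout call and `key` is your resource key. It checks the `status` field on each pass and waits a few seconds between calls, as recommended above.

```python
import time
import requests

headers = {"Ocp-Apim-Subscription-Key": key}

while True:
    analysis = requests.get(operation_url, headers=headers).json()
    status = analysis["status"]
    if status == "succeeded":
        break  # extracted layout, text, tables, and selection marks are now available
    if status == "failed":
        raise RuntimeError("Layout analysis failed")
    time.sleep(5)  # stay within the recommended 3 to 5 second polling interval
```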
+
+When the **status** field has the `succeeded` value, the JSON response will include the extracted layout, text, tables, and selection marks. The extracted data includes extracted text lines and words, bounding boxes, text appearance with handwritten indication, tables, and selection marks with selected/unselected indicated.
+
+### Handwritten classification for text lines (Latin only)
+
+The response includes a classification of whether each text line is written in handwriting style, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
++
+### Sample JSON output
+
+The response to the *Get Analyze Layout Result* operation is a structured representation of the document with all the information extracted.
+See here for a [sample document file](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-layout.pdf) and its structured output [sample layout output](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-layout-output.json).
+
+The JSON output has two parts:
+
+* `readResults` node contains all of the recognized text and selection marks. Text is organized by page, then by line, then by individual words.
+* `pageResults` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in "readResults".
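
A short Python sketch of walking both nodes, assuming `analysis` is the parsed JSON returned by the *Get Analyze Layout Result* operation once the status is `succeeded`:

```python
read_results = analysis["analyzeResult"]["readResults"]
page_results = analysis["analyzeResult"]["pageResults"]

# Text is organized by page, then by line, then by word.
for page in read_results:
    for line in page["lines"]:
        print(line["text"])

# Tables and cells live in pageResults and reference the lines and words in readResults.
for page in page_results:
    for table in page.get("tables", []):
        print(f"Table: {table['rows']} rows x {table['columns']} columns, {len(table['cells'])} cells")
```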
+
+## Example Output
+
+### Text
+
+Layout API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided on lines, words, bounding boxes, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
+
+### Tables with headers
+
+Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell is output with its bounding box, along with information about whether it's recognized as part of a header. Model-predicted header cells can span multiple rows and aren't necessarily the first rows in a table; header detection also works with rotated tables. Each table cell also includes the full text with references to the individual words in the `readResults` section.
+
+![Tables example](./media/layout-table-header-demo.gif)
+
+### Selection marks
+
+Layout API also extracts selection marks from documents. Extracted selection marks include the bounding box, confidence, and state (selected/unselected). Selection mark information is extracted in the `readResults` section of the JSON output.
+
+### Migration guide
+
+* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+ ## Next steps
-* Complete a Form Recognizer quickstart:
+
+* [Learn how to process your own forms and documents](quickstarts/try-v3-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
++
- > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
+* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-* Explore our REST API:
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Learn how to use Form Recognizer v3.0 in your applications by following our [**F
The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from documents.
-***Sample document processed using the [sample labeling tool](https://fott-2-1.azurewebsites.net/layout-analyze)***:
+***Sample document processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/layout-analyze)***:
> [!div class="nextstepaction"] >
The Layout API analyzes and extracts text, tables and headers, selection marks,
The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due.
-***Sample invoice processed using the [sample labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
+***Sample invoice processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
> [!div class="nextstepaction"] > [Learn more: invoice model](concept-invoice.md)
The invoice model analyzes and extracts key information from sales invoices. The
* The receipt model analyzes and extracts key information from printed and handwritten sales receipts.
-***Sample receipt processed using [sample labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
+***Sample receipt processed using [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
:::image type="content" source="./media/receipts-example.jpg" alt-text="Screenshot of a sample receipt." lightbox="./media/overview-receipt.jpg":::
The invoice model analyzes and extracts key information from sales invoices. The
* Biographical pages from international passports (excluding visa and other travel documents). The API analyzes identity documents and extracts
-***Sample U.S. Driver's License processed using the [sample labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
+***Sample U.S. Driver's License processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
:::image type="content" source="./media/id-example-drivers-license.jpg" alt-text="Screenshot of a sample identification card.":::
The invoice model analyzes and extracts key information from sales invoices. The
The business card model analyzes and extracts key information from business card images.
-***Sample business card processed using the [sample labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
+***Sample business card processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
:::image type="content" source="./media/business-card-example.jpg" alt-text="Screenshot of a sample business card.":::
The business card model analyzes and extracts key information from business card
* Custom models analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
-***Sample custom model processing using the [sample labeling tool](https://fott-2-1.azurewebsites.net/)***:
+***Sample custom model processing using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/)***:
:::image type="content" source="media/overview-custom.jpg" alt-text="Screenshot: Form Recognizer tool analyze-a-custom-form window.":::
The business card model analyzes and extracts key information from business card
A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID; up to 100 trained custom models can be assigned to a single composed model.
-***Composed model dialog window using the [sample labeling tool](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
+***Composed model dialog window using the [Sample Labeling tool](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
:::image type="content" source="media/custom-model-compose.png" alt-text="Screenshot of Form Recognizer Studio compose custom model dialog window.":::
A composed model is created by taking a collection of custom models and assignin
::: moniker range="form-recog-3.0.0"
-* [Learn how to process your own forms and documents](quickstarts/try-v3-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
A composed model is created by taking a collection of custom models and assignin
::: moniker range="form-recog-2.1.0"
-* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Form Recognizer sample labeling tool](https://fott-2-1.azurewebsites.net/)
+* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Title: OCR for documents - Form Recognizer
-description: Extract print and handwritten text from scanned and digital documents with Form Recognizer’s Read OCR model.
+description: Extract print and handwritten text from scanned and digital documents with Form Recognizer's Read OCR model.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# OCR for documents
+# Form Recognizer read (OCR) model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**. > [!NOTE] > > For extracting text from in-the-wild images like labels, street signs, and posters, use the [Computer Vision v4.0 preview Read](../../cognitive-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
->
+>
## What is OCR for documents?
-Optical Character Recognition (OCR) for documents is optimized for large text-heavy documents in multiple file formats and global languages. It should include features like higher-resolution scanning of document images for better handling of smaller and dense text, paragraphs detection, handling fillable forms, and advanced forms and document scenarios like single character boxes and accurate extraction of key fields commonly found in invoices, receipts, and other prebuilt scenarios.
+Optical Character Recognition (OCR) for documents is optimized for large text-heavy documents in multiple file formats and global languages. It includes features like higher-resolution scanning of document images for better handling of smaller and dense text; paragraph detection; and fillable form management. OCR capabilities also include advanced scenarios like single character boxes and accurate extraction of key fields commonly found in invoices, receipts, and other prebuilt scenarios.
## OCR in Form Recognizer - Read model
-Form Recognizer v3.0’s Read Optical Character Recognition (OCR) model runs at a higher resolution than Computer Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes preview support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages, and is the underlying OCR engine for other Form Recognizer models like Layout, General Document, Invoice, Receipt, Identity (ID) document, and other prebuilt models, as well as custom models.
+Form Recognizer v3.0's Read Optical Character Recognition (OCR) model runs at a higher resolution than Computer Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes preview support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages. The read model is the underlying OCR engine for other Form Recognizer prebuilt models like Layout, General Document, Invoice, Receipt, Identity (ID) document, in addition to custom models.
## OCR supported document types
Form Recognizer v3.0 supports several languages for the read OCR model.
## Data detection and extraction
-### Microsoft Office and HTML text extraction (preview) <sup>🆕</sup>
-Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview text extraction from Microsoft Word, Excel, PowerPoint, and HTML files. The following illustration shows extraction of the digital text as well as text from the images embedded in the Word document by running OCR on the images.
+### Microsoft Office and HTML text extraction
+
+Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview text extraction from Microsoft Word, Excel, PowerPoint, and HTML files. The following illustration shows extraction of the digital text and text from the images embedded in the Word document by running OCR on the images.
:::image type="content" source="media/office-to-ocr.png" alt-text="Screenshot of a Microsoft Word document extracted by Form Recognizer Read OCR.":::
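
For example, a hedged Python sketch of submitting a Word document to the Read model with the preview API version. The endpoint, key, and file name are placeholders, and the request path is an assumption based on the v3.0 REST API pattern.

```python
import requests

# Placeholder values - replace with your own Form Recognizer resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

url = f"{endpoint}/formrecognizer/documentModels/prebuilt-read:analyze"
params = {"api-version": "2022-06-30-preview"}  # preview version that adds Office and HTML support
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"}

with open("sample.docx", "rb") as document:
    response = requests.post(url, params=params, headers=headers, data=document)
response.raise_for_status()

# Poll this URL until the analysis completes.
print(response.headers["Operation-Location"])
```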
The page units in the model output are computed as shown:
**File format** | **Computed page unit** | **Total pages** | | | | |
-|Word (preview) | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
-|Excel (preview) | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images
-|PowerPoint (preview)| Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images
-|HTML (preview)| Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
+|Word | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
+|Excel | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images
+|PowerPoint | Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images
+|HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
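
As a hypothetical illustration of the table above: a Word document containing 7,500 characters of text and two embedded images would count as three page units for the text (characters are counted in blocks of up to 3,000) plus two page units for the images, five page units in total.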
-### Paragraphs extraction <sup>🆕</sup>
+### Paragraphs extraction
The Read OCR model in Form Recognizer extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
The Read OCR model in Form Recognizer extracts all identified blocks of text in
} ] ```
-### Language detection <sup>🆕</sup>
+
+### Language detection
The Read OCR model in Form Recognizer adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
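
A small sketch of reading that collection from the REST response in Python, assuming `analyze_result` holds the parsed `analyzeResult` node:

```python
# Each entry carries a locale, the text spans it applies to, and a confidence score.
for language in analyze_result.get("languages", []):
    print(f"{language['locale']}: confidence {language['confidence']}")
```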
For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, Rea
} ] ```+ ### Select page(s) for text extraction For large multi-page PDF documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
The response includes classifying whether each text line is of handwriting style
```json "styles": [ {
- "confidence": 0.95,
- "spans": [
- {
- "offset": 509,
- "length": 24
- }
- "isHandwritten": true
- ]
+      "confidence": 0.95,
+      "spans": [
+        {
+          "offset": 509,
+          "length": 24
+        }
+      ],
+      "isHandwritten": true
} ```
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Previously updated : 10/14/2022 Last updated : 11/14/2022
-monikerRange: '>=form-recog-2.1.0'
recommendations: false <!-- markdownlint-disable MD033 -->
-# Receipt data extraction
+# Azure Form Recognizer receipt model
-## What is receipt digitization
-Receipt digitization is the process of converting scanned receipts into digital form for downstream processing. OCR powered receipt data extraction helps to automate the conversion and save time and effort. The output from the receipt data extraction is used for accounts payable and receivables automation, sales data analytics, and other business scenarios.
+The Form Recognizer receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns structured JSON data.
-## Form Recognizer receipt model
+## Receipt data extraction
-The Form Recognizer receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns structured JSON data.
+Receipt digitization is the process of converting scanned receipts into digital form for downstream processing. Azure Form Recognizer OCR-powered receipt data extraction helps to automate the conversion and save time and effort. The output from the receipt data extraction is used for accounts payable and receivables automation, sales data analytics, and other business scenarios.
+ ***Sample receipt processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***: +++
+**Sample invoice processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/connection)**:
++ ## Development options The following tools are supported by Form Recognizer v3.0: | Feature | Resources | Model ID | |-|-|--| |**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-receipt**| ++ The following tools are supported by Form Recognizer v2.1: | Feature | Resources | |-|-| |**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>| +
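
As a rough sketch of the v3.0 receipt model in code (Python SDK `azure-ai-formrecognizer` 3.2.x, placeholder endpoint and key, and the sample receipt image linked later in this article; the field names used here are assumptions based on the fields the model is described as extracting):

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholder values - replace with your own Form Recognizer resource endpoint and key.
client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/", AzureKeyCredential("<your-key>")
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-receipt",
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg",
)
receipt = poller.result().documents[0]

# Print a few commonly used fields (assumed field names) with their confidence scores.
for name in ("MerchantName", "TransactionDate", "Total"):
    field = receipt.fields.get(name)
    if field:
        print(f"{name}: {field.value} (confidence {field.confidence})")
```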
+## Input requirements
+++++
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* The file size must be less than 50 MB, and image dimensions must be at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
++ ### Try receipt data extraction
-See how data, including time and date of transactions, merchant information, and amount totals, is extracted from receipts using the Form Recognizer Studio. You'll need the following resources:
+See how data, including time and date of transactions, merchant information, and amount totals, is extracted from receipts. You'll need the following resources:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data, including time and date of transactions, merchant information, and
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio
+
+#### Form Recognizer Studio
> [!NOTE] > Form Recognizer Studio is available with the v3.0 API.
See how data, including time and date of transactions, merchant information, and
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
-## Input requirements
+
+## Form Recognizer Sample Labeling tool
+
+1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
+
+ :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results process.":::
+
+1. Select the **Form Type** to analyze from the dropdown menu.
+
+1. Choose a URL for the file you would like to analyze from the options below:
+
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
+ * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
+ * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
+ * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
+
+1. In the **Form Recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
+
+1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
+
+ :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot of the select-form-type dropdown menu.":::
+
+1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
+
+1. View the results - see the key-value pairs extracted, line items, highlighted text extracted, and tables detected.
+
+ :::image type="content" source="media/invoice-example-new.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
+
+> [!NOTE]
+> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer Service.
+++
+## Supported languages and locales v3.0
+
+>[!NOTE]
+> It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
+
+The receipt model supports all English receipts and the following locales:
+
+|Language| Locale code |
+|:--|:-:|
+|English (Australia)|`en-au`|
+|English (Canada)|`en-ca`|
+|English (United Kingdom)|`en-gb`|
+|English (India)|`en-in`|
+|English (United States)| `en-us`|
+|French | `fr` |
+|Spanish | `es` |
++ ## Supported languages and locales v2.1
See how data, including time and date of transactions, merchant information, and
|--|:-|:| |Receipt| <ul><li>English (United States) - en-US</li><li> English (Australia) - en-AU</li><li>English (Canada) - en-CA</li><li>English (United Kingdom) - en-GB</li><li>English (India) - en-IN</li></ul> | Autodetected | + ## Field extraction |Name| Type | Description | Standardized output |
See how data, including time and date of transactions, merchant information, and
| Price | Number | Individual price of each item unit| Two-decimal float | | TotalPrice | Number | Total price of line item | Two-decimal float |
-## Form Recognizer v3.0
Form Recognizer v3.0 introduces several new features and capabilities. The **Receipt** model supports single-page hotel receipt processing.
See how data, including time and date of transactions, merchant information, and
|--|:-|:| |Receipt (hotel) | <ul><li>English (United States) - en-US</li></ul>| English (United States) - en-US| ++ ### Migration guide and REST API v3.0 * Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
## Next steps
-* Complete a Form Recognizer quickstart:
+
+* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
++
- > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
+* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-* Explore our REST API:
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Previously updated : 10/14/2022 Last updated : 11/10/2022 monikerRange: 'form-recog-3.0.0' recommendations: false
-# Automated W-2 form processing
+# Form Recognizer W-2 form model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-## Why is automated W-2 form processing important?
-
-Form W-2, also known as the Wage and Tax Statement, is sent by an employer to each employee and the Internal Revenue Service (IRS) at the end of the year. A W-2 form reports employees' annual wages and the amount of taxes withheld from their paychecks. The IRS also uses W-2 forms to track individuals' tax obligations. The Social Security Administration (SSA) uses the information on this and other forms to compute the Social Security benefits for all workers.
+The Form Recognizer W-2 model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported on [US Internal Revenue Service (IRS) tax forms](https://www.irs.gov/forms-pubs/about-form-w-2). A W-2 tax form is a multipart form divided into state and federal sections consisting of more than 14 boxes detailing an employee's income from the previous year. The W-2 tax form is a key document used in employees' federal and state tax filings, as well as other processes like mortgage loans and Social Security Administration (SSA) benefits. The Form Recognizer W-2 model supports both single and multiple standard and customized forms from 2018 to the present.
-## Form Recognizer W-2 form model
+## Automated W-2 form processing
-The Form Recognizer W-2 model, combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported on [US Internal Revenue Service (IRS) tax forms](https://www.irs.gov/forms-pubs/about-form-w-2). A W-2 tax form is a multipart form divided into state and federal sections consisting of more than 14 boxes detailing an employee's income from the previous year. The W-2 tax form is a key document used in employees' federal and state tax filings, as well as other processes like mortgage loans and Social Security Administration (SSA) benefits. The Form Recognizer W-2 model supports both single and multiple standard and customized forms from 2018 to the present.
+Form W-2, also known as the Wage and Tax Statement, is sent by an employer to each employee and the Internal Revenue Service (IRS) at the end of the year. A W-2 form reports employees' annual wages and the amount of taxes withheld from their paychecks. The IRS also uses W-2 forms to track individuals' tax obligations. The Social Security Administration (SSA) uses the information on this and other forms to compute the Social Security benefits for all workers.
***Sample W-2 tax form processed using Form Recognizer Studio*** ## Development options
The prebuilt W-2 model is supported by Form Recognizer v3.0 with the following t
|-|-|--| |**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
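
A minimal Python sketch of calling the model (SDK `azure-ai-formrecognizer` 3.2.x; the endpoint, key, and local file name are placeholders). Rather than assuming specific W-2 field names, it simply lists every field the model returns.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholder values - replace with your own Form Recognizer resource endpoint and key.
client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/", AzureKeyCredential("<your-key>")
)

with open("w2-sample.pdf", "rb") as document:
    poller = client.begin_analyze_document("prebuilt-tax.us.w2", document=document)

for w2 in poller.result().documents:
    for name, field in w2.fields.items():
        # Some W-2 fields are nested objects; field.value may itself be a dict of sub-fields.
        print(f"{name}: {field.value} (confidence {field.confidence})")
```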
-### Try W-2 form data extraction
+### Try W-2 data extraction
Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need the following resources:
applied-ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/deploy-label-tool.md
recommendations: false
> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version. > [!NOTE]
-> The [cloud hosted](https://fott-2-1.azurewebsites.net/) labeling tool is available at [https://fott-2-1.azurewebsites.net/](https://fott-2-1.azurewebsites.net/). Follow the steps in this document only if you want to deploy the sample labeling tool for yourself.
+> The [cloud hosted](https://fott-2-1.azurewebsites.net/) labeling tool is available at [https://fott-2-1.azurewebsites.net/](https://fott-2-1.azurewebsites.net/). Follow the steps in this document only if you want to deploy the Sample Labeling tool for yourself.
The Form Recognizer Sample Labeling tool is an application that provides a simple user interface (UI), which you can use to manually label forms (documents) for supervised learning. In this article, we'll provide links and instructions that teach you how to:
applied-ai-services Build A Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-model.md
If you add the following content to the request body, the API will train with do
Now that you've learned how to build a training data set, follow a quickstart to train a custom Form Recognizer model and start using it on your forms. * [Train a model and extract document data using the client library or REST API](../quickstarts/get-started-sdks-rest-api.md)
-* [Train with labels using the sample labeling tool](../label-tool.md)
+* [Train with labels using the Sample Labeling tool](../label-tool.md)
## See also
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
Language| Locale code |
>[!NOTE] > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
-Receipt supports all English receipts with the following locales:
+Receipt supports all English receipts and the following locales:
|Language| Locale code | |:--|:-:|
This table lists the written languages supported by each Form Recognizer service
>[!NOTE] > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
-Pre-Built Receipt and Business Cards support all English receipts and business cards with the following locales:
+Prebuilt Receipt and Business Cards support all English receipts and business cards with the following locales:
|Language| Locale code | |:--|:-:|
This technology is currently available for US driver licenses and the biographic
::: moniker range="form-recog-2.1.0" > [!div class="nextstepaction"]
-> [Try Form Recognizer sample labeling tool](https://aka.ms/fott-2.1-ga)
+> [Try Form Recognizer Sample Labeling tool](https://aka.ms/fott-2.1-ga)
::: moniker-end
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Form Recognizer offers several prebuilt models to choose from. Each model has it
1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-1. On the sample tool home page, select **Use prebuilt model to get data**.
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
- :::image type="content" source="../media/label-tool/prebuilt-1.jpg" alt-text="Analyze results of Form Recognizer Layout":::
+ :::image type="content" source="../media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
-1. Select the **Form Type** to analyze from the dropdown window.
+1. Select the **Form Type** to analyze from the dropdown menu.
1. Choose a URL for the file you would like to analyze from the options below:
Form Recognizer offers several prebuilt models to choose from. Each model has it
1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
- :::image type="content" source="../media/fott-select-form-type.png" alt-text="Screenshot: select form type dropdown window.":::
+ :::image type="content" source="../media/fott-select-form-type.png" alt-text="Screenshot: select form type dropdown menu.":::
1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
The Azure Form Recognizer Layout API extracts text, tables, selection marks, and
1. Select **Run Layout**. The Form Recognizer Sample Labeling tool will call the Analyze Layout API and analyze the document.
- :::image type="content" source="../media/fott-layout.png" alt-text="Screenshot: Layout dropdown window.":::
+ :::image type="content" source="../media/fott-layout.png" alt-text="Screenshot: Layout dropdown menu.":::
1. View the results - see the highlighted text extracted, selection marks detected, and tables detected.
The labeling tool will also show which tables have been automatically extracted.
##### Apply labels to text
-Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze. Note the sample label data set includes already labeled fields; we'll add another field.
+Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze. Note the Sample Label data set includes already labeled fields; we'll add another field.
Use the tags editor pane to create a new tag you'd like to identify:
applied-ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-azure-function.md
In this tutorial, you learn how to:
* **Select subscription** → choose your Azure subscription with the storage account you created → select your storage account → then select the name of the storage input container (in this case, `input/{name}`). Press **Enter** to confirm.
- * **Select how your would like to open your project** → choose **Open the project in the current window** from the dropdown window.
+ * **Select how you would like to open your project** → choose **Open the project in the current window** from the dropdown menu.
1. Once you've completed these steps, VS Code will add a new Azure Function project with a *\_\_init\_\_.py* Python script. This script will be triggered when a file is uploaded to the **input** storage container:
applied-ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-logic-apps.md
recommendations: false
> [!IMPORTANT] >
-> This tutorial and the Logic App Form Recognizer connector targets Form Recognizer REST API v2.1 and must be used in conjuction with the [FOTT sample labeling tool](https://fott-2-1.azurewebsites.net/).
+> This tutorial and the Logic App Form Recognizer connector target Form Recognizer REST API v2.1 and must be used in conjunction with the [FOTT Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
Azure Logic Apps is a cloud-based platform that can be used to automate workflows without writing a single line of code. The platform enables you to easily integrate Microsoft and third-party applications with your apps, data, services, and systems. A Logic App is the Azure resource you create when you want to develop a workflow. Here are a few examples of what you can do with a Logic App:
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
The **2022-06-30-preview** release presents extensive updates across the feature
* [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales). * [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales). * [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).
-* [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction (preview)](concept-read.md#microsoft-office-and-html-text-extraction-preview-).
+* [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction](concept-read.md#microsoft-office-and-html-text-extraction).
#### Form Recognizer SDK beta June 2022 preview release
pip package version 3.1.0b4
* **Checkbox / Selection Mark detection** - Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks. * **Model Compose** - allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_. * **Model name** - add a friendly name to your custom models for easier management and tracking.
-* **[New pre-built model for Business Cards](./concept-business-card.md)** for extracting common fields in English, language business cards.
-* **[New locales for pre-built Receipts](./concept-receipt.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, EN-IN
+* **[New prebuilt model for Business Cards](./concept-business-card.md)** for extracting common fields in English language business cards.
+* **[New locales for prebuilt Receipts](./concept-receipt.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, EN-IN
* **Quality improvements** for `Layout`, `Train Custom Model` - _Train without Labels_ and _Train with Labels_. **v2.0** includes the following update:
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
By default, Core Tools reads the function.json files and adds the required packa
</ItemGroup> </Project> ```
+> [!NOTE]
+> For C# script (.csx), you must set `TargetFramework` to a value of `netstandard2.0`. Other target frameworks, such as `net6.0`, aren't supported.
# [v1.x](#tab/functionsv1)
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 9/28/2022 Last updated : 11/14/2022 # Azure Monitor agent extension versions
-This article describes the version details for the Azure Monitor agent virtual machine extension. This extension deploys the agent on virtual machines, scale sets, and Arc-enabled servers (on premise servers with Azure Arc agent installed).
+This article describes the version details for the Azure Monitor agent virtual machine extension. This extension deploys the agent on virtual machines, scale sets, and Arc-enabled servers (on-premises servers with the Azure Arc agent installed).
We strongly recommend updating to the latest version at all times, or opting in to the [Automatic Extension Update](../../virtual-machines/automatic-extension-upgrade.md) feature.
+[//]: # "DO NOT change the format (column schema, etc.) of the below table without consulting glinuxagent alias. The [Azure Monitor Linux Agent Troubleshooting Tool](https://github.com/Azure/azure-linux-extensions/blob/master/AzureMonitorAgent/ama_tst/AMA-Troubleshooting-Tool.md) parses the below table at runtime to determine the latest version of AMA; altering the format could degrade some of the functions of the tool."
## Version details
-| Release Date | Release notes | Windows | Linux |
-|:|:|:|:|
-| August 2022 | **Common updates** <ul><li>Improved resiliency: Default lookback (retry) time updated to last 3 days (72 hours) up from 60 minutes, for agent to collect data post interruption. This is subject to default offline cache size of 10gigabytes</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Reliability and supportability improvements</li></ul> **Windows** <ul><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> **Linux** <ul><li>Support for OpenSuse 15, Debian 11 ARM64</li><li>Support for coexistence of Azure Monitor agent with legacy Azure Diagnostic extension for Linux (LAD)</li><li>Increased max-size of UDP payload for Telegraf output to prevent dimension truncation</li><li>Prevent unconfigured upload to Azure Monitor Metrics destination</li><li>Fix for disk metrics wherein *instance name* dimension will use the disk mount path(s) instead of the device name(s), to provide parity with legacy agent</li><li>Fixed *disk free MB* metric to report megabytes instead of bytes</li></ul> | 1.8.0.0 | 1.22.2 |
-| July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0.0 | None |
-| June 2022 | Bugfixes with user assigned identity support, and reliability improvements | 1.6.0.0 | None |
+| Release Date | Release notes | Windows | Linux |
+|:|:|:|:|
+| October 2022 | <ul><li>Increased default retry timeout for data upload from 4 to 8 hours</li><li>Data quality improvements</li></ul> | 1.10.0.0 | None |
+| September 2022 | Reliability improvements | 1.9.0.0 | None |
+| August 2022 | **Common updates** <ul><li>Improved resiliency: Default lookback (retry) time updated to last 3 days (72 hours) up from 60 minutes, for agent to collect data post interruption. This is subject to the default offline cache size of 10 gigabytes</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Reliability and supportability improvements</li></ul> **Windows** <ul><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> **Linux** <ul><li>Support for OpenSuse 15, Debian 11 ARM64</li><li>Support for coexistence of Azure Monitor agent with legacy Azure Diagnostic extension for Linux (LAD)</li><li>Increased max-size of UDP payload for Telegraf output to prevent dimension truncation</li><li>Prevent unconfigured upload to Azure Monitor Metrics destination</li><li>Fix for disk metrics wherein *instance name* dimension will use the disk mount path(s) instead of the device name(s), to provide parity with legacy agent</li><li>Fixed *disk free MB* metric to report megabytes instead of bytes</li></ul> | 1.8.0.0 | 1.22.2 |
+| July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0.0 | None |
+| June 2022 | Bugfixes with user assigned identity support, and reliability improvements | 1.6.0.0 | None |
| May 2022 | <ul><li>Fixed issue where agent stops functioning due to faulty XPath query. With this version, only query related Windows events will fail, other data types will continue to be collected</li><li>Collection of Windows network troubleshooting logs added to 'CollectAMAlogs.ps1' tool</li><li>Linux support for Debian 11 distro</li><li>Fixed issue to list mount paths instead of device names for Linux disk metrics</li></ul> | 1.5.0.0 | 1.21.0 |
| April 2022 | <ul><li>Private IP information added in Log Analytics <i>Heartbeat</i> table for Windows and Linux</li><li>Fixed bugs in Windows IIS log collection (preview) <ul><li>Updated IIS site column name to match backend KQL transform</li><li>Added delay to IIS upload task to account for IIS buffering</li></ul></li><li>Fixed Linux CEF syslog forwarding for Sentinel</li><li>Removed 'error' message for Azure MSI token retrieval failure on Arc to show as 'Info' instead</li><li>Support added for Ubuntu 22.04, RHEL 8.5, 8.6, AlmaLinux and RockyLinux distros</li></ul> | 1.4.1.0<sup>Hotfix</sup> | 1.19.3 |
| March 2022 | <ul><li>Fixed timestamp and XML format bugs in Windows Event logs</li><li>Full Windows OS information in Log Analytics Heartbeat table</li><li>Fixed Linux performance counters to collect instance values instead of 'total' only</li></ul> | 1.3.0.0 | 1.17.5.0 |
-| February 2022 | <ul><li>Bugfixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li><li>Internal test improvement on Linux</li></ul> | 1.2.0.0 | 1.15.3 |
-| January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
-| December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> |
-| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> |
-| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>Hotfix</sup> |
-| July 2021 | <ul><li>Support for direct proxies</li><li>Support for Log Analytics gateway</li></ul> [Learn more](https://azure.microsoft.com/updates/general-availability-azure-monitor-agent-and-data-collection-rules-now-support-direct-proxies-and-log-analytics-gateway/) | 1.1.1.0 | 1.10.5.0 |
-| June 2021 | General availability announced. <ul><li>All features except metrics destination now generally available</li><li>Production quality, security and compliance</li><li>Availability in all public regions</li><li>Performance and scale improvements for higher EPS</li></ul> [Learn more](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-now-generally-available/) | 1.0.12.0 | 1.9.1.0 |
-
-<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7, v1.15.1 and AMA Windows v1.1.3.1, v1.1.5.0. Please use hotfix versions listed above.
-<sup>1</sup> Known issue: No data collected from Linux Arc-enabled servers
+| February 2022 | <ul><li>Bugfixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li><li>Internal test improvement on Linux</li></ul> | 1.2.0.0 | 1.15.3 |
+| January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
+| December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> |
+| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> |
+| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>Hotfix</sup> |
+| July 2021 | <ul><li>Support for direct proxies</li><li>Support for Log Analytics gateway</li></ul> [Learn more](https://azure.microsoft.com/updates/general-availability-azure-monitor-agent-and-data-collection-rules-now-support-direct-proxies-and-log-analytics-gateway/) | 1.1.1.0 | 1.10.5.0 |
+| June 2021 | General availability announced. <ul><li>All features except metrics destination now generally available</li><li>Production quality, security and compliance</li><li>Availability in all public regions</li><li>Performance and scale improvements for higher EPS</li></ul> [Learn more](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-now-generally-available/) | 1.0.12.0 | 1.9.1.0 |
+
+<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7 or v1.15.1, or AMA Windows versions v1.1.3.1 or v1.1.5.0. Use the hotfix versions listed above instead.
+<sup>1</sup> Known issue: No data collected from Linux Arc-enabled servers
<sup>2</sup> Known issue: Linux performance counter data stops flowing when the machine is restarted or rebooted.

## Next steps
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
Previously updated : 8/18/2022 Last updated : 11/14/2022 # Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics Agent to Azure Monitor Agent and track the status of the migration in my account.
Azure Monitor Agent (AMA) replaces the Log Analytics Agent (MMA/OMS) for Windows
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
-## Using AMA Migration Helper (preview)
+## Using AMA Migration Helper
AMA Migration Helper is a workbook-based Azure Monitor solution that helps you **discover what to migrate** and **track progress** as you move from Log Analytics Agent to Azure Monitor Agent. Use this single-pane-of-glass view to expedite and track the status of your agent migration.
-You can access the workbook [here](https://portal.azure.com/#view/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Monitor/ConfigurationId/community-Workbooks%2FAzure%20Monitor%20-%20Agents%2FAgent%20Migration%20Tracker/Type/workbook/WorkbookTemplateName/AMA%20Migration%20Helper), or find it on the [Azure portal (preview)](https://portal.azure.com/?feature.includePreviewTemplates=true) under **Monitor** > **Workbooks** > **Public Templates** > **Azure Monitor essentials** > **AMA Migration Helper**.
+You can access the workbook **[here](https://portal.azure.com/#view/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Monitor/ConfigurationId/community-Workbooks%2FAzure%20Monitor%20-%20Agents%2FAgent%20Migration%20Tracker/Type/workbook/WorkbookTemplateName/AMA%20Migration%20Helper)**, or find it on the Azure portal under **Monitor** > **Workbooks** > **Public Templates** > **Azure Monitor essentials** > **AMA Migration Helper**.
:::image type="content" source="media/azure-monitor-migration-tools/ama-migration-helper.png" lightbox="media/azure-monitor-migration-tools/ama-migration-helper.png" alt-text="Screenshot of the Azure Monitor Agent Migration Helper workbook. The screenshot highlights the Subscription and Workspace dropdowns and shows the Azure Virtual Machines tab, on which you can track which agent is deployed on each virtual machine.":::
-## Installing and using DCR Config Generator (preview)
+## Installing and using DCR Config Generator
Azure Monitor Agent relies only on [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) for configuration, whereas Log Analytics Agent inherits its configuration from Log Analytics workspaces. Use the DCR Config Generator tool to parse Log Analytics Agent configuration from your workspaces and generate and deploy corresponding data collection rules automatically. You can then associate the rules with machines running the new agent by using built-in association policies.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Azure Monitor Agent provides the following benefits over legacy agents:
Your migration plan to the Azure Monitor Agent should take into account: -- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to the new agent to benefit from added security and reduced cost immediately. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to *discover what solutions and features you're using today that depend on the legacy agent*.
+- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to the new agent to benefit from added security and reduced cost immediately. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to *discover what solutions and features you're using today that depend on the legacy agent*.
If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for us
To ensure safe deployment during migration, begin testing with a few resources running Azure Monitor Agent in your nonproduction environment. After you validate the data collected on these test resources, roll out to production by following the same steps.
-To start collecting some of the existing data types, see [Create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association). Alternatively, you can use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator-preview) to convert existing legacy agent configuration into data collection rules.
+To start collecting some of the existing data types, see [Create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association). Alternatively, you can use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator) to convert existing legacy agent configuration into data collection rules.
After you *validate* that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA-collected data. Ensure it matches data flowing through the existing Log Analytics agent, for example with a query like the sketch below.
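A minimal Log Analytics query sketch (assuming the standard `Heartbeat` schema) that shows the most recent heartbeat reported per computer and agent category:

```kusto
// Compare heartbeats reported by each agent over the last day.
// Category is "Azure Monitor Agent" for AMA-collected heartbeats;
// legacy Log Analytics agent heartbeats report a different category.
Heartbeat
| where TimeGenerated > ago(1d)
| summarize LastHeartbeat = max(TimeGenerated) by Computer, Category
| order by Computer asc
```

## At-scale migration using Azure Policy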
-We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent by using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview). Use this tool to find sources like virtual machines, virtual machine scale sets, and non-Azure servers.
+We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent by using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper). Use this tool to find sources like virtual machines, virtual machine scale sets, and non-Azure servers.
-Use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator-preview) to migrate legacy agent configuration, including data sources and destinations, from the workspace to the new DCRs.
+Use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator) to migrate legacy agent configuration, including data sources and destinations, from the workspace to the new DCRs.
> [!IMPORTANT] > Before you deploy a large number of agents, consider [configuring the workspace](agent-data-sources.md) to disable data collection for the Log Analytics agent. If you leave data collection for the Log Analytics agent enabled, you might collect duplicate data and increase your costs. You might choose to collect duplicate data for a short period during migration until you verify that you've deployed and configured Azure Monitor Agent correctly.
For more information, see:
- [Azure Monitor Agent overview](agents-overview.md) - [Azure Monitor Agent migration for Microsoft Sentinel](../../sentinel/ama-migrate.md)-- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent)
+- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent)
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
Manage your alert rules in the Azure portal, or using the CLI or PowerShell.
## Enable recommended alert rules in the Azure portal (preview) > [!NOTE]
-> The alert rule recommendations feature is currently in preview and is only enabled for VMs.
+> The alert rule recommendations feature is currently in preview and is only enabled for the following unmonitored resources:
+> - Virtual machines
+> - AKS resources
+> - Log Analytics workspaces
If you don't have alert rules defined for the selected resource, either individually or as part of a resource group or subscription, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or enable recommended out-of-the-box alert rules in the Azure portal.
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
Once an alert is triggered, the alert is made up of:
- The **user response** is set by the user and doesn't change until the user changes it. You can see all alert instances in all your Azure resources generated in the last 30 days on the **[Alerts page](alerts-page.md)** in the Azure portal. + ## Types of alerts There are four types of alerts. This table provides a brief description of each alert type.
See [this article](alerts-types.md) for detailed information about each alert ty
|[Log alerts](alerts-types.md#log-alerts)|Log alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.| |[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches the defined conditions.| |[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.|+ ## Out-of-the-box alert rules (preview) If you don't have alert rules defined for the selected resource, you can [enable recommended out-of-the-box alert rules in the Azure portal](alerts-manage-alert-rules.md#enable-recommended-alert-rules-in-the-azure-portal-preview). > [!NOTE]
-> The alert rule recommendations feature is currently in preview and is only enabled for VMs.
+> The alert rule recommendations feature is currently in preview and is only enabled for the following unmonitored resources:
+> - Virtual machines
+> - AKS resources
+> - Log Analytics workspaces
## Azure role-based access control (Azure RBAC) for alerts
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Title: Azure AD authentication for Application Insights description: Learn how to enable Azure Active Directory (Azure AD) authentication to ensure that only authenticated telemetry is ingested in your Application Insights resources. Previously updated : 08/02/2021 Last updated : 11/14/2022 ms.devlang: csharp, java, javascript, python
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Title: Monitor Azure app services performance ASP.NET | Microsoft Docs description: Application performance monitoring for Azure app services using ASP.NET. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/24/2022 Last updated : 11/14/2022 ms.devlang: javascript
The table below provides a more detailed explanation of what these values mean,
### System.IO.FileNotFoundException after 2.8.44 upgrade
-2.8.44 version of auto instrumentation upgrades Application Insights SDK to 2.20.0. Application Insights SDK has an indirect reference to `System.Runtime.CompilerServices.Unsafe.dll` through `System.Diagnostics.DiagnosticSource.dll`. If application has [binding redirect](https://learn.microsoft.com/dotnet/framework/configure-apps/file-schema/runtime/bindingredirect-element) for `System.Runtime.CompilerServices.Unsafe.dll` and if this library is not present in application folder it may throw `System.IO.FileNotFoundException`.
+Version 2.8.44 of auto instrumentation upgrades the Application Insights SDK to 2.20.0. The Application Insights SDK has an indirect reference to `System.Runtime.CompilerServices.Unsafe.dll` through `System.Diagnostics.DiagnosticSource.dll`. If the application has a [binding redirect](/dotnet/framework/configure-apps/file-schema/runtime/bindingredirect-element) for `System.Runtime.CompilerServices.Unsafe.dll` and this library isn't present in the application folder, it may throw `System.IO.FileNotFoundException`.
To resolve this issue, remove the binding redirect entry for `System.Runtime.CompilerServices.Unsafe.dll` from the web.config file. If the application needs to use `System.Runtime.CompilerServices.Unsafe.dll`, set the binding redirect as shown below.
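For reference, a binding redirect for this assembly in web.config looks roughly like the following minimal sketch. Treat the version numbers as placeholders (and verify the public key token) against the `System.Runtime.CompilerServices.Unsafe` assembly actually deployed with your application.

```xml
<!-- Sketch only: adjust oldVersion/newVersion (and confirm publicKeyToken) against the
     System.Runtime.CompilerServices.Unsafe assembly version in your application's bin folder. -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Runtime.CompilerServices.Unsafe"
                          publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.0.4.1" newVersion="4.0.4.1" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```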
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
Title: Application Insights for console applications | Microsoft Docs description: Monitor web applications for availability, performance, and usage. Previously updated : 05/21/2020 Last updated : 11/14/2022 ms.devlang: csharp
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
Title: Create a new Azure Monitor Application Insights workspace-based resource description: Learn about the steps required to enable the new Azure Monitor Application Insights workspace-based resources. Previously updated : 07/14/2022 Last updated : 11/14/2022
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
Title: Continuous export of telemetry from Application Insights | Microsoft Docs description: Export diagnostic and usage data to storage in Azure and download it from there. Previously updated : 10/24/2022 Last updated : 11/14/2022
azure-monitor Java Jmx Metrics Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-jmx-metrics-configuration.md
Title: How to configure JMX metrics - Azure Monitor application insights for Java
-description: Configure additional JMX metrics collection for Azure Monitor application insights Java agent
+description: Configure additional JMX metrics collection for Azure Monitor Application Insights Java agent
Last updated 03/16/2021 ms.devlang: java
azure-monitor Javascript Angular Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-angular-plugin.md
ibiza Previously updated : 10/07/2020 Last updated : 11/14/2022 ms.devlang: javascript
azure-monitor Javascript React Native Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-native-plugin.md
ibiza Previously updated : 08/06/2020 Last updated : 11/14/2022 ms.devlang: javascript
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
ibiza Previously updated : 07/28/2020 Last updated : 11/14/2022 ms.devlang: javascript
azure-monitor Resource Manager App Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-app-resource.md
Title: Resource Manager template samples for Application Insights Resources description: Sample Azure Resource Manager templates to deploy Application Insights resources in Azure Monitor. Previously updated : 04/27/2022 Last updated : 11/14/2022
azure-monitor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices.md
This article introduces the scenario. If you want to jump right into a specific
| Article | Description | |:|:| | [Planning](best-practices-plan.md) | Planning that you should consider before starting your implementation. Includes design decisions and information about your organization and requirements that you should gather. |
-| [Configure data collection](best-practices-data-collection.md) | Tasks required to collect monitoring data from you Azure and hybrid applications and resources. |
+| [Configure data collection](best-practices-data-collection.md) | Tasks required to collect monitoring data from your Azure and hybrid applications and resources. |
| [Analysis and visualizations](best-practices-analysis.md) | Standard features and additional visualizations that you can create to analyze collected monitoring data. | | [Alerts and automated responses](best-practices-alerts.md) | Configure notifications and processes that are automatically triggered when an alert is created. |
+| [Best practices and cost management](best-practices-cost.md) | Reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. |
## Next steps
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
Change Analysis provides data for various management and troubleshooting scenarios to help you understand what changes to your application might have caused the issues. You can view the Change Analysis data through several channels:
-## The Change Analysis standalone UI
+## Change Analysis overview portal
-You can access Change Analysis in a standalone pane under Azure Monitor, where you can view all changes and application dependency/resource insights. You can access Change Analysis through a couple of entry points:
+You can access the Change Analysis overview portal under Azure Monitor, where you can view all changes and application dependency/resource insights. There are a couple of entry points:
+
+### Monitor home page
+
+From the Azure portal home page, select **Monitor** from the menu.
++
+In the Monitor overview page, select the **Change Analysis** card.
++
+### Search
In the Azure portal, search for Change Analysis to launch the experience.
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
ms.contributor: cawa Previously updated : 08/23/2022 Last updated : 11/15/2022
Change Analysis detects various types of changes, from the infrastructure layer
The following diagram illustrates the architecture of Change Analysis:
-![Architecture diagram of how Change Analysis gets change data and provides it to client tools](./media/change-analysis/overview.png)
## Supported resource types
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
Read more about distributed tracing at [What is distributed tracing?](app/distri
Once [Change Analysis is enabled](./change/change-analysis-enable.md), the `Microsoft.ChangeAnalysis` resource provider is registered with an Azure Resource Manager subscription to make the resource properties and configuration change data available. Change Analysis provides data for various management and troubleshooting scenarios to help users understand what changes might have caused the issues: - Troubleshoot your application via the [Diagnose & solve problems tool](./change/change-analysis-enable.md).-- Perform general management and monitoring via the [Change Analysis standalone UI](./change/change-analysis-visualizations.md#the-change-analysis-standalone-ui) and [the activity log](./change/change-analysis-visualizations.md#activity-log-change-history).
+- Perform general management and monitoring via the [Change Analysis overview portal](./change/change-analysis-visualizations.md#change-analysis-overview-portal) and [the activity log](./change/change-analysis-visualizations.md#activity-log-change-history).
- [Learn more about how to view data results for other scenarios](./change/change-analysis-visualizations.md). Read more about Change Analysis, including data sources in [Use Change Analysis in Azure Monitor](./change/change-analysis.md).
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-installation.md
na Previously updated : 07/29/2022 Last updated : 11/14/2022
example steps are to provide guidance on setup of SSH for this communication.
## Enable communication with database
-This section explains how to enable communication with storage. Ensure the storage back-end you're using is correctly selected.
+This section explains how to enable communication with the database. Ensure the database you're using is correctly selected from the tabs.
# [SAP HANA](#tab/sap-hana)
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Using loops in Bicep has these limitations:
- Bicep loops only work with values that can be determined at the start of deployment. - Loop iterations can't be a negative number or exceed 800 iterations. - Can't loop a resource with nested child resources. Change the child resources to top-level resources. See [Iteration for a child resource](#iteration-for-a-child-resource).-- To loop on multiple levels of properties, use [lambda map function](./bicep-functions-lambda.md#map).
+- To loop on multiple levels of properties, use the [lambda map function](./bicep-functions-lambda.md#map).
## Integer index
azure-resource-manager Quickstart Create Templates Use The Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md
In this quickstart, you learn how to create an Azure Resource Manager template (ARM template) in the Azure portal. You edit and deploy the template from the portal.
-ARM templates are JSON files that define the resources you need to deploy for your solution. To understand the concepts associated with deploying and managing your Azure solutions, see [template deployment overview](overview.md).
+ARM templates are JSON or Bicep files that define the resources you need to deploy for your solution. To understand the concepts associated with deploying and managing your Azure solutions, see [template deployment overview](overview.md).
After completing the tutorial, you deploy an Azure Storage account. The same process can be used to deploy other Azure resources.
Rather than manually building an entire ARM template, let's start by retrieving
You can use the portal for quickly developing and deploying ARM templates. In general, we recommend using Visual Studio Code for developing your ARM templates, and Azure CLI or Azure PowerShell for deploying the template, but you can use the portal for quick deployments without installing those tools.
-In this section, let's suppose you have an ARM template that you want to deploy one time with setting up the other tools.
+In this section, let's suppose you have an ARM template that you want to deploy one time without setting up the other tools.
1. Again, select **Deploy a custom template** in the portal.
azure-video-indexer Edit Speakers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-speakers.md
+
+ Title: Edit speakers in the Azure Video Indexer website
+description: The article demonstrates how to edit speakers with the Azure Video Indexer website.
+ Last updated : 11/01/2022+++
+# Edit speakers with the Azure Video Indexer website
+
+Azure Video Indexer identifies speakers in your video, but in some cases you may want to edit their names. While in edit mode, you can perform the following editing actions, which apply only to the currently selected video.
+
+- Add new speaker.
+- Rename existing speaker.
+
+ The update applies to all speakers identified by this name.
+- Assign a speaker for a transcript line.
+
+This article demonstrates how to edit speakers with the [Azure Video Indexer website](https://www.videoindexer.ai/). The same editing operations are possible with the API. To use the API, call [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index), as sketched below.
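As a rough illustration only, the API flow is: download the video index JSON, edit the speaker entries locally, and upload the edited index back with the update operation. In the C# sketch below, the route shape mirrors the Videos URL pattern used elsewhere in these docs, `EditSpeakers` is a hypothetical helper for your own edits, and `location`, `accountId`, `videoId`, and `accessToken` are placeholders; verify the exact route and payload in the API portal linked above.

```csharp
// Illustrative sketch only: confirm the Update-Video-Index route and payload in the API portal.
// location, accountId, videoId, and accessToken are placeholders; EditSpeakers is a
// hypothetical helper that applies your speaker edits to the downloaded index JSON.
var client = new HttpClient();
string indexUrl =
    $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/Index?accessToken={accessToken}";

string indexJson = await client.GetStringAsync(indexUrl);    // 1. download the current index
string updatedJson = EditSpeakers(indexJson);                // 2. edit speakers locally

var response = await client.PutAsync(indexUrl,               // 3. upload the edited index
    new StringContent(updatedJson, System.Text.Encoding.UTF8, "application/json"));
response.EnsureSuccessStatusCode();
```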
+
+## Prerequisites
+
+1. Sign in to the [Azure Video Indexer website](https://www.videoindexer.ai/).
+2. Select a video.
+3. Select the **Timeline** tab.
+4. Choose to view speakers.
++
+## Add a new speaker
+
+This action allows adding new speakers that were not identified by Azure Video Indexer. To add a new speaker from the website for the selected video, do the following:
+
+1. Select the edit mode.
+
+ :::image type="content" alt-text="Screenshot of how to edit speakers." source="./media/edit-speakers-website/edit.png":::
+1. Go to the speakers dropdown menu above the transcript line you wish to assign a new speaker to.
+1. Select **Assign a new speaker**.
+
+ :::image type="content" alt-text="Screenshot of how to add a new speaker." source="./media/edit-speakers-website/assign-new.png":::
+1. Add the name of the speaker you would like to assign.
+1. Select the checkmark to save.
+
+> [!NOTE]
+> Speaker names should be unique across the speakers in the current video.
+
+## Rename an existing speaker
+
+This action allows renaming an existing speaker that was identified by Azure Video Indexer. To rename a speaker from the website for the selected video, do the following:
+
+1. Select the edit mode.
+1. Go to the transcript line where the speaker you wish to rename appears.
+1. Select **Rename selected speaker**.
+
+ :::image type="content" alt-text="Screenshot of how to rename a speaker." source="./media/edit-speakers-website/rename.png":::
+
+   This action updates all transcript lines identified by this speaker name.
+1. Select the checkmark to save.
+
+## Assign a speaker to a transcript line
+
+This action lets you assign a speaker to a specific transcript line that has a wrong speaker assignment. To assign a speaker to a transcript line from the website, do the following:
+
+1. Go to the transcript line you want to assign a different speaker to.
+1. From the speakers dropdown menu above the line, select the speaker you wish to assign.
+
+ The update only applies to the currently selected transcript line.
+
+If the speaker you wish to assign doesn't appear in the list, you can either **Assign a new speaker** or **Rename an existing speaker** as described above.
+
+## Limitations
+
+When adding a new speaker or renaming a speaker, the new name should be unique.
+
+## Next steps
+
+[Insert or remove transcript lines in the Azure Video Indexer website](edit-transcript-lines-portal.md)
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 09/15/2022 Last updated : 11/07/2022
In order to upload a video from a URL, change your code to send nu
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null); ```
+## November 2022
+
+### Speakers' names can now be edited from the Azure Video Indexer website
+
+You can now add new speakers, rename identified speakers, and modify the speaker assigned to a particular transcript line using the [Azure Video Indexer website](https://www.videoindexer.ai/). For details on how to edit speakers from the **Timeline** pane, see [Edit speakers with the Azure Video Indexer website](edit-speakers.md).
+
+The same capabilities are available from the Azure Video Indexer [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) API.
+ ## October 2022 ### A new built-in role: Video Indexer Restricted Viewer
backup Install Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/install-mars-agent.md
Title: Install the Microsoft Azure Recovery Services (MARS) agent description: Learn how to install the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines. Previously updated : 10/21/2022 Last updated : 11/15/2022
batch Batch Certificate Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-certificate-migration-guide.md
On *February 29, 2024*, the Azure Batch account certificates feature will be ret
## About the feature
-Certificates are often required in various scenarios such as decrypting a secret, securing communication channels, or [accessing another service](credential-access-key-vault.md). Currently, Azure Batch offers two ways to manage certificates on Batch pools. You can add certificates to a Batch account or you can use the Azure Key Vault VM extension to manage certificates on Batch pools. Only the [certificate functionality on an Azure Batch account](https://learn.microsoft.com/rest/api/batchservice/certificate) and the functionality it extends to Batch pools via `CertificateReference` to [Add Pool](https://learn.microsoft.com/rest/api/batchservice/pool/add#certificatereference), [Patch Pool](https://learn.microsoft.com/rest/api/batchservice/pool/patch#certificatereference), [Update Properties](https://learn.microsoft.com/rest/api/batchservice/pool/update-properties#certificatereference) and the corresponding references on Get and List Pool APIs are being retired.
+Certificates are often required in various scenarios such as decrypting a secret, securing communication channels, or [accessing another service](credential-access-key-vault.md). Currently, Azure Batch offers two ways to manage certificates on Batch pools. You can add certificates to a Batch account or you can use the Azure Key Vault VM extension to manage certificates on Batch pools. Only the [certificate functionality on an Azure Batch account](/rest/api/batchservice/certificate) and the functionality it extends to Batch pools via `CertificateReference` to [Add Pool](/rest/api/batchservice/pool/add#certificatereference), [Patch Pool](/rest/api/batchservice/pool/patch#certificatereference), [Update Properties](/rest/api/batchservice/pool/update-properties#certificatereference) and the corresponding references on Get and List Pool APIs are being retired.
## Feature end of support
batch Job Pool Lifetime Statistics Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/job-pool-lifetime-statistics-migration-guide.md
The Azure Batch lifetime statistics API for jobs and pools will be retired on *A
## About the feature
-Currently, you can use API to retrieve lifetime statistics for [jobs](https://learn.microsoft.com/rest/api/batchservice/job/get-all-lifetime-statistics) and [pools](https://learn.microsoft.com/rest/api/batchservice/pool/get-all-lifetime-statistics) in Batch. The API collects statistical data from when the Batch account was created for all jobs and pools created for the lifetime of the Batch account.
+Currently, you can use API to retrieve lifetime statistics for [jobs](/rest/api/batchservice/job/get-all-lifetime-statistics#http) and [pools](/rest/api/batchservice/pool/get-all-lifetime-statistics#pools) in Batch. The API collects statistical data from when the Batch account was created for all jobs and pools created for the lifetime of the Batch account.
To make statistical data available to customers, the Batch service performs aggregation and roll-ups on a periodic basis. Due to these lifetime stats APIs being rarely exercised by Batch customers, these APIs are being retired as alternatives exist. ## Feature end of support
-The lifetime statistics API is designed and maintained to help you gather information about usage of your Batch pools and jobs across the lifetime of your Batch account. Alternatives exist to gather data at a fine-grained level on a [per job](https://learn.microsoft.com/rest/api/batchservice/job/get#jobstatistics) or [per pool](https://learn.microsoft.com/rest/api/batchservice/pool/get#poolstatistics) basis. Only the lifetime statistics APIs are being retired.
+The lifetime statistics API is designed and maintained to help you gather information about usage of your Batch pools and jobs across the lifetime of your Batch account. Alternatives exist to gather data at a fine-grained level on a [per job](/rest/api/batchservice/job/get#jobstatistics) or [per pool](/rest/api/batchservice/pool/get#poolstatistics) basis. Only the lifetime statistics APIs are being retired.
When the job and pool lifetime statistics API is retired on April 30, 2023, the API will no longer work, and it will return an appropriate HTTP response error code to the client.
When the job and pool lifetime statistics API is retired on April 30, 2023, the
### Aggregate with per job or per pool statistics
-You can get statistics for any active job or pool in a Batch account. For jobs, you can issue a [Get Job](https://learn.microsoft.com/rest/api/batchservice/job/get) request and view the [JobStatistics object](https://learn.microsoft.com/rest/api/batchservice/job/get#jobstatistics). For pools, you can issue a [Get Pool](https://learn.microsoft.com/rest/api/batchservice/pool/get) request and view the [PoolStatistics object](https://learn.microsoft.com/rest/api/batchservice/pool/get#poolstatistics). You'll then be able to use these results and aggregate as needed across jobs and pools that are relevant for your analysis workflow.
+You can get statistics for any active job or pool in a Batch account. For jobs, you can issue a [Get Job](/rest/api/batchservice/job/get) request and view the [JobStatistics object](/rest/api/batchservice/job/get#jobstatistics). For pools, you can issue a [Get Pool](/rest/api/batchservice/pool/get) request and view the [PoolStatistics object](/rest/api/batchservice/pool/get#poolstatistics). You'll then be able to use these results and aggregate as needed across jobs and pools that are relevant for your analysis workflow.
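For illustration, here's a minimal C# sketch using the Batch .NET client library (`Microsoft.Azure.Batch`). The account URL, account name, key, and job ID are placeholders, and the statistics property must be explicitly expanded on the request.

```csharp
// Minimal sketch using the Batch .NET client library (Microsoft.Azure.Batch).
// The account URL, account name, key, and job ID below are placeholders.
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

var credentials = new BatchSharedKeyCredentials(
    "https://<account>.<region>.batch.azure.com", "<account-name>", "<account-key>");

using (BatchClient batchClient = BatchClient.Open(credentials))
{
    // The stats property is only populated when it's expanded on the Get Job request.
    var withStats = new ODATADetailLevel(expandClause: "stats");
    CloudJob job = batchClient.JobOperations.GetJob("<job-id>", detailLevel: withStats);

    JobStatistics jobStats = job.Statistics;
    // PoolOperations.GetPool works the same way for per-pool statistics; aggregate the
    // results across the jobs and pools that matter for your analysis workflow.
}
```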
### Set up logs in the Azure portal
The Azure portal has various options to enable monitoring and logs. System logs
## Next steps
-For more information, see the Batch [Job](https://learn.microsoft.com/rest/api/batchservice/job) or [Pool](https://learn.microsoft.com/rest/api/batchservice/pool) API. For Azure Monitor logs, see [this article](../azure-monitor/logs/data-platform-logs.md).
+For more information, see the Batch [Job](/rest/api/batchservice/job) or [Pool](/rest/api/batchservice/pool) API. For Azure Monitor logs, see [this article](../azure-monitor/logs/data-platform-logs.md).
cloudfoundry How Cloud Foundry Integrates With Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/how-cloud-foundry-integrates-with-azure.md
vm-linux Previously updated : 05/11/2018 Last updated : 11/14/2022+ # Integrate Cloud Foundry with Azure
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Container | Features | Latest | Release status | |--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.7.0 | Generally available |
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.7.0 | Generally available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.8.0 | Generally available |
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.8.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.6.0 | Generally available |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.7.0 | Generally available |
## Prerequisites
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)+
+Release note for `3.8.0-amd64`:
+
+**Features**
+* Security upgrade.
+
+| Image Tags | Notes | Digest |
+|-|:|:-|
+| `latest` | | `sha256:83716502cc7baefea64e0d5d64db8f5db0b2f14d48b6b53d96748df72952749b`|
+| `3.8.0-amd64` | | `sha256:83716502cc7baefea64e0d5d64db8f5db0b2f14d48b6b53d96748df72952749b`|
+
+# [Previous version](#tab/previous)
+ Release note for `3.7.0-amd64`: **Features**
Release note for `3.7.0-amd64`:
| `latest` | | `sha256:551113f7df4840bde91bbe3d9902af5a09153462ca450490347547d95ab1c08e`|
| `3.7.0-amd64` | | `sha256:551113f7df4840bde91bbe3d9902af5a09153462ca450490347547d95ab1c08e`|
-# [Previous version](#tab/previous)
Release note for `3.6.0-amd64`: **Features**
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia
# [Latest version](#tab/current)
+Release note for `3.8.0-amd64-<locale>`:
+
+**Features**
+* Security upgrade.
++
+| Image Tags | Notes |
+|-|:--|
+| `latest` | Container image with the `en-US` locale. |
+| `3.8.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.8.0-amd64-en-us`. |
+
+This container has the following locales available.
+
+| Locale for v3.8.0 | Notes | Digest |
+|--|:--|:--|
+| `ar-ae`| Container image with the `ar-ae` locale. | `sha256:64c340cd9039500433418e75d8c2466c777eaccd839364ffe6ca629cd6ba02d4` |
+| `ar-bh`| Container image with the `ar-bh` locale. | `sha256:54dd50d519816197b9de209acf8e7166d603bcf4e8b7a89a591c1261b3849754` |
+| `ar-eg`| Container image with the `ar-eg` locale. | `sha256:2590d224db446eb8cdeb7318d8c85b1f11c2d0b93e49213f09abc0da34578751` |
+| `ar-iq`| Container image with the `ar-iq` locale. | `sha256:0416090017a6b37152ccc9ce56b9d0056fa3c94636a561f515cadcaaf65c66db` |
+| `ar-jo`| Container image with the `ar-jo` locale. | `sha256:abe4c5804923018f98e083f2980a2c14e3c6db6c4d0459dc6d7f895a3ea90c33` |
+| `ar-kw`| Container image with the `ar-kw` locale. | `sha256:e6eac82c8f86885b0117cc11ef03ab37ac97ae3bf0040aca5d497393e3aaa9e3` |
+| `ar-lb`| Container image with the `ar-lb` locale. | `sha256:3b0a1ae4d820e59ed15a28a9210717f1d8f6e2b1d1e240a8cd92e46d13e9cca3` |
+| `ar-om`| Container image with the `ar-om` locale. | `sha256:d0e6a0bfa2c8477a8c7ef6886b144e480e22b7dc00f86999c19b2f7de7e2d6c4` |
+| `ar-qa`| Container image with the `ar-qa` locale. | `sha256:3989d4b7ae524ed41e6958d4db15ae195342cacbcfff444160fd8bb42d6e3d2b` |
+| `ar-sa`| Container image with the `ar-sa` locale. | `sha256:047e87f28c009f82ae70b07a0cdc68b1a493623d5d1c64c604f8eda419d077eb` |
+| `ar-sy`| Container image with the `ar-sy` locale. | `sha256:10f87729b1555cc0cc39a37024006511c0d67a2517d4c329d7e7f8c4978b5e61` |
+| `az-az`| Container image with the `az-az` locale. | `sha256:c019cb328c4b53aef8408bccd2747ded0344c5512f2f5950bea3efb159643da2` |
+| `bg-bg`| Container image with the `bg-bg` locale. | `sha256:7d0eed1d1748760700ca9141260ed797626100792f8a8f88d694ba8dee789521` |
+| `bn-in`| Container image with the `bn-in` locale. | `sha256:d04be7ac5aec92e193ef2ec2001bc31d36f506e83480de74e2f475dc8067244c` |
+| `bs-ba`| Container image with the `bs-ba` locale. | `sha256:f2ad12a0f6866dfb1dd7aff119887b44b27ae8a880febc66df92d9883985e242` |
+| `ca-es`| Container image with the `ca-es` locale. | `sha256:3f18959f97c6790349a07ef9f9740b57b2b87e8a1b0be0e03cc5fad3090e1398` |
+| `cs-cz`| Container image with the `cs-cz` locale. | `sha256:a040c687482fc04ae66d368bc7b18b028deec693f9e578ca9c3cf09c9032a1e4` |
+| `cy-gb`| Container image with the `cy-gb` locale. | `sha256:37fd9f512d19565f3f1fb8c08dd98ed6ba324e56e2a5f8edc1566a1b858341d4` |
+| `da-dk`| Container image with the `da-dk` locale. | `sha256:965f19924e46992947e037bd0939bcd678b2035db8021679290a78e86640d276` |
+| `de-at`| Container image with the `de-at` locale. | `sha256:c3043011c7188a325a2877b9cf9d08e1b43e564f2f0e048df4f758f5fceafd5e` |
+| `de-ch`| Container image with the `de-ch` locale. | `sha256:32bb411029d1c5c80e4f66153722894a348fc314ab98cc31e5f0a76c73890c51` |
+| `de-de`| Container image with the `de-de` locale. | `sha256:1723fd3e855a7902f86f944211c766c0d53122b911fba9f8de8ab05ef40bbb5a` |
+| `el-gr`| Container image with the `el-gr` locale. | `sha256:2a8c5cc9bf95d8b50265966d2085f445f2c0875241358919f153227a6b2109b5` |
+| `en-au`| Container image with the `en-au` locale. | `sha256:f7401edf647c93dcb38126016d09bc982badc39751508a418d2b6063cdf74f34` |
+| `en-ca`| Container image with the `en-ca` locale. | `sha256:12de9a2ef64d5efce8ccc61a4a24e6304a90fc322430cf2a616e87fa65750753` |
+| `en-gb`| Container image with the `en-gb` locale. | `sha256:28e8b5b92d419b20b25fd4ba01094bae648f21a688f28833e541ea41be0b842e` |
+| `en-gh`| Container image with the `en-gh` locale. | `sha256:304e78d2dd59c0f1e1b57767b8f1ee2e9651e844b7295a8b25f424005bdb9151` |
+| `en-hk`| Container image with the `en-hk` locale. | `sha256:b88f9d248988affe8f7977cedec034cd9e11b2b63b24b30340fde9f4773cb56f` |
+| `en-ie`| Container image with the `en-ie` locale. | `sha256:b420de2fea91c7900b22ac64a2e1d31af4b4ff9b409de7ab94e0bde39369eff0` |
+| `en-in`| Container image with the `en-in` locale. | `sha256:608af694700e46b37eaf01a1bb8bf77593838724d4e75d81a94bdeabd90e5598` |
+| `en-ke`| Container image with the `en-ke` locale. | `sha256:a8b9ed9796f78e808ac42a81a0ba10188d762736966853e576729a2b84abadaa` |
+| `en-nz`| Container image with the `en-nz` locale. | `sha256:2966bdbca51298eccf92aee22bcc2cb89a5eb01bf338c381d8acdf4a045ad5c8` |
+| `en-ph`| Container image with the `en-ph` locale. | `sha256:4bc53c8d403a052af0d2a99f4781f0e91a03b12de06463b5555bfc138c7ba814` |
+| `en-sg`| Container image with the `en-sg` locale. | `sha256:b93668980fbd75e17888455c06d864946af24f5a947dbbb46364e67f7fa08625` |
+| `en-tz`| Container image with the `en-tz` locale. | `sha256:8e2ffb045cefd0194b37131b15efae4315bcb4e6b52ff073663fde0d46e338ae` |
+| `en-us`| Container image with the `en-us` locale. | `sha256:fcaefc42b9ed44a207f8885977a1b18cd409bd6a99f5cad0add9605417f84b96` |
+| `en-za`| Container image with the `en-za` locale. | `sha256:9070edb235f64bd36f09c8d5a3ea40cf93765313b2a1dc453f456fa1835a3b28` |
+| `es-ar`| Container image with the `es-ar` locale. | `sha256:bfa7aca8903991644de4cb835d1bc535b75ba795b6a0aeb0d3dfdf5829a4ced9` |
+| `es-bo`| Container image with the `es-bo` locale. | `sha256:7d884a249d7501f465c33fdb3ea674e7c9423c0d88669ca5dc7e08f5eea776c9` |
+| `es-cl`| Container image with the `es-cl` locale. | `sha256:f47b1e2173bc6f6c57edfacd33a2f10be591997994621e6152392d5cddf70a9a` |
+| `es-co`| Container image with the `es-co` locale. | `sha256:9c9a3bbaec81a81c7a5d44ffad17f38c89e7352b012659ea9ab0bb66dc148d98` |
+| `es-cr`| Container image with the `es-cr` locale. | `sha256:b358adf3c8fe62043f09fd97b0cbb4c02f17ac0181e29c527acda0b98ec2f5cc` |
+| `es-cu`| Container image with the `es-cu` locale. | `sha256:7b36e84d34ab2515837bebf879bf91bdba93b6f1ac786d75cfc31744222b29dd` |
+| `es-do`| Container image with the `es-do` locale. | `sha256:6746d0f2a99f1f9bc598774507b5a0791fecc6d95e0f83661adc84988161b2d4` |
+| `es-ec`| Container image with the `es-ec` locale. | `sha256:72f71d7c17d30d4c1466ce0f9cadcf75b000a126d6c1659aa4f9c6643484b7f0` |
+| `es-es`| Container image with the `es-es` locale. | `sha256:0daaad181a605a43dfe49945574886f092abde6db6d37ba728ed48bc97bfe376` |
+| `es-gt`| Container image with the `es-gt` locale. | `sha256:819b8647d90441a6f21aeec83151c89a7ecb5831a7ae4ab8da3aefdcb38ddbac` |
+| `es-hn`| Container image with the `es-hn` locale. | `sha256:3649b9673d775aee1a7bdbdfd3522a4606176bc8acf943b6d82b32b1f73aa7ab` |
+| `es-mx`| Container image with the `es-mx` locale. | `sha256:640313f48f8231372910897a697e8961c892da6cc706458110811d614d3037cd` |
+| `es-ni`| Container image with the `es-ni` locale. | `sha256:15dc245012404cd5cab24042831d803fda0cf86a37146d64ff5c83381017a5b0` |
+| `es-pa`| Container image with the `es-pa` locale. | `sha256:888a57e26ab9c0d55caa7218b409b692b7a3406c27406f4f3e1c8c48959e7e72` |
+| `es-pe`| Container image with the `es-pe` locale. | `sha256:13c585a507a248cd3fe17693be078a3060b1e9b579b31e8c99022c247f42c9bf` |
+| `es-pr`| Container image with the `es-pr` locale. | `sha256:007ee01db5c5dab0577fcec897ac2102870a62ef9b226858b98743b0e33b68eb` |
+| `es-py`| Container image with the `es-py` locale. | `sha256:a7c847b94ab7c954bdd403eb98c02fc1dca57d24249c6ecde77ecf6670a075d8` |
+| `es-sv`| Container image with the `es-sv` locale. | `sha256:f63eb3c93b46c79c86f481766c708a814234874a79b14edfe638dfaa768e5ab0` |
+| `es-us`| Container image with the `es-us` locale. | `sha256:349249313cc96bca944147918e20fae7ebc38f2aef07bd00c2a533824561085f` |
+| `es-uy`| Container image with the `es-uy` locale. | `sha256:092e529f88ccd46fdb90d19a25ccc28eb1fa0752632b5727563419e53d93c63f` |
+| `es-ve`| Container image with the `es-ve` locale. | `sha256:61093d4f068232b08a083ab11c0f5ba9f0a5193ecd40b21900dada527f2ac179` |
+| `et-ee`| Container image with the `et-ee` locale. | `sha256:5f1431812757a778e67578a6d3a2899c2a476e7e001c2fcb8e43e01985a55c97` |
+| `eu-es`| Container image with the `eu-es` locale. | `sha256:e40c97b88ddadd431f96944a10a5a5b834b7540712014af92f2925355fcd9466` |
+| `fa-ir`| Container image with the `fa-ir` locale. | `sha256:cb39294c94590d80874aadbb76d8a940e811524ceeff41107dc352567faabb3c` |
+| `fi-fi`| Container image with the `fi-fi` locale. | `sha256:cb18c8ff2a460e18b2bf4f56c7df45dbe2b198719a10e8b3246998f227d758db` |
+| `fil-ph`| Container image with the `fil-ph` locale. | `sha256:6c48d0ae54d1c0e0e72aeb7d31f0ef49f32c636c67373db6ba514ea32a9bdb9b` |
+| `fr-ca`| Container image with the `fr-ca` locale. | `sha256:c63da4dd7fc8645ad1c2a17b6fbcfc13ac7fb26f50530a8973a29f6f41b163cd` |
+| `fr-ch`| Container image with the `fr-ch` locale. | `sha256:681f326dd9be2352a458324aa3bd95ba4fd763ff47ac4eb72322ee622838f2e9` |
+| `fr-fr`| Container image with the `fr-fr` locale. | `sha256:0e2b75122ba47567998452de08ec28bba00451afc6f9b75e6ba0b52b832e6d1f` |
+| `ga-ie`| Container image with the `ga-ie` locale. | `sha256:9b09b33ea169bbbb72735112791e83513ff5b901af565bdcf0861908b8cab82e` |
+| `gl-es`| Container image with the `gl-es` locale. | `sha256:444a29e369f2f917953caca9507d5f573dba0c064d89d33df2e4ea82cbbf3680` |
+| `gu-in`| Container image with the `gu-in` locale. | `sha256:9b223c0b5f89d429ab9bb4499407c17178fb5994b933c1a1b274b1f96a05651c` |
+| `he-il`| Container image with the `he-il` locale. | `sha256:afd8e54e1624f4c6ce85f4db273164068b5dfc41b9a992163053b26c3c0a1cdd` |
+| `hi-in`| Container image with the `hi-in` locale. | `sha256:92ad7e666049174485654404024c974c94adfde6c03b677ca7935f773f15dd34` |
+| `hr-hr`| Container image with the `hr-hr` locale. | `sha256:4f838f82db349de3a0819422899e3c648bcece708265227cc7792d55da432296` |
+| `hu-hu`| Container image with the `hu-hu` locale. | `sha256:a420b1ac19356523a382e4096871a6d6362268db7609c38075a7db0c4a4a5351` |
+| `hy-am`| Container image with the `hy-am` locale. | `sha256:fe3699f123a4ac7c3b49a638457e70c2369ecbe4a5c34301070b481e650c08f6` |
+| `id-id`| Container image with the `id-id` locale. | `sha256:dee7fbd9e02b7f41dab416b663e807f19c0d6fff388fd0c632a788af00058750` |
+| `it-ch`| Container image with the `it-ch` locale. | `sha256:0c61089bcd8347ad8a5a89a4fd57460c3e3a90d57ae958e874cfd4d3ed54ff23` |
+| `it-it`| Container image with the `it-it` locale. | `sha256:2fc3589c7a6dc13cbadfbb902ee42a2dfdc40f478539ff94d6753238dede17e4` |
+| `ja-jp`| Container image with the `ja-jp` locale. | `sha256:9d065723f696bb1b4a665c9b0151bf429841bb806926ef6a30ce279ba1bc7e0a` |
+| `ka-ge`| Container image with the `ka-ge` locale. | `sha256:8cac496530a4b243118097e0a03db96cf4fc009eff6a38a6aaf5d517dfe2f653` |
+| `kk-kz`| Container image with the `kk-kz` locale. | `sha256:d52d2ae775be93351c3ae22fe7181057d5444ec0e1cc431fe52256e2d2cd2bdf` |
+| `ko-kr`| Container image with the `ko-kr` locale. | `sha256:7abeeeacd39625084dc60788915806042b8688ec1e672c1c7e5c1942b40f8e9a` |
+| `lt-lt`| Container image with the `lt-lt` locale. | `sha256:395793a3f8bfcc5c493eeb5ab281f03c13afe944d5c345e73e99bce013fc17df` |
+| `lv-lv`| Container image with the `lv-lv` locale. | `sha256:0eac895c65ef8955a5664240a3b0e15858ab7a42263d0609dbbd99aeff5864c4` |
+| `mk-mk`| Container image with the `mk-mk` locale. | `sha256:cbe7104cb013447952bcd07e303cc1bd6613cf352f1dd9f302d74803bc0f9fdb` |
+| `mn-mn`| Container image with the `mn-mn` locale. | `sha256:e20ea8bf8f2456dbb4495a636f7f851cb19466fab182757377c5d6ba0a23c112` |
+| `mr-in`| Container image with the `mr-in` locale. | `sha256:ffa3f49960d91f7a6663a4a5f57333d4f37fff75c2aeff34e8b0926fe5ab3c73` |
+| `ms-my`| Container image with the `ms-my` locale. | `sha256:6c5085c7a15a060290acf74f7380d00650943e961aa3e1c83a63700f1fe10f0a` |
+| `mt-mt`| Container image with the `mt-mt` locale. | `sha256:e3a7a2e9c7c05522c41704ed883704a00384f63e9ae771291ad26579c62297e4` |
+| `nb-no`| Container image with the `nb-no` locale. | `sha256:f3935943a6a59052dfea1009998b42971ffdefa77a3136fe956f6f8fa0880183` |
+| `ne-np`| Container image with the `ne-np` locale. | `sha256:1aa0af196507045af17e7183aacfe5b2f582ed671b043c1c52e0262a40e42f40` |
+| `nl-nl`| Container image with the `nl-nl` locale. | `sha256:d21fbea72b33ea190ed773ec9910e6d2f9a36cff99315819a48aa5159290f523` |
+| `pl-pl`| Container image with the `pl-pl` locale. | `sha256:089f5f47dd123b4976eb8967981e40bc41c0f0f4ae9d9f93d4f52e0c7663c827` |
+| `ps-af`| Container image with the `ps-af` locale. | `sha256:8f27ed7dfdd56fb442bc1183e42771ec717162da068e822d811f030cd0ce32e7` |
+| `pt-br`| Container image with the `pt-br` locale. | `sha256:fa0da0948ed4af68f0b1f81c3af4377b20077c4d7d53ccc27c0e78355be51534` |
+| `pt-pt`| Container image with the `pt-pt` locale. | `sha256:0571489702281af2b738d2dd7b47a06f785e4a99e225b9f967a9ac5aca557873` |
+| `ro-ro`| Container image with the `ro-ro` locale. | `sha256:7b19e108fa781c6cf3c62f19339968a044f08a8991e2f6d89fdb2dbdf81657d2` |
+| `ru-ru`| Container image with the `ru-ru` locale. | `sha256:1116cc5871654f17c97e92647aeabae55635b72c26970257ec8078465c5ac69a` |
+| `sk-sk`| Container image with the `sk-sk` locale. | `sha256:8168b773841f5719347769a02054e7327c1fd09a695cab307b38767ffd6ddb4e` |
+| `sl-si`| Container image with the `sl-si` locale. | `sha256:6d8983f1d381ba4ad7f65f0de16bcbb0f9ae4154f3a1251fd4a01568ad81e36b` |
+| `so-so`| Container image with the `so-so` locale. | `sha256:5b00364338e8c885794374b4d3071097cd434a952722d47b6023a23106149fdb` |
+| `sq-al`| Container image with the `sq-al` locale. | `sha256:4141cae4629f3601abb7011bd02f3f89d84f4a8c96082ea5f4015243d0e4cb0b` |
+| `sv-se`| Container image with the `sv-se` locale. | `sha256:49592d8afce75ce1860af9faa039d38f480eef522f683379acc331970dc3adf9` |
+| `ta-in`| Container image with the `ta-in` locale. | `sha256:7313a9badb5eceeb623bfb92f70a3e361078e83234f2588f5c6aac5e465d1f40` |
+| `te-in`| Container image with the `te-in` locale. | `sha256:f8cddc20e960bf040cd3796b3c164e0bcf69eabe6e323fd83d37e006c5063c21` |
+| `th-th`| Container image with the `th-th` locale. | `sha256:432230daad8fa04da685a35e6897d56bfe133a4d1e331c75484254999c633691` |
+| `tr-tr`| Container image with the `tr-tr` locale. | `sha256:bdaf53c37db8797e198653b4d1a9c8a0445e8efd380ce9ba84b5dd000b8c231a` |
+| `uk-ua`| Container image with the `uk-ua` locale. | `sha256:ba960b9cf09dc5e299406793da2b8104e8e75738b58317c18bfff9b36d7d56ad` |
+| `vi-vn`| Container image with the `vi-vn` locale. | `sha256:29798b257c8fdfe42f9bc92ca9aebb3c73ca678dc273b2622aa9fd15030df90c` |
+| `wuu-cn`| Container image with the `wuu-cn` locale. | `sha256:f763c83ea4f48f5efdc1184eb4f5c16a1a985363f60a916c5b11d218ef2bb2a2` |
+| `yue-cn`| Container image with the `yue-cn` locale. | `sha256:8a725feff32cc1220dbba1d13f056fb90b7bba880783fc0cbd9a9fe172ca578a` |
+| `zh-cn`| Container image with the `zh-cn` locale. | `sha256:a4273ebc9170e784f27882b2b32e6980eefe20d7b73dd468855366908a1f05ed` |
+| `zh-cn-sichuan`| Container image with the `zh-cn-sichuan` locale. | `sha256:2bac26451c9b5ebc82c7e58a398802e551640b53b8522d807fbe023b99173926` |
+| `zh-hk`| Container image with the `zh-hk` locale. | `sha256:e421d001e151f6803d5d62a25709af130fb14148d336923b8bdc3665246139ef` |
+| `zh-tw`| Container image with the `zh-tw` locale. | `sha256:557097c657b8894969d0f0d1e90806d4423de61921f8266b70c973faa1f9e847` |
+
+# [Previous version](#tab/previous)
+++ Release note for `3.7.0-amd64-<locale>`: **Features**
This container has the following locales available.
| `zh-hk`| Container image with the `zh-HK` locale. | `sha256:f3f8b50f982c19f31eea553ed92ebfb6c7e333a4d2fa55c81a1c8b680afd6101` |
| `zh-tw`| Container image with the `zh-TW` locale. | `sha256:20245c6b1b4da4a393e6d0aaa3c1a013f03de69eec351d9b7e5fe9d542c1f098` |
-# [Previous version](#tab/previous)
Release note for `3.6.0-amd64-<locale>`:
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
+Release notes for `v2.7.0`:
+
+**Features**
+* Security upgrade.
+
+| Image Tags | Notes |
+||:|
+| `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
+| `2.7.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locales and voices, listed below. For example `2.7.0-amd64-en-us-arianeural`. |
++
+| v2.6.0 Locales and voices | Notes |
+|-|:|
+| `am-et-amehaneural`| Container image with the `am-ET` locale and `am-ET-amehaneural` voice.|
+| `am-et-mekdesneural`| Container image with the `am-ET` locale and `am-ET-mekdesneural` voice.|
+| `ar-bh-lailaneural`| Container image with the `ar-BH` locale and `ar-BH-lailaneural` voice.|
+| `ar-eg-salmaneural`| Container image with the `ar-EG` locale and `ar-EG-salmaneural` voice.|
+| `ar-eg-shakirneural`| Container image with the `ar-EG` locale and `ar-EG-shakirneural` voice.|
+| `ar-sa-hamedneural`| Container image with the `ar-SA` locale and `ar-SA-hamedneural` voice.|
+| `ar-sa-zariyahneural`| Container image with the `ar-SA` locale and `ar-SA-zariyahneural` voice.|
+| `az-az-babekneural`| Container image with the `az-AZ` locale and `az-AZ-babekneural` voice.|
+| `az-az-banuneural`| Container image with the `az-AZ` locale and `az-AZ-banuneural` voice.|
+| `cs-cz-antoninneural`| Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice.|
+| `cs-cz-vlastaneural`| Container image with the `cs-CZ` locale and `cs-CZ-vlastaneural` voice.|
+| `de-ch-janneural`| Container image with the `de-CH` locale and `de-CH-janneural` voice.|
+| `de-ch-lenineural`| Container image with the `de-CH` locale and `de-CH-lenineural` voice.|
+| `de-de-conradneural`| Container image with the `de-DE` locale and `de-DE-conradneural` voice.|
+| `de-de-katjaneural`| Container image with the `de-DE` locale and `de-DE-katjaneural` voice.|
+| `en-au-natashaneural`| Container image with the `en-AU` locale and `en-AU-natashaneural` voice.|
+| `en-au-williamneural`| Container image with the `en-AU` locale and `en-AU-williamneural` voice.|
+| `en-ca-claraneural`| Container image with the `en-CA` locale and `en-CA-claraneural` voice.|
+| `en-ca-liamneural`| Container image with the `en-CA` locale and `en-CA-liamneural` voice.|
+| `en-gb-libbyneural`| Container image with the `en-GB` locale and `en-GB-libbyneural` voice.|
+| `en-gb-ryanneural`| Container image with the `en-GB` locale and `en-GB-ryanneural` voice.|
+| `en-gb-sonianeural`| Container image with the `en-GB` locale and `en-GB-sonianeural` voice.|
+| `en-us-arianeural`| Container image with the `en-US` locale and `en-US-arianeural` voice.|
+| `en-us-guyneural`| Container image with the `en-US` locale and `en-US-guyneural` voice.|
+| `en-us-jennyneural`| Container image with the `en-US` locale and `en-US-jennyneural` voice.|
+| `es-es-alvaroneural`| Container image with the `es-ES` locale and `es-ES-alvaroneural` voice.|
+| `es-es-elviraneural`| Container image with the `es-ES` locale and `es-ES-elviraneural` voice.|
+| `es-mx-dalianeural`| Container image with the `es-MX` locale and `es-MX-dalianeural` voice.|
+| `es-mx-jorgeneural`| Container image with the `es-MX` locale and `es-MX-jorgeneural` voice.|
+| `fa-ir-dilaraneural`| Container image with the `fa-IR` locale and `fa-IR-dilaraneural` voice.|
+| `fa-ir-faridneural`| Container image with the `fa-IR` locale and `fa-IR-faridneural` voice.|
+| `fil-ph-angeloneural`| Container image with the `fil-PH` locale and `fil-PH-angeloneural` voice.|
+| `fil-ph-blessicaneural`| Container image with the `fil-PH` locale and `fil-PH-blessicaneural` voice.|
+| `fr-ca-antoineneural`| Container image with the `fr-CA` locale and `fr-CA-antoineneural` voice.|
+| `fr-ca-jeanneural`| Container image with the `fr-CA` locale and `fr-CA-jeanneural` voice.|
+| `fr-ca-sylvieneural`| Container image with the `fr-CA` locale and `fr-CA-sylvieneural` voice.|
+| `fr-fr-deniseneural`| Container image with the `fr-FR` locale and `fr-FR-deniseneural` voice.|
+| `fr-fr-henrineural`| Container image with the `fr-FR` locale and `fr-FR-henrineural` voice.|
+| `he-il-avrineural`| Container image with the `he-IL` locale and `he-IL-avrineural` voice.|
+| `he-il-hilaneural`| Container image with the `he-IL` locale and `he-IL-hilaneural` voice.|
+| `hi-in-madhurneural`| Container image with the `hi-IN` locale and `hi-IN-madhurneural` voice.|
+| `hi-in-swaraneural`| Container image with the `hi-IN` locale and `hi-IN-swaraneural` voice.|
+| `id-id-ardineural`| Container image with the `id-ID` locale and `id-ID-ardineural` voice.|
+| `id-id-gadisneural`| Container image with the `id-ID` locale and `id-ID-gadisneural` voice.|
+| `it-it-diegoneural`| Container image with the `it-IT` locale and `it-IT-diegoneural` voice.|
+| `it-it-elsaneural`| Container image with the `it-IT` locale and `it-IT-elsaneural` voice.|
+| `it-it-isabellaneural`| Container image with the `it-IT` locale and `it-IT-isabellaneural` voice.|
+| `ja-jp-keitaneural`| Container image with the `ja-JP` locale and `ja-JP-keitaneural` voice.|
+| `ja-jp-nanamineural`| Container image with the `ja-JP` locale and `ja-JP-nanamineural` voice.|
+| `ka-ge-ekaneural`| Container image with the `ka-GE` locale and `ka-GE-ekaneural` voice.|
+| `ka-ge-giorgineural`| Container image with the `ka-GE` locale and `ka-GE-giorgineural` voice.|
+| `ko-kr-injoonneural`| Container image with the `ko-KR` locale and `ko-KR-injoonneural` voice.|
+| `ko-kr-sunhineural`| Container image with the `ko-KR` locale and `ko-KR-sunhineural` voice.|
+| `pt-br-antonioneural`| Container image with the `pt-BR` locale and `pt-BR-antonioneural` voice.|
+| `pt-br-franciscaneural`| Container image with the `pt-BR` locale and `pt-BR-franciscaneural` voice.|
+| `so-so-muuseneural`| Container image with the `so-SO` locale and `so-SO-muuseneural` voice.|
+| `so-so-ubaxneural`| Container image with the `so-SO` locale and `so-SO-ubaxneural` voice.|
+| `sv-se-hillevineural`| Container image with the `sv-SE` locale and `sv-SE-hillevineural` voice.|
+| `sv-se-mattiasneural`| Container image with the `sv-SE` locale and `sv-SE-mattiasneural` voice.|
+| `sv-se-sofieneural`| Container image with the `sv-SE` locale and `sv-SE-sofieneural` voice.|
+| `th-th-acharaneural`| Container image with the `th-TH` locale and `th-TH-acharaneural` voice.|
+| `th-th-niwatneural`| Container image with the `th-TH` locale and `th-TH-niwatneural` voice.|
+| `th-th-premwadeeneural`| Container image with the `th-TH` locale and `th-TH-premwadeeneural` voice.|
+| `tr-tr-ahmetneural`| Container image with the `tr-TR` locale and `tr-TR-ahmetneural` voice.|
+| `tr-tr-emelneural`| Container image with the `tr-TR` locale and `tr-TR-emelneural` voice.|
+| `zh-cn-xiaochenneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaochenneural` voice.|
+| `zh-cn-xiaohanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaohanneural` voice.|
+| `zh-cn-xiaomoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaomoneural` voice.|
+| `zh-cn-xiaoqiuneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoqiuneural` voice.|
+| `zh-cn-xiaoruineural`| Container image with the `zh-CN` locale and `zh-CN-xiaoruineural` voice.|
+| `zh-cn-xiaoshuangneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoshuangneural` voice.|
+| `zh-cn-xiaoxiaoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxiaoneural` voice.|
+| `zh-cn-xiaoxuanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxuanneural` voice.|
+| `zh-cn-xiaoyanneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoyanneural` voice.|
+| `zh-cn-xiaoyouneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoyouneural` voice.|
+| `zh-cn-yunxineural`| Container image with the `zh-CN` locale and `zh-CN-yunxineural` voice.|
+| `zh-cn-yunyangneural`| Container image with the `zh-CN` locale and `zh-CN-yunyangneural` voice.|
+| `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.|
++
+# [Previous version](#tab/previous)
+ Release notes for `v2.6.0`:
Release notes for `v2.6.0`:
| `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.|
-# [Previous version](#tab/previous)
- Release notes for `v2.5.0`: **Features**
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Use the table below to find which model versions are supported by each feature:
| Feature | Supported Training config versions | Training config expiration | Deployment expiration |
|--|--|--|--|
-| Custom text classification | `2022-05-01` | `04/10/2023` | `04/28/2024` |
-| Conversational language understanding | `2022-05-01` | `10/28/2022` | `10/28/2023` |
-| Conversational language understanding | `2022-09-01` | `04/10/2023` | `04/28/2024` |
-| Custom named entity recognition | `2022-05-01` | `04/10/2023` | `04/28/2024` |
-| Orchestration workflow | `2022-05-01` | `10/28/2022` | `10/28/2023` |
+| Custom text classification | `2022-05-01` | `2023-05-01` | `2024-04-30` |
+| Conversational language understanding | `2022-05-01` | `2022-10-28` | `2023-10-28` |
+| Conversational language understanding | `2022-09-01` | `2023-02-28` | `2024-02-28` |
+| Custom named entity recognition | `2022-05-01` | `2023-05-01` | `2024-04-30` |
+| Orchestration workflow | `2022-05-01` | `2023-05-01` | `2024-04-30` |
## API versions
cognitive-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/quickstart-personalizer-sdk.md
To clean up your Cognitive Services subscription, you can delete the resource or
* [Portal](../cognitive-services-apis-create-account.md#clean-up-resources)
* [Azure CLI](../cognitive-services-apis-create-account-cli.md#clean-up-resources)
-## Download the quickstart trained model.
+## Download the quickstart trained model
If you'd like to download a Personalizer model that has been trained on 5,000 events from the QuickStart example, you can visit the [Azure-Samples repository](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/quickstarts) and download the model zip file, then upload this to your Personalizer instance under the "Setup" -> "Model Import/Export" section. ## Next steps
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
# Pre-Call diagnostic The Pre-Call API enables developers to programmatically validate a client's readiness to join an Azure Communication Services Call. The Pre-Call APIs can be accessed through the Calling SDK. They provide multiple diagnostics including device, connection, and call quality. Pre-Call APIs are available only for Web (JavaScript). We will be enabling these capabilities across platforms in the future; please provide us feedback on what platforms you would like to see Pre-Call APIs on.
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Node.js](https://nodejs.org/); active Long Term Support (LTS) versions are recommended.
+- An active Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](../../quickstarts/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. (You can get the connection string from your resource in the Azure portal.)
+
+ ```azurecli-interactive
+ az communication identity token issue --scope voip --connection-string "yourConnectionString"
+ ```
+
+ For details, see [Use Azure CLI to Create and Manage Access Tokens](../../quickstarts/access-tokens.md?pivots=platform-azcli).
+ ## Accessing Pre-Call APIs >[!IMPORTANT]
->Pre-Call diagnostics are available starting on the version [1.5.2-alpha.20220415.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.5.2-alpha.20220415.1) of the Calling SDK. Make sure to use that version when trying the instructions below.
+>Pre-Call diagnostics are available starting with version [1.9.1-beta.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.9.1-beta.1) of the Calling SDK. Make sure to use that version when trying the instructions below.
To access the Pre-Call API, you need to initialize a `callClient` and provision an Azure Communication Services access token. From there, you can access the `PreCallDiagnostics` feature and the `startTest` method.
To Access the Pre-Call API, you will need to initialize a `callClient` and provi
import { CallClient, Features } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from '@azure/communication-common';
-const tokenCredential = new AzureCommunicationTokenCredential();
+const callClient = new CallClient();
+const tokenCredential = new AzureCommunicationTokenCredential("INSERT ACCESS TOKEN");
const preCallDiagnosticsResult = await callClient.feature(Features.PreCallDiagnostics).startTest(tokenCredential); ```
Performs a quick call to check in-call metrics for audio and video and provides
InCall diagnostics leverages [media quality stats](./media-quality-sdk.md) to calculate quality scores and diagnose issues. During the pre-call diagnostic, the full set of media quality stats is available for consumption. These include raw values across video and audio metrics that can be used programmatically. The InCall diagnostic provides a convenience layer on top of media quality stats to consume the results without the need to process all the raw data. See the media stats section below for instructions on how to access them. + ```javascript const inCallDiagnostics = await preCallDiagnosticsResult.inCallDiagnostics;
InCall diagnostics leverages [media quality stats](./media-quality-sdk.md) to ca
```
-At this step, there are multiple failure points to watch out for:
+At this step, there are multiple failure points to watch out for. The values provided by the API are based on the threshold values required by the service. Those raw thresholds can be found in our [media quality stats documentation](./media-quality-sdk.md#best-practices).
- If connection fails, the user should be prompted to recheck their network connectivity. Connection failures can also be attributed to network conditions like DNS, proxies, or firewalls. For more information on recommended network settings, check out our [documentation](network-requirements.md).
- If bandwidth is `Bad`, the user should be prompted to try a different network or verify the bandwidth availability on their current one. Ensure no other high-bandwidth activities are taking place.
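
As a rough illustration of acting on these checks, a minimal sketch follows. The property names `connected` and `bandWidth` are assumptions based on the checks described above; consult the Calling SDK reference for the exact shape of the result.

```javascript
// Minimal sketch (assumed property names) of reacting to the in-call diagnostics.
const inCallDiagnostics = await preCallDiagnosticsResult.inCallDiagnostics;

if (!inCallDiagnostics.connected) {
  // Connection failures can stem from DNS, proxy, or firewall issues.
  console.warn("Call connection failed. Ask the user to recheck network connectivity.");
} else if (inCallDiagnostics.bandWidth === "Bad") {
  // Suggest a different network or freeing up bandwidth on the current one.
  console.warn("Bandwidth is rated Bad. Suggest switching networks or stopping other high-bandwidth activity.");
} else {
  console.log("In-call diagnostics look healthy.");
}
```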
At this step, there are multiple failure points to watch out for:
### Media stats For granular stats on quality metrics like jitter, packet loss, and RTT, `callMediaStatistics` is provided as part of the `preCallDiagnosticsResult` feature. See the [full list and description of the available metrics](./media-quality-sdk.md) in the linked article. You can subscribe to the call media stats to get the full collection of them. These are the raw metrics that are used to calculate InCall diagnostic results and can be consumed granularly for further analysis.
+```javascript
+
+const mediaStatsCollector = callMediaStatistics.startCollector();
+
+mediaStatsCollector.on('mediaStatsEmitted', (mediaStats: SDK.MediaStats) => {
+ // process the stats for the call.
+ console.log(mediaStats);
+});
+
+```
+ ## Pricing
-When the Pre-Call diagnostic test runs, behind the scenes it uses calling minutes to run the diagnostic. The test lasts for roughly 1 minute, using up 1 minute of calling which is charged at the standard rate of $0.004 per participant per minute. For the case of Pre-Call diagnostic, the charge will be for 1 participant x 1 minutes = $0.004.
+When the Pre-Call diagnostic test runs, behind the scenes it uses calling minutes to run the diagnostic. The test lasts for roughly 30 seconds, using up 30 seconds of calling time, which is charged at the standard rate of $0.004 per participant per minute. For the Pre-Call diagnostic, the charge is 1 participant x 30 seconds = $0.002.
## Next steps
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-get-started.md
Skip this step and use the information for the portal in the next step.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="package_dependencies":::
-2. Define a new instance of the ``MongoClient,`` class using the constructor, and [``process.env.``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to use the connection string.
+2. Define a new instance of the ``MongoClient`` class using the constructor, and [``process.env.``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to use the connection string.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="client_credentials":::
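
   As a rough sketch of the step above, the following example creates the client from an environment variable; the variable name `COSMOS_CONNECTION_STRING` is illustrative and not taken from the referenced sample.

   ```javascript
   // Minimal sketch: create a MongoClient from a connection string in an environment variable.
   // COSMOS_CONNECTION_STRING is an assumed variable name used here for illustration.
   const { MongoClient } = require('mongodb');

   const client = new MongoClient(process.env.COSMOS_CONNECTION_STRING);

   async function main() {
     await client.connect();

     // List the databases in the account to confirm the connection works.
     const result = await client.db().admin().listDatabases();
     console.log(result.databases.map(db => db.name));

     await client.close();
   }

   main().catch(console.error);
   ```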
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-nodejs.md
The output of the app should be similar to this example:
## Clean up resources
-When you no longer need the Azure Cosmos DB for NoSQL account, you can delete the corresponding resource group.
+When you no longer need the Azure Cosmos DB for MongoDB account, you can delete the corresponding resource group.
### [Azure CLI](#tab/azure-cli)
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
For more information about anomaly detection and how to configure alerts, see [I
**Anomaly detection is now available by default in Azure global.**
+<a name="homev2"></a>
+
+## Recent and pinned views in the cost analysis preview
+
+Cost analysis is your tool for interactive analytics and insights. You've seen the addition of new views and capabilities, like anomaly detection, in the cost analysis preview, but classic cost analysis is still the best tool for quick data exploration with simple filtering and grouping. While these capabilities are coming to the preview, we're introducing a new experience that allows you to select which view you want to start with, whether that be a preview view, a built-in view, or a custom view you created.
+
+The first time you open the cost analysis preview, you'll see a list of all views. When you return, you'll see a list of the recently used views to help you get back to where you left off quicker than ever. You can pin any view or even rename or subscribe to alerts for your saved views.
+
+The recent and pinned views can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
++ <a name="aksnestedtable"></a> ## Grouping SQL databases and elastic pools
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
Previously updated : 08/24/2022 Last updated : 10/26/2022 # Copy and transform data in Snowflake using Azure Data Factory or Azure Synapse Analytics
The following sections provide details about properties that define entities spe
## Linked service properties
-The following properties are supported for a Snowflake-linked service.
+This Snowflake connector supports the following authentication types. See the corresponding sections for details.
+
+
+
+- [Basic authentication](#basic-authentication)
+- [OAuth authentication](#oauth-authentication)
+
+### Basic authentication
+
+The following properties are supported for a Snowflake linked service when using **Basic** authentication.
| Property | Description | Required |
| : | :-- | :- |
| type | The type property must be set to **Snowflake**. | Yes |
-| connectionString | Specifies the information needed to connect to the Snowflake instance. You can choose to put password or entire connection string in Azure Key Vault. Refer to the examples below the table, as well as the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article, for more details.<br><br>Some typical settings:<br>- **Account name:** The [full account name](https://docs.snowflake.net/manuals/user-guide/connecting.html#your-snowflake-account-name) of your Snowflake account (including additional segments that identify the region and cloud platform), e.g. xy12345.east-us-2.azure.<br/>- **User name:** The login name of the user for the connection.<br>- **Password:** The password for the user.<br>- **Database:** The default database to use once connected. It should be an existing database for which the specified role has privileges.<br>- **Warehouse:** The virtual warehouse to use once connected. It should be an existing warehouse for which the specified role has privileges.<br>- **Role:** The default access control role to use in the Snowflake session. The specified role should be an existing role that has already been assigned to the specified user. The default role is PUBLIC. | Yes |
+| connectionString | Specifies the information needed to connect to the Snowflake instance. You can choose to put password or entire connection string in Azure Key Vault. Refer to the examples below the table, and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article, for more details.<br><br>Some typical settings:<br>- **Account name:** The [full account name](https://docs.snowflake.net/manuals/user-guide/connecting.html#your-snowflake-account-name) of your Snowflake account (including additional segments that identify the region and cloud platform), e.g. xy12345.east-us-2.azure.<br/>- **User name:** The login name of the user for the connection.<br>- **Password:** The password for the user.<br>- **Database:** The default database to use once connected. It should be an existing database for which the specified role has privileges.<br>- **Warehouse:** The virtual warehouse to use once connected. It should be an existing warehouse for which the specified role has privileges.<br>- **Role:** The default access control role to use in the Snowflake session. The specified role should be an existing role that has already been assigned to the specified user. The default role is PUBLIC. | Yes |
+| authenticationType | Set this property to **Basic**. | Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) that is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure integration runtime. | No | **Example:**
The following properties are supported for a Snowflake-linked service.
"properties": { "type": "Snowflake", "typeProperties": {
+ "authenticationType": "Basic",
"connectionString": "jdbc:snowflake://<accountname>.snowflakecomputing.com/?user=<username>&password=<password>&db=<database>&warehouse=<warehouse>&role=<myRole>" }, "connectVia": {
The following properties are supported for a Snowflake-linked service.
"properties": { "type": "Snowflake", "typeProperties": {
+ "authenticationType": "Basic",
"connectionString": "jdbc:snowflake://<accountname>.snowflakecomputing.com/?user=<username>&db=<database>&warehouse=<warehouse>&role=<myRole>", "password": { "type": "AzureKeyVaultSecret",
The following properties are supported for a Snowflake-linked service.
} ```
+### OAuth authentication
+
+The following properties are supported for a Snowflake linked service when using **OAuth** authentication.
+
+| Property | Description | Required |
+| : | :-- | :- |
+| type | The type property must be set to **Snowflake**. | Yes |
+| connectionString | Specifies the information needed to connect to the Snowflake instance. You can choose to put password or entire connection string in Azure Key Vault. Refer to the examples below the table, as well as the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article, for more details.<br><br>Some typical settings:<br>- **Account name:** The [full account name](https://docs.snowflake.net/manuals/user-guide/connecting.html#your-snowflake-account-name) of your Snowflake account (including additional segments that identify the region and cloud platform), e.g. xy12345.east-us-2.azure.<br/>- **User name:** The login name of the user for the connection.<br>- **Database:** The default database to use once connected. It should be an existing database for which the specified role has privileges.<br>- **Warehouse:** The virtual warehouse to use once connected. It should be an existing warehouse for which the specified role has privileges.<br>- **Role:** The default access control role to use in the Snowflake session. The specified role should be an existing role that has already been assigned to the specified user. The default role is PUBLIC. | Yes |
+| authenticationType | Set this property to **Oauth**.<br>It supports External OAuth for Microsoft Azure AD. To learn more about this, see this [article](https://docs.snowflake.com/en/user-guide/oauth-ext-overview.html).| Yes |
+| oauthTokenEndpoint | The Azure AD OAuth token endpoint. Sample: `https://login.microsoftonline.com/<tenant ID>/discovery/v2.0/keys` | Yes |
+| clientId | The application client ID supplied by Azure AD. | Yes |
+| clientSecret | The client secret that corresponds to the client ID. | Yes |
+| oauthUserName | The name of the Azure user. | Yes |
+| oauthPassword | The password for the Azure user. | Yes |
+| scope | The OAuth scope. Sample: `api://<application (client) ID>/session:scope:MYROLE` | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) that is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure integration runtime. | No |
+
+**Example:**
+
+```json
+{
+ΓÇ» ΓÇ» "name": "SnowflakeLinkedService",
+ΓÇ» ΓÇ» "type": "Microsoft.DataFactory/factories/linkedservices",
+ΓÇ» ΓÇ» "properties": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "annotations": [],
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "Snowflake",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "typeProperties": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "connectionString": "jdbc:snowflake://<accountname>.snowflakecomputing.com/?user=<username>&db=<database>&warehouse=<warehouse>&role=<myRole>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "authenticationType": "Oauth",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "oauthTokenEndpoint": "https://login.microsoftonline.com/<tenant ID>/discovery/v2.0/keys",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "clientId": "<client Id>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "clientSecret": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "AzureKeyVaultSecret",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "store": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "referenceName": "<Azure Key Vault linked service name>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "LinkedServiceReference"
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "secretName": "<secret name>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "oauthUserName": "<user name>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "oauthPassword": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "AzureKeyVaultSecret",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "store": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "referenceName": "<Azure Key Vault linked service name>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "LinkedServiceReference"
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "secretName": "<secret name>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "scope": "api://<application (client) ID>/session:scope:MYROLE",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ΓÇ» ΓÇ» }
+}
+
+```
+
+>[!NOTE]
+>Currently, OAuth authentication is not supported in mapping data flow and script activity.
+ ## Dataset properties For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article.
data-factory Connector Troubleshoot Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-snowflake.md
Previously updated : 06/21/2022 Last updated : 11/14/2022
This article provides suggestions to troubleshoot common problems with the Snowflake connector in Azure Data Factory and Azure Synapse.
-## Error message: IP % is not allowed to access Snowflake. Contact your local security administrator.
+## Error code: NotAllowToAccessSnowflake
- **Symptoms**: The copy activity fails with the following error:
- `Job failed due to reason: net.snowflake.client.jdbc.SnowflakeSQLException: IP % is not allowed to access Snowflake.  Contact your local security administrator. `
+ `IP % is not allowed to access Snowflake. Contact your local security administrator. `
- **Cause**: It's a connectivity issue and usually caused by firewall IP issues when integration runtimes access your Snowflake.
This article provides suggestions to troubleshoot common problems with the Snowf
- If you use an Azure Integration Runtime and the access is restricted to IPs approved in the firewall rules, you can add [Azure Integration Runtime IPs](azure-integration-runtime-ip-addresses.md) to the allowed list in Snowflake. - If you use a managed private endpoint and a network policy is in place on your Snowflake account, ensure Managed VNet CIDR is allowed. For more steps, refer to [How To: Set up a managed private endpoint from Azure Data Factory or Synapse to Snowflake](https://community.snowflake.com/s/article/How-to-set-up-a-managed-private-endpoint-from-Azure-Data-Factory-or-Synapse-to-Snowflake).
-## Error message: Failed to access remote file: access denied.
+## Error code: SnowflakeFailToAccess
-- **Symptoms**: The copy activity fails with the following error: -
- `ERROR [42501] Failed to access remote file: access denied. Please check your credentials,Source =SnowflakeODBC_sb64.dll..`
+- **Symptoms**:<br>
+The copy activity fails with the following error when using Snowflake as source:<br>
+ `Failed to access remote file: access denied. Please check your credentials`<br>
+The copy activity fails with the following error when using Snowflake as sink:<br>
+ `Failure using stage area. Cause: [This request is not authorized to perform this operation. (Status Code: 403; Error Code: AuthorizationFailure)`<br>
- **Cause**: The error is raised by the Snowflake COPY command and is caused by missing access permissions on the source/sink when executing Snowflake COPY commands. - **Recommendation**: Check your source/sink to make sure that you have granted proper access permission to Snowflake.
- - Direct copy: Make sure to grant access permission to Snowflake in the other source/sink.
- - Staged copy: The staging Azure Blob storage linked service must use shared access signature authentication. When you generate the shared access signature, make sure to set the allowed permissions and IP addresses to Snowflake in the staging Azure Blob storage. To learn more about this, see this [article](https://docs.snowflake.com/en/user-guide/data-load-azure-config.html#option-2-generating-a-sas-token).
-
+ - Direct copy: Make sure to grant access permission to Snowflake in the other source/sink. Currently, only Azure Blob Storage that uses shared access signature authentication is supported as source or sink. When you generate the shared access signature, make sure to set the allowed permissions and IP addresses to Snowflake in the Azure Blob Storage. For more information, see this [article](https://docs.snowflake.com/en/user-guide/data-load-azure-config.html#option-2-generating-a-sas-token).
+ - Staged copy: The staging Azure Blob Storage linked service must use shared access signature authentication. When you generate the shared access signature, make sure to set the allowed permissions and IP addresses to Snowflake in the staging Azure Blob Storage. For more information, see this [article](https://docs.snowflake.com/en/user-guide/data-load-azure-config.html#option-2-generating-a-sas-token).
+
## Next steps For more troubleshooting help, try these resources:
databox-online Azure Stack Edge Pro 2 Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-install.md
Previously updated : 05/17/2022 Last updated : 11/08/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro 2 in datacenter so I can use it to transfer data to Azure.
This device is shipped in a single box. Complete the following steps to unpack y
- One power cord. - One packaged bezel. - A pair of packaged Wi-Fi antennas in the accessory box.
+ > [!NOTE]
+ > The accessory box includes Wi-Fi antennas, but Wi-Fi capability is not supported for the Azure Stack Edge device. The antennas should not be used.
+ - One packaged mounting accessory which could be: - A 4-post rack slide rail, or - A 2-post rack slide, or
This device is shipped in two boxes. Complete the following steps to unpack your
- One power cord. - One packaged bezel. - A pair of packaged Wi-Fi antennas in the accessory box.
+ > [!NOTE]
+ > The accessory box includes Wi-Fi antennas, but Wi-Fi capability is not supported for the Azure Stack Edge device. The antennas should not be used.
+ - One packaged mounting accessory which could be: - A 4-post rack slide rail, or - A 2-post rack slide, or
Before you start cabling your device, you need the following things:
- Access to one power distribution unit. - At least one 100-GbE network switch to connect a 10/1-GbE or a 100-GbE network interface to the internet for data. At least one data network interface from among Port 2, Port 3, and Port 4 needs to be connected to the Internet (with connectivity to Azure). - A pair of Wi-Fi antennas (included in the accessory box).
+ > [!NOTE]
+ > The accessory box includes Wi-Fi antennas, but Wi-Fi capability is not supported for the Azure Stack Edge device. The antennas should not be used.
::: zone-end
Before you start cabling your device, you need the following things:
For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware+Compatible+Products). - At least one 100-GbE network switch to connect a 1-GbE or a 100-GbE network interface to the internet for data for each device. - A pair of Wi-Fi antennas (included in the accessory box).
-
+ > [!NOTE]
+ > The accessory box includes Wi-Fi antennas, but Wi-Fi capability is not supported for the Azure Stack Edge device. The antennas should not be used.
+ ::: zone-end > [!NOTE]
Follow these steps to cable your device for power:
::: zone-end
-### Wi-Fi antenna installation
-
-Follow these steps to install Wi-Fi antennas on your device:
-
-1. Locate the two Wi-Fi SMA RF threaded connectors on the back plane of the device. These gold-colored connectors are located on the faceplate of PCIe card slot, right below Port 3 and Port 4.
-
-1. Use a clockwise motion to thread the antennas onto the SMA connectors. Secure them using only your fingers. Do not use a tool or wrench.
-
- >[!NOTE]
- > Tighten the connectors sufficiently so that the antenna's rotary joints can turn without causing the threaded connectors to become loose.
-
-1. To position the antennas as desired, articulate the hinge and turn the rotary joint.
-- ### Network cabling ::: zone pivot="single-node"
dedicated-hsm Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/monitoring.md
na Previously updated : 11/18/2020 Last updated : 11/14/2022
dedicated-hsm Quickstart Create Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/quickstart-create-hsm-powershell.md
description: Create an Azure Dedicated HSM with Azure PowerShell
Previously updated : 11/13/2020 Last updated : 11/14/2022 ms.devlang: azurepowershell
dedicated-hsm Quickstart Hsm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/quickstart-hsm-azure-cli.md
ms.devlang: azurecli Previously updated : 01/06/2021 Last updated : 11/14/2022
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium |
| **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relation to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
+| **Anomalous secret access (Preview)**<br>(K8S_AnomalousSecretAccess) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a secret access request which is anomalous based on previous secret access activity. This activity is considered an anomaly when taking into account how the different features seen in the secret access operation are in relation to one another. The features monitored include the user name used, the name of the secret, the name of the namespace, the user agent used in the operation, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | CredentialAccess | Medium |
| **Attempt to stop apt-daily-upgrade.timer service detected**<br>(K8S.NODE_TimerServiceDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational | | **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium | | **Behavior similar to Fairware ransomware detected**<br>(K8S.NODE_FairwareMalware) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
defender-for-cloud Defender For Containers Vulnerability Assessment Elastic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-elastic.md
The triggers for an image scan are:
- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image within 2 hours. -- **Continuous scan** - Defender for Containers reassesses the images based on the latest database of vulnerabilities of Trivy. This reassessment is performed weekly.
+- **Continuous scan** - Defender for Containers reassesses the images based on the latest Trivy vulnerability database. This reassessment is performed weekly for as long as the image is still present in the registry.
## Prerequisites
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
During setup, Defender for Cloud checks to ensure that the machine can communica
- `https://qagpublic.qg3.apps.qualys.com` - Qualys' US data center - `https://qagpublic.qg2.apps.qualys.eu` - Qualys' European data center
-The extension doesn't currently accept any proxy configuration details.
+The extension doesn't currently accept any proxy configuration details. However, you can configure the Qualys agent's proxy settings locally in the Virtual Machine. Please follow the guidance in the Qualys documentation:
+- [Windows proxy configuration](https://qualysguard.qg2.apps.qualys.com/portal-help/en/ca/agents/win_proxy.htm)
+- [Linux proxy configuration](https://qualysguard.qg2.apps.qualys.com/portal-help/en/ca/agents/linux_proxy.htm)
### Can I remove the Defender for Cloud Qualys extension? If you want to remove the extension from a machine, you can do it manually or with any of your programmatic tools.
defender-for-cloud Monitoring Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md
Learn more about [using the Azure Monitor Agent with Defender for Cloud](auto-de
| Aspect | Azure virtual machines | Azure Arc-enabled machines | ||:|:--|
-| Release state: | Generally available (GA) | Preview |
+| Release state: | Generally available (GA) | Generally available (GA) |
| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) | [Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) | | Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Owner](../role-based-access-control/built-in-roles.md#owner) | | Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
The native cloud connector requires:
> [!NOTE] > To enable the Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription.
- - More extensions should be enabled on the Arc-connected machines.
- - Log Analytics (LA) agent on Arc machines, and ensure the selected workspace has security solution installed. The LA agent is currently configured in the subscription level. All of your multicloud AWS accounts and GCP projects under the same subscription will inherit the subscription settings.
+ - Other extensions should be enabled on the Arc-connected machines:
+ - Microsoft Defender for Endpoint
+ - VA solution (TVM/Qualys)
+ - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA)
+
+ Make sure the selected LA workspace has a security solution installed. The LA agent and AMA are currently configured at the subscription level. All of your AWS accounts and GCP projects under the same subscription will inherit the subscription settings for the LA agent and AMA.
Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
The native cloud connector requires:
- Other extensions should be enabled on the Arc-connected machines: - Microsoft Defender for Endpoint - VA solution (TVM/Qualys)
- - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has security solution installed.
+ - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA)
- The LA agent is currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings regarding the LA agent.
+ Make sure the selected LA workspace has a security solution installed. The LA agent and AMA are currently configured at the subscription level. All of your AWS accounts and GCP projects under the same subscription will inherit the subscription settings for the LA agent and AMA.
Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in November include: -- [Protect containers in your entire GKE organization with Defender for Containers](#protect-containers-in-your-entire-gke-organization-with-defender-for-containers)
+- [Protect containers across your GCP organization with Defender for Containers](#protect-containers-across-your-gcp-organization-with-defender-for-containers)
- [Validate Defender for Containers protections with sample alerts](#validate-defender-for-containers-protections-with-sample-alerts) - [Governance rules at scale (Preview)](#governance-rules-at-scale-preview)
-### Protect containers in your entire GKE organization with Defender for Containers
+### Protect containers across your GCP organization with Defender for Containers
-Defender for Containers helps you secure your Azure and multicloud container environments with environment hardening, vulnerability assessment, and run-time threat protection for nodes and clusters. GCP users enable this protection by connecting the GCP projects to Defender for Cloud using the native GCP connector.
-
-Now you can enable Defender for Containers for your GCP organization to protect clusters across your entire GCP organization. Create a new GCP connector or update your existing GCP connectors that connect organizations to Defender for Cloud, and enable Defender for Containers.
+Now you can enable [Defender for Containers](defender-for-containers-introduction.md) for your GCP environment to protect standard GKE clusters across an entire GCP organization. Just create a new GCP connector with Defender for Containers enabled, or enable Defender for Containers on an existing organization-level GCP connector.
Learn more about [connecting GCP projects and organizations](quickstart-onboard-gcp.md#connect-your-gcp-project) to Defender for Cloud.
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Title: Manage sensors with Defender for IoT in the Azure portal description: Learn how to onboard, view, and manage sensors with Defender for IoT in the Azure portal. Previously updated : 09/08/2022 Last updated : 11/13/2022
Make the downloaded activation file accessible to the sensor console admin so th
> As opposed to OT sensors, where you define your sensor's site, all Enterprise IoT sensors are automatically added to the **Enterprise network** site.
+## View your sensors
+
+All of your currently cloud-connected sensors, including both OT and Enterprise IoT sensors, are listed in the **Sites and sensors** page. For example:
++
+Details about each sensor are listed in the following columns:
+
+|Column name |Description |
+|||
+|**Sensor name**| Displays the name that you assigned to the sensor during the registration. |
+|**Sensor type**| Displays whether the sensor is locally connected, cloud-connected, or EIoT. |
+|**Zone**| Displays the zone that contains this sensor.|
+|**Subscription name**| Displays the name of the Microsoft Azure account subscription that this sensor belongs to. |
+|**Sensor version**| Displays the software version installed on your sensor. |
+|**Sensor status**| Displays a [sensor health message](sensor-health-messages.md). For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview).|
+|**Last connected (UTC)**| Displays how long ago the sensor was last connected.|
+|**Threat Intelligence version**| Displays the [Threat Intelligence version](how-to-work-with-threat-intelligence-packages.md) installed on the sensor. The name of the version is based on the day the package was built by Defender for IoT. |
+|**Threat Intelligence mode**| Displays whether the Threat Intelligence mode is manual or automatic. If it's manual, that means that you can [push newly released packages directly to sensors](how-to-work-with-threat-intelligence-packages.md) as needed. Otherwise, the new packages will be automatically installed on the cloud connected sensors. |
+|**Threat Intelligence update status**| Displays the update status of the Threat Intelligence package. The status can be either **Failed**, **In Progress**, **Update Available**, or **Ok**.|
## Site management options from the Azure portal
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-set-up-instance-cli.md
Use these values in the following [az dt command](/cli/azure/dt) to create the i
az dt create --dt-name <name-for-your-Azure-Digital-Twins-instance> --resource-group <your-resource-group> --location <region> ```
+There are several optional parameters that can be added to the command to specify additional things about your resource during creation, including creating a [system managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for the instance or enabling/disabling public network access. For a full list of supported parameters, see the [az dt create](/cli/azure/dt#az-dt-create) reference documentation.
+ ### Verify success and collect important values If the instance was created successfully, the result in the CLI looks something like this, outputting information about the resource you have created:
event-grid Receive Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/receive-events.md
Title: Receive events from Azure Event Grid to an HTTP endpoint description: Describes how to validate an HTTP endpoint, then receive and deserialize Events from Azure Event Grid Previously updated : 07/16/2021 Last updated : 11/14/2022 ms.devlang: csharp, javascript
SDKs for other languages are available via the [Publish SDKs](./sdk-overview.md#
The first thing you want to do is handle `Microsoft.EventGrid.SubscriptionValidationEvent` events. Every time someone subscribes to an event, Event Grid sends a validation event to the endpoint with a `validationCode` in the data payload. The endpoint is required to echo this back in the response body to [prove the endpoint is valid and owned by you](webhook-event-delivery.md). If you're using an [Event Grid Trigger](../azure-functions/functions-bindings-event-grid.md) rather than a WebHook triggered Function, endpoint validation is handled for you. If you use a third-party API service (like [Zapier](https://zapier.com/home) or [IFTTT](https://ifttt.com/)), you might not be able to programmatically echo the validation code. For those services, you can manually validate the subscription by using a validation URL that is sent in the subscription validation event. Copy that URL in the `validationUrl` property and send a GET request either through a REST client or your web browser.
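
The article's samples are in C#; purely as an illustrative companion (not taken from the article), a minimal JavaScript, Azure Functions-style HTTP handler that echoes the validation code back might look like this:

```javascript
// Illustrative sketch only: echo the Event Grid subscription validation code.
// The handler shape follows the HTTP-triggered Azure Functions JavaScript model.
module.exports = async function (context, req) {
  const events = req.body || [];

  for (const eventGridEvent of events) {
    if (eventGridEvent.eventType === "Microsoft.EventGrid.SubscriptionValidationEvent") {
      const validationCode = eventGridEvent.data.validationCode;
      context.log(`Got SubscriptionValidation event, validation code: ${validationCode}`);
      // Echo the code back so Event Grid can confirm you own the endpoint.
      context.res = { status: 200, body: { validationResponse: validationCode } };
      return;
    }
    // Other event types (for example, Microsoft.Storage.BlobCreated) would be handled here.
    context.log(`Received event of type ${eventGridEvent.eventType}`);
  }

  context.res = { status: 200 };
};
```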
-In C#, the `ParseMany()` method is used to deserialize a `BinaryData` instance containing 1 or more events into an array of `EventGridEvent`. If you knew ahead of time that your are deserializing only a single event, you could use the `Parse` method instead.
+In C#, the `ParseMany()` method is used to deserialize a `BinaryData` instance containing one or more events into an array of `EventGridEvent`. If you know ahead of time that you're deserializing only a single event, you can use the `Parse` method instead.
To programmatically echo the validation code, use the following code.
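A minimal sketch of that echo, assuming the `Azure.Messaging.EventGrid` SDK (`EventGridEvent.ParseMany`, `TryGetSystemEventData`, and `SubscriptionValidationEventData`) and a simplified HTTP handler, might look like this; the method name and wiring are illustrative, not the article's full sample.

```csharp
using System;
using Azure.Messaging.EventGrid;
using Azure.Messaging.EventGrid.SystemEvents;

static class ValidationSketch
{
    // requestBody: the raw JSON payload received by the HTTP-triggered endpoint.
    // Returns an object to serialize as the HTTP response body, or null if no handshake event was found.
    public static object HandleEvents(string requestBody)
    {
        EventGridEvent[] egEvents = EventGridEvent.ParseMany(BinaryData.FromString(requestBody));

        foreach (EventGridEvent egEvent in egEvents)
        {
            // The subscription handshake: echo the validation code so Event Grid can confirm the endpoint.
            if (egEvent.TryGetSystemEventData(out object systemEvent) &&
                systemEvent is SubscriptionValidationEventData validationData)
            {
                return new { validationResponse = validationData.ValidationCode };
            }

            // Otherwise, handle the event payload (egEvent.Data) as needed.
        }

        return null;
    }
}
```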
Test the validation response function by pasting the sample event into the test
}] ```
-When you click Run, the Output should be 200 OK and `{"validationResponse":"512d38b6-c7b8-40c8-89fe-f46f9e9622b6"}` in the body:
+When you select Run, the Output should be 200 OK and `{"validationResponse":"512d38b6-c7b8-40c8-89fe-f46f9e9622b6"}` in the body:
:::image type="content" source="./media/receive-events/validation-request.png" alt-text="Validation request":::
Test the new functionality of the function by putting a [Blob storage event](./e
You should see the blob URL output in the function log:
-![Output log](./media/receive-events/blob-event-response.png)
+```
+2022-11-14T22:40:45.978 [Information] Executing 'Function1' (Reason='This function was programmatically called via the host APIs.', Id=8429137d-9245-438c-8206-f9e85ef5dd61)
+2022-11-14T22:40:46.012 [Information] C# HTTP trigger function processed a request.
+2022-11-14T22:40:46.017 [Information] Received events: [{"topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/xstoretestaccount","subject": "/blobServices/default/containers/testcontainer/blobs/testfile.txt","eventType": "Microsoft.Storage.BlobCreated","eventTime": "2017-06-26T18:41:00.9584103Z","id": "831e1650-001e-001b-66ab-eeb76e069631","data": {"api": "PutBlockList","clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760","requestId": "831e1650-001e-001b-66ab-eeb76e000000","eTag": "0x8D4BCC2E4835CD0","contentType": "text/plain","contentLength": 524288,"blobType": "BlockBlob","url": "https://example.blob.core.windows.net/testcontainer/testfile.txt","sequencer": "00000000000004420000000000028963","storageDiagnostics": {"batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"}},"dataVersion": "","metadataVersion": "1"}]
+2022-11-14T22:40:46.335 [Information] Got BlobCreated event data, blob URI https://example.blob.core.windows.net/testcontainer/testfile.txt
+2022-11-14T22:40:46.346 [Information] Executed 'Function1' (Succeeded, Id=8429137d-9245-438c-8206-f9e85ef5dd61, Duration=387ms)
+```
-You can also test by creating a Blob storage account or General Purpose V2 (GPv2) Storage account, [adding and event subscription](../storage/blobs/storage-blob-event-quickstart.md), and setting the endpoint to the function URL:
+You can also test by creating a Blob storage account or General Purpose V2 (GPv2) Storage account, [adding an event subscription](../storage/blobs/storage-blob-event-quickstart.md), and setting the endpoint to the function URL:
![Function URL](./media/receive-events/function-url.png)
frontdoor How To Enable Private Link Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app.md
This article will guide you through how to configure Azure Front Door Premium ti
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Create a [Private Link](../../private-link/create-private-link-service-portal.md) service for your origin web servers.
> [!NOTE] > Private endpoints requires your App Service plan or function hosting plan to meet some requirements. For more information, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
$LAW = "<your-Log-Analytics-workspace>"
# obtain workspace id for defined Log Analytics workspace $WorkspaceId = (Get-AzOperationalInsightsWorkspace ` -ResourceGroupName $resourceGroup `
- -Name $LAW).CustomerId
+ -Name $LAW).ResourceId
# obtain primary key for defined Log Analytics workspace $PrimaryKey = (Get-AzOperationalInsightsWorkspace `
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
Previously updated : 11/11/2022 Last updated : 11/14/2022
In order to begin the deployment and complete this tutorial, you'll need to have
When you've fulfilled these prerequisites, you're ready to use the **Deploy to Azure** button.
-## Deploy to Azure button
+## Use the Deploy to Azure button
1. Select the **Deploy to Azure** button below to begin the deployment within the Azure portal.
Once the deployment has completed, the following resources and access roles will
> [!TIP] > For detailed step-by-step instructions on how to manually deploy the MedTech service, see [How to manually deploy the MedTech service using the Azure portal](deploy-03-new-manual.md).
-## Create a device and send a test message
+## Create a device and send a test message
Now that your deployment has successfully completed, we'll connect to your IoT Hub, create a device, and send a test message to the IoT Hub using **VSCode** with the **Azure IoT Hub extension**. These steps will allow your MedTech service to:
Now that your deployment has successfully completed, we'll connect to your IoT H
:::image type="content" source="media\iot-hub-to-iot-connector\iot-create-device.png" alt-text="Screenshot of VSCode with the Azure IoT Hub extension selecting Create device for this tutorial." lightbox="media\iot-hub-to-iot-connector\iot-create-device.png":::
-4. To send a test message from the newly created device to your IoT Hub, right-click the device and select the **Send D2C Message to IoT Hub** option. For this example, we'll be using a device named **device-001**. You'll use the device you created as part of the previous step.
+4. To send a test message from the newly created device to your IoT Hub, right-click the device and select the **Send D2C Message to IoT Hub** option. For this example, we'll be using a device named **iot-001**. You'll use the device you created as part of the previous step.
> [!NOTE] > **D2C** stands for Device-to-Cloud. In this example, cloud is the IoT Hub that will be receiving the device message. IoT Hub allows two-way communications, which is why there's also the option to **Send C2D Message to Device Cloud** (C2D stands for Cloud-to-Device).
Now that your deployment has successfully completed, we'll connect to your IoT H
> > To learn more about IotJsonPathContentTemplate mappings usage with the MedTech service device mappings, see [How to use IotJsonPathContentTemplate mappings](how-to-use-iot-jsonpath-content-mappings.md).
-## View test data in the FHIR service (Optional)
+## Review metrics from test message
-If you provided your own Azure AD user object ID as the optional Fhir Contributor Principal ID when deploying this tutorial's template, then you have access to query FHIR resources in the FHIR service.
+Now that you've successfully sent a test message to your IoT Hub, you can review your MedTech service metrics to verify that your MedTech service received, transformed, and persisted the test message into your FHIR service. To learn more about how to display the MedTech service monitoring tab metrics and the different metrics types, see [How to display the MedTech service monitoring tab metrics](how-to-use-monitoring-tab.md).
-Use this tutorial, [Access using Postman](/azure/healthcare-apis/fhir/use-postman) to get an Azure AD access token and view FHIR resources in the FHIR service.
+In your MedTech service metrics, you can see that your MedTech service performed the following steps with the test message:
+
+* **Number of Incoming Messages** - Received the incoming test message from the device message event hub.
+* **Number of Normalized Messages** - Created five normalized messages.
+* **Number of Measurements** - Created five measurements.
+* **Number of FHIR resources** - Created five FHIR resources that will be persisted on your FHIR service.
+++
+## View test data in the FHIR service
+
+If you provided your own Azure AD user object ID as the optional **Fhir Contributor Principal ID** when deploying this tutorial's template, then you have access to query FHIR resources in your FHIR service.
+
+Use this tutorial: [Access using Postman](/azure/healthcare-apis/fhir/use-postman) to get an Azure AD access token and view FHIR resources in your FHIR service.
## Next steps
-In this tutorial, you deployed an Azure IoT Hub to route device data to the MedTech service.
+In this tutorial, you deployed a Quickstart ARM template in the Azure portal, connected to your Azure IoT Hub, created a device, and sent a test message to your MedTech service.
To learn about how to use device mappings, see
healthcare-apis How To Use Monitoring Tab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-tab.md
Title: How to use MedTech service metrics tab - Azure Health Data Services
-description: This article explains how to use MedTech service metrics tab.
+ Title: How to display the MedTech service monitoring tab metrics - Azure Health Data Services
+description: This article explains how to display the MedTech service monitoring tab metrics.
Previously updated : 10/10/2022 Last updated : 11/14/2022
-# How to use the MedTech service monitoring tab
+# How to display the MedTech service monitoring tab metrics
-In this article, you'll learn how to use the [MedTech service](iot-connector-overview.md) monitoring tab in the Azure portal. The monitoring tab provides access to crucial MedTech service metrics. These metrics can be used in assessing the health and performance of your MedTech service and can be useful with troubleshooting and seeing patterns and/or trends with your MedTech service.
+In this article, you'll learn how to use the [MedTech service](iot-connector-overview.md) monitoring tab in the Azure portal. The monitoring tab provides access to crucial MedTech service metrics. These metrics can be used to assess the health and performance of your MedTech service, and they can be useful for spotting patterns and/or trends or for troubleshooting your MedTech service.
-## Use the MedTech service monitoring tab
+## Display the MedTech service monitoring tab metrics
1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
In this article, you'll learn how to use the [MedTech service](iot-connector-ove
:::image type="content" source="media\iot-monitoring-tab\display-metrics-tile.png" alt-text="Screenshot the MedTech service monitoring tab with drop-down menus." lightbox="media\iot-monitoring-tab\display-metrics-tile.png":::
-5. Select the pin icon to pin the tile to an Azure portal dashboard of your choosing.
-
- :::image type="content" source="media\iot-monitoring-tab\pin-metrics-to-dashboard.png" alt-text="Screenshot the MedTech service monitoring tile with red box around the pin icon." lightbox="media\iot-monitoring-tab\pin-metrics-to-dashboard.png":::
- > [!IMPORTANT] > If you leave the MedTech service monitoring tab, any customized settings you have made to the monitoring settings are lost and will have to be recreated. If you would like to save your customizations for future viewing, you can pin them to an Azure portal dashboard as a tile. >
- > To learn how to customize and save metrics settings to an Azure portal dashboard and tile, see [How to configure the MedTech service metrics](how-to-configure-metrics.md).
+ > To learn how to customize and save metrics settings to an Azure portal dashboard and tile, see [How to configure the MedTech service metrics](how-to-configure-metrics.md).
+5. **Optional** - Select the **pin icon** to save the metrics tile to an Azure portal dashboard of your choosing.
+
+ :::image type="content" source="media\iot-monitoring-tab\pin-metrics-to-dashboard.png" alt-text="Screenshot the MedTech service monitoring tile with red box around the pin icon." lightbox="media\iot-monitoring-tab\pin-metrics-to-dashboard.png":::
+
> [!TIP] > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
To learn how to configure the MedTech service metrics, see
> [!div class="nextstepaction"] > [How to configure the MedTech service metrics](how-to-configure-metrics.md)
-To learn how to configure the MedTech service diagnostic settings to export logs to another location (for example: an Azure storage account) for audit, backup, or troubleshooting, see
+To learn how to enable the MedTech service diagnostic settings to export logs to another location (for example: an Azure storage account) for audit, backup, or troubleshooting, see
> [!div class="nextstepaction"] > [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
Azure IoT Edge for Linux on Windows supports the following architectures:
| - | -- | -- | | EFLOW 1.1 LTS | ![AMD64](./media/support/green-check.png) | | | EFLOW Continuous Release (CR) ([Public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)) | ![AMD64](./media/support/green-check.png) | ![ARM64](./media/support/green-check.png) |
+| EFLOW 1.4 LTS | ![AMD64](./media/support/green-check.png) | ![ARM64](./media/support/green-check.png) |
For more information about Windows ARM64 supported processors, see [Windows Processor Requirements](/windows-hardware/design/minimum/windows-processor-requirements).
The following table lists the components included in each release. Each release
| Release | IoT Edge | CBL-Mariner | Defender for IoT | | - | -- | -- | - |
-| **1.1 LTS** | 1.1 | 1.0 | - |
-| **Continuous Release** | 1.2 | 1.0 | 3.12.3 |
+| **1.1 LTS** | 1.1 | 2.0 | - |
+| **Continuous Release** | 1.3 | 2.0 | 3.12.3 |
+| **1.4 LTS** | 1.4 | 2.0 | - |
## Minimum system requirements
iot-edge Iot Edge For Linux On Windows Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-updates.md
Update [1.1.2110.0311](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2
<!-- end 1.1 --> :::moniker-end
-## Migrations between EFLOW 1.1 LTS and EFLOW CR
+## Migrations between EFLOW LTS and EFLOW CR trains
-IoT Edge for Linux on Windows doesn't support migrations between the different release trains. If you want to move from the 1.1LTS version to the Continuous Release (CR) version or viceversa, you'll have to uninstall the current version and install the new desired version.
+IoT Edge for Linux on Windows doesn't support migrations between the different release trains. If you want to move from the 1.1 LTS or 1.4 LTS version to the Continuous Release (CR) version, or vice versa, you'll have to uninstall the current version and install the new desired version. To migrate from EFLOW 1.1 LTS to EFLOW 1.4 LTS, see [EFLOW LTS migration](https://aka.ms/AzEFLOW-LTS-Migration).
## Next steps
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows.md
Azure IoT Edge for Linux on Windows (EFLOW) allows you to run containerized Linux workloads alongside Windows applications in Windows deployments. Businesses that rely on Windows to power their edge devices and solutions can now take advantage of the cloud-native analytics solutions being built in Linux. <!-- iotedge-2020-11 --> >[!NOTE] >The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.2, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use once the general availability (GA) release is available. For more information, see [EFLOW continuous release](https://github.com/Azure/iotedge-eflow/wiki/EFLOW-Continuous-Release). :::moniker-end
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart.md
Run the following PowerShell commands on the target device where you want to dep
<!-- end 1.1 --> <!-- iotedge-2020-11 -->
- :::moniker range=">=iotedge-2020-11"
+ :::moniker range="iotedge-2020-11"
* **X64/AMD64** ```powershell $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
Run the following PowerShell commands on the target device where you want to dep
:::moniker-end <!-- end iotedge-2020-11 -->
+ <!-- iotedge-1.4 -->
+ :::moniker range=">=iotedge-1.4"
+ * **X64/AMD64**
+ ```powershell
+ $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
+ $ProgressPreference = 'SilentlyContinue'
+ Invoke-WebRequest "https://aka.ms/AzEFLOWMSI_1_4_LTS_X64" -OutFile $msiPath
+ ```
+
+ * **ARM64**
+ ```powershell
+ $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
+ $ProgressPreference = 'SilentlyContinue'
+ Invoke-WebRequest "https://aka.ms/AzEFLOWMSI_1_4_LTS_ARM64" -OutFile $msiPath
+ ```
+ :::moniker-end
+ <!-- end iotedge-1.4 -->
+ 1. Install IoT Edge for Linux on Windows on your device. ```powershell
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
If you don't have the **AzureEflow** folder in your PowerShell directory, use th
<!-- end iotedge-2018-06 --> <!-- iotedge-2020-11 --> 1. In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows. * **X64/AMD64**
If you don't have the **AzureEflow** folder in your PowerShell directory, use th
:::moniker-end <!-- end iotedge-2020-11 -->
+<!-- iotedge-1.4 -->
+1. In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows.
+
+ * **X64/AMD64**
+ ```powershell
+ $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
+ $ProgressPreference = 'SilentlyContinue'
+ Invoke-WebRequest "https://aka.ms/AzEFLOWMSI_1_4_LTS_X64" -OutFile $msiPath
+ ```
+
+ * **ARM64**
+ ```powershell
+ $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
+ $ProgressPreference = 'SilentlyContinue'
+ Invoke-WebRequest "https://aka.ms/AzEFLOWMSI_1_4_LTS_ARM64" -OutFile $msiPath
+ ```
+<!-- end iotedge-1.4 -->
+ 1. Install IoT Edge for Linux on Windows on your device.
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
For more information about IoT Edge releases, see [Azure IoT Edge supported syst
### IoT Edge for Linux on Windows Azure IoT Edge for Linux on Windows (EFLOW) supports the following versions:
-* **EFLOW Continuous Release (CR)** based on the latest Azure IoT Edge version, it contains new features and capabilities that are in the latest stable release. For more information, see the [EFLOW release notes](https://github.com/Azure/iotedge-eflow/releases).
+* **EFLOW Continuous Release (CR)** based on the latest non-LTS Azure IoT Edge version, it contains new features and capabilities that are in the latest stable release. For more information, see the [EFLOW release notes](https://github.com/Azure/iotedge-eflow/releases).
* **EFLOW 1.1 (LTS)** based on Azure IoT Edge 1.1, it's the Long-term support version. This version will be stable through the supported lifetime of this version and won't include new features released in later versions. This version will be supported until Dec 2022 to match the IoT Edge 1.1 LTS release lifecycle. 
+* **EFLOW 1.4 (LTS)** based on Azure IoT Edge 1.4, it's the latest Long-term support version. This version will be stable through the supported lifetime of this version and won't include new features released in later versions. This version will be supported until Nov 2024 to match the IoT Edge 1.4 LTS release lifecycle. 
All new releases are made available in the [Azure IoT Edge for Linux on Windows project](https://github.com/Azure/iotedge-eflow).
This table provides recent version history for IoT Edge package releases, and hi
| IoT Edge release | Available in EFLOW branch | Release date | End of Support Date | Highlights | | - | - | | - | - |
-| 1.4 | Continuous release (CR) <br> Long-term support (LTS) | TBA | | |
+| 1.4 | Long-term support (LTS) | TBA | November 12, 2024 | [Azure IoT Edge 1.4.0](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0)<br/> [CBL-Mariner 2.0](https://microsoft.github.io/CBL-Mariner/announcing-mariner-2.0/)<br/> [USB passthrough using USB-Over-IP](https://aka.ms/AzEFLOW-USBIP)<br/>[File/Folder sharing between Windows OS and the EFLOW VM](https://aka.ms/AzEFLOW-FolderSharing) |
| 1.3 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.3.1.02092) | September 2022 | In support | [Azure IoT Edge 1.3.0](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0)<br/> [CBL-Mariner 2.0](https://microsoft.github.io/CBL-Mariner/announcing-mariner-2.0/)<br/> [USB passthrough using USB-Over-IP](https://aka.ms/AzEFLOW-USBIP)<br/>[File/Folder sharing between Windows OS and the EFLOW VM](https://aka.ms/AzEFLOW-FolderSharing) | | 1.2 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.2.7.07022) | January 2022 | September 2022 | [Public Preview](https://techcommunity.microsoft.com/t5/internet-of-things-blog/azure-iot-edge-for-linux-on-windows-eflow-continuous-release/ba-p/3169590) | | 1.1 | [Long-term support (LTS)](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2106.0) | June 2021 | December 13, 2022 | IoT Edge 1.1 LTS is supported through December 13, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). <br> [Long-term support plan and supported systems updates](support.md) |
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
Inbound access from the `BatchNodeManagement` service tag to the virtual network
Follow these steps to [enable inbound access](/azure/load-testing/how-to-test-private-endpoint#configure-traffic-access) for the `BatchNodeManagement` service tag.
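As a sketch only, an inbound rule for the `BatchNodeManagement` service tag could be created on the subnet's network security group with the Azure CLI along the following lines. The rule name, priority, and destination port range shown here are assumptions; follow the linked steps for the authoritative values.

```azurecli
# Sketch: allow inbound traffic from the BatchNodeManagement service tag.
# Rule name, priority, and port range below are placeholders/assumptions.
az network nsg rule create \
    --resource-group <your-resource-group> \
    --nsg-name <your-nsg-name> \
    --name AllowBatchNodeManagementInbound \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --priority 100 \
    --source-address-prefixes BatchNodeManagement \
    --destination-address-prefixes '*' \
    --destination-port-ranges 29876-29877
```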
+### Creating or updating the load test fails with `Route Table has next hop set for address prefix 0.0.0.0/0`
+
+Your subnet route table has the next hop type set to **Virtual appliance** for route [0.0.0.0/0](/azure/virtual-network/virtual-networks-udr-overview#default-route). This configuration would cause asymmetric routing for network packets while provisioning the virtual machines in the subnet.
+
+Perform either of the following actions to resolve this error:
+
+- Use a different subnet, which doesn't have custom routes.
+- [Modify the subnet route table](/azure/virtual-network/manage-route-table) and set the next hop type for route 0.0.0.0/0 to **Internet**, as in the CLI sketch below.
+
+Learn more about [virtual network traffic routing](/azure/virtual-network/virtual-networks-udr-overview).
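A sketch of the second option with the Azure CLI might look like this; the resource group, route table, and route names are placeholders, and the parameter names reflect the `az network route-table route update` command in recent CLI versions.

```azurecli
# Sketch: point the 0.0.0.0/0 route at Internet instead of a virtual appliance.
# Route table and route names are placeholders.
az network route-table route update \
    --resource-group <your-resource-group> \
    --route-table-name <your-route-table> \
    --name <your-default-route> \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type Internet
```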
+ ### Creating or updating the load test fails with `Subnet is in a different subscription than resource (ALTVNET011)` The virtual network isn't in the same subscription and region as your Azure load testing resource. Either move or recreate the Azure virtual network or the Azure load testing resource to the same subscription and region.
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Azure Machine Learning lets you bring data from a local machine or an existing cloud-based storage. In this article you will learn the main data concepts in Azure Machine Learning, including: > [!div class="checklist"]
-> - [**URIs**](#uris) - A **U**niform **R**esource **I**dentifier that is a reference to a storage location on your local computer or in the cloud that makes it very easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs:`uri_file` and `uri_folder`. If you want to consume a file as an input of a job, You can define this job input by providing `type` as `uri_file`, `path` as where the file is.
+> - [**URIs**](#uris) - A **U**niform **R**esource **I**dentifier is a reference to a storage location on your local computer or in the cloud that makes it easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs: `uri_file` and `uri_folder`. If you want to consume a file as an input of a job, you can define this job input by providing `type` as `uri_file` and `path` as the location of the file (see the sketch after this list).
> - [**MLTable**](#mltable) - `MLTable` helps you abstract the schema definition for tabular data, so it's better suited to complex or changing schemas or to use in AutoML. If you just want to create a data asset for a job, or you want to write your own parsing logic in Python, you can use `uri_file` or `uri_folder`. > - [**Data asset**](#data-asset) - If you plan to share your data (URIs or MLTables) in your workspace with team members, or you want to track data versions or lineage, you can create data assets from the URIs or MLTables you have. If you don't create a data asset, you can still consume the data in jobs, but without lineage tracking, version management, and so on. > - [**Datastore**](#datastore) - Azure Machine Learning datastores securely keep the connection information (storage container name, credentials) to your data storage on Azure, so you don't have to code it in your scripts. You can use an AzureML datastore URI and a relative path to point to your data. You can also register files/folders in your AzureML datastore as data assets.
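As a minimal sketch of the `uri_file` input described in the list above, assuming the Azure Machine Learning Python SDK v2 (`azure.ai.ml`), a job input could be declared as follows; the datastore path is a placeholder.

```python
# Sketch: declare a single-file job input with the Azure ML Python SDK v2.
# The path is a placeholder; a local path, https:// URL, or azureml:// datastore URI also works.
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

my_input = Input(
    type=AssetTypes.URI_FILE,
    path="azureml://datastores/workspaceblobstore/paths/samples/my-data.csv",
)
```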
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
View and change details of your project. In this tab you can:
[!INCLUDE [access](../../includes/machine-learning-data-labeling-access.md)]
-## Add new label class to a project
+## Add new labels to a project
[!INCLUDE [add-label](../../includes/machine-learning-data-labeling-add-label.md)]
machine-learning How To Create Text Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md
To directly upload your data:
> [!NOTE] > Incremental refresh is available for projects that use tabular (.csv or .tsv) dataset input. However, only new tabular files are added. Changes to existing tabular files will not be recognized from the refresh.
-## Specify label classes
+## Specify label categories
[!INCLUDE [classes](../../includes/machine-learning-data-labeling-classes.md)]
View and change details of your project. In this tab you can:
[!INCLUDE [access](../../includes/machine-learning-data-labeling-access.md)]
-## Add new label class to a project
+## Add new labels to a project
[!INCLUDE [add-label](../../includes/machine-learning-data-labeling-add-label.md)]
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
The following configurations are supported:
| Environment Name | OS | GPU Version| Python Version | PyTorch Version | ORT-training Version | DeepSpeed Version | torch-ort Version | | | | | | | | | |
-| AzureML-ACPT-pytorch-1.12-py39-cuda11.6-gpu | Ubuntu 20.04 | cu116 | 3.9 | 1.13.1 | 1.12.1 | 0.7.3 | 1.13.1 |
-| AzureML-ACPT-pytorch-1.12-py38-cuda11.6-gpu | Ubuntu 20.04 | cu116 | 3.8 | 1.12.0 | 1.12.0 | 0.7.3 | 1.12.0 |
+| AzureML-ACPT-pytorch-1.12-py39-cuda11.6-gpu | Ubuntu 20.04 | cu116 | 3.9 | 1.12.1 | 1.13.1 | 0.7.3 | 1.13.1 |
+| AzureML-ACPT-pytorch-1.12-py38-cuda11.6-gpu | Ubuntu 20.04 | cu116 | 3.8 | 1.12.1 | 1.12.0 | 0.7.3 | 1.12.0 |
| AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu | Ubuntu 20.04 | cu115 | 3.8 | 1.11.0 | 1.11.1 | 0.7.3 | 1.11.0 | | AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu | Ubuntu 20.04 | cu113 | 3.8 | 1.11.0 | 1.11.1 | 0.7.3 | 1.11.0 |
migrate Agent Based Migration Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/agent-based-migration-architecture.md
Title: Agent-based migration in Azure Migrate Server Migration description: Provides an overview of agent-based VMware VM migration in Azure Migrate.--
-ms.
++
+ms.
Previously updated : 02/17/2020 Last updated : 03/23/2021 - # Agent-based migration architecture This article provides an overview of the architecture and processes used for agent-based replication of VMware VMs with the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool.
migrate How To Automate Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-automate-migration.md
Title: Automate agentless VMware migrations in Azure Migrate description: Describes how to use scripts to migrate a large number of VMware VMs in Azure Migrate--
-ms.
++
+ms.
Last updated 5/2/2022
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
Title: Support for physical server migration in Azure Migrate description: Learn about support for physical server migration in Azure Migrate.--
-ms.
++
+ms.
Previously updated : 06/14/2020 Last updated : 07/22/2022 # Support matrix for migration of physical servers, AWS VMs, and GCP VMs
migrate Prepare Windows Server 2003 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-windows-server-2003-migration.md
Title: Prepare Windows Server 2003 servers for migration with Azure Migrate description: Learn how to prepare Windows Server 2003 servers for migration with Azure Migrate.--
-ms.
++
+ms.
Previously updated : 05/27/2020 Last updated : 10/31/2020
migrate Quickstart Create Migrate Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/quickstart-create-migrate-project.md
Title: Quickstart to create an Azure Migrate project using an Azure Resource Manager template. description: In this quickstart, you learn how to create an Azure Migrate project using an Azure Resource Manager template (ARM template). Previously updated : 04/23/2021--- Last updated : 07/28/2021++
+ms.
migrate Tutorial App Containerization Aspnet App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-app-service.md
Title: ASP.NET app containerization and migration to App Service description: This tutorial demonstrates how to containerize ASP.NET applications and migrate them to Azure App Service.---++
+ms.
Previously updated : 07/02/2021- Last updated : 10/14/2021 # ASP.NET app containerization and migration to Azure App Service
migrate Tutorial App Containerization Aspnet Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-kubernetes.md
Title: Azure App Containerization ASP.NET; Containerization and migration of ASP.NET applications to Azure Kubernetes. description: Tutorial:Containerize & migrate ASP.NET applications to Azure Kubernetes Service.---++
+ms.
Previously updated : 6/30/2021- Last updated : 03/24/2022 # ASP.NET app containerization and migration to Azure Kubernetes Service
migrate Tutorial App Containerization Azure Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-azure-pipeline.md
Title: Continuous Deployment for containerized applications with Azure DevOps description: Tutorial:Continuous Deployment for containerized applications with Azure DevOps--++
+ms.
Last updated 11/08/2021- # Continuous deployment for containerized applications with Azure DevOps
migrate Tutorial App Containerization Java App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md
Title: Containerization and migration of Java web applications to Azure App Service. description: Tutorial:Containerize & migrate Java web applications to Azure App Service.---++
+ms.
Last updated 5/2/2022- # Java web app containerization and migration to Azure App Service
migrate Tutorial App Containerization Java Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-kubernetes.md
Title: Azure App Containerization Java; Containerization and migration of Java web applications to Azure Kubernetes. description: Tutorial:Containerize & migrate Java web applications to Azure Kubernetes Service.---++
+ms.
Previously updated : 6/30/2021- Last updated : 03/24/2022 # Java web app containerization and migration to Azure Kubernetes Service
migrate Tutorial Migrate Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md
Title: Migrate Hyper-V VMs to Azure with Azure Migrate Server Migration description: Learn how to migrate on-premises Hyper-V VMs to Azure with Azure Migrate Server Migration--
-ms.
++
+ms.
Previously updated : 06/20/2022 Last updated : 08/18/2022
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
Title: Migrate machines as physical server to Azure with Azure Migrate. description: This article describes how to migrate physical machines to Azure with Azure Migrate.--
-ms.
++
+ms.
Previously updated : 01/02/2021 Last updated : 08/18/2022
migrate Tutorial Migrate Vmware Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-agent.md
Title: Migrate VMware vSphere VMs with agent-based Azure Migrate Server Migration description: Learn how to run an agent-based migration of VMware vSphere VMs with Azure Migrate.--
-ms.
++
+ms.
Last updated 10/04/2022
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
Title: Migrate VMware VMs to Azure (agentless) - PowerShell description: Learn how to run an agentless migration of VMware VMs with Azure Migrate through PowerShell.---++
+ms.
Previously updated : 08/20/2021 Last updated : 08/18/2022
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-monitoring.md
You can enable logging on your server. These resource logs can be sent to [Azure
## Query Performance Insight
-[Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible from the **Support + troubleshooting** section of your Azure Database for PostgreSQL server's portal page.
+[Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible from the **Intelligent Performance** section of your Azure Database for PostgreSQL server's portal page.
## Performance Recommendations
purview Concept Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-devops.md
Bob and Alice are DevOps users at their company. Given their role, they need to
## Next steps To get started with DevOps policies, consult the following guides:
-* Document: [Microsoft Purview DevOps policies on Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md)
-* Document: [Microsoft Purview DevOps policies on Azure SQL DB](./how-to-policies-devops-azure-sql-db.md)
+* Doc: [Microsoft Purview DevOps policies on Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md)
+* Doc: [Microsoft Purview DevOps policies on Azure SQL DB](./how-to-policies-devops-azure-sql-db.md)
+* Doc: [Microsoft Purview DevOps policies on resource groups and subscriptions](./how-to-policies-devops-resource-group.md)
* Blog: [New granular permissions for SQL Server 2022 and Azure SQL to help PoLP](https://techcommunity.microsoft.com/t5/sql-server-blog/new-granular-permissions-for-sql-server-2022-and-azure-sql-to/ba-p/3607507)
purview How To Policies Data Owner Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-arc-sql-server.md
Register each data source with Microsoft Purview to later define access policies
1. Select **Register** or **Apply** at the bottom
-Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture.
+Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture.
![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-arc-sql.png)
-> [!Note]
-> - If you want to create a policy on a resource group or subscription and have it enforced in Arc-enabled SQL servers, you will need to also register those servers independently for *Data Use Management* to provide their App ID. See this document on how to create policies at resource group or subscription level: [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-policies-data-owner-resource-group.md).
## Create and publish a data owner policy
Execute the steps in the **Create a new policy** and **Publish a policy** sectio
> - Publish is a background operation. It can take up to **5 minutes** for the changes to be reflected in this data source. > - Changing a policy does not require a new publish operation. The changes will be picked up with the next pull.
-### Test the policy
+
+## Unpublish a data owner policy
+Follow this link for the steps to [unpublish a data owner policy in Microsoft Purview](how-to-policies-data-owner-authoring-generic.md#unpublish-a-policy).
+
+## Update or delete a data owner policy
+Follow this link for the steps to [update or delete a data owner policy in Microsoft Purview](how-to-policies-data-owner-authoring-generic.md#update-or-delete-a-policy).
+
+## Test the policy
The Azure AD Accounts referenced in the access policies should now be able to connect to any database in the server to which the policies are published.
-#### Force policy download
+### Force policy download
It is possible to force an immediate download of the latest published policies to the current SQL database by running the following command. The minimal permission required to run it is membership in ##MS_ServerStateManager##-server role. ```sql
It is possible to force an immediate download of the latest published policies t
exec sp_external_policy_refresh reload ```
-#### Analyze downloaded policy state from SQL
+### Analyze downloaded policy state from SQL
The following DMVs can be used to analyze which policies have been downloaded and are currently assigned to Azure AD accounts. The minimal permission required to run them is VIEW DATABASE SECURITY STATE - or assigned Action Group *SQL Security Auditor*. ```sql
purview How To Policies Data Owner Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-authoring-generic.md
Before authoring data policies in the Microsoft Purview governance portal, you'l
## Create a new policy This section describes the steps to create a new policy in Microsoft Purview.
-Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-or-update-access-policies).
+Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-to-create-update-or-delete-access-policies).
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
The steps to publish a policy are as follows:
## Update or delete a policy Steps to update or delete a policy in Microsoft Purview are as follows.
-Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-or-update-access-policies)
+Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-to-create-update-or-delete-access-policies)
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
purview How To Policies Data Owner Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-azure-sql-db.md
Execute the steps in the **Create a new policy** and **Publish a policy** sectio
> - Publish is a background operation. It can take up to **5 minutes** for the changes to be reflected in this data source. > - Changing a policy does not require a new publish operation. The changes will be picked up with the next pull.
-### Test the policy
+
+## Unpublish a data owner policy
+Follow this link for the steps to [unpublish a data owner policy in Microsoft Purview](how-to-policies-data-owner-authoring-generic.md#unpublish-a-policy).
+
+## Update or delete a data owner policy
+Follow this link for the steps to [update or delete a data owner policy in Microsoft Purview](how-to-policies-data-owner-authoring-generic.md#update-or-delete-a-policy).
+
+## Test the policy
The Azure AD Accounts referenced in the access policies should now be able to connect to any database in the server to which the policies are published.
-#### Force policy download
+### Force policy download
It is possible to force an immediate download of the latest published policies to the current SQL database by running the following command. The minimal permission required to run it is membership in ##MS_ServerStateManager##-server role. ```sql
It is possible to force an immediate download of the latest published policies t
exec sp_external_policy_refresh reload ```
-#### Analyze downloaded policy state from SQL
+### Analyze downloaded policy state from SQL
The following DMVs can be used to analyze which policies have been downloaded and are currently assigned to Azure AD accounts. The minimal permission required to run them is VIEW DATABASE SECURITY STATE - or assigned Action Group *SQL Security Auditor*. ```sql
purview How To Policies Data Owner Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-resource-group.md
Previously updated : 10/10/2022 Last updated : 11/14/2022
In this guide we cover how to register an entire resource group or subscription
**Only these data sources are enabled for access policies on resource group or subscription**. Follow the **Prerequisites** section that is specific to the data source(s) in these guides: * [Data owner policies on an Azure Storage account](./how-to-policies-data-owner-storage.md#prerequisites)
-* [Data owner policies on an Azure SQL Database](./how-to-policies-data-owner-azure-sql-db.md#prerequisites)*
-* [Data owner policies on an Arc-enabled SQL Server](./how-to-policies-data-owner-arc-sql-server.md#prerequisites)*
+* [Data owner policies on an Azure SQL Database](./how-to-policies-data-owner-azure-sql-db.md#prerequisites)(*)
+* [Data owner policies on an Arc-enabled SQL Server](./how-to-policies-data-owner-arc-sql-server.md#prerequisites)(*)
-(*) Only the *SQL Performance monitoring* and *Security auditing* actions are fully supported for SQL-type data sources. The *Read* action needs a workaround described later in this guide. The *Modify* action is not currently supported for SQL-type data sources.
+(*) The *Modify* action is not currently supported for SQL-type data sources.
## Microsoft Purview configuration [!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)] ### Register the subscription or resource group for Data Use Management
-The subscription or resource group needs to be registered with Microsoft Purview to later define access policies.
-
-To register your subscription or resource group, follow the **Prerequisites** and **Register** sections of this guide:
+The subscription or resource group needs to be registered with Microsoft Purview before you can create access policies. To register your subscription or resource group, follow the **Prerequisites** and **Register** sections of this guide:
- [Register multiple sources in Microsoft Purview](register-scan-azure-multiple-sources.md#prerequisites)
In the end, your resource will have the **Data Use Management** toggle **Enable
![Screenshot shows how to register a resource group or subscription for policy by toggling the enable tab in the resource editor.](./media/how-to-policies-data-owner-resource-group/register-resource-group-for-policy.png) >[!Important]
-> - If you want to create a policy on a resource group or subscription and have it enforced in Arc-enabled SQL servers, you will need to also register those servers independently for *Data use management* to provide their App ID.
+> - If you create a policy on a resource group or subscription and want to have it enforced in Arc-enabled SQL servers, you will need to also register those servers independently and enable *Data use management*, which captures their App ID. For the steps, see [Register data sources in Microsoft Purview](./how-to-policies-devops-arc-sql-server.md#register-data-sources-in-microsoft-purview).
+ ## Create and publish a data owner policy Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-policies-data-owner-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to the example shown in the image: a policy that provides security group *sg-Finance* *modify* access to resource group *finance-rg*. Use the Data source box in the Policy user experience.
Execute the steps in the **Create a new policy** and **Publish a policy** sectio
> - Publish is a background operation. For example, Azure Storage accounts can take up to **2 hours** to reflect the changes. > - Changing a policy does not require a new publish operation. The changes will be picked up with the next pull.
->[!Warning]
-> **Known Issues**
-> - No implicit connect permission is provided to SQL type data sources (e.g.: Azure SQL DB, SQL server on Azure Arc-enabled servers) when creating a policy with *Read* action on a resource group or subscription. To support this scenario, provide the connect permission to the Azure AD principals locally, i.e. directly in the SQL-type data sources.
+## Unpublish a data owner policy
+Follow this link for the steps to [unpublish a data owner policy in Microsoft Purview](how-to-policies-data-owner-authoring-generic.md#unpublish-a-policy).
+
+## Update or delete a data owner policy
+Follow this link for the steps to [update or delete a data owner policy in Microsoft Purview](how-to-policies-data-owner-authoring-generic.md#update-or-delete-a-policy).
## Additional information - Creating a policy at subscription or resource group level will enable the Subjects to access Azure Storage system containers, for example, *$logs*. If this is undesired, first scan the data source and then create finer-grained policies for each (that is, at container or sub-container level).
purview How To Policies Data Owner Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-storage.md
Execute the steps in the **Create a new policy** and **Publish a policy** sectio
>[!Important] > - Publish is a background operation. Azure Storage accounts can take up to **2 hours** to reflect the changes. +
+## Unpublish a data owner policy
+Follow this link for the steps to [unpublish a data owner policy in Microsoft Purview](how-to-policies-data-owner-authoring-generic.md#unpublish-a-policy).
+
+## Update or delete a data owner policy
+Follow this link for the steps to [update or delete a data owner policy in Microsoft Purview](how-to-policies-data-owner-authoring-generic.md#update-or-delete-a-policy).
+ ## Data Consumption - Data consumer can access the requested dataset using tools such as Power BI or Azure Synapse Analytics workspace. - Sub-container access: Policy statements set below container level on a Storage account are supported. However, users will not be able to browse to the data asset using Azure portal's Storage Browser or Microsoft Azure Storage Explorer tool if access is granted only at file or folder level of the Azure Storage account. This is because these apps attempt to crawl down the hierarchy starting at container level, and the request fails because no access has been granted at that level. Instead, the App that requests the data must execute a direct access by providing a fully qualified name to the data object. The following documents show examples of how to perform a direct access. See also the blogs in the *Next steps* section of this how-to-guide.
purview How To Policies Devops Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-arc-sql-server.md
Last updated 11/04/2022
-# Provision access to Arc-enabled SQL Server for DevOps actions (preview)
+# Provision access to system metadata in Arc-enabled SQL Server (preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
The Arc-enabled SQL Server data source needs to be registered first with Microso
Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture. ![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-arc-sql.png)
-> [!Note]
-> If you want to create a policy on a resource group or subscription and have it enforced in Arc-enabled SQL servers, you will need to also register those servers independently for *Data use management* to provide their App ID.
## Create a new DevOps policy Follow this link for the steps to [create a new DevOps policy in Microsoft Purview](how-to-policies-devops-authoring-generic.md#create-a-new-devops-policy).
Follow this link for the steps to [delete a DevOps policies in Microsoft Purview
>[!Important] > DevOps policies are auto-published and changes can take up to **5 minutes** to be enforced by the data source.
-### Test the policy
+## Test the policy
The Azure AD Accounts referenced in the access policies should now be able to connect to any database in the server to which the policies are published.
-#### Force policy download
+### Force policy download
It is possible to force an immediate download of the latest published policies to the current SQL database by running the following command. The minimal permission required to run it is membership in ##MS_ServerStateManager##-server role. ```sql
It is possible to force an immediate download of the latest published policies t
exec sp_external_policy_refresh reload ```
-#### Analyze downloaded policy state from SQL
+### Analyze downloaded policy state from SQL
The following DMVs can be used to analyze which policies have been downloaded and are currently assigned to Azure AD accounts. The minimal permission required to run them is VIEW DATABASE SECURITY STATE - or assigned Action Group *SQL Security Auditor*. ```sql
Check the blog and related docs
* Video: [Microsoft Purview DevOps policies on data sources and resource groups](https://youtu.be/YCDJagrgEAI) * Video: [Reduce the effort with Microsoft Purview DevOps policies on resource groups](https://youtu.be/yMMXCeIFCZ8) * Doc: [Microsoft Purview DevOps policies on Azure SQL DB](./how-to-policies-devops-azure-sql-db.md)
+* Doc: [Microsoft Purview DevOps policies on resource groups and subscriptions](./how-to-policies-devops-resource-group.md)
purview How To Policies Devops Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-azure-sql-db.md
Last updated 11/04/2022
-# Provision access to Azure SQL Database for DevOps actions (preview)
+# Provision access to system metadata in Azure SQL Database (preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
Follow this link for the steps to [delete a DevOps policies in Microsoft Purview
>[!Important] > DevOps policies are auto-published and changes can take up to **5 minutes** to be enforced by the data source.
-### Test the policy
+## Test the policy
The Azure AD Accounts referenced in the access policies should now be able to connect to any database in the server to which the policies are published.
-#### Force policy download
+### Force policy download
It is possible to force an immediate download of the latest published policies to the current SQL database by running the following command. The minimal permission required to run it is membership in ##MS_ServerStateManager##-server role. ```sql
It is possible to force an immediate download of the latest published policies t
exec sp_external_policy_refresh reload ```
-#### Analyze downloaded policy state from SQL
+### Analyze downloaded policy state from SQL
The following DMVs can be used to analyze which policies have been downloaded and are currently assigned to Azure AD accounts. The minimal permission required to run them is VIEW DATABASE SECURITY STATE - or assigned Action Group *SQL Security Auditor*. ```sql
Check the blog and related docs
* Video: [Microsoft Purview DevOps policies on data sources and resource groups](https://youtu.be/YCDJagrgEAI) * Video: [Reduce the effort with Microsoft Purview DevOps policies on resource groups](https://youtu.be/yMMXCeIFCZ8) * Doc: [Microsoft Purview DevOps policies on Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md)
+* Doc: [Microsoft Purview DevOps policies on resource groups and subscriptions](./how-to-policies-devops-resource-group.md)
+
purview How To Policies Devops Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-resource-group.md
+
+ Title: Provision access to resource groups and subscriptions for DevOps actions
+description: Step-by-step guide showing how to provision access to entire resource groups and subscriptions through Microsoft Purview DevOps policies
+++++ Last updated : 11/14/2022++
+# Provision access to system metadata in resource groups or subscriptions
+
+[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policy. They allow you to manage access to system metadata (DMVs and DMFs) via the *SQL Performance Monitoring* or *SQL Security Auditing* actions. They can be created only on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal; after they're saved, they're automatically published and then enforced by the data source. Microsoft Purview access policies apply to Azure AD accounts only.
+
+In this guide we cover how to register an entire resource group or subscription and then create a single policy that will manage access to **all** data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards.
+
+## Prerequisites
+
+**Only these data sources are enabled for access policies on resource group or subscription**. Follow the **Prerequisites** section that is specific to the data source(s) in these guides:
+* [DevOps policies on an Azure SQL Database](./how-to-policies-devops-azure-sql-db.md#prerequisites)
+* [DevOps policies on an Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md#prerequisites)
+
+## Microsoft Purview configuration
+
+### Register the subscription or resource group for Data Use Management
+The subscription or resource group needs to be registered with Microsoft Purview before you can create access policies. To register your subscription or resource group, follow the **Prerequisites** and **Register** sections of this guide:
+
+- [Register multiple sources in Microsoft Purview](register-scan-azure-multiple-sources.md#prerequisites)
+
+After you've registered your resources, you'll need to enable the Data Use Management option. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
+
+In the end, your resource will have the **Data Use Management** toggle **Enabled**, as shown in the screenshot:
+
+![Screenshot shows how to register a resource group or subscription for policy by toggling the enable tab in the resource editor.](./media/how-to-policies-data-owner-resource-group/register-resource-group-for-policy.png)
+
+>[!Important]
+> - If you create a policy on a resource group or subscription and want to have it enforced in Arc-enabled SQL servers, you will need to also register those servers independently and enable *Data use management* which captures their App ID: [See this document](./how-to-policies-devops-arc-sql-server.md#register-data-sources-in-microsoft-purview).
++
+## Create a new DevOps policy
+Follow this link for the steps to [create a new DevOps policy in Microsoft Purview](how-to-policies-devops-authoring-generic.md#create-a-new-devops-policy).
+
+## List DevOps policies
+Follow this link for the steps to [list DevOps policies in Microsoft Purview](how-to-policies-devops-authoring-generic.md#list-devops-policies).
+
+## Update a DevOps policy
+Follow this link for the steps to [update a DevOps policy in Microsoft Purview](how-to-policies-devops-authoring-generic.md#update-a-devops-policy).
+
+## Delete a DevOps policy
+Follow this link for the steps to [delete a DevOps policy in Microsoft Purview](how-to-policies-devops-authoring-generic.md#delete-a-devops-policy).
++
+### Test the policy
+To test the policy, see the DevOps policy guides for the underlying data sources listed in the [next steps section](#next-steps) of this document.
+
+## Next steps
+Check the blog and related docs
+* Blog: [Microsoft Purview DevOps policies enable at scale access provisioning for IT operations](https://techcommunity.microsoft.com/t5/microsoft-purview-blog/microsoft-purview-devops-policies-enable-at-scale-access/ba-p/3604725)
+* Video: [Reduce the effort with Microsoft Purview DevOps policies on resource groups](https://youtu.be/yMMXCeIFCZ8)
+* Doc: [Microsoft Purview DevOps policies on Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md)
+* Doc: [Microsoft Purview DevOps policies on Azure SQL DB](./how-to-policies-devops-azure-sql-db.md)
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
To create and run a new scan, do the following:
### Supported policies The following types of policies are supported on this data resource from Microsoft Purview:
-* [DevOps policies](how-to-policies-devops-arc-sql-server.md)
-* [Data Owner](how-to-policies-data-owner-arc-sql-server.md)
+- [DevOps policies](concept-policies-devops.md)
+- [Data owner policies](concept-policies-data-owner.md)
+
+### Access policy prerequisites on Arc-enabled SQL Server
+
+### Configure the Microsoft Purview account for policies
+
+### Register the data source and enable Data use management
+The Arc-enabled SQL Server data source needs to be registered with Microsoft Purview before policies can be created.
+
+1. Sign in to Microsoft Purview Studio.
+
+1. Navigate to the **Data map** feature on the left pane, select **Sources**, then select **Register**. Type "Azure Arc" in the search box and select **SQL Server on Azure Arc**. Then select **Continue**.
+![Screenshot shows how to select a source for registration.](./media/how-to-policies-data-owner-sql/select-arc-sql-server-for-registration.png)
+
+1. Enter a **Name** for this registration. It is best practice to make the name of the registration the same as the server name in the next step.
+
+1. Select an **Azure subscription**, **Server name**, and **Server endpoint**.
+
+1. **Select a collection** to put this registration in.
+
+1. Enable Data Use Management. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
+
+1. Upon enabling Data Use Management, Microsoft Purview will automatically capture the **Application ID** of the App Registration related to this Arc-enabled SQL server. If the association between the Arc-enabled SQL server and the App Registration changes in the future, come back to this screen and select the refresh button next to it.
+
+1. Select **Register** or **Apply** at the bottom.
+
+Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture.
+![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-arc-sql.png)
+
+### Create a policy
+To create an access policy for Arc-enabled SQL Server, follow these guides:
+* [DevOps policy on a single Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md#create-a-new-devops-policy)
+* [Data owner policy on a single Arc-enabled SQL Server](./how-to-policies-data-owner-arc-sql-server.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on a single Arc-enabled SQL Server in your subscription.
+* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The prerequisite is that the subscription or resource group is registered with the Data use management option enabled.
## Next steps Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.-
+- [DevOps policies in Microsoft Purview](concept-policies-devops.md)
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Troubleshoot Policy Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-policy-distribution.md
Previously updated : 11/09/2022 Last updated : 11/14/2022 # Tutorial: troubleshoot distribution of Microsoft Purview access policies (preview)
This guide will use examples for Azure SQL Server as data source.
* Register a data source, enable *Data use management*, and create a policy. To do so, follow one of the Microsoft Purview policies guides. To follow along the examples in this tutorial you can [create a DevOps policy for Azure SQL Database](how-to-policies-devops-azure-sql-db.md) * To establish a bearer token and to call any data plane APIs, see [the documentation about how to call REST APIs for Microsoft Purview data planes](tutorial-using-rest-apis.md). In order to be authorized to fetch policies, you need to be Policy Author, Data Source Admin or Data Curator at root-collection level in Microsoft Purview. You can assign those roles by following this guide: [managing Microsoft Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
-## Overview
+## Overview
There are two ways to fetch access policies from Microsoft Purview:
- Full pull: Provides a complete set of policies for a particular data resource scope.
- Delta pull: Provides an incremental view of policies, that is, what changed since the last pull request, regardless of whether the last pull was a full or a delta one. A full pull is required prior to issuing the first delta pull.
where the path /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName
##### Example parameters: - Microsoft Purview account: relecloud-pv-- Data source Resource ID: /subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1
+- Data source Resource ID: /subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1
##### Example request: ```
-GET https://relecloud-pv.purview.azure.com/pds/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyElements?api-version=2021-01-01-preview
+GET https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyElements?api-version=2021-01-01-preview
``` ##### Example response:
GET https://relecloud-pv.purview.azure.com/pds/subscriptions/b285630c-8185-456b-
"kind": "policy", "updatedAt": "2022-11-04T20:57:20.9389522Z", "version": 1,
- "elementJson": "{\"id\":\"9912572d-58bc-4835-a313-b913ac5bef97\",\"name\":\"Finance-rg_sqlsecurityauditor\",\"kind\":\"policy\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389522Z\",\"decisionRules\":[{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"fromRule\":\"purviewdatarole_builtin_sqlsecurityauditor\",\"attributeName\":\"derived.purview.role\",\"attributeValueIncludes\":\"purviewdatarole_builtin_sqlsecurityauditor\"}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_0235e4df-0d3f-41ca-98ed-edf1b8bfcf9f\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_45fa5236-a2a3-4291-9f0a-813b2883f118\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/databases/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]}]}"
+ "elementJson": "{\"id\":\"9912572d-58bc-4835-a313-b913ac5bef97\",\"name\":\"marketing-rg_sqlsecurityauditor\",\"kind\":\"policy\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389522Z\",\"decisionRules\":[{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"fromRule\":\"purviewdatarole_builtin_sqlsecurityauditor\",\"attributeName\":\"derived.purview.role\",\"attributeValueIncludes\":\"purviewdatarole_builtin_sqlsecurityauditor\"}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_0235e4df-0d3f-41ca-98ed-edf1b8bfcf9f\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_45fa5236-a2a3-4291-9f0a-813b2883f118\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/databases/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]}]}"
}, { "id": "f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4", "scopes": [
- "/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg"
+ "/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"
], "kind": "policyset", "updatedAt": "2022-11-04T20:57:20.9389456Z", "version": 1,
- "elementJson": "{\"id\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"name\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"kind\":\"policyset\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389456Z\",\"preconditionRules\":[{\"dnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}]]}],\"policyRefs\":[\"9912572d-58bc-4835-a313-b913ac5bef97\"]}"
+ "elementJson": "{\"id\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"name\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"kind\":\"policyset\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389456Z\",\"preconditionRules\":[{\"dnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}]]}],\"policyRefs\":[\"9912572d-58bc-4835-a313-b913ac5bef97\"]}"
} ] }
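For a quick check from a shell, the same full-pull request can be sent with `curl`, assuming you've already acquired a bearer token as described in the prerequisites; the token handling below is a placeholder sketch that reuses the example account and resource names above:

```bash
# Hedged sketch: issue the full-pull GET shown above with curl.
# $TOKEN must hold a bearer token for the Microsoft Purview data plane (see the prerequisites).
TOKEN="<bearer-token>"
curl -sS \
  -H "Authorization: Bearer $TOKEN" \
  "https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyElements?api-version=2021-01-01-preview"
```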
Provide the syncToken you got from the prior pull in any successive delta pulls.
##### Example parameters: - Microsoft Purview account: relecloud-pv-- Data source Resource ID: /subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1
+- Data source Resource ID: /subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1
- syncToken: 820:0 ##### Example request: ```
-https://relecloud-pv.purview.azure.com/pds/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyEvents?api-version=2021-01-01-preview&syncToken=820:0
+https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyEvents?api-version=2021-01-01-preview&syncToken=820:0
``` ##### Example response:
https://relecloud-pv.purview.azure.com/pds/subscriptions/b285630c-8185-456b-80ae
"eventType": "Microsoft.Purview/PolicyElements/Delete", "id": "f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4", "scopes": [
- "/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg"
+ "/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"
], "kind": "policyset", "updatedAt": "2022-11-04T20:57:20.9389456Z", "version": 1,
- "elementJson": "{\"id\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"name\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"kind\":\"policyset\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389456Z\",\"preconditionRules\":[{\"dnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}]]}],\"policyRefs\":[\"9912572d-58bc-4835-a313-b913ac5bef97\"]}"
+ "elementJson": "{\"id\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"name\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"kind\":\"policyset\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389456Z\",\"preconditionRules\":[{\"dnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}]]}],\"policyRefs\":[\"9912572d-58bc-4835-a313-b913ac5bef97\"]}"
}, { "eventType": "Microsoft.Purview/PolicyElements/Delete", "id": "9912572d-58bc-4835-a313-b913ac5bef97", "scopes": [
- "/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg"
+ "/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"
], "kind": "policy", "updatedAt": "2022-11-04T20:57:20.9389522Z", "version": 1,
- "elementJson": "{\"id\":\"9912572d-58bc-4835-a313-b913ac5bef97\",\"name\":\"Finance-rg_sqlsecurityauditor\",\"kind\":\"policy\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389522Z\",\"decisionRules\":[{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"fromRule\":\"purviewdatarole_builtin_sqlsecurityauditor\",\"attributeName\":\"derived.purview.role\",\"attributeValueIncludes\":\"purviewdatarole_builtin_sqlsecurityauditor\"}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_0235e4df-0d3f-41ca-98ed-edf1b8bfcf9f\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_45fa5236-a2a3-4291-9f0a-813b2883f118\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/databases/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]}]}"
+ "elementJson": "{\"id\":\"9912572d-58bc-4835-a313-b913ac5bef97\",\"name\":\"marketing-rg_sqlsecurityauditor\",\"kind\":\"policy\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389522Z\",\"decisionRules\":[{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"fromRule\":\"purviewdatarole_builtin_sqlsecurityauditor\",\"attributeName\":\"derived.purview.role\",\"attributeValueIncludes\":\"purviewdatarole_builtin_sqlsecurityauditor\"}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_0235e4df-0d3f-41ca-98ed-edf1b8bfcf9f\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_45fa5236-a2a3-4291-9f0a-813b2883f118\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/databases/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]}]}"
} ] } ```
-In this example, the delta pull communicates the event that the policy on the resource group Finance-rg, which had the scope ```"scopes": ["/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg"]``` was deleted, per the ```"eventType": "Microsoft.Purview/PolicyElements/Delete"```.
+In this example, the delta pull communicates the event that the policy on the resource group marketing-rg, which had the scope ```"scopes": ["/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"]``` was deleted, per the ```"eventType": "Microsoft.Purview/PolicyElements/Delete"```.
## Policy constructs
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
Previously updated : 05/12/2021 Last updated : 11/14/2022 # Customer Lockbox for Microsoft Azure
security Encryption Atrest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-atrest.md
na Previously updated : 11/10/2022 Last updated : 11/14/2022
security Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-overview.md
Previously updated : 10/26/2021 Last updated : 11/14/2022 # Azure encryption overview
service-fabric Service Fabric Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started.md
For the latest Runtime and SDK, you can download from the links below:
| Package | Version |
| --- | --- |
-|[Install Service fabric runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.9.1.1390.9590.exe) | 9.1.1390 |
+|[Install Service Fabric Runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.9.1.1390.9590.exe) | 9.1.1390 |
|[Install Service Fabric SDK](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.6.1.1390.msi) | 6.1.1390 |

You can find direct links to the installers for previous releases on [Service Fabric Releases](https://github.com/microsoft/service-fabric/tree/master/release_notes)
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
Title: Enable Microsoft Defender for Storage
+ Title: Protect your Azure storage accounts using Microsoft Defender for Cloud
-description: Configure Microsoft Defender for Storage to detect anomalies in account activity and be notified of potentially harmful attempts to access your account.
+description: Configure Microsoft Defender for Storage to detect anomalies in account activity and be notified of potentially harmful attempts to access the storage accounts in your subscription.
# Enable Microsoft Defender for Storage
-> [!NOTE]
-> A new pricing plan is now available for Microsoft Defender for Cloud that charges you according to the number of storage accounts that you protect (per-storage).
->
-> In the legacy pricing plan, the cost increases according to the number of analyzed transactions in the storage account (per-transaction). The new per-storage account plan fixes costs per storage account, but accounts with an exceptionally high transaction volume incur an overage charge.
->
-> For details about the pricing plans, see [Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-- **Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks. Microsoft Defender for Storage continuously analyzes the transactions of [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/), [Azure Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), and [Azure Files](https://azure.microsoft.com/services/storage/files/) services. When potentially malicious activities are detected, security alerts are generated. Alerts are shown in Microsoft Defender for Cloud with the details of the suspicious activity, appropriate investigation steps, remediation actions, and security recommendations.
Analyzed transactions of Azure Blob Storage include operation types such as `G
**Defender for Storage doesn't access the Storage account data, doesn't require you to enable access logs, and has no impact on Storage performance.**
+> [!NOTE]
+> A new pricing option is now available for Microsoft Defender for Cloud that charges you according to the number of storage accounts that you protect (per-storage account).
+>
+> In the legacy pricing, the cost increases according to the number of analyzed transactions in the storage account (per-transaction). The new per-storage account pricing fixes the cost per storage account, but accounts with an exceptionally high transaction volume incur an overage charge.
+>
+> For details about Defender for Storage pricing, see [Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+ Learn more about the [benefits, features, and limitations of Defender for Storage](../../defender-for-cloud/defender-for-storage-introduction.md). You can also learn more about Defender for Storage in the [Defender for Storage episode](../../defender-for-cloud/episode-thirteen.md) of the Defender for Cloud in the Field video series. ## Availability
Learn more about the [benefits, features, and limitations of Defender for Storag
|Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
+|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) and in the [Defender plans page](#azure-portal) in the Azure portal |
|Protected storage types:|[Blob Storage](../blobs/storage-blobs-introduction.md) (Standard/Premium StorageV2, Block Blobs) <br>[Azure Files](../files/storage-files-introduction.md) (over REST API and SMB)<br>[Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) (Standard/Premium accounts with hierarchical namespaces enabled)| |Clouds:|:::image type="icon" source="../../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="../../defender-for-cloud/media/icons/yes-icon.png"::: Azure Government (Only for per-transaction plan)<br>:::image type="icon" source="../../defender-for-cloud/media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="../../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts|
-## Set up Microsoft Defender for Storage for the per-storage account pricing plan
+## Set up Microsoft Defender for Storage
+
+## [Per-storage account pricing](#tab/per-storage-account/)
> [!NOTE]
-> You can only enable the per-storage account pricing plan at the subscription level.
+> You can only enable per-storage account pricing at the subscription level.
-With the Defender for Storage per-storage account pricing plan, you can configure Microsoft Defender for Storage on your subscriptions in several ways. When the plan is enabled at the subscription level, Microsoft Defender for Storage is automatically enabled for all your existing and new storage accounts created under that subscription.
+With the Defender for Storage per-storage account pricing, you can configure Defender for Storage on your subscriptions in several ways to protect all your existing and new storage accounts in that subscription.
You can configure Microsoft Defender for Storage on your subscriptions in several ways:
You can configure Microsoft Defender for Storage on your subscriptions in severa
### Azure portal
-To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using the Azure portal:
+To enable Microsoft Defender for Storage at the subscription level with per-storage account pricing using the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com/).
To enable Microsoft Defender for Storage at the subscription level with the per-
:::image type="content" source="media/azure-defender-storage-configure/defender-for-cloud-select-subscription.png" alt-text="Screenshot showing how to select a subscription in Defender for Cloud." lightbox="media/azure-defender-storage-configure/defender-for-cloud-select-subscription.png":::
-1. In the Defender plans page, to enable Defender for Storage either:
+1. In the Defender plans page, to enable Defender for Storage per-storage account pricing either:
- Select **Enable all Microsoft Defender plans** to enable Microsoft Defender for Cloud in the subscription.
 - For Microsoft Defender for Storage, select **On** to turn on Defender for Storage, and select **Save**.
+ - If you currently have Defender for Storage enabled with per-transaction pricing, select the **New pricing plan available** link and confirm the pricing change.
:::image type="content" source="media/azure-defender-storage-configure/enable-azure-defender-security-center.png" alt-text="Screenshot showing how to enable Defender for Storage in Defender for Cloud." lightbox="media/azure-defender-storage-configure/enable-azure-defender-security-center.png":::
Microsoft Defender for Storage is now enabled for this storage account.
To disable the plan, select **Off** for Defender for Storage in the Defender plans page.
-### Bicep template
+### Enable per-storage account pricing programmatically
+
+#### Bicep template
-To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using [Bicep](../../azure-resource-manager/bicep/overview.md), add the following to your Bicep template:
+To enable Microsoft Defender for Storage at the subscription level with per-storage account pricing using [Bicep](../../azure-resource-manager/bicep/overview.md), add the following to your Bicep template:
```bicep resource symbolicname 'Microsoft.Security/pricings@2022-03-01' = {
To disable the plan, set the `pricingTier` property value to `Free` and remove t
Learn more about the [Bicep template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-bicep&source=docs).
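As a usage sketch (not part of the original guidance), a subscription-scoped Bicep file that contains this pricing resource could be deployed with the Azure CLI; the template file name is a placeholder:

```bash
# Hedged sketch: deploy a subscription-scoped Bicep file (targetScope = 'subscription')
# that contains the Microsoft.Security/pricings resource. File name and location are placeholders.
az deployment sub create \
  --location eastus \
  --template-file defender-for-storage.bicep
```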
-### ARM template
+#### ARM template
-To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using an ARM template, add this JSON snippet to the resources section of your ARM template:
+To enable Microsoft Defender for Storage at the subscription level with per-storage account pricing using an ARM template, add this JSON snippet to the resources section of your ARM template:
```json {
To disable the plan, set the `pricingTier` property value to `Free` and remove t
Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-arm-template).
-### Terraform template
+#### Terraform template
-To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using a Terraform template, add this code snippet to your template with your subscription ID as the `parent_id` value:
+To enable Microsoft Defender for Storage at the subscription level with per-storage account pricing using a Terraform template, add this code snippet to your template with your subscription ID as the `parent_id` value:
```terraform resource "azapi_resource" "symbolicname" {
To disable the plan, set the `pricingTier` property value to `Free` and remove t
Learn more about the [Terraform template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-terraform).
-### REST API
+#### REST API
-To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using the Microsoft Defender for Cloud REST API, create a PUT request with this endpoint and body:
+To enable Microsoft Defender for Storage at the subscription level with per-storage account pricing using the Microsoft Defender for Cloud REST API, create a PUT request with this endpoint and body:
```http PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2022-03-01
PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Micros
Replace `{subscriptionId}` with your subscription ID. > [!TIP]
-> You can use the [Get](/rest/api/defenderforcloud/pricings/get.md) and [List](/rest/api/defenderforcloud/pricings/list.md) API requests to see all of the Defender for Cloud plans that are enabled for the subscription.
+> You can use the [Get](/rest/api/defenderforcloud/pricings/get) and [List](/rest/api/defenderforcloud/pricings/list) API requests to see all of the Defender for Cloud plans that are enabled for the subscription.
To disable the plan, set the `pricingTier` property value to `Free` and remove the `subPlan` parameter.
-Learn more about the [updating Defender plans with the REST API](/rest/api/defenderforcloud/pricings/update.md) in HTTP, Java, Go and JavaScript.
+Learn more about [updating Defender plans with the REST API](/rest/api/defenderforcloud/pricings/update) in HTTP, Java, Go, and JavaScript.
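For readers working from a shell, the same PUT can also be sent with the Azure CLI's `az rest` command. This is a hedged sketch: the `subPlan` value `DefenderForStorageV2` is an assumption for the per-storage account plan and should be verified against the pricings API reference:

```bash
# Hedged sketch: send the documented PUT with az rest. Replace {subscriptionId};
# the subPlan value below is an assumption for the per-storage account plan - verify it
# against the Microsoft.Security/pricings API reference before use.
az rest --method put \
  --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2022-03-01" \
  --body '{"properties": {"pricingTier": "Standard", "subPlan": "DefenderForStorageV2"}}'
```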
-## Set up Microsoft Defender for Storage for the per-transaction pricing plan
+## [Per-transaction pricing](#tab/per-transaction/)
-For the Defender for Storage per-transaction pricing plan, we recommend that you [configure the plan for each subscription](#set-up-the-per-transaction-pricing-plan-for-a-subscription) so that all existing and new storage accounts are protected. If you want to only protect specific accounts, [configure the plan for each account](#set-up-the-per-transaction-pricing-plan-for-an-account).
+For the Defender for Storage per-transaction pricing, we recommend that you enable Defender for Storage for each subscription so that all existing and new storage accounts are protected. If you want to only protect specific accounts, [configure Defender for Storage for each account](#set-up-per-transaction-pricing-for-an-account).
-### Set up the per-transaction pricing plan for a subscription
+### Set up per-transaction pricing for a subscription
You can configure Microsoft Defender for Storage on your subscriptions in several ways:
You can configure Microsoft Defender for Storage on your subscriptions in severa
#### Bicep template
-To enable Microsoft Defender for Storage at the subscription level with the per-transaction plan using [Bicep](../../azure-resource-manager/bicep/overview.md), add the following to your Bicep template:
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using [Bicep](../../azure-resource-manager/bicep/overview.md), add the following to your Bicep template:
```bicep resource symbolicname 'Microsoft.Security/pricings@2022-03-01' = {
Learn more about the [Bicep template AzAPI reference](/azure/templates/microsoft
#### ARM template
-To enable Microsoft Defender for Storage at the subscription level with the per-transaction plan using an ARM template, add this JSON snippet to the resources section of your ARM template:
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using an ARM template, add this JSON snippet to the resources section of your ARM template:
```json {
Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.s
#### Terraform template
-To enable Microsoft Defender for Storage at the subscription level with the per-transaction plan using a Terraform template, add this code snippet to your template with your subscription ID as the `parent_id` value:
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using a Terraform template, add this code snippet to your template with your subscription ID as the `parent_id` value:
```terraform resource "azapi_resource" "symbolicname" {
Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.s
#### PowerShell
-To enable Microsoft Defender for Storage at the subscription level with the per-transaction plan using PowerShell:
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using PowerShell:
1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps.md). 1. Use the `Connect-AzAccount` cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps.md).
Learn more about the [using PowerShell with Microsoft Defender for Cloud](../../
#### Azure CLI
-To enable Microsoft Defender for Storage at the subscription level with the per-transaction plan using Azure CLI:
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using Azure CLI:
1. If you don't have it already, [install the Azure CLI](/cli/azure/install-azure-cli). 1. Use the `az login` command to sign in to your Azure account. Learn more about [signing in to Azure with Azure CLI](/cli/azure/authenticate-azure-cli).
Learn more about the [`az security pricing create`](/cli/azure/security/pricing.
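As a hedged sketch of the enablement step, assuming per-transaction pricing is what applies when no sub-plan is specified:

```bash
# Hedged sketch: enable Defender for Storage at the subscription level with the Azure CLI.
# Assumption: per-transaction pricing applies when no sub-plan is specified.
az security pricing create --name StorageAccounts --tier standard
```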
#### REST API
-To enable Microsoft Defender for Storage at the subscription level with the per-transaction plan using the Microsoft Defender for Cloud REST API, create a PUT request with this endpoint and body:
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using the Microsoft Defender for Cloud REST API, create a PUT request with this endpoint and body:
```http PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2022-03-01
Replace `{subscriptionId}` with your subscription ID.
To disable the plan, set the `pricingTier` property value to `Free` and remove the `subPlan` parameter.
-Learn more about the [updating Defender plans with the REST API](/rest/api/defenderforcloud/pricings/update.md) in HTTP, Java, Go and JavaScript.
+Learn more about [updating Defender plans with the REST API](/rest/api/defenderforcloud/pricings/update) in HTTP, Java, Go, and JavaScript.
-### Set up the per-transaction pricing plan for an account
+### Set up per-transaction pricing for an account
-You can configure Microsoft Defender for Storage on your accounts in several ways:
+You can configure Microsoft Defender for Storage with per-transaction pricing on your accounts in several ways:
- [Azure portal](#azure-portal-1) - [ARM template](#arm-template-2)
You can configure Microsoft Defender for Storage on your accounts in several way
#### Azure portal
-To enable Microsoft Defender for Storage for a specific account with the per-transaction plan using the Azure portal:
+To enable Microsoft Defender for Storage for a specific account with per-transaction pricing using the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Navigate to your storage account.
1. In the Security + networking section of the Storage account menu, select **Microsoft Defender for Cloud**.
1. Select **Enable Defender on this storage account only**.

Microsoft Defender for Storage is now enabled for this storage account. If you want to disable Defender for Storage on the account, select **Disable**.

#### ARM template
-To enable Microsoft Defender for Storage for a specific storage account with the per-transaction plan using an ARM template, use [the prepared Azure template](https://azure.microsoft.com/resources/templates/storage-advanced-threat-protection-create/).
+To enable Microsoft Defender for Storage for a specific storage account with per-transaction pricing using an ARM template, use [the prepared Azure template](https://azure.microsoft.com/resources/templates/storage-advanced-threat-protection-create/).
If you want to disable Defender for Storage on the account:
If you want to disable Defender for Storage on the account:
#### PowerShell
-To enable Microsoft Defender for Storage for a specific storage account with the per-transaction plan using PowerShell:
+To enable Microsoft Defender for Storage for a specific storage account with per-transaction pricing using PowerShell:
1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps.md). 1. Use the Connect-AzAccount cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps.md).
To enable Microsoft Defender for Storage for a specific storage account with the
Replace `<subscriptionId>`, `<resource-group>`, and `<storage-account>` with the values for your environment.
-If you want to disable the per-transaction plan for a specific storage account, use the [`Disable-AzSecurityAdvancedThreatProtection`](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection.md) cmdlet:
+If you want to disable per-transaction pricing for a specific storage account, use the [`Disable-AzSecurityAdvancedThreatProtection`](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection.md) cmdlet:
```powershell Disable-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/"
Learn more about the [using PowerShell with Microsoft Defender for Cloud](../../
#### Azure CLI
-To enable Microsoft Defender for Storage for a specific storage account with the per-transaction plan using Azure CLI:
+To enable Microsoft Defender for Storage for a specific storage account with per-transaction pricing using Azure CLI:
1. If you don't have it already, [install the Azure CLI](/cli/azure/install-azure-cli). 1. Use the `az login` command to sign in to your Azure account. Learn more about [signing in to Azure with Azure CLI](/cli/azure/authenticate-azure-cli).
az security atp storage update \
Learn more about the [az security atp storage](/cli/azure/security/atp/storage#az-security-atp-storage-update) command.
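A minimal sketch of that command for a single account, with placeholder resource group and storage account names:

```bash
# Hedged sketch: enable Defender for Storage (per-transaction) on one storage account.
# Resource group and storage account names are placeholders.
az security atp storage update \
  --resource-group myResourceGroup \
  --storage-account mystorageaccount \
  --is-enabled true
```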
-## FAQ - Microsoft Defender for Storage pricing plans
+
-### Can I switch from an existing per-transaction plan to the per-storage account plan?
+## FAQ - Microsoft Defender for Storage pricing
+### Can I switch from an existing per-transaction pricing to per-storage account pricing?
-Yes, you can migrate to the per-storage account plan from the Azure portal or all the other supported enablement methods. To migrate to the per-storage account plan, [enable the per-storage account plan at the subscription level](#set-up-microsoft-defender-for-storage-for-the-per-storage-account-pricing-plan).
+Yes, you can migrate to per-storage account pricing in the Azure portal or using any of the other supported enablement methods. To migrate to per-storage account pricing, [enable per-storage account pricing at the subscription level](#set-up-microsoft-defender-for-storage).
-### Can I return to the per-transaction plan after switching to the per-storage account plan?
+### Can I return to per-transaction pricing after switching to per-storage account pricing?
-Yes, you can enable the per-transaction to migrate back from the per-storage account plan using all enablement methods except for the Azure portal.
+Yes, you can [enable per-transaction pricing](#set-up-microsoft-defender-for-storage) to migrate back from per-storage account pricing using all enablement methods except for the Azure portal.
-### Will you continue supporting the per-transaction plan?
+### Will you continue supporting per-transaction pricing?
-Yes, you can [enable the per-transaction plan](#set-up-microsoft-defender-for-storage-for-the-per-transaction-pricing-plan) from all the enablement methods, except for the Azure portal.
+Yes, you can [enable per-transaction pricing](#set-up-microsoft-defender-for-storage) from all the enablement methods, except for the Azure portal.
-### Can I exclude specific storage accounts from protections in the per-storage account plan?
+### Can I exclude specific storage accounts from protections in per-storage account pricing?
-No, you can only enable the per-storage account pricing plan for each subscription. All storage accounts in the subscription are protected.
+No, you can only enable per-storage account pricing for each subscription. All storage accounts in the subscription are protected.
-### How long does it take for the per-storage account plan to be enabled?
+### How long does it take for per-storage account pricing to be enabled?
-When you enable Microsoft Defender for Storage at the subscription level for the per-storage account or per-transaction plans, it takes up to 24 hours for the plan to be enabled.
+When you enable Microsoft Defender for Storage at the subscription level for per-storage account or per-transaction pricing, it takes up to 24 hours for the plan to be enabled.
-### Is there any difference in the feature set of the per-storage account plan compared to the legacy per-transaction plan?
+### Is there any difference in the feature set of per-storage account pricing compared to the legacy per-transaction pricing?
-No. Both the per-storage account and per-transaction plans include the same features. The only difference is the pricing plan.
+No. Both per-storage account and per-transaction pricing include the same features. The only difference is the pricing.
-### How can I estimate the cost of the pricing plans?
+### How can I estimate the cost of each pricing option?
-To estimate the cost of each of the pricing plans for your environment, we created a [pricing estimation workbook](https://aka.ms/dfstoragecosttool) and a PowerShell script that you can run in your environment.
+To estimate the cost of each pricing option for your environment, we created a [pricing estimation workbook](https://aka.ms/dfstoragecosttool) and a PowerShell script that you can run in your environment.
## Next steps
synapse-analytics Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md
Learn how to use backup and restore in Azure Synapse Dedicated SQL pool. Use ded
A *data warehouse snapshot* creates a restore point you can leverage to recover or copy your data warehouse to a previous state. Since dedicated SQL pool is a distributed system, a data warehouse snapshot consists of many files that are located in Azure storage. Snapshots capture incremental changes from the data stored in your data warehouse.
-A *data warehouse restore* is a new data warehouse that is created from a restore point of an existing or deleted data warehouse. Restoring your data warehouse is an essential part of any business continuity and disaster recovery strategy because it re-creates your data after accidental corruption or deletion. Data warehouse snapshot is also a powerful mechanism to create copies of your data warehouse for test or development purposes. Dedicated SQL pool restore rates can vary depending on the database size and location of the source and target data warehouse.
+A *data warehouse restore* is a new data warehouse that is created from a restore point of an existing or deleted data warehouse. Restoring your data warehouse is an essential part of any business continuity and disaster recovery strategy because it re-creates your data after accidental corruption or deletion. Data warehouse snapshot is also a powerful mechanism to create copies of your data warehouse for test or development purposes.
+
+> [!NOTE]
+> Dedicated SQL pool Recovery Time Objective (RTO) rates can vary. Factors that may affect the recovery (restore) time:
+> - The database size
+> - The location of the source and target data warehouse (i.e., geo-restore)
## Automatic Restore Points
virtual-desktop App Attach Image Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-image-prep.md
To expand the MSIX image:
>[!NOTE] > If you're using packages from the Microsoft Store for Business or Education on your network or on devices not connected to the internet, you'll need to download and install package licenses from the Microsoft Store to run the apps. To get the licenses, see [Use packages offline](app-attach.md#use-packages-offline).
-6. Go the mounted VHD and open the app folder to make sure the package contents are there.
+6. Go to the mounted VHD and open the app folder to make sure the package contents are there.
7. Unmount the VHD.
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resize-vm.md
Previously updated : 11/14/2021 Last updated : 11/14/2022
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 11/08/2022 Last updated : 11/14/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log -- November 07, 2022: Added HANA hook susChkSrv for scale-up pacemaker cluster in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [High availability of SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md).
+- November 14, 2022: Provided more details about the nconnect mount option in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
+- November 14, 2022: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to update suggested timeouts for `FileSystem` Pacemaker cluster resources
+- November 07, 2022: Added HANA hook susChkSrv for scale-up pacemaker cluster in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [High availability of SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md)
- November 07, 2022: Added monitor operation for azure-lb resource in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [SAP HANA scale-out with HSR and Pacemaker on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [Set up IBM Db2 HADR on Azure virtual machines (VMs)](dbms-guide-ha-ibm.md), [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md), [Azure VMs high availability for SAP NetWeaver on SLES](high-availability-guide-suse.md), [High availability for NFS on Azure VMs on SLES](high-availability-guide-suse-nfs.md), [Azure VMs high availability for SAP NetWeaver on SLES multi-SID guide](high-availability-guide-suse-multi-sid.md) - October 31, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) to fix script location for DRBD 9.0 - October 31, 2022: Change in [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) to update the guideline for sizing `/hana/shared`
virtual-machines Hana Vm Operations Netapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-netapp.md
keywords: 'SAP, Azure, ANF, HANA, Azure NetApp Files, snapshot'
Previously updated : 02/07/2022 Last updated : 11/14/2022
When considering Azure NetApp Files for the SAP Netweaver and SAP HANA, be aware
> [!IMPORTANT] > If there's a mismatch between User ID for <b>sid</b>adm and the Group ID for `sapsys` between the virtual machine and the Azure NetApp configuration, the permissions for files on Azure NetApp volumes, mounted to the VM, would be displayed as `nobody`. Make sure to specify the correct User ID for <b>sid</b>adm and the Group ID for `sapsys`, when [on-boarding a new system](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRxjSlHBUxkJBjmARn57skvdUQlJaV0ZBOE1PUkhOVk40WjZZQVJXRzI2RC4u) to Azure NetApp Files.
+## NCONNECT mount option
+Nconnect is a mount option for NFS volumes hosted on ANF that allows the NFS client to open multiple sessions against a single NFS volume. Using nconnect with a value larger than 1 also triggers the NFS client to use more than one RPC session on the client side (in the guest OS) to handle the traffic between the guest OS and the mounted NFS volumes. Using multiple sessions to handle the traffic of one NFS volume, together with multiple RPC sessions, can address performance and throughput scenarios like:
+
+- Mounting of multiple ANF hosted NFS volumes with different [service levels](../../../azure-netapp-files/azure-netapp-files-service-levels.md#supported-service-levels) in one VM
+- The maximum write throughput for a volume and a single Linux session is between 1.2 and 1.4 GB/s. Having multiple sessions against one ANF-hosted NFS volume can increase the throughput.
+
+For Linux OS releases that support nconnect as a mount option and some important configuration considerations of nconnect, especially with different NFS server endpoints, read the document [Linux NFS mount options best practices for Azure NetApp Files](../../../azure-netapp-files/performance-linux-mount-options.md).
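For illustration only, here's a hedged sketch of a manual NFSv4.1 mount that adds `nconnect` to mount options similar to those used elsewhere in this document; the IP address, export path, mount point, and `nconnect` value are placeholders and depend on your environment and OS support:

```bash
# Hedged sketch: mount an ANF NFSv4.1 volume with nconnect=4 so the NFS client opens
# multiple sessions against the same volume. IP address, export path, and mount point
# are placeholders; reuse the mount options documented for your setup.
sudo mkdir -p /hana/data
sudo mount -t nfs -o rw,nconnect=4,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime \
  10.32.2.4:/hanadb1-data-mnt00001 /hana/data
```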
+ ## Sizing for HANA database on Azure NetApp Files
Important to understand is the performance relationship the size and that there
The table below demonstrates that it could make sense to create a large "Standard" volume to store backups and that it doesn't make sense to create an "Ultra" volume larger than 12 TB because the maximal physical bandwidth capacity of a single volume would be exceeded.
-The maximum write throughput for a volume and a single Linux session is between 1.2 and 1.4 GB/s. If you require more throughput for /han). For more details on HANA data volume striping read these articles:
+If you require more write throughput for your **/hana/data** volume than a single Linux session can provide, you could also use SAP HANA data volume partitioning as an alternative. SAP HANA data volume partitioning stripes the I/O activity during data reload or HANA savepoints across multiple HANA data files that are located on multiple NFS shares. For more details on HANA data volume striping, read these articles:
- [The HANA Administrator's Guide](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.05/en-US/40b2b2a880ec4df7bac16eae3daef756.html?q=hana%20data%20volume%20partitioning) - [Blog about SAP HANA ΓÇô Partitioning Data Volumes](https://blogs.sap.com/2020/10/07/sap-hana-partitioning-data-volumes/)
Therefore you could consider to deploy similar throughput for the ANF volumes as
Documentation on how to deploy an SAP HANA scale-out configuration with standby node using NFS v4.1 volumes that are hosted in ANF is published in [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md).

## Linux Kernel Settings
-To successfully deploy SAP HANA on ANF Linux kernel settings need to be implemented according to SAP note [3024346](https://launchpad.support.sap.com/#/notes/3024346).
+To successfully deploy SAP HANA on ANF, Linux kernel settings need to be implemented according to SAP note [3024346](https://launchpad.support.sap.com/#/notes/3024346).
-For systems using High Availability (HA) using pacemaker and Azure Load Balancer following settings need to be implemeneted in file /etc/sysctl.d/91-NetApp-HANA.conf
+For systems using High Availability (HA) with pacemaker and Azure Load Balancer, the following settings need to be implemented in the file /etc/sysctl.d/91-NetApp-HANA.conf
```
net.core.rmem_max = 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
```
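To load the settings from that drop-in file without rebooting, a command along these lines can be used (a sketch; the file path matches the example above):

```bash
# Re-read all sysctl drop-in files, including /etc/sysctl.d/91-NetApp-HANA.conf.
sudo sysctl --system
```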
-Systems running with no pacemaker and Azure Load Balancer should implemented these settings in /etc/sysctl.d/91-NetApp-HANA.conf
+Systems running without pacemaker and Azure Load Balancer should implement these settings in /etc/sysctl.d/91-NetApp-HANA.conf
``` net.core.rmem_max = 16777216
SAP HANA supports:
Creating storage-based snapshot backups is a simple four-step procedure (an illustrative hdbsql sketch follows the list):
1. Creating a HANA (internal) database snapshot - an activity you or tools need to perform
-1. SAP HANA write data to the datafiles to create a consistent state on the storage - HANA performs this step as a result of creating a HANA snapshot
+1. SAP HANA writes data to the datafiles to create a consistent state on the storage - HANA performs this step as a result of creating a HANA snapshot
1. Create a snapshot on the **/hana/data** volume on the storage - a step you or tools need to perform. There's no need to perform a snapshot on the **/hana/log** volume
1. Delete the HANA (internal) database snapshot and resume normal operation - a step you or tools need to perform
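As a hedged illustration of steps 1 and 4 (the hdbsql connection values and the backup ID are placeholders; verify the exact SQL for your HANA revision), the internal database snapshot could be created and later closed like this:

```bash
# Step 1: create the HANA internal database snapshot (placeholder connection values).
hdbsql -i 03 -d SYSTEMDB -u SYSTEM -p '<password>' \
  "BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'ANF storage snapshot'"

# Step 4: after the storage snapshot on /hana/data succeeded, confirm and close the HANA snapshot.
hdbsql -i 03 -d SYSTEMDB -u SYSTEM -p '<password>' \
  "BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID <backup_id> SUCCESSFUL 'ANF storage snapshot'"
```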
virtual-machines Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md
vm-linux Previously updated : 05/10/2022 Last updated : 11/14/2022
In this example each cluster node has its own HANA NFS filesystems /hana/shared,
2. **[1]** Create the Filesystem resources for the **hanadb1** mounts.
```
- pcs resource create hana_data1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-data-mnt00001 directory=/hana/data fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
- pcs resource create hana_log1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-log-mnt00001 directory=/hana/log fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
- pcs resource create hana_shared1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-shared-mnt00001 directory=/hana/shared fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
+ pcs resource create hana_data1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-data-mnt00001 directory=/hana/data fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
+ pcs resource create hana_log1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-log-mnt00001 directory=/hana/log fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
+ pcs resource create hana_shared1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-shared-mnt00001 directory=/hana/shared fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
```
3. **[2]** Create the Filesystem resources for the **hanadb2** mounts.
```
- pcs resource create hana_data2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-data-mnt00001 directory=/hana/data fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
- pcs resource create hana_log2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-log-mnt00001 directory=/hana/log fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
- pcs resource create hana_shared2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-shared-mnt00001 directory=/hana/shared fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
+ pcs resource create hana_data2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-data-mnt00001 directory=/hana/data fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
+ pcs resource create hana_log2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-log-mnt00001 directory=/hana/log fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
+ pcs resource create hana_shared2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-shared-mnt00001 directory=/hana/shared fstype=nfs options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
```
- `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation so that each monitor performs a read/write test on the filesystem. Without this attribute, the monitor operation only verifies that the filesystem is mounted. This can be a problem because when connectivity is lost, the filesystem may remain mounted despite being inaccessible.
+ `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation so that each monitor performs a read/write test on the filesystem. Without this attribute, the monitor operation only verifies that the filesystem is mounted. This can be a problem because when connectivity is lost, the filesystem may remain mounted despite being inaccessible.
- `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, then start all the resources that depend on the failed resource. Not only can this behavior take a long time when an SAPHana resource depends on the failed resource, but it also can fail altogether. The SAPHana resource cannot stop successfully if the NFS server holding the HANA executables is inaccessible.
+ `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, then start all the resources that depend on the failed resource. Not only can this behavior take a long time when an SAPHana resource depends on the failed resource, but it also can fail altogether. The SAPHana resource cannot stop successfully if the NFS server holding the HANA executables is inaccessible.
+
+ The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals. For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf). The timeouts in the above configuration may need to be adapted to the specific SAP setup. A short verification sketch follows.
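After the resources are created, the operation settings that are actually in effect (monitor interval, timeout, on-fail, and OCF_CHECK_LEVEL) can be reviewed. A minimal sketch, assuming a recent pcs release that provides the `config` subcommand (older releases use `pcs resource show`):

```bash
# Show the configured operations and attributes of the hanadb1 NFS resource group.
sudo pcs resource config hanadb1_nfs
```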
4. **[1]** Configuring Location Constraints
virtual-machines Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-rhel.md
vm-windows Previously updated : 05/10/2022 Last updated : 11/14/2022
For the next part of this process, you need to create file system resources. Her
```bash
# /hana/shared file system for site 1
pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared \
- fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,intr,noatime,sec=sys,vers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 \
+ fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,intr,noatime,sec=sys,vers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
op start interval=0 timeout=120 op stop interval=0 timeout=120

# /hana/shared file system for site 2
pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s2 directory=/hana/shared \
- fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,intr,noatime,sec=sys,vers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 \
+ fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,intr,noatime,sec=sys,vers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
op start interval=0 timeout=120 op stop interval=0 timeout=120

# clone the /hana/shared file system resources for both site1 and site2
For the next part of this process, you need to create file system resources. Her
The `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system might remain mounted, despite being inaccessible.
- The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, and then start all the resources that depend on the failed resource. Not only can this behavior take a long time when an SAP HANA resource depends on the failed resource, but it also can fail altogether. The SAP HANA resource can't stop successfully, if the NFS share holding the HANA binaries is inaccessible.
+ The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, and then start all the resources that depend on the failed resource. Not only can this behavior take a long time when an SAP HANA resource depends on the failed resource, but it also can fail altogether. The SAP HANA resource can't stop successfully, if the NFS share holding the HANA binaries is inaccessible.
+
+ The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals. For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf). The timeouts in the above configuration may need to be adapted to the specific SAP setup.
+
1. **[1]** Configure and verify the node attributes. All SAP HANA DB nodes on replication site 1 are assigned attribute `S1`, and all SAP HANA DB nodes on replication site 2 are assigned attribute `S2`.
virtual-network Quickstart Create Nat Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md
Last updated 11/11/2022-+ # Quickstart: Create a NAT gateway using the Azure portal
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
Title: Create, change, or delete an Azure virtual network peering | Microsoft Docs
-description: Create, change, or delete a virtual network peering. With virtual network peering, you connect virtual networks in the same region and across regions.
+ Title: Create, change, or delete an Azure virtual network peering
+description: Learn how to create, change, or delete a virtual network peering. With virtual network peering, you connect virtual networks in the same region and across regions.
- tags: azure-resource-manager- Previously updated : 09/01/2021 Last updated : 11/14/2022 + # Create, change, or delete a virtual network peering
Learn how to create, change, or delete a virtual network peering. Virtual networ
## Before you begin
+If you don't have an Azure account with an active subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before starting the remainder of this article:
+
+- **Portal users**: Sign in to the [Azure portal](https://portal.azure.com) with an Azure account that has the [necessary permissions](#permissions) to work with peerings.
+
+- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
+
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings.
-Complete the following tasks before completing steps in any section of this article:
+- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
-- If you don't already have an Azure account, sign up for a [free trial account](https://azure.microsoft.com/free).-- If using the portal, open [Azure portal](https://portal.azure.com), and sign in with an account that has the [necessary permissions](#permissions) to work with peerings.-- If using PowerShell commands to complete tasks in this article, either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` with an account that has the [necessary permissions](#permissions) to work with peering, to create a connection with Azure.-- If using Azure CLI commands to complete tasks in this article, run the commands via either the [Azure Cloud Shell](https://shell.azure.com/bash) or the Azure CLI running locally. This tutorial requires the Azure CLI version 2.0.31 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If you're running the Azure CLI locally, you also need to run `az login` with an account that has the [necessary permissions](#permissions) to work with peering, to create a connection with Azure.
+ If you're running Azure CLI locally, use Azure CLI version 2.0.31 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings.
-The account you log into, or connect to Azure with, must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that gets assigned the appropriate actions listed in [Permissions](#permissions).
+
+The account you log into, or connect to Azure with, must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md) that gets assigned the appropriate actions listed in [Permissions](#permissions).
## Create a peering
-Before creating a peering, familiarize yourself with the requirements and constraints and [necessary permissions](#permissions).
+Before creating a peering, familiarize yourself with the [requirements and constraints](#requirements-and-constraints) and [necessary permissions](#permissions).
+
+# [**Portal**](#tab/peering-portal)
1. In the search box at the top of the Azure portal, enter *Virtual networks*. When **Virtual networks** appears in the search results, select it. Don't select **Virtual networks (classic)**, as you can't create a peering from a virtual network deployed through the classic deployment model.
Before creating a peering, familiarize yourself with the requirements and constr
:::image type="content" source="./media/virtual-network-manage-peering/select-vnet.png" alt-text="Screenshot of selecting VNetA from the virtual networks page.":::
-1. Select **Peerings** under *Settings* and then select **+ Add**.
+1. Select **Peerings** under **Settings** and then select **+ Add**.
:::image type="content" source="./media/virtual-network-manage-peering/vneta-peerings.png" alt-text="Screenshot of peerings page for VNetA.":::
-1. <a name="add-peering"></a>Enter or select values for the following settings:
-
- :::image type="content" source="./media/virtual-network-manage-peering/add-peering.png" alt-text="Screenshot of peering configuration page." lightbox="./media/virtual-network-manage-peering/add-peering-expanded.png":::
+1. <a name="add-peering"></a>Enter or select values for the following settings, and then select **Add**.
| Settings | Description |
| -- | -- |
- | Peering link name (This virtual network) | The name for the peering must be unique within the virtual network. |
- | Traffic to remote virtual network | Select **Allow (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other with the same bandwidth and latency as if they were connected to the same virtual network. All communication between resources in the two virtual networks is over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is **Allowed**. To learn more about network security group service tags, see [Network security groups overview](./network-security-groups-overview.md#service-tags). Select **Block all traffic to the remote virtual network** if you don't want traffic to flow to the peered virtual network by default. You can select this setting if you have peering between two virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is disabled, traffic doesn't flow between the peered virtual networks by default; however, traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Disabling the **Block all traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
- | Traffic forwarded from remote virtual network | Select **Allowed (default)** if you want traffic *forwarded* by a network virtual appliance in a virtual network (that didn't originate from the virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named Spoke1, Spoke2, and Hub. A peering exists between each spoke virtual network and the Hub virtual network, but peerings doesn't exist between the spoke virtual networks. A network virtual appliance gets deployed in the Hub virtual network, and user-defined routes gets applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't set for the peering between each spoke virtual network and the hub virtual network, traffic doesn't flow between the spoke virtual networks because the hub isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it does not create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You don't need to check this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway. |
- | Virtual network gateway or Route Server | Select **Use this virtual network's gateway or Route Server**: </br> - If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Checking this box allows traffic from the peered virtual network to flow through the gateway attached to this virtual network to the on-premises network. If you check this box, the peered virtual network cannot have a gateway configured. </br> - If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> The peered virtual network must have the **Use the remote virtual network's gateway or Route Server** select when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **None (default)**, traffic from the peered virtual network still flows to this virtual network, but cannot flow through a virtual network gateway attached to this virtual network or able to learn routes from the Route Server. If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager). |
- | Remote virtual network peering link name | The name for the remote virtual network peer. |
+ | **This virtual network** | |
+ | Peering link name | The name of the peering on this virtual network. The name must be unique within the virtual network. |
+ | Traffic to remote virtual network | - Select **Allow (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Allow**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). </br> - Select **Block all traffic to the remote virtual network** if you don't want traffic to flow to the peered virtual network by default. You can select this setting if you have peering between two virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is selected, traffic doesn't flow between the peered virtual networks by default; however, traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Selecting the **Block all traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named Spoke1, Spoke2, and Hub. A peering exists between each spoke virtual network and the Hub virtual network, but peerings doesn't exist between the spoke virtual networks. A network virtual appliance gets deployed in the Hub virtual network, and user-defined routes gets applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the hub virtual network, traffic doesn't flow between the spoke virtual networks because the hub isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* |
+ | Virtual network gateway or Route Server | Select **Use this virtual network's gateway or Route Server**: </br> - If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Selecting this setting allows traffic from the peered virtual network to flow through the gateway deployed in this virtual network to the on-premises network. </br> - If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select **Use this virtual network's gateway or Router Server**, the peered virtual network can't have a gateway configured. The peered virtual network must have the **Use the remote virtual network's gateway or Route Server** selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **None (default)**, traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway deployed in this virtual network. If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a hub (see the hub and spoke example described for **Allow forwarded traffic**) virtual network to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> Select **Use the remote virtual network's gateway or Route Server**: </br>- If you want to allow traffic from this virtual network to flow through a virtual network gateway deployed in the virtual network you're peering with. For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br>- If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select this setting, the peered virtual network must have a virtual network gateway deployed in it, and must have the **Use this virtual network's gateway or Route Server** setting selected. 
If you leave this setting as **None (default)**, traffic from this virtual network can still flow to the peered virtual network, but can't flow through a virtual network gateway in the peered virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. |
+ | **Remote virtual network** | |
+ | Peering link name | The name of the peering on the remote virtual network. The name must be unique within the virtual network. |
| Virtual network deployment model | Select which deployment model the virtual network you want to peer with was deployed through. |
- | I know my resource ID | If you have read access to the virtual network you want to peer with, leave this checkbox unchecked. If you don't have read access to the virtual network or subscription you want to peer with, check this box. Enter the full resource ID of the virtual network you want to peer with in the **Resource ID** box that appeared when you checked the box. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're creating the peering from, first add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory) in the opposite tenant. |
- | Resource ID | This field appears when you checked the box . The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're creating the peering from, first add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory) in the opposite tenant.
- | Subscription | Select the [subscription](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription) of the virtual network you want to peer with. One or more subscriptions are listed, depending on how many subscriptions your account has read access to. If you checked the **Resource ID** checkbox, this setting isn't available. |
- | Virtual network | Select the virtual network you want to peer with. You can select a virtual network created through either Azure deployment model. If you want to select a virtual network in a different region, you must select a virtual network in a [supported region](#cross-region). You must have read access to the virtual network for it to be visible in the list. If a virtual network is listed, but grayed out, it may be because the address space for the virtual network overlaps with the address space for this virtual network. If virtual network address spaces overlap, they cannot be peered. If you checked the **Resource ID** checkbox, this setting isn't available. |
- | Traffic to remote virtual network | Select **Allow (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources connected to either virtual network to communicate with each other with the same bandwidth and latency as if they were connected to the same virtual network. All communication between resources in the two virtual networks is over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is **Allowed**. (To learn more about network security group service tags, see [Network security groups overview](./network-security-groups-overview.md#service-tags).) Select **Block all traffic to the remote virtual network** if you don't want traffic to flow to the peered virtual network by default. You might select **Block all traffic to the remote virtual network** if you've peered a virtual network with another virtual network, but occasionally want to disable default traffic flow between the two virtual networks. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is disabled, traffic doesn't flow between the peered virtual networks by default; however, traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Disabling the **Block all traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
- | Traffic forwarded from remote virtual network | Leave as **Allow (default)** to allow traffic *forwarded* by a network virtual appliance in a virtual network (that didn't originate from the virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named Spoke1, Spoke2, and Hub. A virtual network peering exists between each spoke virtual network and the Hub virtual network, but virtual network peering doesn't exist between the spoke virtual networks. A network virtual appliance gets deployed in the Hub virtual network, and user-defined routes gets applied to each spoke virtual network to route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the hub virtual network, traffic doesn't flow between the spoke virtual networks because the hub isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You don't need to check this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway. |
- | Virtual network gateway or Route Server | Select **Use this virtual network's gateway or Route Server**: </br>- If you have a virtual network gateway attached to this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Selecting this setting allows traffic from the peered virtual network to flow through the gateway attached to this virtual network to the on-premises network. </br>- If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select *Use **this** virtual network's gateway or Router Server*, the peered virtual network can't have a gateway configured. The peered virtual network must have the *Use the **remote** virtual network's gateway or Route Server* selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **None (default)**, traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway attached to this virtual network. If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a hub (see the hub and spoke example described for **Allow forwarded traffic**) virtual network to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md?toc=%2fazure%2fvirtual-network%2ftoc.json). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> Select **Use the remote virtual network gateway or Route Server**: </br>- If you want to allow traffic from this virtual network to flow through a virtual network gateway attached to the virtual network you're peering with. For example, the virtual network you're peering with has a VPN gateway attached that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway attached to the peered virtual network. </br>- If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select this setting, the peered virtual network must have a virtual network gateway attached to it and must have the **Use this virtual network's gateway or Route Server** option selected. 
If you leave this setting as **None (default)**, traffic from the peered virtual network can still flow to this virtual network, but can't flow through a virtual network gateway attached to this virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md?toc=%2fazure%2fvirtual-network%2ftoc.json)* |
-
+ | I know my resource ID | If you have read access to the virtual network you want to peer with, leave this checkbox unchecked. If you don't have read access to the virtual network or subscription you want to peer with, check this checkbox. Enter the full resource ID of the virtual network you want to peer with in the **Resource ID** box that appeared when you checked the checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're creating the peering from, first add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant. |
+ | Resource ID | This field appears when you check **I know my resource ID** checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're creating the peering from, first add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant.
+ | Subscription | Select the [subscription](../azure-glossary-cloud-terminology.md#subscription) of the virtual network you want to peer with. One or more subscriptions are listed, depending on how many subscriptions your account has read access to. If you checked the **I know my resource ID** checkbox, this setting isn't available. |
+ | Virtual network | Select the virtual network you want to peer with. You can select a virtual network created through either Azure deployment model. If you want to select a virtual network in a different region, you must select a virtual network in a [supported region](#cross-region). You must have read access to the virtual network for it to be visible in the list. If a virtual network is listed, but grayed out, it may be because the address space for the virtual network overlaps with the address space for this virtual network. If virtual network address spaces overlap, they can't be peered. If you checked the **I know my resource ID** checkbox, this setting isn't available. |
+ | Traffic to remote virtual network | - Select **Allow (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Allow**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). </br> - Select **Block all traffic to the remote virtual network** if you don't want traffic to flow to the peered virtual network by default. You can select this setting if you have peering between two virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is selected, traffic doesn't flow between the peered virtual networks by default; however, traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Selecting the **Block all traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named Spoke1, Spoke2, and Hub. A peering exists between each spoke virtual network and the Hub virtual network, but peerings doesn't exist between the spoke virtual networks. A network virtual appliance gets deployed in the Hub virtual network, and user-defined routes gets applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the hub virtual network, traffic doesn't flow between the spoke virtual networks because the hub isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* |
+ | Virtual network gateway or Route Server | Select **Use this virtual network's gateway or Route Server**: </br>- If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Selecting this setting allows traffic from the peered virtual network to flow through the gateway deployed in this virtual network to the on-premises network. </br>- If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select **Use this virtual network's gateway or Router Server**, the peered virtual network can't have a gateway configured. The peered virtual network must have the **Use the remote virtual network's gateway or Route Server** selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **None (default)**, traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway deployed in this virtual network. If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a hub (see the hub and spoke example described for **Allow forwarded traffic**) virtual network to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> Select **Use the remote virtual network's gateway or Route Server**: </br>- If you want to allow traffic from this virtual network to flow through a virtual network gateway deployed in the virtual network you're peering with. For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br>- If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select this setting, the peered virtual network must have a virtual network gateway deployed in it, and must have the **Use this virtual network's gateway or Route Server** setting selected. 
If you leave this setting as **None (default)**, traffic from this virtual network can still flow to the peered virtual network, but can't flow through a virtual network gateway in the peered virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. |
+ :::image type="content" source="./media/virtual-network-manage-peering/add-peering.png" alt-text="Screenshot of peering configuration page." lightbox="./media/virtual-network-manage-peering/add-peering-expanded.png":::
+ > [!NOTE] > If you use a Virtual Network Gateway to send on-premises traffic transitively to a peered VNet, the peered VNet IP range for the on-premises VPN device must be set to 'interesting' traffic. Otherwise, your on-premises resources won't be able to communicate with resources in the peered VNet.
-1. Select **Add** to configure the peering to the virtual network you selected. After a few seconds, select the **Refresh** button and the peering status will change from *Updating* to *Connected*.
+1. Select the **Refresh** button after a few seconds, and the peering status will change from *Updating* to *Connected*.
:::image type="content" source="./media/virtual-network-manage-peering/vnet-peering-connected.png" alt-text="Screenshot of virtual network peering status on peerings page."::: For step-by-step instructions for implementing peering between virtual networks in different subscriptions and deployment models, see [next steps](#next-steps).
-### Commands
+# [**PowerShell**](#tab/peering-powershell)
+
+Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create virtual network peerings.
+
+```azurepowershell-interactive
+## Place the virtual network VNetA configuration into a variable. ##
+$vnetA = Get-AzVirtualNetwork -Name VNetA -ResourceGroupName myResourceGroup
+## Place the virtual network VNetB configuration into a variable. ##
+$vnetB = Get-AzVirtualNetwork -Name VNetB -ResourceGroupName myResourceGroup
+## Create peering from VNetA to VNetB. ##
+Add-AzVirtualNetworkPeering -Name VNetAtoVNetB -VirtualNetwork $vnetA -RemoteVirtualNetworkId $vnetB.Id
+## Create peering from VNetB to VNetA. ##
+Add-AzVirtualNetworkPeering -Name VNetBtoVNetA -VirtualNetwork $vnetB -RemoteVirtualNetworkId $vnetA.Id
+```
+
+# [**Azure CLI**](#tab/peering-cli)
-- **Azure CLI**: [az network vnet peering create](/cli/azure/network/vnet/peering)-- **PowerShell**: [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering)
+Use [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create) to create virtual network peerings.
+
+```azurecli-interactive
+## Create peering from VNetA to VNetB. ##
+az network vnet peering create --name VNetAtoVNetB --vnet-name VNetA --remote-vnet VNetB --resource-group myResourceGroup --allow-vnet-access --allow-forwarded-traffic
+## Create peering from VNetB to VNetA. ##
+az network vnet peering create --name VNetBtoVNetA --vnet-name VNetB --remote-vnet VNetA --resource-group myResourceGroup --allow-vnet-access --allow-forwarded-traffic
+```
++ ## View or change peering settings
-Before changing a peering, familiarize yourself with the requirements and constraints and [necessary permissions](#permissions).
+Before changing a peering, familiarize yourself with the [requirements and constraints](#requirements-and-constraints) and [necessary permissions](#permissions).
+
+# [**Portal**](#tab/peering-portal)
-1. Select the virtual network that you would like to view or change the virtual network peering settings.
+1. Select the virtual network whose peering settings you want to view or change.
:::image type="content" source="./media/virtual-network-manage-peering/vnet-list.png" alt-text="Screenshot of the list of virtual networks in the subscription.":::
Before changing a peering, familiarize yourself with the requirements and constr
:::image type="content" source="./media/virtual-network-manage-peering/change-peering-settings.png" alt-text="Screenshot of changing virtual network peering settings.":::
-**Commands**
-- **Azure CLI**: [az network vnet peering list](/cli/azure/network/vnet/peering) to list peerings for a virtual network, [az network vnet peering show](/cli/azure/network/vnet/peering) to show settings for a specific peering, and [az network vnet peering update](/cli/azure/network/vnet/peering) to change peering settings.|-- **PowerShell**: [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to retrieve view peering settings and [Set-AzVirtualNetworkPeering](/powershell/module/az.network/set-azvirtualnetworkpeering) to change settings.
+# [**PowerShell**](#tab/peering-powershell)
+
+Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to list peerings of a virtual network and their settings.
+
+```azurepowershell-interactive
+Get-AzVirtualNetworkPeering -VirtualNetworkName VNetA -ResourceGroupName myResourceGroup
+```
+
+Use [Set-AzVirtualNetworkPeering](/powershell/module/az.network/set-azvirtualnetworkpeering) to change peering settings.
+
+```azurepowershell-interactive
+## Place the virtual network peering configuration into a variable. ##
+$peering = Get-AzVirtualNetworkPeering -VirtualNetworkName VNetA -ResourceGroupName myResourceGroup -Name VNetAtoVNetB
+## Allow traffic forwarded from remote virtual network. ##
+$peering.AllowForwardedTraffic = $True
+## Update the peering with changes made. ##
+Set-AzVirtualNetworkPeering -VirtualNetworkPeering $peering
+```
++
+# [**Azure CLI**](#tab/peering-cli)
+
+Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to list peerings of a virtual network.
+
+```azurecli-interactive
+az network vnet peering list --resource-group myResourceGroup --vnet-name VNetA --out table
+```
+
+Use [az network vnet peering show](/cli/azure/network/vnet/peering#az-network-vnet-peering-show) to show settings for a specific peering.
+
+```azurecli-interactive
+az network vnet peering show --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA
+```
+
+Use [az network vnet peering update](/cli/azure/network/vnet/peering#az-network-vnet-peering-update) to change peering settings.
+
+```azurecli-interactive
+## Block traffic forwarded from remote virtual network. ##
+az network vnet peering update --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA --set allowForwardedTraffic=false
+```
++ ## Delete a peering
-Before deleting a peering, ensure your account has the [necessary permissions](#permissions).
+Before deleting a peering, familiarize yourself with the [requirements and constraints](#requirements-and-constraints) and [necessary permissions](#permissions).
+
+# [**Portal**](#tab/peering-portal)
-When a peering is deleted, traffic can no longer flow between two virtual networks. When deleting a virtual networking peering, the corresponding peering will also be removed. If you want virtual networks to communicate sometimes, but not always, rather than deleting a peering, you can set the **Traffic to remote virtual network** setting to **Block all traffic to the remote virtual network** instead. You may find disabling and enabling network access easier than deleting and recreating peerings.
+When a peering between two virtual networks is deleted, traffic can no longer flow between the virtual networks. If you want virtual networks to communicate sometimes, but not always, rather than deleting a peering, you can set the **Traffic to remote virtual network** setting to **Block all traffic to the remote virtual network** instead. You may find disabling and enabling network access easier than deleting and recreating peerings.
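
A minimal CLI sketch of this on/off approach, reusing the VNetA and VNetAtoVNetB example names from earlier in this article (assumed here for illustration):

```azurecli-interactive
## Temporarily block all traffic to the remote virtual network. ##
az network vnet peering update --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA --set allowVirtualNetworkAccess=false

## Re-enable traffic later without recreating the peering. ##
az network vnet peering update --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA --set allowVirtualNetworkAccess=true
```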
1. Select the virtual network in the list that you want to delete a peering for.
When a peering is deleted, traffic can no longer flow between two virtual networ
:::image type="content" source="./media/virtual-network-manage-peering/confirm-deletion.png" alt-text="Screenshot of peering delete confirmation.":::
-1. Complete the previous steps to delete the peering from the other virtual network in the peering.
+ > [!NOTE]
+ > When you delete a virtual network peering from a virtual network, the peering from the remote virtual network will also be deleted.
+
+# [**PowerShell**](#tab/peering-powershell)
+
+Use [Remove-AzVirtualNetworkPeering](/powershell/module/az.network/remove-azvirtualnetworkpeering) to delete virtual network peerings.
+
+```azurepowershell-interactive
+## Delete VNetA to VNetB peering. ##
+Remove-AzVirtualNetworkPeering -Name VNetAtoVNetB -VirtualNetworkName VNetA -ResourceGroupName myResourceGroup
+## Delete VNetB to VNetA peering. ##
+Remove-AzVirtualNetworkPeering -Name VNetBtoVNetA -VirtualNetworkName VNetB -ResourceGroupName myResourceGroup
+```
+
-**Commands**
+# [**Azure CLI**](#tab/peering-cli)
-- **Azure CLI**: [az network vnet peering delete](/cli/azure/network/vnet/peering)-- **PowerShell**: [Remove-AzVirtualNetworkPeering](/powershell/module/az.network/remove-azvirtualnetworkpeering)
+Use [az network vnet peering delete](/cli/azure/network/vnet/peering#az-network-vnet-peering-delete) to delete virtual network peerings.
+
+```azurecli-interactive
+## Delete VNetA to VNetB peering. ##
+az network vnet peering delete --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA
+## Delete VNetB to VNetA peering. ##
+az network vnet peering delete --resource-group myResourceGroup --name VNetBtoVNetA --vnet-name VNetB
+```
++ ## Requirements and constraints -- <a name="cross-region"></a>You can peer virtual networks in the same region, or different regions. Peering virtual networks in different regions is also referred to as *Global VNet Peering*. -- When creating a global peering, the peered virtual networks can exist in any Azure public cloud region or China cloud regions or Government cloud regions. You can't peer across clouds. For example, a VNet in Azure public cloud cannot be peered to a VNet in Azure China cloud.-- Resources in one virtual network can't communicate with the front-end IP address of a Basic internal load balancer in a globally peered virtual network. Support for Basic Load Balancer only exists within the same region. Support for Standard Load Balancer exists for both, VNet Peering and Global VNet Peering. Services that use a Basic load balancer won't work over Global VNet Peering are documented [here.](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers)
+- <a name="cross-region"></a>You can peer virtual networks in the same region or in different regions. Peering virtual networks in different regions is also referred to as *Global VNet Peering*.
+
+- When creating a global peering, the peered virtual networks can exist in any Azure public cloud region, China cloud region, or Government cloud region. You can't peer across clouds. For example, a VNet in the Azure public cloud can't be peered to a VNet in the Azure China cloud.
+
+- Resources in one virtual network can't communicate with the front-end IP address of a Basic internal load balancer in a globally peered virtual network. Support for Basic Load Balancer exists only within the same region. Support for Standard Load Balancer exists for both VNet Peering and Global VNet Peering. Some services that use a Basic load balancer don't work over global virtual network peering. For more information, see [Constraints related to Global VNet Peering and Load Balancers](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers).
+ - You can use remote gateways or allow gateway transit in globally peered virtual networks and locally peered virtual networks.-- The virtual networks can be in the same, or different subscriptions. When you peer virtual networks in different subscriptions, both subscriptions can be associated to the same or different Azure Active Directory tenant. If you don't already have an AD tenant, you can [create one](../active-directory/develop/quickstart-create-new-tenant.md?toc=%2fazure%2fvirtual-network%2ftoc.json-a-new-azure-ad-tenant).+
+- The virtual networks can be in the same or different [subscriptions](#next-steps). When you peer virtual networks in different subscriptions, both subscriptions can be associated with the same or different Azure Active Directory tenants. If you don't already have an AD tenant, you can [create one](../active-directory/develop/quickstart-create-new-tenant.md).
+ - The virtual networks you peer must have non-overlapping IP address spaces.-- You can't add address ranges to, or delete address ranges from a virtual network's address space once a virtual network is peered with another virtual network. To add or remove address ranges, delete the peering, add or remove the address ranges, then re-create the peering. To add address ranges to, or remove address ranges from virtual networks, see [Manage virtual networks](manage-virtual-network.md).-- You can peer two virtual networks deployed through Resource Manager or a virtual network deployed through Resource Manager with a virtual network deployed through the classic deployment model. You can't peer two virtual networks created through the classic deployment model. If you're not familiar with Azure deployment models, read the [Understand Azure deployment models](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json) article. You can use a [VPN Gateway](../vpn-gateway/design.md?toc=%2fazure%2fvirtual-network%2ftoc.json#V2V) to connect two virtual networks created through the classic deployment model.-- When peering two virtual networks created through Resource Manager, a peering must be configured for each virtual network in the peering. You see one of the following types for peering status:
- - *Initiated:* When you create the peering to the second virtual network from the first virtual network, the peering status is *Initiated*.
- - *Connected:* When you create the peering from the second virtual network to the first virtual network, its peering status is *Connected*. If you view the peering status for the first virtual network, you see its status changed from *Initiated* to *Connected*. The peering is not successfully established until the peering status for both virtual network peerings is *Connected*.
-- When peering a virtual network created through Resource Manager with a virtual network created through the classic deployment model, you only configure a peering for the virtual network deployed through Resource Manager. You cannot configure peering for a virtual network (classic), or between two virtual networks deployed through the classic deployment model. When you create the peering from the virtual network (Resource Manager) to the virtual network (Classic), the peering status is *Updating*, then shortly changes to *Connected*.-- A peering is established between two virtual networks. Peerings by itself are not transitive. If you create peerings between:
- - VirtualNetwork1 & VirtualNetwork2
- - VirtualNetwork2 & VirtualNetwork3
--
- There is no peering between VirtualNetwork1 and VirtualNetwork3 through VirtualNetwork2. If you want VirtualNetwork1 and VirtualNetwork3 to directly communicate, you have to create an explicit peering between VirtualNetwork1 and VirtualNetwork3 or go through an NVA in the Hub network.
-- You can't resolve names in peered virtual networks using default Azure name resolution. To resolve names in other virtual networks, you must use [Azure DNS for private domains](../dns/private-dns-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or a custom DNS server. To learn how to set up your own DNS server, see [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).-- Resources in peered virtual networks in the same region can communicate with each other with the same bandwidth and latency as if they were in the same virtual network. Each virtual machine size has its own maximum network bandwidth however. To learn more about maximum network bandwidth for different virtual machine sizes, see [Windows](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine sizes.+
+- You can peer two virtual networks deployed through Resource Manager or a virtual network deployed through Resource Manager with a virtual network deployed through the classic deployment model. You can't peer two virtual networks created through the classic deployment model. If you're not familiar with Azure deployment models, read the [Understand Azure deployment models](../azure-resource-manager/management/deployment-models.md) article. You can use a [VPN Gateway](../vpn-gateway/design.md#V2V) to connect two virtual networks created through the classic deployment model.
+
+- When peering two virtual networks created through Resource Manager, a peering must be configured for each virtual network in the peering. You see one of the following types for peering status:
+
+ - *Initiated:* When you create the first peering, its status is *Initiated*.
+ - *Connected:* When you create the second peering, the peering status becomes *Connected* for both peerings. The peering isn't successfully established until the peering status for both virtual network peerings is *Connected*. A command-line sketch for checking this state follows this list.
+
+- When peering a virtual network created through Resource Manager with a virtual network created through the classic deployment model, you only configure a peering for the virtual network deployed through Resource Manager. You can't configure peering for a virtual network (classic), or between two virtual networks deployed through the classic deployment model. When you create the peering from the virtual network (Resource Manager) to the virtual network (Classic), the peering status is *Updating*, then shortly changes to *Connected*.
+- A peering is established between two virtual networks. Peerings by themselves aren't transitive. If you create peerings between:
+
+ - VirtualNetwork1 and VirtualNetwork2
+ - VirtualNetwork2 and VirtualNetwork3
+
+ There's no connectivity between VirtualNetwork1 and VirtualNetwork3 through VirtualNetwork2. If you want VirtualNetwork1 and VirtualNetwork3 to directly communicate, you have to create an explicit peering between VirtualNetwork1 and VirtualNetwork3, or go through an NVA in the Hub network. To learn more, see [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
+
+- You can't resolve names in peered virtual networks using default Azure name resolution. To resolve names in other virtual networks, you must use [Azure Private DNS](../dns/private-dns-overview.md) or a custom DNS server. To learn how to set up your own DNS server, see [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
+
+- Resources in peered virtual networks in the same region can communicate with each other with the same latency as if they were within the same virtual network. The network throughput is based on the bandwidth that's allowed for the virtual machine, proportionate to its size. There isn't any extra restriction on bandwidth within the peering. Each virtual machine size has its own maximum network bandwidth. To learn more about maximum network bandwidth for different virtual machine sizes, see [Sizes for virtual machines in Azure](../virtual-machines/sizes.md).
+ - A virtual network can be peered to another virtual network, and also be connected to another virtual network with an Azure virtual network gateway. When virtual networks are connected through both peering and a gateway, traffic between the virtual networks flows through the peering configuration, rather than the gateway. - Point-to-Site VPN clients must be downloaded again after virtual network peering has been successfully configured to ensure the new routes are downloaded to the client.-- There is a nominal charge for ingress and egress traffic that utilizes a virtual network peering. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/virtual-network).
+- There's a nominal charge for ingress and egress traffic that utilizes a virtual network peering. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/virtual-network).
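
As noted in the peering status bullet earlier in this list, you can check the state of each side from the command line. A minimal sketch using the example names from earlier in this article; the peering works only when both sides report *Connected*:

```azurecli-interactive
## Check the peering state on each side. Both must report Connected. ##
az network vnet peering show --resource-group myResourceGroup --vnet-name VNetA --name VNetAtoVNetB --query peeringState --output tsv
az network vnet peering show --resource-group myResourceGroup --vnet-name VNetB --name VNetBtoVNetA --query peeringState --output tsv
```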
## Permissions The accounts you use to work with virtual network peering must be assigned to the following roles: -- [Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor): For a virtual network deployed through Resource Manager.-- [Classic Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#classic-network-contributor): For a virtual network deployed through the classic deployment model.
+- [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor): For a virtual network deployed through Resource Manager.
+- [Classic Network Contributor](../role-based-access-control/built-in-roles.md#classic-network-contributor): For a virtual network deployed through the classic deployment model.
-If your account is not assigned to one of the previous roles, it must be assigned to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that is assigned the necessary actions from the following table:
+If your account isn't assigned to one of the previous roles, it must be assigned to a [custom role](../role-based-access-control/custom-roles.md) that is assigned the necessary actions from the following table:
| Action | Name | | | |
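
As an illustration of the custom-role approach, the sketch below creates a role definition with the Azure CLI. The role name, file name, and action strings are assumptions for this example, not taken from the table above; confirm the exact actions against the table and substitute your own subscription ID.

```azurecli-interactive
## Illustrative only: the action strings and names below are assumptions for this sketch. ##
cat > peering-role.json <<'EOF'
{
  "Name": "Virtual Network Peering Operator",
  "Description": "Can create, view, and delete virtual network peerings.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read",
    "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write",
    "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/delete",
    "Microsoft.Network/virtualNetworks/peer/action"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

## Create the custom role, then assign it to the account that manages peerings. ##
az role definition create --role-definition @peering-role.json
```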
If your account is not assigned to one of the previous roles, it must be assigne
## Next steps -- A virtual network peering is created between virtual networks created through the same, or different deployment models that exist in the same, or different subscriptions. Complete a tutorial for one of the following scenarios:
+- A virtual network peering can be created between virtual networks created through the same or different deployment models that exist in the same or different subscriptions. Complete a tutorial for one of the following scenarios:
|Azure deployment model | Subscription | | ||
If your account is not assigned to one of the previous roles, it must be assigne
|One Resource Manager, one classic |[Same](create-peering-different-deployment-models.md)| | |[Different](create-peering-different-deployment-models-subscriptions.md)| -- Learn how to create a [hub and spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?toc=%2fazure%2fvirtual-network%2ftoc.json)
+- Learn how to create a [hub and spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke)
- Create a virtual network peering using [PowerShell](powershell-samples.md) or [Azure CLI](cli-samples.md) sample scripts, or using Azure [Resource Manager templates](template-samples.md) - Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks