Updates from: 03/15/2022 02:09:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md
Previously updated : 09/15/2021 Last updated : 03/11/2022
Your final configuration file should look like the following JSON:
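For reference, a minimal sketch of that configuration, assuming the sample's standard Microsoft.Identity.Web `AzureAdB2C` section in `appsettings.json` (all values are placeholders):

```json
{
  "AzureAdB2C": {
    "Instance": "https://<your-tenant-name>.b2clogin.com",
    "Domain": "<your-tenant-name>.onmicrosoft.com",
    "ClientId": "<web-app-application-id>",
    "SignedOutCallbackPath": "/signout/<your-sign-up-sign-in-policy>",
    "SignUpSignInPolicyId": "<your-sign-up-sign-in-policy>"
  }
}
```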
1. Go to `https://localhost:44316`. 1. Select **Sign Up/In**.
- ![Screenshot of the "Sign Up/In" button on the project Welcome page.](./media/configure-authentication-sample-web-app/web-app-sign-in.png)
+ :::image type="content" source="./media/configure-authentication-sample-web-app/web-app-sign-in.png" alt-text="Screenshot of the sign in and sign up button on the project Welcome page.":::
1. Complete the sign-up or sign-in process. After successful authentication, you'll see your display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, select **Claims**.
-![Screenshot of the web app token claims.](./media/configure-authentication-sample-web-app/web-app-token-claims.png)
+ :::image type="content" source="./media/configure-authentication-sample-web-app/web-app-token-claims.png" alt-text="Screenshot of the web app token claims.":::
## Deploy your application
active-directory-b2c Partner F5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-f5.md
Title: Tutorial to enable Secure Hybrid Access to applications with Azure AD B2C and F5 BIG-IP description: Learn how to integrate Azure AD B2C authentication with F5 BIG-IP for secure hybrid access --++
active-directory-b2c Quickstart Web App Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-web-app-dotnet.md
In this quickstart, you use an ASP.NET application to sign in using a social ide
git clone https://github.com/Azure-Samples/active-directory-b2c-dotnet-webapp-and-webapi.git ```
- There are two projects are in the sample solution:
+ There are two projects in the sample solution:
- **TaskWebApp** - A web application that creates and edits a task list. The web application uses the **sign-up or sign-in** user flow to sign up or sign in users. - **TaskService** - A web API that supports the create, read, update, and delete task list functionality. The web API is protected by Azure AD B2C and called by the web application.
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md
As you design the virtual network for Azure AD DS, the following considerations
A managed domain connects to a subnet in an Azure virtual network. Design this subnet for Azure AD DS with the following considerations:
-* A managed domain must be deployed in its own subnet. Don't use an existing subnet or a gateway subnet.
+* A managed domain must be deployed in its own subnet. Don't use an existing subnet or a gateway subnet. This includes using the remote gateway setting in virtual network peering, which puts the managed domain in an unsupported state.
* A network security group is created during the deployment of a managed domain. This network security group contains the required rules for correct service communication. * Don't create or use an existing network security group with your own custom rules. * A managed domain requires 3-5 IP addresses. Make sure that your subnet IP address range can provide this number of addresses.
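As an illustration only, a dedicated subnet that satisfies these requirements could be created with the Azure CLI; the resource names and address ranges below are placeholders:

```azurecli
# Create a virtual network with a subnet reserved solely for the managed domain
az network vnet create \
  --resource-group myResourceGroup \
  --name myVnet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name DomainServices \
  --subnet-prefixes 10.0.0.0/24
```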
For more information about some of the network resources and connection options
* [Azure virtual network peering](../virtual-network/virtual-network-peering-overview.md) * [Azure VPN gateways](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md)
-* [Azure network security groups](../virtual-network/network-security-groups-overview.md)
+* [Azure network security groups](../virtual-network/network-security-groups-overview.md)
active-directory Provision On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md
Previously updated : 05/11/2021 Last updated : 03/09/2022
There are currently a few known limitations to on-demand provisioning. Post your
* Amazon Web Services (AWS) application does not support on-demand provisioning. * On-demand provisioning of groups and roles isn't supported. * On-demand provisioning supports disabling users that have been unassigned from the application. However, it doesn't support disabling or deleting users that have been disabled or deleted from Azure AD. Those users won't appear when you search for a user.
+* Provisioning multiple roles on a user isn't supported by on-demand provisioning.
## Next steps
-* [Troubleshooting provisioning](./application-provisioning-config-problem.md)
+* [Troubleshooting provisioning](./application-provisioning-config-problem.md)
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
Last updated 02/02/2022--++
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
Previously updated : 03/10/2022 Last updated : 03/14/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW. > Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here. - This article describes how to enable CloudKnox Permissions Management (CloudKnox) in your organization. Once you've enabled CloudKnox, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms. > [!NOTE]
To enable CloudKnox in your organization:
- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/). - You must be eligible for or have an active assignment to the global administrator role as a user in that tenant. - > [!NOTE] > During public preview, CloudKnox doesn't perform a license check.
To enable CloudKnox in your organization:
- To view a video on how to enable CloudKnox in your Azure AD tenant, select [Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo). - To view a video on how to configure and onboard AWS accounts in CloudKnox, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
+- To view a video on how to configure and onboard GCP accounts in CloudKnox, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
+ ## How to enable CloudKnox on your Azure AD tenant
active-directory Cloudknox Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-gcp.md
Previously updated : 03/10/2022 Last updated : 03/14/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW. > Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here. - This article describes how to onboard a Google Cloud Platform (GCP) project on CloudKnox Permissions Management (CloudKnox). > [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
+## View a training video on configuring and onboarding a GCP account
+
+To view a video on how to configure and onboard GCP accounts in CloudKnox, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
+ ## Onboard a GCP project
This article describes how to onboard a Google Cloud Platform (GCP) project on C
Optionally, specify **G-Suite IDP Secret Name** and **G-Suite IDP User Email** to enable G-Suite integration.
- You can either download and run the script at this point or you can do it in the Google Cloud Shell, as described in [later in this article](cloudknox-onboard-gcp.md#4-run-scripts-in-cloud-shell-optional-if-not-already-executed).
+ You can either download and run the script at this point or you can do it in the Google Cloud Shell, as described [later in this article](cloudknox-onboard-gcp.md#4-run-scripts-in-cloud-shell-optional-if-not-already-executed).
1. Select **Next**. ### 3. Set up GCP member projects.
active-directory Cloudknox Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-training-videos.md
Previously updated : 03/09/2022 Last updated : 03/14/2022
To view step-by-step training videos on how to use CloudKnox Permissions Management (CloudKnox) features, select a link below.
-## Enable CloudKnox in your Azure Active Directory (Azure AD) tenant
+## Onboard CloudKnox in your organization
-To view a video on how to enable CloudKnox in your Azure AD tenant, select
-[Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
-## Configure and onboard Amazon Web Services (AWS) accounts
+### Enable CloudKnox in your Azure Active Directory (Azure AD) tenant
-To view a video on how to configure and onboard Amazon Web Services (AWS) accounts in CloudKnox Permissions Management, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
+To view a video on how to enable CloudKnox in your Azure AD tenant, select [Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+### Configure and onboard Amazon Web Services (AWS) accounts
+
+To view a video on how to configure and onboard Amazon Web Services (AWS) accounts in CloudKnox, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
+
+### Configure and onboard Google Cloud Platform (GCP) accounts
+
+To view a video on how to configure and onboard Google Cloud Platform (GCP) accounts in CloudKnox, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
<!## Privilege on demand (POD) work flows
To view a video on how to configure and onboard Amazon Web Services (AWS) accoun
- View a step-by-step video on [how to create group-based permissions](https://vimeo.com/462797947/d041de9157).>
-<!## Next steps>
+## Next steps
+
+- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](cloudknox-overview.md)
+- For a list of frequently asked questions (FAQs) about CloudKnox, see [FAQs](cloudknox-faqs.md).
+- For information on how to start viewing information about your authorization system in CloudKnox, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
Last updated 1/19/2022 --++
active-directory Single Sign On Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-saml-protocol.md
If `SPNameQualifier` is specified, Azure AD will include the same `SPNameQualifi
Azure AD ignores the `AllowCreate` attribute.
-### RequestAuthnContext
+### RequestedAuthnContext
The `RequestedAuthnContext` element specifies the desired authentication methods. It is optional in `AuthnRequest` elements sent to Azure AD. Azure AD supports `AuthnContextClassRef` values such as `urn:oasis:names:tc:SAML:2.0:ac:classes:Password`. ### Scoping
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
The authorization code flow begins with the client directing the user to the `/a
Some permissions are admin-restricted, for example, writing data to an organization's directory by using `Directory.ReadWrite.All`. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions. To request access to admin-restricted scopes, you should request them directly from a Global Administrator. For more information, see [Admin-restricted permissions](v2-permissions-and-consent.md#admin-restricted-permissions).
+Unless specified otherwise, there are no default values for optional parameters. There is, however, default behavior for a request that omits optional parameters: sign in the sole current user, show the account picker if there are multiple users, or show the sign-in page if no users are signed in.
+ ```http // Line breaks for legibility only
active-directory Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md
Previously updated : 04/11/2017 Last updated : 03/14/2022
# Azure Active Directory B2B collaboration code and PowerShell samples ## PowerShell example
-You can bulk-invite external users to an organization from email addresses that you have stored in a .CSV file.
+
+You can bulk-invite external users to an organization from email addresses that you've stored in a .CSV file.
1. Prepare the .CSV file Create a new CSV file and name it invitations.csv. In this example, the file is saved in C:\data, and contains the following information:
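Based on the column names used in the PowerShell loop below, the CSV might look like the following (names and addresses are placeholders):

```
Name,InvitedUserEmailAddress
Gmail invitee,invitee1@gmail.com
Outlook invitee,invitee2@outlook.com
```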
You can bulk-invite external users to an organization from email addresses that
foreach ($email in $invitations) {New-AzureADMSInvitation -InvitedUserEmailAddress $email.InvitedUserEmailAddress -InvitedUserDisplayName $email.Name -InviteRedirectUrl https://wingtiptoysonline-dev-ed.my.salesforce.com -InvitedUserMessageInfo $messageInfo -SendInvitationMessage $true} ```
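The loop above assumes a connection and two variables that are defined earlier in the article. A minimal sketch of that setup (the message text is illustrative):

```powershell
# Sign in with an account that can invite guests
Connect-AzureAD -TenantId "<your tenant ID>"

# Import the invitees prepared in the CSV file
$invitations = Import-Csv -Path "C:\data\invitations.csv"

# Optional custom message included in the invitation email
$messageInfo = New-Object Microsoft.Open.MSGraph.Model.InvitedUserMessageInfo
$messageInfo.CustomizedMessageBody = "Hello. You're invited to the Contoso organization."
```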
-This cmdlet sends an invitation to the email addresses in invitations.csv. Additional features of this cmdlet include:
+This cmdlet sends an invitation to the email addresses in invitations.csv. More features of this cmdlet include:
+ - Customized text in the email message - Including a display name for the invited user - Sending messages to CCs or suppressing email messages altogether ## Code sample
-Here we illustrate how to call the invitation API, in "app-only" mode, to get the redemption URL for the resource to which you are inviting the B2B user. The goal is to send a custom invitation email. The email can be composed with an HTTP client, so you can customize how it looks and send it through the Microsoft Graph API.
+
+The code sample illustrates how to call the invitation API and get the redemption URL. Use the redemption URL to send a custom invitation email. The email can be composed with an HTTP client, so you can customize how it looks and send it through the Microsoft Graph API.
++
+# [HTTP](#tab/http)
+
+```http
+POST https://graph.microsoft.com/v1.0/invitations
+Content-type: application/json
+{
+ "invitedUserEmailAddress": "david@fabrikam.com",
+ "invitedUserDisplayName": "David",
+ "inviteRedirectUrl": "https://myapp.contoso.com",
+ "sendInvitationMessage": true
+}
+```
+
+# [C#](#tab/csharp)
```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Graph;
+using Azure.Identity;
+ namespace SampleInviteApp {
- using System;
- using System.Linq;
- using System.Net.Http;
- using System.Net.Http.Headers;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- using Newtonsoft.Json;
class Program { /// <summary>
- /// Microsoft Graph resource.
- /// </summary>
- static readonly string GraphResource = "https://graph.microsoft.com";
-
- /// <summary>
- /// Microsoft Graph invite endpoint.
- /// </summary>
- static readonly string InviteEndPoint = "https://graph.microsoft.com/v1.0/invitations";
-
- /// <summary>
- ///  Authentication endpoint to get token.
- /// </summary>
- static readonly string EstsLoginEndpoint = "https://login.microsoftonline.com";
-
- /// <summary>
- /// This is the tenantid of the tenant you want to invite users to.
+ /// This is the tenant ID of the tenant you want to invite users to.
/// </summary> private static readonly string TenantID = ""; /// <summary> /// This is the application id of the application that is registered in the above tenant.
- /// The required scopes are available in the below link.
- /// https://developer.microsoft.com/graph/docs/api-reference/v1.0/api/invitation_post
/// </summary> private static readonly string TestAppClientId = "";
namespace SampleInviteApp
/// Main method. /// </summary> /// <param name="args">Optional arguments</param>
- static void Main(string[] args)
- {
- Invitation invitation = CreateInvitation();
- SendInvitation(invitation);
- }
-
- /// <summary>
- /// Create the invitation object.
- /// </summary>
- /// <returns>Returns the invitation object.</returns>
- private static Invitation CreateInvitation()
+ static async Task Main(string[] args)
{
- // Set the invitation object.
- Invitation invitation = new Invitation();
- invitation.InvitedUserDisplayName = InvitedUserDisplayName;
- invitation.InvitedUserEmailAddress = InvitedUserEmailAddress;
- invitation.InviteRedirectUrl = "https://www.microsoft.com";
- invitation.SendInvitationMessage = true;
- return invitation;
+ string InviteRedeemUrl = await SendInvitation();
} /// <summary> /// Send the guest user invite request. /// </summary>
- /// <param name="invitation">Invitation object.</param>
- private static void SendInvitation(Invitation invitation)
+ private static async Task<string> SendInvitation()
{
- string accessToken = GetAccessToken();
+ // Get the access token for our application to talk to Microsoft Graph.
+ var scopes = new[] { "https://graph.microsoft.com/.default" };
+ var clientSecretCredential = new ClientSecretCredential(TenantID, TestAppClientId, TestAppClientSecret);
+ var graphClient = new GraphServiceClient(clientSecretCredential, scopes);
- HttpClient httpClient = GetHttpClient(accessToken);
-
- // Make the invite call.
- HttpContent content = new StringContent(JsonConvert.SerializeObject(invitation));
- content.Headers.Add("ContentType", "application/json");
- var postResponse = httpClient.PostAsync(InviteEndPoint, content).Result;
- string serverResponse = postResponse.Content.ReadAsStringAsync().Result;
- Console.WriteLine(serverResponse);
+ // Create the invitation object.
+ var invitation = new Invitation
+ {
+ InvitedUserEmailAddress = InvitedUserEmailAddress,
+ InvitedUserDisplayName = InvitedUserDisplayName,
+ InviteRedirectUrl = "https://www.microsoft.com",
+ SendInvitationMessage = true
+ };
+
+ // Send the invitation
+ var GraphResponse = await graphClient.Invitations
+ .Request()
+ .AddAsync(invitation);
+
+ // Return the invite redeem URL
+ return GraphResponse.InviteRedeemUrl;
}
+ }
+}
+```
- /// <summary>
- /// Get the HTTP client.
- /// </summary>
- /// <param name="accessToken">Access token</param>
- /// <returns>Returns the Http Client.</returns>
- private static HttpClient GetHttpClient(string accessToken)
- {
- // setup http client.
- HttpClient httpClient = new HttpClient();
- httpClient.Timeout = TimeSpan.FromSeconds(300);
- httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
- httpClient.DefaultRequestHeaders.Add("client-request-id", Guid.NewGuid().ToString());
- Console.WriteLine(
- "CorrelationID for the request: {0}",
- httpClient.DefaultRequestHeaders.GetValues("client-request-id").Single());
- return httpClient;
- }
+# [JavaScript](#tab/javascript)
- /// <summary>
- /// Get the access token for our application to talk to Microsoft Graph.
- /// </summary>
- /// <returns>Returns the access token for our application to talk to Microsoft Graph.</returns>
- private static string GetAccessToken()
- {
- string accessToken = null;
+Install the following npm packages:
- // Get the access token for our application to talk to Microsoft Graph.
- try
- {
- AuthenticationContext testAuthContext =
- new AuthenticationContext(string.Format("{0}/{1}", EstsLoginEndpoint, TenantID));
- AuthenticationResult testAuthResult = testAuthContext.AcquireTokenAsync(
- GraphResource,
- new ClientCredential(TestAppClientId, TestAppClientSecret)).Result;
- accessToken = testAuthResult.AccessToken;
- }
- catch (AdalException ex)
- {
- Console.WriteLine("An exception was thrown while fetching the token: {0}.", ex);
- throw;
- }
-
- return accessToken;
- }
+```bash
+npm install express
+npm install isomorphic-fetch
+npm install @azure/identity
+npm install @microsoft/microsoft-graph-client
+```
- /// <summary>
- /// Invitation class.
- /// </summary>
- public class Invitation
- {
- /// <summary>
- /// Gets or sets display name.
- /// </summary>
- public string InvitedUserDisplayName { get; set; }
-
- /// <summary>
- /// Gets or sets display name.
- /// </summary>
- public string InvitedUserEmailAddress { get; set; }
-
- /// <summary>
- /// Gets or sets a value indicating whether Invitation Manager should send the email to InvitedUser.
- /// </summary>
- public bool SendInvitationMessage { get; set; }
-
- /// <summary>
- /// Gets or sets invitation redirect URL
- /// </summary>
- public string InviteRedirectUrl { get; set; }
- }
- }
+```javascript
+const express = require('express')
+const app = express()
+
+const { Client } = require("@microsoft/microsoft-graph-client");
+const { TokenCredentialAuthenticationProvider } = require("@microsoft/microsoft-graph-client/authProviders/azureTokenCredentials");
+const { ClientSecretCredential } = require("@azure/identity");
+require("isomorphic-fetch");
+
+// This is the application id of the application that is registered in the above tenant.
+const CLIENT_ID = ""
+
+// Client secret of the application.
+const CLIENT_SECRET = ""
+
+// This is the tenant ID of the tenant you want to invite users to. For example fabrikam.onmicrosoft.com
+const TENANT_ID = ""
+
+async function sendInvite() {
+
+ // Initialize a confidential client application. For more info, visit: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/samples/AzureIdentityExamples.md#authenticating-a-service-principal-with-a-client-secret
+ const credential = new ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET);
+
+ // Initialize the Microsoft Graph authentication provider. For more info, visit: https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=Javascript#using--for-server-side-applications
+ const authProvider = new TokenCredentialAuthenticationProvider(credential, { scopes: ['https://graph.microsoft.com/.default'] });
+
+ // Create MS Graph client instance. For more info, visit: https://github.com/microsoftgraph/msgraph-sdk-javascript/blob/dev/docs/CreatingClientInstance.md
+ const client = Client.initWithMiddleware({
+ debugLogging: true,
+ authProvider,
+ });
+
+ // Create invitation object
+ const invitation = {
+ invitedUserEmailAddress: 'david@fabrikam.com',
+ invitedUserDisplayName: 'David',
+ inviteRedirectUrl: 'https://www.microsoft.com',
+ sendInvitationMessage: true
+ };
+
+ // Execute the MS Graph command. For more information, visit: https://docs.microsoft.com/en-us/graph/api/invitation-post
+ const graphResponse = await client.api('/invitations')
+ .post(invitation);
+
+ // Return the invite redeem URL
+ return graphResponse.inviteRedeemUrl
}+
+sendInvite().then((inviteRedeemUrl) => console.log(inviteRedeemUrl));
+ ``` + ## Next steps
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Required attributes for the SAML 2.0 response from the IdP:
Required claims for the SAML 2.0 token issued by the IdP:
-|Attribute |Value |
+|Attribute Name |Value |
||| |NameID Format |`urn:oasis:names:tc:SAML:2.0:nameid-format:persistent` |
-|emailaddress |`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` |
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` | emailaddress |
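For illustration, a fragment of a SAML 2.0 assertion carrying these claims might look like the following (the identifier and address values are placeholders):

```xml
<Subject>
  <NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent">aBcDeF1234567890</NameID>
</Subject>
<AttributeStatement>
  <Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress">
    <AttributeValue>user@contoso.com</AttributeValue>
  </Attribute>
</AttributeStatement>
```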
### WS-Fed configuration
active-directory Active Directory Properties Area https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-properties-area.md
You add your organization's privacy information in the **Properties** area of Az
- **Technical contact.** Type the email address for the person to contact for technical support within your organization.
- - **Global privacy contact.** Type the email address for the person to contact for inquiries about personal data privacy. This person is also who Microsoft contacts if there's a data breach. If there's no person listed here, Microsoft contacts your global administrators.
+ - **Global privacy contact.** Type the email address for the person to contact for inquiries about personal data privacy. This person is also who Microsoft contacts if there's a data breach related to Azure Active Directory services. If there's no person listed here, Microsoft contacts your global administrators. For Microsoft 365-related privacy incident notifications, see [Microsoft 365 Message center FAQs](https://docs.microsoft.com/microsoft-365/admin/manage/message-center?view=o365-worldwide#frequently-asked-questions&preserve-view=true)
- **Privacy statement URL.** Type the link to your organization's document that describes how your organization handles both internal and external guest's data privacy.
You add your organization's privacy information in the **Properties** area of Az
## Next steps - [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md)-- [Add or change profile information for a user in Azure Active Directory](active-directory-users-profile-azure-portal.md)
+- [Add or change profile information for a user in Azure Active Directory](active-directory-users-profile-azure-portal.md)
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/choose-ad-authn.md
Last updated 01/05/2022 --++ # Choose the right authentication method for your Azure Active Directory hybrid identity solution
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
Title: Secure hybrid access with F5 description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure Hybrid Access-+ Last updated 11/12/2020-+
active-directory F5 Aad Password Less Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
Title: Configure F5 BIG-IP SSL-VPN solution in Azure AD
description: Tutorial to configure F5's BIG-IP based Secure socket layer Virtual private network (SSL-VPN) solution with Azure Active Directory (AD) for Secure Hybrid Access (SHA) -+ Last updated 10/12/2020-+
active-directory F5 Big Ip Forms Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-forms-advanced.md
Title: Configure F5 BIG-IP's Access Policy Manager for form-based SSO description: Learn how to configure F5's BIG-IP Access Policy Manager and Azure Active Directory for secure hybrid access to form-based applications.-+ Last updated 10/20/2021-+
active-directory F5 Big Ip Header Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md
Title: Configure F5 BIG-IP Access Policy Manager for header-based SSO description: Learn how to configure F5's BIG-IP Access Policy Manager (APM) and Azure Active Directory SSO for header-based authentication -+ Last updated 11/10/2021-+
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
Title: Configure F5 BIG-IP's Easy Button for Header-based SSO description: Learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to header-based applications using F5's BIG-IP Easy Button Guided Configuration. -+ Last updated 01/07/2022-+
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
Title: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication description: Learn how to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration. -+ Last updated 12/13/2021-+
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
Title: Configure F5 BIG-IP Easy Button for Kerberos SSO description: Learn to implement Secure Hybrid Access (SHA) with Single Sign-on to Kerberos applications using F5's BIG-IP Easy Button guided configuration. -+ Last updated 12/20/2021-+
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
Title: Configure F5 BIG-IP's Easy Button for Header-based and LDAP SSO description: Learn to configure F5's BIG-IP Access Policy Manager (APM) and Azure Active Directory (Azure AD) for secure hybrid access to header-based applications that also require session augmentation through Lightweight Directory Access Protocol (LDAP) sourced attributes. -+ Last updated 11/22/2021-+
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
Title: Configure F5 BIG-IP Easy Button for SSO to Oracle EBS description: Learn to implement SHA with header-based SSO to Oracle EBS using F5's BIG-IP Easy Button guided configuration -+ Last updated 1/31/2022-+
active-directory F5 Big Ip Oracle Jde Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-jde-easy-button.md
Title: Configure F5 BIG-IP Easy Button for SSO to Oracle JDE description: Learn to implement SHA with header-based SSO to Oracle JD Edwards using F5's BIG-IP Easy Button guided configuration -+ Last updated 02/03/2022-+
active-directory F5 Big Ip Oracle Peoplesoft Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button.md
Title: Configure F5 BIG-IP Easy Button for SSO to Oracle PeopleSoft description: Learn to implement SHA with header-based SSO to Oracle PeopleSoft using F5 BIG-IP Easy Button guided configuration. -+ Last updated 02/26/2022-+
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
Title: Configure F5 BIG-IP Easy Button for SSO to SAP ERP description: Learn to secure SAP ERP using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration. -+ Last updated 3/1/2022-+
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Data Explorer | [Configure managed identities for your Azure Data Explorer cluster](/azure/data-explorer/configure-managed-identities-cluster?tabs=portal) | | Azure Data Factory | [Managed identity for Data Factory](../../data-factory/data-factory-service-identity.md) | | Azure Data Lake Storage Gen1 | [Customer-managed keys for Azure Storage encryption](../../storage/common/customer-managed-keys-overview.md) |
-| Azure Data Share | [Roles and requirements for Azure Data Share](../../data-share/concepts-roles-permissions.md) |
+| Azure Data Share | [Roles and requirements for Azure Data Share](../../data-share/concepts-roles-permissions.md) |
+| Azure DevTest Labs | [Enable user-assigned managed identities on lab virtual machines in Azure DevTest Labs](../../devtest-labs/enable-managed-identities-lab-vms.md) |
| Azure Digital Twins | [Enable a managed identity for routing Azure Digital Twins events](../../digital-twins/how-to-enable-managed-identities-portal.md) | | Azure Event Grid | [Event delivery with a managed identity](../../event-grid/managed-service-identity.md) | Azure Image Builder | [Azure Image Builder overview](../../virtual-machines/image-builder-overview.md#permissions) |
active-directory List Role Assignments Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/list-role-assignments-users.md
Follow these steps to list Azure AD roles for a user using the Azure portal. You
Follow these steps to list Azure AD roles assigned to a user using PowerShell.
-1. Install AzureADPreview and Microsoft.Graph module using [Install-module](/powershell/azure/active-directory/install-adv2).
+1. Install Microsoft.Graph module using [Install-module](/powershell/azure/active-directory/install-adv2).
```powershell
- Install-module -name AzureADPreview
Install-module -name Microsoft.Graph ```
-
-2. Open a PowerShell window and use [Import-Module](/powershell/module/microsoft.powershell.core/import-module) to import the AzureADPreview module. For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
-
- ```powershell
- Import-Module -Name AzureADPreview -Force
- ```
-
-3. In a PowerShell window, use [Connect-AzureAD](/powershell/module/azuread/connect-azuread) to sign in to your tenant.
- ```powershell
- Connect-AzureAD
- ```
-4. Use [Get-AzureADMSRoleAssignment](/powershell/module/azuread/get-azureadmsroleassignment) to get roles assigned directly to a user.
-
- ```powershell
- #Get the user
- $userId = (Get-AzureADUser -Filter "userPrincipalName eq 'alice@contoso.com'").ObjectId
-
- #Get direct role assignments to the user
- $directRoles = (Get-AzureADMSRoleAssignment -Filter "principalId eq '$userId'").RoleDefinitionId
- ```
-
-5. To get transitive roles assigned to the user, use the following cmdlets.
-
- a. Use [Get-AzureADMSGroup](/powershell/module/azuread/get-azureadmsgroup) to get the list of all role assignable groups.
+3. In a PowerShell window, use [Connect-MgGraph](/graph/powershell/get-started) to sign in to and use Microsoft Graph PowerShell cmdlets.
```powershell
- $roleAssignableGroups = (Get-AzureADMsGroup -All $true | Where-Object IsAssignableToRole -EQ 'True').Id
+ Connect-MgGraph
```
- b. Use [Connect-MgGraph](/graph/powershell/get-started) to sign into and use Microsoft Graph PowerShell cmdlets.
-
- ```powershell
- Connect-MgGraph -Scopes "User.Read.AllΓÇ¥
- ```
-
- c. Use [checkMemberObjects](/graph/api/user-checkmemberobjects) API to figure out which of the role assignable groups the user is member of.
-
- ```powershell
- $uri = "https://graph.microsoft.com/v1.0/directoryObjects/$userId/microsoft.graph.checkMemberObjects"
+4. Use the [List transitiveRoleAssignments](/graph/api/rbacapplication-list-transitiveroleassignments) API to get roles assigned directly and transitively to a user.
- $userRoleAssignableGroups = (Invoke-MgGraphRequest -Method POST -Uri $uri -Body @{"ids"= $roleAssignableGroups}).value
- ```
-
- d. Use [Get-AzureADMSRoleAssignment](/powershell/module/azuread/get-azureadmsroleassignment) to loop through the groups and get the roles assigned to them.
-
```powershell
- $transitiveRoles=@()
- foreach($item in $userRoleAssignableGroups){
- $transitiveRoles += (Get-AzureADMSRoleAssignment -Filter "principalId eq '$item'").RoleDefinitionId
- }
+ $response = $null
+ $uri = "https://graph.microsoft.com/beta/roleManagement/directory/transitiveRoleAssignments?`$count=true&`$filter=principalId eq '6b937a9d-c731-465b-a844-2d5b5368c161'"
+ $method = 'GET'
+ $headers = @{'ConsistencyLevel' = 'eventual'}
+
+ $response = (Invoke-MgGraphRequest -Uri $uri -Headers $headers -Method $method -Body $null).value
```
-6. Combine both direct and transitive role assignments of the user.
-
- ```powershell
- $allRoles = $directRoles + $transitiveRoles
- ```
-
## Microsoft Graph API Follow these steps to list Azure AD roles assigned to a user using the Microsoft Graph API in [Graph Explorer](https://aka.ms/ge). 1. Sign in to the [Graph Explorer](https://aka.ms/ge).
-1. Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get roles assigned directly to a user. Add following query to the URL and select **Run query**.
+1. Use the [List transitiveRoleAssignments](/graph/api/rbacapplication-list-transitiveroleassignments) API to get roles assigned directly and transitively to a user. Add the following query to the URL.
```http
- GET https://graph.microsoft.com/v1.0/rolemanagement/directory/roleAssignments?$filter=principalId eq '55c07278-7109-4a46-ae60-4b644bc83a31'
+ GET https://graph.microsoft.com/beta/rolemanagement/directory/transitiveRoleAssignments?$count=true&$filter=principalId eq '6b937a9d-c731-465b-a844-2d5b5368c161'
```
-3. To get transitive roles assigned to the user, follow these steps.
+3. Navigate to the **Request headers** tab. Add `ConsistencyLevel` as the key and `Eventual` as its value.
- a. Use the [List groups](/graph/api/group-list) API to get the list of all role assignable groups.
-
- ```http
- GET https://graph.microsoft.com/v1.0/groups?$filter=isAssignableToRole eq true
- ```
-
- b. Pass this list to the [checkMemberObjects](/graph/api/user-checkmemberobjects) API to figure out which of the role assignable groups the user is member of.
-
- ```http
- POST https://graph.microsoft.com/v1.0/users/55c07278-7109-4a46-ae60-4b644bc83a31/checkMemberObjects
- {
- "ids": [
- "936aec09-47d5-4a77-a708-db2ff1dae6f2",
- "5425a4a0-8998-45ca-b42c-4e00920a6382",
- "ca9631ad-2d2a-4a7c-88b7-e542bd8a7e12",
- "ea3cee12-360e-411d-b0ba-2173181daa76",
- "c3c263bb-b796-48ee-b4d2-3fbc5be5f944"
- ]
- }
- ```
-
- c. Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to loop through the groups and get the roles assigned to them.
-
- ```http
- GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=principalId eq '5425a4a0-8998-45ca-b42c-4e00920a6382'
- ```
+5. Select **Run query**.
## Next steps
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Groups Administrator](#groups-administrator) | Members of this role can create/manage groups, create/manage groups settings like naming and expiration policies, and view groups activity and audit reports. | fdd7a751-b60b-444a-984c-02652fe8fa1c | > | [Guest Inviter](#guest-inviter) | Can invite guest users independent of the 'members can invite guests' setting. | 95e79109-95c0-4d8e-aee3-d01accf2d47b | > | [Helpdesk Administrator](#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators. | 729827e3-9c14-49f7-bb1b-9608f156bbb8 |
-> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning, Azure AD Connect, and federation settings. | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
+> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning, Azure AD Connect, Pass-through Authentication (PTA), Password hash synchronization (PHS), Seamless Single sign-on (Seamless SSO), and federation settings. | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
> | [Identity Governance Administrator](#identity-governance-administrator) | Manage access using Azure AD for identity governance scenarios. | 45d8d3c5-c802-45c6-b32a-1d70b5e1e86e | > | [Insights Administrator](#insights-administrator) | Has administrative access in the Microsoft 365 Insights app. | eb1f4a8d-243a-41f0-9fbd-c7cdf6c5ef7c | > | [Insights Business Leader](#insights-business-leader) | Can view and share dashboards and insights via the Microsoft 365 Insights app. | 31e939ad-9672-4796-9c2e-873181342d2d |
This role was previously called "Password Administrator" in the [Azure portal](h
## Hybrid Identity Administrator
-Users in this role can create, manage and deploy provisioning configuration setup from AD to Azure AD using Cloud Provisioning as well as manage Azure AD Connect and federation settings. Users can also troubleshoot and monitor logs using this role.
+Users in this role can create, manage, and deploy provisioning configuration from AD to Azure AD using Cloud Provisioning, as well as manage Azure AD Connect, Pass-through Authentication (PTA), Password hash synchronization (PHS), Seamless Single Sign-On (Seamless SSO), and federation settings. Users can also troubleshoot and monitor logs using this role.
> [!div class="mx-tableFixed"] > | Actions | Description |
aks Open Service Mesh Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-troubleshoot.md
Information on how OSM issues and manages certificates to Envoy proxies running
### Upgrading Envoy
-When a new pod is created in a namespace monitored by the add-on, OSM will inject an [envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. Information regarding how to update the envoy version can be found in the [Upgrade Guide](https://docs.openservicemesh.io/docs/getting_started/upgrade/#envoy) on the OpenServiceMesh docs site.
+When a new pod is created in a namespace monitored by the add-on, OSM will inject an [envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. Information regarding how to update the envoy version can be found in the [Upgrade Guide](https://docs.openservicemesh.io/docs/getting_started/) on the OpenServiceMesh docs site.
aks Security Hardened Vm Host Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-hardened-vm-host-image.md
Title: Security hardening in AKS virtual machine hosts description: Learn about the security hardening in AKS VM host OS - Last updated 03/29/2021-
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
Title: Use system node pools in Azure Kubernetes Service (AKS) description: Learn how to create and manage system node pools in Azure Kubernetes Service (AKS) - Last updated 06/18/2020-
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
The second control flow policy is in the outbound section and conditionally appl
#### Example
-This example shows how to perform content filtering by removing data elements from the response received from the backend service when using the `Starter` product. For a demonstration of configuring and using this policy, see [Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky](https://azure.microsoft.com/documentation/videos/episode-177-more-api-management-features-with-vlad-vinogradsky/) and fast-forward to 34:30. Start at 31:50 to see an overview of [The Dark Sky Forecast API](https://developer.forecast.io/) used for this demo.
+This example shows how to perform content filtering by removing data elements from the response received from the backend service when using the `Starter` product. The example backend response includes root-level properties similar to the [OpenWeather One Call API](https://openweathermap.org/api/one-call-api).
```xml
-<!-- Copy this snippet into the outbound section to remove a number of data elements from the response received from the backend service based on the name of the api product -->
+<!-- Copy this snippet into the outbound section to remove a number of data elements from the response received from the backend service based on the name of the product -->
<choose> <when condition="@(context.Response.StatusCode == 200 && context.Product.Name.Equals("Starter"))"> <set-body>@{ var response = context.Response.Body.As<JObject>();
- foreach (var key in new [] {"minutely", "hourly", "daily", "flags"}) {
+ foreach (var key in new [] {"current", "minutely", "hourly", "daily", "alerts"}) {
response.Property (key).Remove (); } return response.ToString();
api-management Api Management Caching Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-caching-policies.md
Use the `cache-lookup` policy to perform cache look up and return a valid cached
``` #### Example using policy expressions
-This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backed service's `Cache-Control` directive. For a demonstration of configuring and using this policy, see [Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky](https://azure.microsoft.com/documentation/videos/episode-177-more-api-management-features-with-vlad-vinogradsky/) and fast-forward to 25:25.
+This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backend service's `Cache-Control` directive.
```xml <!-- The following cache policy snippets demonstrate how to control API Management response cache duration with Cache-Control headers sent by the backend service. -->
The `cache-store` policy caches responses according to the specified cache setti
``` #### Example using policy expressions
-This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backed service's `Cache-Control` directive. For a demonstration of configuring and using this policy, see Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky and fast-forward to 25:25.
+This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backend service's `Cache-Control` directive.
```xml <!-- The following cache policy snippets demonstrate how to control API Management response cache duration with Cache-Control headers sent by the backend service. -->
api-management Api Management Cross Domain Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-cross-domain-policies.md
Use the `cross-domain` policy to make the API accessible from Adobe Flash and Mi
```xml <cross-domain>
+ <cross-domain-policy>
<allow-http-request-headers-from domain='*' headers='*' />
+ </cross-domain-policy>
</cross-domain> ```
Use the `cross-domain` policy to make the API accessible from Adobe Flash and Mi
This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes). - **Policy sections:** inbound-- **Policy scopes:** all scopes
+- **Policy scopes:** global
## <a name="CORS"></a> CORS The `cors` policy adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients.
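As context (not part of the change above), a `cors` policy typically takes roughly this shape; the origins, methods, and headers shown are placeholders:

```xml
<cors allow-credentials="true">
    <allowed-origins>
        <origin>https://contoso.example</origin>
    </allowed-origins>
    <allowed-methods>
        <method>GET</method>
        <method>POST</method>
    </allowed-methods>
    <allowed-headers>
        <header>*</header>
    </allowed-headers>
</cors>
```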
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-role-based-access-control.md
If none of the built-in roles meet your specific needs, custom roles can be crea
> [!NOTE] > To be able to see an API Management instance in the Azure portal, a custom role must include the ```Microsoft.ApiManagement/service/read``` action.
-When you create a custom role, it's easier to start with one of the built-in roles. Edit the attributes to add **Actions**, **NotActions**, or **AssignableScopes**, and then save the changes as a new role. The following example begins with the "API Management Service Reader" role and creates a custom role called "Calculator API Editor." You can assign the custom role to a specific API. Consequently, this role only has access to that API.
+When you create a custom role, it's easier to start with one of the built-in roles. Edit the attributes to add **Actions**, **NotActions**, or **AssignableScopes**, and then save the changes as a new role. The following example begins with the "API Management Service Reader" role and creates a custom role called "Calculator API Editor." You can assign the custom role at the scope of a specific API. Consequently, this role only has access to that API.
```powershell $role = Get-AzRoleDefinition "API Management Service Reader Role"
$role.Description = 'Has read access to Contoso APIM instance and write access t
$role.Actions.Add('Microsoft.ApiManagement/service/apis/write') $role.Actions.Add('Microsoft.ApiManagement/service/apis/*/write') $role.AssignableScopes.Clear()
-$role.AssignableScopes.Add('/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.ApiManagement/service/<service name>/apis/<api ID>')
+$role.AssignableScopes.Add('/subscriptions/<Azure subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.ApiManagement/service/<APIM service instance name>/apis/<API name>')
New-AzRoleDefinition -Role $role
-New-AzRoleAssignment -ObjectId <object ID of the user account> -RoleDefinitionName 'Calculator API Contributor' -Scope '/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.ApiManagement/service/<service name>/apis/<api ID>'
+New-AzRoleAssignment -ObjectId <object ID of the user account> -RoleDefinitionName 'Calculator API Contributor' -Scope '/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.ApiManagement/service/<APIM service instance name>/apis/<API name>'
``` The [Azure Resource Manager resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftapimanagement) article contains the list of permissions that can be granted on the API Management level.
api-management Api Management Transformation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-transformation-policies.md
In this example the policy routes the request to a service fabric backend, using
``` #### Filter response based on product
- This example shows how to perform content filtering by removing data elements from the response received from the backend service when using the `Starter` product. For a demonstration of configuring and using this policy, see [Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky](https://azure.microsoft.com/documentation/videos/episode-177-more-api-management-features-with-vlad-vinogradsky/) and fast-forward to 34:30. Start at 31:50 to see an overview of [The Dark Sky Forecast API](https://developer.forecast.io/) used for this demo.
+ This example shows how to perform content filtering by removing data elements from the response received from a backend service when using the `Starter` product. The example backend response includes root-level properties similar to the [OpenWeather One Call API](https://openweathermap.org/api/one-call-api).
```xml
-<!-- Copy this snippet into the outbound section to remove a number of data elements from the response received from the backend service based on the name of the api product -->
+<!-- Copy this snippet into the outbound section to remove a number of data elements from the response received from the backend service based on the name of the product -->
<choose> <when condition="@(context.Response.StatusCode == 200 && context.Product.Name.Equals("Starter"))"> <set-body>@{ var response = context.Response.Body.As<JObject>();
- foreach (var key in new [] {"minutely", "hourly", "daily", "flags"}) {
+ foreach (var key in new [] {"current", "minutely", "hourly", "daily", "alerts"}) {
response.Property (key).Remove (); } return response.ToString();
OriginalUrl.
#### Forward context information to the backend service
- This example shows how to apply policy at the API level to supply context information to the backend service. For a demonstration of configuring and using this policy, see [Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky](https://azure.microsoft.com/documentation/videos/episode-177-more-api-management-features-with-vlad-vinogradsky/) and fast-forward to 10:30. At 12:10 there is a demo of calling an operation in the developer portal where you can see the policy at work.
+ This example shows how to apply policy at the API level to supply context information to the backend service.
```xml <!-- Copy this snippet into the inbound element to forward some context information, user id and the region the gateway is hosted in, to the backend service for logging or evaluation -->
OriginalUrl.
> Multiple values of a header are concatenated to a CSV string, for example: > `headerName: value1,value2,value3` >
-> Exceptions include standardized headers, which values:
+> Exceptions include standardized headers whose values:
> - may contain commas (`User-Agent`, `WWW-Authenticate`, `Proxy-Authenticate`), > - may contain date (`Cookie`, `Set-Cookie`, `Warning`), > - contain date (`Date`, `Expires`, `If-Modified-Since`, `If-Unmodified-Since`, `Last-Modified`, `Retry-After`).
OriginalUrl.
``` #### Forward context information to the backend service
- This example shows how to apply policy at the API level to supply context information to the backend service. For a demonstration of configuring and using this policy, see [Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky](https://azure.microsoft.com/documentation/videos/episode-177-more-api-management-features-with-vlad-vinogradsky/) and fast-forward to 10:30. At 12:10 there is a demo of calling an operation in the developer portal where you can see the policy at work.
+ This example shows how to apply policy at the API level to supply context information to the backend service.
```xml <!-- Copy this snippet into the inbound element to forward a piece of context, product name in this example, to the backend service for logging or evaluation -->
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
The following functionality found in the managed gateways is **not available** i
- Client certificate renegotiation. This means that for [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) to work, API consumers must present their certificates as part of the initial TLS handshake. To ensure this behavior, enable the Negotiate Client Certificate setting when configuring a self-hosted gateway custom hostname. - Built-in cache. Learn about using an [external Redis-compatible cache](api-management-howto-cache-external.md) in self-hosted gateways.
+### Container images
+
+We provide a variety of container images for self-hosted gateways to meet your needs:
+
+| Tag convention | Recommendation | Example | Rolling tag | Recommended for production |
+| - | -- | - | - | - |
+| `{major}.{minor}.{patch}` | Use this tag to always run the same version of the gateway |`2.0.0` | ❌ | ✔️ |
+| `v{major}` | Use this tag to always run a major version of the gateway with every new feature and patch. |`v2` | ✔️ | ❌ |
+| `v{major}-preview` | Use this tag if you always want to run our latest preview container image. | `v2-preview` | ✔️ | ❌ |
+| `latest` | Use this tag if you want to evaluate the self-hosted gateway. | `latest` | ✔️ | ❌ |
+
+You can find a full list of available tags [here](https://mcr.microsoft.com/v2/azure-api-management/gateway/tags/list).
+
+#### Use of tags in our official deployment options
+
+Our deployment options in the Azure portal use the `v2` tag, which allows customers to use the most recent version of the self-hosted gateway v2 container image with all feature updates and patches.
+
+> [!NOTE]
+> We provide the command and YAML snippets as a reference; feel free to use a more specific tag if you wish.
+
+When you install with our Helm chart, image tagging is optimized for you. The Helm chart's application version pins the gateway to a given version and does not rely on `latest`.
+
+Learn more on how to [install an API Management self-hosted gateway on Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
+
+#### Risk of using rolling tags
+
+Rolling tags are tags that are potentially updated when a new version of the container image is released. This allows container users to receive updates to the container image without having to update their deployments.
+
+This means that you can potentially run different versions in parallel without noticing it, for example, when you perform scaling actions after the `v2` tag has been updated.
+
+For example, the `v2` tag was initially released with the `2.0.0` container image, but when `2.1.0` is released, the `v2` tag will point to the `2.1.0` image.
+
+> [!IMPORTANT]
+> Consider using a specific version tag in production to avoid unintentional upgrade to a newer version.
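For example, pinning to a specific patch release instead of a rolling tag looks like this when pulling the image from Microsoft Container Registry (a minimal sketch; the registry path is inferred from the tag list link above, and `2.0.0` is the example version from the table):

```bash
# Rolling tag: the image this resolves to can change when a new gateway version ships
docker pull mcr.microsoft.com/azure-api-management/gateway:v2

# Pinned tag: always resolves to the same gateway release (recommended for production)
docker pull mcr.microsoft.com/azure-api-management/gateway:2.0.0
```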
+ ## Connectivity to Azure Self-hosted gateways require outbound TCP/IP connectivity to Azure on port 443. Each self-hosted gateway must be associated with a single API Management service and is configured via its management plane. A self-hosted gateway uses connectivity to Azure for:
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
This article uses Health check in the Azure portal to monitor App Service instan
- Furthermore, when scaling up or out, App Service pings the Health check path to ensure new instances are ready. > [!NOTE]
->- Health check doesn't follow 302 redirects. At most one instance will be replaced per hour, with a maximum of three instances per day per App Service Plan.
->- Note, if your health check is giving the status `Waiting for health check response` then the check is likely failing due to an HTTP status code of 307, which can happen if you have HTTPS redirect enabled but have `HTTPS Only` disabled.
+>- Health check doesn't follow 302 redirects.
+>- At most one instance will be replaced per hour, with a maximum of three instances per day per App Service Plan.
+>- If your health check is giving the status `Waiting for health check response` then the check is likely failing due to an HTTP status code of 307, which can happen if you have HTTPS redirect enabled but have `HTTPS Only` disabled.
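To see which status code your Health check path actually returns, you can request it directly; a minimal sketch, assuming a placeholder app name and a hypothetical `/healthz` path:

```bash
# Print only the HTTP status code returned by the health check path.
# A 301, 302, or 307 here (instead of 200) means the probe is being redirected.
curl -s -o /dev/null -w "%{http_code}\n" https://<app-name>.azurewebsites.net/healthz
```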
## Enable Health Check
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md
ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a ms.devlang: java Previously updated : 12/10/2021 Last updated : 03/03/2022 -+ zone_pivot_groups: app-service-platform-windows-linux adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
adobe-target-content: ./quickstart-java-uiex
# Quickstart: Create a Java app on Azure App Service
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azure-cli) with the [Azure Web App Plugin for Maven](https://github.com/Microsoft/azure-maven-plugins/tree/develop/azure-webapp-maven-plugin) to deploy a .jar file, or .war file. Use the tabs to switch between Java SE and Tomcat instructions.
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azure-cli) with the [Azure Web App Plugin for Maven](https://github.com/Microsoft/azure-maven-plugins/tree/develop/azure-webapp-maven-plugin) to deploy a .jar or .war file. Use the tabs to switch between Java SE and Tomcat instructions.
# [Java SE](#tab/javase)
cd agoncal-application-petstore-ee7
## Configure the Maven plugin
-The deployment process to Azure App Service will use your Azure credentials from the Azure CLI automatically. If the Azure CLI is not installed locally, then the Maven plugin will authenticate with Oauth or device login. For more information, see [authentication with Maven plugins](https://github.com/microsoft/azure-maven-plugins/wiki/Authentication).
+The deployment process to Azure App Service will use your Azure credentials from the Azure CLI automatically. If the Azure CLI isn't installed locally, then the Maven plugin will authenticate with OAuth or device sign-in. For more information, see [authentication with Maven plugins](https://github.com/microsoft/azure-maven-plugins/wiki/Authentication).
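If you're using the Azure CLI path, signing in (and selecting a subscription) before running the configuration step ensures the plugin picks up your CLI credentials; a minimal sketch with a placeholder subscription ID:

```bash
# Sign in so the Maven plugin can reuse the Azure CLI credentials
az login

# Optional: choose the subscription to deploy into
az account set --subscription <subscription-id>
```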
Run the Maven command below to configure the deployment. This command will help you to set up the App Service operating system, Java version, and Tomcat version. ```azurecli-interactive
-mvn com.microsoft.azure:azure-webapp-maven-plugin:2.3.0:config
+mvn com.microsoft.azure:azure-webapp-maven-plugin:2.5.0:config
``` ::: zone pivot="platform-windows"
JBoss EAP is only available on the Linux version of App Service. Select the **Li
::: zone-end
-You can modify the configurations for App Service directly in your `pom.xml` if needed. Some common ones are listed below:
+You can modify the configurations for App Service directly in your `pom.xml`. Some common configurations are listed below:
Property | Required | Description | Version |||
Property | Required | Description | Version
`<resourceGroup>` | true | Azure Resource Group for your Web App. | 0.1.0+ `<appName>` | true | The name of your Web App. | 0.1.0+ `<region>` | false | Specifies the region where your Web App will be hosted; the default value is **centralus**. All valid regions are listed in the [Supported Regions](https://azure.microsoft.com/global-infrastructure/services/?products=app-service) section. | 0.1.0+
-`<pricingTier>` | false | The pricing tier for your Web App. The default value is **P1v2** for production workload, while **B2** is the recommended minimum for Java dev/test. [Learn more](https://azure.microsoft.com/pricing/details/app-service/linux/)| 0.1.0+
-`<runtime>` | false | The runtime environment configuration, you could see the detail [here](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Web-App:-Configuration-Details). | 0.1.0+
-`<deployment>` | false | The deployment configuration, you could see the details [here](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Web-App:-Configuration-Details). | 0.1.0+
+`<pricingTier>` | false | The pricing tier for your Web App. The default value is **P1v2** for production workload, while **B2** is the recommended minimum for Java dev/test. For more information, see [App Service Pricing](https://azure.microsoft.com/pricing/details/app-service/linux/)| 0.1.0+
+`<runtime>` | false | The runtime environment configuration. For more information, see [Configuration Details](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Web-App:-Configuration-Details). | 0.1.0+
+`<deployment>` | false | The deployment configuration. For more information, see [Configuration Details](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Web-App:-Configuration-Details). | 0.1.0+
-Be careful about the values of `<appName>` and `<resourceGroup>` (`helloworld-1590394316693` and `helloworld-1590394316693-rg` accordingly in the demo), they will be used later.
+Be careful about the values of `<appName>` and `<resourceGroup>` (`helloworld-1590394316693` and `helloworld-1590394316693-rg`, respectively, in the demo); they'll be used later.
> [!div class="nextstepaction"] > [I ran into an issue](https://www.research.net/r/javae2e?tutorial=quickstart-java&step=config)
mvn package azure-webapp:deploy -DskipTests
--
-Once deployment has completed, your application will be ready at `http://<appName>.azurewebsites.net/` (`http://helloworld-1590394316693.azurewebsites.net` in the demo). Open the url with your local web browser, you should see
+Once deployment is completed, your application will be ready at `http://<appName>.azurewebsites.net/` (`http://helloworld-1590394316693.azurewebsites.net` in the demo). Open the URL in your local web browser, and you should see
# [Java SE](#tab/javase)
JBoss EAP is only available on the Linux version of App Service. Select the **Li
## Clean up resources
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group from portal, or by running the following command in the Cloud Shell:
+In the preceding steps, you created Azure resources in a resource group. If you don't need the resources in the future, delete the resource group from the portal or run the following command in the Cloud Shell:
```azurecli-interactive az group delete --name <your resource group name; for example: helloworld-1558400876966-rg> --yes
app-service Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md
Title: 'Quickstart: Create a Node.js web app'
description: Deploy your first Node.js Hello World to Azure App Service in minutes. ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a Previously updated : 09/14/2021 Last updated : 03/10/2022 ms.devlang: javascript #zone_pivot_groups: app-service-ide-oss
This quickstart configures an App Service app in the **Free** tier and incurs no
:::zone target="docs" pivot="development-environment-cli" - Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?utm_source=campaign&utm_campaign=vscode-tutorial-app-service-extension&mktingSource=vscode-tutorial-app-service-extension).-- Install [Node.js and npm](https://nodejs.org). Run the command `node --version` to verify that Node.js is installed.
+- Install [Node.js LTS and npm](https://nodejs.org). Run the command `node --version` to verify that Node.js is installed.
- Install <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a>, with which you run commands in any shell to provision and configure Azure resources. ::: zone-end
This quickstart configures an App Service app in the **Free** tier and incurs no
:::zone target="docs" pivot="development-environment-azure-portal" - Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?utm_source=campaign&utm_campaign=vscode-tutorial-app-service-extension&mktingSource=vscode-tutorial-app-service-extension).-- Install [Node.js and npm](https://nodejs.org). Run the command `node --version` to verify that Node.js is installed.
+- Install [Node.js LTS and npm](https://nodejs.org). Run the command `node --version` to verify that Node.js is installed.
- Have a FTP client (for example, [FileZilla](https://filezilla-project.org)), to connect to your app. ::: zone-end
In this step, you create a starter Node.js application and make sure it runs on
1. Create a simple Node.js application using the [Express Generator](https://expressjs.com/starter/generator.html), which is installed by default with Node.js and NPM. ```bash
- npx express-generator myExpressApp --view pug
+ npx express-generator myExpressApp --view ejs
``` 1. Change to the application's directory and install the NPM packages. ```bash
- cd myExpressApp
- npm install
+ cd myExpressApp && npm install
```
-1. Start the development server.
+1. Start the development server with debug information.
```bash
- npm start
+ DEBUG=myexpressapp:* npm start
``` 1. In a browser, navigate to `http://localhost:3000`. You should see something like this:
Before you continue, ensure that you have all the prerequisites installed and co
:::zone target="docs" pivot="development-environment-cli"
-In the terminal, make sure you're in the *myExpressApp* directory, and deploy the code in your local folder (*myExpressApp*) using the `az webapp up` command:
+In the terminal, make sure you're in the *myExpressApp* directory, and deploy the code in your local folder (*myExpressApp*) using the [az webapp up](/cli/azure/webapp#az-webapp-up) command:
# [Deploy to Linux](#tab/linux)
Azure App Service supports [**two types of credentials**](deploy-configure-crede
You can deploy changes to this app by making edits in Visual Studio Code, saving your files, and then redeploy to your Azure app. For example:
-1. From the sample project, open *views/index.pug* and change
+1. From the sample project, open *views/index.ejs* and change
- ```PUG
- p Welcome to #{title}
+ ```html
+ <p>Welcome to <%= title %></p>
``` to
- ```PUG
- p Welcome to Azure!
+ ```html
+ <p>Welcome to Azure</p>
``` :::zone target="docs" pivot="development-environment-vscode"
You can deploy changes to this app by making edits in Visual Studio Code, saving
:::zone target="docs" pivot="development-environment-cli"
-2. Save your changes, then redeploy the app using the `az webapp up` command again with no arguments:
+2. Save your changes, then redeploy the app using the [az webapp up](/cli/azure/webapp#az-webapp-up) command again with no arguments:
```azurecli az webapp up
az webapp log tail
The command uses the resource group name cached in the *.azure/config* file.
-You can also include the `--logs` parameter with then `az webapp up` command to automatically open the log stream on deployment.
+You can also include the `--logs` parameter with the [az webapp up](/cli/azure/webapp#az-webapp-up) command to automatically open the log stream on deployment.
Refresh the app in the browser to generate console logs, which include messages describing HTTP requests to the app. If no output appears immediately, try again in 30 seconds.
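While the log stream is open, you can also generate requests from a second terminal instead of refreshing the browser; a small sketch with a placeholder app name:

```bash
# Send a few requests so new entries show up in the streamed logs
for i in 1 2 3; do
  curl -s -o /dev/null -w "%{http_code}\n" https://<app-name>.azurewebsites.net/
done
```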
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Previously updated : 02/23/2022 Last updated : 03/10/2022 recommendations: false
Custom models can be one of two types, [**custom template**](concept-custom-temp
The custom neural (custom document) model is a deep learning model type that relies on a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
-### Build mode
+## Build mode
The build custom model operation has added support for the *template* and *neural* custom models. Previous versions of the REST API and SDKs only supported a single build mode that is now known as the *template* mode.
This table provides links to the build mode programming language SDK references
|JavaScript | [DocumentBuildMode type](/javascript/api/@azure/ai-form-recognizer/documentbuildmode?view=azure-node-preview&preserve-view=true)| [buildModel.js](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js)| |Python | [DocumentBuildMode Enum](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.documentbuildmode?view=azure-python-preview&preserve-view=true#fields)| [sample_build_model.py](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_build_model.py)|
-## Model features
+## Compare model features
The table below compares custom template and custom neural features:
+|Feature|Custom template (form) | Custom neural (document) |
+||||
+|Document structure|Template, form, and structured | Structured, semi-structured, and unstructured|
+|Training time | 1 to 5 minutes | 20 minutes to 1 hour |
+|Data extraction | Key-value pairs, tables, selection marks, coordinates, and signatures | Key-value pairs and selection marks |
+|Document variations | Requires a model for each variation | Uses a single model for all variations |
+|Language support | Multiple [language support](language-support.md#read-layout-and-custom-form-template-model) | United States English (en-US) [language support](language-support.md#custom-neural-model) |
+ ## Custom model tools The following tools are supported by Form Recognizer v2.1:
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Previously updated : 02/15/2022 Last updated : 03/08/2022 recommendations: false
recommendations: false
The General document preview model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, selection marks, and entities from documents. General document is only available with the preview (v3.0) API. For more information on using the preview (v3.0) API, see our [migration guide](v3-migration-guide.md).
-The general document API supports most form types and will analyze your documents and extract keys and associated values. It is ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.
+The general document API supports most form types and will analyze your documents and extract keys and associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.
> [!NOTE] > The ```2022-01-30-preview``` update to the general document model adds support for selection marks. ## General document features
-* The general document model is a pre-trained model, does not require labels or training.
+* The general document model is a pre-trained model and doesn't require labels or training.
* A single API extracts key-value pairs, selection marks, entities, text, tables, and structure from documents.
You'll need the following resources:
## Key-value pairs
+Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field; in an unstructured document, they could be the date a contract was executed on, based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
-key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, this could be the label and the value the user entered for that field or in an unstructured document it could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
-
-Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. key-value pairs are always spans of text contained in the document and if you have documents where same value is described in different ways, for example a customer or a user, the associated key will be either customer or user based on what the document contained.
-
+Keys can also exist in isolation when the model detects a key with no associated value, or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are always spans of text contained in the document. If you have documents where the same value is described in different ways, for example, a customer or a user, the associated key will be either customer or user based on what the document contains.
## Entities Natural language processing models can identify parts of speech and classify each token or word. The named entity recognition model is able to identify entities like people, locations, and dates to provide for a richer experience. Identifying entities enables you to distinguish between customer types, for example, an individual or an organization.
-The key value pair extraction model and entity identification model are run in parallel on the entire document and not just on the values of the extracted key-value pairs. This ensures that complex structures where a key cannot be identified is still enriched by identifying the entities referenced. You can still match keys or values to entities based on the offsets of the identified spans.
+The key-value pair extraction model and entity identification model are run in parallel on the entire document and not just on the values of the extracted key-value pairs. This process ensures that complex structures where a key can't be identified are still enriched by identifying the entities referenced. You can still match keys or values to entities based on the offsets of the identified spans.
* The general document is a pre-trained model and can be directly invoked via the REST API. * The general document model supports named entity recognition (NER) for several entity categories. NER is the ability to identify different entities in text and categorize them into pre-defined classes or types such as: person, location, event, product, and organization. Extracting entities can be useful in scenarios where you want to validate extracted values. The entities are extracted from the entire content and not just the extracted values.
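As a rough sketch of calling the general document model through the preview REST API with curl (the resource endpoint, key, and document URL are placeholders, and the API version shown is the `2022-01-30-preview` release mentioned above; check the REST reference for the current value):

```bash
# Submit a document to the prebuilt general document model (asynchronous analyze operation)
curl -s -i -X POST \
  "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-document:analyze?api-version=2022-01-30-preview" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://example.com/sample-form.pdf"}'

# The response includes an Operation-Location header; poll that URL (with the same key)
# to retrieve the extracted key-value pairs, selection marks, and entities.
```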
-## General document model data extraction
+## Data extraction
| **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Entities** | | | :: |::| :: | :: |:: |
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
The ID document model combines Optical Character Recognition (OCR) with deep lea
***Sample U.S. Driver's License processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)*** ## Development options
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
The following tools are supported by Form Recognizer v3.0:
### Try Form Recognizer
-See how data, including tables, check boxes, and text, is extracted from forms and documents using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
+See how data is extracted from forms and documents using the Form Recognizer Studio or Sample Labeling tool. You'll need the following resources:
* An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
You'll need a form document. You can use our [sample form document](https://raw.
## Supported languages and locales
- Form Recognizer preview version introduces additional language support for the layout model. *See* [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
+*See* [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
-## Features
+## Data extraction
+
+The layout model extracts table structures, selection marks, printed and handwritten text, and bounding box coordinates from your documents.
### Tables and table headers
Layout API also extracts selection marks from documents. Extracted selection mar
### Text lines and words
-Layout API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided in lines, words, and bounding boxes. All the text information is included in the `readResults` section of the JSON output.
+The layout model extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Printed and handwritten text is extracted from lines and words. The service then returns bounding box coordinates, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
:::image type="content" source="./media/layout-text-extraction.png" alt-text="Layout text extraction output":::
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Previously updated : 02/15/2022 Last updated : 03/09/2022 recommendations: false
Azure Form Recognizer prebuilt models enable you to add intelligent document pro
| **Model** | **Description** | | | | | 🆕[Read (preview)](#read-preview) | Extract text lines, words, their locations, detected languages, and handwritten style if detected. |
+| 🆕[W-2 (preview)](#w-2-preview) | Extract employee, employer, wage information, etc. from US W-2 forms. |
| 🆕[General document (preview)](#general-document-preview) | Extract text, tables, structure, key-value pairs, and named entities. | | [Layout](#layout) | Extracts text and layout information from documents. | | [Invoice](#invoice) | Extract key information from English and Spanish invoices. | | [Receipt](#receipt) | Extract key information from English receipts. | | [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
-| 🆕[W-2 (preview)](#w-2-preview) | Extract employee, employer, wage information, etc. from US W-2 forms. |
| [Business card](#business-card) | Extract key information from English business cards. | | [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. | ### Read (preview)
+[:::image type="icon" source="media/studio/read-card.png" :::](https://formrecognizer.appliedai.azure.com/studio/read)
The Read API analyzes and extracts text lines, words, their locations, detected languages, and handwritten style if detected.
The Read API analyzes and extracts ext lines, words, their locations, detected l
> [!div class="nextstepaction"] > [Learn more: read model](concept-read.md)
+### W-2 (preview)
+
+[:::image type="icon" source="media/studio/w2.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
+
+The W-2 model analyzes and extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms (copy A, B, C, D, 1, 2) on one page.
+
+***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: W-2 model](concept-w2.md)
+ ### General document (preview)
+[:::image type="icon" source="media/studio/general-document.png":::](https://formrecognizer.appliedai.azure.com/studio/document)
* The general document API supports most form types and will analyze your documents and associate values to keys and entries to tables that it discovers. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.
The Read API analyzes and extracts ext lines, words, their locations, detected l
### Layout
+[:::image type="icon" source="media/studio/layout.png":::](https://formrecognizer.appliedai.azure.com/studio/layout)
The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from forms and documents.
The Layout API analyzes and extracts text, tables and headers, selection marks,
### Invoice
+[:::image type="icon" source="media/studio/invoice.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due. Currently, the model supports both English and Spanish invoices.
The invoice model analyzes and extracts key information from sales invoices. The
### Receipt
+[:::image type="icon" source="media/studio/receipt.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
The receipt model analyzes and extracts key information from sales receipts. The API analyzes printed and handwritten receipts and extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total.
The receipt model analyzes and extracts key information from sales receipts. The
### ID document
+[:::image type="icon" source="media/studio/id-document.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)
The ID document model analyzes and extracts key information from U.S. Driver's Licenses (all 50 states and District of Columbia) and biographical pages from international passports (excluding visa and other travel documents). The API analyzes identity documents and extracts key information such as first name, last name, address, and date of birth.
The ID document model analyzes and extracts key information from U.S. Driver's L
> [!div class="nextstepaction"] > [Learn more: identity document model](concept-id-document.md)
-### W-2 (preview)
--
-The W-2 model analyzes and extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms (copy A, B, C, D, 1, 2) on one page.
-
-***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: W-2 model](concept-w2.md)
- ### Business card
+[:::image type="icon" source="media/studio/business-card.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
The business card model analyzes and extracts key information from business card images. The API analyzes printed business card images and extracts key information such as first name, last name, company name, email address, and phone number.
The business card model analyzes and extracts key information from business card
### Custom
- :::image type="content" source="media/studio/custom.png" alt-text="Screenshot: Form Recognizer Studio custom icon.":::
+ [:::image type="icon" source="media/studio/custom.png":::](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)
The custom model analyzes and extracts data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
The custom model analyzes and extracts data from forms and documents specific to
> [!div class="nextstepaction"] > [Learn more: custom model](concept-custom.md)
-## Data extraction
+## Model data extraction
| **Model** | **Text extraction** |**Key-Value pairs** |**Fields**|**Selection Marks** | **Tables** |**Entities** |
- | | :: |::| :: | :: |:: |:: |
- |🆕Read (preview) | ✓ | || | | |
- |🆕General document (preview) | ✓ | ✓ || ✓ | ✓ | ✓ |
- | Layout | ✓ | || ✓ | ✓ | |
- | Invoice | ✓ | ✓ |✓| ✓ | ✓ ||
-|Receipt | ✓ | ✓ |✓| | ||
- | ID document | ✓ | ✓ |✓| | ||
-|🆕W-2 | ✓ | ✓ | ✓ | ✓ | ✓ ||
- | Business card | ✓ | ✓ | ✓| | ||
- | Custom |✓ | ✓ || ✓ | ✓ | ✓ |
-
+| |:: |::|:: |:: |:: |:: |
+|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | || | | |
+|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | ✓ | ✓ | ✓ | ✓ ||
+|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | ✓ || ✓ | ✓ | ✓ |
+| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | || ✓ | ✓ | |
+| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | ✓ |✓| ✓ | ✓ ||
+| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | ✓ |✓| | ||
+| [prebuilt-idDocument](concept-id-document.md#field-extraction) | ✓ | ✓ |✓| | ||
+| [prebuilt-businessCard](concept-business-card.md#field-extraction) | ✓ | ✓ | ✓| | ||
+| [Custom](concept-custom.md#compare-model-features) |✓ | ✓ || ✓ | ✓ | ✓ |
## Input requirements
The custom model analyzes and extracts data from forms and documents specific to
Form Recognizer v3.0 (preview) introduces several new features and capabilities:
-* [**Read (preview)**](concept-read.md) model is a new API that extracts text lines, words, their locations, detected languages, and handwrting style if detected.
+* [**Read (preview)**](concept-read.md) model is a new API that extracts text lines, words, their locations, detected languages, and handwritten text, if detected.
* [**General document (preview)**](concept-general-document.md) model is a new API that uses a pre-trained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents. * [**Receipt (preview)**](concept-receipt.md) model supports single-page hotel receipt processing. * [**ID document (preview)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Previously updated : 02/15/2022 Last updated : 03/09/2022 recommendations: false
# Form Recognizer read model
-The Form Recognizer v3.0 preview includes the new Read API. Read extracts text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
-
-**Data extraction features**
-
-| **Read model** | **Text Extraction** | **[Language detection](language-support.md#detected-languages-by-read)** |
-| | | |
-| Read | ✓ |✓ |
+The Form Recognizer v3.0 preview includes the new Read API. Read extracts text lines, words, their locations, detected languages, and handwritten style, if detected, from documents (PDF and TIFF) and images (JPG, PNG, and BMP).
## Development options
The following resources are supported by Form Recognizer v3.0:
|-||| |**Read model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-read**|
+## Data extraction
+
+| **Read model** | **Text Extraction** | **[Language detection](language-support.md#detected-languages-by-read)** |
+| | | |
+| prebuilt-read | ✓ | ✓ |
+ ### Try Form Recognizer
-See how text is extracted from forms and documents using the Form Recognizer Studio. You'll need the following:
+See how text is extracted from forms and documents using the Form Recognizer Studio. You'll need the following assets:
* An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how text is extracted from forms and documents using the Form Recognizer Stu
* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location. * For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 50 MB.
-* Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
+* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
## Supported languages and locales
Read API extracts text from documents and images with multiple text angles and c
### Language detection (v3.0 preview)
-Read API in v3.0 preview 2 adds [language detection](language-support.md#detected-languages-by-read) as a new feature for text lines. Read will perdict the language at the text line level along with the confidence score.
+Read API in v3.0 preview 2 adds [language detection](language-support.md#detected-languages-by-read) as a new feature for text lines. Read will predict the language at the text line level along with the confidence score.
### Handwritten classification for text lines (Latin only)
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Previously updated : 02/15/2022 Last updated : 03/08/2022 recommendations: false
The prebuilt W-2 form, model is supported by Form Recognizer v3.0 with the follo
| Feature | Resources | Model ID | |-|-|--|
-|**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-tax.us.w2**|
+|**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
### Try Form Recognizer
See how data, including employee, employer, wage, and tax information is extract
|Name| Box | Type | Description | Standardized output| |:--|:-|:-|:-|:-| | Employee.SocialSecurityNumber | a | String | Employee's Social Security Number (SSN). | 123-45-6789 |
-| Employer.IdNumber | b | String | Employer's ID number (EIN), the business equivalent of a social security number.| 12-1234567 |
-| Employer.Name | c | String | Employer's name. | Contoso |
-| Employer.Address | c | String | Employer's address (with city). | 123 Example Street Sample City, CA |
-| Employer.ZipCode | c | String | Employer's zip code. | 12345 |
-| ControlNumber | d | String | A code identifying the unique W-2 in the records of employer. | R3D1 |
-| Employee.Name | e | String | Full name of the employee. | Henry Ross|
-| Employee.Address | f | String | Employee's address (with city). | 123 Example Street Sample City, CA |
-| Employee.ZipCode | f | String | Employee's zip code. | 12345 |
-| WagesTipsAndOtherCompensation | 1 | Number | A summary of your pay, including wages, tips and other compensation. | 50000 |
-| FederalIncomeTaxWithheld | 2 | Number | Federal income tax withheld. | 1111 |
-| SocialSecurityWages | 3 | Number | Social security wages. | 35000 |
-| SocialSecurityTaxWithheld | 4 | Number | Social security tax with held. | 1111 |
-| MedicareWagesAndTips | 5 | Number | Medicare wages and tips. | 45000 |
-| MedicareTaxWithheld | 6 | Number | Medicare tax with held. | 1111 |
-| SocialSecurityTips | 7 | Number | Social security tips. | 1111 |
-| AllocatedTips | 8 | Number | Allocated tips. | 1111 |
+| Employer.IdNumber | b | String | Employer's ID number (EIN), the business equivalent of a social security number| 12-1234567 |
+| Employer.Name | c | String | Employer's name | Contoso |
+| Employer.Address | c | String | Employer's address (with city) | 123 Example Street Sample City, CA |
+| Employer.ZipCode | c | String | Employer's zip code | 12345 |
+| ControlNumber | d | String | A code identifying the unique W-2 in the records of employer | R3D1 |
+| Employee.Name | e | String | Full name of the employee | Henry Ross|
+| Employee.Address | f | String | Employee's address (with city) | 123 Example Street Sample City, CA |
+| Employee.ZipCode | f | String | Employee's zip code | 12345 |
+| WagesTipsAndOtherCompensation | 1 | Number | A summary of your pay, including wages, tips and other compensation | 50000 |
+| FederalIncomeTaxWithheld | 2 | Number | Federal income tax withheld | 1111 |
+| SocialSecurityWages | 3 | Number | Social security wages | 35000 |
+| SocialSecurityTaxWithheld | 4 | Number | Social security tax withheld | 1111 |
+| MedicareWagesAndTips | 5 | Number | Medicare wages and tips | 45000 |
+| MedicareTaxWithheld | 6 | Number | Medicare tax withheld | 1111 |
+| SocialSecurityTips | 7 | Number | Social security tips | 1111 |
+| AllocatedTips | 8 | Number | Allocated tips | 1111 |
| VerificationCode | 9 | String | Verification Code on Form W-2 | A123-B456-C789-DXYZ |
-| DependentCareBenefits | 10 | Number | Dependent care benefits. | 1111 |
-| NonqualifiedPlans | 11 | Number | The non-qualified plan, a type of retirement savings plan that is employer-sponsored and tax-deferred. | 1111 |
-| AdditionalInfo | | Array of objects | An array of LetterCode and Amount. | |
-| LetterCode | 12a, 12b, 12c, 12d | String | Letter code. Refer to [IRS/W-2](https://www.irs.gov/pub/irs-prior/fw2--2014.pdf) for the semantics of the code values. | D |
+| DependentCareBenefits | 10 | Number | Dependent care benefits | 1111 |
+| NonqualifiedPlans | 11 | Number | The non-qualified plan, a type of retirement savings plan that is employer-sponsored and tax-deferred | 1111 |
+| AdditionalInfo | | Array of objects | An array of LetterCode and Amount | |
+| LetterCode | 12a, 12b, 12c, 12d | String | Letter code. Refer to [IRS/W-2](https://www.irs.gov/pub/irs-prior/fw2--2014.pdf) for the semantics of the code values | D |
| Amount | 12a, 12b, 12c, 12d | Number | Amount | 1234 |
-| IsStatutoryEmployee | 13 | String | Whether the RetirementPlan box is checked or not. | true |
-| IsRetirementPlan | 13 | String | Whether the RetirementPlan box is checked or not. | true |
-| IsThirdPartySickPay | 13 | String | Whether the ThirdPartySickPay box is checked or not. | false |
-| Other | 14 | String | Other info employers may use this field to report. | |
-| StateTaxInfos | | Array of objects | An array of state tax info including State, EmployerStateIdNumber, StateIncomeTax, StageWagesTipsEtc. | |
+| IsStatutoryEmployee | 13 | String | Whether the StatutoryEmployee box is checked or not | true |
+| IsRetirementPlan | 13 | String | Whether the RetirementPlan box is checked or not | true |
+| IsThirdPartySickPay | 13 | String | Whether the ThirdPartySickPay box is checked or not | false |
+| Other | 14 | String | Other info employers may use this field to report | |
+| StateTaxInfos | | Array of objects | An array of state tax info including State, EmployerStateIdNumber, StateIncomeTax, StageWagesTipsEtc | |
| State | 15 | String | State | CA |
-| EmployerStateIdNumber | 15 | String | Employer state number. | 123-123-1234 |
+| EmployerStateIdNumber | 15 | String | Employer state number | 123-123-1234 |
| StateWagesTipsEtc | 16 | Number | State wages, tips, etc. | 50000 |
-| StateIncomeTax | 17 | Number | State income tax. | 1535 |
-| LocalTaxInfos | | Array of objects | An array of local income tax info including LocalWagesTipsEtc, LocalIncomeTax, LocalityName. | |
+| StateIncomeTax | 17 | Number | State income tax | 1535 |
+| LocalTaxInfos | | Array of objects | An array of local income tax info including LocalWagesTipsEtc, LocalIncomeTax, LocalityName | |
| LocalWagesTipsEtc | 18 | Number | Local wages, tips, etc. | 50000 |
-| LocalIncomeTax | 19 | Number | Local income tax. | 750 |
+| LocalIncomeTax | 19 | Number | Local income tax | 750 |
| LocalityName | 20 | Number | Locality name. | CLEVELAND |
- | W2Copy | | String | Copy of W-2 forms A, B, C, D, 1, or 2. | Copy A For Social Security Administration |
-| TaxYear | | Number | Tax year. | 2020 |
-| W2FormVariant | | String | The variants of W-2 forms, including "W-2", "W-2AS", "W-2CM", "W-2GU", "W-2VI". | W-2 |
+ | W2Copy | | String | Copy of W-2 forms A, B, C, D, 1, or 2 | Copy A For Social Security Administration |
+| TaxYear | | Number | Tax year | 2020 |
+| W2FormVariant | | String | The variants of W-2 forms, including "W-2", "W-2AS", "W-2CM", "W-2GU", "W-2VI" | W-2 |
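A rough sketch of retrieving these fields through the preview REST API: submit the document to the `prebuilt-tax.us.w2` model, then poll the returned operation URL for the result (the endpoint, key, document URL, and API version below are placeholders and assumptions based on the preview release referenced in this article):

```bash
# 1) Submit a W-2 for analysis; the response headers include an Operation-Location URL
curl -s -D headers.txt -o /dev/null -X POST \
  "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-tax.us.w2:analyze?api-version=2022-01-30-preview" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://example.com/sample-w2.pdf"}'

# 2) Poll the operation until its status is "succeeded"; the result contains the fields listed above
curl -s "$(grep -i '^operation-location:' headers.txt | awk '{print $2}' | tr -d '\r')" \
  -H "Ocp-Apim-Subscription-Key: <your-key>"
```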
### Migration guide and REST API v3.0
See how data, including employee, employer, wage, and tax information is extract
* Complete a Form Recognizer quickstart:
- > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
-
-* Explore our REST API:
-
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
+|Programming language | :::image type="content" source="media/form-recognizer-icon.png" alt-text="Form Recognizer icon from the Azure portal."::: |Programming language
+|::|::|::|
+|[**C#**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)||[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)|
+|[**Java**](quickstarts/try-v3-java-sdk.md#prebuilt-model)||[**Python**](quickstarts/try-v3-python-sdk.md#prebuilt-model)|
+|[**REST API**](quickstarts/try-v3-rest-api.md#prebuilt-model)|||
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Previously updated : 02/15/2022 Last updated : 03/08/2022 recommendations: false keywords: automated data processing, document processing, automated data entry, forms processing
<!-- markdownlint-disable MD024 --> # What is Azure Form Recognizer? - Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. Form Recognizer analyzes your forms and documents, extracts text and data, maps field relationships as key-value pairs, and returns a structured JSON output. You quickly get accurate results that are tailored to your specific content without excessive manual intervention or extensive data science expertise. Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
-Form Recognizer easily identifies, extracts, and analyzes the following document data:
+Form Recognizer uses the following models to easily identify, extract, and analyze document data:
+
+* [**W-2 form model**](concept-w2.md) | Extract text and key information from US W2 tax forms.
+* [**Read model**](concept-read.md) | Extract printed and handwritten text lines, words, locations, and detected languages from documents and images.
+* [**General document model**](concept-general-document.md) | Extract key-value pairs, selection marks, and entities from documents.
+* [**Invoice model**](concept-invoice.md) | Extract text, selection marks, tables, key-value pairs, and key information from invoices.
+* [**Receipt model**](concept-receipt.md) | Extract text and key information from receipts.
+* [**ID document model**](concept-id-document.md) | Extract text and key information from driver licenses and international passports.
+* [**Business card model**](concept-business-card.md) | Extract text and key information from business cards.
+
+## Which Form Recognizer feature should I use?
-* Table structure and content.
-* Form elements and field values.
-* Typed and handwritten alphanumeric text.
-* Relationships between elements.
-* Key/value pairs
-* Element location with bounding box coordinates.
+This section helps you decide which Form Recognizer v3.0 supported feature you should use for your application:
+
+| What type of document do you want to analyze?| How is the document formatted? | Your best solution |
+| --|-| -|
+|<ul><li>**W-2 Form**</li></ul>| Is your W-2 document composed in United States English (en-US) text?|<ul><li>If **Yes**, use the [**W-2 Form**](concept-w2.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
+|<ul><li>**Text-only document**</li></ul>| Is your text-only document _printed_ in a [supported language](language-support.md#read-layout-and-custom-form-template-model) or, if handwritten, is it composed in English?|<ul><li>If **Yes**, use the [**Read**](concept-read.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>
+|<ul><li>**Invoice**</li></ul>| Is your invoice document composed in English or Spanish text?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>
+|<ul><li>**Receipt**</li><li>**Business card**</li></ul>| Is your receipt or business card document composed in English text? | <ul><li>If **Yes**, use the [**Receipt**](concept-receipt.md) or [**Business Card**](concept-business-card.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
+|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model</li></ul>|
+ |<ul><li>**Form** or **Document**</li></ul>| Is your form or document an industry-standard format commonly used in your business or industry?| <ul><li>If **Yes**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md).</li><li>If **No**, you can [**Train and build a custom model**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model).
## Form Recognizer features and development options
The following features and development options are supported by the Form Recogn
| Feature | Description | Development options | |-|--|-|
-|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
-|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs and, named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-layout-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
+|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#general-document-model)</li><li>[**C# SDK**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzePrebuiltRead.md)</li><li>[**Python SDK**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-bet#general-document-model)</li><li>[**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/118feb81eb57dbf6b4f851ef2a387ed1b1a86bde/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/readDocument.js)</li></ul> |
+|[🆕 **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul> |
+|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs and, named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
+|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#layout-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br></li><li>Custom model API v3.0 offers a new model type **Custom Neural** or custom document to analyze unstructured documents.</li></ul>| [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>| |[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>| |[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
The following features and development options are supported by the Form Recogn
-## Which Form Recognizer feature should I use?
-
-This section helps you decide which Form Recognizer feature you should use for your application.
-| What type of document do you want to analyze?| How is the document formatted? | Your best solution |
-| --|-| -|
-|<ul><li>**Invoice**</li><li>**Receipt**</li><li>**Business card**</li></ul>| Is your invoice, receipt, or business card document composed of English-text? | <ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md), [**Receipt**](concept-receipt.md), or [**Business Card**](concept-business-card.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
-|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the[**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model</li></ul>|
- |<ul><li>**Form** or **Document**</li></ul>| Is your form or document an industry-standard format commonly used in your business or industry?| <ul><li>If **Yes**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md).</li><li>If **No**, you can [**Train and build a custom model**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model).
## How to use Form Recognizer documentation
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Previously updated : 02/15/2022 Last updated : 03/08/2022 recommendations: false
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-[Reference documentation](/dotnet/api/azure.ai.formrecognizer.documentanalysis?view=azure-dotnet-preview&preserve-view=true) |[Library Source Code](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.3/sdk/formrecognizer/Azure.AI.FormRecognizer/) |[Package (NuGet)](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.3) | [Samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
+[Reference documentation](/dotnet/api/azure.ai.formrecognizer.documentanalysis?view=azure-dotnet-preview&preserve-view=true) | [Library Source Code](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.3/sdk/formrecognizer/Azure.AI.FormRecognizer/) | [Package (NuGet)](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.3) | [Samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
Get started with Azure Form Recognizer using the C# programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
In this quickstart, you'll use following features to analyze and extract data an
> [!TIP] > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
-* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You will paste your key and endpoint into the code below later in the quickstart:
+* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart:
:::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
for (int i = 0; i < result.Tables.Count; i++)
## Prebuilt model
-Extract and analyze data from common document types using a pre-trained model.
+In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
-##### Choose a prebuilt model ID
-
-You are not limited to invoicesΓÇöthere are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the model IDs for the prebuilt models currently supported by the Form Recognizer service:
-
-* [**prebuilt-invoice**](../concept-invoice.md): extracts text, selection marks, tables, key-value pairs, and key information from invoices.
-* [**prebuilt-receipt**](../concept-receipt.md): extracts text and key information from receipts.
-* [**prebuilt-idDocument**](../concept-id-document.md): extracts text and key information from driver licenses and international passports.
-* [**prebuilt-businessCard**](../concept-business-card.md): extracts text and key information from business cards.
+> [!TIP]
+> You aren't limited to invoices—there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
#### Try the prebuilt invoice model > [!div class="checklist"] >
-> * We wll analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
+> * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
> * We've added the file URI value to the `Uri fileUri` variable at the top of the Program.cs file. > * To analyze a given file at a URI, use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-invoice` as the model ID. The returned value is an `AnalyzeResult` object containing data from the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
Previously updated : 02/15/2022- Last updated : 03/08/2022+
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). * A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**Cognitive Services multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
-## Pretrained models
+## Prebuilt models
+
+Prebuilt models help you add Form Recognizer features to your apps without having to build, train, and publish your own models. You can choose from several prebuilt models, each of which has its own set of supported data fields. The choice of model to use for the analyze operation depends on the type of document to be analyzed. The following prebuilt models are currently supported by Form Recognizer:
+
+* [🆕 **General document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document): extract text, tables, structure, key-value pairs, and named entities.
+* [🆕 **W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extract text and key information from W-2 tax forms.
+* [🆕 **Read**](https://formrecognizer.appliedai.azure.com/studio/read): extract text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+* [**Layout**](https://formrecognizer.appliedai.azure.com/studio/layout): extract text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+* [**Invoice**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice): extract text, selection marks, tables, key-value pairs, and key information from invoices.
+* [**Receipt**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt): extract text and key information from receipts.
+* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports.
+* [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards.
-After you have completed the prerequisites, navigate to the [Form Recognizer Studio General Documents preview](https://formrecognizer.appliedai.azure.com). In the following example, we use the General Documents feature. The steps to use other pre-trained features like [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument), and [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2) models are similar.
+After you've completed the prerequisites, navigate to the [Form Recognizer Studio General Documents preview](https://formrecognizer.appliedai.azure.com). In the following example, we use the General Documents feature. The steps to use other pre-trained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
1. Select a Form Recognizer service feature from the Studio home page.
-1. This is a one-time step unless you have already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
+1. This is a one-time step unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
1. Select the Analyze command to run analysis on the sample document or try your document by using the Add command.
After you have completed the prerequisites, navigate to the [Form Recognizer Stu
:::image border="true" type="content" source="../media/quickstarts/layout-get-started-v2.gif" alt-text="Form Recognizer Layout example":::
-## Prebuilt models
-
-There are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are prebuilt models currently supported by the Form Recognizer service:
-
-* [🆕 **General document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document)—Analyze and extract text, tables, structure, key-value pairs and named entities.
-* [**Invoice**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice): extracts text, selection marks, tables, key-value pairs, and key information from invoices.
-* [**Receipt**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt): extracts text and key information from receipts.
-* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extracts text and key information from driver licenses and international passports.
-* [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extracts text and key information from business cards.
-* [**W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extracts text and key information from W-2 tax forms.
-
-1. In the output section's Content tab, browse the list of extracted key-value pairs and entities. For other Form Recognizer features, the Content tab will show the corresponding insights extracted.
-
-1. From the results tab, check out the formatted JSON response from the service. Search and browse the JSON response to understand the service results.
-
-1. From the Code tab, copy the code sample to get started on integrating the feature with your application.
-- ## Additional prerequisites for custom projects In addition to the Azure account and a Form Recognizer or Cognitive Services resource, you'll need:
A **standard performance** [**Azure Blob Storage account**](https://portal.azure
### Configure CORS
-[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you will need access to the CORS blade of your storage account.
+[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS blade of your storage account.
:::image type="content" source="../media/quickstarts/cors-updated-image.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
A **standard performance** [**Azure Blob Storage account**](https://portal.azure
4. Select all 8 available options for **Allowed methods**. 5. Approve all **Allowed headers** and **Exposed headers** by entering an * in each field. 6. Set the **Max Age** to 120 seconds or any acceptable value.
-7. Click the save button at the top of the page to save the changes.
+7. Select the save button at the top of the page to save the changes.
CORS should now be configured to use the storage account from Form Recognizer Studio.
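If you prefer to script the same configuration, the Azure CLI provides an `az storage cors add` command. The following is a minimal sketch, assuming placeholder account credentials; the method list is an assumption and should match the eight options shown in the portal.

```bash
# Sketch: allow the Form Recognizer Studio origin to call Blob storage (replace the placeholders).
# If your CLI version supports more methods (for example, PATCH), include them to match the portal's eight options.
az storage cors add \
  --account-name <your-storage-account> \
  --account-key <your-storage-account-key> \
  --services b \
  --origins "https://formrecognizer.appliedai.azure.com" \
  --methods DELETE GET HEAD MERGE OPTIONS POST PUT \
  --allowed-headers "*" \
  --exposed-headers "*" \
  --max-age 120
```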
To create custom models, you start with configuring your project:
1. Review and submit your settings to create the project.
-1. From the labeling view, define the labels and their types that you are interested in extracting.
+1. From the labeling view, define the labels and their types that you're interested in extracting.
1. Select the text in the document and select the label from the drop-down list or the labels pane.
To create custom models, you start with configuring your project:
> [!NOTE] > Tables are currently only supported for custom template models. When training a custom neural model, labeled tables are ignored.
-1. Use the Delete command to delete models that are not required.
+1. Use the Delete command to delete models that aren't required.
1. Download model details for offline viewing.
To create custom models, you start with configuring your project:
Using tables as the visual pattern:
-For custom form models, while creating your custom models, you may need to extract data collections from your documents. These may appear in a couple of formats. Using tables as the visual pattern:
+While creating your custom form models, you may need to extract data collections from your documents. Data collections may appear in a couple of formats. Using tables as the visual pattern:
* Dynamic or variable count of values (rows) for a given set of fields (columns)
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
Previously updated : 02/15/2022 Last updated : 03/08/2022 recommendations: false
This quickstart uses the Gradle dependency manager. You can find the client libr
mkdir -p src/main/java ```
- You will create the following directory structure:
+ You'll create the following directory structure:
:::image type="content" source="../media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure":::
public static void main(String[] args) {
## Prebuilt model
-Extract and analyze data from common document types using a pre-trained model.
+In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
-##### Choose a prebuilt model ID
-
-You're not limited to invoicesΓÇöthere are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the model IDs for the prebuilt models currently supported by the Form Recognizer service:
-
-* [**prebuilt-invoice**](../concept-invoice.md): extracts text, selection marks, tables, key-value pairs, and key information from invoices.
-* [**prebuilt-receipt**](../concept-receipt.md): extracts text and key information from receipts.
-* [**prebuilt-idDocument**](../concept-id-document.md): extracts text and key information from driver licenses and international passports.
-* [**prebuilt-businessCard**](../concept-business-card.md): extracts text and key information from business cards.
+> [!TIP]
+> You aren't limited to invoices—there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
#### Try the prebuilt invoice model > [!div class="checklist"] >
-> * We wll analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
+> * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
> * We've added the file URL value to the `invoiceUrl` variable at the top of the file. > * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Previously updated : 02/15/2022 Last updated : 03/08/2022 recommendations: false
main().catch((error) => {
## Prebuilt model
-Extract and analyze data from common document types using a pre-trained model.
+In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
-##### Choose a prebuilt model ID
-
-You are not limited to invoicesΓÇöthere are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the model IDs for the prebuilt models currently supported by the Form Recognizer service:
-
-* [**prebuilt-invoice**](../concept-invoice.md): extracts text, selection marks, tables, key-value pairs, and key information from invoices.
-* [**prebuilt-receipt**](../concept-receipt.md): extracts text and key information from receipts.
-* [**prebuilt-idDocument**](../concept-id-document.md): extracts text and key information from driver licenses and international passports.
-* [**prebuilt-businessCard**](../concept-business-card.md): extracts text and key information from business cards.
+> [!TIP]
+> You aren't limited to invoices—there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
#### Try the prebuilt invoice model > [!div class="checklist"] >
-> * We wll analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
+> * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
> * We've added the file URL value to the `invoiceUrl` variable at the top of the file. > * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
Previously updated : 02/15/2022 Last updated : 03/08/2022 recommendations: false
In this quickstart you'll use following features to analyze and extract data and
> [!TIP] > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
-* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You will paste your key and endpoint into the code below later in the quickstart:
+* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart:
:::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
if __name__ == "__main__":
## Prebuilt model
-Extract and analyze data from common document types using a pre-trained model.
+In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
-### Choose a prebuilt model ID
-
-You are not limited to invoicesΓÇöthere are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the model IDs for the prebuilt models currently supported by the Form Recognizer service:
-
-* [**prebuilt-invoice**](../concept-invoice.md): extracts text, selection marks, tables, key-value pairs, and key information from invoices.
-* [**prebuilt-receipt**](../concept-receipt.md): extracts text and key information from receipts.
-* [**prebuilt-idDocument**](../concept-id-document.md): extracts text and key information from driver licenses and international passports.
-* [**prebuilt-businessCard**](../concept-business-card.md): extracts text and key information from business cards.
+> [!TIP]
+> You aren't limited to invoices—there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
#### Try the prebuilt invoice model > [!div class="checklist"] >
-> * We wll analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
+> * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
> * We've added the file URL value to the `invoiceUrl` variable at the top of the file. > * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Previously updated : 02/15/2022 Last updated : 03/08/2022
The following table illustrates the updates to the REST API calls.
In this quickstart, you'll use the following features to analyze and extract data and values from forms and documents:
-* [🆕 **General document**](#try-it-general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
+* [🆕 **General document**](#general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
-* [**Layout**](#try-it-layout-model)ΓÇöAnalyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
+* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
-* [**Prebuilt Model**](#try-it-prebuilt-model)ΓÇöAnalyze and extract data from common document types, using a pre-trained model.
+* [**Prebuilt Model**](#prebuilt-model)—Analyze and extract data from common document types, using a pre-trained model.
## Prerequisites
In this quickstart you'll use following features to analyze and extract data and
### Select a code sample to copy and paste into your application:
-* [**General document**](#try-it-general-document-model)
+* [**General document**](#general-document-model)
-* [**Layout**](#try-it-layout-model)
+* [**Layout**](#layout-model)
-* [**Prebuilt Model**](#try-it-prebuilt-model)
+* [**Prebuilt Model**](#prebuilt-model)
> [!IMPORTANT] > > Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. See the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article for more information.
-## **Try it**: General document model
+## General document model
> [!div class="checklist"] >
curl -v -X GET "https://{endpoint}/formrecognizer/documentModels/prebuilt-docume
### Examine the response
-You'll receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation is not complete, the value of `"status"` will be `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
+You'll receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation isn't complete, the value of `"status"` will be `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
The `"analyzeResults"` node contains all of the recognized text. Text is organized by page, lines, tables, key-value pairs, and entities.
The `"analyzeResults"` node contains all of the recognized text. Text is organiz
```
-## **Try it**: Layout model
+## Layout model
> [!div class="checklist"] >
curl -v -X GET "https://{endpoint}/formrecognizer/documentModels/prebuilt-layout
You'll receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation isn't complete, the value of `"status"` will be `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
-## **Try it**: Prebuilt model
-
-This sample demonstrates how to analyze data from certain common document types with a pre-trained model, using an invoice as an example.
+## Prebuilt model
-> [!div class="checklist"]
->
-> * For this example, we wll analyze an invoice document using a prebuilt model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
+In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
-##### Choose the invoice prebuilt model ID
+> [!TIP]
+> You aren't limited to invoicesΓÇöthere are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
-You aren't limited to invoicesΓÇöthere are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the model IDs for the prebuilt models currently supported by the Form Recognizer service:
+#### Try the prebuilt invoice model
-* **prebuilt-invoice**: extracts text, selection marks, tables, key-value pairs, and key information from invoices.
-* **prebuilt-businessCard**: extracts text and key information from business cards.
-* **prebuilt-idDocument**: extracts text and key information from driver licenses and international passports.
-* **prebuilt-receipt**: extracts text and key information from receipts.
+> [!div class="checklist"]
+>
+> * Analyze an invoice document using a prebuilt model.
+> * You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
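As a rough sketch of the underlying request, the following call submits the sample invoice URL to the `prebuilt-invoice` model. The `{endpoint}`, `{key}`, and `{apiVersion}` values are placeholders, and the exact API version string depends on the preview release you're targeting. The service responds with `202 Accepted` and an `Operation-Location` header whose URL you poll for results, as with the other models.

```bash
# Sketch: analyze the sample invoice with the prebuilt-invoice model (replace the placeholders).
curl -v -i -X POST "https://{endpoint}/formrecognizer/documentModels/prebuilt-invoice:analyze?api-version={apiVersion}" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: {key}" \
  --data '{"urlSource": "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf"}'
```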
Before you run the command, make these changes:
automanage Arm Deploy Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/arm-deploy-arc.md
+
+ Title: Onboard an Azure Arc-enabled server to Azure Automanage with an ARM template
+description: Learn how to onboard an Azure Arc-enabled server to Azure Automanage with an Azure Resource Manager template.
+++ Last updated : 02/25/2022++
+# Onboard an Azure Arc-enabled server to Automanage with an Azure Resource Manager template (ARM template)
++
+Follow the steps to onboard an Azure Arc-enabled server to Automanage Best Practices using an ARM template.
+
+## Prerequisites
+* You must have an Azure Arc-enabled server already registered in your subscription
+* You must have necessary [Role-based access control permissions](./automanage-virtual-machines.md#required-rbac-permissions)
+* You must use one of the [supported operating systems](./automanage-arc.md#supported-operating-systems)
+
+## ARM template overview
+The following ARM template will onboard your specified Azure Arc-enabled server onto Azure Automanage Best Practices. Details about the ARM template and the steps to deploy it are in the [ARM template deployment](#arm-template-deployment) section.
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "machineName": {
+ "type": "String"
+ },
+ "configurationProfile": {
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.HybridCompute/machines/providers/configurationProfileAssignments",
+ "apiVersion": "2021-04-30-preview",
+ "name": "[concat(parameters('machineName'), '/Microsoft.Automanage/default')]",
+ "properties": {
+ "configurationProfile": "[parameters('configurationProfile')]"
+ }
+ }
+ ]
+}
+```
+
+## ARM template deployment
+This ARM template will create a configuration profile assignment for your specified Azure Arc-enabled machine.
+
+The `configurationProfile` value can be one of the following:
+* "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction"
+* "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesDevTest"
+
+Follow these steps to deploy the ARM template:
+1. Save this ARM template as `azuredeploy.json`.
+1. Run this ARM template deployment with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json`.
+1. Provide the values for `machineName` and `configurationProfile` when prompted.
+1. You're ready to deploy.
+
+As with any ARM template, it's possible to factor out the parameters into a separate `azuredeploy.parameters.json` file and use that as an argument when deploying.
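For example, a deployment from the Azure CLI might look like the following sketch; the resource group name, machine name, and parameters file name are placeholders.

```bash
# Sketch: deploy the template with inline parameter values (replace the placeholders).
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters machineName=<your-arc-machine-name> \
               configurationProfile="/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction"

# Or pass a parameters file instead of inline values.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json
```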
+
+## Next steps
+Learn more about Automanage for [Azure Arc](./automanage-arc.md).
automanage Arm Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/arm-deploy.md
Follow these steps to deploy the ARM template:
As with any ARM template, it's possible to factor out the parameters into a separate `azuredeploy.parameters.json` file and use that as an argument when deploying. ## Next steps
-Learn more about Automanage for [Linux](./automanage-linux.md) and [Windows](./automanage-windows-server.md)
+Learn more about Automanage for [Linux](./automanage-linux.md) and [Windows](./automanage-windows-server.md)
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
The following are current known issues with PowerShell runbooks:
- Avoid importing version 2.4.0 of the `Az.Accounts` module for the PowerShell 7 runtime, because there can be unexpected behavior when using this version in Azure Automation. - You might encounter formatting problems with error output streams for the job running in PowerShell 7 runtime. - When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when the PowerShell 7.1 version of the dependent module is installed. For example, Az.Compute version 4.20.0 has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, the 5.1 version of Az.Accounts was < 2.6.0.
+- When you start a PowerShell 7 runbook by using a webhook, the webhook input parameter is auto-converted to invalid JSON.
## PowerShell Workflow runbooks
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
Title: Connect your VMware vCenter to Azure Arc using the helper script
-description: In this quickstart, you'll learn how to use the helper script to connect your VMware vCenter to Azure Arc.
+ Title: Connect VMware vCenter Server to Azure Arc by using the helper script
+description: In this quickstart, you'll learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc.
Last updated 11/10/2021
-# Customer intent: As a VI admin, I want to connect my vCenter to Azure to enable self-service through Arc.
+# Customer intent: As a VI admin, I want to connect my vCenter Server instance to Azure to enable self-service through Azure Arc.
-# Quickstart: Connect your VMware vCenter to Azure Arc using the helper script
+# Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script
-To start using the Azure Arc-enabled VMware vSphere (preview) features, you'll need to connect your VMware vCenter Server to Azure Arc. This quickstart shows you how to connect your VMware vCenter Server to Azure Arc using a helper script.
+To start using the Azure Arc-enabled VMware vSphere (preview) features, you need to connect your VMware vCenter Server instance to Azure Arc. This quickstart shows you how to connect your VMware vCenter Server instance to Azure Arc by using a helper script.
-First, the script deploys a virtual appliance, called [Azure Arc resource bridge (preview)](../resource-bridge/overview.md), in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between your vCenter Server and Azure Arc.
+First, the script deploys a virtual appliance called [Azure Arc resource bridge (preview)](../resource-bridge/overview.md) in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between vCenter Server and Azure Arc.
> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+> In the interest of ensuring that new features are documented no later than their release, this article might include documentation for features that aren't yet publicly available.
## Prerequisites
First, the script deploys a virtual appliance, called [Azure Arc resource bridge
- An Azure subscription. -- A resource group in the subscription where you are a member of the *Owner/Contributor* role.
+- A resource group in the subscription where you're a member of the *Owner/Contributor* role.
### vCenter Server -- vCenter Server running version 6.7
+- vCenter Server version 6.7.
-- Allow inbound connections on TCP port (usually 443) so that the Arc resource bridge and VMware cluster extension can communicate with the vCenter server.
+- Inbound connections allowed on TCP port (usually 443) so that the Azure Arc resource bridge and VMware cluster extension can communicate with the vCenter Server instance.
-- A resource pool or a cluster with a minimum capacity of 16 GB of RAM, four vCPUs.
+- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs.
- A datastore with a minimum of 100 GB of free disk space available through the resource pool or cluster. - An external virtual network/switch and internet access, directly or through a proxy. > [!NOTE]
-> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 2500 VMs. If your vCenter has more than 2500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point.
+> Azure Arc-enabled VMware vSphere (preview) supports vCenter Server instances with a maximum of 2,500 virtual machines (VMs). If your vCenter Server instance has more than 2,500 VMs, we don't recommend that you use Azure Arc-enabled VMware vSphere with it at this point.
-### vSphere accounts
+### vSphere account
-A vSphere account that can:
-- read all inventory -- deploy, and update VMs to all the resource pools (or clusters), networks, and virtual machine templates that you want to use with Azure Arc.
+You need a vSphere account that can:
+- Read all inventory.
+- Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc.
-This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere (preview) and the Azure Arc resource bridge (preview) VM deployment.
+This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere (preview) and the deployment of the Azure Arc resource bridge (preview) VM.
### Workstation
-A Windows or Linux machine that can access both your vCenter Server and internet, directly or through a proxy.
+You need a Windows or Linux machine that can access both your vCenter Server instance and the internet, directly or through a proxy.
## Prepare vCenter Server 1. Create a resource pool with a reservation of at least 16 GB of RAM and four vCPUs. It should also have access to a datastore with at least 100 GB of free disk space.
-2. Ensure the vSphere accounts have the appropriate permissions.
+2. Ensure that the vSphere accounts have the appropriate permissions.
## Download the onboarding script
-1. Go to Azure portal.
+1. Go to the Azure portal.
-2. Search for **Azure Arc** and click on it.
+2. Search for **Azure Arc** and select it.
-3. On the **Overview** page, click on **Add** under **Add your infrastructure for free** or move to the **Infrastructure** tab.
+3. On the **Overview** page, select **Add** under **Add your infrastructure for free** or move to the **Infrastructure** tab.
-4. Under **Platform** section, click on **Add** under VMware.
+4. In the **Platform** section, select **Add** under **VMware vCenter**.
- :::image type="content" source="media/add-vmware-vcenter.png" alt-text="Screenshot showing how to add a VMware vCenter through Azure Arc center":::
+ :::image type="content" source="media/add-vmware-vcenter.png" alt-text="Screenshot that shows how to add VMware vCenter through Azure Arc.":::
-5. Select **Create a new resource bridge** and click **Next**
+5. Select **Create a new resource bridge**, and then select **Next**.
-6. Provide a name of your choice for Arc resource bridge. Eg. `contoso-nyc-resourcebridge`
+6. Provide a name of your choice for the Azure Arc resource bridge. For example: **contoso-nyc-resourcebridge**.
-7. Select a Subscription and Resource group where the resource bridge would be created.
+7. Select a subscription and resource group where the resource bridge will be created.
-8. Under Region, select an Azure location where the resource metadata would be stored. Currently supported regions are `East US` and `West Europe`.
+8. Under **Region**, select an Azure location where the resource metadata will be stored. Currently, supported regions are **East US** and **West Europe**.
-9. Provide a name for the Custom location. This will be the name which you will see when you deploy VMs. Name it for the datacenter or physical location of your datacenter. Eg: `contoso-nyc-dc`
+9. Provide a name for **Custom location**. This is the name that you'll see when you deploy VMs. Name it for the datacenter or the physical location of your datacenter. For example: **contoso-nyc-dc**.
-10. Leave the option for **Use the same subscription and resource group as your resource bridge** checked.
+10. Leave **Use the same subscription and resource group as your resource bridge** selected.
-11. Provide a name for your vCenter in Azure. Eg: `contoso-nyc-vcenter`
+11. Provide a name for your vCenter Server instance in Azure. For example: **contoso-nyc-vcenter**.
-12. Click on **Next: Download and run script >**
+12. Select **Next: Download and run script**.
-13. If your subscription is not registered with all the required resource providers, a **Register** button will appear. Click the button before proceeding to the next step.
+13. If your subscription is not registered with all the required resource providers, a **Register** button will appear. Select the button before you proceed to the next step.
- :::image type="content" source="media/register-arc-vmware-providers.png" alt-text="Screenshot showing button to register required resource providers during vCenter onboarding to Arc":::
+ :::image type="content" source="media/register-arc-vmware-providers.png" alt-text="Screenshot that shows the button to register required resource providers during vCenter onboarding to Azure Arc.":::
-14. Based on the operating system of your workstation, download the powershell or bash script, and copy it to the [workstation](#prerequisites).
+14. Based on the operating system of your workstation, download the PowerShell or Bash script and copy it to the [workstation](#prerequisites).
-15. [Optional] Click on **Next : Verification**. This page will show you the status of your onboarding once you run the script on your workstation. Closing this page will not affect the onboarding.
+15. If you want to see the status of your onboarding after you run the script on your workstation, select **Next: Verification**. Closing this page won't affect the onboarding.
## Run the script
-### Windows
+Use the following instructions to run the script, depending on which operating system your machine is using.
-Follow the below instructions to run the script on a windows machine:
+### Windows
-1. Open a PowerShell window and navigate to the folder where you have downloaded the powershell script.
+1. Open a PowerShell window and go to the folder where you've downloaded the PowerShell script.
-2. Execute the following command to allow the script to run as it is an unsigned script (if you close the session before you complete all the steps, run this again for new session.)
+2. Run the following command to allow the script to run, because it's an unsigned script. (If you close the session before you complete all the steps, run this command again for the new session.)
``` powershell-interactive Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass ```
-3. Execute the script
+3. Run the script:
``` powershell-interactive ./resource-bridge-onboarding-script.ps1
Follow the below instructions to run the script on a windows machine:
### Linux
-Follow the below instructions to run the script on a Linux machine:
-
-1. Open the terminal and navigate to the folder where you have downloaded the bash script.
+1. Open the terminal and go to the folder where you've downloaded the Bash script.
-2. Execute the script using the following command:
+2. Run the script by using the following command:
``` sh bash resource-bridge-onboarding-script.sh
Follow the below instructions to run the script on a Linux machine:
## Inputs for the script
-A typical onboarding using the script takes about 30-60 minutes and you will be prompted for the various details during the execution. Refer to the table below for information on them:
+A typical onboarding that uses the script takes 30 to 60 minutes. During the process, you're prompted for the following details:
-| **Requirements** | **Details** |
+| **Requirement** | **Details** |
| | |
-| **Azure login** | Log in to Azure by visiting [this](https://www.microsoft.com/devicelogin) site and using the code when prompted. |
-| **vCenter FQDN/Address** | FQDN for the vCenter (or an ip address). </br> Eg: `10.160.0.1` or `nyc-vcenter.contoso.com` |
-| **vCenter Username** | Username for the vSphere account. The required permissions for the account are listed in the prerequisites above. |
-| **vCenter password** | Password for the vSphere account |
-| **Data center selection** | Select the name of the datacenter (as shown in vSphere client) where the Arc resource bridge VM should be deployed |
-| **Network selection** | Select the name of the virtual network or segment to which VM must be connected. This network should allow the appliance to talk to the vCenter server and the Azure endpoints (or internet). |
-| **Static IP / DHCP** | If you have DHCP server in your network and want to use it, type ΓÇÿyΓÇÖ else ΓÇÿnΓÇÖ. On choosing static IP configuration, you will be asked the following </br> 1. `Static IP address prefix` : Network address in CIDR notation E.g. `192.168.0.0/24` </br> 2. `Static gateway`: Eg. `192.168.0.0` </br> 3. `DNS servers`: Comma-separated list of DNS servers </br> 4. `Start range IP`: Minimum size of 2 available addresses is required, one of the IP is for the VM, and another one is reserved for upgrade scenarios. Provide the start IP of that range </br> 5. `End range IP`: the last IP of the IP range requested in previous field. </br> 6. `VLAN ID` (Optional) |
-| **Resource pool** | Select the name of the resource pool to which the Arc resource bridge VM would be deployed |
-| **Data store** | Select the name of the datastore to be used for Arc resource bridge VM |
-| **Folder** | Select the name of the vSphere VM and Template folder where Arc resource bridge VM should be deployed. |
-| **VM template Name** | Provide a name for the VM template that will be created in your vCenter based on the downloaded OVA. Eg: arc-appliance-template |
-| **Control Pane IP** | Provide a reserved IP address (a reserved IP address in your DHCP range or a static IP outside of DHCP range but still available on the network). Ensure this IP address isn't assigned to any other machine on the network. |
-| **Appliance proxy settings** | Type ΓÇÿyΓÇÖ if there is proxy in your appliance network, else type ΓÇÿnΓÇÖ. </br> You need to populate the following when you have proxy setup: </br> 1. `Http`: Address of http proxy server </br> 2. `Https`: Address of https proxy server </br> 3. `NoProxy`: Addresses to be excluded from proxy </br> 4. `CertificateFilePath`: For ssl based proxies, path to certificate to be used
-
-Once the command execution completed, your setup is complete and you can try out the capabilities of Azure Arc-enabled VMware vSphere. You can proceed to the [next steps.](browse-and-enable-vcenter-resources-in-azure.md).
+| **Azure login** | When you're prompted, go to the [device sign-in page](https://www.microsoft.com/devicelogin), enter the authorization code shown in the terminal, and sign in to Azure. |
+| **vCenter FQDN/Address** | Enter the fully qualified domain name for the vCenter Server instance (or an IP address). For example: **10.160.0.1** or **nyc-vcenter.contoso.com**. |
+| **vCenter Username** | Enter the username for the vSphere account. The required permissions for the account are listed in the [prerequisites](#prerequisites). |
+| **vCenter password** | Enter the password for the vSphere account. |
+| **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge's VM should be deployed. |
+| **Network selection** | Select the name of the virtual network or segment to which the VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |
+| **Static IP / DHCP** | If you have a DHCP server in your network and want to use it, enter **y**. Otherwise, enter **n**. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: Comma-separated list of DNS servers. </br> 4. **Start range IP**: A minimum of two available IP addresses is required. One IP address is for the VM, and the other is reserved for upgrade scenarios. Provide the starting IP of that range. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. </br> 6. **VLAN ID** (optional) |
+| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge's VM will be deployed. |
+| **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge's VM. |
+| **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. |
+| **VM template Name** | Provide a name for the VM template that will be created in your vCenter Server instance based on the downloaded OVA file. For example: **arc-appliance-template**. |
+| **Control Plane IP** | Provide a reserved IP address in your DHCP range, or provide a static IP address that's outside the DHCP range but still available on the network. Ensure that this IP address isn't assigned to any other machine on the network. |
+| **Appliance proxy settings** | Enter **y** if there's a proxy in your appliance network. Otherwise, enter **n**. </br> You need to populate the following boxes when you have a proxy set up: </br> 1. **Http**: Address of the HTTP proxy server. </br> 2. **Https**: Address of the HTTPS proxy server. </br> 3. **NoProxy**: Addresses to be excluded from the proxy. </br> 4. **CertificateFilePath**: For SSL-based proxies, the path to the certificate to be used.
+
+After the command finishes running, your setup is complete. You can now try out the capabilities of Azure Arc-enabled VMware vSphere.
## Next steps
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Allows you to set the timezone for your function app.
## WEBSITE\_VNET\_ROUTE\_ALL
+> [!IMPORTANT]
+> WEBSITE_VNET_ROUTE_ALL is a legacy app setting that has been replaced by the [vnetRouteAllEnabled configuration setting](../app-service/configure-vnet-integration-routing.md).
+ Indicates whether all outbound traffic from the app is routed through the virtual network. A setting value of `1` indicates that all traffic is routed through the virtual network. You need this setting when using features of [Regional virtual network integration](functions-networking-options.md#regional-virtual-network-integration). It's also used when a [virtual network NAT gateway is used to define a static outbound IP address](functions-how-to-use-nat-gateway.md). |Key|Sample value|
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
The following parameter types are supported by all C# modalities and extension v
| **byte[]** | Use for binary data messages. | | **Object** | When a message contains JSON, Functions tries to deserialize the JSON data into known plain-old CLR object type. |
-Messaging-specific parameter types contain additional message metadata. The specific types supported by the Event Grid trigger depend on the Functions runtime version, the extension package version, and the C# modality used.
+Messaging-specific parameter types contain additional message metadata. The specific types supported by the Service Bus trigger depend on the Functions runtime version, the extension package version, and the C# modality used.
# [Extension v5.x](#tab/extensionv5/in-process)
These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.serv
- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)
-[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
+[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
The extension NuGet package you install depends on the C# mode you're using in y
Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-Add the extension to your project installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.RabbitMQ).
+Add the extension to your project installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus).
# [Isolated process](#tab/isolated-process)
For a reference of host.json in Functions 1.x, see [host.json reference for Azur
[extension bundle]: ./functions-bindings-register.md#extension-bundles [NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus/
-[Update your extensions]: ./functions-bindings-register.md
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Create Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-vnet.md
To use your function app with virtual networks, you need to join it to a subnet.
:::image type="content" source="./media/functions-create-vnet/9-connect-app-subnet.png" alt-text="Screenshot of how to connect a function app to a subnet.":::
+1. Ensure that the **Route All** configuration setting is set to **Enabled**.
+
+ :::image type="content" source="./media/functions-create-vnet/10-enable-route-all.png" alt-text="Screenshot of how to enable route all functionality.":::
+ ## Configure your function app settings 1. In your function app, in the menu on the left, select **Configuration**.
To use your function app with virtual networks, you need to join it to a subnet.
| **WEBSITE_CONTENTSHARE** | files | The name of the file share you created in the storage account. Use this setting with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. | | **SERVICEBUS_CONNECTION** | myServiceBusConnectionString | Create this app setting for the connection string of your Service Bus. This storage connection string is from the [Get a Service Bus connection string](#get-a-service-bus-connection-string) section.| | **WEBSITE_CONTENTOVERVNET** | 1 | Create this app setting. A value of 1 enables your function app to scale when your storage account is restricted to a virtual network. |
- | **WEBSITE_VNET_ROUTE_ALL** | 1 | Create this app setting. When your app integrates with a virtual network, it uses the same DNS server as the virtual network. Your function app needs this setting so it can work with Azure DNS private zones. It's required when you use private endpoints. |
1. In the **Configuration** view, select the **Function runtime settings** tab.
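   The app settings in the table above can also be created from the command line. The following is a minimal sketch using the Azure CLI, where the function app and resource group names are placeholders and the connection string value is an assumed example:

   ```azurecli
   # Create the required app settings on the function app (placeholder names and values).
   az functionapp config appsettings set \
     --name MyFunctionApp \
     --resource-group MyResourceGroup \
     --settings "WEBSITE_CONTENTOVERVNET=1" "SERVICEBUS_CONNECTION=<your-Service-Bus-connection-string>"
   ```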
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
The following table shows operating system and [language support](supported-lang
## Scale
-The following table compares the scaling behaviors of the various hosting plans.
+The following table compares the scaling behaviors of the various hosting plans.
+Maximum instances are given on a per-function app (Consumption) or per-plan (Premium/Dedicated) basis, unless otherwise indicated.
| Plan | Scale out | Max # instances | | | | |
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Azure Functions requires an Azure Storage account when you create a function app
||| | [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) | Maintain bindings state and function keys. <br/>Also used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). | | [Azure Files](../storage/files/storage-files-introduction.md) | File share used to store and run your function app code in a [Consumption Plan](consumption-plan.md) and [Premium Plan](functions-premium-plan.md). <br/>Azure Files is set up by default, but you can [create an app without Azure Files](#create-an-app-without-azure-files) under certain conditions. |
-| [Azure Queue Storage](../storage/queues/storage-queues-introduction.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
+| [Azure Queue Storage](../storage/queues/storage-queues-introduction.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md) and for failure and retry handling by [specific Azure Functions](./functions-bindings-storage-blob-trigger.md) triggers. |
| [Azure Table Storage](../storage/tables/table-storage-overview.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). | > [!IMPORTANT]
azure-government Secure Azure Computing Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/secure-azure-computing-architecture.md
Title: Secure Azure Computing Architecture description: Learn about the Secure Azure Computing Architecture (SACA). Using SACA allows US DoD and civilian customers to comply with the SCCA FRD.-- Last updated 08/27/2021
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
The following table highlights the specific parameters supported by setup for th
|OPINSIGHTS_WORKSPACE_ID | Workspace ID (guid) for the workspace to add | |OPINSIGHTS_WORKSPACE_KEY | Workspace key used to initially authenticate with the workspace | |OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE | Specify the cloud environment where the workspace is located <br> 0 = Azure commercial cloud (default) <br> 1 = Azure Government |
-|OPINSIGHTS_PROXY_URL | URI for the proxy to use |
+|OPINSIGHTS_PROXY_URL | URI for the proxy to use. Example: OPINSIGHTS_PROXY_URL=IPAddress:Port or OPINSIGHTS_PROXY_URL=FQDN:Port |
|OPINSIGHTS_PROXY_USERNAME | Username to access an authenticated proxy | |OPINSIGHTS_PROXY_PASSWORD | Password to access an authenticated proxy |
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
Last updated 12/24/2019
This article describes how to configure communication with Azure Automation and Azure Monitor by using the Log Analytics gateway when computers that are directly connected or that are monitored by Operations Manager have no internet access.
-The Log Analytics gateway is an HTTP forward proxy that supports HTTP tunneling using the HTTP CONNECT command. This gateway sends data to Azure Automation and a Log Analytics workspace in Azure Monitor on behalf of the computers that cannot directly connect to the internet.
+The Log Analytics gateway is an HTTP forward proxy that supports HTTP tunneling using the HTTP CONNECT command. This gateway sends data to Azure Automation and a Log Analytics workspace in Azure Monitor on behalf of the computers that cannot directly connect to the internet. The gateway handles only agent-related log connectivity and doesn't support Azure Automation features such as runbooks, DSC, and others.
The Log Analytics gateway supports:
azure-monitor Alerts Action Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-action-rules.md
An alert processing rule definition covers several aspects:
### Which fired alerts are affected by this rule?
-Each alert processing rule has a **scope**. A scope is a list of one or more specific Azure resources, or specific resource group, or an entire subscription. The alert processing rule will apply to alerts that fired on resources within that scope.
+**SCOPE**
+Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, a specific resource group, or an entire subscription. **The alert processing rule will apply to alerts that fired on resources within that scope**.
-You can also define **filters** to narrow down which specific subset of alerts are affected. The available filters are:
+**FILTERS**
+You can also define filters to narrow down which specific subset of alerts are affected within the scope. The available filters are:
* **Alert Context (payload)** - the rule will apply only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type.
-* **Alert rule id** - the rule will apply only to alerts from a specific alert rule. The value should be the full resource ID, for example "/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY".
-You can locate the alert rule ID by opening a specific alert rule in the portal, clicking "Properties", and copying the "Resource ID" value. You can also locate it by listing your alert rules from CLI/PowerShell.
+* **Alert rule id** - the rule will apply only to alerts from a specific alert rule. The value should be the full resource ID, for example `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`.
+You can locate the alert rule ID by opening a specific alert rule in the portal, clicking "Properties", and copying the "Resource ID" value.
+You can also locate it by listing your alert rules from PowerShell or CLI, as shown in the sketch after this list.
* **Alert rule name** - the rule will apply only to alerts with this alert rule name. Can also be useful with a "Contains" operator. * **Description** - the rule will apply only to alerts that contain the specified string within the alert rule description field. * **Monitor condition** - the rule will apply only to alerts with the specified monitor condition, either "Fired" or "Resolved". * **Monitor service** - the rule will apply only to alerts from any of the specified monitor services. For example, use "Platform" to have the rule apply only to metric alerts. * **Resource** - the rule will apply only to alerts from the specified Azure resource.
-This filter is useful with "Does not equal" operator, or with "Contains" / "Does not contain" operators.
+For example, you can use this filter with "Does not equal" to exclude one or more resources when the rule's scope is a subscription.
* **Resource group** - the rule will apply only to alerts from the specified resource groups.
-This filter is useful with "Does not equal" operator, or with "Contains" / "Does not contain" operators.
-* **Resource type** - the rule will apply only to alerts on resource from the specified resource types, such as virtual machines.
-* **Severity** - the rule will apply only to alerts with the selected severities.
+For example, you can use this filter with "Does not equal" to exclude one or more resource groups when the rule's scope is a subscription.
+* **Resource type** - the rule will apply only to alerts on resources from the specified resource types, such as virtual machines. You can use "Equals" to match one or more specific resources, or you can use "Contains" to match a resource type and all its child resources.
+For example, use "contains MICROSOFT.SQL/SERVERS" to match both SQL servers and all their child resources, like databases.
+* **Severity** - the rule will apply only to alerts with the selected severities.
-If you define multiple filters in a rule, all of them apply. For example, if you set **resource type = "Virtual Machines"** and **severity = "Sev0"**, then the rule will apply only for Sev0 alerts on virtual machines in the scope.
-
-> [!NOTE]
-> Each filter may include up to five values.
+**FILTERS BEHAVIOR**
+* If you define multiple filters in a rule, all of them apply - there is a logical AND between all filters.
+ For example, if you set both `resource type = "Virtual Machines"` and `severity = "Sev0"`, then the rule will apply only for Sev0 alerts on virtual machines in the scope.
+* Each filter may include up to five values, and there is a logical OR between the values.
+ For example, if you set `description contains ["this", "that"]`, then the rule will apply only to alerts whose description contains either "this" or "that".
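As a minimal sketch of locating alert rule IDs for the **Alert rule id** filter (assuming metric alert rules and a placeholder resource group name), you can list the rules together with their full resource IDs by using the Azure CLI:

```azurecli
# List metric alert rules in a resource group and show their full resource IDs.
# RG1 is a placeholder resource group name.
az monitor metrics alert list --resource-group RG1 --query "[].{name:name, id:id}" --output table
```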
### What should this rule do?
In the fourth tab (**Details**), you give this rule a name, pick where it will b
### [Azure CLI](#tab/azure-cli)
-You can use the Azure CLI to work with alert processing rules. See the `az monitor alert-processing-rules` page in the [Azure CLI docs](/cli/azure/monitor/alert-processing-rule) for detailed documentation and examples.
+You can use the Azure CLI to work with alert processing rules. See the `az monitor alert-processing-rules` [page in the Azure CLI docs](/cli/azure/monitor/alert-processing-rule) for detailed documentation and examples.
### Prepare your environment
You can use the Azure CLI to work with alert processing rules. See the `az monit
1. **Sign in**
- If you're using a local installation of the CLI, sign in using the [az login](/cli/azure/reference-index#az-login) command. Follow the steps displayed in your terminal to complete the authentication process.
+ If you're using a local installation of the CLI, sign in using the `az login` [command](/cli/azure/reference-index#az-login). Follow the steps displayed in your terminal to complete the authentication process.
```azurecli az login
For example, to create a rule that adds an action group to all alerts in a subsc
```azurecli
az monitor alert-processing-rule create \
+ --name 'AddActionGroupToSubscription' \
+ --rule-type AddActionGroups \
+ --scopes "/subscriptions/SUB1" \
+ --action-groups "/subscriptions/SUB1/resourcegroups/RG1/providers/microsoft.insights/actiongroups/AG1" \
+ --resource-group RG1 \
+ --description "Add action group AG1 to all alerts in the subscription"
``` The [CLI documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) includes more examples and an explanation of each parameter. ### [PowerShell](#tab/powershell)
+You can use PowerShell to work with alert processing rules. See the `*-AzAlertProcessingRule` commands [in the PowerShell docs](/powershell/module/az.alertsmanagement) for detailed documentation and examples.
++ ### Create an alert processing rule using PowerShell Use the `Set-AzAlertProcessingRule` command to create alert processing rules. For example, to create a rule that adds an action group to all alerts in a subscription, run: ```powershell
-Set-AzAlertProcessingRule -ResourceGroupName rg1 -Name AddActionGroupToSubscription -Scope /subscriptions/MySubId -Description "Add action group ag1 to all alerts in the subscription" -AlertProcessingRuleType AddActionGroups -ActionGroupId /subscriptions/sub1/resourcegroups/rg1/providers/microsoft.insights/actiongroups/ag1
-
+Set-AzAlertProcessingRule `
+ -Name AddActionGroupToSubscription `
+ -AlertProcessingRuleType AddActionGroups `
+ -Scope /subscriptions/SUB1 `
+ -ActionGroupId /subscriptions/SUB1/resourcegroups/RG1/providers/microsoft.insights/actiongroups/AG1 `
+ -ResourceGroupName RG1 `
+ -Description "Add action group AG1 to all alerts in the subscription"
```
-The [CLI documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) include more examples and an explanation of each parameter.
+The [PowerShell documentation](/powershell/module/az.alertsmanagement) includes more examples and an explanation of each parameter.
* * *
Before you manage alert processing rules with the Azure CLI, prepare your enviro
az monitor alert-processing-rules list # Get details of an alert processing rule
-az monitor alert-processing-rules show --resource-group MyResourceGroupName --name MyRule
+az monitor alert-processing-rules show --resource-group RG1 --name MyRule
# Update an alert processing rule
-az monitor alert-processing-rules update --resource-group MyResourceGroupName --name MyRule --status Disabled
+az monitor alert-processing-rules update --resource-group RG1 --name MyRule --status Disabled
# Delete an alert processing rule
-az monitor alert-processing-rules delete --resource-group MyResourceGroupName --name MyRule
+az monitor alert-processing-rules delete --resource-group RG1 --name MyRule
``` ### [PowerShell](#tab/powershell)
Before you manage alert processing rules with the Azure CLI, prepare your enviro
Get-AzAlertProcessingRule # Get details of an alert processing rule
-Get-AzAlertProcessingRule -ResourceGroupName MyResourceGroupName -Name MyRule | Format-List
+Get-AzAlertProcessingRule -ResourceGroupName RG1 -Name MyRule | Format-List
# Update an alert processing rule
-Update-AzAlertProcessingRule -ResourceGroupName MyResourceGroupName -Name MyRule -Enabled False
+Update-AzAlertProcessingRule -ResourceGroupName RG1 -Name MyRule -Enabled False
# Delete an alert processing rule
-Remove-AzAlertProcessingRule -ResourceGroupName MyResourceGroupName -Name MyRule
+Remove-AzAlertProcessingRule -ResourceGroupName RG1 -Name MyRule
``` * * *
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service, to track usage and diagnose issues. Previously updated : 05/11/2020 Last updated : 05/11/2020 ms.devlang: csharp, java, javascript, vb # Application Insights API for custom events and metrics
-Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Azure Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics, and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use.
+Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Azure Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics, and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use.
+ ## API summary
Normally, the SDK sends data at fixed intervals (typically 30 secs) or whenever
*.NET*
+When using Flush(), we recommend this [pattern](./console.md#full-example):
+ ```csharp telemetry.Flush(); // Allow some time for flushing before shutdown. System.Threading.Thread.Sleep(5000); ```
+When using FlushAsync(), we recommend this pattern:
+
+```csharp
+await telemetryClient.FlushAsync();
+```
+
+We recommend always flushing as part of the application shutdown to guarantee that telemetry is not lost.
+ *Java* ```java
telemetry.flush();
The function is asynchronous for the [server telemetry channel](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel/).
-We recommend using the flush() or flushAsync() methods in the shutdown activity of the Application when using the .NET SDK.
+> [!NOTE]
+> The Java and JavaScript SDKs automatically flush on application shutdown.
## Authenticated users
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. Each node on the map represents an application component or its dependencies; and has health KPI and alerts status. You can click through from any component to more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also click through to Azure diagnostics, such as SQL Database Advisor recommendations. ++ ## What is a Component? Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
The example we'll use here is an [MVC application](/aspnet/core/tutorials/first-
> [!NOTE] > A preview [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net) is available. [Learn more](opentelemetry-overview.md). + ## Supported scenarios The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. Support covers the following:
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILog
> Do you need the log-capture module? It's a useful adapter for third-party loggers. But if you aren't already using NLog, log4Net, or System.Diagnostics.Trace, consider just calling [**Application Insights TrackTrace()**](./api-custom-events-metrics.md#tracktrace) directly. > >++ ## Install logging on your app Install your chosen logging framework in your project, which should result in an entry in app.config or web.config.
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Last updated 05/21/2020
# Troubleshooting no data - Application Insights for .NET/.NET Core ++ ## Some of my telemetry is missing *In Application Insights, I only see a fraction of the events that are being generated by my app.*
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
This procedure configures your ASP.NET web app to send telemetry to the [Applica
> [!NOTE] > A preview [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net) is available. [Learn more](opentelemetry-overview.md). + ## Prerequisites To add Application Insights to your ASP.NET website, you need to:
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Below are SDKs/scenarios not supported in the Public Preview:
- [Availability tests](availability-overview.md). - [Profiler](profiler-overview.md). + ## Prerequisites to enable Azure AD authentication ingestion - Familiarity with:
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments. ++ ## Enable agent-based monitoring # [Windows](#tab/Windows)
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Enabling monitoring on your ASP.NET based web applications running on [Azure App
> [!NOTE] > If both agent-based monitoring and manual SDK-based instrumentation is detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below. + ## Enable agent-based monitoring > [!NOTE]
azure-monitor Cloudservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/cloudservices.md
Last updated 09/05/2018
![Overview dashboard](./media/cloudservices/overview-graphs.png) + ## Prerequisites Before you begin, you need:
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
This document describes the sections you see in the configuration file, how they
> [!NOTE] > ApplicationInsights.config and .xml instructions do not apply to the .NET Core SDK. For configuring .NET Core applications, follow [this](./asp-net-core.md) guide. ++ ## Telemetry Modules (ASP.NET) Each Telemetry Module collects a specific type of data and uses the core API to send the data. The modules are installed by different NuGet packages, which also add the required lines to the .config file.
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
You need a subscription with [Microsoft Azure](https://azure.com). Sign in with
> [!NOTE] > It is *highly recommended* to use the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package and associated instructions from [here](./worker-service.md) for any Console Applications. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), and hence can be used in .NET Core 2.1 or higher, and .NET Framework 4.7.2 or higher. ++ ## Getting started > [!IMPORTANT]
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
In the world of microservices, every logical operation requires work to be done
This article explains the data model used by Application Insights to correlate telemetry sent by multiple components. It covers context-propagation techniques and protocols. It also covers the implementation of correlation tactics on different languages and platforms. ++ ## Data model for telemetry correlation Application Insights defines a [data model](../../azure-monitor/app/data-model.md) for distributed telemetry correlation. To associate telemetry with a logical operation, every telemetry item has a context field called `operation_Id`. This identifier is shared by every telemetry item in the distributed trace. So even if you lose telemetry from a single layer, you can still associate telemetry reported by other components.
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
Azure Application Insights displays data about your application in a Microsoft A
> [!IMPORTANT] > [Classic Application Insights has been deprecated](https://azure.microsoft.com/updates/we-re-retiring-classic-application-insights-on-29-february-2024/). Please follow these [instructions on how upgrade to workspace-based Application Insights](convert-classic-resource.md). + ## Sign in to Microsoft Azure If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
This also allows for common Azure role-based access control (Azure RBAC) across
> [!NOTE] > Data ingestion and retention for workspace-based Application Insights resources are billed through the Log Analytics workspace where the data is located. [Learn more]( ./pricing.md#workspace-based-application-insights) about billing for workspace-based Application Insights resources. ++ ## New capabilities Workspace-based Application Insights allows you to take advantage of the latest capabilities of Azure Monitor and Log Analytics including:
azure-monitor Custom Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-endpoints.md
To send data from Application Insights to certain regions, you'll need to override the default endpoint addresses. Each SDK requires slightly different modifications, all of which are described in this article. These changes require adjusting the sample code and replacing the placeholder values for `QuickPulse_Endpoint_Address`, `TelemetryChannel_Endpoint_Address`, and `Profile_Query_Endpoint_address` with the actual endpoint addresses for your specific region. The end of this article contains links to the endpoint addresses for regions where this configuration is required. > [!NOTE]
-> [Connection strings](./sdk-connection-string.md?tabs=net) are the new preferred method of setting custom endpoints within Application Insights.
+> [Connection strings](./sdk-connection-string.md?tabs=net) are the new preferred method of setting custom endpoints within Application Insights.
++
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
Both ASP.NET and ASP.NET Core applications deployed to Azure Web Apps run in a s
The sandbox environment does not allow direct access to system performance counters. However, a limited subset of counters are exposed as environment variables as described [here](https://github.com/projectkudu/kudu/wiki/Perf-Counters-exposed-as-environment-variables). Only a subset of counters are available in this environment, and the full list can be found [here](https://github.com/microsoft/ApplicationInsights-dotnet/blob/main/WEB/Src/PerformanceCollector/PerformanceCollector/Implementation/WebAppPerformanceCollector/CounterFactory.cs).
-The Application Insights SDK for [ASP.NET](https://nuget.org/packages/Microsoft.ApplicationInsights.Web) and [ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) detect, using environment variables, if code is deployed to a Web App and non-Windows container. This determines whether it collects performance counters from applications using environment variables when in a sandbox environment or utilizing the standard collection mechanism when hosted on a Windows Container or Virtual Machine. Sandbox environments include Azure Web Apps and Azure App Service Apps not running in a Windows container.
+The Application Insights SDKs for [ASP.NET](https://nuget.org/packages/Microsoft.ApplicationInsights.Web) and [ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) use environment variables to detect whether code is deployed to a Web App or a non-Windows container. This determines whether performance counters are collected from environment variables when in a sandbox environment, or through the standard collection mechanism when hosted on a Windows container or virtual machine.
## Performance counters in ASP.NET Core applications
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Each of these options is described in the [detailed instructions](status-monitor
## Release notes
+### 2.0.0-beta3
+
+- Updated ApplicationInsights .NET/.NET Core SDK to 2.20.1-redfield.
+- Enable SQL query collection.
+ ### 2.0.0-beta2 - Updated ApplicationInsights .NET/.NET Core SDK to 2.18.1-redfield.
azure-percept Connect Over Cellular Usb Multitech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-multitech.md
To learn how to prepare Azure Percept DK, go to [Connect Azure Percept DK over 5
### Prepare the modem Before you begin, your modem must be in Mobile Broadband Interface Model (MBIM) mode. To learn how to prepare the modem, see the [Telit wireless solutions Attention (AT) command reference guide](
-https://www.telit.com/wp-content/uploads/2018/01/Telit-LE910-V2-Modules-AT-Commands-Reference-Guide-r3.pdf).
+https://www.multitech.com/documents/publications/reference-guides/Telit_LE910-V2_Modules_AT_Commands_Reference_Guide_r5.pdf).
In this article, to enable the MBIM interface, we use AT command `AT#USBCFG=<mode>` to configure the correct USB mode.
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
Depending on the setup in your source SQL Server, there may be additional SQL Se
## Supported versions
-As you prepare for migrating SQL Server databases to SQL Server on Azure VMs, be sure to consider the versions of SQL Server that are supported. For a list of current supported SQL Server versions on Azure VMs, please see [SQL Server on Azure VMs](../../virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md#get-started-with-sql-server-vms).
+As you prepare for migrating SQL Server databases to SQL Server on Azure VMs, be sure to consider the versions of SQL Server that are supported. For a list of current supported SQL Server versions on Azure VMs, please see [SQL Server on Azure VMs](../../virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md#getting-started).
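To check which SQL Server images are currently published in the Azure Marketplace, the following sketch uses the Azure CLI (the output is large, so it's rendered as a table):

```azurecli
# List the SQL Server VM images currently available in the Azure Marketplace.
az vm image list --publisher MicrosoftSQLServer --all --output table
```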
## Migration assets
azure-sql Sql Server Iaas Agent Extension Automate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
The SQL Server IaaS Agent extension allows for integration with the Azure portal
## Feature benefits
-The SQL Server IaaS Agent extension unlocks a number of feature benefits for managing your SQL Server VM.
+The SQL Server IaaS Agent extension unlocks a number of feature benefits for managing your SQL Server VM. You can register your SQL Server VM in lightweight management mode, which unlocks a few of the benefits, or in full management mode, which unlocks all available benefits.
The following table details these benefits:
-| Feature | Description |
-| | |
-| **Portal management** | Unlocks [management in the portal](manage-sql-vm-portal.md), so that you can view all of your SQL Server VMs in one place, and so that you can enable and disable SQL specific features directly from the portal. <br/> Management mode: Lightweight & full|
-| **Automated backup** |Automates the scheduling of backups for all databases for either the default instance or a [properly installed](./frequently-asked-questions-faq.yml) named instance of SQL Server on the VM. For more information, see [Automated backup for SQL Server in Azure virtual machines (Resource Manager)](automated-backup-sql-2014.md). <br/> Management mode: Full|
-| **Automated patching** |Configures a maintenance window during which important Windows and SQL Server security updates to your VM can take place, so you can avoid updates during peak times for your workload. For more information, see [Automated patching for SQL Server in Azure virtual machines (Resource Manager)](automated-patching.md). <br/> Management mode: Full|
-| **Azure Key Vault integration** |Enables you to automatically install and configure Azure Key Vault on your SQL Server VM. For more information, see [Configure Azure Key Vault integration for SQL Server on Azure Virtual Machines (Resource Manager)](azure-key-vault-integration-configure.md). <br/> Management mode: Full|
-| **View disk utilization in portal** | Allows you to view a graphical representation of the disk utilization of your SQL data files in the Azure portal. <br/> Management mode: Full |
-| **Flexible licensing** | Save on cost by [seamlessly transitioning](licensing-model-azure-hybrid-benefit-ahb-change.md) from the bring-your-own-license (also known as the Azure Hybrid Benefit) to the pay-as-you-go licensing model and back again. <br/> Management mode: Lightweight & full|
-| **Flexible version / edition** | If you decide to change the [version](change-sql-server-version.md) or [edition](change-sql-server-edition.md) of SQL Server, you can update the metadata within the Azure portal without having to redeploy the entire SQL Server VM. <br/> Management mode: Lightweight & full|
-| **Defender for Cloud portal integration** | If you've enabled [Microsoft Defender for SQL](../../../security-center/defender-for-sql-usage.md), then you can view Defender for Cloud recommendations directly in the [SQL virtual machines](manage-sql-vm-portal.md) resource of the Azure portal. See [Security best practices](security-considerations-best-practices.md) to learn more. <br/> Management mode: Lightweight & full|
-| **SQL best practices assessment** | Enables you to assess the health of your SQL Server VMs using configuration best practices. For more information, see [SQL best practices assessment](sql-assessment-for-sql-vm.md). <br/> Management mode: Full|
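As a hedged sketch, an existing SQL Server VM can be registered with the extension in lightweight management mode by using the `az sql vm` command group. The VM and resource group names are placeholders, and the exact `--sql-mgmt-type` value should be treated as an assumption to verify against the current CLI reference:

```azurecli
# Register an existing SQL Server VM with the SQL IaaS Agent extension
# in lightweight management mode (placeholder names; flag value is an assumption).
az sql vm create --name SQLVM1 --resource-group RG1 --license-type PAYG --sql-mgmt-type LightWeight
```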
## Management modes
azure-sql Sql Server On Azure Vm Iaas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md
vm-windows-sql-server Previously updated : 11/27/2019 Last updated : 03/10/2022 # What is SQL Server on Windows Azure Virtual Machines?
> * [Windows](sql-server-on-azure-vm-iaas-what-is-overview.md) > * [Linux](../linux/sql-server-on-linux-vm-what-is-iaas-overview.md)
+This article provides an overview of SQL Server on Azure Virtual Machines (VMs) on the Windows platform.
+
+If you're new to SQL Server on Azure VMs, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
++
+## Overview
+ [SQL Server on Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/) enables you to use full versions of SQL Server in the cloud without having to manage any on-premises hardware. SQL Server virtual machines (VMs) also simplify licensing costs when you pay as you go. Azure virtual machines run in many different [geographic regions](https://azure.microsoft.com/regions/) around the world. They also offer a variety of [machine sizes](../../../virtual-machines/sizes.md). The virtual machine image gallery allows you to create a SQL Server VM with the right version, edition, and operating system. This makes virtual machines a good option for many different SQL Server workloads.
-If you're new to SQL Server on Azure VMs, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
-## Automated updates
+## Feature benefits
-SQL Server on Azure Virtual Machines can use [Automated Patching](automated-patching.md) to schedule a maintenance window for installing important Windows and SQL Server updates automatically.
+When you register your SQL Server on Azure VM with the [SQL IaaS agent extension](sql-server-iaas-agent-extension-automate-management.md), you unlock a number of feature benefits. You can register your SQL Server VM in lightweight management mode, which unlocks a few of the benefits, or in full management mode, which unlocks all available benefits. Registering with the extension is completely free.
-## Automated backups
+The following table details the benefits unlocked by the extension:
-SQL Server on Azure Virtual Machines can take advantage of [Automated Backup](automated-backup.md), which regularly creates backups of your database to blob storage. You can also manually use this technique. For more information, see [Use Azure Storage for SQL Server Backup and Restore](azure-storage-sql-server-backup-restore-use.md).
-Azure also offers an enterprise-class backup solution for SQL Server running in Azure VMs. A fully-managed backup solution, it supports Always On availability groups, long-term retention, point-in-time recovery, and central management and monitoring. For more information, see [Azure Backup for SQL Server in Azure VMs](../../../backup/backup-azure-sql-database.md).
-
-## High availability
-If you require high availability, consider configuring SQL Server Availability Groups. This involves multiple instances of SQL Server on Azure Virtual Machines in a virtual network. You can configure your high-availability solution manually, or you can use templates in the Azure portal for automatic configuration. For an overview of all high-availability options, see [High Availability and Disaster Recovery for SQL Server in Azure Virtual Machines](business-continuity-high-availability-disaster-recovery-hadr-overview.md).
-## Performance
+## Getting started
-Azure virtual machines offer different machine sizes to meet various workload demands. SQL Server VMs also provide automated storage configuration, which is optimized for your performance requirements. For more information about configuring storage for SQL Server VMs, see [Storage configuration for SQL Server VMs](storage-configuration.md). To fine-tune performance, see the [Performance best practices for SQL Server on Azure Virtual Machines](./performance-guidelines-best-practices-checklist.md).
+To get started with SQL Server on Azure VMs, review the following resources:
-## Get started with SQL Server VMs
+- **Create SQL VM**: To create your SQL Server on Azure VM, review the Quickstarts using the [Azure portal](sql-vm-create-portal-quickstart.md), [Azure PowerShell](sql-vm-create-powershell-quickstart.md) or an [ARM template](create-sql-vm-resource-manager-template.md). For more thorough guidance, review the [Provisioning guide](create-sql-vm-portal.md).
+- **Connect to SQL VM**: To connect to your SQL Server on Azure VMs, review the [ways to connect](ways-to-connect-to-sql.md).
+- **Migrate data**: Migrate your data to SQL Server on Azure VMs from [SQL Server](../../migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md), [Oracle](../../migration-guides/virtual-machines/oracle-to-sql-on-azure-vm-guide.md), or [Db2](../../migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md).
+- **Storage configuration**: For information about configuring storage for your SQL Server on Azure VMs, review [Storage configuration](storage-configuration.md).
+- **Performance**: Fine-tune the performance of your SQL Server on Azure VM by reviewing the [Performance best practices checklist](performance-guidelines-best-practices-checklist.md).
+- **Pricing**: For information about the pricing structure of your SQL Server on Azure VM, review the [Pricing guidance](pricing-guidance.md).
+- **Frequently asked questions**: For commonly asked questions, and scenarios, review the [FAQ](frequently-asked-questions-faq.yml).
+
+## Licensing
To get started, choose a SQL Server virtual machine image with your required version, edition, and operating system. The following sections provide direct links to the Azure portal for the SQL Server virtual machine gallery images.
+Azure only maintains one virtual machine image for each supported operating system, version, and edition combination. This means that over time images are refreshed, and older images are removed. For more information, see the **Images** section of the [SQL Server VMs FAQ](./frequently-asked-questions-faq.yml).
+ > [!TIP] > For more information about how to understand pricing for SQL Server images, see [Pricing guidance for SQL Server on Azure Virtual Machines](pricing-guidance.md). ### <a id="payasyougo"></a> Pay as you go+ The following table provides a matrix of pay-as-you-go SQL Server images. | Version | Operating system | Edition |
The following table provides a matrix of pay-as-you-go SQL Server images.
To see the available SQL Server on Linux virtual machine images, see [Overview of SQL Server on Azure Virtual Machines (Linux)](../linux/sql-server-on-linux-vm-what-is-iaas-overview.md). > [!NOTE]
-> It is now possible to change the licensing model of a pay-per-usage SQL Server VM to use your own license. For more information, see [How to change the licensing model for a SQL Server VM](licensing-model-azure-hybrid-benefit-ahb-change.md).
+> Change the licensing model of a pay-per-usage SQL Server VM to use your own license. For more information, see [How to change the licensing model for a SQL Server VM](licensing-model-azure-hybrid-benefit-ahb-change.md).
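For example, the following is a minimal sketch of switching a registered SQL Server VM to Azure Hybrid Benefit with the Azure CLI; the VM and resource group names are placeholders:

```azurecli
# Switch the license type of a registered SQL Server VM to Azure Hybrid Benefit (AHUB).
az sql vm update --name SQLVM1 --resource-group RG1 --license-type AHUB
```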
### <a id="BYOL"></a> Bring your own license+ You can also bring your own license (BYOL). In this scenario, you only pay for the VM without any additional charges for SQL Server licensing. Bringing your own license can save you money over time for continuous production workloads. For requirements to use this option, see [Pricing guidance for SQL Server Azure VMs](pricing-guidance.md#byol). To bring your own license, you can either convert an existing pay-per-usage SQL Server VM, or you can deploy an image with the prefixed **{BYOL}**. For more information about switching your licensing model between pay-per-usage and BYOL, see [How to change the licensing model for a SQL Server VM](licensing-model-azure-hybrid-benefit-ahb-change.md).
It is possible to deploy an older image of SQL Server that is not available in t
For more information about deploying SQL Server VMs using PowerShell, view [How to provision SQL Server virtual machines with Azure PowerShell](create-sql-vm-powershell.md).
-### Connect to the VM
-After creating your SQL Server VM, connect to it from applications or tools, such as SQL Server Management Studio (SSMS). For instructions, see [Connect to a SQL Server virtual machine on Azure](ways-to-connect-to-sql.md).
-
-### Migrate your data
-If you have an existing database, you'll want to move that to the newly provisioned SQL Server VM. For a list of migration options and guidance, see [Migrating a Database to SQL Server on an Azure VM](migrate-to-vm-from-sql-server.md).
-
-## Create and manage Azure SQL resources with the Azure portal
-
-The Azure portal provides a single page where you can manage [all of your Azure SQL resources](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Sql%2Fazuresql) including your SQL virtual machines.
-
-To access the **Azure SQL resources** page, select **Azure SQL** in the Azure portal menu, or search for and select **Azure SQL** from any page.
-
-![Search for Azure SQL](./media/sql-server-on-azure-vm-iaas-what-is-overview/search-for-azure-sql.png)
-
-> [!NOTE]
-> Azure SQL provides a quick and easy way to access all of your Azure SQL databases, elastic pools, logical servers, managed instances, and virtual machines. Azure SQL is not a service or resource.
-
-To manage existing resources, select the desired item in the list. To create new Azure SQL resources, select **+ Add**.
-
-![Create Azure SQL resource](./media/sql-server-on-azure-vm-iaas-what-is-overview/create-azure-sql-resource.png)
-
-After selecting **+ Add**, view additional information about the different options by selecting **Show details** on any tile.
-
-![databases tile details](./media/sql-server-on-azure-vm-iaas-what-is-overview/sql-vm-details.png)
-
-For details, see:
--- [Create a single database](../../database/single-database-create-quickstart.md)-- [Create an elastic pool](../../database/elastic-pool-overview.md#create-a-new-sql-database-elastic-pool-by-using-the-azure-portal)-- [Create a managed instance](../../managed-instance/instance-create-quickstart.md)-- [Create a SQL Server virtual machine](sql-vm-create-portal-quickstart.md)-
-## <a id="lifecycle"></a> SQL Server VM image refresh policy
-Azure only maintains one virtual machine image for each supported operating system, version, and edition combination. This means that over time images are refreshed, and older images are removed. For more information, see the **Images** section of the [SQL Server VMs FAQ](./frequently-asked-questions-faq.yml).
## Customer experience improvement program (CEIP)+ The Customer Experience Improvement Program (CEIP) is enabled by default. This periodically sends reports to Microsoft to help improve SQL Server. There is no management task required with CEIP unless you want to disable it after provisioning. You can customize or disable the CEIP by connecting to the VM with remote desktop. Then run the **SQL Server Error and Usage Reporting** utility. Follow the instructions to disable reporting. For more information about data collection, see the [SQL Server Privacy Statement](/sql/sql-server/sql-server-privacy). ## Related products and services
-### Windows virtual machines
-* [Azure Virtual Machines overview](../../../virtual-machines/windows/overview.md)
-### Storage
-* [Introduction to Microsoft Azure Storage](../../../storage/common/storage-introduction.md)
+Since SQL Server on Azure VMs is integrated into the Azure platform, review resources from related products and services that interact with the SQL Server on Azure VM ecosystem:
-### Networking
-* [Virtual Network overview](../../../virtual-network/virtual-networks-overview.md)
-* [IP addresses in Azure](../../../virtual-network/ip-services/public-ip-addresses.md)
-* [Create a Fully Qualified Domain Name in the Azure portal](../../../virtual-machines/create-fqdn.md)
+- **Windows virtual machines**: [Azure Virtual Machines overview](../../../virtual-machines/windows/overview.md)
+- **Storage**: [Introduction to Microsoft Azure Storage](../../../storage/common/storage-introduction.md)
+- **Networking**: [Virtual Network overview](../../../virtual-network/virtual-networks-overview.md), [IP addresses in Azure](../../../virtual-network/ip-services/public-ip-addresses.md), [Create a Fully Qualified Domain Name in the Azure portal](../../../virtual-machines/create-fqdn.md)
+- **SQL**: [SQL Server documentation](/sql/index), [Azure SQL Database comparison](../../azure-sql-iaas-vs-paas-what-is-overview.md)
-### SQL
-* [SQL Server documentation](/sql/index)
-* [Azure SQL Database comparison](../../azure-sql-iaas-vs-paas-what-is-overview.md)
## Next steps
azure-vmware Ecosystem App Monitoring Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-app-monitoring-solutions.md
Our application performance monitoring and troubleshooting partners have industr
You can find more information about these solutions here: -- [NETSCOUT](https://www.netscout.com/technology-partners/microsoft-azure)
+- [NETSCOUT](https://www.netscout.com/technology-partners/microsoft-azure)
+- [Turbonomic](https://blog.turbonomic.com/turbonomic-announces-partnership-and-support-for-azure-vmware-service)
azure-vmware Ecosystem Os Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-os-vms.md
+
+ Title: Operating system support for Azure VMware Solution virtual machines
+description: Learn about operating system support for your Azure VMware Solution virtual machines.
+ Last updated : 03/13/2022++
+# Operating system support for Azure VMware Solution virtual machines
+
+Azure VMware Solution supports a wide range of operating systems in guest virtual machines. Because the service is based on VMware vSphere (currently version 6.7), any operating system currently supported by vSphere can be used by Azure VMware Solution customers for their workloads.
+
+Check the list of supported operating systems and configurations in the [VMware Compatibility Guide](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software) by creating a query for ESXi 6.7 Update 3 and selecting all operating systems and vendors.
+
+In addition to the operating systems supported by VMware on vSphere, we have worked with Red Hat, SUSE, and Canonical to extend the support model currently in place for Azure Virtual Machines to workloads running on Azure VMware Solution, given that it's a first-party Azure service. For more information about the benefits of running these operating systems on Azure, see the following vendor sites.
+
+- [Red Hat Enterprise Linux](https://access.redhat.com/ecosystem/microsoft-azure)
+- [Ubuntu Server](https://ubuntu.com/azure)
+- [SUSE Enterprise Linux Server](https://www.suse.com/partners/alliance/microsoft/)
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 02/02/2022 Last updated : 03/14/2022
As one of the [restore options](#restore-options), you can create a disk from a
1. In **Resource group**, select an existing resource group for the restored disks, or create a new one with a globally unique name. 1. In **Staging location**, specify the storage account to which to copy the VHDs. [Learn more](#storage-accounts).
- ![Select Resource group and Staging location](./media/backup-azure-arm-restore-vms/trigger-restore-operation1.png)
+ :::image type="content" source="./media/backup-azure-arm-restore-vms/trigger-restore-operation-disks.png" alt-text="Screenshot showing to select Resource disks.":::
1. Select **Restore** to trigger the restore operation.
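The same restore-as-disks operation can also be triggered from the command line. The following is a hedged sketch using the Azure CLI, where the vault, container, item, recovery point, and storage account names are all placeholders:

```azurecli
# Restore the backed-up VM as disks to a staging storage account (placeholder names).
az backup restore restore-disks \
  --resource-group RG1 \
  --vault-name MyRecoveryVault \
  --container-name MyVMContainer \
  --item-name MyVM \
  --rp-name MyRecoveryPointName \
  --storage-account mystagingstorage \
  --target-resource-group RG1
```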
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md
description: Learn how to deploy Azure Bastion using PowerShell.
Previously updated : 03/01/2022 Last updated : 03/14/2022 # Customer intent: As someone with a networking background, I want to deploy Bastion and connect to a VM.
# Deploy Bastion using Azure PowerShell
-This article shows you how to deploy Azure Bastion using PowerShell. Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on your VM and maintain yourself. An Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+This article shows you how to deploy Azure Bastion with the Standard SKU using PowerShell. Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on your VM and maintain yourself. An Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
Once you deploy Bastion to your virtual network, you can connect to your VMs via private IP address. This seamless RDP/SSH experience is available to all the VMs in the same virtual network. If your VM has a public IP address that you don't need for anything else, you can remove it.
You can also deploy Bastion by using the following other methods:
* [Azure CLI](create-host-cli.md) * [Quickstart - deploy with default settings](quickstart-host-portal.md)
-## Prerequisites
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-### Azure subscription
+> [!NOTE]
+> The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+>
-Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
+## Prerequisites
+
+The following prerequisites are required.
### Azure PowerShell [!INCLUDE [PowerShell](../../includes/vpn-gateway-cloud-shell-powershell-about.md)]
-> [!NOTE]
-> The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
->
+### <a name="values"></a>Example values
+
+You can use the following example values when creating this configuration, or you can substitute your own.
+
+**Basic VNet and VM values:**
-## <a name="createhost"></a>Deploy Bastion
+|**Name** | **Value** |
+| | |
+| Virtual machine| TestVM |
+| Resource group | TestRG1 |
+| Region | East US |
+| Virtual network | VNet1 |
+| Address space | 10.1.0.0/16 |
+| Subnets | FrontEnd: 10.1.0.0/24 |
-This section helps you deploy Azure Bastion using Azure PowerShell.
+**Azure Bastion values:**
-1. Create a virtual network and an Azure Bastion subnet. You must create the Azure Bastion subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. This is different than a VPN gateway subnet.
+|**Name** | **Value** |
+| | |
+| Name | VNet1-bastion |
+| Subnet Name | FrontEnd |
+| Subnet Name | AzureBastionSubnet |
+| AzureBastionSubnet addresses | A subnet within your VNet address space with a subnet mask /26 or larger.<br> For example, 10.1.1.0/26. |
+| Tier/SKU | Standard |
+| Public IP address | Create new |
+| Public IP address name | VNet1-ip |
+| Public IP address SKU | Standard |
+| Assignment | Static |
- [!INCLUDE [Note about BastionSubnet size.](../../includes/bastion-subnet-size.md)]
+## Deploy Bastion
+
+This section helps you create a virtual network, subnets, and deploy Azure Bastion using Azure PowerShell.
+
+1. Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed. If you're running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command.
```azurepowershell-interactive
- $subnetName = "AzureBastionSubnet"
- $subnet = New-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix 10.0.0.0/24
- $vnet = New-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myBastionRG" -Location "westeurope" -AddressPrefix 10.0.0.0/16 -Subnet $subnet
+ New-AzResourceGroup -Name TestRG1 -Location EastUS
```
-1. Create a public IP address for Azure Bastion. The public IP is the public IP address the Bastion resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating.
+1. Create a virtual network.
- The following example uses the **Standard SKU**. The Standard SKU lets you configure more Bastion features and connect to VMs using more connection types. For more information, see [Bastion SKUs](configuration-settings.md#skus).
+ ```azurepowershell-interactive
+ $virtualNetwork = New-AzVirtualNetwork `
+ -ResourceGroupName TestRG1 `
+ -Location EastUS `
+ -Name VNet1 `
+ -AddressPrefix 10.1.0.0/16
+ ```
+
+1. Set the configuration for the virtual network.
```azurepowershell-interactive
- $publicip = New-AzPublicIpAddress -ResourceGroupName "myBastionRG" -name "myPublicIP" -location "westeurope" -AllocationMethod Static -Sku Standard
+ $virtualNetwork | Set-AzVirtualNetwork
```
-1. Create a new Azure Bastion resource in the AzureBastionSubnet of your virtual network. It takes about 10 minutes for the Bastion resource to create and deploy.
+1. Configure and set a subnet for your virtual network. This will be the subnet to which you'll deploy a VM. The variable used for *-VirtualNetwork* was set in the previous steps.
```azurepowershell-interactive
- $bastion = New-AzBastion -ResourceGroupName "myBastionRG" -Name "myBastion" -PublicIpAddress $publicip -VirtualNetwork $vnet
+ $subnetConfig = Add-AzVirtualNetworkSubnetConfig `
+ -Name 'FrontEnd' `
+ -AddressPrefix 10.1.0.0/24 `
+ -VirtualNetwork $virtualNetwork
```
-## <a name="connect"></a>Connect to a VM
+ ```azurepowershell-interactive
+ $virtualNetwork | Set-AzVirtualNetwork
+ ```
-You can use any of the following articles to connect to a VM that's located in the virtual network to which you deployed Bastion. You can also use the [Connection steps](#steps) in the section below. Some connection types require the [Standard SKU](configuration-settings.md#skus).
+1. Configure and set the Azure Bastion subnet for your virtual network. This subnet is reserved exclusively for Azure Bastion resources. You must create the Azure Bastion subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. The example below also helps you add an Azure Bastion subnet to an existing VNet.
+ [!INCLUDE [Important about BastionSubnet size.](../../includes/bastion-subnet-size.md)]
+
+ Declare the variable.
+
+ ```azurepowershell-interactive
+ $virtualNetwork = Get-AzVirtualNetwork -Name "VNet1" `
+ -ResourceGroupName "TestRG1"
+ ```
+
+ Add the configuration.
+
+ ```azurepowershell-interactive
+ Add-AzVirtualNetworkSubnetConfig -Name "AzureBastionSubnet" `
+ -VirtualNetwork $virtualNetwork -AddressPrefix "10.1.1.0/26"
+ ```
+
+ Set the configuration.
+
+ ```azurepowershell-interactive
+ $virtualNetwork | Set-AzVirtualNetwork
+ ```
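+
+ Optionally, you can re-read the virtual network and confirm that the **AzureBastionSubnet** was added. The following check is a quick, optional sanity step and assumes the example names used above:
+
+ ```azurepowershell-interactive
+ # Re-fetch the virtual network from Azure and show the Bastion subnet configuration.
+ $virtualNetwork = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1"
+ Get-AzVirtualNetworkSubnetConfig -Name "AzureBastionSubnet" -VirtualNetwork $virtualNetwork
+ ```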
+
+1. Create a public IP address for Azure Bastion. The public IP is the public IP address the Bastion resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating.
+
+ ```azurepowershell-interactive
+ $publicip = New-AzPublicIpAddress -ResourceGroupName "TestRG1" -name "VNet1-ip" -location "EastUS" -AllocationMethod Static -Sku Standard
+ ```
+
+1. Create a new Azure Bastion resource in the AzureBastionSubnet using the [New-AzBastion](/powershell/module/az.network/new-azbastion) command. The following example uses the **Standard SKU**. The Standard SKU lets you configure more Bastion features and connect to VMs using more connection types. For more information, see [Bastion SKUs](configuration-settings.md#skus). If you want to deploy using the Basic SKU, change the -Sku value to "Basic".
+
+ ```azurepowershell-interactive
+ New-AzBastion -ResourceGroupName "TestRG1" -Name "VNet1-bastion" `
+ -PublicIpAddressRgName "TestRG1" -PublicIpAddressName "VNet1-ip" `
+ -VirtualNetworkRgName "TestRG1" -VirtualNetworkName "VNet1" `
+ -Sku "Standard"
+ ```
+
+1. It takes about 10 minutes for the Bastion resources to deploy. You can create a VM in the next section while Bastion deploys to your virtual network.
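+
+ To check on progress, an optional status query such as the following (assuming the example names above) returns the provisioning state of the Bastion resource:
+
+ ```azurepowershell-interactive
+ # Returns "Succeeded" once the Bastion host has finished deploying.
+ (Get-AzBastion -ResourceGroupName "TestRG1" -Name "VNet1-bastion").ProvisioningState
+ ```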
+
+## <a name="create-vm"></a>Create a VM
+
+You can create a VM using the [Quickstart: Create a VM using PowerShell](../virtual-machines/windows/quick-create-powershell.md) or [Quickstart: Create a VM using the portal](../virtual-machines/windows/quick-create-portal.md) articles. Be sure you deploy the VM to the virtual network to which you deployed Bastion. The VM you create in this section isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later in this article via Bastion.
+
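+If you prefer to stay in PowerShell, the following minimal sketch creates a Windows VM in the **FrontEnd** subnet of **VNet1**. The VM name, location, and default image are placeholder assumptions you can adjust; the VM itself isn't part of the Bastion configuration:
+
+```azurepowershell-interactive
+# Prompts for the local administrator credentials of the new VM.
+$cred = Get-Credential
+
+# Creates a VM in the existing VNet1/FrontEnd subnet and allows inbound RDP (3389),
+# which Bastion uses to reach the VM over its private IP address.
+New-AzVM -ResourceGroupName "TestRG1" -Location "EastUS" -Name "TestVM" `
+  -VirtualNetworkName "VNet1" -SubnetName "FrontEnd" `
+  -Credential $cred -OpenPorts 3389
+```
+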
+The following roles are required for your resources.
+
+* Required VM roles:
+
+ * Reader role on the virtual machine.
+ * Reader role on the NIC with private IP of the virtual machine.
+
+* Required inbound ports:
+
+ * For Windows VMs - RDP (3389)
+ * For Linux VMs - SSH (22)
+
+## <a name="connect"></a>Connect to a VM
+
+You can use the [Connection steps](#steps) in the section below to easily connect to your VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus). You can also use any of the [VM connection articles](#articles) to connect to a VM.
### <a name="steps"></a>Connection steps [!INCLUDE [Connection steps](../../includes/bastion-vm-connect.md)]
+#### <a name="articles"></a>Connect to VM articles
++ ## <a name="ip"></a>Remove VM public IP address Azure Bastion doesn't use the public IP address to connect to the client VM. If you don't need the public IP address for your VM, you can disassociate the public IP address. See [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md).
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
This is the public IP address of the Bastion host resource on which RDP/SSH will
[!INCLUDE [Connect to a VM](../../includes/bastion-vm-connect.md)]
+### To enable audio output
++ ## Remove VM public IP address [!INCLUDE [Remove a public IP address from a VM](../../includes/bastion-remove-ip.md)]
cognitive-services Set Up Qnamaker Service Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md
# Manage QnA Maker resources
-Before you can create any QnA Maker knowledge bases, you must first set up a QnA Maker service in Azure. Anyone with authorization to create new resources in a subscription can set up a QnA Maker service. If you are trying the Custom question answering feature, you would need to create the Text Analytics resource and add the Custom question answering feature.
+Before you can create any QnA Maker knowledge bases, you must first set up a QnA Maker service in Azure. Anyone with authorization to create new resources in a subscription can set up a QnA Maker service. If you are trying the Custom question answering feature, you would need to create the Language resource and add the Custom question answering feature.
[!INCLUDE [Custom question answering](../includes/new-version.md)]
cognitive-services Translate With Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2-preview/how-to/translate-with-custom-model.md
After you publish your custom model, you can access it with the Translator API b
More information about the Translator Text API can be found on the [Translator API Reference](../../../reference/v3-0-translate.md) page.
-1. You may also want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslator/releases/tag/V2.9.4).
+1. You may also want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
## Next steps
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/language-support.md
| Hungarian | `hu` |✔|✔|✔|✔|✔| | Icelandic | `is` |✔|✔|✔|✔|✔| | Indonesian | `id` |✔|✔|✔|✔|✔|
-| 🆕 </br> Inuinnaqtun | `ikt` |✔|||||
+| Inuinnaqtun | `ikt` |✔|||||
| Inuktitut | `iu` |✔|✔|✔|✔||
-| 🆕 </br> Inuktitut (Latin) | `iu-Latn` |✔|||||
+| Inuktitut (Latin) | `iu-Latn` |✔|||||
| Irish | `ga` |✔|✔|✔|✔|| | Italian | `it` |✔|✔|✔|✔|✔| | Japanese | `ja` |✔|✔|✔|✔|✔|
| Serbian (Latin) | `sr-Latn` |✔|✔|✔|✔|✔| | Slovak | `sk` |✔|✔|✔|✔|✔| | Slovenian | `sl` |✔|✔|✔|✔|✔|
+| Somali | `so` |✔|||✔||
| Spanish | `es` |✔|✔|✔|✔|✔| | Swahili | `sw` |✔|✔|✔|✔|✔| | Swedish | `sv` |✔|✔|✔|✔|✔|
| Turkish | `tr` |✔|✔|✔|✔|✔| | Turkmen | `tk` |✔|||| | Ukrainian | `uk` |✔|✔|✔|✔|✔|
-| 🆕 </br> Upper Sorbian | `hsb` |✔|||||
+| Upper Sorbian | `hsb` |✔|||||
| Urdu | `ur` |✔|✔|✔|✔|✔| | Uyghur | `ug` |✔|||| | Uzbek (Latin) | `uz` |✔|||✔||
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
Previously updated : 02/24/2022 Last updated : 03/14/2022 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
Containerization is an approach to software distribution in which an application
- **Control over data**: Choose where your data gets processed by Cognitive Services. This can be essential if you can't send data to the cloud but need access to Cognitive Services APIs. Support consistency in hybrid environments across data, management, identity, and security. - **Control over model updates**: Flexibility in versioning and updating of models deployed in their solutions. - **Portable architecture**: Enables the creation of a portable application architecture that can be deployed on Azure, on-premises and the edge. Containers can be deployed directly to [Azure Kubernetes Service](../aks/index.yml), [Azure Container Instances](../container-instances/index.yml), or to a [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).-- **High throughput / low latency**: Provide customers the ability to scale for high throughput and low latency requirements by enabling Cognitive Services to run physically close to their application logic and data. Containers do not cap transactions per second (TPS) and can be made to scale both up and out to handle demand if you provide the necessary hardware resources.
+- **High throughput / low latency**: Provide customers the ability to scale for high throughput and low latency requirements by enabling Cognitive Services to run physically close to their application logic and data. Containers don't cap transactions per second (TPS) and can be made to scale both up and out to handle demand if you provide the necessary hardware resources.
- **Scalability**: With the ever-growing popularity of containerization and container orchestration software, such as Kubernetes, scalability is at the forefront of technological advancements. Building on a scalable cluster foundation, application development caters to high availability. ## Containers in Azure Cognitive Services Azure Cognitive Services containers provide the following set of Docker containers, each of which contains a subset of functionality from services in Azure Cognitive Services. You can find instructions and image locations in the tables below. A list of [container images](containers/container-image-tags.md) is also available.
+> [!NOTE]
+> See [Install and run Form Recognizer containers](../applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md) for **Applied AI Services Form Recognizer** container instructions and image locations.
+ ### Decision containers | Service | Container | Description | Availability |
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
Previously updated : 09/09/2021 Last updated : 03/14/2022
Azure Cognitive Services offers many container images. The container registries and corresponding repositories vary between container images. Each container image name offers multiple tags. A container image tag is a mechanism of versioning the container image. This article is intended to be used as a comprehensive reference for listing all the Cognitive Services container images and their available tags.
+> [!NOTE]
+> See [Form Recognizer container image tags and release notes](../../applied-ai-services/form-recognizer/containers/form-recognizer-container-image-tags.md) for **Applied AI Services Form Recognizer** container tag information and updates.
+ > [!TIP] > When using [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/), pay close attention to the casing of the container registry, repository, container image name and corresponding tag - as they are **case sensitive**.
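
For example, a pull of the sentiment analysis container is written entirely in lowercase. The repository path and tag below are illustrative; check the tag tables later in this article for current values:

```powershell
docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:latest
```
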
Regular monthly upgrade
Release note for `2.15.0-amd64`: **Fixes**
-* Fix container start issue that may occur when customer run it in some RHEL environments.
-* Fix model download nil error issue in some cases when customer download customized models.
+* Fix container start issue that may occur when customers run it in some RHEL environments.
+* Fix model download nil error issue in some cases when customers download customized models.
Release note for `2.14.0-amd64`:
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/language-support.md
Previously updated : 11/02/2021 Last updated : 03/14/2022
When creating a conversation project in CLU, you can specify the primary languag
The supported languages for conversation projects are:
-| **Language** | **Language Code** |
+| Language | Language code |
| | |
-| Brazilian Portuguese | `pt-br` |
-| Chinese | `zh-cn` |
-| Dutch | `nl-nl` |
-| English | `en-us` |
-| French | `fr-fr` |
-| German | `de-de` |
-| Gujarati | `gu-in` |
-| Hindi | `hi-in` |
-| Italian | `it-it` |
-| Japanese | `ja-jp` |
-| Korean | `ko-kr` |
-| Marathi | `mr-in` |
-| Spanish | `es-es` |
-| Tamil | `ta-in` |
-| Telugu | `te-in` |
-| Turkish | `tr-tr` |
+| Afrikaans | `af` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Belarusian | `be` |
+| Bulgarian | `bg` |
+| Bengali | `bn` |
+| Breton | `br` |
+| Bosnian | `bs` |
+| Catalan | `ca` |
+| Czech | `cs` |
+| Welsh | `cy` |
+| Danish | `da` |
+| German | `de` |
+| Greek | `el` |
+| English (US) | `en-us` |
+| English (UK) | `en-gb` |
+| Esperanto | `eo` |
+| Spanish | `es` |
+| Estonian | `et` |
+| Basque | `eu` |
+| Persian (Farsi) | `fa` |
+| Finnish | `fi` |
+| French | `fr` |
+| Western Frisian | `fy` |
+| Irish | `ga` |
+| Scottish Gaelic | `gd` |
+| Galician | `gl` |
+| Gujarati | `gu` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Croatian | `hr` |
+| Hungarian | `hu` |
+| Armenian | `hy` |
+| Indonesian | `id` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Georgian | `ka` |
+| Kazakh | `kk` |
+| Khmer | `km` |
+| Kannada | `kn` |
+| Korean | `ko` |
+| Kurdish (Kurmanji) | `ku` |
+| Kyrgyz | `ky` |
+| Latin | `la` |
+| Lao | `lo` |
+| Lithuanian | `lt` |
+| Latvian | `lv` |
+| Malagasy | `mg` |
+| Macedonian | `mk` |
+| Malayalam | `ml` |
+| Mongolian | `mn` |
+| Marathi | `mr` |
+| Malay | `ms` |
+| Burmese | `my` |
+| Nepali | `ne` |
+| Dutch | `nl` |
+| Norwegian (Bokmal) | `nb` |
+| Oriya | `or` |
+| Punjabi | `pa` |
+| Polish | `pl` |
+| Pashto | `ps` |
+| Portuguese (Brazil) | `pt-br` |
+| Portuguese (Portugal) | `pt-pt` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Sanskrit | `sa` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Albanian | `sq` |
+| Serbian | `sr` |
+| Sundanese | `su` |
+| Swedish | `sv` |
+| Swahili | `sw` |
+| Tamil | `ta` |
+| Telugu | `te` |
+| Thai | `th` |
+| Filipino | `tl` |
+| Turkish | `tr` |
+| Uyghur | `ug` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Chinese (Simplified) | `zh-hans` |
+| Chinese (Traditional) | `zh-hant` |
+| Zulu | `zu` |
#### Multilingual conversation projects
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/language-support.md
Previously updated : 11/02/2021 Last updated : 03/14/2022
With custom text classification, you can train a model in one language and test
Custom text classification supports `.txt` files in the following languages:
-| Language | Locale |
-|--|--|
-| English (United States) |`en-US` |
-| French (France) |`fr-FR` |
-| German |`de-DE` |
-| Italian |`it-IT` |
-| Spanish (Spain) |`es-ES` |
-| Portuguese (Portugal) | `pt-PT` |
-| Portuguese (Brazil) | `pt-BR` |
+| Language | Language Code |
+| | |
+| Afrikaans | `af` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Belarusian | `be` |
+| Bulgarian | `bg` |
+| Bengali | `bn` |
+| Breton | `br` |
+| Bosnian | `bs` |
+| Catalan | `ca` |
+| Czech | `cs` |
+| Welsh | `cy` |
+| Danish | `da` |
+| German | `de` |
+| Greek | `el` |
+| English (US) | `en-us` |
+| Esperanto | `eo` |
+| Spanish | `es` |
+| Estonian | `et` |
+| Basque | `eu` |
+| Persian (Farsi) | `fa` |
+| Finnish | `fi` |
+| French | `fr` |
+| Western Frisian | `fy` |
+| Irish | `ga` |
+| Scottish Gaelic | `gd` |
+| Galician | `gl` |
+| Gujarati | `gu` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Croatian | `hr` |
+| Hungarian | `hu` |
+| Armenian | `hy` |
+| Indonesian | `id` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Georgian | `ka` |
+| Kazakh | `kk` |
+| Khmer | `km` |
+| Kannada | `kn` |
+| Korean | `ko` |
+| Kurdish (Kurmanji) | `ku` |
+| Kyrgyz | `ky` |
+| Latin | `la` |
+| Lao | `lo` |
+| Lithuanian | `lt` |
+| Latvian | `lv` |
+| Malagasy | `mg` |
+| Macedonian | `mk` |
+| Malayalam | `ml` |
+| Mongolian | `mn` |
+| Marathi | `mr` |
+| Malay | `ms` |
+| Burmese | `my` |
+| Nepali | `ne` |
+| Dutch | `nl` |
+| Norwegian (Bokmal) | `nb` |
+| Oriya | `or` |
+| Punjabi | `pa` |
+| Polish | `pl` |
+| Pashto | `ps` |
+| Portuguese (Brazil) | `pt-br` |
+| Portuguese (Portugal) | `pt-pt` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Sanskrit | `sa` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Albanian | `sq` |
+| Serbian | `sr` |
+| Sundanese | `su` |
+| Swedish | `sv` |
+| Swahili | `sw` |
+| Tamil | `ta` |
+| Telugu | `te` |
+| Thai | `th` |
+| Filipino | `tl` |
+| Turkish | `tr` |
+| Uyghur | `ug` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Chinese (Simplified) | `zh-hans` |
+| Zulu | `zu` |
## Next steps
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/language-support.md
Previously updated : 11/02/2021 Last updated : 03/14/2022
With custom NER, you can train a model in one language and test in another langu
Custom NER supports `.txt` files in the following languages:
-| Language | Locale |
-|--|--|
-| English (United States) |`en-US` |
-| French (France) |`fr-FR` |
-| German |`de-DE` |
-| Italian |`it-IT` |
-| Spanish (Spain) |`es-ES` |
-| Portuguese (Portugal) | `pt-PT` |
-| Portuguese (Brazil) | `pt-BR` |
+| Language | Language code |
+| | |
+| Afrikaans | `af` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Belarusian | `be` |
+| Bulgarian | `bg` |
+| Bengali | `bn` |
+| Breton | `br` |
+| Bosnian | `bs` |
+| Catalan | `ca` |
+| Czech | `cs` |
+| Welsh | `cy` |
+| Danish | `da` |
+| German | `de` |
+| Greek | `el` |
+| English (US) | `en-us` |
+| Esperanto | `eo` |
+| Spanish | `es` |
+| Estonian | `et` |
+| Basque | `eu` |
+| Persian (Farsi) | `fa` |
+| Finnish | `fi` |
+| French | `fr` |
+| Western Frisian | `fy` |
+| Irish | `ga` |
+| Scottish Gaelic | `gd` |
+| Galician | `gl` |
+| Gujarati | `gu` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Croatian | `hr` |
+| Hungarian | `hu` |
+| Armenian | `hy` |
+| Indonesian | `id` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Georgian | `ka` |
+| Kazakh | `kk` |
+| Khmer | `km` |
+| Kannada | `kn` |
+| Korean | `ko` |
+| Kurdish (Kurmanji) | `ku` |
+| Kyrgyz | `ky` |
+| Latin | `la` |
+| Lao | `lo` |
+| Lithuanian | `lt` |
+| Latvian | `lv` |
+| Malagasy | `mg` |
+| Macedonian | `mk` |
+| Malayalam | `ml` |
+| Mongolian | `mn` |
+| Marathi | `mr` |
+| Malay | `ms` |
+| Burmese | `my` |
+| Nepali | `ne` |
+| Dutch | `nl` |
+| Norwegian (Bokmal) | `nb` |
+| Oriya | `or` |
+| Punjabi | `pa` |
+| Polish | `pl` |
+| Pashto | `ps` |
+| Portuguese (Brazil) | `pt-br` |
+| Portuguese (Portugal) | `pt-pt` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Sanskrit | `sa` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Albanian | `sq` |
+| Serbian | `sr` |
+| Sundanese | `su` |
+| Swedish | `sv` |
+| Swahili | `sw` |
+| Tamil | `ta` |
+| Telugu | `te` |
+| Thai | `th` |
+| Filipino | `tl` |
+| Turkish | `tr` |
+| Uyghur | `ug` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Chinese (Simplified) | `zh-hans` |
+| Zulu | `zu` |
## Next steps
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## March 2022
+
+* Expanded language support for:
+ * [Custom text classification](custom-classification/language-support.md)
+ * [Custom Named Entity Recognition (NER)](custom-named-entity-recognition/language-support.md)
+ * [Conversational language understanding](conversational-language-understanding/language-support.md)
+ ## February 2022 -- Model improvements for latest model-version for [text summarization](text-summarization/overview.md)
+* Model improvements for latest model-version for [text summarization](text-summarization/overview.md)
## December 2021
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
Previously updated : 05/25/2021 Last updated : 03/14/2022 tags: connectors
This connector includes a Microsoft MQ client that communicates with a remote MQ
## Available operations
-* Multi-tenant Azure Logic Apps: When you create a **Logic App (Consumption)** resource, you can connect to an MQ server only by using the *managed* MQ connector. This connector provides only actions, no triggers.
+* Consumption logic app: You can connect to an MQ server only by using the *managed* MQ connector. This connector provides only actions, no triggers.
-* Single-tenant Azure Logic Apps: When you create a single-tenant based logic app workflow, you can connect to an MQ server by using either the managed MQ connector, which includes *only* actions, or the *built-in* MQ operations, which includes triggers *and* actions.
+* Standard logic app: You can connect to an MQ server by using either the managed MQ connector, which includes *only* actions, or the *built-in* MQ operations, which include triggers *and* actions.
For more information about the difference between a managed connector and built-in operations, review [key terms in Logic Apps](../logic-apps/logic-apps-overview.md#logic-app-concepts).
These built-in MQ operations also have the following capabilities plus the benef
## Limitations
-The MQ connector doesn't use the message's **Format** field and doesn't make any character set conversions. The connector only puts whatever data appears in the message field into a JSON message and sends the message along.
+* The MQ connector doesn't support segmented messages.
+
+* The MQ connector doesn't use the message's **Format** field and doesn't make any character set conversions. The connector only puts whatever data appears in the message field into a JSON message and sends the message along.
## Prerequisites
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Depending on the current RU/s provisioned and resource settings, each resource c
| Maximum RU/s per container | 5,000 | | Maximum storage across all items per (logical) partition | 20 GB | | Maximum number of distinct (logical) partition keys | Unlimited |
-| Maximum storage per container | 50 GB |
+| Maximum storage per container | 50 GB * |
+
+> [!NOTE]
+> * The maximum storage limit is 30 GB for the Cassandra API.
## Control plane operations
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
tags: billing
Previously updated : 12/10/2021 Last updated : 03/11/2022
In the Azure portal, you can change your default payment method to a new credit
If you want to a delete credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md).
-The supported payment methods for Microsoft Azure are credit cards and check/wire transfer. To get approved to pay by check/wire transfer, see [Pay for your Azure subscription by check or wire transfer](pay-by-invoice.md).
+The supported payment methods for Microsoft Azure are credit cards, debit cards, and check or wire transfer. To get approved to pay by check or wire transfer, see [Pay for your Azure subscription by check or wire transfer](pay-by-invoice.md).
With a Microsoft Customer Agreement, your payment methods are associated with billing profiles. Learn how to [check access to a Microsoft Customer Agreement](#check-the-type-of-your-account).
The following sections apply to customers who have a Microsoft Customer Agreemen
If you have a Microsoft Customer Agreement, your credit card is associated with a billing profile. To change the payment method for a billing profile, you must be the person who signed up for Azure and created the billing account or you must have the correct [MCA permissions](understand-mca-roles.md).
-If you'd like to change your billing profile's default payment method to check/wire transfer, see [Pay for Azure subscriptions by invoice](pay-by-invoice.md).
+If you'd like to change your billing profile's default payment method to check or wire transfer, see [Pay for Azure subscriptions by invoice](pay-by-invoice.md).
To change your credit card, follow these steps:
cost-management-billing Resolve Past Due Balance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/resolve-past-due-balance.md
tags: billing
Previously updated : 10/27/2021 Last updated : 03/11/2022 + # Resolve past due balance for your pay-as-you-go Azure subscription
This article applies to customers who signed up for Azure online with a credit c
If you have a Microsoft Customer Agreement billing account, see [Pay Microsoft Customer Agreement bill](../understand/pay-bill.md) instead.
-If your payment isn't received or if we can't process your payment, you will get an email and see an alert in the Azure portal telling you that your subscription is past due. The email contains a link that takes you to the Settle balance page.
+If your payment isn't received or if we can't process your payment, you'll get an email and see an alert in the Azure portal telling you that your subscription is past due. The email contains a link that takes you to the Settle balance page.
-If your default payment method is credit card, the [Account Administrator](add-change-subscription-administrator.md#whoisaa) can settle the outstanding charges in the Azure portal. If you pay by invoice (check/wire transfer), send your payment to the location listed at the bottom of your invoice.
+If your default payment method is credit card, the [Account Administrator](add-change-subscription-administrator.md#whoisaa) can settle the outstanding charges in the Azure portal. If you pay by invoice (check or wire transfer), send your payment to the location listed at the bottom of your invoice.
> [!IMPORTANT] > * If you have multiple subscriptions using the same credit card and they are all past due, you must pay the entire outstanding balance at once.
If your default payment method is credit card, the [Account Administrator](add-c
1. Sign in to the [Azure portal](https://portal.azure.com) as the Account Admin. 1. Search for **Cost Management + Billing**. 1. Select the past due subscription from the **Overview** page.
-1. In the **Subscription overview** page, click the red past due banner to settle the balance.
+1. In the **Subscription overview** page, select the red past due banner to settle the balance.
> [!NOTE] > If you are not the Account Administrator, you will not be able to settle the balance. - If your account is in good standing, you won't see any banners. - If your account has a bill ready to be paid, you'll see a blue banner that takes you to the Settle balance page. You'll also receive an email that has a link to the Settle balance page. - If your account is past due, you'll see a red banner that says your account is past due that takes you to the Settle balance page. You'll also receive an email that has a link to the Settle balance page.
-1. In the new **Settle balance** page, click **Select payment method**.
-1. In the new blade on the right, select a credit card from the drop-down or add a new one by clicking the blue **Add new payment method** link. This credit card will become the active payment method for all subscriptions currently using the failed payment method.
+1. In the new **Settle balance** page, select **Select payment method**.
+1. In the new area on the right, select a credit card from the drop-down or add a new one by selecting the blue **Add new payment method** link. This credit card will become the active payment method for all subscriptions currently using the failed payment method.
> [!NOTE] > * The total outstanding balance reflects outstanding charges across all Microsoft services using the failed payment method. > * If the selected payment method also has outstanding charges for Microsoft services, this will be reflected in the total outstanding balance. You must pay those outstanding charges, too.
-1. Click **Pay**.
+1. Select **Pay**.
+
+## Settle balance might be Pay now
+
+Users in the following countries/locales don't see the **Settle balance** option. Instead, they use the [Pay now](../understand/pay-bill.md#pay-now-in-the-azure-portal) option to pay their bill.
+
+- AT - Austria
+- AU - Australia
+- BE - Belgium
+- BG - Bulgaria
+- CA - Canada
+- CH - Switzerland
+- CZ - Czech Republic
+- DE - Germany
+- DK - Denmark
+- EE - Estonia
+- ES - Spain
+- FI - Finland
+- FR - France
+- GB - United Kingdom
+- GR - Greece
+- HR - Croatia
+- HU - Hungary
+- IE - Ireland
+- IT - Italy
+- JP - Japan
+- KR - South Korea
+- LT - Lithuania
+- LV - Latvia
+- NL - Netherlands
+- NO - Norway
+- NZ - New Zealand
+- PL - Poland
+- PT - Portugal
+- RO - Romania
+- SE - Sweden
+- SK - Slovakia
+- TW - Taiwan
## Troubleshoot declined credit card
-If your credit card charge is declined by your financial institution, please reach out to your financial institution to resolve the issue. Check with your bank to make sure:
+If your credit card charge is declined by your financial institution, contact your financial institution to resolve the issue. Check with your bank to make sure:
- International transactions are enabled on the card. - The card has sufficient credit limit or funds to settle the balance. - Recurring payments are enabled on the card.
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
tags: billing, past due, pay now, bill, invoice, pay
Previously updated : 12/17/2021 Last updated : 03/11/2022
To pay invoices in the Azure portal, you must have the correct [MCA permissions]
The invoice status shows *paid* within 24 hours.
+## Pay now might be unavailable
+
+If you have a Microsoft Online Services Program account (pay-as-you-go account), the **Pay now** option might be unavailable. Instead, you might see a **Settle balance** banner. If so, see [Resolve past due balance](../manage/resolve-past-due-balance.md#resolve-past-due-balance-in-the-azure-portal).
+ ## Check access to a Microsoft Customer Agreement [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]
data-factory Data Factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-introduction.md
Last updated 10/22/2021
-# Introduction to Azure Data Factory
+# Introduction to Azure Data Factory V1
> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"] > * [Version 1](data-factory-introduction.md) > * [Version 2 (current version)](../introduction.md)
ddos-protection Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/alerts.md
na Previously updated : 12/28/2020 Last updated : 3/11/2022
This [template](https://aka.ms/ddosalert) deploys the necessary components of an
You can select any of the available DDoS protection metrics to alert you when there's an active mitigation during an attack, using the Azure Monitor alert configuration. 1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your DDoS Protection Plan.
-2. Under **Monitoring**, select **Metrics**.
-3. In the gray navigation bar, select **New alert rule**.
-4. Enter, or select your own values, or enter the following example values, accept the remaining defaults, and then select **Create alert rule**:
-
- |Setting |Value |
- | | |
- | Scope | Select **Select resource**. </br> Select the **Subscription** that contains the public IP address you want to log, select **Public IP Address** for **Resource type**, then select the specific public IP address you want to log metrics for. </br> Select **Done**. |
- | Condition | Select **Select condition**. </br> Under signal name, select **Under DDoS attack or not**. </br> Under **Operator**, select **Greater than or equal to**. </br> Under **Aggregation type**, select **Maximum**. </br> Under **Threshold value**, enter *1*. For the **Under DDoS attack or not** metric, **0** means you are not under attack while **1** means you are under attack. </br> Select **Done**. |
- | Actions | Select **Add actions groups**. </br> Select **Create action group**. </br> Under **Notifications**, under **Notification type**, select **Email/SMS message/Push/Voice**. </br> Under **Name**, enter _MyUnderAttackEmailAlert_. </br> Click the edit button, then select **Email** and as many of the following options you require, and then select **OK**. </br> Select **Review + create**. |
- | Alert rule details | Under **Alert rule name**, Enter _MyDdosAlert_. |
+
+1. Under **Monitoring**, select **Alerts**.
+
+1. Select the **+ New Alert Rule** button or select **+ Create** on the navigation bar, then select **Alert rule**.
+
+1. Close the **Select a Signal** page.
+
+1. On the **Create an alert rule** page, you'll see the following tabs:
+
+ - Scope
+ - Condition
+ - Actions
+ - Details
+ - Tags
+ - Review + create
+
+ For each step, use the values described below:
+
+ | Setting | Value |
+ |--|--|
+ | Scope | 1) Select **+ Select Scope**. <br/> 2) From the *Filter by subscription* dropdown list, select the **Subscription** that contains the public IP address you want to log. <br/> 3) From the *Filter by resource type* dropdown list, select **Public IP Address**, then select the specific public IP address you want to log metrics for. <br/> 4) Select **Done**. |
+ | Condition | 1) Select the **+ Add Condition** button <br/> 2) In the *Search by signal name* search box, select **Under DDoS attack or not**. <br/> 3) Leave *Chart period* and *Alert Logic* as default. <br/> 4) From the *Operator* drop-down, select **Greater than or equal to**. <br/> 5) From the *Aggregation type* drop-down, select **Maximum**. <br/> 6) In the *Threshold value* box, enter **1**. For the *Under DDoS attack or not metric*, **0** means you're not under attack while **1** means you are under attack. <br/> 7) Select **Done**. |
+ | Actions | 1) Select the **+ Create action group** button. <br/> 2) On the **Basics** tab, select your subscription and a resource group, and provide the *Action group name* and *Display name*. <br/> 3) On the *Notifications* tab, under *Notification type*, select **Email/SMS message/Push/Voice**. <br/> 4) Under *Name*, enter **MyUnderAttackEmailAlert**. <br/> 5) On the *Email/SMS message/Push/Voice* page, enter the **Email** and as many of the available options as you require, and then select **OK**. <br/> 6) Select **Review + create** and then select **Create**. |
+ | Details | 1) Under *Alert rule name*, enter *MyDdosAlert*. <br/> 2) Select **Review + create** and then select **Create**. |
Within a few minutes of attack detection, you should receive an email from Azure Monitor metrics that looks similar to the following picture:
defender-for-cloud Defender For Container Registries Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-cicd.md
To enable vulnerability scans of images in your GitHub workflows:
subscription-token: ${{ secrets.AZ_SUBSCRIPTION_TOKEN }} ```
-1. Run the workflow that will push the image to the selected container registry. Once the image is pushed into the registry, a scan of the registry runs and you can view the CI/CD scan results along with the registry scan results within Microsoft Defender for Cloud.
+1. Run the workflow that will push the image to the selected container registry. Once the image is pushed into the registry, a scan of the registry runs and you can view the CI/CD scan results along with the registry scan results within Microsoft Defender for Cloud. Running the above YAML file will install an instance of Aqua Security's [Trivy](https://github.com/aquasecurity/trivy) in your build system. Trivy is licensed under the Apache 2.0 License and has dependencies on data feeds, many of which contain their own terms of use.
1. [View CI/CD scan results](#view-cicd-scan-results).
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
The **Azure Policy add-on for Kubernetes** collects cluster and workload configu
| Pod Name | Namespace | Kind | Short Description | Capabilities | Resource limits | Egress Required | |--|--|--|--|--|--|--|
-| azuredefender-collector-ds-* | kube-system | [DeamonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 64Mi<br> <br> cpu: 60m | No |
+| azuredefender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 64Mi<br> <br> cpu: 60m | No |
| azuredefender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bounded to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No |
-| azuredefender-publisher-ds-* | kube-system | [DeamonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers' backend service where the data will be processed for and analyzed. | N/A | memory: 64Mi  <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
+| azuredefender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers' backend service where the data will be processed and analyzed. | N/A | memory: 200Mi <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
\* resource limits aren't configurable
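
To see these components on a protected cluster, a quick query like the following lists the Defender pods in the kube-system namespace. It assumes kubectl access to the cluster and uses PowerShell's Select-String for filtering:

```powershell
kubectl get pods --namespace kube-system | Select-String "azuredefender"
```
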
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Title: Microsoft Defender for Kubernetes - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Kubernetes. Previously updated : 11/23/2021 Last updated : 03/10/2022
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Microsoft Defender for Cloud provides environment hardening, workload protection, and run-time protections as outlined in [Container security in Defender for Cloud](defender-for-containers-introduction.md).
-
-Defender for Kubernetes protects your Kubernetes clusters whether they're running in:
--- **Azure Kubernetes Service (AKS)** - Microsoft's managed service for developing, deploying, and managing containerized applications.--- **Amazon Elastic Kubernetes Service (EKS) in a connected Amazon Web Services (AWS) account** (preview) - Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.--- **An unmanaged Kubernetes distribution** - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters on premises or on IaaS. Learn more in [Defend Azure Arc-enabled Kubernetes clusters running in on-premises and multi-cloud environments](defender-for-kubernetes-azure-arc.md).
+Defender for Cloud provides real-time threat protection for your Azure Kubernetes Service (AKS) containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
+Threat protection at the cluster level is provided by the analysis of the Kubernetes audit logs.
Host-level threat detection for your Linux AKS nodes is available if you enable [Microsoft Defender for servers](defender-for-servers-introduction.md) and its Log Analytics agent. However, if your cluster is deployed on an Azure Kubernetes Service virtual machine scale set, the Log Analytics agent is not currently supported. - ## Availability > [!IMPORTANT]
-> Microsoft Defender for Kubernetes has been replaced with **Microsoft Defender for Containers**. If you've already enabled Defender for Kubernetes on a subscription, you can continue to use it. However, you won't get Defender for Containers' improvements and new features.
+> Microsoft Defender for Kubernetes has been replaced with [**Microsoft Defender for Containers**](defender-for-containers-introduction.md). If you've already enabled Defender for Kubernetes on a subscription, you can continue to use it. However, you won't get Defender for Containers' improvements and new features.
> > This plan is no longer available for subscriptions where it isn't already enabled. >
Host-level threat detection for your Linux AKS nodes is available if you enable
|Aspect|Details| |-|:-|
-|Release state:|General availability (GA)<br>Protections for EKS clusters are preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|**Microsoft Defender for Kubernetes** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).<br>**Containers plan** for EKS clusters in connected AWS accounts is free while it's in preview.|
+|Release state:|General availability (GA)|
+|Pricing:|**Microsoft Defender for Kubernetes** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).|
|Required roles and permissions:|**Security admin** can dismiss alerts.<br>**Security reader** can view findings.|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview)|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
||| ## What are the benefits of Microsoft Defender for Kubernetes?
Our global team of security researchers constantly monitor the threat landscape.
In addition, Microsoft Defender for Kubernetes provides **cluster-level threat protection** by monitoring your clusters' logs. This means that security alerts are only triggered for actions and deployments that occur *after* you've enabled Defender for Kubernetes on your subscription.
-> [!TIP]
-> For EKS-based clusters, we monitor the control plane audit logs. These are enabled in the containers plan configuration:
-> :::image type="content" source="media/defender-for-kubernetes-intro/eks-audit-logs-enabled.png" alt-text="Screenshot of AWS connector's containers plan with audit logs enabled.":::
- Examples of security events that Microsoft Defender for Kubernetes monitors include: - Exposed Kubernetes dashboards - Creation of high privileged roles - Creation of sensitive mounts.
-For a full list of the cluster level alerts, see the [reference table of alerts](alerts-reference.md#alerts-k8scluster).
--
-## Protect Azure Kubernetes Service (AKS) clusters
-
-To protect your AKS clusters, enable the Defender plan on the relevant subscription:
-
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the relevant subscription.
-1. In the **Defender plans** page, set the status of Microsoft Defender for Kubernetes to **On**.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/enable-defender-for-kubernetes.png" alt-text="Screenshot of Microsoft Defender for Kubernetes plan being enabled.":::
-
-1. Select **Save**.
-
-## Protect Amazon Elastic Kubernetes Service clusters
-
-> [!IMPORTANT]
-> If you haven't already connected an AWS account, do so now using the instructions in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) and skip to step 3 below.
-
-To protect your EKS clusters, enable the Containers plan on the relevant account connector:
-
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the AWS connector.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/select-aws-connector.png" alt-text="Screenshot of Defender for Cloud's environment settings page showing an AWS connector.":::
-
-1. Set the toggle for the **Containers** plan to **On**.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/enable-containers-plan-on-aws-connector.png" alt-text="Screenshot of enabling Defender for Containers for an AWS connector.":::
-
-1. Optionally, to change the retention period for your audit logs, select **Configure**, enter the desired timeframe, and select **Save**.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/adjust-eks-logs-retention.png" alt-text="Screenshot of adjusting the retention period for EKS control pane logs." lightbox="./media/defender-for-kubernetes-intro/adjust-eks-logs-retention.png":::
-
-1. Continue through the remaining pages of the connector wizard.
-
-1. Azure Arc-enabled Kubernetes and the Defender extension should be installed and running on your EKS clusters. A dedicated Defender for Cloud recommendation deploys the extension (and Arc if necessary):
-
- 1. From Defender for Cloud's **Recommendations** page, search for **EKS clusters should have Azure Defender's extension for Azure Arc installed**.
- 1. Select an unhealthy cluster.
-
- > [!IMPORTANT]
- > You must select the clusters one at a time.
- >
- > Don't select the clusters by their hyperlinked names: select anywhere else in the relevant row.
-
- 1. Select **Fix**.
- 1. Defender for Cloud generates a script in the language of your choice: select Bash (for Linux) or PowerShell (for Windows).
- 1. Select **Download remediation logic**.
- 1. Run the generated script on your cluster.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/generate-script-defender-extension-kubernetes.gif" alt-text="Video of how to use the Defender for Cloud recommendation to generate a script for your EKS clusters that enables the Azure Arc extension. ":::
-
-### View recommendations and alerts for your EKS clusters
-
-> [!TIP]
-> You can simulate container alerts by following the instructions in [this blog post](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-demonstrate-the-new-containers-features-in-azure-security/ba-p/1011270).
-
-To view the alerts and recommendations for your EKS clusters, use the filters on the alerts, recommendations, and inventory pages to filter by resource type **AWS EKS cluster**.
-
+For a full list of the cluster level alerts, see alerts with "K8S.NODE_" prefix in the alert type in the [reference table of alerts](alerts-reference.md#alerts-k8scluster).
## FAQ - Microsoft Defender for Kubernetes -- [Can I still get cluster protections without the Log Analytics agent?](#can-i-still-get-cluster-protections-without-the-log-analytics-agent)-- [Does AKS allow me to install custom VM extensions on my AKS nodes?](#does-aks-allow-me-to-install-custom-vm-extensions-on-my-aks-nodes)-- [If my cluster is already running an Azure Monitor for containers agent, do I need the Log Analytics agent too?](#if-my-cluster-is-already-running-an-azure-monitor-for-containers-agent-do-i-need-the-log-analytics-agent-too)-- [Does Microsoft Defender for Kubernetes support AKS with virtual machine scale set nodes?](#does-microsoft-defender-for-kubernetes-support-aks-with-virtual-machine-scale-set-nodes)-
-### Can I still get cluster protections without the Log Analytics agent?
-
-**Microsoft Defender for Kubernetes** provides protections at the cluster level. If you also deploy the Log Analytics agent of **Microsoft Defender for servers**, you'll get the threat protection for your nodes that's provided with that plan. Learn more in [Introduction to Microsoft Defender for servers](defender-for-servers-introduction.md).
-
-We recommend deploying both, for the most complete protection possible.
-
-If you choose not to install the agent on your hosts, you'll only receive a subset of the threat protection benefits and security alerts. You'll still receive alerts related to network analysis and communications with malicious servers.
-
-### Does AKS allow me to install custom VM extensions on my AKS nodes?
-
-For Defender for Cloud to monitor your AKS nodes, they must be running the Log Analytics agent.
-
-AKS is a managed service and since the Log Analytics agent is a Microsoft-managed extension, it is also supported on AKS clusters. However, if your cluster is deployed on an Azure Kubernetes Service virtual machine scale set, the Log Analytics agent isn't currently supported.
+- [What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?](#what-happens-to-subscriptions-with-microsoft-defender-for-kubernetes-or-microsoft-defender-for-containers-enabled)
+- [Is Defender for Containers a mandatory upgrade?](#is-defender-for-containers-a-mandatory-upgrade)
+- [Does the new plan reflect a price increase?](#does-the-new-plan-reflect-a-price-increase)
-### If my cluster is already running an Azure Monitor for containers agent, do I need the Log Analytics agent too?
+### What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?
-For Defender for Cloud to monitor your nodes, they must be running the Log Analytics agent.
+Subscriptions that already have one of these plans enabled can continue to benefit from it.
-If your clusters are already running the Azure Monitor for containers agent, you can install the Log Analytics agent too and the two agents can work alongside one another without any problems.
+If you haven't enabled them yet, or if you create a new subscription, these plans can no longer be enabled.
-[Learn more about the Azure Monitor for containers agent](../azure-monitor/containers/container-insights-manage-agent.md).
+### Is Defender for Containers a mandatory upgrade?
-### Does Microsoft Defender for Kubernetes support AKS with virtual machine scale set nodes?
+No. Subscriptions that have either Microsoft Defender for Kubernetes or Microsoft Defender for container registries enabled don't need to be upgraded to the new Microsoft Defender for Containers plan. However, they won't benefit from the new and improved capabilities and they'll have an upgrade icon shown alongside them in the Azure portal.
-If your cluster is deployed on an Azure Kubernetes Service virtual machine scale set, the Log Analytics agent is not currently supported.
+### Does the new plan reflect a price increase?
+No. There's no direct price increase. The new comprehensive Container security plan combines Kubernetes protection and container registry image scanning, and removes the previous dependency on the (paid) Defender for Servers plan.
## Next steps
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
description: This article lists Microsoft Defender for Cloud's security recommen
Previously updated : 01/12/2022 Last updated : 03/13/2022
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers |
-| VA | Registry scan | N/A | - | - | - | - |
-| VA | View vulnerabilities for running images | N/A | - | - | - | - |
-| Hardening | Control plane recommendations | N/A | - | - | - | - |
+| VA | Registry scan | - | - | - | - | - |
+| VA | View vulnerabilities for running images | - | - | - | - | - |
+| Hardening | Control plane recommendations | - | - | - | - | - |
| Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers | | Runtime Threat Detection | Agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers | | Runtime Threat Detection | Agent-based threat detection | EKS | Preview | X | Defender extension | Defender for Containers | | Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | EKS | Preview | X | Agentless | Free | | Discovery and Auto provisioning | Auditlog collection for agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Defender extension | N/A | N/A | X | - | - |
-| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | N/A | N/A | X | - | - |
+| Discovery and Auto provisioning | Auto provisioning of Defender extension | - | - | - | - | - |
+| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers |
-| VA | Registry scan | N/A | - | - | - | - |
-| VA | View vulnerabilities for running images | N/A | - | - | - | - |
-| Hardening | Control plane recommendations | N/A | - | - | - | - |
+| VA | Registry scan | - | - | - | - | - |
+| VA | View vulnerabilities for running images | - | - | - | - | - |
+| Hardening | Control plane recommendations | - | - | - | - | - |
| Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers | | Runtime Threat Detection | Agentless threat detection | GKE | Preview | X | Agentless | Defender for Containers | | Runtime Threat Detection | Agent-based threat detection | GKE | Preview | X | Defender extension | Defender for Containers |
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers | | VA | Registry scan | ACR, Private ACR | Preview | ✓ | Agentless | Defender for Containers | | VA | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
-| Hardening | Control plane recommendations | N/A | - | - | - | - |
+| Hardening | Control plane recommendations | - | - | - | - | - |
| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime Threat Detection | Threat detection via auditlog | Arc enabled K8s clusters | - | ✓ | Defender extension | Defender for Containers |
+| Runtime Threat Detection | Agentless threat detection | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
| Runtime Threat Detection | Agent-based threat detection | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
-| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
+| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | Arc enabled K8s clusters | Preview | X | Agentless | Free |
| Discovery and Auto provisioning | Auditlog collection for threat detection | Arc enabled K8s clusters | Preview | ✓ | Defender extension | Defender for Containers | | Discovery and Auto provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | ✓ | Agentless | Defender for Containers | | Discovery and Auto provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| [Deprecating the recommendation to use service principals to protect your subscriptions](#deprecating-the-recommendation-to-use-service-principals-to-protect-your-subscriptions) | February 2022 | | [Moving recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices](#moving-recommendation-vulnerabilities-in-container-security-configurations-should-be-remediated-from-the-secure-score-to-best-practices) | February 2022 | | [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | March 2022 |
-| [AWS recommendations to GA](#aws-recommendations-to-ga) | March 2022 |
+| [AWS and GCP recommendations to GA](#aws-and-gcp-recommendations-to-ga) | March 2022 |
| [Relocation of custom recommendations](#relocation-of-custom-recommendations) | March 2022 | | [Deprecating Microsoft Defender for IoT device recommendations](#deprecating-microsoft-defender-for-iot-device-recommendations)| March 2022 | | [Deprecating Microsoft Defender for IoT device alerts](#deprecating-microsoft-defender-for-iot-device-alerts) | March 2022 |
Learn more:
- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) - [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md)
-### AWS recommendations to GA
+### AWS and GCP recommendations to GA
**Estimated date for change:** March 2022
-There are currently AWS recommendations in the preview stage. These recommendations come from the AWS Foundational Security Best Practices standard which is assigned by default. All of the recommendations will become Generally Available (GA) in March 2022.
+There are currently AWS and GCP recommendations in the preview stage. These recommendations come from the AWS Foundational Security Best Practices and GCP default standards which are assigned by default. All of the recommendations will become Generally Available (GA) in March 2022.
When these recommendations go live, their impact will be included in the calculations of your secure score. Expect changes to your secure score.
+#### AWS recommendations
+ **To find these recommendations**: 1. Navigate to **Environment settings** > **`AWS connector`** > **Standards (preview)**.
When these recommendations go live, their impact will be included in the calcula
:::image type="content" source="media/release-notes/aws-foundational.png" alt-text="Screenshot showing the location of the AWS Foundational Security Best Practices (preview).":::
+#### GCP recommendations
+
+**To find these recommendations**:
+
+1. Navigate to **Environment settings** > **`GCP connector`** > **Standards (preview)**.
+1. Right-click **GCP Default (preview)**, and select **view assessments**.
++ ### Relocation of custom recommendations **Estimated date for change:** March 2022
-Custom recommendation are those created by a user, and have no impact on the secure score. Therefore, the custom recommendations are being relocated from the Secure score recommendations tab to the All recommendations tab.
+Custom recommendations are those created by a user, and have no impact on the secure score. Therefore, the custom recommendations are being relocated from the Secure score recommendations tab to the All recommendations tab.
When the move occurs, the custom recommendations will be found via a new "recommendation type" filter.
defender-for-iot Tutorial Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-standalone-agent-binary-installation.md
You will need to copy the module identity connection string from the DefenderIoT
``` The `connection_string.txt` will now be located in the following path location `/etc/defender_iot_micro_agent/connection_string.txt`.
- Please note that the connection string includes a key that enables direct access to the module itself, therefore includes sensitive information that should only be used and readable by root users.
+
+ **Please note that the connection string includes a key that enables direct access to the module itself, and therefore contains sensitive information that should be used and read only by root users.**
1. Restart the service using this command:
digital-twins Tutorial Command Line App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-command-line-app.md
Rerun the `CreateModels` command to try re-uploading one of the same models you
CreateModels Room ```
-As models cannot be overwritten, this command will now return a service error.
+As models cannot be overwritten, this command will now return a service error indicating that some of the model IDs you are trying to create already exist.
+
For the details on how to delete existing models, see [Manage DTDL models](how-to-manage-model.md).
-```cmd/sh
-Response 409: Service request failed.
-Status: 409 (Conflict)
-
-Content:
-{"error":{"code":"ModelAlreadyExists","message":"Could not add model dtmi:example:Room;2 as it already exists. Use Model_List API to view models that already exist. See the Swagger example.(http://aka.ms/ModelListSwSmpl)"}}
-
-Headers:
-Strict-Transport-Security: REDACTED
-Date: Wed, 20 May 2020 00:53:49 GMT
-Content-Length: 223
-Content-Type: application/json; charset=utf-8
-```
## Create digital twins
event-hubs Process Data Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/process-data-azure-stream-analytics.md
Title: Process data from Event Hubs Azure using Stream Analytics | Microsoft Docs description: This article shows you how to process data from your Azure event hub using an Azure Stream Analytics job. Previously updated : 09/15/2021 Last updated : 03/14/2022
Here are the key benefits of Azure Event Hubs and Azure Stream Analytics integra
## End-to-end flow
+> [!IMPORTANT]
+> If you aren't a member of [owner](../role-based-access-control/built-in-roles.md#owner) or [contributor](../role-based-access-control/built-in-roles.md#contributor) roles at the Azure subscription level, you must be a member of the [Stream Analytics Query Tester](../role-based-access-control/built-in-roles.md#stream-analytics-query-tester) role at the Azure subscription level to successfully complete steps in this section. This role allows you to perform testing queries without creating a stream analytics job first. For instructions on assigning a role to a user, see [Assign AD roles to users](../active-directory/roles/manage-roles-portal.md).
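
As a minimal sketch of the role assignment mentioned in the note (assuming the Az PowerShell module; the sign-in name and subscription ID are hypothetical placeholders), you might assign the role at the subscription scope like this:

```azurepowershell
# Assign the Stream Analytics Query Tester role at the subscription scope.
# Replace the sign-in name and subscription ID placeholders with your own values.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Stream Analytics Query Tester" `
    -Scope "/subscriptions/<subscription-id>"
```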
+ 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Navigate to your **Event Hubs namespace** and then navigate to the **event hub**, which has the incoming data. 1. Select **Process Data** on the event hub page.
expressroute Expressroute Troubleshooting Expressroute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md
Title: 'Azure ExpressRoute: Verify Connectivity - Troubleshooting Guide'
-description: This page provides instructions on troubleshooting and validating end-to-end connectivity of an ExpressRoute circuit.
+ Title: 'Verify Azure ExpressRoute connectivity - troubleshooting guide'
+description: This article provides instructions on troubleshooting and validating end-to-end connectivity of an ExpressRoute circuit.
-# Verifying ExpressRoute connectivity
-This article helps you verify and troubleshoot ExpressRoute connectivity. ExpressRoute extends an on-premises network into the Microsoft cloud over a private connection that is commonly facilitated by a connectivity provider. ExpressRoute connectivity traditionally involves three distinct network zones, as follows:
+# Verify ExpressRoute connectivity
-- Customer Network-- Provider Network-- Microsoft Datacenter
+This article helps you verify and troubleshoot Azure ExpressRoute connectivity. ExpressRoute extends an on-premises network into the Microsoft Cloud over a private connection that's commonly facilitated by a connectivity provider. ExpressRoute connectivity traditionally involves three distinct network zones:
-> [!NOTE]
-> In the ExpressRoute direct connectivity model (offered at 10/100 Gbps bandwidth), customers can directly connect to Microsoft Enterprise Edge (MSEE) routers' port. Therefore, in the direct connectivity model, there are only customer and Microsoft network zones.
->
+- Customer network
+- Provider network
+- Microsoft datacenter
+> [!NOTE]
+> In the ExpressRoute direct connectivity model (offered at a bandwidth of 10/100 Gbps), customers can directly connect to the port for Microsoft Enterprise Edge (MSEE) routers. The direct connectivity model includes only customer and Microsoft network zones.
-The purpose of this document is to help you identify if and where a connectivity issue exists. Thereby, to help seek support from the appropriate team to resolve an issue. If Microsoft support is needed to resolve an issue, open a support ticket with [Microsoft Support][Support].
+This article helps you identify if and where a connectivity issue exists. You can then seek support from the appropriate team to resolve the issue.
> [!IMPORTANT]
-> This document is intended to help diagnosing and fixing simple issues. It is not intended to be a replacement for Microsoft support. Open a support ticket with [Microsoft Support][Support] if you are unable to solve the problem using the guidance provided.
->
->
+> This article is intended to help you diagnose and fix simple issues. It's not intended to be a replacement for Microsoft support. If you can't solve a problem by using the guidance in this article, open a support ticket with [Microsoft Support][Support].
## Overview
-The following diagram shows the logical connectivity of a customer network to Microsoft network using ExpressRoute.
+
+The following diagram shows the logical connectivity of a customer network to the Microsoft network through ExpressRoute.
[![1]][1]
-In the preceding diagram, the numbers indicate key network points. These network points are referenced in this article at times by their associated number. Depending on the ExpressRoute connectivity model--Cloud Exchange Co-location, Point-to-Point Ethernet Connection, or Any-to-any (IPVPN)--the network points 3 and 4 may be switches (Layer 2 devices) or routers (Layer 3 devices). In the direct connectivity model, there are no network points 3 and 4; instead CEs (2) are directly connected to MSEEs via dark fiber. The key network points illustrated are as follows:
+In the preceding diagram, the numbers indicate key network points:
-1. Customer compute device (for example, a server or PC)
-2. CEs: Customer edge routers
-3. PEs (CE facing): Provider edge routers/switches that are facing customer edge routers. Referred to as PE-CEs in this document.
-4. PEs (MSEE facing): Provider edge routers/switches that are facing MSEEs. Referred to as PE-MSEEs in this document.
-5. MSEEs: Microsoft Enterprise Edge (MSEE) ExpressRoute routers
-6. Virtual Network (VNet) Gateway
-7. Compute device on the Azure VNet
+1. Customer compute device (for example, a server or PC).
+2. Customer edge routers (CEs).
+3. Provider edge routers/switches (PEs) that face customer edge routers.
+4. PEs that face Microsoft Enterprise Edge ExpressRoute routers (MSEEs). This article calls them *PE-MSEEs*.
+5. MSEEs.
+6. Virtual network gateway.
+7. Compute device on the Azure virtual network.
-If the Cloud Exchange Co-location, Point-to-Point Ethernet, or direct connectivity models are used, CEs (2) establish BGP peering with MSEEs (5).
+At times, this article references these network points by their associated number.
-If the Any-to-any (IPVPN) connectivity model is used, PE-MSEEs (4) establish BGP peering with MSEEs (5). PE-MSEEs propagate the routes received from Microsoft back to the customer network via the IPVPN service provider network.
+Depending on the ExpressRoute connectivity model, network points 3 and 4 might be switches (layer 2 devices) or routers (layer 3 devices). The ExpressRoute connectivity models are cloud exchange co-location, point-to-point Ethernet connection, or any-to-any (IPVPN).
-> [!NOTE]
->For high availability, Microsoft establishes a fully redundant parallel connectivity between MSEEs (5) and PE-MSEEs (4) pairs. A fully redundant parallel network path is also encouraged between customer network and PE-CEs pair. For more information regarding high availability, see the article [Designing for high availability with ExpressRoute][HA]
->
->
+In the direct connectivity model, there are no network points 3 and 4. Instead, CEs (2) are directly connected to MSEEs via dark fiber.
-The following are the logical steps, in troubleshooting ExpressRoute circuit:
+If the cloud exchange co-location, point-to-point Ethernet, or direct connectivity model is used, CEs (2) establish Border Gateway Protocol (BGP) peering with MSEEs (5).
-* [Verify circuit provisioning and state](#verify-circuit-provisioning-and-state)
-
-* [Validate Peering Configuration](#validate-peering-configuration)
-
-* [Validate ARP](#validate-arp)
-
-* [Validate BGP and routes on the MSEE](#validate-bgp-and-routes-on-the-msee)
-
-* [Confirm the traffic flow](#confirm-the-traffic-flow)
+If the any-to-any (IPVPN) connectivity model is used, PE-MSEEs (4) establish BGP peering with MSEEs (5). PE-MSEEs propagate the routes received from Microsoft back to the customer network via the IPVPN service provider network.
-* [Test private peering connectivity](#test-private-peering-connectivity)
+> [!NOTE]
+> For high availability, Microsoft establishes a fully redundant parallel connectivity between MSEE and PE-MSEE pairs. A fully redundant parallel network path is also encouraged between the customer network and PE/CE pairs. For more information about high availability, see the article [Designing for high availability with ExpressRoute][HA].
+The following sections represent the logical steps in troubleshooting an ExpressRoute circuit.
## Verify circuit provisioning and state
-Provisioning an ExpressRoute circuit establishes a redundant Layer 2 connections between CEs/PE-MSEEs (2)/(4) and MSEEs (5). For more information on how to create, modify, provision, and verify an ExpressRoute circuit, see the article [Create and modify an ExpressRoute circuit][CreateCircuit].
+
+Provisioning an ExpressRoute circuit establishes a redundant layer 2 connection between CEs/PE-MSEEs (2/4) and MSEEs (5). For more information on how to create, modify, provision, and verify an ExpressRoute circuit, see the article [Create and modify an ExpressRoute circuit][CreateCircuit].
>[!TIP]
->A service key uniquely identifies an ExpressRoute circuit. Should you need assistance from Microsoft or from an ExpressRoute partner to troubleshoot an ExpressRoute issue, provide the service key to readily identify the circuit.
->
->
+>A service key uniquely identifies an ExpressRoute circuit. If you need assistance from Microsoft or from an ExpressRoute partner to troubleshoot an ExpressRoute issue, provide the service key to readily identify the circuit.
### Verification via the Azure portal
-In the Azure portal, open the ExpressRoute circuit page. In the ![3][3] section of the page, the ExpressRoute essentials are listed as shown in the following screenshot:
+
+In the Azure portal, open the page for the ExpressRoute circuit. The ![3][3] section of the page lists the ExpressRoute essentials, as shown in the following screenshot:
![4][4]
-In the ExpressRoute Essentials, *Circuit status* indicates the status of the circuit on the Microsoft side. *Provider status* indicates if the circuit has been *Provisioned/Not provisioned* on the service-provider side.
+In the ExpressRoute essentials, **Circuit status** indicates the status of the circuit on the Microsoft side. **Provider status** indicates if the circuit has been provisioned or not provisioned on the service-provider side.
-For an ExpressRoute circuit to be operational, the *Circuit status* must be *Enabled* and the *Provider status* must be *Provisioned*.
+For an ExpressRoute circuit to be operational, **Circuit status** must be **Enabled**, and **Provider status** must be **Provisioned**.
> [!NOTE]
-> After configuring an ExpressRoute circuit, if the *Circuit status* is stuck in not enabled status, contact [Microsoft Support][Support]. On the other hand, if the *Provider status* is stuck in not provisioned status, contact your service provider.
->
->
+> After you configure an ExpressRoute circuit, if **Circuit status** is stuck in a **Not enabled** status, contact [Microsoft Support][Support]. If **Provider status** is stuck in a **Not provisioned** status, contact your service provider.
### Verification via PowerShell
-To list all the ExpressRoute circuits in a Resource Group, use the following command:
+
+To list all the ExpressRoute circuits in a resource group, use the following command:
```azurepowershell Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" ``` >[!TIP]
->If you are looking for the name of a resource group, you can get it by listing all the resource groups in your subscription, using the command *Get-AzResourceGroup*
->
+>If you're looking for the name of a resource group, you can get it by using the `Get-AzResourceGroup` command to list all the resource groups in your subscription.
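
For example, the following one-liner lists the resource group names in the current subscription (assuming only that the Az PowerShell module is installed and you're signed in):

```azurepowershell
# List all resource groups in the current subscription.
Get-AzResourceGroup | Select-Object ResourceGroupName, Location
```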
-
-To select a particular ExpressRoute circuit in a Resource Group, use the following command:
+To select a particular ExpressRoute circuit in a resource group, use the following command:
```azurepowershell Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" -Name "Test-ER-Ckt" ```
-A sample response is:
+Here's an example response:
```output Name : Test-ER-Ckt
Peerings : []
Authorizations : [] ```
-To confirm if an ExpressRoute circuit is operational, pay particular attention to the following fields:
+To confirm that an ExpressRoute circuit is operational, pay particular attention to the following fields:
```output CircuitProvisioningState : Enabled
ServiceProviderProvisioningState : Provisioned
``` > [!NOTE]
-> After configuring an ExpressRoute circuit, if the *Circuit status* is stuck in not enabled status, contact [Microsoft Support][Support]. On the other hand, if the *Provider status* is stuck in not provisioned status, contact your service provider.
->
->
+> After you configure an ExpressRoute circuit, if **Circuit status** is stuck in a **Not enabled** status, contact [Microsoft Support][Support]. If **Provider status** is stuck in **Not provisioned** status, contact your service provider.
+
+## Validate peering configuration
+
+After the service provider has completed provisioning the ExpressRoute circuit, multiple routing configurations based on external BGP (eBGP) can be created over the ExpressRoute circuit between CEs/MSEE-PEs (2/4) and MSEEs (5). Each ExpressRoute circuit can have one or both of the following:
-## Validate Peering Configuration
-After the service provider has completed the provisioning the ExpressRoute circuit, multiple eBGP based routing configurations can be created over the ExpressRoute circuit between CEs/MSEE-PEs (2)/(4) and MSEEs (5). Each ExpressRoute circuit can have: Azure private peering (traffic to private virtual networks in Azure), and/or Microsoft peering (traffic to public endpoints of PaaS and SaaS). For more information on how to create and modify routing configuration, see the article [Create and modify routing for an ExpressRoute circuit][CreatePeering].
+- Azure private peering: traffic to private virtual networks in Azure
+- Microsoft peering: traffic to public endpoints of platform as a service (PaaS) and software as a service (SaaS)
+
+For more information on how to create and modify routing configuration, see the article [Create and modify routing for an ExpressRoute circuit][CreatePeering].
### Verification via the Azure portal > [!NOTE]
-> In IPVPN connectivity model, service providers handle the responsibility of configuring the peerings (layer 3 services). In such a model, after the service provider has configured a peering and if the peering is blank in the portal, try refreshing the circuit configuration using the refresh button on the portal. This operation will pull the current routing configuration from your circuit.
->
+> In an IPVPN connectivity model, service providers handle the responsibility of configuring the peerings (layer 3 services). In such a model, after the service provider has configured a peering and if the peering is blank in the portal, try refreshing the circuit configuration by using the refresh button on the portal. This operation will pull the current routing configuration from your circuit.
-In the Azure portal, status of an ExpressRoute circuit peering can be checked under the ExpressRoute circuit page. In the ![3][3] section of the page, the ExpressRoute peerings would be listed as shown in the following screenshot:
+In the Azure portal, you can check the status of an ExpressRoute circuit on the page for that circuit. The ![3][3] section of the page lists the ExpressRoute peerings, as shown in the following screenshot:
![5][5]
-In the preceding example, as noted Azure private peering is provisioned, but Azure public and Microsoft peerings aren't provisioned. A successfully provisioned peering context would also have the primary and secondary point-to-point subnets listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs. For the peerings that are provisioned, the listing also indicates who last modified the configuration.
+In the preceding example, Azure private peering is provisioned, but Azure public and Microsoft peerings aren't provisioned. A successfully provisioned peering context would also have the primary and secondary point-to-point subnets listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs. For the peerings that are provisioned, the listing also indicates who last modified the configuration.
> [!NOTE]
-> If enabling a peering fails, check if the primary and secondary subnets assigned match the configuration on the linked CE/PE-MSEE. Also check if the correct *VlanId*, *AzureASN*, and *PeerASN* are used on MSEEs and if these values maps to the ones used on the linked CE/PE-MSEE. If MD5 hashing is chosen, the shared key should be same on MSEE and PE-MSEE/CE pair. Previously configured shared key would not be displayed for security reasons. Should you need to change any of these configuration on an MSEE router, refer to [Create and modify routing for an ExpressRoute circuit][CreatePeering].
+> If enabling a peering fails, check if the assigned primary and secondary subnets match the configuration on the linked CE/PE-MSEE. Also check if the correct `VlanId`, `AzureASN`, and `PeerASN` values are used on MSEEs, and if these values map to the ones used on the linked CE/PE-MSEE.
>-
-> [!NOTE]
-> On a /30 subnet assigned for interface, Microsoft will pick the second usable IP address of the subnet for the MSEE interface. Therefore, ensure that the first usable IP address of the subnet has been assigned on the peered CE/PE-MSEE.
+> If MD5 hashing is chosen, the shared key should be the same on MSEE and CE/PE-MSEE pairs. Previously configured shared keys would not be displayed for security reasons.
>
+> If you need to change any of these configurations on an MSEE router, see [Create and modify routing for an ExpressRoute circuit][CreatePeering].
+> [!NOTE]
+> On a /30 subnet assigned for interface, Microsoft will choose the second usable IP address of the subnet for the MSEE interface. So, ensure that the first usable IP address of the subnet has been assigned on the peered CE/PE-MSEE.
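
For example, if the assigned primary subnet were 192.168.15.16/30 (a hypothetical value), you would assign 192.168.15.17, the first usable address, to the CE/PE-MSEE interface, and Microsoft would use 192.168.15.18, the second usable address, for the MSEE interface.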
### Verification via PowerShell
-To get the Azure private peering configuration details, use the following commands:
+
+To get the configuration details for Azure private peering, use the following commands:
```azurepowershell $ckt = Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" -Name "Test-ER-Ckt" Get-AzExpressRouteCircuitPeeringConfig -Name "AzurePrivatePeering" -ExpressRouteCircuit $ckt ```
-A sample response, for a successfully configured private peering, is:
+Here's an example response for a successfully configured private peering:
```output Name : AzurePrivatePeering
MicrosoftPeeringConfig : null
ProvisioningState : Succeeded ```
- A successfully enabled peering context would have the primary and secondary address prefixes listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs.
+A successfully enabled peering context would have the primary and secondary address prefixes listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs.
-To get the Azure public peering configuration details, use the following commands:
+To get the configuration details for Azure public peering, use the following commands:
```azurepowershell $ckt = Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" -Name "Test-ER-Ckt" Get-AzExpressRouteCircuitPeeringConfig -Name "AzurePublicPeering" -ExpressRouteCircuit $ckt ```
-To get the Microsoft peering configuration details, use the following commands:
+To get the configuration details for Microsoft peering, use the following commands:
```azurepowershell $ckt = Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" -Name "Test-ER-Ckt" Get-AzExpressRouteCircuitPeeringConfig -Name "MicrosoftPeering" -ExpressRouteCircuit $ckt ```
-If a peering isn't configured, there would be an error message. A sample response, when the stated peering (Azure Public peering in this example) isn't configured within the circuit:
+If a peering isn't configured, you'll get an error message. Here's an example response when the stated peering (Azure public peering in this case) isn't configured within the circuit:
```azurepowershell Get-AzExpressRouteCircuitPeeringConfig : Sequence contains no matching element
At line:1 char:1
``` > [!NOTE]
-> If enabling a peering fails, check if the primary and secondary subnets assigned match the configuration on the linked CE/PE-MSEE. Also check if the correct *VlanId*, *AzureASN*, and *PeerASN* are used on MSEEs and if these values maps to the ones used on the linked CE/PE-MSEE. If MD5 hashing is chosen, the shared key should be same on MSEE and PE-MSEE/CE pair. Previously configured shared key would not be displayed for security reasons. Should you need to change any of these configuration on an MSEE router, refer to [Create and modify routing for an ExpressRoute circuit][CreatePeering].
->
+> If enabling a peering fails, check if the assigned primary and secondary subnets match the configuration on the linked CE/PE-MSEE. Also check if the correct `VlanId`, `AzureASN`, and `PeerASN` values are used on MSEEs, and if these values map to the ones used on the linked CE/PE-MSEE.
+>
+> If MD5 hashing is chosen, the shared key should be the same on MSEE and CE/PE-MSEE pairs. Previously configured shared keys would not be displayed for security reasons.
>
+> If you need to change any of these configurations on an MSEE router, see [Create and modify routing for an ExpressRoute circuit][CreatePeering].
> [!NOTE]
-> On a /30 subnet assigned for interface, Microsoft will pick the second usable IP address of the subnet for the MSEE interface. Therefore, ensure that the first usable IP address of the subnet has been assigned on the peered CE/PE-MSEE.
->
+> On a /30 subnet assigned for interface, Microsoft will pick the second usable IP address of the subnet for the MSEE interface. So, ensure that the first usable IP address of the subnet has been assigned on the peered CE/PE-MSEE.
## Validate ARP
-The ARP table provides a mapping of the IP address and MAC address for a particular peering. The ARP table for an ExpressRoute circuit peering provides the following information for each interface (primary and secondary):
-* Mapping of on-premises router interface ip address to the MAC address
-* Mapping of ExpressRoute router interface ip address to the MAC address
-* Age of the mapping
-ARP tables can help validate layer 2 configuration and troubleshooting basic layer 2 connectivity issues.
+The Address Resolution Protocol (ARP) table provides a mapping of the IP address and MAC address for a particular peering. The ARP table for an ExpressRoute circuit peering provides the following information for each interface (primary and secondary):
+* Mapping of the IP address for the on-premises router interface to the MAC address
+* Mapping of the IP address for the ExpressRoute router interface to the MAC address
+* Age of the mapping
-See [Getting ARP tables in the Resource Manager deployment model][ARP] document, for how to view the ARP table of an ExpressRoute peering, and for how to use the information to troubleshoot layer 2 connectivity issue.
+ARP tables can help validate layer 2 configuration and troubleshoot basic layer 2 connectivity issues.
+To learn how to view the ARP table of an ExpressRoute peering and how to use the information to troubleshoot layer 2 connectivity issues, see [Getting ARP tables in the Resource Manager deployment model][ARP].
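
As a hedged sketch, the ARP table for the primary path of Azure private peering can be retrieved with the following command (reusing the hypothetical resource group and circuit names from the earlier examples):

```azurepowershell
# Get the ARP table on the primary path for Azure private peering.
Get-AzExpressRouteCircuitARPTable -ResourceGroupName "Test-ER-RG" -ExpressRouteCircuitName "Test-ER-Ckt" -PeeringType AzurePrivatePeering -DevicePath Primary
```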
## Validate BGP and routes on the MSEE
-To get the routing table from MSEE on the *Primary* path for the *Private* routing context, use the following command:
+To get the routing table from MSEE on the primary path for the private routing context, use the following command:
```azurepowershell Get-AzExpressRouteCircuitRouteTable -DevicePath Primary -ExpressRouteCircuitName ******* -PeeringType AzurePrivatePeering -ResourceGroupName **** ```
-An example response is:
+Here's an example response:
```output Network : 10.1.0.0/16
Path : 123##
``` > [!NOTE]
-> If the state of a eBGP peering between an MSEE and a CE/PE-MSEE is in Active or Idle, check if the primary and secondary peer subnets assigned match the configuration on the linked CE/PE-MSEE. Also check if the correct *VlanId*, *AzureAsn*, and *PeerAsn* are used on MSEEs and if these values maps to the ones used on the linked PE-MSEE/CE. If MD5 hashing is chosen, the shared key should be same on MSEE and CE/PE-MSEE pair. Should you need to change any of these configuration on an MSEE router, refer to [Create and modify routing for an ExpressRoute circuit][CreatePeering].
->
-
+> If the state of an eBGP peering between an MSEE and a CE/PE-MSEE is **Active** or **Idle**, check if the assigned primary and secondary peer subnets match the configuration on the linked CE/PE-MSEE. Also check if the correct `VlanId`, `AzureASN`, and `PeerASN` values are used on MSEEs, and if these values map to the ones used on the linked CE/PE-MSEE. If MD5 hashing is chosen, the shared key should be the same on MSEE and CE/PE-MSEE pairs. If you need to change any of these configurations on an MSEE router, see [Create and modify routing for an ExpressRoute circuit][CreatePeering].
> [!NOTE]
-> If certain destinations are not reachable over a peering, check the route table of the MSEEs for the corresponding peering context. If a matching prefix (could be NATed IP) is present in the routing table, then check if there are firewalls/NSG/ACLs on the path that are blocking the traffic.
->
-
+> If certain destinations are not reachable over a peering, check the route table of the MSEEs for the corresponding peering context. If a matching prefix (could be NATed IP) is present in the routing table, then check if any firewalls, network security groups, or access control lists (ACLs) on the path are blocking the traffic.
The following example shows the response of the command for a peering that doesn't exist:
StatusCode: 400
``` ## Confirm the traffic flow
-To get the combined primary and secondary path traffic statistics--bytes in and out--of a peering context, use the following command:
+
+To get the combined primary and secondary path traffic statistics (bytes in and out) of a peering context, use the following command:
```azurepowershell Get-AzExpressRouteCircuitStats -ResourceGroupName $RG -ExpressRouteCircuitName $CircuitName -PeeringType 'AzurePrivatePeering' ```
-A sample output of the command is:
+Here's an example output of the command:
```output PrimaryBytesIn PrimaryBytesOut SecondaryBytesIn SecondaryBytesOut
PrimaryBytesIn PrimaryBytesOut SecondaryBytesIn SecondaryBytesOut
240780020 239863857 240565035 239628474 ```
-A sample output of the command for a non-existent peering is:
+Here's an example output of the command for a nonexistent peering:
```azurepowershell Get-AzExpressRouteCircuitRouteTable : The BGP Peering AzurePublicPeering with Service Key ********************* is not found.
StatusCode: 400
## Test private peering connectivity
-Test your private peering connectivity by **counting** packets arriving and leaving the Microsoft edge of your ExpressRoute circuit, on the Microsoft Enterprise Edge (MSEE) devices. This diagnostic tool works by applying an Access Control List (ACL) to the MSEE to count the number of packets that hit specific ACL rules. Using this tool will allow you to confirm connectivity by answering the questions such as:
+Test your private peering connectivity by counting packets arriving at and leaving the Microsoft edge of your ExpressRoute circuit on the MSEE devices. This diagnostic tool works by applying an ACL to the MSEE to count the number of packets that hit specific ACL rules. Using this tool will allow you to confirm connectivity by answering questions such as:
* Are my packets getting to Azure?
-* Are they getting back to on-prem?
+* Are they getting back to on-premises?
+
+### Run a test
-### Run test
-1. To access this diagnostic tool, select **Diagnose and solve problems** from your ExpressRoute circuit in the Azure portal.
+1. To access the diagnostic tool, select **Diagnose and solve problems** from your ExpressRoute circuit in the Azure portal.
- :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/diagnose-problems.png" alt-text="Screenshot of diagnose and solve problem page from ExpressRoute circuit.":::
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/diagnose-problems.png" alt-text="Screenshot of the button for diagnosing and solving problems from the ExpressRoute circuit.":::
1. Select the **Connectivity issues** card under **Common problems**.
- :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/connectivity-issues.png" alt-text="Screenshot of connectivity issues option.":::
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/connectivity-issues.png" alt-text="Screenshot of the option for connectivity issues.":::
-1. In the dropdown for *Tell us more about the problem you're experiencing*, select **Connectivity to Azure Private, Azure Public, or Dynamics 365 services.**
+1. In the **Tell us more about the problem you are experiencing** dropdown list, select **Connectivity to Azure Private, Azure Public or Dynamics 365 Services**.
- :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/tell-us-more.png" alt-text="Screenshot of drop-down option for problem user is experiencing.":::
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/tell-us-more.png" alt-text="Screenshot of the dropdown option for the problem that the user is experiencing.":::
1. Scroll down to the **Test your private peering connectivity** section and expand it.
- :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/test-private-peering.png" alt-text="Screenshot of troubleshooting connectivity issues options.":::
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/test-private-peering.png" alt-text="Screenshot of the options for troubleshooting connectivity issues, with the option for private peering highlighted.":::
+
+1. Run the [PsPing](/sysinternals/downloads/psping) test from your on-premises IP address to your Azure IP address, and keep it running during the connectivity test. (A sample command appears after these steps.)
-1. Execute the [PsPing](/sysinternals/downloads/psping) test from your on-premises IP address to your Azure IP address and keep it running during the connectivity test.
+1. Fill out the fields of the form. Be sure to enter the same on-premises and Azure IP addresses that you used in step 5. Then select **Submit** and wait for your results to load.
-1. Fill out the fields of the form, making sure to enter the same on-premises and Azure IP addresses used in Step 5. Then select **Submit** and then wait for your results to load. Once your results are ready, review the information for interpreting them below.
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/form.png" alt-text="Screenshot of the form for debugging an A C L.":::
- :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/form.png" alt-text="Screenshot of debug ACL form.":::
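
As a minimal illustration of the PsPing step above, assuming a hypothetical Azure destination of 10.0.0.4 listening on TCP port 3389, you might run the following from an on-premises host and leave it running for the duration of the test:

```powershell
# Continuous TCP "ping" against a hypothetical Azure private IP and port.
# Assumes psping.exe is on the path or in the current directory; stop it only after the test finishes.
psping -t 10.0.0.4:3389
```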
+### Interpret results
-### Interpreting results
-Your test results for each MSEE device will look like the example below. You'll have two sets of results for the primary and secondary MSEE devices. Review the number of matches in and out and use the following scenarios to interpret the results:
-* **You see packet matches sent and received on both MSEEs:** This indicates healthy traffic inbound to and outbound from the MSEE on your circuit. If loss is occurring either on-premises or in Azure, it's happening downstream from the MSEE.
-* **If testing PsPing from on-premises to Azure *(received)* results show matches, but *sent* results show NO matches:** This indicates that traffic is getting inbound to Azure, but isn't returning to on-prem. Check for return-path routing issues (for example, are you advertising the appropriate prefixes to Azure? Is there a UDR overriding prefixes?).
-* **If testing PsPing from Azure to on-premises *(sent)* results show NO matches, but *(received)* results show matches:** This indicates that traffic is getting to on-premises, but isn't getting back. You should work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit.
-* **One MSEE shows NO matches, while the other shows good matches:** This indicates that one MSEE isn't receiving or passing any traffic. It could be offline (for example, BGP/ARP down).
+When your results are ready, you'll have two sets of them for the primary and secondary MSEE devices. Review the number of matches in and out, and use the following scenarios to interpret the results:
+
+* **You see packet matches sent and received on both MSEEs**: This result indicates healthy traffic inbound to and outbound from the MSEEs on your circuit. If loss is occurring either on-premises or in Azure, it's happening downstream from the MSEEs.
+* **If you're testing PsPing from on-premises to Azure, received results show matches, but sent results show no matches**: This result indicates that traffic is coming in to Azure but isn't returning to on-premises. Check for return-path routing issues. For example, are you advertising the appropriate prefixes to Azure? Is a user-defined route (UDR) overriding prefixes?
+* **If you're testing PsPing from Azure to on-premises, sent results show no matches, but received results show matches**: This result indicates that traffic is coming in to on-premises but isn't returning to Azure. Work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit.
+* **One MSEE shows no matches, but the other shows good matches**: This result indicates that one MSEE isn't receiving or passing any traffic. It might be offline (for example, BGP/ARP is down).
+
+Your test results for each MSEE device will look like the following example:
-#### Example
```
src 10.0.0.0 dst 20.0.0.0 dstport 3389 (received): 120 matches
src 20.0.0.0 srcport 3389 dst 10.0.0.0 (sent): 120 matches
```
+
This test result has the following properties:
-* IP Port: 3389
-* On-prem IP Address CIDR: 10.0.0.0
-* Azure IP Address CIDR: 20.0.0.0
+* IP port: 3389
+* On-premises IP address CIDR: 10.0.0.0
+* Azure IP address CIDR: 20.0.0.0
-## Verify virtual network gateway availability
+## Verify availability of the virtual network gateway
-The ExpressRoute virtual network gateway facilitates the management and control plane connectivity to private link services and private IPs deployed to an Azure virtual network. The virtual network gateway infrastructure is managed by Microsoft and sometimes undergoes maintenance. During a maintenance period, performance of the virtual network gateway may be reduced. You can use the *Diagnose and Solve* experience within the ExpressRoute Circuit page to troubleshoot connectivity issues to the virtual network and reactively detect if recent maintenance events reduced the virtual network gateway capacity.
+The ExpressRoute virtual network gateway facilitates the management and control plane connectivity to private link services and private IPs deployed to an Azure virtual network. The virtual network gateway infrastructure is managed by Microsoft and sometimes undergoes maintenance.
-1. To access this diagnostic tool, select **Diagnose and solve problems** from your ExpressRoute circuit in the Azure portal.
+During a maintenance period, performance of the virtual network gateway might be reduced. To troubleshoot connectivity issues to the virtual network and reactively detect if recent maintenance events reduced capacity for the virtual network gateway:
- :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/diagnose-problems.png" alt-text="Screenshot of selecting the diagnose and solve problem page from ExpressRoute circuit.":::
+1. Select **Diagnose and solve problems** from your ExpressRoute circuit in the Azure portal.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/diagnose-problems.png" alt-text="Screenshot of the button for diagnosing and solving problem from an ExpressRoute circuit.":::
1. Select the **Performance Issues** option.
- :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/performance-issues.png" alt-text="Screenshot of selecting the performance issue option.":::
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/performance-issues.png" alt-text="Screenshot of selecting the option for performance issues.":::
-1. Wait for the diagnostics to run and interpret the results:
+1. Wait for the diagnostics to run and interpret the results.
:::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/gateway-result.png" alt-text="Screenshot of the diagnostic results.":::
- Review if your virtual network gateway recently underwent maintenance. If maintenance occurred during a period when you experienced packet loss or latency, it's possible that the reduced capacity of the virtual network gateway contributed to connectivity issues you're experiencing with the target virtual network. Follow the recommended steps and also consider upgrading the [virtual network gateway SKU](expressroute-about-virtual-network-gateways.md#gwsku) to support a higher network throughput and avoid connectivity issues during future maintenance events.
+ If maintenance on your virtual network gateway occurred during a period when you experienced packet loss or latency, it's possible that the reduced capacity of the gateway contributed to connectivity issues you're experiencing with the target virtual network. Follow the recommended steps. To support a higher network throughput and avoid connectivity issues during future maintenance events, consider upgrading the [virtual network gateway SKU](expressroute-about-virtual-network-gateways.md#gwsku).
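
If you prefer to check the current gateway SKU from Azure PowerShell before deciding on an upgrade, here's a minimal sketch; the gateway and resource group names are hypothetical placeholders:

```azurepowershell
# Inspect the SKU of the ExpressRoute virtual network gateway.
$gw = Get-AzVirtualNetworkGateway -Name "er-gateway" -ResourceGroupName "Test-ER-RG"
$gw.Sku
```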
+
+## Next steps
-## Next Steps
For more information or help, check out the following links: - [Microsoft Support][Support]
For more information or help, check out the following links:
- [Create and modify routing for an ExpressRoute circuit][CreatePeering] <!--Image References-->
-[1]: ./media/expressroute-troubleshooting-expressroute-overview/expressroute-logical-diagram.png "Logical Express Route Connectivity"
+[1]: ./media/expressroute-troubleshooting-expressroute-overview/expressroute-logical-diagram.png "Diagram that shows logical ExpressRoute connectivity and connections between a customer network, a provider network, and a Microsoft datacenter."
[2]: ./media/expressroute-troubleshooting-expressroute-overview/portal-all-resources.png "All resources icon" [3]: ./media/expressroute-troubleshooting-expressroute-overview/portal-overview.png "Overview icon"
-[4]: ./media/expressroute-troubleshooting-expressroute-overview/portal-circuit-status.png "ExpressRoute Essentials sample screenshot"
-[5]: ./media/expressroute-troubleshooting-expressroute-overview/portal-private-peering.png "ExpressRoute Essentials sample screenshot"
+[4]: ./media/expressroute-troubleshooting-expressroute-overview/portal-circuit-status.png "Screenshot that shows an example of ExpressRoute essentials listed in the Azure portal."
+[5]: ./media/expressroute-troubleshooting-expressroute-overview/portal-private-peering.png "Screenshot that shows an example ExpressRoute peerings listed in the Azure portal."
<!--Link References--> [Support]: https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade
firewall Remote Work Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/remote-work-support.md
Azure Firewall is a managed, cloud-based network security service that protects
Work-from-home policies require many IT organizations to address fundamental changes in capacity, network, security, and governance. Employees aren't protected by the layered security policies associated with on-premises services while working from home. Virtual Desktop Infrastructure (VDI) deployments on Azure can help organizations rapidly respond to this changing environment. However, you need a way to protect inbound/outbound Internet access to and from these VDI deployments. You can use Azure Firewall [DNAT rules](rule-processing.md) along with its [threat intelligence](threat-intel.md) based filtering capabilities to protect your VDI deployments.
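
As an illustrative sketch only (not the article's own example), the following Azure PowerShell commands create a DNAT rule that forwards RDP traffic arriving at a firewall public IP to a VDI session host. Every name and IP address below is a hypothetical placeholder, and the attach step assumes the firewall object's `AddNatRuleCollection` method.

```azurepowershell
# DNAT rule: translate RDP traffic arriving at the firewall's public IP (placeholder) to a VDI host's private IP.
$natRule = New-AzFirewallNatRule -Name "rdp-to-vdi" -Protocol "TCP" `
    -SourceAddress "*" -DestinationAddress "203.0.113.10" -DestinationPort "3389" `
    -TranslatedAddress "10.10.1.4" -TranslatedPort "3389"
$natCollection = New-AzFirewallNatRuleCollection -Name "vdi-dnat" -Priority 200 -ActionType "Dnat" -Rule $natRule

# Attach the collection to an existing firewall and apply the change.
$azFw = Get-AzFirewall -Name "fw-vdi" -ResourceGroupName "rg-vdi"
$azFw.AddNatRuleCollection($natCollection)
Set-AzFirewall -AzureFirewall $azFw
```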
-## Azure Windows Virtual Desktop support
+## Azure Virtual Desktop support
-Windows Virtual Desktop is a comprehensive desktop and app virtualization service running in Azure. It's the only virtual desktop infrastructure (VDI) that delivers simplified management, multi-session Windows 10, optimizations for Microsoft 365 apps for enterprise, and support for Remote Desktop Services (RDS) environments. You can deploy and scale your Windows desktops and apps on Azure in minutes, and get built-in security and compliance features. Windows Virtual Desktop doesn't require you to open any inbound access to your virtual network. However, you must allow a set of outbound network connections for the Windows Virtual Desktop virtual machines that run in your virtual network. For more information, see [Use Azure Firewall to protect Window Virtual Desktop deployments](protect-azure-virtual-desktop.md).
+Azure Virtual Desktop is a comprehensive desktop and app virtualization service running in Azure. It's the only virtual desktop infrastructure (VDI) that delivers simplified management, multi-session Windows 10/11, optimizations for Microsoft 365 apps for enterprise, and support for Remote Desktop Services (RDS) environments. You can deploy and scale your Windows desktops and apps on Azure in minutes, and get built-in security and compliance features. Azure Virtual Desktop doesn't require you to open any inbound access to your virtual network. However, you must allow a set of outbound network connections for the Azure Virtual Desktop virtual machines that run in your virtual network. For more information, see [Use Azure Firewall to protect Windows Virtual Desktop deployments](protect-azure-virtual-desktop.md).
## Next steps
-Learn more about [Windows Virtual Desktop](../virtual-desktop/index.yml).
+Learn more about [Azure Virtual Desktop](../virtual-desktop/index.yml).
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 03/14/2022
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **CIS Microsoft Azure Foundations Benchmark v1.1.0** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[CIS Microsoft Azure Foundations Benchmark 1.1.0 blueprint sample](../../blueprints/samples/cis-azure-1-1-0.md).
- > [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions. > These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 03/14/2022
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **CMMC Level 3** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[CMMC Level 3 blueprint sample](../../blueprints/samples/cmmc-l3.md).
- > [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions. > These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 03/14/2022
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **CIS Microsoft Azure Foundations Benchmark v1.1.0** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[CIS Microsoft Azure Foundations Benchmark 1.1.0 blueprint sample](../../blueprints/samples/cis-azure-1-1-0.md).
- > [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions. > These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 03/14/2022
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **CMMC Level 3** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[CMMC Level 3 blueprint sample](../../blueprints/samples/cmmc-l3.md).
- > [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions. > These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 03/14/2022
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **HITRUST/HIPAA** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[HIPAA HITRUST 9.2 blueprint sample](../../blueprints/samples/hipaa-hitrust-9-2.md).
- > [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions. > These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 12/27/2021 Last updated : 03/10/2022 # Azure HDInsight release notes
This article provides information about the **most recent** Azure HDInsight rele
Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe to release notes, watch releases on [this GitHub repository](https://github.com/hdinsight/release-notes/releases).
-## Release date: 12/27/2021
+## Release date: 03/10/2022
This release applies to HDInsight 4.0. The HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see the changes below, wait for the release to go live in your region over the next several days.
-The OS versions for this release are:
-- HDInsight 4.0: Ubuntu 18.04.5 LTS
+The OS versions for this release are:
+- HDInsight 4.0: Ubuntu 18.04.5
-HDInsight 4.0 image has been updated to mitigate Log4j vulnerability as described in [Microsoft's Response to CVE-2021-44228 Apache Log4j 2.](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/)
+## Spark 3.1 is now generally available
-> [!Note]
-> * Any HDI 4.0 clusters created post 27 Dec 2021 00:00 UTC are created with an updated version of the image which mitigates the log4j vulnerabilities. Hence, customers need not patch/reboot these clusters.
-> * For new HDInsight 4.0 clusters created between 16 Dec 2021 at 01:15 UTC and 27 Dec 2021 00:00 UTC, HDInsight 3.6 or in pinned subscriptions after 16 Dec 2021 the patch is auto applied within the hour in which the cluster is created, however customers must then reboot their nodes for the patching to complete (except for Kafka Management nodes, which are automatically rebooted).
+Spark 3.1 is now Generally Available with the HDInsight 4.0 release. This release includes:
+
+* Adaptive Query Execution
+* Convert Sort Merge Join to Broadcast Hash Join
+* Spark Catalyst Optimizer
+* Dynamic Partition Pruning
+
+Customers can create new Spark 3.1 clusters, but not Spark 3.0 (preview) clusters. A short configuration sketch follows this list.
+
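As a quick illustration of the features listed above, here is a minimal PySpark sketch that turns on Adaptive Query Execution and Dynamic Partition Pruning and prints the resulting settings. The configuration keys are standard Spark settings; the application name is an assumption for the example, and whether the flags are already enabled depends on your cluster defaults.

```python
# Minimal PySpark sketch: enable and inspect two Spark 3.1 optimizer features.
# The app name is illustrative; cluster defaults may already enable these flags.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-3-1-feature-check").getOrCreate()

# Adaptive Query Execution re-optimizes query plans at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")

# Dynamic Partition Pruning skips partitions that can't match join keys.
spark.conf.set("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true")

print(spark.conf.get("spark.sql.adaptive.enabled"))
print(spark.conf.get("spark.sql.optimizer.dynamicPartitionPruning.enabled"))
```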
+For more details, see [Spark 3.1 is now Generally Available on HDInsight](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/spark-3-1-is-now-generally-available-on-hdinsight/ba-p/3253679) on the Microsoft Tech Community blog.
+
+For a complete list of improvements, see the [Apache Spark 3.1 release notes.](https://spark.apache.org/releases/spark-release-3-1-2.html)
+
+For more details on migration, see the [migration guide.](https://spark.apache.org/docs/latest/migration-guide.html)
+
+## Kafka 2.4 is now generally available
+
+Kafka 2.4.1 is now Generally Available. For more information, see the [Kafka 2.4.1 release notes](http://kafka.apache.org/24/documentation.html).
+Other features include MirrorMaker 2 availability, the new metric category AtMinIsr topic partition, improved broker start-up time by lazy on-demand mmap of index files, and more consumer metrics to observe user poll behavior.
+
+## Map Datatype in HWC is now supported in HDInsight 4.0
+
+This release includes Map datatype support for HWC 1.0 (Spark 2.4) via the spark-shell application and all other Spark clients that HWC supports. The following improvements are included, as with any other data type (a minimal sketch follows this list):
+
+A user can
+* Create a Hive table with any column(s) containing Map datatype, insert data into it and read the results from it.
+* Create an Apache Spark dataframe with Map Type and do batch/stream reads and writes.
+
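The second bullet above can be illustrated with a minimal PySpark sketch. The session, schema, and sample values below are assumptions for the example and are not taken from this release note; reading and writing the same dataframe through HWC uses the usual HWC client APIs.

```python
# Minimal PySpark sketch: build a dataframe that contains a Map column.
# The schema and sample row are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, MapType

spark = SparkSession.builder.appName("map-type-example").getOrCreate()

schema = StructType([
    StructField("id", StringType()),
    StructField("attributes", MapType(StringType(), StringType())),
])

df = spark.createDataFrame(
    [("device-1", {"color": "red", "size": "small"})],
    schema=schema,
)

df.show(truncate=False)  # the Map column can take part in batch or stream reads and writes
```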
+### New regions
+
+HDInsight has now expanded its geographical presence to two new regions: China East 3 and China North 3.
+
+### OSS backport changes
+
+OSS backports are included in Hive, including HWC 1.0 (Spark 2.4), which supports the Map data type.
+
+### OSS backported Apache JIRAs for this release
+
+| Impacted Feature | Apache JIRA |
+||--|
+| Metastore direct sql queries with IN/(NOT IN) should be split based on max parameters allowed by SQL DB | [HIVE-25659](https://issues.apache.org/jira/browse/HIVE-25659) |
+| Upgrade log4j 2.16.0 to 2.17.0 | [HIVE-25825](https://issues.apache.org/jira/browse/HIVE-25825) |
+| Update Flatbuffer version | [HIVE-22827](https://issues.apache.org/jira/browse/HIVE-22827) |
+| Support Map data-type natively in Arrow format | [HIVE-25553](https://issues.apache.org/jira/browse/HIVE-25553) |
+| LLAP external client - Handle nested values when the parent struct is null | [HIVE-25243](https://issues.apache.org/jira/browse/HIVE-25243) |
+| Upgrade arrow version to 0.11.0 | [HIVE-23987](https://issues.apache.org/jira/browse/HIVE-23987) |
+
+## Deprecation notices
+### Azure Virtual Machine Scale Sets on HDInsight
+
+HDInsight will no longer use Azure Virtual Machine Scale Sets to provision the clusters, and no breaking change is expected. Existing HDInsight clusters on virtual machine scale sets aren't affected, and any new clusters on the latest images will no longer use Virtual Machine Scale Sets.
+
+### Scaling of Azure HDInsight HBase workloads will now be supported only using manual scale
+
+Starting from March 01, 2022, HDInsight will only support manual scale for HBase; there's no impact on running clusters. New HBase clusters won't be able to enable schedule-based autoscaling. For more information on how to manually scale your HBase cluster, refer to our documentation on [Manually scaling Azure HDInsight clusters](https://docs.microsoft.com/azure/hdinsight/hdinsight-scaling-best-practices)
+
+## HDInsight 3.6 end of support extension
+
+HDInsight 3.6 end of support is extended until September 30, 2022.
+
+Starting from September 30, 2022, customers won't be able to create new HDInsight 3.6 clusters. Existing clusters will run as is without support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
+
+Customers who are on Azure HDInsight 3.6 clusters will continue to get [Basic support](https://docs.microsoft.com/azure/hdinsight/hdinsight-component-versioning#support-options-for-hdinsight-versions) until September 30, 2022. After September 30, 2022, customers won't be able to create new HDInsight 3.6 clusters.
hdinsight Sizing Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/sizing-guidelines.md
- Title: Interactive Query cluster sizing guide in Azure HDInsight
-description: Interactive Query sizing guide in Azure HDInsight
-- Previously updated : 05/08/2020--
-# Interactive Query cluster sizing guide in Azure HDInsight
-
-This document describes the sizing of the HDInsight Interactive Query cluster (LLAP) for a typical workload to achieve reasonable performance. The recommendations provided in this document are generic and specific workloads may need specific tuning.
-
-## Default VM types for Interactive Query
-
-| Node Type | Instance | Size |
-||||
-| Head | D13 v2 | 8 VCPUS, 56-GB RAM, 400 GB SSD |
-| Worker | D14 v2 | 16 VCPUS, 112-GB RAM, 800 GB SSD |
-| ZooKeeper | A4 v2 | 4 VCPUS, 8-GB RAM, 40 GB SSD |
-
-## Recommended configurations
-
-The recommended configuration values are based on the D14 v2 worker node type.
-
-| Key | Value | Description |
-||||
-| yarn.nodemanager.resource.memory-mb | 102400 (MB) | Total memory given, in MB, for all YARN containers on a node. |
-| yarn.scheduler.maximum-allocation-mb | 102400 (MB) | The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this value won't take effect. |
-| yarn.scheduler.maximum-allocation-vcores | 12 |The maximum number of CPU cores for every container request at the Resource Manager. Requests higher than this value won't take effect. |
-| yarn.scheduler.capacity.root.llap.capacity | 90% | YARN capacity allocation for LLAP queue. |
-| hive.server2.tez.sessions.per.default.queue | number_of_worker_nodes |The number of sessions for each queue named in the hive.server2.tez.default.queues. This number corresponds to number of query coordinators(Tez AMs). |
-| tez.am.resource.memory.mb | 4096 (MB) | The amount of memory in MB to be used by the tez AppMaster. |
-| hive.tez.container.size | 4096 (MB) | Specified Tez container size in MB. |
-| hive.llap.daemon.num.executors | 12 | Number of executors per LLAP daemon. |
-| hive.llap.io.threadpool.size | 12 | Thread pool size for executors. |
-| hive.llap.daemon.yarn.container.mb | 86016 (MB) | Total memory in MB used by individual LLAP daemons (Memory per daemon).|
-| hive.llap.io.memory.size | 409600 (MB) | Cache size in MB per LLAP daemon provided SSD cache is enabled. |
-| hive.auto.convert.join.noconditionaltask.size | 2048 (MB) | memory size in MB to do Map Join. |
-
-## LLAP daemon size estimations
-
-### yarn.nodemanager.resource.memory-mb
-
-This value indicates a maximum sum of memory in MB used by the YARN containers on each node. It specifies the amount of memory YARN can use on this node, so this value should be lesser than the total memory on that node.
-
-Set this value = [Total physical memory on node] - [ memory for OS + Other services ].
-
-It's recommended to set this value to ~90% of the available RAM. For D14 v2, the recommended value is **102400 MB**.
-
-### yarn.scheduler.maximum-allocation-mb
-
-This value indicates the maximum allocation for every container request at the Resource Manager, in MB. Memory requests higher than the specified value won't take effect. The Resource Manager can only give memory to containers in increments of `yarn.scheduler.minimum-allocation-mb` and can't exceed the size specified by `yarn.scheduler.maximum-allocation-mb`. This value shouldn't be more than the total given memory of the node, which is specified by `yarn.nodemanager.resource.memory-mb`.
-
-For D14 v2 worker nodes, the recommended value is **102400 MB**
-
-### yarn.scheduler.maximum-allocation-vcores
-
-This configuration indicates the maximum number of virtual CPU cores for every container request at the Resource Manager. Requesting a higher value than this configuration won't take effect. This configuration is a global property of the YARN scheduler. For LLAP daemon container, this value can be set to 75% of total available virtual cores (VCORES). The remaining 25% should be reserved for NodeManager, DataNode, and other services running on the worker nodes.
-
-For D14 v2 worker nodes, there are 16 VCORES and 75% of 16 VCORES can be given. So the recommended value for LLAP daemon container is **12**.
-
-### hive.server2.tez.sessions.per.default.queue
-
-This configuration value determines the number of Tez sessions that should be launched in parallel for each of the queues specified by `hive.server2.tez.default.queues`. The value corresponds to the number of Tez AMs (Query Coordinators). It's recommended to be the same as the number of worker nodes to have one Tez AM per node. The number of Tez AMs can be higher than the number of LLAP daemon nodes. Their primary responsibility is to coordinate the query execution and assign query plan fragments to corresponding LLAP daemons for execution. It's recommended to keep it as multiple of a number of LLAP daemon nodes to achieve higher throughput.
-
-Default HDInsight cluster has four LLAP daemons running on four worker nodes, so the recommended value is **4**.
-
-### tez.am.resource.memory.mb, hive.tez.container.size
-
-`tez.am.resource.memory.mb` defines the Tez Application Master size.
-The recommended value is **4096 MB**.
-
-`hive.tez.container.size` defines the amount of memory given for Tez container. This value must be set between the YARN minimum container size(`yarn.scheduler.minimum-allocation-mb`) and the YARN maximum container size(`yarn.scheduler.maximum-allocation-mb`).
-It's recommended to be set to **4096 MB**.
-
-A general rule is to keep it less than the amount of memory per processor, considering one processor per container. Reserve memory for the number of Tez AMs on a node before giving the memory to the LLAP daemon. For instance, if you're using two Tez AMs (4 GB each) per node, give 82 GB out of 90 GB to the LLAP daemon, reserving 8 GB for the two Tez AMs.
-
-### yarn.scheduler.capacity.root.llap.capacity
-
-This value indicates the percentage of capacity given to the LLAP queue. The HDInsight Interactive Query cluster gives 90% of the total capacity to the LLAP queue, and the remaining 10% is set to the default queue for other container allocations.
-For D14v2 worker nodes, the recommended value is **90** for LLAP queue.
-
-### hive.llap.daemon.yarn.container.mb
-
-The total memory size for LLAP daemon depends on following components:
-
-* Configuration of YARN container size (`yarn.scheduler.minimum-allocation-mb`, `yarn.scheduler.maximum-allocation-mb`, `yarn.nodemanager.resource.memory-mb`)
-
-* Heap memory used by executors (Xmx)
-
- It's the amount of RAM available after taking out the headroom size.
- For D14 v2, HDI 4.0 - this value is (86 GB - 6 GB) = 80 GB
- For D14 v2, HDI 3.6 - this value is (84 GB - 6 GB) = 78 GB
-
-* Off-heap in-memory cache per daemon (hive.llap.io.memory.size)
-
-* Headroom
-
- It's a portion of off-heap memory used for Java VM overhead (metaspace, thread stacks, GC data structures, and so on). This portion is observed to be around 6% of the heap size (Xmx). To be on the safer side, it can be calculated as 6% of the total LLAP daemon memory size, because when the SSD cache is enabled, the LLAP daemon can use all the available in-memory space for heap only.
- For D14 v2, the recommended value is ceil(86 GB x 0.06) ~= **6 GB**.
-
-Memory per daemon = [In-memory cache size] + [Heap size] + [Headroom].
-
-It can be calculated as follows:
-
-Tez AM memory per node = [ (Number of Tez AMs/Number of LLAP daemon nodes) * Tez AM size ].
-LLAP daemon container size = [ 90% of YARN max container memory ] - [ Tez AM memory per node ].
-
-For D14 v2 worker node, HDI 4.0 - the recommended value is (90 - (1/1 * 4 GB)) = **86 GB**.
-(For HDI 3.6, recommended value is **84 GB** because you should reserve ~2 GB for slider AM.)
-
-### hive.llap.io.memory.size
-
-This configuration is the amount of memory available as cache for LLAP daemon. The LLAP daemons can use SSD as a cache. Setting `hive.llap.io.allocator.mmap` = true will enable SSD caching. The D14 v2 comes with ~800 GB of SSD and the SSD caching is enabled by default for interactive query Cluster (LLAP). It's configured to use 50% of the SSD space for off-heap cache.
-
-For D14 v2, the recommended value is **409600 MB**.
-
-For other VMs, with no SSD caching enabled, it's beneficial to give portion of available RAM for LLAP caching to achieve better performance. Adjust the total memory size for LLAP daemon as follows:
-
-Total LLAP daemon memory = [LLAP cache size] + [Heap size] + [Headroom].
-
-It's recommended to adjust the cache size and the heap size to values that are best suited for your workload.
-
-### hive.llap.daemon.num.executors
-
-This configuration controls the number of executors that can execute tasks in parallel per LLAP daemon. This value is a balance of number of available VCORES, the amount of memory given per executor, and total memory available per LLAP daemon. Usually, we would like this value to be as close as possible to the number of cores.
-
-For D14 v2, there are 16 VCORES available, however not all of the VCORES can be given. The worker nodes also run other services like NodeManager, DataNode, and Metrics Monitor, that needs some portion of available VCORES. This value can be configured up to 75% of the total VCORES available on that node.
-
-For D14 v2, the recommended value is (.75 X 16) = **12**
-
-It's recommended that you reserve ~6 GB of heap space per executor. Adjust your number of executors based on available LLAP daemon size, and number of available VCORES per node.
-
-### hive.llap.io.threadpool.size
-
-This value specifies the thread pool size for executors. Since executors are fixed as specified, it will be the same as the number of executors per LLAP daemon.
-
-For D14 v2, it's recommended to set this value to **12**.
-
-This configuration can't exceed `yarn.nodemanager.resource.cpu-vcores` value.
-
-### hive.auto.convert.join.noconditionaltask.size
-
-Make sure you have `hive.auto.convert.join.noconditionaltask` enabled for this parameter to take effect. This configuration allows the user to specify the size of the tables that can fit in memory to do Map join. If the sum of the size of n-1 of the `tables/partitions` for n-way join is less than the configured value, the Map join will be chosen. The LLAP executor memory size should be used to calculate the threshold for autoconvert to Map Join.
-
-For D14 v2, it's recommended to set this value to **2048 MB**.
-
-We recommend adjusting this value to suit your workload, because setting it too low may prevent the autoconvert feature from being used. Setting it too high may result in GC pauses, which can adversely affect query performance.
-
-## Next steps
-
-* [Gateway guidelines](gateway-best-practices.md)
-* [Demystify Apache Tez Memory Tuning - Step by Step](https://community.cloudera.com/t5/Community-Articles/Demystify-Apache-Tez-Memory-Tuning-Step-by-Step/ta-p/245279)
-* [Map Join Memory Sizing For LLAP](https://community.cloudera.com/t5/Community-Articles/Map-Join-Memory-Sizing-For-LLAP/ta-p/247462)
-* [LLAP - a one-page architecture overview](https://community.cloudera.com/t5/Community-Articles/LLAP-a-one-page-architecture-overview/ta-p/247439)
-* [Hive LLAP deep dive](https://community.cloudera.com/t5/Community-Articles/Hive-LLAP-deep-dive/ta-p/248893)
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-connection.md
If you chose to create a new template that models the data correctly, migrate de
## Next steps
-If you need more help, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support).
+If you need more help, you can contact the Azure experts on the [Microsoft Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support).
For more information, see [Azure IoT support and help options](../../iot-fundamentals/iot-support-help.md).
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
It is the set of properties that contain the manufacturer and model.
|--|--|--|--|
|manufacturer|string|device to cloud|The device manufacturer of the device, reported through `deviceProperties`. This property is read from one of two places - the 'DeviceUpdateCore' interface will first attempt to read the 'aduc_manufacturer' value from the [Configuration file](device-update-configuration-file.md) file. If the value is not populated in the configuration file, it will default to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MANUFACTURER. This property will only be reported at boot time. Default value 'Contoso'|
|model|string|device to cloud|The device model of the device, reported through `deviceProperties`. This property is read from one of two - the DeviceUpdateCore interface will first attempt to read the 'aduc_model' value from the [Configuration file](device-update-configuration-file.md) file. If the value is not populated in the configuration file, it will default to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MODEL. This property will only be reported at boot time. Default value 'Video'|
-|interfaceId|string|device to cloud|This property is used by the service to identify the interface version being used by the Device Update agent. It is required by Device Update service to manage and communicate with the agent. This property is set at 'dtmi:azure:iot:deviceUpdate;1' for device using DU agent version 0.8.0.|
+|interfaceId|string|device to cloud|This property is used by the service to identify the interface version being used by the Device Update agent. It is required by Device Update service to manage and communicate with the agent. This property is set at 'dtmi:azure:iot:deviceUpdateModel;1' for device using DU agent version 0.8.0.|
|aduVer|string|device to cloud|Version of the Device Update agent running on the device. This value is read from the build only if during compile time ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true). Customers can choose to opt-out of version reporting by setting the value to 0 (false). [How to customize Device Update agent properties](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md).|
|doVer|string|device to cloud|Version of the Delivery Optimization agent running on the device. The value is read from the build only if during compile time ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true). Customers can choose to opt-out of the version reporting by setting the value to 0 (false). [How to customize Delivery Optimization agent properties](https://github.com/microsoft/do-client/blob/main/README.md#building-do-client-components).|
|Custom compatibility Properties|User Defined|device to cloud|Implementer can define other device properties to be used for the compatibility check while targeting the update deployment|
The expected component name in your model is **deviceInformation** when this int
Model ID is how smart devices advertise their capabilities to Azure IoT applications with IoT Plug and Play. To learn more about how to build smart devices that advertise their capabilities to Azure IoT applications, visit the [IoT Plug and Play device developer guide](../iot-develop/concepts-developer-guide-device.md).
-Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID with a value of **"dtmi:azure:iot:deviceUpdate;1"** as part of the device connection. [Learn how to announce a model ID](../iot-develop/concepts-developer-guide-device.md#model-id-announcement).
+Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID with a value of **"dtmi:azure:iot:deviceUpdateModel;1"** as part of the device connection. [Learn how to announce a model ID](../iot-develop/concepts-developer-guide-device.md#model-id-announcement).
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
Title: Azure IoT Hub managed identity | Microsoft Docs description: How to use managed identities to allow egress connectivity from your IoT Hub to other Azure resources.-+ Last updated 09/02/2021-+ # IoT Hub support for managed identities
iot-hub Quickstart Control Device Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-control-device-android.md
You also need a _service connection string_ to enable the back-end service appli
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub.

```azurecli-interactive
-az iot hub connection-string show --policy-name service --name {YourIoTHubName} --output table
+az iot hub connection-string show --policy-name service --hub-name {YourIoTHubName} --output table
```

Make a note of the service connection string, which looks like:
iot-hub Tutorial Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-device-twins.md
az group create --name tutorial-iot-hub-rg --location $location
az iot hub create --name $hubname --location $location --resource-group tutorial-iot-hub-rg --partition-count 2 --sku F1

# Make a note of the service connection string, you need it later:
-az iot hub connection-string show --name $hubname --policy-name service -o table
+az iot hub connection-string show --hub-name $hubname --policy-name service -o table
```
load-balancer Tutorial Nat Rule Multi Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-nat-rule-multi-instance-portal.md
Title: "Tutorial: Create a multiple instance inbound NAT rule - Azure portal"
+ Title: "Tutorial: Create a multiple virtual machines inbound NAT rule - Azure portal"
description: This tutorial shows how to configure port forwarding using Azure Load Balancer to create a connection to multiple virtual machines in an Azure virtual network.
Last updated 03/10/2022
-# Tutorial: Create a multiple instance inbound NAT rule using the Azure portal
+# Tutorial: Create a multiple virtual machines inbound NAT rule using the Azure portal
Inbound NAT rules allow you to connect to virtual machines (VMs) in an Azure virtual network by using an Azure Load Balancer public IP address and port number.
In this tutorial, you learn how to:
> [!div class="checklist"] > * Create a virtual network and virtual machines > * Create a standard SKU public load balancer with frontend IP, health probe, backend configuration, and load-balancing rule
-> * Create a multiple instance inbound NAT rule
+> * Create a multiple VMs inbound NAT rule
> * Create a NAT gateway for outbound internet access for the backend pool
> * Install and configure a web server on the VMs to demonstrate the port forwarding and load-balancing rules
A virtual network and subnet is required for the resources in the tutorial. In t
| NIC network security group | Select **Advanced**. |
| Configure network security group | Select the existing **myNSG** |
-## Create load balancer
+## Create a load balancer
You'll create a load balancer in this section. The frontend IP, backend pool, load-balancing, and inbound NAT rules are configured as part of the creation.
You'll create a load balancer in this section. The frontend IP, backend pool, lo
27. Select **Create**.
-## Create multiple instance inbound NAT rule
+## Create a multiple VMs inbound NAT rule
In this section, you'll create a multiple instance inbound NAT rule to the backend pool of the load balancer.
In this section, you'll create a multiple instance inbound NAT rule to the backe
6. Leave the rest at the default and select **Add**.
-## Create NAT gateway
+## Create a NAT gateway
In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
In this article, you'll learn how to download the test results from Azure Load T
The test results contain a comma-separated values (CSV) file with details of each application request. In addition, all files for running the Apache JMeter dashboard locally are included.
+> [!NOTE]
+> The Apache JMeter dashboard generation is temporarily disabled. You can download the CSV files with the test results.
:::image type="content" source="media/how-to-export-test-results/apache-jmeter-dashboard.png" alt-text="Screenshot that shows the downloaded test results on the Apache JMeter dashboard.":::

> [!IMPORTANT]
load-testing Tutorial Cicd Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-azure-pipelines.md
# Tutorial: Identify performance regressions with Azure Load Testing Preview and Azure Pipelines
-This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and Azure Pipelines. You'll configure an Azure Pipelines CI/CD workflow with the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops) to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
+This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and Azure Pipelines. You'll configure an Azure Pipelines CI/CD workflow with the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing) to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
If you're using GitHub Actions for your CI/CD workflows, see the corresponding [GitHub Actions tutorial](./tutorial-cicd-github-actions.md).
To access Azure resources, create a service connection in Azure DevOps and use r
In this section, you'll set up an Azure Pipelines workflow that triggers the load test.
-The sample application repository already contains a pipelines definition file. This pipeline first deploys the sample web application to Azure App Service, and then invokes the load test by using the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops). The pipeline uses an environment variable to pass the URL of the web application to the Apache JMeter script.
+The sample application repository already contains a pipelines definition file. This pipeline first deploys the sample web application to Azure App Service, and then invokes the load test by using the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing). The pipeline uses an environment variable to pass the URL of the web application to the Apache JMeter script.
1. Install the **Azure Load Testing** task extension from the Azure DevOps Marketplace.
The sample application repository already contains a pipelines definition file.
:::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-select-repo.png" alt-text="Screenshot that shows how to select the sample application's GitHub repository.":::
- The repository contains an *azure-pipeline.yml* pipeline definition file. The following snippet shows how to use the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops) in Azure Pipelines:
+ The repository contains an *azure-pipeline.yml* pipeline definition file. The following snippet shows how to use the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing) in Azure Pipelines:
```yml
- task: AzureLoadTest@1
In this tutorial, you'll reconfigure the sample application to accept only secur
You've now created an Azure Pipelines CI/CD workflow that uses Azure Load Testing for automatically running load tests. By using pass/fail criteria, you can set the status of the CI/CD workflow. With parameters, you can make the running of load tests configurable.
-* Learn more about the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops).
+* Learn more about the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing).
* Learn more about [Parameterizing a load test](./how-to-parameterize-load-tests.md). * Learn more [Define test pass/fail criteria](./how-to-define-test-criteria.md). * Learn more about [Configuring server-side monitoring](./how-to-monitor-server-side-metrics.md).
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
The following table lists the message size limits that apply to B2B protocols:
## Firewall configuration: IP addresses and service tags
-If your environment has strict network requirements or firewalls that limit traffic to specific IP addresses, your environment or firewall needs to allow access for *both* the [inbound](#inbound) and [outbound](#outbound) IP addresses used by the Azure Logic Apps service or runtime in the Azure region where your logic app resource exists. To set up this access, you can create [Azure Firewall rules](../firewall/rule-processing.md). *All* logic apps in the same region use the same IP address ranges.
+If your environment has strict network requirements and uses a firewall that limits traffic to specific IP addresses, your environment or firewall needs to permit incoming communication received by Azure Logic Apps and outgoing communication sent by Azure Logic Apps. To set up this access, you can create [Azure Firewall rules](../firewall/rule-processing.md) for your firewall to allow access for *both* [inbound](#inbound) and [outbound](#outbound) IP addresses used by Azure Logic Apps in your logic app's Azure region. *All* logic apps in the same region use the same IP address ranges.
> [!NOTE] > If you're using [Power Automate](/power-automate/getting-started), some actions, such as **HTTP** and **HTTP + OpenAPI**,
Before you set up your firewall with IP addresses, review these considerations:
### Inbound IP addresses
-This section lists the inbound IP addresses for the Azure Logic Apps service only. If you're using Azure Government, see [Azure Government - Inbound IP addresses](#azure-government-inbound).
-
+For Azure Logic Apps to receive incoming communication through your firewall, you have to allow traffic through the inbound IP addresses described in this section for your logic app's Azure region. If you're using Azure Government, see [Azure Government - Inbound IP addresses](#azure-government-inbound).
+
> [!TIP]
> To help reduce complexity when you create security rules, you can optionally use the [service tag](../virtual-network/service-tags-overview.md),
> **LogicAppsManagement**, rather than specify inbound Logic Apps IP address prefixes for each region.
This section lists the inbound IP addresses for the Azure Logic Apps service onl
### Outbound IP addresses
-This section lists the outbound IP addresses for the Azure Logic Apps service. If you're using Azure Government, see [Azure Government - Outbound IP addresses](#azure-government-outbound). If your workflow uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses) in your logic app's Azure region. If your workflow uses custom connectors that access on-premises resources through the [on-premises data gateway resource in Azure](logic-apps-gateway-connection.md), you need to set up the gateway installation to allow access for the corresponding [*managed connector* outbound IP addresses](/connectors/common/outbound-ip-addresses). For more information about setting up communication settings on the gateway, review these topics:
+For Azure Logic Apps to send outgoing communication through your firewall, you have to allow traffic through *all* the outbound IP addresses described in this section for your logic app's Azure region. If you're using Azure Government, see [Azure Government - Outbound IP addresses](#azure-government-outbound). If your workflow also uses any [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses any [custom connectors](/connectors/custom-connectors/), your firewall has to allow traffic through *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses) in your logic app's Azure region. If your workflow uses custom connectors that access on-premises resources through the [on-premises data gateway resource in Azure](logic-apps-gateway-connection.md), you need to set up the gateway installation to allow access for the corresponding [*managed connector* outbound IP addresses](/connectors/common/outbound-ip-addresses). For more information about setting up communication settings on the gateway, review these topics:
* [Adjust communication settings for the on-premises data gateway](/data-integration/gateway/service-gateway-communication) * [Configure proxy settings for the on-premises data gateway](/data-integration/gateway/service-gateway-proxy)
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
ms.suite: integration Previously updated : 01/15/2022 Last updated : 03/11/2022 # As a developer, I want to connect to my single-tenant logic app workflows with virtual networks using private endpoints and VNet integration.
For more information, review the following documentation:
1. After the designer opens, add the Request trigger as the first step in your workflow.
- > [!NOTE]
- > You can call Request triggers and webhook triggers only from inside your virtual network.
- > Managed API webhook triggers and actions won't work because they require a public endpoint to receive calls.
- 1. Based on your scenario requirements, add other actions that you want to run in your workflow. 1. When you're done, save your workflow.
For more information, review [Create single-tenant logic app workflows in Azure
- If accessed from outside your virtual network, monitoring view can't access the inputs and outputs from triggers and actions.
+- Managed API webhook triggers (*push* triggers) and actions won't work because they run in the public cloud and can't call into your private network. They require a public endpoint to receive calls. For example, such triggers include the Dataverse trigger and the Event Grid trigger.
+
+- If you use the Office 365 Outlook trigger, the workflow is triggered only hourly.
+ - Deployment from Visual Studio Code or Azure CLI works only from inside the virtual network. You can use the Deployment Center to link your logic app to a GitHub repo. You can then use Azure infrastructure to build and deploy your code. For GitHub integration to work, remove the `WEBSITE_RUN_FROM_PACKAGE` setting from your logic app or set the value to `0`.
machine-learning Dsvm Tutorial Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager.md
Title: 'Quickstart: Create a Data Science VM - Resource Manager template'
description: In this quickstart, you use an Azure Resource Manager template to quickly deploy a Data Science Virtual Machine --++ Last updated 06/10/2020
machine-learning Dsvm Ubuntu Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md
Title: 'Quickstart: Create an Ubuntu Data Science Virtual Machine'
description: Configure and create a Data Science Virtual Machine for Linux (Ubuntu) to do analytics and machine learning. --++ Last updated 03/10/2020
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
--++ Last updated 05/08/2020
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-pipelines-application-insights.md
# Collect machine learning pipeline log files in Application Insights for alerts and debugging
-The [OpenCensus](https://opencensus.io/quickstart/python/) python library can be used to route logs to Application Insights from your scripts. Aggregating logs from pipeline runs in one place allows you to build queries and diagnose issues. Using Application Insights will allow you to track logs over time and compare pipeline logs across runs.
+The [OpenCensus](https://opencensus.io/quickstart/python/) Python library can be used to route logs to Application Insights from your scripts. Aggregating logs from pipeline runs in one place allows you to build queries and diagnose issues. Using Application Insights will allow you to track logs over time and compare pipeline logs across runs.
Having your logs in one place will provide a history of exceptions and error messages. Since Application Insights integrates with Azure Alerts, you can also create alerts based on Application Insights queries.
from opencensus.ext.azure.log_exporter import AzureLogHandler
import logging
```
-Next, add the AzureLogHandler to the python logger.
+Next, add the AzureLogHandler to the Python logger.
```python
logger = logging.getLogger(__name__)
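# A hedged sketch of the remaining setup; the connection string value below is a placeholder
# and not taken from this article. Attaching the handler routes log records to Application Insights.
logger.addHandler(AzureLogHandler(connection_string='InstrumentationKey=<your-instrumentation-key>'))
logger.warning('Pipeline step failed')  # this record is exported by the handler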
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
In machine learning, **features** are the data fields used to predict a target d
## Supported model interpretability techniques
- `azureml-interpret` uses the interpretability techniques developed in [Interpret-Community](https://github.com/interpretml/interpret-community/), an open source python package for training interpretable models and helping to explain blackbox AI systems. [Interpret-Community](https://github.com/interpretml/interpret-community/) serves as the host for this SDK's supported explainers, and currently supports the following interpretability techniques:
+ `azureml-interpret` uses the interpretability techniques developed in [Interpret-Community](https://github.com/interpretml/interpret-community/), an open source Python package for training interpretable models and helping to explain blackbox AI systems. [Interpret-Community](https://github.com/interpretml/interpret-community/) serves as the host for this SDK's supported explainers, and currently supports the following interpretability techniques:
|Interpretability Technique|Description|Type| |--|--|--|
machine-learning How To Prebuilt Docker Images Inference Python Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prebuilt-docker-images-inference-python-extensibility.md
The [prebuilt Docker images for model inference](concept-prebuilt-docker-images-inference.md) contain packages for popular machine learning frameworks. There are two methods that can be used to add Python packages __without rebuilding the Docker image__:
-* [Dynamic installation](#dynamic): This approach uses a [requirements](https://pip.pypa.io/en/stable/cli/pip_install/#requirements-file-format) file to automatically restore python packages when the Docker container boots.
+* [Dynamic installation](#dynamic): This approach uses a [requirements](https://pip.pypa.io/en/stable/cli/pip_install/#requirements-file-format) file to automatically restore Python packages when the Docker container boots.
Consider this method __for rapid prototyping__. When the image starts, packages are restored using the `requirements.txt` file. This method increases startup of the image, and you must wait longer before the deployment can handle requests.
-* [Pre-installed python packages](#preinstalled): You provide a directory containing preinstalled Python packages. During deployment, this directory is mounted into the container for your entry script (`score.py`) to use.
+* [Pre-installed Python packages](#preinstalled): You provide a directory containing preinstalled Python packages. During deployment, this directory is mounted into the container for your entry script (`score.py`) to use.
Use this approach __for production deployments__. Since the directory containing the packages is mounted to the image, it can be used even when your deployments don't have public internet access. For example, when deployed into a secured Azure Virtual Network.
The [prebuilt Docker images for model inference](concept-prebuilt-docker-images-
## Dynamic installation
-This approach uses a [requirements](https://pip.pypa.io/en/stable/cli/pip_install/#requirements-file-format) file to automatically restore python packages when the image starts up.
+This approach uses a [requirements](https://pip.pypa.io/en/stable/cli/pip_install/#requirements-file-format) file to automatically restore Python packages when the image starts up.
To extend your prebuilt docker container image through a requirements.txt, follow these steps:
The following diagram is a visual representation of the dynamic installation pro
<a id="preinstalled"></a>
-## Pre-installed python packages
+## Pre-installed Python packages
This approach mounts a directory that you provide into the image. The Python packages from this directory can then be used by the entry script (`score.py`).
-To extend your prebuilt docker container image through pre-installed python packages, follow these steps:
+To extend your prebuilt docker container image through pre-installed Python packages, follow these steps:
> [!IMPORTANT] > You must use packages compatible with Python 3.7. All current images are pinned to Python 3.7.
Here are some things that may cause this problem:
* The [Model.package()](/python/api/azureml-core/azureml.core.model(class)) method lets you create a model package in the form of a Docker image or Dockerfile build context. Using Model.package() with prebuilt inference docker images triggers an intermediate image build that changes the non-root user to root user.
-* We encourage you to use our python package extensibility solutions. If other dependencies are required (such as `apt` packages), create your own [Dockerfile extending from the inference image](how-to-extend-prebuilt-docker-image-inference.md#buildmodel).
+* We encourage you to use our Python package extensibility solutions. If other dependencies are required (such as `apt` packages), create your own [Dockerfile extending from the inference image](how-to-extend-prebuilt-docker-image-inference.md#buildmodel).
## Frequently asked questions
Here are some things that may cause this problem:
| Compared item | Requirements.txt (dynamic installation) | Package Mount |
| -- | -- | -- |
- | Solution | Create a `requirements.txt` that installs the specified packages when the container starts. | Create a local python environment with all of the dependencies. Mount this directory into container at runtime. |
+ | Solution | Create a `requirements.txt` that installs the specified packages when the container starts. | Create a local Python environment with all of the dependencies. Mount this directory into container at runtime. |
| Package Installation | No extra installation (assuming pip already installed) | Virtual environment or conda environment installation. |
| Virtual environment Setup | No extra setup of virtual environment required, as users can pull the current local user environment with pip freeze as needed to create the `requirements.txt`. | Need to set up a clean virtual environment, may take extra steps depending on the current user local environment. |
| [Debugging](how-to-inference-server-http.md) | Easy to set up and debug server, since dependencies are clearly listed. | Unclean virtual environment could cause problems when debugging of server. For example, it may not be clear if errors come from the environment or user code. |
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-datasets.md
In this article, you learn how to work with [Azure Machine Learning datasets](/python/api/azureml-core/azureml.core.dataset%28class%29) to train machine learning models. You can use datasets in your local or remote compute target without worrying about connection strings or data paths.
+* For structured data, see [Consume datasets in machine learning training scripts](#consume-datasets-in-machine-learning-training-scripts).
+
+* For unstructured data, see [Mount files to remote compute targets](#mount-files-to-remote-compute-targets).
+ Azure Machine Learning datasets provide a seamless integration with Azure Machine Learning training functionality like [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig), [HyperDrive](/python/api/azureml-train-core/azureml.train.hyperdrive), and [Azure Machine Learning pipelines](./how-to-create-machine-learning-pipelines.md). If you are not ready to make your data available for model training, but want to load your data to your notebook for data exploration, see how to [explore the data in your dataset](how-to-create-register-datasets.md#explore-data).
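As a quick illustration of that integration, the following minimal sketch passes a registered dataset to a training run through `ScriptRunConfig`. The workspace configuration, dataset name, script, and compute target names are assumptions for the example.

```python
# Minimal sketch (Azure ML SDK v1); the dataset, script, and compute target names are assumptions.
from azureml.core import Dataset, Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()
dataset = Dataset.get_by_name(ws, name="my-training-dataset")

src = ScriptRunConfig(
    source_directory=".",
    script="train.py",
    arguments=["--input-data", dataset.as_named_input("training")],
    compute_target="cpu-cluster",
)

run = Experiment(ws, "train-with-dataset").submit(src)
```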
To create and train with datasets, you need:
* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install) (>= 1.13.0), which includes the `azureml-datasets` package.

> [!Note]
> Some Dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux, Ubuntu, Fedora, and CentOS.
If you don't include the leading forward slash, '/', you'll need to prefix the
* [Train image classification models](https://aka.ms/filedataset-samplenotebook) with FileDatasets.
-* [Train with datasets using pipelines](./how-to-create-machine-learning-pipelines.md).
+* [Train with datasets using pipelines](./how-to-create-machine-learning-pipelines.md).
machine-learning How To Trigger Published Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-trigger-published-pipeline.md
pipeline_id = "aaaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
To run a pipeline on a recurring basis, you'll create a schedule. A `Schedule` associates a pipeline, an experiment, and a trigger. The trigger can either be a`ScheduleRecurrence` that describes the wait between runs or a Datastore path that specifies a directory to watch for changes. In either case, you'll need the pipeline identifier and the name of the experiment in which to create the schedule.
-At the top of your python file, import the `Schedule` and `ScheduleRecurrence` classes:
+At the top of your Python file, import the `Schedule` and `ScheduleRecurrence` classes:
```python
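# A minimal sketch (assuming the Azure ML SDK v1): the scheduling classes live in azureml.pipeline.core.
from azureml.pipeline.core import Schedule, ScheduleRecurrence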
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
arg_parser.add_argument("--logging_level", type=str, help="logging level")
args, unknown_args = arg_parser.parse_known_args()
print(args.logging_level)
-# Initialize python logger
+# Initialize Python logger
logger = logging.getLogger(__name__)
logger.setLevel(args.logging_level.upper())
logger.info("Info log statement")
machine-learning How To Troubleshoot Prebuilt Docker Image Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-prebuilt-docker-image-inference.md
For problems when deploying a model from Azure Machine Learning to Azure Contain
HTTP server in our Prebuilt Docker Images run as *non-root user*, it may not have access right to all directories. Only write to directories you have access rights to. For example, the `/tmp` directory in the container.
-## Extra python packages not installed
+## Extra Python packages not installed
* Check if there's a typo in the environment variable or file name. * Check the container log to see if `pip install -r <your_requirements.txt>` is installed or not.
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-labeled-dataset.md
Previously updated : 02/15/2022 Last updated : 03/11/2022 # Customer intent: As an experienced Python developer, I need to export my data labels and use them for machine learning tasks.
Azure Machine Learning datasets with labels are referred to as labeled datasets.
## Export data labels
-When you complete a data labeling project, you can export the label data from a labeling project. Doing so, allows you to capture both the reference to the data and its labels, and export them in [COCO format](http://cocodataset.org/#format-data) or as an Azure Machine Learning dataset. Use the **Export** button on the **Project details** page of your labeling project.
+When you complete a data labeling project, you can [export the label data from a labeling project](how-to-create-image-labeling-projects.md#export-the-labels). Doing so allows you to capture both the reference to the data and its labels, and export them in [COCO format](http://cocodataset.org/#format-data) or as an Azure Machine Learning dataset.
+
+Use the **Export** button on the **Project details** page of your labeling project.
+
+![Export button in studio UI](./media/how-to-use-labeled-dataset/export-button.png)
### COCO

The COCO file is created in the default blob store of the Azure Machine Learning workspace in a folder within *export/coco*.

>[!NOTE]
->In Object detection projects, the exported "bbox": [x,y,width,height]" values in COCO file are normalized. They are scaled to 1. Example : a bounding box at (10, 10) location, with 30 pixels width , 60 pixels height, in a 640x480 pixel image will be annotated as (0.015625. 0.02083, 0.046875, 0.125). Since the coordintes are normalized, it will show as '0.0' as "width" and "height" for all images. The actual width and height can be obtained using Python library like OpenCV or Pillow(PIL).
+>In object detection projects, the exported "bbox": [x,y,width,height]" values in the COCO file are normalized. They are scaled to 1. For example, a bounding box at the (10, 10) location, with 30 pixels width and 60 pixels height, in a 640x480 pixel image will be annotated as (0.015625, 0.02083, 0.046875, 0.125). Since the coordinates are normalized, the "width" and "height" values will show as '0.0' for all images. The actual width and height can be obtained using a Python library like OpenCV or Pillow (PIL).
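The following small Python sketch reproduces the normalization arithmetic from the note above, using the same pixel values as the example.

```python
# Reproduce the bbox normalization described in the note above.
image_width, image_height = 640, 480
x, y, box_width, box_height = 10, 10, 30, 60

normalized_bbox = [
    x / image_width,            # 0.015625
    y / image_height,           # ~0.02083
    box_width / image_width,    # 0.046875
    box_height / image_height,  # 0.125
]
print(normalized_bbox)
```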
### Azure Machine Learning dataset
You can access the exported Azure Machine Learning dataset in the **Datasets** s
![Exported dataset](./media/how-to-create-labeling-projects/exported-dataset.png)
-Once you have exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data. Learn more at [Set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md)
+> [!TIP]
+> Once you have exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data. Learn more at [Set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md)
## Explore labeled datasets via pandas dataframe
The exported dataset is a [TabularDataset](/python/api/azureml-core/azureml.data
> The public preview methods download() and mount() are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features, and may change at any time.

```Python
import azureml.core
from azureml.core import Dataset, Workspace

# get animal_labels dataset from the workspace
animal_labels = Dataset.get_by_name(workspace, 'animal_labels')
animal_pd = animal_labels.to_pandas_dataframe()

# download the images to local
-animal_labels.download(stream_column='image_url')
+download_path = animal_labels.download(stream_column='image_url')
import matplotlib.pyplot as plt import matplotlib.image as mpimg #read images from downloaded path
-img = mpimg.imread(animal_pd.loc[0,'image_url'])
+img = mpimg.imread(download_path[0])
imgplot = plt.imshow(img) ```
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
Previously updated : 10/21/2021 Last updated : 03/14/2022
The information in the rest of this document provides information on what featur
| **[Compute instance](concept-compute-instance.md)** | | | | | Managed compute Instances for integrated Notebooks | GA | YES | YES | | Jupyter, JupyterLab Integration | GA | YES | YES |
-| Virtual Network (VNet) support | Public Preview | YES | YES |
+| Virtual Network (VNet) support | GA | YES | YES |
| **SDK support** | | | | | [Python SDK support](/python/api/overview/azure/ml/) | GA | YES | YES | | **[Security](concept-enterprise-security.md)** | | | |
The information in the rest of this document provides information on what featur
| **Compute instance** | | | | | Managed compute Instances for integrated Notebooks | GA | YES | N/A | | Jupyter, JupyterLab Integration | GA | YES | N/A |
-| Virtual Network (VNet) support | Public Preview | YES | N/A |
+| Virtual Network (VNet) support | GA | YES | N/A |
| **SDK support** | | | | | Python SDK support | GA | YES | N/A | | **Security** | | | |
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments with latest framework versions in Az
### PyTorch **Name**: AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu
-**Description**: An environment for deep learning with PyTorch containing the AzureML Python SDK and other python packages.
+**Description**: An environment for deep learning with PyTorch containing the AzureML Python SDK and other Python packages.
* GPU: Cuda11 * OS: Ubuntu18.04 * PyTorch: 1.10
Other available PyTorch environments:
### Sklearn **Name**: AzureML-sklearn-1.0-ubuntu20.04-py38-cpu
-**Description**: An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the AzureML Python SDK and other python packages.
+**Description**: An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the AzureML Python SDK and other Python packages.
* OS: Ubuntu20.04 * Scikit-learn: 1.0
Other available Sklearn environments:
### TensorFlow **Name**: AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
-**Description**: An environment for deep learning with TensorFlow containing the AzureML Python SDK and other python packages.
+**Description**: An environment for deep learning with TensorFlow containing the AzureML Python SDK and other Python packages.
* GPU: Cuda11 * Horovod: 2.4.1 * OS: Ubuntu18.04
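As an illustration, a curated environment such as the ones listed above can be retrieved by name with the AzureML Python SDK; this sketch assumes an existing workspace with a local `config.json`:

```python
from azureml.core import Environment, Workspace

# assumes a config.json for an existing Azure Machine Learning workspace
ws = Workspace.from_config()

# retrieve a curated environment by the name shown in the listing above
env = Environment.get(workspace=ws, name="AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu")

# inspect the pinned packages in the environment definition
print(env.python.conda_dependencies.serialize_to_string())
```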
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-get-sas-uri.md
description: Generate a shared access signature (SAS) URI for a virtual hard dis
-- Last updated 06/23/2021
marketplace Azure Vm Image Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-image-test.md
description: Test and submit an Azure virtual machine offer in Azure Marketplace
--+++ Last updated 02/01/2022
marketplace Azure Vm Use Own Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-use-own-image.md
description: Publish a virtual machine offer to Azure Marketplace using your own
-- Last updated 11/10/2021
marketplace Submit Legal Notice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/submit-legal-notice.md
Previously updated : 03/08/2021 Last updated : 03/14/2022 # Notifying Microsoft regarding the Publisher Agreement
media-services Encode Basic Encoding Python Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-basic-encoding-python-quickstart.md
For samples, we recommend you always create and activate a Python virtual enviro
2. Create a virtual environment. ``` bash
- # py -3 uses the global python interpreter. You can also use python -m venv .venv.
+ # py -3 uses the global Python interpreter. You can also use python -m venv .venv.
py -3 -m venv .venv ```
open-datasets Dataset Open Cravat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-open-cravat.md
Last updated 04/16/2021
# OpenCravat: Open Custom Ranked Analysis of Variants Toolkit
-OpenCRAVAT is a python package that performs genomic variant interpretation including variant impact, annotation, and scoring. OpenCRAVAT has a modular architecture with a wide variety of analysis modules and annotation resources that can be selected and installed/run based on the needs of a given study.
+OpenCRAVAT is a Python package that performs genomic variant interpretation including variant impact, annotation, and scoring. OpenCRAVAT has a modular architecture with a wide variety of analysis modules and annotation resources that can be selected and installed/run based on the needs of a given study.
For more information on the data, see the [OpenCravat](https://opencravat.org/).
openshift Cluster Administration Cluster Admin Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/cluster-administration-cluster-admin-role.md
Title: Azure Red Hat OpenShift cluster administrator role | Microsoft Docs description: Assignment and usage of the Azure Red Hat OpenShift cluster administrator role --++ Last updated 09/25/2019
openshift Howto Aad App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-aad-app-configuration.md
Title: Azure Active Directory integration for Azure Red Hat OpenShift description: Learn how to create an Azure AD security group and user for testing apps on your Microsoft Azure Red Hat OpenShift cluster.--++ Last updated 05/13/2019
openshift Howto Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-tenant.md
Title: Create an Azure AD tenant for Azure Red Hat OpenShift description: Here's how to create an Azure Active Directory (Azure AD) tenant to host your Microsoft Azure Red Hat OpenShift cluster.--++ Last updated 05/13/2019
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
Title: Restrict egress traffic in an Azure Red Hat OpenShift (ARO) cluster description: Learn what ports and addresses are required to control egress traffic in Azure Red Hat OpenShift (ARO) -+ Last updated 04/09/2021
openshift Howto Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-setup-environment.md
Title: Set up your Azure Red Hat OpenShift development environment description: Here are the prerequisites for working with Microsoft Azure Red Hat OpenShift. keywords: red hat openshift setup set up--++ Last updated 11/04/2019
openshift Intro Openshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/intro-openshift.md
Title: Introduction to Azure Red Hat OpenShift description: Learn the features and benefits of Microsoft Azure Red Hat OpenShift to deploy and manage container-based applications.--++ Last updated 11/13/2020
openshift Supported Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/supported-resources.md
Title: Supported resources for Azure Red Hat OpenShift 3.11 description: Understand which Azure regions and virtual machine sizes are supported by Microsoft Azure Red Hat OpenShift.--++ Last updated 05/15/2019
openshift Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/troubleshoot.md
Title: Troubleshoot Azure Red Hat OpenShift description: Troubleshoot and resolve common issues with Azure Red Hat OpenShift--++ Last updated 05/08/2019
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
## Network security of private endpoints
-When you use private endpoints, traffic is secured to a private-link resource. The platform does an access control to validate network connections that reach only the specified private-link resource. To access more resources within the same Azure service, you need additional private endpoints.
-
-You can completely lock down your workloads to prevent them from accessing public endpoints to connect to a supported Azure service. This control provides an extra network security layer to your resources, and this security provides protection that helps prevent access to other resources that are hosted on the same Azure service.
+When you use private endpoints, traffic is secured to a private-link resource. The platform validates network connections, allowing only those that reach the specified private-link resource. To access other sub-resources within the same Azure service, additional private endpoints with corresponding targets are required. In the case of Azure Storage, for instance, you would need separate private endpoints to access the _file_ and _blob_ sub-resources.
+
+Private endpoints provide a privately accessible IP address for the Azure service, but do not necessarily restrict public network access to it. [Azure App Service](tutorial-private-endpoint-webapp-portal.md) and [Azure Functions](../azure-functions/functions-create-vnet.md) become publicly inaccessible when they are associated with a private endpoint. All other Azure services require additional [access controls](../event-hubs/event-hubs-ip-filtering.md), however. These controls add an extra network security layer to your resources, helping to prevent access to the Azure service associated with the private-link resource.
## Access to a private-link resource using approval workflow
The consumers can request a connection to a private-link service by using either
## DNS configuration
-The DNS settings that you use to connect to a private-link resource are important. Ensure that your DNS settings are correct when you use the fully qualified domain name (FQDN) for the connection. The settings must resolve to the private IP address of the private endpoint. Existing Azure services might already have a DNS configuration you can use when you're connecting over a public endpoint. This configuration must be overwritten so that you can connect by using your private endpoint.
+The DNS settings that you use to connect to a private-link resource are important. Existing Azure services might already have a DNS configuration you can use when you're connecting over a public endpoint. To connect to the same service over a private endpoint, separate DNS settings, often configured via private DNS zones, are required. Ensure that your DNS settings are correct when you use the fully qualified domain name (FQDN) for the connection. The settings must resolve to the private IP address of the private endpoint.
The network interface associated with the private endpoint contains the information that's required to configure your DNS. The information includes the FQDN and private IP address for a private-link resource.
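As a quick illustration, you can confirm what an FQDN resolves to from a machine in the virtual network by using the Python standard library; the hostname below is a placeholder:

```python
import socket

# placeholder FQDN of the private-link resource (replace with your own)
fqdn = "mystorageaccount.blob.core.windows.net"

# resolution uses the DNS settings of the machine running the check;
# from a VM in the virtual network this should return the private endpoint IP
print(socket.gethostbyname(fqdn))
```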
purview Create Azure Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-azure-purview-python.md
pa = purview_client.accounts.begin_delete(rg_name, purview_name).result()
## Next steps
-The code in this tutorial creates a purview account and deletes a purview account. You can now download the python SDK and learn about other resource provider actions you can perform for an Azure Purview account.
+The code in this tutorial creates and deletes an Azure Purview account. You can now download the Python SDK and learn about other resource provider actions you can perform for an Azure Purview account.
Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview.
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Previously updated : 03/05/2022 Last updated : 03/14/2022 # Create and manage a self-hosted integration runtime
Here are the domains and outbound ports that you need to allow at both **corpora
| -- | -- | - | | `*.frontend.clouddatahub.net` | 443 | Required to connect to the Azure Purview service. Currently wildcard is required as there is no dedicated resource. | | `*.servicebus.windows.net` | 443 | Required for setting up scan on Azure Purview Studio. This endpoint is used for interactive authoring from UI, for example, test connection, browse folder list and table list to scope scan. Currently wildcard is required as there is no dedicated resource. |
+| `<purview_account>.purview.azure.com` | 443 | Required to connect to the Azure Purview service. |
| `<managed_storage_account>.blob.core.windows.net` | 443 | Required to connect to the Azure Purview managed Azure Blob storage account. | | `<managed_storage_account>.queue.core.windows.net` | 443 | Required to connect to the Azure Purview managed Azure Queue storage account. |
-| `<managed_Event_Hub_resource>.servicebus.windows.net` | 443 | Azure Purview uses this to connect with the associated service bus. It's covered by allowing the above domain. If you use private endpoint, you need to test access to this single domain.|
| `download.microsoft.com` | 443 | Required to download the self-hosted integration runtime updates. If you have disabled auto-update, you can skip configuring this domain. | | `login.windows.net`<br>`login.microsoftonline.com` | 443 | Required to sign in to the Azure Active Directory. |
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Discovering and understanding data sources and their use is the primary purpose
At the same time, users can contribute to the catalog by tagging, documenting, and annotating data sources that have already been registered. They can also register new data sources, which are then discovered, understood, and consumed by the community of catalog users.
-## In-region data residency
-For Azure Purview, customer content related to the metadata (e.g. blob uri path, table names and column names) stored in Azure Purview is Data Residency compliant with the exception of multi-cloud environments.
-For multi-cloud environments (AWS sources), the customer content will reside in the US region as a part of the global logs and will be data residency compliant in the next few months.
## Next steps
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
Previously updated : 11/02/2021 Last updated : 03/14/2022
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
1. Select **Save**.
+> [!IMPORTANT]
+> Currently, setting up scans for an Azure Synapse workspace from Azure Purview Studio isn't supported if you can't enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspace. In this case:
+> - You can use the [Azure Purview Rest API - Scans - Create Or Update](/api/purview/scanningdataplane/scans/create-or-update) operation to create a new scan for your Synapse workspaces, including dedicated and serverless pools (an example call is sketched after this note).
+> - You must use **SQL Auth** as the authentication mechanism.
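For illustration only, such a REST call might be sketched as follows; the URL path, API version, and request body shown here are assumptions, so verify them against the linked Scans - Create Or Update reference before use:

```python
import requests
from azure.identity import DefaultAzureCredential

account = "<purview_account>"                  # Azure Purview account name
data_source = "<registered_synapse_source>"    # name used when registering the source
scan_name = "<scan_name>"

# token for the Purview data plane
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

# assumed endpoint shape and API version; confirm against the API reference
url = (f"https://{account}.purview.azure.com/scan/datasources/{data_source}"
       f"/scans/{scan_name}?api-version=2022-02-01-preview")

# the body schema (scan kind, SQL Auth credential reference, and so on) comes from the
# Scans - Create Or Update reference; only a placeholder is shown here
body = {"kind": "<scan kind for Synapse>", "properties": {}}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(response.status_code, response.text)
```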
+ ### Create and run scan To create and run a new scan, do the following:
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
Previously updated : 01/20/2022 Last updated : 03/14/2022
When scanning Teradata source, Azure Purview supports:
When setting up scan, you can choose to scan an entire Teradata server, or scope the scan to a subset of databases matching the given name(s) or name pattern(s).
+### Required permissions for scan
+
+Azure Purview supports basic authentication (username and password) for scanning Teradata. The Teradata user must have read access to system tables in order to access advanced metadata.
+
+To retrieve the data types of view columns, Azure Purview issues a prepare statement for `select * from <view>` for each view and parses the metadata that contains the data type details, for better performance. This requires the SELECT permission on the views. If the permission is missing, view column data types are skipped.
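As an illustration, you can verify that the scanning credential has SELECT access on a view by issuing the same kind of query with the `teradatasql` Python driver; the connection details and view name below are placeholders:

```python
import teradatasql

# placeholder connection details for the Teradata server and scanning user
con = teradatasql.connect(host="<teradata_host>", user="<scan_user>", password="<password>")
cur = con.cursor()

# same shape of query that the scan prepares for each view
cur.execute("select top 1 * from <database>.<view>")

# column names and type codes come back in the cursor description
print([(col[0], col[1]) for col in cur.description])

cur.close()
con.close()
```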
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
When setting up scan, you can choose to scan an entire Teradata server, or scope
This section describes how to register Teradata in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
-### Authentication for registration
-
-The only supported authentication for a Teradata source is **Basic authentication**. Make sure to have Read access to the Teradata source being scanned.
- ### Steps to register 1. Navigate to your Azure Purview account.
purview Tutorial Azure Purview Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-tools.md
This article lists several open-source tools and utilities (command-line, python
1. [PyApacheAtlas: Interface between Azure Purview and Apache Atlas](https://github.com/wjohnson/pyapacheatlas) using Atlas APIs - **Recommended customer journey stages**: *Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
- - **Description**: A python package to work with Azure Purview and Apache Atlas API. Supports bulk loading, custom lineage, and more from a Pythonic set of classes and Excel templates. The package supports programmatic interaction and an Excel template for low-code uploads.
+ - **Description**: A Python package to work with Azure Purview and Apache Atlas API. Supports bulk loading, custom lineage, and more from a Pythonic set of classes and Excel templates. The package supports programmatic interaction and an Excel template for low-code uploads.
1. [Azure Purview Event Hubs Notifications Reader](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/purview_atlas_eventhub_sample.py)
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-image-scenarios.md
Images can also be passed into and returned from custom skills. The skillset bas
} ```
-The [Azure Search python samples](https://github.com/Azure-Samples/azure-search-python-samples) repository has a complete sample implemented in Python of a custom skill that enriches images.
+The [Azure Search Python samples](https://github.com/Azure-Samples/azure-search-python-samples) repository has a complete sample implemented in Python of a custom skill that enriches images.
<a name="passing-images-to-custom-skills"></a>
security Antimalware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware.md
The deployment workflow including configuration steps and options supported for
![Microsoft Antimalware in Azure](./media/antimalware/sec-azantimal-fig1.PNG) > [!NOTE]
-> You can however use PowerShell/APIs and Azure Resource Manager templates to deploy Virtual Machine Scale Sets with the Microsoft Anti-Malware extension. For installing an extension on an already running Virtual Machine, you can use the sample python script [vmssextn.py](https://github.com/gbowerman/vmsstools). This script gets the existing extension config on the Scale Set and adds an extension to the list of existing extensions on the VM Scale Sets.
+> You can, however, use PowerShell/APIs and Azure Resource Manager templates to deploy Virtual Machine Scale Sets with the Microsoft Anti-Malware extension. To install an extension on an already running Virtual Machine, you can use the sample Python script [vmssextn.py](https://github.com/gbowerman/vmsstools). This script gets the existing extension config on the Scale Set and adds an extension to the list of existing extensions on the VM Scale Sets.
> >
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
na Previously updated : 10/26/2021 Last updated : 03/14/2022 # Data encryption models
When Server-side encryption with service-managed keys is used, the key creation,
For scenarios where the requirement is to encrypt the data at rest and control the encryption keys customers can use server-side encryption using customer-managed Keys in Key Vault. Some services may store only the root Key Encryption Key in Azure Key Vault and store the encrypted Data Encryption Key in an internal location closer to the data. In that scenario customers can bring their own keys to Key Vault (BYOK – Bring Your Own Key), or generate new ones, and use them to encrypt the desired resources. While the Resource Provider performs the encryption and decryption operations, it uses the configured key encryption key as the root key for all encryption operations.
-Loss of key encryption keys means loss of data. For this reason, keys should not be deleted. Keys should be backed up whenever created or rotated. [Soft-Delete and purge protection](../../key-vault/general/soft-delete-overview.md) must be enabled on any vault storing key encryption keys to protect against accidental or malicious cryptographic erasure. Instead of deleting a key, it is recommended to set enabled to false on the key encryption key.
+Loss of key encryption keys means loss of data. For this reason, keys should not be deleted. Keys should be backed up whenever created or rotated. [Soft-Delete and purge protection](../../key-vault/general/soft-delete-overview.md) must be enabled on any vault storing key encryption keys to protect against accidental or malicious cryptographic erasure. Instead of deleting a key, it is recommended to set enabled to false on the key encryption key. Use access controls to revoke access to individual users or services in [Azure Key Vault](../../key-vault/general/security-features.md#access-model-overview) or [Managed HSM](../../key-vault/managed-hsm/secure-your-managed-hsm.md).
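As an illustration, a key can be disabled instead of deleted with the Key Vault keys SDK; the vault URL and key name in this sketch are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

# placeholder vault URL and key name
client = KeyClient(vault_url="https://<vault-name>.vault.azure.net",
                   credential=DefaultAzureCredential())

# disable the key encryption key rather than deleting it
updated = client.update_key_properties("<key-name>", enabled=False)
print(updated.properties.enabled)
```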
### Key Access
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
end
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [Web application credentials](https://developers.google.com/workspace/guides/create-credentials#web). 1. [Follow the instructions](https://developers.google.com/admin-sdk/reports/v1/quickstart/python) to obtain the credentials.json.
-1. To get the Google pickle string, run [this python script](https://aka.ms/sentinel-GWorkspaceReportsAPI-functioncode) (in the same path as credentials.json).
+1. To get the Google pickle string, run [this Python script](https://aka.ms/sentinel-GWorkspaceReportsAPI-functioncode) (in the same path as credentials.json).
1. Copy the pickle string output in single quotes and save. It will be needed for deploying the Function App.
sentinel Normalization Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-content.md
The following built-in DNS query content is supported for ASIM normalization.
### Analytics rules
+ - [(Preview) TI map Domain entity to DNS Events (ASIM DNS Schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimDNS/imDns_DomainEntity_DnsEvents.yaml)
+ - [(Preview) TI map IP entity to DNS Events (ASIM DNS Schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimDNS/imDns_IPEntity_DnsEvents.yaml)
- [Potential DGA detected (ASimDNS)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimDNS/imDns_HighNXDomainCount_detection.yaml)
- - [Excessive NXDOMAIN DNS Queries (Normalized DNS)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimDNS/imDns_ExcessiveNXDOMAINDNSQueries.yaml)
+ - [Excessive NXDOMAIN DNS Queries (ASIM DNS Schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimDNS/imDns_ExcessiveNXDOMAINDNSQueries.yaml)
+ - [DNS events related to mining pools (ASIM DNS Schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimDNS/imDNS_Miners.yaml)
+ - [DNS events related to ToR proxies (ASIM DNS Schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimDNS/imDNS_TorProxies.yaml)
- [Known Barium domains](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/BariumDomainIOC112020.yaml) - [Known Barium IP addresses](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/BariumIPIOC112020.yaml) - [Exchange Server Vulnerabilities Disclosed March 2021 IoC Match](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ExchangeServerVulnerabilitiesMarch2021IoCs.yaml)
The following built-in DNS query content is supported for ASIM normalization.
- [Solorigate Network Beacon](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Solorigate-Network-Beacon.yaml) - [THALLIUM domains included in DCU takedown](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ThalliumIOCs.yaml) - [Known ZINC Comebacker and Klackring malware hashes](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ZincJan272021IOCs.yaml)
+ - [Known CERIUM domains and hashes](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/CERIUMOct292020IOCs.yaml)
+ - [Known NICKEL domains and hashes](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NICKELIOCsNov2021.yaml)
+ - [NOBELIUM - Domain, Hash, and IP IOCs - May 2021](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NOBELIUM_IOCsMay2021.yaml)
+ - [Solorigate Network Beacon](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Solorigate-Network-Beacon.yaml)
## File Activity security content
The following built-in file activity content is supported for ASIM normalization
- [NOBELIUM - Domain, Hash, and IP IOCs - May 2021](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NOBELIUM_IOCsMay2021.yaml) - [SUNSPOT log file creation ](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/SUNSPOTLogFile.yaml) - [Known ZINC Comebacker and Klackring malware hashes](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ZincJan272021IOCs.yaml)
+- [DEV-0586 Actor IOC - January 2022](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Dev-0586_Jan2022_IOC.yaml)
+- [NOBELIUM IOCs related to FoggyWeb backdoor](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Nobelium_FoggyWeb.yaml)
## Network session security content
The following built-in network session related content is supported for ASIM nor
- [Log4j vulnerability exploit aka Log4Shell IP IOC](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Log4J_IPIOC_Dec112021.yaml) - [Excessive number of failed connections from a single source (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/ExcessiveDenyFromSource.yaml) - [Potential beaconing activity (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/PossibleBeaconingActivity.yaml)-- [User agent search for log4j exploitation attempt](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/UserAgentSearch_log4j.yaml)
+- [(Preview) TI map IP entity to Network Session Events (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/IPEntity_imNetworkSession.yaml)
+- [Port scan detected (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/PortScan.yaml)
+- [Known Barium IP addresses](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/BariumIPIOC112020.yaml)
+- [Exchange Server Vulnerabilities Disclosed March 2021 IoC Match](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ExchangeServerVulnerabilitiesMarch2021IoCs.yaml)
+- [Known IRIDIUM IP](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/IridiumIOCs.yaml)
+- [NOBELIUM - Domain, Hash, and IP IOCs - May 2021](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NOBELIUM_IOCsMay2021.yaml)
+- [Known STRONTIUM group domains - July 2019](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/STRONTIUMJuly2019IOCs.yaml)
++ ### Hunting queries
The following built-in process activity content is supported for ASIM normalizat
The following built-in registry activity content is supported for ASIM normalization.
+### Analytics rules
+
+- [Potential Fodhelper UAC Bypass (ASIM Version)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PotentialFodhelperUACBypass(ASIMVersion).yaml)
+ ### Hunting queries - [Persisting Via IFEO Registry Key](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PersistViaIFEORegistryKey.yaml)
The following built-in web session related content is supported for ASIM normali
### Analytics rules
+- [(Preview) TI map Domain entity to Web Session Events (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ThreatIntelligenceIndicator/DomainEntity_imWebSession.yaml)
+- [(Preview) TI map IP entity to Web Session Events (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ThreatIntelligenceIndicator/IPEntity_imWebSession.yaml)
- [Potential communication with a Domain Generation Algorithm (DGA) based hostname (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/PossibleDGAContacts.yaml) - [A client made a web request to a potentially harmful file (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/PotentiallyHarmfulFileTypes.yaml) - [A host is potentially running a crypto miner (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/UnusualUACryptoMiners.yaml) - [A host is potentially running a hacking tool (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/UnusualUAHackTool.yaml) - [A host is potentially running PowerShell to send HTTP(S) requests (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/UnusualUAPowershell.yaml)
+- [Discord CDN Risky File Download (ASIM Web Session Schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/DiscordCDNRiskyFileDownload_ASim.yaml)
+- [Excessive number of HTTP authentication failures from a source (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/ExcessiveNetworkFailuresFromSource.yaml)
+- [Known Barium domains](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/BariumDomainIOC112020.yaml)
+- [Known Barium IP addresses](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/BariumIPIOC112020.yaml)
+- [Known CERIUM domains and hashes](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/CERIUMOct292020IOCs.yaml)
+- [Known IRIDIUM IP](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/IridiumIOCs.yaml)
+- [Known NICKEL domains and hashes](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NICKELIOCsNov2021.yaml)
+- [NOBELIUM - Domain and IP IOCs - March 2021](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NOBELIUM_DomainIOCsMarch2021.yaml)
+- [NOBELIUM - Domain, Hash, and IP IOCs - May 2021](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NOBELIUM_IOCsMay2021.yaml)
+- [Known Phosphorus group domains/IP](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PHOSPHORUSMarch2019IOCs.yaml)
+- [User agent search for log4j exploitation attempt](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/UserAgentSearch_log4j.yaml)
+++ ## <a name="next-steps"></a>Next steps
sentinel Troubleshooting Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/troubleshooting-cef-syslog.md
Use the following sections to check your CEF or Syslog data connector prerequisi
If you're using an Azure Virtual Machine as a CEF collector, verify the following: -- Before you deploy the [Common Event Format Data connector python script](./connect-log-forwarder.md), make sure that your Virtual Machine isn't already connected to an existing Log Analytics workspace. You can find this information on the Log Analytics Workspace Virtual Machine list, where a VM that's connected to a Syslog workspace is listed as **Connected**.
+- Before you deploy the [Common Event Format Data connector Python script](./connect-log-forwarder.md), make sure that your Virtual Machine isn't already connected to an existing Log Analytics workspace. You can find this information on the Log Analytics Workspace Virtual Machine list, where a VM that's connected to a Syslog workspace is listed as **Connected**.
- Make sure that Microsoft Sentinel is connected to the correct Log Analytics workspace, with the **SecurityInsights** solution installed.
service-fabric Service Fabric Best Practices Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-infrastructure-as-code.md
You can deploy applications and services onto your Service Fabric cluster via Az
} ```
-To deploy your application using Azure Resource Manager, you first must [create a sfpkg](./service-fabric-package-apps.md#create-an-sfpkg) Service Fabric Application package. The following python script is an example of how to create a sfpkg:
+To deploy your application using Azure Resource Manager, you first must [create a sfpkg](./service-fabric-package-apps.md#create-an-sfpkg) Service Fabric Application package. The following Python script is an example of how to create a sfpkg:
```python # Create SFPKG that needs to be uploaded to Azure Storage Blob Container
service-fabric Service Fabric Best Practices Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-networking.md
The rules described below are the recommended minimum for a typical configuratio
|3930 |Cluster |1025-1027 |TCP |VirtualNetwork |Any |Allow |Yes |3940 |Ephemeral |49152-65534 |TCP |VirtualNetwork |Any |Allow |Yes |3950 |Application |20000-30000 |TCP |VirtualNetwork |Any |Allow |Yes
-|3960 |RDP |3389-3488 |TCP |Internet |Any |Deny |No
+|3960 |RDP |3389 |TCP |Internet |Any |Deny |No
|3970 |SSH |22 |TCP |Internet |Any |Deny |No |3980 |Custom endpoint |443 |TCP |Internet |Any |Deny |No
service-fabric Service Fabric Sfctl Settings Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-settings-telemetry.md
# sfctl settings telemetry Configure telemetry settings local to this instance of sfctl.
-Sfctl telemetry collects command name without parameters provided or their values, sfctl version, OS type, python version, the success or failure of the command, the error message returned.
+Sfctl telemetry collects the command name (without any parameters provided or their values), the sfctl version, the OS type, the Python version, whether the command succeeded or failed, and any error message returned.
## Commands
spring-cloud How To Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-deploy-powershell.md
Title: Create and deploy applications in Azure Spring Cloud using PowerShell
-description: How to create and deploy applications in Azure Spring Cloud using PowerShell
+ Title: Create and deploy applications in Azure Spring Cloud by using PowerShell
+description: How to create and deploy applications in Azure Spring Cloud by using PowerShell
Last updated 2/15/2022
-# Create and deploy applications using PowerShell
+# Create and deploy applications by using PowerShell
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article describes how you can create an instance of Azure Spring Cloud using the [Az.SpringCloud](/powershell/module/Az.SpringCloud) PowerShell module.
+This article describes how you can create an instance of Azure Spring Cloud by using the [Az.SpringCloud](/powershell/module/Az.SpringCloud) PowerShell module.
## Requirements
-* If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+The requirements for completing the steps in this article depend on your Azure subscription:
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
[!INCLUDE [azure-powershell-requirements-no-header.md](../../includes/azure-powershell-requirements-no-header.md)] > [!IMPORTANT]
- > While the **Az.SpringCloud** PowerShell module is in preview, you must install it using
- > the `Install-Module` cmdlet. After this PowerShell module becomes generally available, it will be
- > part of future Az PowerShell releases and available by default from within Azure Cloud
- > Shell.
+ > While the **Az.SpringCloud** PowerShell module is in preview, you must install it by using
+ > the `Install-Module` cmdlet. See the following command. After this PowerShell module becomes generally available, it will be part of future Az PowerShell releases and available by default from within Azure Cloud Shell.
```azurepowershell-interactive Install-Module -Name Az.SpringCloud ``` * If you have multiple Azure subscriptions, choose the appropriate subscription in which the
- resources should be billed. Select a specific subscription using the
- [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+ resources should be billed. Select a specific subscription by using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet:
```azurepowershell-interactive Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
before you begin.
## Create a resource group
-Create an [Azure resource group](../azure-resource-manager/management/overview.md)
-using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)
-cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as
-a group.
-
-The following example creates a resource group with the specified name and in the specified location.
+A resource group is a logical container in which Azure resources are deployed and managed as
+a group. Create an [Azure resource group](../azure-resource-manager/management/overview.md)
+by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)
+cmdlet. The following example creates a resource group with a specified name and location.
```azurepowershell-interactive New-AzResourceGroup -Name <resource group name> -Location eastus ```
-## Provision a new instance of Azure Spring Cloud
+## Provision a new instance
To create a new instance of Azure Spring Cloud, you use the [New-AzSpringCloud](/powershell/module/az.springcloud/new-azspringcloud) cmdlet. The following
-example creates an Azure Spring Cloud service with the specified name in the previously created
-resource group.
+example creates an Azure Spring Cloud service with the name that you specify, in the resource group you created previously.
```azurepowershell-interactive New-AzSpringCloud -ResourceGroupName <resource group name> -name <service instance name> -Location eastus ```
-## Create a new application in Azure Spring Cloud
+## Create a new application
-To create a new App, you use the
-[New-AzSpringCloudApp](/powershell/module/az.springcloud/new-azspringcloudapp) cmdlet. The following
-example creates an app in Azure Spring Cloud named `gateway`.
+To create a new app, you use the
+[New-AzSpringCloudApp](/powershell/module/az.springcloud/new-azspringcloudapp) cmdlet. The following example creates an app in Azure Spring Cloud named `gateway`.
```azurepowershell-interactive New-AzSpringCloudApp -ResourceGroupName <resource group name> -ServiceName <service instance name> -AppName gateway ```
-## Create a new app deployment in Azure Spring Cloud
+## Create a new app deployment
To create a new app Deployment, you use the [New-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/new-azspringcloudappdeployment)
-cmdlet. The following example creates an app deployment in Azure Spring Cloud named `default` for the
-`gateway` app.
+cmdlet. The following example creates an app deployment in Azure Spring Cloud named `default`, for the `gateway` app.
```azurepowershell-interactive New-AzSpringCloudAppDeployment -ResourceGroupName <resource group name> -Name <service instance name> -AppName gateway -DeploymentName default ```
-## Get an Azure Spring Cloud service
+## Get a service and its properties
To get an Azure Spring Cloud service and its properties, you use the [Get-AzSpringCloud](/powershell/module/az.springcloud/get-azspringcloud) cmdlet. The following
example retrieves information about the specified Azure Spring Cloud service.
Get-AzSpringCloud -ResourceGroupName <resource group name> -ServiceName <service instance name> ```
-## Get an application in Azure Spring Cloud
+## Get an application
To get an app and its properties in Azure Spring Cloud, you use the
-[Get-AzSpringCloudApp](/powershell/module/az.springcloud/get-azspringcloudapp) cmdlet. The following
-example retrieves information about the app `gateway`.
+[Get-AzSpringCloudApp](/powershell/module/az.springcloud/get-azspringcloudapp) cmdlet. The following example retrieves information about the app `gateway`.
```azurepowershell-interactive Get-AzSpringCloudApp -ResourceGroupName <resource group name> -ServiceName <service instance name> -AppName gateway ```
-## Get an app deployment in Azure Spring Cloud
+## Get an app deployment
To get an app deployment and its properties in Azure Spring Cloud, you use the
-[Get-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/get-azspringcloudappdeployment)
-cmdlet. The following example retrieves information about the `default` Spring Cloud deployment.
+[Get-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/get-azspringcloudappdeployment) cmdlet. The following example retrieves information about the `default` Azure Spring Cloud deployment.
```azurepowershell-interactive Get-AzSpringCloudAppDeployment -ResourceGroupName <resource group name> -ServiceName <service instance name> -AppName gateway -DeploymentName default
Get-AzSpringCloudAppDeployment -ResourceGroupName <resource group name> -Service
## Clean up resources
-If the resources created in this article aren't needed, you can delete them by running the following
-examples.
+If the resources created in this article aren't needed, you can delete them by running the examples shown in the following sections.
-### Delete an app deployment in Azure Spring Cloud
+### Delete an app deployment
To remove an app deployment in Azure Spring Cloud, you use the
-[Remove-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/remove-azspringcloudappdeployment)
-cmdlet. The following example deletes an app deployed in Azure Spring Cloud named `default` for the
-specified service and app.
+[Remove-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/remove-azspringcloudappdeployment) cmdlet. The following example deletes an app deployed in Azure Spring Cloud named `default`, for the specified service and app.
```azurepowershell-interactive Remove-AzSpringCloudAppDeployment -ResourceGroupName <resource group name> -ServiceName <service instance name> -AppName gateway -DeploymentName default ```
-### Delete an app in Azure Spring Cloud
+### Delete an app
To remove an app in Azure Spring Cloud, you use the
-[Remove-AzSpringCloudApp](/powershell/module/Az.SpringCloud/remove-azspringcloudapp) cmdlet. The
-following example deletes the `gateway` app in the specified service and resource group.
+[Remove-AzSpringCloudApp](/powershell/module/Az.SpringCloud/remove-azspringcloudapp) cmdlet. The following example deletes the `gateway` app in the specified service and resource group.
```azurepowershell Remove-AzSpringCloudApp -ResourceGroupName <resource group name> -ServiceName <service instance name> -AppName gateway ```
-### Delete an Azure Spring Cloud service
+### Delete a service
To remove an Azure Spring Cloud service, you use the
-[Remove-AzSpringCloud](/powershell/module/Az.SpringCloud/remove-azspringcloud) cmdlet. The following
-example deletes the specified Azure Spring Cloud service.
+[Remove-AzSpringCloud](/powershell/module/Az.SpringCloud/remove-azspringcloud) cmdlet. The following example deletes the specified Azure Spring Cloud service.
```azurepowershell Remove-AzSpringCloud -ResourceGroupName <resource group name> -ServiceName <service instance name>
Remove-AzSpringCloud -ResourceGroupName <resource group name> -ServiceName <serv
### Delete the resource group > [!CAUTION]
-> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this article exist in the specified resource group, they will
-> also be deleted.
+> The following example deletes the specified resource group and all resources contained within it. If resources outside the scope of this article exist in the specified resource group, they will also be deleted.
```azurepowershell-interactive Remove-AzResourceGroup -Name <resource group name>
Remove-AzResourceGroup -Name <resource group name>
## Next steps
-[Azure Spring Cloud developer resources](./resources.md).
+[Azure Spring Cloud developer resources](./resources.md)
spring-cloud Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quotas.md
All Azure services set default limits and quotas for resources and features. A
|--|--|--|| | vCPU | per app instance | 1 | 4 | | Memory | per app instance | 2 GB | 8 GB |
-| Azure Spring Cloud service instances | per region per subscription | 10 | 10 |
+| Azure Spring Cloud service instances | per region per subscription | 1 | 1 |
| Total app instances | per Azure Spring Cloud service instance | 25 | 500 | | Custom Domains | per Azure Spring Cloud service instance | 0 | 25 | | Persistent volumes | per Azure Spring Cloud service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps |
static-web-apps Password Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/password-protection.md
+
+ Title: Enable password protection for Azure Static Web Apps
+description: Prevent unauthorized access to your static web app with a password.
++++ Last updated : 03/13/2022+++
+# Configure password protection
+
+You can use a password to protect your app's pre-production environments or all environments. Scenarios when password protection is useful include:
+
+- Limiting access to your static web app to people who have the password
+- Protecting your static web app's staging environments
+
+Password protection is a lightweight feature that offers a limited level of security. To secure your app using an identity provider, use the integrated [Static Web Apps authentication](authentication-authorization.md). You can also restrict access to your app using [IP restrictions](configuration.md#networking) or a [private endpoint](private-endpoint.md).
+
+## Prerequisites
+
+An existing static web app in the Standard plan.
+
+## Enable password protection
+
+1. Open your static web app in the Azure portal.
+
+1. Under the *Settings* menu, select **Configuration**.
+
+1. Select the **General settings** tab.
+
+1. In the *Password protection* section, select **Protect staging environments only** to protect only your app's pre-production environments or select **Protect both production and staging environments** to protect all environments.
+
+ :::image type="content" source="media/password-protection/portal-enable.png" alt-text="Screenshot of enabling password protection":::
+
+1. Enter a password in **Visitor password**. Passwords must be at least eight characters long and contain a capital letter, a lowercase letter, a number, and a symbol.
+
+1. Enter the same password in **Confirm visitor password**.
+
+1. Select the **Save** button.
+
+When visitors first navigate to a protected environment, they're prompted to enter the password before they can view the site.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Authentication and authorization](./authentication-authorization.md)
storage Data Lake Storage Acl Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-python.md
The entries of the ACL give the owning user read, write, and execute permissions
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/ACL_datalake.py" id="Snippet_SetACLRecursively":::
-To see an example that processes ACLs recursively in batches by specifying a batch size, see the python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
+To see an example that processes ACLs recursively in batches by specifying a batch size, see the Python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
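As a minimal sketch of batch processing, the directory client in the azure-storage-file-datalake package accepts a batch size when applying ACLs recursively; the account, container, and directory names below are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# placeholder account, container, and directory names
service = DataLakeServiceClient(account_url="https://<account>.dfs.core.windows.net",
                                credential=DefaultAzureCredential())
directory = service.get_file_system_client("my-container").get_directory_client("my-parent-directory")

# apply the ACL recursively, 2,000 entries per batch
acl = "user::rwx,group::r-x,other::r--"
result = directory.set_access_control_recursive(acl=acl, batch_size=2000)
print(result.counters.directories_successful, result.counters.files_successful)
```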
## Update ACLs recursively
This example sets the ACL of a directory named `my-parent-directory`. This metho
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/ACL_datalake.py" id="Snippet_UpdateACLsRecursively":::
-To see an example that processes ACLs recursively in batches by specifying a batch size, see the python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
+To see an example that processes ACLs recursively in batches by specifying a batch size, see the Python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
## Remove ACL entries recursively
def remove_permission_recursively(is_default_scope):
print(e) ```
-To see an example that processes ACLs recursively in batches by specifying a batch size, see the python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
+To see an example that processes ACLs recursively in batches by specifying a batch size, see the Python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
## Recover from failures
This example returns a continuation token in the event of a failure. The applica
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/ACL_datalake.py" id="Snippet_ResumeContinuationToken":::
-To see an example that processes ACLs recursively in batches by specifying a batch size, see the python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
+To see an example that processes ACLs recursively in batches by specifying a batch size, see the Python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
If you want the process to complete uninterrupted by permission errors, you can specify that.
This example sets ACL entries recursively. If this code encounters a permission
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/ACL_datalake.py" id="Snippet_ContinueOnFailure":::
-To see an example that processes ACLs recursively in batches by specifying a batch size, see the python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
+To see an example that processes ACLs recursively in batches by specifying a batch size, see the Python [sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/storage/azure-storage-file-datalake/samples/datalake_samples_access_control_recursive.py).
[!INCLUDE [updated-for-az](../../../includes/recursive-acl-best-practices.md)]
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Filters include:
| Filter name | Filter type | Notes | Is Required | |-|-|-|-| | blobTypes | An array of predefined enum values. | The current release supports `blockBlob` and `appendBlob`. Only delete is supported for `appendBlob`, set tier is not supported. | Yes |
-| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 case-senstive prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under `https://myaccount.blob.core.windows.net/sample-container/blob1/...` for a rule, the prefixMatch is `sample-container/blob1`. | If you don't define prefixMatch, the rule applies to all blobs within the storage account. | No |
+| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 case-sensitive prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under `https://myaccount.blob.core.windows.net/sample-container/blob1/...` for a rule, the prefixMatch is `sample-container/blob1`. | If you don't define prefixMatch, the rule applies to all blobs within the storage account. | No |
| blobIndexMatch | An array of dictionary values consisting of blob index tag key and value conditions to be matched. Each rule can define up to 10 blob index tag condition. For example, if you want to match all blobs with `Project = Contoso` under `https://myaccount.blob.core.windows.net/` for a rule, the blobIndexMatch is `{"name": "Project","op": "==","value": "Contoso"}`. | If you don't define blobIndexMatch, the rule applies to all blobs within the storage account. | No | To learn more about the blob index feature together with known issues and limitations, see [Manage and find data on Azure Blob Storage with blob index](storage-manage-find-blobs.md).
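To illustrate how these filters combine, the following hypothetical filter block uses the field names from the preceding table (the values are placeholders only):

```python
# hypothetical filter section of a lifecycle management rule
rule_filter = {
    "blobTypes": ["blockBlob"],
    # case-sensitive prefixes; each must start with a container name
    "prefixMatch": ["sample-container/blob1"],
    # match only blobs tagged Project = Contoso
    "blobIndexMatch": [
        {"name": "Project", "op": "==", "value": "Contoso"}
    ],
}
print(rule_filter)
```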
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
You can access resources in a storage account by any language that can make HTTP
- [Azure Storage REST API](/rest/api/storageservices/) - [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage) - [Azure Storage client library for Java/Android](/java/api/overview/azure/storage)-- [Azure Storage client library for Node.js]((/azure/storage/blobs/reference#javascript-client-libraries)
+- [Azure Storage client library for Node.js](/azure/storage/blobs/reference#javascript-client-libraries)
- [Azure Storage client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/storage/azure-storage-blob) - [Azure Storage client library for PHP](https://github.com/Azure/azure-storage-php) - [Azure Storage client library for Ruby](https://github.com/Azure/azure-storage-ruby)
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview.md
This article highlights Microsoft partner companies that deliver a network attac
| ![Panzura](./media/panzura-logo.png) |**Panzura**<br>Panzura is the fabric that transforms Azure cloud storage into a high-performance global file system. By delivering one authoritative data source for all users, Panzura allows enterprises to use Azure as a globally available data center, with all the functionality and speed of a single-site NAS, including automatic file locking, immediate global data consistency, and local file operation performance. |[Partner page](https://panzura.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/panzura-file-system.panzura-freedom-filer)| | ![Pure Storage](./media/pure-logo.png) |**Pure Storage**<br>Pure delivers a modern data experience that empowers organizations to run their operations as a true, automated, storage as-a-service model seamlessly across multiple clouds.|[Partner page](https://www.purestorage.com/company/technology-partners/microsoft.html)<br>[Solution Video](https://azure.microsoft.com/resources/videos/pure-storage-overview)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.pure_storage_cloud_block_store_deployment?tab=Overview)| | ![Qumulo](./media/qumulo-logo.png)|**Qumulo**<br>Qumulo is a fast, scalable, and simple to use file system which makes it easy to store, manage, and run applications that use file data at scale on Microsoft Azure. Qumulo on Azure offers multiple petabytes (PB) of storage capacity and up to 20 GB/s of performance per file system. Windows (SMB) and Linux (NFS) are both natively supported. Patented software architecture delivers a low per-terabyte (TB) cost Media & Entertainment, Genomics, Technology, Natural Resources, and Finance companies all run their most demanding workloads on Qumulo in the cloud. With a Net Promoter Score of 89, customers use Qumulo for its scale, performance and ease of use capabilities like real-time visual insights into how storage is used and award winning Slack based support. Sign up for a free POC today through [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview) or [Qumulo.com](https://qumulo.com/). | [Partner page](https://qumulo.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview)<br>[Datasheet](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWUtF0)|
-| ![Scality](./media/scality-logo.png) |**Scality**<br>Scality builds a software-defined file and object platform designed for on-premise, hybrid, and multi-cloud environments. ScalityΓÇÖs integration with Azure Blob Storage enable enterprises to manage and secure their data between on-premises environments and Azure, and meet the demand of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/scality.scalityconnecthourly?tab=Overview)|
+| ![Scality](./media/scality-logo.png) |**Scality**<br>Scality builds a software-defined file and object platform designed for on-premises, hybrid, and multi-cloud environments. Scality's integration with Azure Blob Storage enables enterprises to manage and secure their data between on-premises environments and Azure, and meet the demands of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)|
| ![Tiger Technology company logo](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure, data management software solutions. Tiger Technology enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br><br> Tiger Bridge is a non-proprietary, software-only data, and storage management system. It blends on-premises and multi-tier cloud storage into a single space, and enables hybrid workflows. This transparent file server extension lets you benefit from Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses several data management challenges, including: file server extension, disaster recovery, cloud migration, backup and archive, remote collaboration, and multi-site sync. It also offers continuous data protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)|
-| ![XenData company logo](./media/xendata-logo.png) |**XenData**<br>XenData software creates multi-tier storage systems that manage files and folders across on-premises storage and Azure Blob Storage. XenData Multi-Site Sync software creates a global file system for distributed teams, enabling them to share and synchronize files across multiple locations. XenData cloud solutions are optimized for video files, supporting video streaming and partial file restore. They are integrated with many complementary software products used in the Media and Entertainment industry and support a variety of workflows. Other industries and applications that use XenData solutions include Oil and Gas, Engineering and Scientific Data, Video Surveillance and Medical Imaging. |[Partner page](https://xendata.com/tech_partners_cloud/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/xendata-inc.sol-15118-gyy?tab=Overview)|
+| ![XenData company logo](./media/xendata-logo.png) |**XenData**<br>XenData software creates multi-tier storage systems that manage files and folders across on-premises storage and Azure Blob Storage. XenData Multi-Site Sync software creates a global file system for distributed teams, enabling them to share and synchronize files across multiple locations. XenData cloud solutions are optimized for video files, supporting video streaming and partial file restore. They are integrated with many complementary software products used in the Media and Entertainment industry and support a variety of workflows. Other industries and applications that use XenData solutions include Oil and Gas, Engineering and Scientific Data, Video Surveillance and Medical Imaging. |[Partner page](https://xendata.com/tech_partners_cloud/azure/)|
Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu). ## Next steps
To learn more about some of our other partners, see:
- [Big data and analytics partners](..\analytics\partner-overview.md) - [Archive, backup, and BCDR partners](..\backup-archive-disaster-recovery\partner-overview.md) - [Container solution partners](..\container-solutions\partner-overview.md)-- [Data management and migration partners](..\data-management\partner-overview.md)
+- [Data management and migration partners](..\data-management\partner-overview.md)
synapse-analytics Apache Spark Job Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-job-definitions.md
In this section, you create an Apache Spark job definition for PySpark (Python).
3. Select **Data** -> **Linked** -> **Azure Data Lake Storage Gen2**, and upload **wordcount.py** and **shakespeare.txt** into your ADLS Gen2 filesystem.
- ![upload python file](./media/apache-spark-job-definitions/upload-python-file.png)
+ ![upload Python file](./media/apache-spark-job-definitions/upload-python-file.png)
4. Select **Develop** hub, select the '+' icon and select **Spark job definition** to create a new Spark job definition.
synapse-analytics Vscode Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/vscode-tool-synapse.md
for (word, count) in sortedCollection:
![select interpreter to start jupyter server](./media/vscode-tool-synapse/select-interpreter-to-start-jupyter-server.png)
-8. Select the python option below.
+8. Select the Python option below.
![choose the below option](./media/vscode-tool-synapse/choose-the-below-option.png)
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
FROM OPENROWSET(
Do not use `OPENROWSET` without an explicitly defined schema because it might impact your performance. Make sure that you use the smallest possible sizes for your columns (for example, VARCHAR(100) instead of the default VARCHAR(8000)). You should use a UTF-8 collation as the default database collation, or set it as an explicit column collation, to avoid the [UTF-8 conversion issue](../troubleshoot/reading-utf8-text.md). Collation `Latin1_General_100_BIN2_UTF8` provides the best performance when you filter data using string columns.
-## Query nested objects and arrays
+## Query nested objects
With Azure Cosmos DB, you can represent more complex data models by composing them as nested objects or arrays. The autosync capability of Azure Synapse Link for Azure Cosmos DB manages the schema representation in the analytical store out of the box, which includes handling nested data types that allow for rich querying from the serverless SQL pool.
For more information, see the following articles:
- [Create and use views in a serverless SQL pool](create-use-views.md) - [Tutorial on building serverless SQL pool views over Azure Cosmos DB and connecting them to Power BI models via DirectQuery](./tutorial-data-analyst.md) - Visit [Synapse link for Cosmos DB self-help page](resources-self-help-sql-on-demand.md#cosmos-db) if you are getting some errors or experiencing performance issues.-- Checkout the learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/).
+- Check out the Learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/).
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Previously updated : 01/28/2022 Last updated : 03/11/2022 # Previous monthly updates in Azure Synapse Analytics This article describes previous month updates to Azure Synapse Analytics. For the most current month's release, check out [Azure Synapse Analytics latest updates](whats-new.md). Each update links to the Azure Synapse Analytics blog and an article that provides more information.
+## Jan 2022 update
+
+The following updates are new to Azure Synapse Analytics this month.
+
+### Apache Spark for Synapse
+
+You can now use four new database templates in Azure Synapse. [Learn more about Automotive, Genomics, Manufacturing, and Pharmaceuticals templates from the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/four-additional-azure-synapse-database-templates-now-available/ba-p/3058044) or the [database templates article](./database-designer/overview-database-templates.md). These templates are currently in public preview and are available within the Synapse Studio gallery.
+
+### Machine Learning
+
+Improvements were made to the Synapse Machine Learning library v0.9.5 (previously called MMLSpark). This release simplifies the creation of massively scalable machine learning pipelines with Apache Spark. To learn more, [read the blog post about the new capabilities in this release](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_3) or see the [full release notes](https://microsoft.github.io/SynapseML/).
+
+### Security
+
+* The Azure Synapse Analytics security overview - A whitepaper that covers the five layers of security. The security layers include authentication, access control, data protection, network security, and threat protection. [Understand each security feature in detail](./guidance/security-white-paper-introduction.md) to implement an industry-standard security baseline and protect your data on the cloud.
+
+* TLS 1.2 is now required for newly created Synapse Workspaces. To learn more, see how [TLS 1.2 provides enhanced security using this article](./security/connectivity-settings.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_6). Login attempts to a newly created Synapse workspace from connections using TLS versions lower than 1.2 will fail.
+
+### Data Integration
+
+* Data quality validation rules using Assert transformation - You can now easily add data quality, data validation, and schema validation to your Synapse ETL jobs by leveraging Assert transformation in Synapse data flows. To learn more, see the [Assert transformation in mapping data flow article](/azure/data-factory/data-flow-assert) or [the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_8).
+
+* Native data flow connector for Dynamics - Synapse data flows can now read and write data directly to Dynamics through the new data flow Dynamics connector. Learn more on how to [Create data sets in data flows to read, transform, aggregate, join, etc. using this article](../data-factory/connector-dynamics-crm-office-365.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_9). You can then write the data back into Dynamics using the built-in Synapse Spark compute.
+
+* IntelliSense and auto-complete added to pipeline expressions - IntelliSense makes creating and editing expressions easy. To learn more, see how to [check your expression syntax, find functions, and add code to your pipelines.](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/intellisense-support-in-expression-builder-for-more-productive/ba-p/3041459)
+
+### Synapse SQL
+
+* COPY schema discovery for complex data ingestion. To learn more, see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_12) or [how GitHub leveraged this functionality in Introducing Automatic Schema Discovery with auto table creation for complex datatypes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introducing-automatic-schema-discovery-with-auto-table-creation/ba-p/3068927).
+
+* Serverless SQL pools now support the HASHBYTES function. HASHBYTES is a T-SQL function which hashes values. Learn how to use [hash values in distributing data using this article](/sql/t-sql/functions/hashbytes-transact-sql) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_13).
+ ## December 2021 update The following updates are new to Azure Synapse Analytics this month.
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Previously updated : 01/28/2022 Last updated : 03/11/2022 # What's new in Azure Synapse Analytics?
-This article lists updates to Azure Synapse Analytics that are published in Jan 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
-
-## Jan 2022 update
+This article lists updates to Azure Synapse Analytics that were published in February 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months' releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
The following updates are new to Azure Synapse Analytics this month.
-### Apache Spark for Synapse
-
-You can now use four new database templates in Azure Synapse. [Learn more about Automotive, Genomics, Manufacturing, and Pharmaceuticals templates from the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/four-additional-azure-synapse-database-templates-now-available/ba-p/3058044) or the [database templates article](./database-designer/overview-database-templates.md). These templates are currently in public preview and are available within the Synapse Studio gallery.
-
-### Machine Learning
-
-Improvements to the Synapse Machine Learning library v0.9.5 (previously called MMLSpark). This release simplifies the creation of massively scalable machine learning pipelines with Apache Spark. To learn more, [read the blog post about the new capabilities in this release](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_3) or see the [full release notes](https://microsoft.github.io/SynapseML/)
-
-### Security
-
-* The Azure Synapse Analytics security overview - A whitepaper that covers the five layers of security. The security layers include authentication, access control, data protection, network security, and threat protection. [Understand each security feature in detailed](./guidance/security-white-paper-introduction.md) to implement an industry-standard security baseline and protect your data on the cloud.
+## SQL
-* TLS 1.2 is now required for newly created Synapse Workspaces. To learn more, see how [TLS 1.2 provides enhanced security using this article](./security/connectivity-settings.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_6). Login attempts to a newly created Synapse workspace from connections using a TLS versions lower than 1.2 will fail.
+* Serverless SQL Pools now support more consistent query execution times. [Learn how Serverless SQL pools automatically detect spikes in read latency and support consistent query execution time.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_2)
-### Data Integration
+* [The `OPENJSON` function makes it easy to get array element indexes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_3). To learn more, see how the OPENJSON function in a serverless SQL pool allows you to [parse nested arrays and return one row for each JSON array element with the index of each element](/sql/t-sql/functions/openjson-transact-sql?view=azure-sqldw-latest&preserve-view=true#array-element-identity).
-* Data quality validation rules using Assert transformation - You can now easily add data quality, data validation, and schema validation to your Synapse ETL jobs by leveraging Assert transformation in Synapse data flows. To learn more, see the [Assert transformation in mapping data flow article](/azure/data-factory/data-flow-assert) or [the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_8).
+## Data integration
-* Native data flow connector for Dynamics - Synapse data flows can now read and write data directly to Dynamics through the new data flow Dynamics connector. Learn more on how to [Create data sets in data flows to read, transform, aggregate, join, etc. using this article](../data-factory/connector-dynamics-crm-office-365.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_9). You can then write the data back into Dynamics using the built-in Synapse Spark compute.
+* [Upserting data is now supported by the copy activity](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_5). See how you can natively load data into a temporary table and then merge that data into a sink table with [upsert.](../data-factory/connector-azure-sql-database.md?tabs=data-factory#upsert-data)
-* IntelliSense and auto-complete added to pipeline expressions - IntelliSense makes creating expressions, editing them easy. To learn more, see how to [check your expression syntax, find functions, and add code to your pipelines.](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/intellisense-support-in-expression-builder-for-more-productive/ba-p/3041459)
+* [Transform Dynamics Data Visually in Synapse Data Flows.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_6) Learn more on how to use a [Dynamics dataset or an inline dataset as source and sink types to transform data at scale.](../data-factory/connector-dynamics-crm-office-365.md?tabs=data-factory#mapping-data-flow-properties)
-### Synapse SQL
+* [Connect to your SQL sources in data flows using Always Encrypted](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_7). To learn more, see [how to securely connect to your SQL databases from Synapse data flows using Always Encrypted.](../data-factory/connector-azure-sql-database.md?tabs=data-factory)
-* COPY schema discovery for complex data ingestion. To learn more, see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_12) or [how Github leveraged this functionality in Introducing Automatic Schema Discovery with auto table creation for complex datatypes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introducing-automatic-schema-discovery-with-auto-table-creation/ba-p/3068927).
+* [Capture descriptions from asserts in Data Flows](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_8). To learn more, see [how to define your own dynamic descriptive messages](../data-factory/data-flow-expressions-usage.md#assertErrorMessages) in the assert data flow transformation at the row or column level.
-* Serverless SQL pools now support the HASHBYTES function. HASHBYTES is a T-SQL function which hashes values. Learn how to use [hash values in distributing data using this article](/sql/t-sql/functions/hashbytes-transact-sql) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_13).
+* [Easily define schemas for complex type fields.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_9) To learn more, see how you can make the engine [automatically detect the schema of an embedded complex field inside a string column](../data-factory/data-flow-parse.md).
## Next steps
time-series-insights How To Tsi Gen1 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-tsi-gen1-migration.md
+
+ Title: 'Time Series Insights Gen1 migration to Azure Data Explorer | Microsoft Docs'
+description: How to migrate Azure Time Series Insights Gen 1 environments to Azure Data Explorer.
+++++++ Last updated : 3/15/2022+++
+# Migrating Time Series Insights Gen1 to Azure Data Explorer
+
+## Overview
+
+The recommendation is to set up an Azure Data Explorer cluster with a new consumer group on the Event Hub or IoT Hub, wait for the retention period to pass, and let Azure Data Explorer fill with the same data as the Time Series Insights environment.
+If telemetry data must be exported from the Time Series Insights environment, use the Time Series Insights Query API to download the events in batches and serialize them in the required format.
+For reference data, use the Time Series Insights Explorer or the Reference Data API to download the reference data set and upload it into Azure Data Explorer as another table. Then, use materialized views in Azure Data Explorer to join the reference data with the telemetry data. Use a materialized view with the arg_max() aggregation function, which gets the latest record per entity, as demonstrated in the following example. For more information about materialized views, see [Materialized views use cases](./data-explorer/kusto/management/materialized-views/materialized-view-overview.md#materialized-views-use-cases).
+
+```
+.create materialized-view MVName on table T
+{
+ T
+ | summarize arg_max(Column1,*) by Column2
+}
+```
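+
+After the materialized view is created, you can join the reference data with telemetry. The following is a minimal sketch only; the telemetry table name `events`, the materialized view name `MVName`, and the shared key column `deviceid` are assumed names that depend on how you ingested the data.
+
+```KQL
+// Sketch: enrich each telemetry row with the latest reference data record.
+// 'events', 'MVName', and 'deviceid' are assumed names; adjust to your schema.
+events
+| lookup kind=leftouter materialized_view("MVName") on deviceid
+| take 100
+```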
+## Translate Time Series Insights Queries to KQL
+
+For queries, the recommendation is to use KQL in Azure Data Explorer.
+
+#### Events
+```TSQ
+{
+ "searchSpan": {
+ "from": "2021-11-29T22:09:32.551Z",
+ "to": "2021-12-06T22:09:32.551Z"
+ },
+ "predicate": {
+ "predicateString": "([device_id] = 'device_0') AND ([has_error] != null OR [error_code] != null)"
+ },
+ "top": {
+ "sort": [
+ {
+ "input": {
+ "builtInProperty": "$ts"
+ },
+ "order": "Desc"
+ }
+ ],
+ "count": 100
+ }
+}
+```
+```KQL
+ events
+| where _timestamp >= datetime("2021-11-29T22:09:32.551Z") and _timestamp < datetime("2021-12-06T22:09:32.551Z") and deviceid == "device_0" and (not(isnull(haserror)) or not(isempty(errorcode)))
+| top 100 by _timestamp desc
+
+```
+
+#### Aggregates
+
+```TSQ
+{
+ "searchSpan": {
+ "from": "2021-12-04T22:30:00Z",
+ "to": "2021-12-06T22:30:00Z"
+ },
+ "predicate": {
+ "eq": {
+ "left": {
+ "property": "DeviceId",
+ "type": "string"
+ },
+ "right": "device_0"
+ }
+ },
+ "aggregates": [
+ {
+ "dimension": {
+ "uniqueValues": {
+ "input": {
+ "property": "DeviceId",
+ "type": "String"
+ },
+ "take": 1
+ }
+ },
+ "aggregate": {
+ "dimension": {
+ "dateHistogram": {
+ "input": {
+ "builtInProperty": "$ts"
+ },
+ "breaks": {
+ "size": "2d"
+ }
+ }
+ },
+ "measures": [
+ {
+ "count": {}
+ },
+ {
+ "sum": {
+ "input": {
+ "property": "DataValue",
+ "type": "Double"
+ }
+ }
+ },
+ {
+ "min": {
+ "input": {
+ "property": "DataValue",
+ "type": "Double"
+ }
+ }
+ },
+ {
+ "max": {
+ "input": {
+ "property": "DataValue",
+ "type": "Double"
+ }
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+
+```
+```KQL
+ let _q = events | where _timestamp >= datetime("2021-12-04T22:30:00Z") and _timestamp < datetime("2021-12-06T22:30:00Z") and deviceid == "device_0";
+let _dimValues0 = _q | project deviceid | sample-distinct 1 of deviceid;
+_q
+| where deviceid in (_dimValues0) or isnull(deviceid)
+| summarize
+ _meas0 = count(),
+ _meas1 = iff(isnotnull(any(datavalue)), sum(datavalue), any(datavalue)),
+ _meas2 = min(datavalue),
+    _meas3 = max(datavalue)
+ by _dim0 = deviceid, _dim1 = bin(_timestamp, 2d)
+| project
+ _dim0,
+ _dim1,
+ _meas0,
+ _meas1,
+ _meas2,
+    _meas3
+| sort by _dim0 nulls last, _dim1 nulls last
+```
+
time-series-insights How To Tsi Gen2 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-tsi-gen2-migration.md
+
+ Title: 'Time Series Insights Gen2 migration to Azure Data Explorer | Microsoft Docs'
+description: How to migrate Azure Time Series Insights Gen 2 environments to Azure Data Explorer.
+++++++ Last updated : 3/15/2022+++
+# Migrating Time Series Insights (TSI) Gen2 to Azure Data Explorer
+
+## Overview
+
+High-level migration recommendations.
+
+| Feature | Gen2 State | Migration Recommended |
+| | | |
+| Ingesting JSON from Hub with flattening and escaping | TSI Ingestion | ADX - OneClick Ingest / Wizard |
+| Open Cold store | Customer Storage Account | [Continuous data export](/azure/data-explorer/kusto/management/data-export/continuous-data-export) to customer specified external table in ADLS. |
+| PBI Connector | Private Preview | Use ADX PBI Connector. Rewrite TSQ to KQL manually. |
+| Spark Connector | Private Preview. Query telemetry data. Query model data. | Migrate data to ADX. Use ADX Spark connector for telemetry data + export model to JSON and load in Spark. Rewrite queries in KQL. |
+| Bulk Upload | Private Preview | Use ADX OneClick Ingest and LightIngest. Optionally, set up partitioning within ADX. |
+| Time Series Model | | Can be exported as JSON file. Can be imported to ADX to perform joins in KQL. |
+| TSI Explorer | Toggling warm and cold | ADX Dashboards |
+| Query language | Time Series Queries (TSQ) | Rewrite queries in KQL. Use Kusto SDKs instead of TSI ones. |
+
+## Migrating Telemetry
+
+Use the `PT=Time` folder in the storage account to retrieve a copy of all telemetry in the environment. For more information, see [Data Storage](./concepts-storage.md#cold-store).
+
+### Migration Step 1 - Get Statistics about Telemetry Data
+
+Gather the following data:
+1. Environment overview
+ - Record the Environment ID from the first part of the Data Access FQDN (for example, d390b0b0-1445-4c0c-8365-68d6382c1c2a from .env.crystal-dev.windows-int.net).
+1. Environment Overview -> Storage Configuration -> Storage Account
+1. Use Storage Explorer to get folder statistics.
+ - Record the size and the number of blobs of the `PT=Time` folder. For customers in the private preview of Bulk Import, also record the `PT=Import` size and number of blobs.
++
+### Migration Step 2 - Migrate Telemetry To ADX
+
+#### Create ADX cluster
+
+1. Define the cluster size based on data size using the ADX Cost Estimator.
+ 1. From Event Hubs (or IoT Hub) metrics, retrieve the rate at which data is ingested per day. From the Storage Account connected to the TSI environment, retrieve how much data there is in the blob container used by TSI. This information will be used to compute the ideal size of an ADX Cluster for your environment.
+ 1. Open [the Azure Data Explorer Cost Estimator](https://dataexplorer.azure.com/AzureDataExplorerCostEstimator.html) and fill the existing fields with the information found. Set "Workload type" to "Storage Optimized", and "Hot Data" to the total amount of data queried actively.
+ 1. After providing all the information, the Azure Data Explorer Cost Estimator will suggest a VM size and number of instances for your cluster. Analyze whether the size of actively queried data will fit in the hot cache. Multiply the number of instances suggested by the cache size of the VM size, for example:
+ - Cost Estimator suggestion: 9x DS14 + 4 TB (cache)
+ - Total Hot Cache suggested: 36 TB = [9x (instances) x 4 TB (of Hot Cache per node)]
+ 1. More factors to consider:
+ - Environment growth: when planning the ADX Cluster size, consider data growth over time.
+ - Hydration and Partitioning: when defining the number of instances in ADX Cluster, consider extra nodes (by 2-3x) to speed up hydration and partitioning.
+ - For more information about compute selection, see [Select the correct compute SKU for your Azure Data Explorer cluster](/azure/data-explorer/manage-cluster-choose-sku).
+1. To best monitor your cluster and the data ingestion, you should enable Diagnostic Settings and send the data to a Log Analytics Workspace.
+ 1. In the Azure Data Explorer blade, go to "Monitoring | Diagnostic settings" and select "Add diagnostic setting".
+
+ :::image type="content" source="media/gen2-migration/adx-diagnostic.png" alt-text="Screenshot of the Azure Data Explorer blade Monitoring | Diagnostic settings" lightbox="media/gen2-migration/adx-diagnostic.png":::
+
+ 1. Fill in the following
+ 1. Diagnostic setting name: Display Name for this configuration
+ 1. Logs: At minimum select SucceededIngestion, FailedIngestion, IngestionBatching
+ 1. Select the Log Analytics Workspace to send the data to (if you don't have one, you'll need to provision one before this step)
+
+ :::image type="content" source="media/gen2-migration/adx-log-analytics.png" alt-text="Screenshot of the Azure Data Explorer Log Analytics Workspace" lightbox="media/gen2-migration/adx-log-analytics.png":::
+
+1. Data partitioning.
+ 1. For small data sizes, the default ADX partitioning is enough. For more complex scenarios with large datasets and a high push rate, custom ADX data partitioning is more appropriate. Data partitioning is beneficial in scenarios such as the following:
+ 1. Improving query latency in big data sets.
+ 1. When querying historical data.
+ 1. When ingesting out-of-order data.
+ 1. The custom data partitioning should include:
+ 1. The timestamp column, which results in time-based partitioning of extents.
+ 1. A string-based column, which corresponds to the Time Series ID with highest cardinality.
+ 1. An example of data partitioning containing a Time Series ID column and a timestamp column is:
+
+```
+.alter table events policy partitioning
+ {
+ "PartitionKeys": [
+ {
+ "ColumnName": "timeSeriesId",
+ "Kind": "Hash",
+ "Properties": {
+ "Function": "XxHash64",
+ "MaxPartitionCount": 32,
+ "PartitionAssignmentMode": "Uniform"
+ }
+ },
+ {
+ "ColumnName": "timestamp",
+ "Kind": "UniformRange",
+ "Properties": {
+ "Reference": "1970-01-01T00:00:00",
+ "RangeSize": "1.00:00:00",
+ "OverrideCreationTime": true
+ }
+ }
+ ] ,
+ "EffectiveDateTime": "1970-01-01T00:00:00",
+ "MinRowCountPerOperation": 0,
+ "MaxRowCountPerOperation": 0,
+ "MaxOriginalSizePerOperation": 0
+ }
+```
+For more information, see [ADX Data Partitioning Policy](/azure/data-explorer/kusto/management/partitioningpolicy).
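+
+To verify the policy after applying it, you can query it back. A small sketch, assuming the table is named `events` as in the example above:
+
+```KQL
+// Show the partitioning policy currently set on the table.
+.show table events policy partitioning
+```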
+
+#### Prepare for Data Ingestion
+
+1. Go to [https://dataexplorer.azure.com](https://dataexplorer.azure.com).
+
+ :::image type="content" source="media/gen2-migration/adx-landing-page.png" alt-text="Screenshot of the Azure Data Explorer landing page" lightbox="media/gen2-migration/adx-landing-page.png":::
+
+1. Go to the Data tab and select 'Ingest from blob container'
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-blob.png" alt-text="Screenshot of the Azure Data Explorer ingestion from blob container" lightbox="media/gen2-migration/adx-ingest-blob.png":::
+
+1. Select Cluster, Database, and create a new Table with the name you choose for the TSI data
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-table.png" alt-text="Screenshot of the Azure Data Explorer ingestion selection of cluster, database, and table" lightbox="media/gen2-migration/adx-ingest-table.png":::
+
+1. Select Next: Source
+1. In the Source tab select:
+ 1. Historical data
+ 1. "Select Container"
+ 1. Choose the Subscription and Storage account for your TSI data
+ 1. Choose the container that correlates to your TSI Environment
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-container.png" alt-text="Screenshot of the Azure Data Explorer ingestion selection of container" lightbox="media/gen2-migration/adx-ingest-container.png":::
+
+1. Select Advanced settings
+ 1. Creation time pattern: '/'yyyyMMddHHmmssfff'_'
+ 1. Blob name pattern: *.parquet
+ 1. Select "Don't wait for ingestion to complete"
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-advanced.png" alt-text="Screenshot of the Azure Data Explorer ingestion selection of advanced settings" lightbox="media/gen2-migration/adx-ingest-advanced.png":::
+
+1. Under File Filters, add the Folder path `V=1/PT=Time`
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-folder-path.png" alt-text="Screenshot of the Azure Data Explorer ingestion selection of folder path" lightbox="media/gen2-migration/adx-ingest-folder-path.png":::
+
+1. Select Next: Schema
+ > [!NOTE]
+ > TSI applies some flattening and escaping when persisting columns in Parquet files. See these links for more details: https://docs.microsoft.com/azure/time-series-insights/concepts-json-flattening-escaping-rules, https://docs.microsoft.com/azure/time-series-insights/ingestion-rules-update.
+- If the schema is unknown or varying:
+ 1. Remove all columns that are infrequently queried, leaving at least timestamp and TSID column(s).
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-schema.png" alt-text="Screenshot of the Azure Data Explorer ingestion selection of schema" lightbox="media/gen2-migration/adx-ingest-schema.png":::
+
+ 1. Add new column of dynamic type and map it to the whole record using $ path.
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-dynamic-type.png" alt-text="Screenshot of the Azure Data Explorer ingestion for dynamic type" lightbox="media/gen2-migration/adx-ingest-dynamic-type.png":::
+
+ Example:
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-dynamic-type-example.png" alt-text="Screenshot of the Azure Data Explorer ingestion for dynamic type example" lightbox="media/gen2-migration/adx-ingest-dynamic-type-example.png":::
+
+- If the schema is known or fixed:
+ 1. Confirm that the data looks correct. Correct any types if needed.
+ 1. Select Next: Summary
+
+Copy the LightIngest command and store it somewhere so you can use it in the next step.
++
+## Data Ingestion
+
+Before ingesting data you need to install the [LightIngest tool](/azure/data-explorer/lightingest#prerequisites).
+The command generated from the One-Click tool includes a SAS token. It's best to generate a new one so that you have control over the expiration time. In the portal, navigate to the blob container for the TSI environment and select 'Shared access token'.
++
+> [!NOTE]
+> It's also recommended to scale up your cluster before kicking off a large ingestion. For instance, D14 or D32 with 8+ instances.
+1. Set the following
+ 1. Permissions: Read and List
+ 1. Expiry: Set to a period within which you're comfortable that the data migration will be complete
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-sas-expiry.png" alt-text="Screenshot of the Azure Data Explorer ingestion for permission expiry" lightbox="media/gen2-migration/adx-ingest-sas-expiry.png":::
+
+1. Select 'Generate SAS token and URL' and copy the 'Blob SAS URL'
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-sas-blob.png" alt-text="Screenshot of the Azure Data Explorer ingestion for SAS Blob URL" lightbox="media/gen2-migration/adx-ingest-sas-blob.png":::
+
+1. Go to the LightIngest command that you copied previously. Replace the -source parameter in the command with this 'Blob SAS URL'
+1. `Option 1: Ingest All Data`. For smaller environments, you can ingest all of the data with a single command.
+ 1. Open a command prompt and change to the directory where the LightIngest tool was extracted. Once there, paste the LightIngest command and execute it.
+
+ :::image type="content" source="media/gen2-migration/adx-ingest-lightingest-prompt.png" alt-text="Screenshot of the Azure Data Explorer ingestion for command prompt" lightbox="media/gen2-migration/adx-ingest-lightingest-prompt.png":::
+
+1. `Option 2: Ingest Data by Year or Month`. For larger environments, or to test on a smaller data set, you can filter the LightIngest command further.
+ 1. By Year
+ > Change your -prefix parameter
+ > Before: -prefix:"V=1/PT=Time"
+ > After: -prefix:"V=1/PT=Time/Y=<Year>"
+ > Example: -prefix:"V=1/PT=Time/Y=2021"
+ 1. By Month
+ > Change your -prefix parameter
+ > Before: -prefix:"V=1/PT=Time"
+ > After: -prefix:"V=1/PT=Time/Y=<Year>/M=<month #>"
+ > Example: -prefix:"V=1/PT=Time/Y=2021/M=03"
+
+Once you've modified the command, execute it as above. Once the ingestion is complete (use the monitoring option below), modify the command for the next year and month you want to ingest.
+
+## Monitoring Ingestion
+
+The LightIngest command included the -dontWait flag, so the command itself won't wait for ingestion to complete. The best way to monitor the progress while it's happening is to use the "Insights" tab within the portal.
+Open the Azure Data Explorer cluster's section within the portal and go to 'Monitoring | Insights'.
++
+You can use the 'Ingestion (preview)' section with the following settings to monitor the ingestion as it's happening:
+- Time range: Last 30 minutes
+- Look at Successful and by Table
+- If you have any failures, look at Failed and by Table
++
+You'll know that the ingestion is complete once you see the metrics go to 0 for your table. If you want to see more details, you can use Log Analytics. On the Azure Data Explorer cluster section, select the 'Log' tab:
++
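+
+As an alternative to Log Analytics, you can also check recent ingestion errors directly in the cluster. A minimal sketch using the built-in ingestion failures command (the 30-minute window is just an example):
+
+```KQL
+// List recent ingestion failures reported by the cluster.
+.show ingestion failures
+| where FailedOn > ago(30m)
+| project FailedOn, Table, Details
+```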
+#### Useful Queries
+
+Understand the schema if a dynamic schema is used (the examples below assume the table is named `events`, as in the queries later in this article):
+```
+events
+| project p=treepath(fullrecord)
+| mv-expand p
+| summarize by tostring(p)
+```
+
+Accessing values in an array:
+```
+events
+| where id_string == "a"
+| summarize avg(todouble(fullrecord.['nestedArray_v_double'])) by bin(timestamp, 1s)
+| render timechart
+```
+
+## Migrating Time Series Model (TSM) to Azure Data Explorer
+
+The model can be download in JSON format from TSI Environment using TSI Explorer UX or TSM Batch API.
+Then the model can be imported to another system like Azure Data Explorer.
+
+1. Download TSM from TSI UX.
+1. Delete first three lines using VSCode or another editor.
+
+ :::image type="content" source="media/gen2-migration/adx-tsm-1.png" alt-text="Screenshot of TSM migration to the Azure Data Explorer - Delete first 3 lines" lightbox="media/gen2-migration/adx-tsm-1.png":::
+
+1. Using VSCode or another editor, search and replace as regex `\},\n \{` with `}{`
+
+ :::image type="content" source="media/gen2-migration/adx-tsm-2.png" alt-text="Screenshot of TSM migration to the Azure Data Explorer - search and replace" lightbox="media/gen2-migration/adx-tsm-2.png":::
+
+1. Ingest as JSON into ADX as a separate table using Upload from file functionality.
+
+ :::image type="content" source="media/gen2-migration/adx-tsm-3.png" alt-text="Screenshot of TSM migration to the Azure Data Explorer - Ingest as JSON" lightbox="media/gen2-migration/adx-tsm-3.png":::
+
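+Once the model is ingested as its own table, it can be joined with telemetry in KQL. The following is a minimal sketch only; the table names `events` and `tsm`, the join key `timeSeriesId`, and the projected columns are assumed names that depend on your flattened schema.
+
+```KQL
+// Sketch: enrich telemetry with Time Series Model metadata.
+// 'events', 'tsm', 'timeSeriesId', 'name', and 'description' are assumed names.
+events
+| join kind=leftouter (tsm | project timeSeriesId, name, description) on timeSeriesId
+| take 100
+```
+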
+## Translate Time Series Queries (TSQ) to KQL
+
+#### GetEvents
+
+```TSQ
+{
+ "getEvents": {
+ "timeSeriesId": [
+ "assest1",
+ "siteId1",
+ "dataId1"
+ ],
+ "searchSpan": {
+ "from": "2021-11-01T00:00:0.0000000Z",
+ "to": "2021-11-05T00:00:00.000000Z"
+ },
+ "inlineVariables": {},
+ }
+}
+```
+
+```KQL
+events
+| where timestamp >= datetime(2021-11-01T00:00:0.0000000Z) and timestamp < datetime(2021-11-05T00:00:00.000000Z)
+| where assetId_string == "assest1" and siteId_string == "siteId1" and dataid_string == "dataId1"
+| take 10000
+```
++
+#### GetEvents with filter
+
+ ```TSQ
+{
+ "getEvents": {
+ "timeSeriesId": [
+ "deviceId1",
+ "siteId1",
+ "dataId1"
+ ],
+ "searchSpan": {
+ "from": "2021-11-01T00:00:0.0000000Z",
+ "to": "2021-11-05T00:00:00.000000Z"
+ },
+ "filter": {
+      "tsx": "$event.sensors.sensor.String = 'status' AND $event.sensors.unit.String = 'ONLINE'"
+ }
+ }
+}
+```
+
+```KQL
+events
+| where timestamp >= datetime(2021-11-01T00:00:0.0000000Z) and timestamp < datetime(2021-11-05T00:00:00.000000Z)
+| where deviceId_string== "deviceId1" and siteId_string == "siteId1" and dataId_string == "dataId1"
+| where ['sensors.sensor_string'] == "status" and ['sensors.unit_string'] == "ONLINE"
+| take 10000
+```
++
+#### GetEvents with projected variable
+
+```TSQ
+{
+ "getEvents": {
+ "timeSeriesId": [
+ "deviceId1",
+ "siteId1",
+ "dataId1"
+ ],
+ "searchSpan": {
+ "from": "2021-11-01T00:00:0.0000000Z",
+ "to": "2021-11-05T00:00:00.000000Z"
+ },
+ "inlineVariables": {},
+ "projectedVariables": [],
+ "projectedProperties": [
+ {
+ "name": "sensors.value",
+ "type": "String"
+ },
+ {
+ "name": "sensors.value",
+ "type": "bool"
+ },
+ {
+ "name": "sensors.value",
+ "type": "Double"
+ }
+ ]
+ }
+}
+```
+
+```KQL
+events
+| where timestamp >= datetime(2021-11-01T00:00:0.0000000Z) and timestamp < datetime(2021-11-05T00:00:00.000000Z)
+| where deviceId_string== "deviceId1" and siteId_string == "siteId1" and dataId_string == "dataId1"
+| take 10000
+| project timestamp, sensorStringValue= ['sensors.value_string'], sensorBoolValue= ['sensors.value_bool'], sensorDoubleValue= ['sensors.value_double']
+```
+
+#### AggregateSeries
+
+```TSQ
+{
+ "aggregateSeries": {
+ "timeSeriesId": [
+ "deviceId1"
+ ],
+ "searchSpan": {
+ "from": "2021-11-01T00:00:00.0000000Z",
+ "to": "2021-11-05T00:00:00.0000000Z"
+ },
+ "interval": "PT1M",
+ "inlineVariables": {
+ "sensor": {
+ "kind": "numeric",
+ "value": {
+ "tsx": "coalesce($event.sensors.value.Double, todouble($event.sensors.value.Long))"
+ },
+ "aggregation": {
+ "tsx": "avg($value)"
+ }
+ }
+ },
+ "projectedVariables": [
+ "sensor"
+ ]
+  }
+}
+```
+
+```KQL
+events
+| where timestamp >= datetime(2021-11-01T00:00:00.0000000Z) and timestamp < datetime(2021-11-05T00:00:00.0000000Z)
+| where deviceId_string == "deviceId1"
+| summarize avgSensorValue= avg(coalesce(['sensors.value_double'], todouble(['sensors.value_long']))) by bin(IntervalTs = timestamp, 1m)
+| project IntervalTs, avgSensorValue
+```
+
+#### AggregateSeries with filter
+
+```TSQ
+{
+ "aggregateSeries": {
+ "timeSeriesId": [
+ "deviceId1"
+ ],
+ "searchSpan": {
+ "from": "2021-11-01T00:00:00.0000000Z",
+ "to": "2021-11-05T00:00:00.0000000Z"
+ },
+ "filter": {
+ "tsx": "$event.sensors.sensor.String = 'heater' AND $event.sensors.location.String = 'floor1room12'"
+ },
+ "interval": "PT1M",
+ "inlineVariables": {
+ "sensor": {
+ "kind": "numeric",
+ "value": {
+ "tsx": "coalesce($event.sensors.value.Double, todouble($event.sensors.value.Long))"
+ },
+ "aggregation": {
+ "tsx": "avg($value)"
+ }
+ }
+ },
+ "projectedVariables": [
+ "sensor"
+ ]
+ }
+}
+```
+
+```KQL
+events
+| where timestamp >= datetime(2021-11-01T00:00:00.0000000Z) and timestamp < datetime(2021-11-05T00:00:00.0000000Z)
+| where deviceId_string == "deviceId1"
+| where ['sensors.sensor_string'] == "heater" and ['sensors.location_string'] == "floor1room12"
+| summarize avgSensorValue= avg(coalesce(['sensors.value_double'], todouble(['sensors.value_long']))) by bin(IntervalTs = timestamp, 1m)
+| project IntervalTs, avgSensorValue
+```
+
+## Migration from TSI Power BI Connector to ADX Power BI Connector
+
+The manual steps involved in this migration are:
+1. Convert Power BI query to TSQ
+1. Convert TSQ to KQL
+Power BI query to TSQ:
+The Power BI query copied from the TSI UX Explorer looks like the following.
+#### For Raw Data (GetEvents API)
+```
+{"storeType":"ColdStore","isSearchSpanRelative":false,"clientDataType":"RDX_20200713_Q","environmentFqdn":"6988946f-2b5c-4f84-9921-530501fbab45.env.timeseries.azure.com", "queries":[{"getEvents":{"searchSpan":{"from":"2019-10-31T23:59:39.590Z","to":"2019-11-01T05:22:18.926Z"},"timeSeriesId":["Arctic Ocean",null],"take":250000}}]}
+```
+- To convert it to TSQ, build the JSON from the above payload. The GetEvents API documentation also has examples to help you understand it better: Query - Execute - REST API (Azure Time Series Insights) | Microsoft Docs
+- The converted TSQ looks like the following. It's the JSON payload inside "queries".
+```
+{
+ "getEvents": {
+ "timeSeriesId": [
+ "Arctic Ocean",
+ "null"
+ ],
+ "searchSpan": {
+ "from": "2019-10-31T23:59:39.590Z",
+ "to": "2019-11-01T05:22:18.926Z"
+ },
+ "take": 250000
+ }
+}
+```
+
+#### For Aggregate Data (Aggregate Series API)
+
+- For a single inline variable, the Power BI query from the TSI UX Explorer looks like the following:
+```
+{"storeType":"ColdStore","isSearchSpanRelative":false,"clientDataType":"RDX_20200713_Q","environmentFqdn":"6988946f-2b5c-4f84-9921-530501fbab45.env.timeseries.azure.com", "queries":[{"aggregateSeries":{"searchSpan":{"from":"2019-10-31T23:59:39.590Z","to":"2019-11-01T05:22:18.926Z"},"timeSeriesId":["Arctic Ocean",null],"interval":"PT1M", "inlineVariables":{"EventCount":{"kind":"aggregate","aggregation":{"tsx":"count()"}}},"projectedVariables":["EventCount"]}}]}
+```
+- To convert it to TSQ, build the JSON from the above payload. The AggregateSeries API documentation also has examples to help you understand it better: [Query - Execute - REST API (Azure Time Series Insights) | Microsoft Docs](/azure/rest/api/time-series-insights/dataaccessgen2/query/execute#queryaggregateseriespage1)
+- The converted TSQ looks like the following. It's the JSON payload inside "queries".
+```
+{
+ "aggregateSeries": {
+ "timeSeriesId": [
+ "Arctic Ocean",
+ "null"
+ ],
+ "searchSpan": {
+ "from": "2019-10-31T23:59:39.590Z",
+ "to": "2019-11-01T05:22:18.926Z"
+ },
+ "interval": "PT1M",
+ "inlineVariables": {
+ "EventCount": {
+ "kind": "aggregate",
+ "aggregation": {
+ "tsx": "count()"
+ }
+ }
+ },
+ "projectedVariables": [
+      "EventCount"
+ ]
+ }
+}
+```
+- For more than one inline variable, append the JSON into "inlineVariables" as shown in the example below. The Power BI query for more than one inline variable looks like the following:
+```
+{"storeType":"ColdStore","isSearchSpanRelative":false,"clientDataType":"RDX_20200713_Q","environmentFqdn":"6988946f-2b5c-4f84-9921-530501fbab45.env.timeseries.azure.com","queries":[{"aggregateSeries":{"searchSpan":{"from":"2019-10-31T23:59:39.590Z","to":"2019-11-01T05:22:18.926Z"},"timeSeriesId":["Arctic Ocean",null],"interval":"PT1M", "inlineVariables":{"EventCount":{"kind":"aggregate","aggregation":{"tsx":"count()"}}},"projectedVariables":["EventCount"]}}, {"aggregateSeries":{"searchSpan":{"from":"2019-10-31T23:59:39.590Z","to":"2019-11-01T05:22:18.926Z"},"timeSeriesId":["Arctic Ocean",null],"interval":"PT1M", "inlineVariables":{"Magnitude":{"kind":"numeric","value":{"tsx":"$event['mag'].Double"},"aggregation":{"tsx":"max($value)"}}},"projectedVariables":["Magnitude"]}}]}
+
+{
+ "aggregateSeries": {
+ "timeSeriesId": [
+ "Arctic Ocean",
+ "null"
+ ],
+ "searchSpan": {
+ "from": "2019-10-31T23:59:39.590Z",
+ "to": "2019-11-01T05:22:18.926Z"
+ },
+ "interval": "PT1M",
+ "inlineVariables": {
+ "EventCount": {
+ "kind": "aggregate",
+ "aggregation": {
+ "tsx": "count()"
+ }
+ },
+ "Magnitude": {
+ "kind": "numeric",
+ "value": {
+ "tsx": "$event['mag'].Double"
+ },
+ "aggregation": {
+ "tsx": "max($value)"
+ }
+ }
+ },
+ "projectedVariables": [
+ "EventCount",
+      "Magnitude"
+ ]
+ }
+}
+```
+- If you want to query the latest data ("isSearchSpanRelative": true), manually calculate the searchSpan as described below:
+  - Find the difference between "from" and "to" in the Power BI payload. Call that difference "D", where "D" = "to" - "from".
+  - Take the current timestamp ("T") and subtract the difference obtained in the first step. The result is the new "from" ("F") of the searchSpan, where "F" = "T" - "D".
+  - The new "from" is "F" obtained in step 2, and the new "to" is "T" (the current timestamp).
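+
+A minimal KQL sketch of this calculation, assuming the telemetry table is named `events` (as in the earlier examples) and the original payload spanned one day, so "D" = 1d:
+
+```KQL
+// Sketch: query the latest data for a relative search span.
+// D = "to" - "from" from the original payload; T = now(); F = T - D.
+let D = 1d;
+let T = now();
+let F = T - D;
+events
+| where timestamp >= F and timestamp < T
+| take 250000
+```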
time-series-insights Migration To Adx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/migration-to-adx.md
+
+ Title: 'Migrating to Azure Data Explorer | Microsoft Docs'
+description: How to migrate Azure Time Series Insights environments to Azure Data Explorer.
+++++++ Last updated : 3/15/2022+++
+# Migrating to Azure Data Explorer
+
+## Overview
+
+The Time Series Insights (TSI) service provides access to historical data ingested through hubs for operational analytics and reporting. Service features include:
+- Data ingestion via hubs or bulk upload capability.
+- Data storage on a hot path (limited retention) and a cold path (infinite retention).
+- Data contextualization applying hierarchies through Time Series Model.
+- Data charting and operational analysis through TSI Explorer.
+- Data query using TSQ through API or TSI Explorer.
+- Connectors to access data with Databricks Spark or PBI.
++
+## Feature Comparison with Azure Data Explorer (ADX)
+
+| Feature | TSI | ADX |
+| | | |
+| Data ingestion | Event Hubs, IoT hub limited to 1 MB/s | Event Hubs, IoT hub, Kafka, Spark, Azure storage, Azure Stream Analytics, Azure Data Factory, Logstash, Power automate, Logic apps, Telegraf, Apache Nifi. No limits on ingestion (scalable), Ingestion benchmark is 200 MB/s/node on a 16 core machine in ADX cluster. |
+| Data storage and retention | Warm store - multitenant ADX cluster. Cold store - Azure Blob storage in the customer's subscription. | Distributed columnar store with highly optimized hot (on SSD of compute nodes) and cold (on Azure storage) stores. Choose any ADX SKU, so full flexibility. |
+| Data formats | JSON | JSON, CSV, Avro, Parquet, ORC, TXT and various others [Data formats supported by Azure Data Explorer for ingestion](/azure/data-explorer/ingestion-supported-formats). |
+| Data Querying | TSQ | KQL, SQL |
+| Data Visualization | TSI Explorer, PBI | PBI, ADX Dashboards, Grafana, Kibana and other visualization tools using ODBC/JDBC connectors |
+| Machine Learning | NA | Supports R, Python to build ML models or score data by exporting existing ML models. Native capabilities for forecasting. Anomaly detection at scale. Clustering capabilities for diagnostics and RCA |
+| PBI Connector | Public preview | Optimized native PBI connector(GA), supports direct query or import mode, supports query parameters and filters |
+| Data Export | Data is available as Parquet files in BLOB storage | Supports automatic continuous export to Azure storage, external tables to query exported data |
+| HA/DR | Storage is owned by customer so depends on selected config. | HA SLA of 99.9% availability, AZ supported, Storage is built on durable Azure Blob storage |
+| Security | Private link for incoming traffic, but open for storage and hubs | VNet injection, Private Link, Encryption at rest with customer managed keys supported |
+| RBAC role and RLS | Limited RBAC role, no RLS | Granular RBAC role for functions and data access, RLS and data masking supported |
+
+## TSI Migration to ADX Steps
+
+TSI has two offerings, Gen1 and Gen2, which have different migration steps.
+
+### TSI Gen1
+
+TSI Gen1 doesn't have cold storage or hierarchy capability. All data has a fixed retention period. Extracting data and mapping it to ADX would be a complicated and time-consuming task for TSI developers and the customer. The suggested migration path is to set up parallel data ingestion to ADX. After the fixed data retention period passes, the TSI environment can be deleted, because ADX will contain the same data (see the validation sketch after the steps below).
+1. Create ADX Cluster
+1. Set up parallel ingestion from hubs to ADX Cluster
+1. Continue ingesting data for the period of fixed retention
+1. Start using ADX Cluster
+1. Delete TSI environment
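+
+Before deleting the TSI environment, you can sanity-check that ADX now holds the expected data. A minimal validation sketch, assuming the parallel ingestion landed in a table named `events` with a `_timestamp` column (both names are assumptions; adjust to your schema):
+
+```KQL
+// Sketch: daily row counts in ADX, to compare against the TSI environment
+// before deleting it. 'events' and '_timestamp' are assumed names.
+events
+| summarize rowCount = count() by day = bin(_timestamp, 1d)
+| order by day asc
+```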
+
+Detailed FAQ and engineering experience is outlined in [How to migrate TSI Gen1 to ADX](./how-to-tsi-gen1-migration.md)
+
+### TSI Gen2
+
+TSI Gen2 stores all data in cold storage in Parquet format as blobs in the customer's subscription. To migrate the data, the customer should take the blobs and import them into ADX using the LightIngest bulk upload capability. More information on LightIngest can be found in the [LightIngest documentation](/azure/data-explorer/lightingest).
+1. Create ADX Cluster
+1. Redirect data ingestion to ADX Cluster
+1. Import TSI cold data using lightingest
+1. Start using ADX Cluster
+1. Delete TSI Environment
+
+Detailed FAQ and engineering experience is outlined in [How to migrate TSI Gen2 to ADX](./how-to-tsi-gen2-migration.md)
+
+> [!NOTE]
+> If you are unable to migrate Time Series Insights to Azure Data Explorer by 31 March 2025, your Time Series Insights resources will be automatically deleted. You'll be able to access Gen2 data in your storage account. However, you'll only be able to perform management operations (such as updating storage account settings, getting storage account properties/keys, and deleting storage accounts) through Azure Resource Manager. For Gen1 data, if you have a support plan, please create a support ticket to retrieve your Gen1 data. We will keep your Gen1 data until 30 April 2025.
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
For OS types that release patches on a fixed cadence, VMs configured to the publ
As a new rollout is triggered every month, a VM will receive at least one patch rollout every month if the VM is powered on during off-peak hours. This process ensures that the VM is patched with the latest available security and critical patches on a monthly basis. To ensure consistency in the set of patches installed, you can configure your VMs to assess and download patches from your own private repositories. ## Supported OS images
-Only VMs created from certain OS platform images are currently supported. Custom images are currently not supported.
-> [!NOTE]
-> Automatic VM guest patching is only supported on Gen1 images.
+> [!IMPORTANT]
+> Automatic VM guest patching, on-demand patch assessment and on-demand patch installation are supported only on VMs created from images with the exact combination of publisher, offer and sku from the below supported OS images list. Custom images or any other publisher, offer, sku combinations are not supported. More images are added periodically.
-The following platform SKUs are currently supported (and more are added periodically):
| Publisher | OS Offer | Sku | |-||--|
The following platform SKUs are currently supported (and more are added periodic
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter | | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core |
-> [!NOTE]
->Automatic VM guest patching, on-demand patch assessment and on-demand patch installation are supported only on VMs created from images with the exact combination of publisher, offer and sku from the supported OS images list. Custom images or any other publisher, offer, sku combinations are not supported.
- ## Patch orchestration modes VMs on Azure now support the following patch orchestration modes:
virtual-machines Stackify Retrace Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/stackify-retrace-linux.md
az vm extension set --publisher 'Stackify.LinuxAgent.Extension' --version 1.0 --
| Error code | Meaning | Possible action | | :: | | | | 10 | Install Error | wget is required |
-| 20 | Install Error | python is required |
+| 20 | Install Error | Python is required |
| 30 | Install Error | sudo is required | | 40 | Install Error | activationKey is required | | 51 | Install Error | OS distro not supported |
virtual-machines Freebsd Intro On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/freebsd-intro-on-azure.md
If bash is not installed on your FreeBSD machine, run following command before t
sudo pkg install bash ```
-If python is not installed on your FreeBSD machine, run following commands before the installation. 
+If Python is not installed on your FreeBSD machine, run following commands before the installation. 
```bash sudo pkg install python38
virtual-machines Oracle Database Backup Azure Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-backup.md
Perform the following steps for each database on the VM:
1. Unmount the restore point.
- When all databases on the VM have been successfully recovered you may unmount the restore point. This can be done on the VM using the `unmount` command or in Azure portal from the File Recovery blade. You can also unmount the recovery volumes by running the python script again with the **-clean** option.
+ When all databases on the VM have been successfully recovered you may unmount the restore point. This can be done on the VM using the `unmount` command or in Azure portal from the File Recovery blade. You can also unmount the recovery volumes by running the Python script again with the **-clean** option.
In the VM using unmount: ```bash
virtual-machines Hana Vm Operations Netapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-netapp.md
az netappfiles snapshot create -g mygroup --account-name myaccname --pool-name m
BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID 47110815 SUCCESSFUL SNAPSHOT-2020-08-18:11:00'; ```
-This snapshot backup procedure can be managed in various ways, using various tools. One example is the python script ΓÇ£ntaphana_azure.pyΓÇ¥ available on GitHub [https://github.com/netapp/ntaphana](https://github.com/netapp/ntaphana)
+This snapshot backup procedure can be managed in various ways, using various tools. One example is the Python script "ntaphana_azure.py" available on GitHub [https://github.com/netapp/ntaphana](https://github.com/netapp/ntaphana)
This is sample code, provided "as-is" without any maintenance or support.
Available solutions for storage snapshot based application consistent backup:
### Back up the snapshot using Azure blob storage
-Back up to Azure blob storage is a cost effective and fast method to save ANF-based HANA database storage snapshot backups. To save the snapshots to Azure Blob storage, the AzCopy tool is preferred. Download the latest version of this tool and install it, for example, in the bin directory where the python script from GitHub is installed.
+Backing up to Azure Blob storage is a cost-effective and fast method to save ANF-based HANA database storage snapshot backups. To save the snapshots to Azure Blob storage, the AzCopy tool is preferred. Download the latest version of this tool and install it, for example, in the bin directory where the Python script from GitHub is installed.
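Once AzCopy is installed (see the download step that follows), an upload of a snapshot directory to Blob storage might look roughly like the following sketch. The local snapshot path, storage account, container, and SAS token are placeholders.

```bash
# Copy an ANF snapshot directory to a blob container (placeholders throughout)
azcopy copy "/hana/data/<SID>/.snapshot/<snapshot-name>" "https://<storageaccount>.blob.core.windows.net/<container>?<SAS-token>" --recursive
```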
Download the latest AzCopy tool: ```
virtual-machines Sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-rise-integration.md
+
+ Title: Integrating Azure with SAP RISE managed workloads| Microsoft Docs
+description: Describes integrating SAP RISE managed virtual network with customer's own Azure environment
+
+documentationcenter: ''
++
+editor: ''
+tags: azure-resource-manager
+keywords: ''
++
+ vm-linux
+ Last updated : 03/14/2022++++
+# Integrating Azure with SAP RISE managed workloads
+
+For customers with SAP solutions such as RISE with SAP Enterprise Cloud Services (ECS) and SAP S/4HANA Cloud, private edition (PCE) which are deployed on Azure, integrating the SAP managed environment with their own Azure ecosystem and third party applications is of particular importance. The following article explains the concepts utilized and best practices to follow for a secure and performant solution.
+
+RISE with SAP S/4HANA Cloud, private edition and SAP Enterprise Cloud Services are SAP managed services of your SAP landscape, running in an Azure subscription owned by SAP. The virtual network (vnet) utilized by these managed systems should fit well into your overall network concept and your available IP address space. Requirements for the private IP range of RISE PCE or ECS environments come from SAP reference deployments; customers specify the chosen RFC1918 CIDR IP address range to SAP. To facilitate connectivity between SAP and customer-owned Azure subscriptions/vnets, a direct vnet peering can be set up. Another option is the use of a VPN vnet-to-vnet connection.
+
+> [!IMPORTANT]
+> For all details about RISE with SAP Enterprise Cloud Services and SAP S/4HANA Cloud, private edition please contact your SAP representative.
+
+## Virtual network peering with SAP RISE/ECS
+
+A vnet peering is the most performant way to connect two standalone vnets securely and privately, utilizing the Microsoft private backbone network. The peered networks appear as one for connectivity purposes, allowing applications to talk to each other. Applications running in different vnets, subscriptions, Azure tenants, or regions can communicate directly. Like network traffic on a single vnet, vnet peering traffic remains on Microsoft's private network and does not traverse the internet.
+
+For SAP RISE/ECS deployments, vnet peering is the preferred way to establish connectivity with the customer's existing Azure environment. Both the SAP vnet and customer vnet(s) are protected with network security groups (NSG), enabling communication on SAP and database ports through the vnet peering. Communication between the peered vnets is secured through these NSGs, limiting communication to the customer's SAP environment. For details and a list of open ports, contact your SAP representative.
+
+SAP managed workload is preferably deployed in the same [Azure region](https://azure.microsoft.com/global-infrastructure/geographies/) as the customer's central infrastructure and the applications accessing it. Virtual network peering can be set up within the same region as your SAP managed environment, but also through [global virtual network peering](/azure/virtual-network/virtual-network-peering-overview) between any two Azure regions. With SAP RISE/ECS available in many Azure regions, the region ideally should match the workload running in customer vnets because of latency and vnet peering cost considerations. However, some scenarios (for example, a central S/4HANA deployment for a multi-national company with a global presence) also require peering networks globally.
+
+ This diagram shows a typical SAP customer's hub and spoke virtual networks. Cross-tenant virtual network peering connects SAP RISE vnet to customer's hub vnet.
+
+Since SAP RISE/ECS runs in SAP's Azure tenant and subscriptions, the virtual network peering needs to be set up between [different tenants](/azure/virtual-network/create-peering-different-subscriptions). This can be accomplished by setting up the peering with the Azure resource ID of the SAP-provided network and having SAP approve the peering. Add a user from the opposite AAD tenant as a guest user, accept the guest user invitation, and follow the process documented at [Create a VNet peering - different subscriptions](/azure/virtual-network/create-peering-different-subscriptions#cli). Contact your SAP representative for the exact steps required. Engage the respective team(s) within your organization that deal with network, user administration, and architecture so the process can be completed swiftly.
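For illustration, the peering from the customer side might be created with an Azure CLI sketch like the following; the resource names are placeholders, the remote vnet resource ID is the one provided by SAP, and SAP creates the corresponding peering from their side.

```bash
# Peer the customer hub vnet to the SAP RISE/ECS vnet (cross-tenant); names and IDs are placeholders
az network vnet peering create \
  --name peer-hub-to-sap-rise \
  --resource-group MyHubResourceGroup \
  --vnet-name MyHubVnet \
  --remote-vnet "/subscriptions/<SAP-subscription-id>/resourceGroups/<SAP-resource-group>/providers/Microsoft.Network/virtualNetworks/<SAP-vnet-name>" \
  --allow-vnet-access
```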
+
+## VPN Vnet-to-Vnet
+
+As an alternative to vnet peering, a virtual private network (VPN) connection can be established between VPN gateways deployed in both the SAP RISE/ECS subscription and the customer's own subscription. A vnet-to-vnet connection is established between these two VPN gateways, enabling fast communication between the two separate vnets. The respective vnets and gateways can be located in different Azure regions.
+
+ This diagram shows a typical SAP customer's hub and spoke virtual networks. VPN gateway located in SAP RISE vnet connects through vnet-to-vnet connection into gateway contained in customer's hub vnet.
+
+While vnet peering is the recommended and more typical deployment model, a VPN vnet-to-vnet connection can potentially simplify a complex virtual peering between customer and SAP RISE/ECS virtual networks. The VPN Gateway acts as the only point of entry into the customer's network and is managed and secured by a central team.
+
+Network Security Groups are in effect on both the customer and SAP vnets, identical to the vnet peering architecture, enabling communication to SAP NetWeaver and HANA ports as required. For details on how to set up the VPN connection and which settings should be used, contact your SAP representative.
+
+## Connectivity back to on-premises
+
+With an existing customer Azure deployment, the on-premises network is already connected through ExpressRoute (ER) or VPN. The same on-premises network path is typically used for SAP RISE/ECS managed workloads. The preferred architecture is to use the existing ER/VPN gateways in the customer's hub vnet for this purpose, with the connected SAP RISE vnet seen as a spoke network connected to the customer's vnet hub.
+
 This diagram shows a typical SAP customer's hub and spoke virtual networks. It's connected to the on-premises network. Cross-tenant virtual network peering connects the SAP RISE vnet to the customer's hub vnet. The vnet peering has remote gateway transit enabled, enabling the SAP RISE vnet to be accessed from on-premises.
+
+With this architecture, central policies and security rules governing network connectivity to customer workloads also apply to SAP RISE/ECS managed workloads. The same on-premises network path is used for both the customer's vnets and the SAP RISE/ECS vnet.
+
+If there's no existing Azure to on-premises connectivity, contact your SAP representative for details on which connection models can be established. Any on-premises to SAP RISE/ECS connection is then for reaching the SAP managed vnet only. The on-premises to SAP RISE/ECS connection isn't used to access the customer's own Azure vnets.
+
+**Important to note**: A virtual network can have [only one gateway](/azure/virtual-network/virtual-network-peering-overview#gateways-and-on-premises-connectivity), local or remote. With vnet peering established between SAP RISE/ECS using remote gateway transit as in the above architecture, no gateways can be added in the SAP RISE/ECS vnet. A combination of vnet peering with remote gateway transit and another VPN gateway in the SAP RISE/ECS vnet isn't possible.
+
+## Virtual WAN with SAP RISE/ECS managed workloads
+
+Similar to using a hub and spoke network architecture with connectivity to both the SAP RISE/ECS vnet and on-premises, the Azure Virtual WAN (vWAN) hub can be used for the same purpose. Both connection options described earlier, vnet peering as well as VPN vnet-to-vnet, are available to be connected to the vWAN hub.
+
+The vWAN network hub is deployed and managed entirely by the customer in the customer's subscription and vnet. The on-premises connection and routing through the vWAN network hub are also managed entirely by the customer.
+
+Again, contact your SAP representative for details and steps needed to establish this connectivity.
+
+## DNS integration with SAP RISE/ECS managed workloads
+
+Integrating customer-owned networks with cloud-based infrastructure and providing a seamless name resolution concept is a vital part of a successful project implementation.
+
+This diagram describes one of the common integration scenarios of SAP owned subscriptions, vnets, and DNS infrastructure with the customer's local network and DNS services. In this setup, on-premises DNS servers hold all DNS entries. The DNS infrastructure can resolve DNS requests coming from all sources (on-premises clients, the customer's Azure services, and SAP managed environments).
+
 This diagram shows a typical SAP customer's hub and spoke virtual networks. Cross-tenant virtual network peering connects the SAP RISE vnet to the customer's hub vnet. On-premises connectivity is provided from the customer's hub. DNS servers are located both within the customer's hub vnet and the SAP RISE vnet, with DNS zone transfer between them. DNS queries from the customer's VMs query the customer's DNS servers.
+
+Design description and specifics:
+
+ - Custom DNS configuration for SAP-owned VNets
+
+ - Two VMs in the RISE/STE/ECS Azure vnet hosting DNS servers
+
+ - Customers must provide and delegate to SAP a subdomain/zone (for example, \*hec.contoso.com), which will be used to assign names and create forward and reverse DNS entries for the virtual machines that run the SAP managed environment. SAP DNS servers hold the master DNS role for the delegated zone
+
+ - DNS zone transfer from the SAP DNS servers to the customer's DNS servers is the primary method to replicate DNS entries from the RISE/STE/ECS environment to the on-premises DNS
+
+ - Customer-owned Azure vnets also use a custom DNS configuration referring to the customer DNS servers located in the Azure hub vnet (see the CLI sketch below).
+
+ - Optionally, customers can set up a DNS forwarder within their Azure vnets. Such a forwarder then forwards DNS requests coming from Azure services to the SAP DNS servers that are targeted to the delegated zone (\*hec.contoso.com).
+
+Alternatively, DNS zone transfer from the SAP DNS servers can be performed to a customer's DNS servers located in the Azure hub vnet (diagram above). This is applicable for designs where customers operate a custom DNS solution (for example, [AD DS](/windows-server/identity/ad-ds/active-directory-domain-services) or BIND servers) within their hub vnet.
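To point a customer-owned vnet at the custom DNS servers described above, an Azure CLI sketch might look like the following; the resource names and DNS server IP addresses are placeholders.

```bash
# Configure custom DNS servers on a customer-owned vnet (placeholders throughout)
az network vnet update --resource-group MyResourceGroup --name MySpokeVnet --dns-servers 10.10.0.4 10.10.0.5
```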
+
+**Important to note**: Both Azure-provided DNS and Azure private DNS zones **do not** support the DNS zone transfer capability and therefore cannot be used to accept DNS replication from the SAP RISE/STE/ECS DNS servers. Additionally, external DNS service providers are typically not supported by SAP RISE/ECS.
+
+To read further about the usage of Azure DNS for SAP outside of SAP RISE/ECS, see the details in the following [blog post](https://www.linkedin.com/posts/k-popov_sap-on-azure-dns-integration-whitepaper-activity-6841398577495977984-ui9V/).
+
+## Internet outbound and inbound connections with SAP RISE/ECS
+
+SAP workloads communicating with external applications, or inbound connections from a company's user base (for example, SAP Fiori), could require a network path to the internet, depending on the customer's requirements. Within SAP RISE/ECS managed workloads, work with your SAP representative to explore the needs for such https/RFC/other communication paths. Network communication to and from the internet is not enabled by default for SAP RISE/ECS customers, and default networking utilizes private IP ranges only. Internet connectivity requires planning with SAP to optimally protect the customer's SAP landscape.
+
+Should you enable internet-bound or incoming traffic with your SAP representatives, the network communication is protected through various Azure technologies such as NSGs, ASGs, Application Gateway with Web Application Firewall (WAF), proxy servers, and others. These services are entirely managed through SAP within the SAP RISE/ECS vnet and subscription. The network path between SAP RISE/ECS and the internet typically remains within the SAP RISE/ECS vnet only and does not transit into or from the customer's own vnet(s).
+
+ This diagram shows a typical SAP customer's hub and spoke virtual networks. Cross-tenant virtual network peering connects SAP RISE vnet to customer's hub vnet. On-premise connectivity is provided from customer's hub. SAP Cloud Connector VM from SAP RISE vnet connects through Internet to SAP BTP. Another SAP Cloud Connector VM connects through Internet to SAP BTP, with internet inbound and outbound connectivity facilitated by customer's hub vnet.
+
+Applications within a customer's own vnet connect to the internet directly from the respective vnet or through the customer's centrally managed services such as Azure Firewall, Azure Application Gateway, NAT Gateway, and others. Connectivity to SAP BTP from non-SAP RISE/ECS applications takes the same internet-bound network path. Should an SAP Cloud Connector be needed for such integration, it's placed with the customer's non-SAP VMs requiring SAP BTP communication, with the network path managed by the customer.
+
+## SAP BTP Connectivity
+
+SAP Business Technology Platform (BTP) provides a multitude of applications that are accessed by public IP/hostname via the Internet.
+Customer services running in their Azure subscriptions access them either directly through the VM or Azure service internet connection, or through user-defined routes that force all internet-bound traffic to go through a centrally managed firewall or other network virtual appliances.
+
+SAP has a [preview program](https://help.sap.com/products/PRIVATE_LINK/42acd88cb4134ba2a7d3e0e62c9fe6cf/3eb3bc7aa5db4b5da9dcdbf8ee478e52.html) in operation for SAP Private Link Service for customers using SAP BTP on Azure. The SAP Private Link Service exposes SAP BTP services through a private IP range to the customer's Azure network, making them accessible privately through the previously described vnet peering or VPN site-to-site connections instead of through the internet.
+
+See a series of blog posts on the architecture of the SAP BTP Private Link Service and private connectivity methods, dealing with DNS and certificates, in the following SAP blog series: [Getting Started with BTP Private Link Service for Azure](https://blogs.sap.com/2021/12/29/getting-started-with-btp-private-link-service-for-azure/)
+
+## Next steps
+Check out the documentation:
+
+- [SAP workloads on Azure: planning and deployment checklist](./sap-deployment-checklist.md)
+- [Virtual network peering](/azure/virtual-network/virtual-network-peering-overview)
+- [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md)
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
If the VM is configured with Accelerated Networking, a second network interface
Different Azure hosts use different models of Mellanox physical NIC, so Linux automatically determines whether to use the "mlx4" or "mlx5" driver. Placement of the VM on an Azure host is controlled by the Azure infrastructure. With no customer option to specify which physical NIC that a VM deployment uses, the VMs must include both drivers. If a VM is stopped/deallocated and then restarted, it might be redeployed on hardware with a different model of Mellanox physical NIC. Therefore, it might use the other Mellanox driver.
+If a VM image doesn't include a driver for the Mellanox physical NIC, networking capabilities will continue to work at the slower speeds of the virtual NIC, even though the portal, Azure CLI, and Azure PowerShell will still show the Accelerated Networking feature as _enabled_.
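To check which Mellanox driver, if any, is active in a Linux VM, a quick sketch such as the following can help; these are standard Linux commands and assume a distribution that ships the `lspci` and `lsmod` utilities.

```bash
# List the Mellanox virtual function exposed to the VM (model determines mlx4 vs mlx5)
lspci | grep -i mellanox
# Confirm which Mellanox kernel module is currently loaded
lsmod | grep -E 'mlx4_core|mlx5_core'
```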
+ FreeBSD provides the same support for Accelerated Networking as Linux when running in Azure. The remainder of this article describes Linux and uses Linux examples, but the same functionality is available in FreeBSD. > [!NOTE]
virtual-network Configure Public Ip Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-application-gateway.md
Azure Application Gateway is a web traffic load balancer that manages traffic to your web applications. Application Gateway makes routing decisions based on attributes of an HTTP request, such as URI path or host headers. The frontend of an Application Gateway is the connection point for the applications in its backend pool.
-An Application Gateway frontend can be a private IP address, public IP address, or both. The V1 SKU of Application Gateway supports basic public IPs, static, or dynamic. The V2 SKU supports standard SKU public IPs that are static only. Application Gateway V2 SKU doesn't support an internal IP address as it's only frontend. For more information, see [Application Gateway front-end IP address configuration](../../application-gateway/configuration-front-end-ip.md).
+An Application Gateway frontend can be a private IP address, a public IP address, or both. The V1 SKU of Application Gateway supports basic dynamic public IPs. The V2 SKU supports standard SKU public IPs that are static only. The Application Gateway V2 SKU doesn't support an internal IP address as its only frontend. For more information, see [Application Gateway front-end IP address configuration](../../application-gateway/configuration-front-end-ip.md).
In this article, you'll learn how to create an Application Gateway using an existing public IP in your subscription.
Application gateway doesn't support changing the public IP address after creatio
In this article, you learned how to create an Application Gateway and use an existing public IP. - For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT?](../nat-gateway/nat-overview.md).-- To learn more about public IP addresses in Azure, see [Public IP addresses](./public-ip-addresses.md).
+- To learn more about public IP addresses in Azure, see [Public IP addresses](./public-ip-addresses.md).
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
For guides on how to enable NSG flow logs, see [Enabling NSG Flow Logs](../../ne
Each NAT gateway can provide up to 50 Gbps of throughput. You can split your deployments into multiple subnets and assign each subnet or group of subnets a NAT gateway to scale out.
-NAT gateway can be attached to up to 16 public IP addresses. Each NAT gateway can support up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet for TCP and UDP. Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
+Each NAT gateway public IP address provides 64,512 SNAT ports to make outbound connections. NAT gateway can support up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet for TCP and UDP. Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
## Source Network Address Translation
The source IP address and port of each flow is SNAT'd to the public IP address 6
#### Source (SNAT) port reuse
-Azure provides ~64,000 SNAT ports per public IP address. For each public IP address attached to NAT gateway, the entire inventory of ports provided by those IPs is made available to any virtual machine instance within a subnet that is also attached to NAT gateway. NAT gateway selects a port at random out of the available inventory of ports to make new outbound connections. If NAT gateway doesn't find any available SNAT ports, then it will reuse a SNAT port. A port can be reused so long as it's going to a different destination endpoint. As mentioned in the [Performance](#performance) section, NAT gateway supports up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet.
+For NAT gateway, 64,512 SNAT ports are available per public IP address. For each public IP address attached to NAT gateway, the entire inventory of ports provided by those IPs is made available to any virtual machine instance within a subnet that is also attached to NAT gateway. NAT gateway selects a port at random out of the available inventory of ports to make new outbound connections. If NAT gateway doesn't find any available SNAT ports, then it will reuse a SNAT port. A port can be reused so long as it's going to a different destination endpoint. As mentioned in the [Performance](#performance) section, NAT gateway supports up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet.
-The following flow illustrates this concept with a VM flowing to destination IP 65.52.0.2 after flows 1 - 3 from the above tables have already taken place.
+The following illustrates this concept as an additional flow to the preceding set, with a VM flowing to a new destination IP 65.52.0.2.
| Flow | Source tuple | Destination tuple | |::|::|::|
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
NAT is fully scaled out from the start. There's no ramp up or scale-out operatio
* Outbound connectivity can be defined for each subnet with NAT. Multiple subnets within the same virtual network can have different NATs. Or multiple subnets within the same virtual network can use the same NAT. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by NAT automatically without any customer configuration. NAT takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
-* Presence of UDRs for virtual appliances and virtual network gateways override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-and-virtual-network-gateway-udrs-supersede-nat-gateway-for-going-outbound) to learn more.
+* Presence of custom UDRs for virtual appliances and VPN ExpressRoutes override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more.
* NAT supports TCP and UDP protocols only. ICMP isn't supported.
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
This article provides guidance on how to configure your NAT gateway to ensure ou
* [Configuration issues with NAT gateway](#configuration-issues-with-nat-gateway) * [Configuration issues with your subnets and virtual network](#configuration-issues-with-subnets-and-virtual-networks-using-nat-gateway) * [SNAT exhaustion due to NAT gateway configuration](#snat-exhaustion-due-to-nat-gateway-configuration)
+* [Connection failures due to idle timeouts](#connection-failures-due-to-idle-timeouts)
* [Connection issues with NAT gateway and integrated services](#connection-issues-with-nat-gateway-and-integrated-services) * [NAT gateway public IP not being used for outbound traffic](#nat-gateway-public-ip-not-being-used-for-outbound-traffic) * [Connection failures in the Azure infrastructure](#connection-failures-in-the-azure-infrastructure)
-* [Connection failures in the path between Azure and the public internet destination](#connection-failures-with-public-internet-transit)
-* [Connection failures at the public internet destination](#connection-failures-at-the-public-internet-destination)
-* [Connection failures due to TCP Resets received](#connection-failures-due-to-tcp-resets-received)
+* [Connection failures outside of the Azure infrastructure](#connection-failures-outside-of-the-azure-infrastructure)
## Configuration issues with NAT gateway
This article provides guidance on how to configure your NAT gateway to ensure ou
Check the following configurations to ensure that NAT gateway can be used to direct traffic outbound: 1. At least one public IP address or one public IP prefix is attached to NAT gateway. At least one public IP address must be associated with the NAT gateway for it to provide outbound connectivity. 2. At least one subnet is attached to a NAT gateway. You can attach multiple subnets to a NAT gateway for going outbound, but those subnets must exist within the same virtual network. NAT gateway cannot span beyond a single virtual network.
-3. No [NSG rules](../network-security-groups-overview.md#outbound) or [UDRs](#virtual-appliance-and-virtual-network-gateway-udrs-supersede-nat-gateway-for-going-outbound) are blocking NAT gateway from directing traffic outbound to the internet.
+3. No [NSG rules](../network-security-groups-overview.md#outbound) or [UDRs](#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) are blocking NAT gateway from directing traffic outbound to the internet.
### How to validate connectivity
NAT gateway cannot be deployed in a gateway subnet. VPN gateway uses gateway sub
## SNAT exhaustion due to NAT gateway configuration Common SNAT exhaustion issues with NAT gateway typically have to do with the configurations on the NAT gateway. Common SNAT exhaustion issues include:
-* NAT gateway idle timeout timers being set higher than their default value of 4 minutes.
* Outbound connectivity on NAT gateway not scaled out enough.
+* NAT gateway's configurable TCP idle timeout timer is set higher than the default value of 4 minutes.
-### Idle timeout timers have been changed to higher value than their default values
+### Outbound connectivity not scaled out enough
-NAT gateway resources have a default TCP idle timeout of 4 minutes. If this setting is changed to a higher value, NAT gateway will hold on to flows longer and can cause [unnecessary pressure on SNAT port inventory](nat-gateway-resource.md#timers).
+Each public IP address provides 64,512 SNAT ports to subnets attached to NAT gateway. From those available SNAT ports, NAT gateway can support up to 50,000 concurrent connections to the same destination endpoint. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. More public IP addresses may need to be added to NAT gateway in order to provide more SNAT ports for outbound connectivity.
-Check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening:
+The table below describes two common scenarios in which outbound connectivity may not be scaled out enough and how to validate and mitigate these issues:
-*Total SNAT Connection*
-* "Sum" aggregation shows high connection volume.
-* "Failed" connection state shows transient or persistent failures over time.
+| Scenario | Evidence |Mitigation |
+||||
+| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | Determine if you can add more public IP addresses or public IP prefixes. This addition allows for up to 16 IP addresses in total on your NAT gateway and provides more inventory of available SNAT ports (64,512 per IP address), allowing you to scale your scenario further.|
+| You've already assigned 16 IP addresses and are still experiencing SNAT port exhaustion. | Attempts to add more IP addresses fail. The total number of IP addresses from public IP address resources or public IP prefix resources exceeds 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. |
-*Dropped Packets*
-* "Sum" aggregation shows packets dropping consistent with high connection volume.
+>[!NOTE]
+>It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports to the inventory by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns.
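If adding public IP addresses is the chosen mitigation, an Azure CLI sketch for attaching additional IPs to an existing NAT gateway might look like the following; the resource and public IP names are placeholders, and parameter availability should be checked against your CLI version.

```bash
# Attach two public IP addresses to an existing NAT gateway (placeholders throughout)
az network nat gateway update --resource-group MyResourceGroup --name MyNatGateway --public-ip-addresses pip-nat-1 pip-nat-2
```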
-**Mitigation**
+### TCP idle timeout timers set higher than the default value
-Explore the impact of reducing TCP idle timeout to lower values to free up SNAT port inventory earlier. The TCP idle timeout timer cannot be set lower than 4 minutes.
+NAT gateway has a configurable TCP idle timeout timer that defaults to 4 minutes. If this setting is changed to a higher value, NAT gateway will hold on to flows longer and can create [additional pressure on SNAT port inventory](nat-gateway-resource.md#timers). The table below describes a common scenario in which a high TCP idle timeout may be causing SNAT exhaustion and provides possible mitigation steps to take:
-Consider [asynchronous polling patterns](/azure/architecture/patterns/async-request-reply) for long-running operations to free up connection resources for other operations.
+| Scenario | Evidence | Mitigation |
+||||
+| You would like to ensure that TCP connections stay active for long periods of time without idle timing out so you increase the TCP idle timeout timer setting. After a while you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | You have a few possible mitigation steps that you can take to resolve SNAT port exhaustion: - **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer cannot be set lower than 4 minutes. - Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. - **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). - For connections going to Azure PaaS services, use **[Private Link](/azure/private-link/private-link-overview)**. Private Link eliminates the need to use public IPs of your NAT gateway which frees up more SNAT ports for outbound connections to the internet.|
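Where reducing the TCP idle timeout is the chosen mitigation, an Azure CLI sketch might look like the following; the resource names are placeholders and the timeout value is in minutes.

```bash
# Reset the NAT gateway TCP idle timeout back to the 4-minute default (placeholders throughout)
az network nat gateway update --resource-group MyResourceGroup --name MyNatGateway --idle-timeout 4
```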
-Long-lived flows (for example reused TCP connections) should use TCP keepalives or application layer keepalives to avoid intermediate systems timing out. Increasing the idle timeout is a last resort and may not resolve the root cause. A long timeout can cause low rate failures when timeout expires and introduce delay and unnecessary failures.
+## Connection failures due to idle timeouts
-### Outbound connectivity not scaled out enough
+### TCP idle timeout
-NAT gateway provides 64,000 SNAT ports to a subnetΓÇÖs resources for each public IP address attached to it. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. More public IP addresses may need to be added to NAT gateway in order to provide more SNAT ports for outbound connectivity.
+As described in the [TCP timers](#tcp-idle-timeout-timers-set-higher-than-the-default-value) section above, TCP keepalives should be used instead to refresh idle flows and reset the idle timeout. TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection. To learn more, see [Timer considerations](/azure/virtual-network/nat-gateway-resource#timers).
-The table below describes two common scenarios in which outbound connectivity may not be scaled out enough and how to validate and mitigate these issues:
+>[!NOTE]
+>Increasing the TCP idle timeout is a last resort and may not resolve the root cause. A long timeout can cause low rate failures when timeout expires and introduce delay and unnecessary failures.
-| Scenario | Evidence |Mitigation |
-||||
-| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | Determine if you can add more public IP addresses or public IP prefixes. This addition will allow for up to 16 IP addresses in total to your NAT gateway. This addition will provide more inventory for available SNAT ports (64,000 per IP address) and allow you to scale your scenario further.|
-| You've already given 16 IP addresses and still are experiencing SNAT port exhaustion. | Attempt to add more IP addresses fails. Total number of IP addresses from public IP address resources or public IP prefix resources exceeds a total of 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. |
+### UDP idle timeout
->[!NOTE]
->It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports to the inventory by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns.
+Unlike the TCP idle timeout timer for NAT gateway, the UDP idle timeout timer is not configurable. The table below describes a common scenario in which connections drop because UDP traffic times out from being idle, and the steps to take to mitigate the issue.
+
+| Scenario | Evidence | Mitigation |
+||||
+| You notice that UDP traffic is dropping connections that need to be maintained for long periods of time. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor, **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | A few possible mitigation steps that can be taken: - **Enable UDP keepalives**. Keep in mind that when a UDP keepalive is enabled, it is only active for one direction in a connection. This means that the connection can still time-out from going idle on the other side of a connection. To prevent a UDP connection from going idle and timing out, UDP keepalives should be enabled for both directions in a connection flow. - **Application layer keepalives** can also be used to refresh idle flows and reset the idle timeout. Check the server side for what options exist for application specific keepalives. |
## Connection issues with NAT gateway and integrated services
Test and resolve issues with VMs holding on to old SNAT IP addresses by:
If you are still having trouble, open a support case for further troubleshooting.
-### Virtual appliance and virtual network gateway UDRs supersede NAT gateway for going outbound
+### Virtual appliance UDRs and VPN ExpressRoute override NAT gateway for routing outbound traffic
-When NAT gateway is attached to a subnet also associated with a user defined route (UDR) for a virtual appliance or virtual network gateway, the UDR will take precedence over the NAT gateway for internet routed traffic. The internet traffic will flow from the IP configured for the UDR rather than from the NAT gateway public IP address(es).
+When forced tunneling with a custom UDR is enabled to direct traffic to a virtual appliance or VPN through ExpressRoute, the UDR or ExpressRoute takes precedence over NAT gateway for directing internet bound traffic. To learn more, see [custom UDRs](/azure/virtual-network/virtual-networks/udr-overview#custom-routes).
The order of precedence for internet routing configurations is as follows:
-Virtual appliance / Virtual network gateway UDR >> NAT gateway >> default system
+Virtual appliance UDR / VPN ExpressRoute >> NAT gateway >> default system
-Test and resolve issues with a virtual appliance or virtual network gateway UDR configured to your virtual network by:
-1. [Testing that the NAT gateway public IP](./tutorial-create-nat-gateway-portal.md#test-nat-gateway) is used for outbound traffic. If a different IP is being used, it could be because of a UDR, follow the remaining steps on how to check for and remove UDRs.
+Test and resolve issues with a virtual appliance UDR or VPN ExpressRoute overriding your NAT gateway by:
+1. [Testing that the NAT gateway public IP](./tutorial-create-nat-gateway-portal.md#test-nat-gateway) is used for outbound traffic. If a different IP is being used, it could be because of a custom UDR; follow the remaining steps on how to check for and remove custom UDRs.
2. Check for UDRs in the virtual network's route table; refer to [view route tables](../manage-route-table.md#view-route-tables). 3. Remove the UDR from the route table by following [create, change, or delete an Azure route table](../manage-route-table.md#change-a-route-table).
-Once the UDR is removed from the routing table, the NAT gateway public IP should now take precedence in routing outbound traffic to the internet.
+Once the custom UDR is removed from the routing table, the NAT gateway public IP should now take precedence in routing outbound traffic to the internet.
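As a quick check from a VM in the affected subnet, you can compare the observed outbound IP with the NAT gateway's public IP; ifconfig.me is only an example echo service, and the resource names are placeholders.

```bash
# Observed outbound IP as seen by an internet endpoint
curl -s https://ifconfig.me
# Public IP assigned to the NAT gateway, for comparison
az network public-ip show --resource-group MyResourceGroup --name MyNatGatewayPublicIP --query ipAddress --output tsv
```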
+
+### Private IPs are used to connect to Azure services by Private Link
+
+[Private Link](/azure/private-link/private-link-overview) connects your Azure virtual networks privately to Azure PaaS services such as Storage, SQL, or Cosmos DB over the Azure backbone network instead of over the internet. Private Link uses the private IP addresses of virtual machine instances in your virtual network to connect to these Azure platform services instead of the public IP of NAT gateway. As a result, when looking at the source IP address used to connect to these Azure services, you will notice that the private IPs of your instances are used. See [Azure services listed here](/azure/private-link/availability) for all services supported by Private Link.
+
+When possible, Private Link should be used to connect directly from your virtual networks to Azure platform services in order to [reduce the demand on SNAT ports](#tcp-idle-timeout-timers-set-higher-than-the-default-value). Reducing the demand on SNAT ports can help reduce the risk of SNAT port exhaustion.
+
+To create a Private Link, see the following Quickstart guides to get started:
+- [Create a Private Endpoint](/azure/private-link/create-private-endpoint-portal)
+- [Create a Private Link](/azure/private-link/create-private-link-service-portal)
+
+To check which Private Endpoints you have set up with Private Link:
+1. From the Azure portal, search for Private Link in the search box.
+2. In the Private Link center, select Private Endpoints or Private Link services to see what configurations have been set up. See [Manage private endpoint connections](/azure/private-link/manage-private-endpoint#manage-private-endpoint-connections-on-azure-paas-resources) for more details.
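A CLI equivalent for listing the private endpoints in a resource group might look like the following sketch; the resource group name is a placeholder.

```bash
# List private endpoints in a resource group (placeholder name)
az network private-endpoint list --resource-group MyResourceGroup --output table
```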
+
+Service endpoints can also be used to connect your virtual network to Azure PaaS services. To check if you have service endpoints configured for your virtual network:
+1. From the Azure portal, navigate to your virtual network and select "Service endpoints" from Settings.
+2. All Service endpoints created will be listed along with which subnets they are configured. See [logging and troubleshooting Service endpoints](/azure/virtual-network/virtual-network-service-endpoints-overview#logging-and-troubleshooting) for more details.
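Similarly, a CLI sketch for checking the service endpoints configured on a subnet; the resource group, vnet, and subnet names are placeholders.

```bash
# Show service endpoints configured on a subnet (placeholder names)
az network vnet subnet show --resource-group MyResourceGroup --vnet-name MyVnet --name MySubnet --query serviceEndpoints
```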
+
+>[!NOTE]
+>Private Link is the recommended option over Service endpoints for private access to Azure hosted services.
## Connection failures in the Azure infrastructure
Azure monitors and operates its infrastructure with great care. However, transie
We don't recommend artificially reducing the TCP connection timeout or tuning the RTO parameter.
-## Connection failures with public internet transit
+## Connection failures outside of the Azure infrastructure
+
+### Connection failures with public internet transit
The chance of transient failures increases with a longer path to the destination and more intermediate systems. It's expected that transient failures can increase in frequency over [Azure infrastructure](#connection-failures-in-the-azure-infrastructure). Follow the same guidance as in the preceding [Azure infrastructure](#connection-failures-in-the-azure-infrastructure) section.
-## Connection failures at the public internet destination
+### Connection failures at the public internet destination
The previous sections apply, along with the internet endpoint that communication is established with. Other factors that can impact connectivity success are:
What else to check for:
If your investigation is inconclusive, open a support case for further troubleshooting.
-## Connection failures due to TCP Resets received
-
-The NAT gateway generates TCP resets on the source VM for traffic that isn't recognized as in progress.
-
-One possible reason is the TCP connection has idle timed out. You can adjust the idle timeout from 4 minutes to up to 120 minutes.
-
-TCP Resets aren't generated on the public side of NAT gateway resources. TCP resets on the destination side are generated by the source VM, not the NAT gateway resource.
-
-Keep in mind that a long timeout can cause low-rate failures when timeout expires and introduce delay and unnecessary connection failures.
+## Next steps
-Open a support case for further troubleshooting if necessary.
+We are always looking to improve the experience of our customers. If you are experiencing issues with NAT gateway that are not listed or resolved by this article, submit feedback through GitHub via the bottom of this page and we will address your feedback as soon as possible.
-## Next steps
+To learn more about NAT gateway, see:
-* Learn about [Virtual Network NAT](nat-overview.md)
-* Learn about [NAT gateway resource](nat-gateway-resource.md)
-* Learn about [metrics and alerts for NAT gateway resources](nat-metrics.md).
+* [Virtual Network NAT](nat-overview.md)
+* [NAT gateway resource](nat-gateway-resource.md)
+* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
virtual-network Virtual Network Vnet Plan Design Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-vnet-plan-design-arm.md
You can filter network traffic to and from resources in a virtual network using
- You can filter network traffic between resources in a virtual network using a network security group, an NVA that filters network traffic, or both. To deploy an NVA, such as a firewall, to filter network traffic, see the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?subcategories=appliances&page=1). When using an NVA, you also create custom routes to route traffic from subnets to the NVA. Learn more about [traffic routing](#traffic-routing). - A network security group contains several default security rules that allow or deny traffic to or from resources. A network security group can be associated to a network interface, the subnet the network interface is in, or both. To simplify management of security rules, it's recommended that you associate a network security group to individual subnets, rather than individual network interfaces within the subnet, whenever possible. - If different VMs within a subnet need different security rules applied to them, you can associate the network interface in the VM to one or more application security groups. A security rule can specify an application security group in its source, destination, or both. That rule then only applies to the network interfaces that are members of the application security group. Learn more about [network security groups](./network-security-groups-overview.md) and [application security groups](./network-security-groups-overview.md#application-security-groups).
+- When a network security group is associated at the subnet level, it applies to all the NICs in the subnet, not just to the traffic coming from outside the subnet. This means that the traffic between the VMs contained in the subnet can be affected as well.
- Azure creates several default security rules within each network security group. One default rule allows all traffic to flow between all resources in a virtual network. To override this behavior, use network security groups, custom routing to route traffic to an NVA, or both. It's recommended that you familiarize yourself with all of Azure's [default security rules](./network-security-groups-overview.md#default-security-rules) and understand how network security group rules are applied to a resource. You can view sample designs for implementing a perimeter network (also known as a DMZ) between Azure and the internet using an [NVA](/azure/architecture/reference-architectures/dmz/secure-vnet-dmz?toc=%2Fazure%2Fvirtual-network%2Ftoc.json).
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
na Previously updated : 3/2/2020 Last updated : 3/11/2022
Points to consider when you are using Azure-provided name resolution:
* WINS and NetBIOS are not supported. You cannot see your VMs in Windows Explorer. * Host names must be DNS-compatible. Names must use only 0-9, a-z, and '-', and cannot start or end with a '-'. * DNS query traffic is throttled for each VM. Throttling shouldn't impact most applications. If request throttling is observed, ensure that client-side caching is enabled. For more information, see [DNS client configuration](#dns-client-configuration).
+* Use a different name for each virtual machine in a virtual network to avoid DNS resolution issues.
* Only VMs in the first 180 cloud services are registered for each virtual network in a classic deployment model. This limit does not apply to virtual networks in Azure Resource Manager. * The Azure DNS IP address is 168.63.129.16. This is a static IP address and will not change.
If forwarding queries to Azure doesn't suit your needs, you should provide your
* Be secured against access from the internet, to mitigate threats posed by external agents. > [!NOTE]
-> For best performance, when you are using Azure VMs as DNS servers, IPv6 should be disabled.
+> * For best performance, when you are using Azure VMs as DNS servers, IPv6 should be disabled.
+> * NSGs act as firewalls for your DNS resolver endpoints. You should modify or override your NSG security rules to allow access on UDP port 53 (and optionally TCP port 53) to your DNS listener endpoints. Once custom DNS servers are set on a network, traffic through port 53 bypasses the NSGs of the subnet.
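For illustration, an NSG rule allowing DNS traffic to custom DNS servers might be created with a sketch like the following; the resource names, priority, and source prefix are placeholders to adapt to your environment.

```bash
# Allow inbound DNS (UDP 53) from the virtual network on the DNS server subnet's NSG (placeholders throughout)
az network nsg rule create --resource-group MyResourceGroup --nsg-name MyDnsNsg --name Allow-DNS-UDP-53 --priority 100 --direction Inbound --access Allow --protocol Udp --source-address-prefixes VirtualNetwork --destination-port-ranges 53
```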
### Web apps Suppose you need to perform name resolution from your web app built by using App Service, linked to a virtual network, to VMs in the same virtual network. In addition to setting up a custom DNS server that has a DNS forwarder that forwards queries to Azure (virtual IP 168.63.129.16), perform the following steps:
vpn-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-overview.md
Previously updated : 12/02/2021 Last updated : 03/14/2022
Once a NAT rule is defined for a connection, the effective address space for the
* Advertised routes: Azure VPN gateway will advertise the External Mapping (post-NAT) prefixes of the EgressSNAT rules for the VNet address space, and the learned routes with post-NAT address prefixes from other connections. * BGP peer IP address consideration for a NAT'ed on-premises network:
- * APIPA (169.254.0.1 to 169.254.255.254) address: No NAT rule is required; specify the APIPA address in the Local Network Gateway directly.
+ * APIPA (169.254.0.1 to 169.254.255.254) address: Do not NAT the BGP APIPA address; specify the APIPA address in the Local Network Gateway directly.
* Non-APIPA address: Specify the **translated** or **post-NAT** IP address on the Local Network Gateway. Use the **translated** or **post-NAT** Azure BGP IP address(es) to configure the on-premises VPN routers. Ensure the NAT rules are defined for the intended translation. > [!NOTE]