Updates from: 08/25/2022 01:07:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
Microsoft partners with the following ISV partners.
| ISV partner | Description and integration walkthroughs |
|:-|:--|
| ![Screenshot of a Deduce logo.](./medi) is an identity verification and proofing provider focused on stopping account takeover and registration fraud. It helps combat identity fraud and creates a trusted user experience. |
| ![Screenshot of an eID-Me logo.](./medi) is an identity verification and decentralized digital identity solution for Canadian citizens. It enables organizations to meet Identity Assurance Level (IAL) 2 and Know Your Customer (KYC) requirements. |
| ![Screenshot of an Experian logo.](./medi) is an identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. |
| ![Screenshot of an IDology logo.](./medi) is an identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others. |
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
+
+ Title: Configure Azure Active Directory B2C with Deduce
+
+description: Learn how to integrate Azure AD B2C authentication with Deduce for identity verification
+Last updated: 8/22/2022
+# Configure Azure Active Directory B2C with Deduce to combat identity fraud and create a trusted user experience
+
+In this sample article, we provide guidance on how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Deduce](https://www.deduce.com/). Deduce is focused on stopping account takeover and registration fraud, the fastest-growing fraud on the internet. The Deduce Identity Network is powered by a coalition of over 150,000 websites and apps that share logins, registrations, and checkouts with Deduce over 1.4 billion times per day.
+
+The resulting identity intelligence stops attacks before they become a financial problem and a corporate liability. It uses historical behavioral analysis as a predictor of trust so organizations can deliver a frictionless user experience for their best customers. A comprehensive range of risk and trust signals can inform every authentication decision with the Azure AD B2C instance.
+With this integration, organizations can extend their Azure AD B2C capabilities during the sign-up or sign-in process to get additional insights about the user from the Deduce Insights API. Some of the attributes ingested by the Deduce API are:
+
+- Email
+- IP Address
+- User agent
+
+## Prerequisites
+
+To get started, you'll need:
+
+- An Azure subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free).
+
+- An [Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
+
+- [Register an application](./tutorial-register-applications.md)
+
+- [Contact Deduce](mailto:support@deduce.com) to configure a test or production environment.
+
+- Ability to use Azure AD B2C custom policies. If you can't, complete the steps in [Get started with custom policies in Azure AD B2C](custom-policy-overview.md) to learn how to use custom policies.
++
+## Scenario description
+
+The integration includes the following components:
+
+- **Azure AD B2C**: The authorization server, responsible for verifying the user's credentials, also known as the identity provider.
+- **Deduce**: The Deduce service takes inputs provided by the user and provides digital activity insights on the user's identity.
+- **Custom REST API**: This API implements the integration between Azure AD B2C and the Deduce Insights API.
+
+The following architecture diagram shows the implementation:
+![Diagram of the Deduce integration architecture.](./media/partner-deduce/partner-deduce-architecture-diagram.png)
++
+| Steps | Description |
+|:--|:--|
+| 1. | User opens Azure AD B2C's sign-in page, and then signs in or signs up by entering their username.|
+| 2. | Azure AD B2C calls the middle layer API and passes on the user attributes.|
+| 3. | The middle layer API collects the user attributes, transforms them into a format that the Deduce API can consume, and then sends them to Deduce.|
+| 4. | Deduce consumes the information and processes it to validate the user's identity based on risk analysis. Then, it returns the result to the middle layer API.|
+| 5. | The middle layer API processes the information and sends back risk, trust, and info signals in the correct JSON format to Azure AD B2C.|
+| 6. | Azure AD B2C receives information back from the middle layer API. <br> If it shows a failure response, an error message is displayed to the user. <br> If it shows a success response, the user is authenticated and written into the directory. |
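The transformation in step 3 can be sketched as follows. This is a minimal illustration, assuming the claim names used elsewhere in this article (`email`, `ip`, `site`, `apikey`, `action`, `user_agent`); the actual Deduce Insights request schema may differ, so confirm field names with Deduce.

```javascript
// Sketch of the middle layer API's request mapping (step 3 above).
// Field names follow the input claims used in this article; the real
// Deduce Insights schema should be confirmed with Deduce support.
function buildDeduceRequestBody(claims) {
  const required = ["email", "ip", "site", "apikey"];
  for (const field of required) {
    if (!claims[field]) {
      throw new Error(`Missing required claim: ${field}`);
    }
  }
  return {
    site: claims.site,
    apikey: claims.apikey,
    // Contextual action; the article uses auth.success.password as default.
    action: claims.action || "auth.success.password",
    email: claims.email.toLowerCase().trim(),
    ip: claims.ip,
    user_agent: claims.user_agent || "",
    sent_timestamp: Date.now()
  };
}
```

The middle layer validates the claims before forwarding them, so a malformed call from the policy fails fast instead of producing a confusing Deduce error.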
+
+## Onboard with Deduce
+
+To create a Deduce account, contact [Deduce support](mailto:support@deduce.com). Once an account is created, you'll receive a **Site ID** and an **API key** that you'll need for the API configuration.
+
+The following sections describe the integration process.
+
+### Step 1: Configure the Azure AD B2C policy
+
+Follow the instructions in [Get the starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#get-the-starter-pack) to learn how to set up your Azure AD B2C tenant and configure policies. This sample article is based on the Local Accounts starter pack.
++
+### Step 2: Customize the Azure AD B2C user interface
+
+To collect the user_agent from the client side, create your own **ContentDefinition** with an arbitrary ID to include the related JavaScript. Determine the end-user browser's user_agent string and store it as a claim in Azure AD B2C.
+
+1. Download the api.selfasserted page template, [selfAsserted.cshtml](https://login.microsoftonline.com/static/tenant/templates/AzureBlue/selfAsserted.cshtml), locally.
+
+1. Edit the selfAsserted.cshtml file to include the following **Style** element before the closing `</head>` tag. It hides the panel-default element.
+
+ ``` html
+ <style>
+ .panel-default {
+ margin: 0 auto;
+ width: 60%;
+ height: 0px;
+ background-color: #296ec6;
+ opacity: 1;
+ border-radius: .5rem;
+ border: none;
+ color: #fff;
+ font-size: 1em;
+ box-shadow: 0 0 30px 0 #dae1f7;
+ visibility: hidden;
+ }
+
+ </style>
+ ```
+
+1. Add the following JavaScript code before the closing `</body>` tag. This code reads the user_agent from the user's browser. The ContentDefinition is used in combination with the self-asserted technical profile to return user_agent as an output claim to the next orchestration step.
+
+ ``` html
+ <script>
+ $("#user_agent").hide().val(window.navigator.userAgent);
+ var img = new Image();
+ img.onload = function() {
+ document.getElementById("continue").click();
+ };
+ img.src = "https://login.microsoftonline.com/static/tenant/templates/images/logo.svg";
+ </script>
+ ```
+
+### Step 3: Configure your storage location
+
+1. Set up a [blob storage container in your storage account](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) and upload the previously edited `selfAsserted.cshtml` file to your blob container.
+
+1. Allow CORS access to the storage container you created by following these steps:
+
+    1. Go to **Settings** > **Allowed Origin**, and enter `https://your_tenant_name.b2clogin.com`. Replace `your_tenant_name` with the name of your Azure AD B2C tenant, such as `fabrikam`. Use all lowercase letters when entering your tenant name.
+
+ 1. For **Allowed Methods**, select `GET` and `PUT`.
+
+ 1. Select **Save**.
+
+### Step 4: Configure Content Definition
+
+To customize the user interface, you specify a URL in the `ContentDefinition` element with customized HTML content. In the self-asserted technical profile or orchestration step, you point to that ContentDefinition identifier.
++
+1. Open the `TrustFrameworkExtensions.xml` file and define a new **ContentDefinition** to customize the [self-asserted technical profile](https://docs.microsoft.com/azure/active-directory-b2c/self-asserted-technical-profile).
+
+1. Find the `BuildingBlocks` element and add the **api.selfassertedDeduce** ContentDefinition:
+
+ ```xml
+ <BuildingBlocks>
+ ...
+ <ContentDefinitions>
+ <ContentDefinition Id="api.selfassertedDeduce">
+ <LoadUri>https://<STORAGE-ACCOUNT-NAME>.blob.core.windows.net/<CONTAINER>/selfAsserted.cshtml</LoadUri>
+ <RecoveryUri>~/common/default_page_error.html</RecoveryUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.7</DataUri>
+ <Metadata>
+ <Item Key="DisplayName">Signin and Signup Deduce</Item>
+ </Metadata>
+ </ContentDefinition>
+ </ContentDefinitions>
+ ...
+ </BuildingBlocks>
+ ```
+
+Replace the `LoadUri` value with the URL that points to the `selfAsserted.cshtml` file you uploaded in [Step 3](#step-3-configure-your-storage-location).
+
+### Step 5: Add additional Deduce ClaimTypes
+
+The **ClaimsSchema** element defines the claim types that can be referenced as part of the policy. Deduce supports additional claims that you can add.
+
+1. Open the `TrustFrameworkExtensions.xml` file.
+
+1. In the **BuildingBlocks** element, add the additional identity claims that Deduce supports:
+
+ ```xml
+ <BuildingBlocks>
+ ...
+ <ClaimsSchema>
+ <!-- Claims for Deduce API request body -->
+ <ClaimType Id="site">
+ <DisplayName>Site ID</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Deduce Insight API site id</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="ip">
+ <DisplayName>IP Address</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="apikey">
+ <DisplayName>API Key</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="action">
+ <DisplayName>Contextual action</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+
+ <!-- End of Claims for Deduce API request body -->
+
+ <!-- Rest API call request body to deduce insight API -->
+ <ClaimType Id="deduce_requestbody">
+ <DisplayName>Request body for insight api</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Request body for insight api</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="deduce_trust_response">
+ <DisplayName>Response body for insight api</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Response body for insight api</AdminHelpText>
+ </ClaimType>
+ <!-- End of Rest API call request body to deduce insight API -->
+
+ <!-- Response claims from Deduce Insight API -->
+
+ <ClaimType Id="data.signals.trust">
+ <DisplayName>Trust collection</DisplayName>
+ <DataType>stringCollection</DataType>
+ <AdminHelpText>List of asserted trust</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="data.signals.info">
+        <DisplayName>Info collection</DisplayName>
+ <DataType>stringCollection</DataType>
+ <AdminHelpText>List of asserted info</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="data.signals.risk">
+        <DisplayName>Risk collection</DisplayName>
+ <DataType>stringCollection</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="data.network.company_name">
+ <DisplayName>data.network.company_name</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.crawler_name">
+ <DisplayName>data.network.crawler_name</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.is_corporate">
+ <DisplayName>data.network.is_corporate</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.is_education">
+ <DisplayName>data.network.is_education</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.is_hosting">
+ <DisplayName>data.network.is_hosting</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.is_mobile">
+ <DisplayName>data.network.is_mobile</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.is_proxy">
+        <DisplayName>data.network.is_proxy</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.is_tor">
+ <DisplayName>data.network.is_tor</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.is_vpn_capable">
+ <DisplayName>data.network.is_vpn_capable</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.is_vpn_confirmed">
+ <DisplayName>data.network.is_vpn_confirmed</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.is_vpn_suspect">
+ <DisplayName>data.network.is_vpn_suspect</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.isp_name">
+ <DisplayName>data.network.isp_name</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.network.vpn_name">
+ <DisplayName>data.network.vpn_name</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.geo.city">
+ <DisplayName>data.geo.city</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.geo.country">
+ <DisplayName>data.geo.country</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.geo.lat">
+ <DisplayName>data.geo.lat</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.geo.long">
+ <DisplayName>data.geo.long</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.geo.state">
+ <DisplayName>data.geo.state</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.device.ua_brand">
+ <DisplayName>data.device.ua_brand</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.device.ua_browser">
+ <DisplayName>data.device.ua_browser</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.device.ua_device_type">
+ <DisplayName>data.device.ua_device_type</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.device.ua_name">
+ <DisplayName>data.device.ua_name</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.device.ua_os">
+ <DisplayName>data.device.ua_os</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.device.ua_type">
+ <DisplayName>data.device.ua_type</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.device.ua_version">
+ <DisplayName>data.device.ua_version</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>List of asserted risk</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.email.ip_count">
+ <DisplayName>data.activity.email.ip_count</DisplayName>
+ <DataType>int</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.email.lastseen">
+ <DisplayName>data.activity.email.lastseen</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.email.frequency">
+ <DisplayName>data.activity.email.frequency</DisplayName>
+ <DataType>int</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.emailip.frequency">
+ <DisplayName>data.activity.emailip.frequency</DisplayName>
+ <DataType>int</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.emailip.lastseen">
+ <DisplayName>data.activity.emailip.lastseen</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.emailip.match">
+ <DisplayName>data.activity.emailip.match</DisplayName>
+ <DataType>boolean</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.emailip.rank_email">
+ <DisplayName>data.activity.emailip.rank_email</DisplayName>
+ <DataType>int</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.emailip.rank_ip">
+ <DisplayName>data.activity.emailip.rank_ip</DisplayName>
+ <DataType>int</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.ip.email_count">
+ <DisplayName>data.activity.ip.email_count</DisplayName>
+ <DataType>int</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="data.activity.ip.lastseen">
+ <DisplayName>data.activity.ip.lastseen</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="data.activity.ip.frequency">
+ <DisplayName>data.activity.ip.frequency</DisplayName>
+ <DataType>int</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="data.sent_timestamp">
+        <DisplayName>data.sent_timestamp</DisplayName>
+ <DataType>long</DataType>
+ <AdminHelpText>Add help text here</AdminHelpText>
+ </ClaimType>
+
+ <ClaimType Id="user_agent">
+ <DisplayName>User Agent</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>Add help text here</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="correlationId">
+ <DisplayName>correlation ID</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <!-- End Response claims from Deduce Insight API -->
+ ...
+ </ClaimsSchema>
+ ...
+ </BuildingBlocks>
+
+ ```
+
+### Step 6: Add Deduce ClaimsProvider
+
+A **claims provider** is an interface to communicate with different types of parties via its [technical profiles](https://docs.microsoft.com/azure/active-directory-b2c/technicalprofiles).
+
+- The `SelfAsserted-UserAgent` self-asserted technical profile is used to collect the user_agent from the client side.
+
+- The `deduce_insight_api` technical profile sends data to the Deduce RESTful service in an input claims collection and receives data back in an output claims collection. For more information, see [Integrate REST API claims exchanges in your Azure AD B2C custom policy](https://docs.microsoft.com/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-custom-policy).
+
+You can define Deduce as a claims provider by adding it to the **ClaimsProvider** element in the extension file of your policy.
+
+1. Open the `TrustFrameworkExtensions.xml` file.
+
+1. Find the **ClaimsProvider** element. If it doesn't exist, add a new **ClaimsProvider** as follows:
+
+ ```xml
+ <ClaimsProvider>
+ <DisplayName>Deduce REST API</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="SelfAsserted-UserAgent">
+ <DisplayName>Pre-login</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="ContentDefinitionReferenceId">api.selfassertedDeduce</Item>
+ <Item Key="setting.showCancelButton">false</Item>
+ <Item Key="language.button_continue">Continue</Item>
+ </Metadata>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="user_agent" />
+ </OutputClaims>
+ </TechnicalProfile>
+ <TechnicalProfile Id="deduce_insight_api">
+ <DisplayName>Get customer insight data from deduce api</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="ServiceUrl">https://deduceproxyapi.azurewebsites.net/api/Deduce/DeduceInsights</Item>
+ <Item Key="AuthenticationType">None</Item>
+ <Item Key="SendClaimsIn">Body</Item>
+ <Item Key="ResolveJsonPathsInJsonTokens">true</Item>
+ <Item Key="AllowInsecureAuthInProduction">true</Item>
+ <Item Key="DebugMode">true</Item>
+ <Item Key="IncludeClaimResolvingInClaimsHandling">true</Item>
+ </Metadata>
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="user_agent" />
+ <InputClaim ClaimTypeReferenceId="signInNames.emailAddress" PartnerClaimType="email" />
+ <InputClaim ClaimTypeReferenceId="ip" DefaultValue="{Context:IPAddress}" AlwaysUseDefaultValue="true" />
+ <InputClaim ClaimTypeReferenceId="apikey" DefaultValue="<DEDUCE API KEY>" />
+ <InputClaim ClaimTypeReferenceId="action" DefaultValue="auth.success.password" />
+ <InputClaim ClaimTypeReferenceId="site" DefaultValue="<SITE>" />
+ </InputClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="data.sent_timestamp" PartnerClaimType="data.sent_timestamp" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.ip.frequency" PartnerClaimType="data.activity.ip.frequency" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.ip.lastseen" PartnerClaimType="data.activity.ip.lastseen" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.ip.email_count" PartnerClaimType="data.activity.ip.email_count" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.email.ip_count" PartnerClaimType="data.activity.email.ip_count" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.email.lastseen" PartnerClaimType="data.activity.email.lastseen" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.email.frequency" PartnerClaimType="data.activity.email.frequency" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.frequency" PartnerClaimType="data.activity.emailip.frequency" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.lastseen" PartnerClaimType="data.activity.emailip.lastseen" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.match" PartnerClaimType="data.activity.emailip.match" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.rank_email" PartnerClaimType="data.activity.emailip.rank_email" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.rank_ip" PartnerClaimType="data.activity.emailip.rank_ip" />
+ <OutputClaim ClaimTypeReferenceId="data.signals.trust" PartnerClaimType="data.signals.trust" />
+ <OutputClaim ClaimTypeReferenceId="data.signals.info" PartnerClaimType="data.signals.info" />
+ <OutputClaim ClaimTypeReferenceId="data.signals.risk" PartnerClaimType="data.signals.risk" />
+ <OutputClaim ClaimTypeReferenceId="data.network.company_name" PartnerClaimType="data.network.company_name" />
+ <OutputClaim ClaimTypeReferenceId="data.network.crawler_name" PartnerClaimType="data.network.crawler_name" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_corporate" PartnerClaimType="data.network.is_corporate" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_education" PartnerClaimType="data.network.is_education" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_hosting" PartnerClaimType="data.network.is_hosting" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_mobile" PartnerClaimType="data.network.is_mobile" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_proxy" PartnerClaimType="data.network.is_proxy" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_tor" PartnerClaimType="data.network.is_tor" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_vpn_capable" PartnerClaimType="data.network.is_vpn_capable" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_vpn_confirmed" PartnerClaimType="data.network.is_vpn_confirmed" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_vpn_suspect" PartnerClaimType="data.network.is_vpn_suspect" />
+ <OutputClaim ClaimTypeReferenceId="data.network.isp_name" PartnerClaimType="data.network.isp_name" />
+ <OutputClaim ClaimTypeReferenceId="data.network.vpn_name" PartnerClaimType="data.network.vpn_name" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.city" PartnerClaimType="data.geo.city" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.country" PartnerClaimType="data.geo.country" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.lat" PartnerClaimType="data.geo.lat" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.long" PartnerClaimType="data.geo.long" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.state" PartnerClaimType="data.geo.state" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_brand" PartnerClaimType="data.device.ua_brand" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_browser" PartnerClaimType="data.device.ua_browser" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_device_type" PartnerClaimType="data.device.ua_device_type" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_name" PartnerClaimType="data.device.ua_name" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_os" PartnerClaimType="data.device.ua_os" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_type" PartnerClaimType="data.device.ua_type" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_version" PartnerClaimType="data.device.ua_version" />
+ </OutputClaims>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+
+ ```
+
+Replace `apikey` and `site` with the information provided by Deduce at the time of initial onboarding.
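The `ServiceUrl` above points at the middle layer API, which in turn forwards the request to Deduce. A minimal sketch of how that middle layer could build the outbound call follows; the endpoint URL here is a placeholder assumption (use the one Deduce provides at onboarding), and the API key travels in the request body, matching the `apikey` input claim in this article. Keep the key server-side; never expose it to the browser.

```javascript
// Sketch of how the middle layer API might prepare its call to the
// Deduce Insights API. The endpoint is a hypothetical placeholder;
// buildInsightsRequest only assembles fetch() arguments, it does not send.
function buildInsightsRequest(endpoint, body) {
  return {
    url: endpoint,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body)
    }
  };
}

// Example (network call commented out; needs a real endpoint and key):
// const { url, options } = buildInsightsRequest(
//   "https://<DEDUCE-ENDPOINT>/insights",
//   { site: "<SITE>", apikey: "<DEDUCE API KEY>", email: "user@contoso.com" }
// );
// const response = await fetch(url, options);
```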
+
+### Step 7: Add a user journey
+
+At this point, the **Deduce RESTful API** has been set up, but it's not yet available in any of the sign-up or sign-in pages. If you don't have your own custom user journey, create a duplicate of an existing template user journey. Otherwise, continue to the next step.
+
+1. Open the `TrustFrameworkBase.xml` file from the starter pack.
+
+1. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
+
+1. Open the `TrustFrameworkExtensions.xml` and find the **UserJourneys** element. If the element doesn't exist, add one.
+
+1. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
+
+1. Rename the `Id` of the user journey. For example, `Id="CustomSignUpOrSignIn"`.
+
+### Step 8: Add Deduce API to a user journey
+
+Now that you have a user journey, add the orchestration steps to call Deduce.
+
+1. Find the orchestration step element that includes `Type=CombinedSignInAndSignUp`, or `Type=ClaimsProviderSelection` in the user journey. It's usually the first orchestration step.
+
+1. Add a new orchestration step to invoke the `SelfAsserted-UserAgent` technical profile.
+
+1. Add a new orchestration step to invoke the **deduce_insight_api** technical profile.
+
+   The following **UserJourney** example is based on the Local Accounts starter pack:
+
+ ```xml
+ <UserJourneys>
+ <UserJourney Id="CustomSignUpOrSignIn">
+ <OrchestrationSteps>
+ <OrchestrationStep Order="1" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="Browser-UserAgent" TechnicalProfileReferenceId="SelfAsserted-UserAgent" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="2" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ <ClaimsProviderSelection ValidationClaimsExchangeId="LocalAccountSigninEmailExchange" />
+ </ClaimsProviderSelections>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="LocalAccountSigninEmailExchange" TechnicalProfileReferenceId="SelfAsserted-LocalAccountSignin-Email" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="3" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>objectId</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="SignUpWithLogonEmailExchange" TechnicalProfileReferenceId="LocalAccountSignUpWithLogonEmail" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+        <!-- This step reads any user attributes that might not have been received in the token. -->
+ <OrchestrationStep Order="4" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserReadWithObjectId" TechnicalProfileReferenceId="AAD-UserReadUsingObjectId" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="5" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="DecideInsights" TechnicalProfileReferenceId="deduce_insight_api" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="6" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
+
+ </OrchestrationSteps>
+ <ClientDefinition ReferenceId="DefaultWeb" />
+ </UserJourney>
+ </UserJourneys>
+ ```
+
+### Step 9: Configure the relying party policy
+
+The relying party policy specifies the user journey that Azure AD B2C executes. You can also control which claims are passed to your application by adjusting the **OutputClaims** element of the **PolicyProfile** TechnicalProfile element. In this sample, the application receives information back from the middle layer API:
+
+```xml
+ <RelyingParty>
+ <DefaultUserJourney ReferenceId="CustomSignUpOrSignIn" />
+ <UserJourneyBehaviors>
+ <ScriptExecution>Allow</ScriptExecution>
+ </UserJourneyBehaviors>
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <!-- <OutputClaim ClaimTypeReferenceId="user_agent" /> -->
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="correlationId" DefaultValue="{Context:CorrelationId}" />
+ <OutputClaim ClaimTypeReferenceId="data.sent_timestamp" PartnerClaimType="data.sent_timestamp" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.ip.frequency" PartnerClaimType="data.activity.ip.frequency" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.ip.lastseen" PartnerClaimType="data.activity.ip.lastseen" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.ip.email_count" PartnerClaimType="data.activity.ip.email_count" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.email.ip_count" PartnerClaimType="data.activity.email.ip_count" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.email.lastseen" PartnerClaimType="data.activity.email.lastseen" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.email.frequency" PartnerClaimType="data.activity.email.frequency" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.frequency" PartnerClaimType="data.activity.emailip.frequency" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.lastseen" PartnerClaimType="data.activity.emailip.lastseen" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.match" PartnerClaimType="data.activity.emailip.match" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.rank_email" PartnerClaimType="data.activity.emailip.rank_email" />
+ <OutputClaim ClaimTypeReferenceId="data.activity.emailip.rank_ip" PartnerClaimType="data.activity.emailip.rank_ip" />
+ <OutputClaim ClaimTypeReferenceId="data.signals.trust" PartnerClaimType="data.signals.trust" />
+ <OutputClaim ClaimTypeReferenceId="data.signals.info" PartnerClaimType="data.signals.info" />
+ <OutputClaim ClaimTypeReferenceId="data.signals.risk" PartnerClaimType="data.signals.risk" />
+ <OutputClaim ClaimTypeReferenceId="data.network.company_name" PartnerClaimType="data.network.company_name" />
+ <OutputClaim ClaimTypeReferenceId="data.network.crawler_name" PartnerClaimType="data.network.crawler_name" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_corporate" PartnerClaimType="data.network.is_corporate" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_education" PartnerClaimType="data.network.is_education" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_hosting" PartnerClaimType="data.network.is_hosting" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_mobile" PartnerClaimType="data.network.is_mobile" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_proxy" PartnerClaimType="data.network.is_proxy" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_tor" PartnerClaimType="data.network.is_tor" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_vpn_capable" PartnerClaimType="data.network.is_vpn_capable" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_vpn_confirmed" PartnerClaimType="data.network.is_vpn_confirmed" />
+ <OutputClaim ClaimTypeReferenceId="data.network.is_vpn_suspect" PartnerClaimType="data.network.is_vpn_suspect" />
+ <OutputClaim ClaimTypeReferenceId="data.network.isp_name" PartnerClaimType="data.network.isp_name" />
+ <OutputClaim ClaimTypeReferenceId="data.network.vpn_name" PartnerClaimType="data.network.vpn_name" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.city" PartnerClaimType="data.geo.city" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.country" PartnerClaimType="data.geo.country" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.lat" PartnerClaimType="data.geo.lat" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.long" PartnerClaimType="data.geo.long" />
+ <OutputClaim ClaimTypeReferenceId="data.geo.state" PartnerClaimType="data.geo.state" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_brand" PartnerClaimType="data.device.ua_brand" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_browser" PartnerClaimType="data.device.ua_browser" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_device_type" PartnerClaimType="data.device.ua_device_type" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_name" PartnerClaimType="data.device.ua_name" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_os" PartnerClaimType="data.device.ua_os" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_type" PartnerClaimType="data.device.ua_type" />
+ <OutputClaim ClaimTypeReferenceId="data.device.ua_version" PartnerClaimType="data.device.ua_version" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+ </RelyingParty>
+```
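As a quick sanity check before uploading, you can list the claim types a relying party policy will emit by parsing its **OutputClaims** element. The following is a minimal sketch using Python's standard library, run against a trimmed, namespace-free copy of the snippet above (a real policy file carries the TrustFrameworkPolicy XML namespace, which you'd need to account for):

```python
import xml.etree.ElementTree as ET

# Trimmed, namespace-free copy of the relying party policy's OutputClaims
policy_snippet = """
<TechnicalProfile Id="PolicyProfile">
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="displayName" />
    <OutputClaim ClaimTypeReferenceId="correlationId" DefaultValue="{Context:CorrelationId}" />
    <OutputClaim ClaimTypeReferenceId="data.signals.risk" PartnerClaimType="data.signals.risk" />
  </OutputClaims>
</TechnicalProfile>
"""

root = ET.fromstring(policy_snippet)
# Collect every ClaimTypeReferenceId, in document order
claims = [c.get("ClaimTypeReferenceId") for c in root.iter("OutputClaim")]
print(claims)  # → ['displayName', 'correlationId', 'data.signals.risk']
```

Comparing this list against the claims your application expects can catch a missing mapping before you run the policy.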
+
+### Step 10: Upload the custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+
+ a. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+ b. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
+
+1. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
+
+1. Under **Policies**, select **Identity Experience Framework**.
+
+1. Select **Upload Custom Policy**, and then upload the two policy files that you changed, in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `B2C_1A_signup`.
+
+### Step 11: Test your custom policy
+
+1. Select your relying party policy, for example `B2C_1A_signup`.
+
+1. For **Application**, select a web application that you [previously registered](./tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
+
+1. Select the **Run now** button.
+
+1. The sign-up policy should invoke Deduce immediately. If sign-in is used, select **Deduce** to sign in with Deduce.
+
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
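jwt.ms decodes the token in the browser; you can do the same locally by base64url-decoding the payload segment. Here's a minimal sketch, using a fabricated, unsigned sample token (a real token from Azure AD B2C is signed and carries the full set of claims mapped in the relying party policy):

```python
import base64
import json

def b64url_encode(obj):
    # JWT segments are base64url-encoded JSON with the padding stripped
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Fabricated, unsigned sample token for illustration only
sample_token = ".".join([
    b64url_encode({"alg": "none", "typ": "JWT"}),
    b64url_encode({
        "email": "b.simon@contoso.com",
        "correlationId": "00000000-0000-0000-0000-000000000000",
        "data.signals.trust": "high",
    }),
    "",  # empty signature segment in this unsigned sample
])

def decode_payload(token):
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

print(decode_payload(sample_token)["correlationId"])
# → 00000000-0000-0000-0000-000000000000
```

Note this only inspects the claims; it doesn't validate the signature, which your application should always do with a proper JWT library.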
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
+
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for identity verification and proofing.
| ISV partner | Description and integration walkthroughs |
|:-|:--|
+| ![Screenshot of a deduce logo.](./medi) is an identity verification and proofing provider focused on stopping account takeover and registration fraud. It helps combat identity fraud and creates a trusted user experience. |
| ![Screenshot of an eid-me logo.](./medi) is an identity verification and decentralized digital identity solution for Canadian citizens. It enables organizations to meet Identity Assurance Level (IAL) 2 and Know Your Customer (KYC) requirements. |
| ![Screenshot of an Experian logo.](./medi) is an identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. |
| ![Screenshot of an IDology logo.](./medi) is an identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others.|
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-introduction.md
Previously updated : 04/29/2022 Last updated : 08/24/2022 - it-pro
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
Title: Create an access review of groups and applications - Azure AD
description: Learn how to create an access review of group members or application access in Azure Active Directory. -+ editor: markwahl-msft na Previously updated : 07/20/2022 Last updated : 08/24/2022
If you are reviewing access to an application, then before creating the review,
### Scope

1. Sign in to the Azure portal and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
-1. On the left menu, select **Access reviews**.
+2. On the left menu, select **Access reviews**.
-1. Select **New access review** to create a new access review.
+3. Select **New access review** to create a new access review.
![Screenshot that shows the Access reviews pane in Identity Governance.](./media/create-access-review/access-reviews.png)
-1. In the **Select what to review** box, select which resource you want to review.
+4. In the **Select what to review** box, select which resource you want to review.
![Screenshot that shows creating an access review.](./media/create-access-review/select-what-review.png)
-1. If you selected **Teams + Groups**, you have two options:
+5. If you selected **Teams + Groups**, you have two options:
 - **All Microsoft 365 groups with guest users**: Select this option if you want to create recurring reviews on all your guest users across all your Microsoft Teams and Microsoft 365 groups in your organization. Dynamic groups and role-assignable groups aren't included. You can also choose to exclude individual groups by selecting **Select group(s) to exclude**.
 - **Select Teams + groups**: Select this option if you want to specify a finite set of teams or groups to review. A list of groups to choose from appears on the right.

 ![Screenshot that shows selecting Teams + Groups.](./media/create-access-review/teams-groups.png)
-1. If you selected **Applications**, select one or more applications.
+6. If you selected **Applications**, select one or more applications.
![Screenshot that shows the interface that appears if you selected applications instead of groups.](./media/create-access-review/select-application-detailed.png)
If you are reviewing access to an application, then before creating the review,
> [!NOTE] > If you selected **All Microsoft 365 groups with guest users**, your only option is to review **Guest users only**.
-1. Or if you are conducting group membership review, you can create access reviews for only the inactive users in the group. In the *Users scope* section, check the box next to **Inactive users (on tenant level)**. If you check the box, the scope of the review will focus on inactive users only, those who have not signed in either interactively or non-interactively to the tenant. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users in the group inactive for the specified number of days will be the only users in the review.
+8. Alternatively, if you're conducting a group membership review, you can create the access review for only the inactive users in the group. In the *Users scope* section, check the box next to **Inactive users (on tenant level)**. If you check the box, the review scope focuses only on inactive users: those who haven't signed in to the tenant, either interactively or non-interactively. Then specify **Days inactive**, up to 730 days (two years). Only users in the group who have been inactive for the specified number of days are included in the review.
+
+ > [!NOTE]
+ > Recently created users are not affected when configuring the inactivity time. The access review checks whether a user was created within the configured time frame and disregards users who haven't existed for at least that amount of time. For example, if you set the inactivity time to 90 days and a guest user was created or invited less than 90 days ago, the guest user won't be in scope of the access review. This ensures that a user can sign in at least once before being removed.
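Illustratively, the scoping rule in this note can be sketched as simple date arithmetic (this is a model of the described behavior, not the service's actual implementation):

```python
from datetime import datetime, timedelta, timezone

# Model of the scoping rule: a user is in scope only if they have been
# inactive for `days_inactive` days AND the account has existed at least
# that long.
def in_review_scope(created_at, last_sign_in, days_inactive, now=None):
    now = now or datetime.now(timezone.utc)
    threshold = now - timedelta(days=days_inactive)
    old_enough = created_at <= threshold
    inactive = last_sign_in is None or last_sign_in <= threshold
    return old_enough and inactive

now = datetime(2022, 8, 25, tzinfo=timezone.utc)
recent_guest = datetime(2022, 7, 1, tzinfo=timezone.utc)  # invited < 90 days ago
print(in_review_scope(recent_guest, None, 90, now))  # → False: account too new
```

A guest invited less than 90 days ago is excluded even though they've never signed in, which matches the example in the note.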
-1. Select **Next: Reviews**.
+9. Select **Next: Reviews**.
### Next: Reviews
active-directory Home Realm Discovery Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md
Previously updated : 02/09/2021 Last updated : 08/24/2022 + # Home Realm Discovery for an application
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
If setting multiple approvers, approval completes as soon as one of them approves.
![Select a user or group pane to select approvers](./media/pim-resource-roles-configure-role-settings/resources-role-settings-select-approvers.png)

1. Select at least one user and then click **Select**. Select at least one approver. If no specific approvers are selected, Privileged Role Administrators and Global Administrators become the default approvers.
+ > [!Note]
+ > An approver doesn't need to hold an Azure AD administrative role themselves. They can be a regular user, such as an IT executive.
1. Select **Update** to save your changes.
active-directory Netmotion Mobility Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netmotion-mobility-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with NetMotion Mobility'
+description: Learn how to configure single sign-on between Azure Active Directory and NetMotion Mobility.
++++++++ Last updated : 08/19/2022++++
+# Tutorial: Azure AD SSO integration with NetMotion Mobility
+
+In this tutorial, you'll learn how to integrate NetMotion Mobility with Azure Active Directory (Azure AD). When you integrate NetMotion Mobility with Azure AD, you can:
+
+* Control in Azure AD who has access to NetMotion Mobility.
+* Enable users to be signed-in with a NetMotion Mobility client with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* NetMotion Mobility 12.50 or later.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* NetMotion Mobility supports **SP** initiated SSO.
+* NetMotion Mobility supports **Just In Time** user provisioning.
+
+## Add NetMotion Mobility from the gallery
+
+To configure the integration of NetMotion Mobility into Azure AD, you need to add NetMotion Mobility from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **NetMotion Mobility** in the search box.
+1. Select **NetMotion Mobility** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for NetMotion Mobility
+
+Configure and test Azure AD SSO with NetMotion Mobility using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in NetMotion Mobility.
+
+To configure and test Azure AD SSO with NetMotion Mobility, perform the following steps:
+
+1. **[Configure Mobility for SAML-based Authentication](#configure-mobility-for-saml-based-authentication)** - to enable end users to authenticate using their Azure AD credentials.
+2. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+3. **[Configure NetMotion Mobility SSO](#configure-netmotion-mobility-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create NetMotion Mobility test user](#create-netmotion-mobility-test-user)** - to have a counterpart of B.Simon in NetMotion Mobility that is linked to the Azure AD representation of user.
+4. **[Test SAML-based User Authentication with the Mobility Client](#test-saml-based-user-authentication-with-the-mobility-client)** - to verify whether the configuration works.
+
+## Configure Mobility for SAML-based Authentication
+
+On the Mobility console, follow the procedures in the [Mobility Administrator Guide](https://help.netmotionsoftware.com/support/docs/MobilityXG/1250/help/mobilityhelp.htm#page/Mobility%2520Server%2Fintro.01.01.html%23) to accomplish the following:
+1. Create an [authentication profile](https://help.netmotionsoftware.com/support/docs/MobilityXG/1250/help/mobilityhelp.htm#page/Mobility%2520Server%2Fconfig.05.41.html%23ww2298330) for SAML, to enable a set of Mobility users to use the SAML protocol.
+2. Configure [SAML-based user authentication](https://help.netmotionsoftware.com/support/docs/MobilityXG/1250/help/mobilityhelp.htm#context/nmcfgapp/saml_userconfig) in Mobility, to set an SP URL and generate the mobilitySPMetadata.xml file, which you'll later import into Azure AD.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **NetMotion Mobility** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click on **Upload Metadata file** just above the **Basic SAML Configuration** section to import your mobilitySPMetadata.xml file into Azure AD.
+
+ ![Screenshot shows to choose metadata file.](media/netmotion-mobility-tutorial/file.png "Metadata")
+
+1. After importing the metadata file, on the **Basic SAML Configuration** section, perform the following steps to verify that the XML import has been completed successfully:
+
+ a. In the **Identifier** text box, verify that the URL is using the following pattern, where the variables in the following example URL match those for your Mobility server:
+ `https://<YourMobilityServerName>.<CustomerDomain>.<tld>/`
+
+ b. In the **Reply URL** text box, verify that the URL is using the following pattern:
+ `https://<YourMobilityServerName>.<CustomerDomain>.<tld>/saml/login`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
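Before importing the downloaded metadata into Mobility, you can confirm it contains the entity ID and a signing certificate. The following is a minimal sketch using Python's standard library, run against a fabricated, heavily trimmed metadata sample (real Azure AD federation metadata contains many more elements and a real certificate):

```python
import xml.etree.ElementTree as ET

# Fabricated, heavily trimmed federation metadata sample for illustration
metadata = """
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
                  entityID="https://sts.windows.net/contoso-tenant-id/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <ds:KeyInfo><ds:X509Data><ds:X509Certificate>MIIC...sample...</ds:X509Certificate></ds:X509Data></ds:KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>
"""

ns = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}
root = ET.fromstring(metadata)
entity_id = root.get("entityID")  # the IdP entity ID Mobility will trust
certs = [c.text for c in root.findall(".//ds:X509Certificate", ns)]
print(entity_id, len(certs))
```

If no `X509Certificate` elements are found, the download was likely incomplete and the IdP import into Mobility will fail.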
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to NetMotion Mobility.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **NetMotion Mobility**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure NetMotion Mobility SSO
+
+Follow the instructions in the Mobility Administrator Guide for [Configuring IdP Settings in the Mobility Console](https://help.netmotionsoftware.com/support/docs/MobilityXG/1250/help/mobilityhelp.htm#context/nmcfgapp/saml_userconfig): import the Azure AD metadata file into your Mobility server and complete the steps for IdP configuration.
+
+1. Once the Mobility authentication settings are configured, assign them to devices or device groups.
+1. Go to **Mobility console** > **Configure** > **Client Settings** and select the device or device group on the left that will use SAML-based authentication.
+1. Select **Authentication - Settings** Profile and choose the settings profile you created from the drop-down list.
+1. When you click **Apply**, the selected device or group is subscribed to the non-default settings.
+
+### Create NetMotion Mobility test user
+
+In this section, a user called B.Simon is created in NetMotion Mobility. NetMotion Mobility supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in NetMotion Mobility, a new one is created after authentication.
+
+## Test SAML-based User Authentication with the Mobility Client
+
+In this section, you test your Azure AD SAML configuration for client authentication.
+
+1. Following the guidance in [Configuring Mobility Clients](https://help.netmotionsoftware.com/support/docs/MobilityXG/1250/help/mobilityhelp.htm#page/Mobility%2520Server%2Fusing.06.01.html%23), configure a client device that's assigned a SAML-based authentication profile, and then attempt to connect to the Mobility server pool you configured for SAML-based authentication.
+1. If you encounter problems during the test, follow the guidance under [Troubleshooting the Mobility Client](https://help.netmotionsoftware.com/support/docs/MobilityXG/1250/help/mobilityhelp.htm#page/Mobility%2520Server%2Ftrouble.14.02.html).
aks Dapr Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-migration.md
kubectl delete namespace dapr-system
+## Register the `KubernetesConfiguration` service provider
+
+If you have not previously used cluster extensions, you may need to register the service provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
+
+```azurecli-interactive
+az provider list --query "[?contains(namespace,'Microsoft.KubernetesConfiguration')]" -o table
+```
+
+The *Microsoft.KubernetesConfiguration* provider should report as *Registered*, as shown in the following example output:
+
+```output
+Namespace                          RegistrationState  RegistrationPolicy
+---------------------------------  -----------------  --------------------
+Microsoft.KubernetesConfiguration  Registered         RegistrationRequired
+```
+
+If the provider shows as *NotRegistered*, register the provider using the [az provider register][az-provider-register] command, as shown in the following example:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.KubernetesConfiguration
+```
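If you script this check, for example in CI, you can apply the same filter to the JSON output of `az provider list`. Here's a small sketch (the JSON below is a trimmed, fabricated sample; the `namespace` and `registrationState` field names follow the CLI's JSON output):

```python
import json

# Trimmed sample of what `az provider list -o json` returns; real output
# lists many providers with more fields per provider.
providers_json = """
[
  {"namespace": "Microsoft.ContainerService", "registrationState": "Registered"},
  {"namespace": "Microsoft.KubernetesConfiguration", "registrationState": "NotRegistered"}
]
"""

def registration_state(providers, namespace):
    """Return the registrationState for the given provider namespace, or None."""
    for provider in providers:
        if provider["namespace"] == namespace:
            return provider["registrationState"]
    return None

state = registration_state(json.loads(providers_json), "Microsoft.KubernetesConfiguration")
if state != "Registered":
    print("Run: az provider register --namespace Microsoft.KubernetesConfiguration")
```

Note that registration can take a few minutes to complete, so a script may need to re-check the state after registering.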
## Install Dapr via the AKS extension

Once you've uninstalled Dapr from your system, install the [Dapr extension for AKS and Arc-enabled Kubernetes](./dapr.md#create-the-extension-and-install-dapr-on-your-aks-or-arc-enabled-kubernetes-cluster).
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
If the `k8s-extension` extension is already installed, you can update it to the latest version using the following command:

```azurecli-interactive
az extension update --name k8s-extension
```
+### Register the `KubernetesConfiguration` service provider
+
+If you have not previously used cluster extensions, you may need to register the service provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
+
+```azurecli-interactive
+az provider list --query "[?contains(namespace,'Microsoft.KubernetesConfiguration')]" -o table
+```
+
+The *Microsoft.KubernetesConfiguration* provider should report as *Registered*, as shown in the following example output:
+
+```output
+Namespace                          RegistrationState  RegistrationPolicy
+---------------------------------  -----------------  --------------------
+Microsoft.KubernetesConfiguration  Registered         RegistrationRequired
+```
+
+If the provider shows as *NotRegistered*, register the provider using the [az provider register][az-provider-register] command, as shown in the following example:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.KubernetesConfiguration
+```
## Create the extension and install Dapr on your AKS or Arc-enabled Kubernetes cluster

When installing the Dapr extension, use the flag value that corresponds to your cluster type:
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic
Both applications are now running on your Kubernetes cluster. To route traffic to each application, create a Kubernetes ingress resource. The ingress resource configures the rules that route traffic to one of the two applications.
-In the following example, traffic to *EXTERNAL_IP* is routed to the service named `aks-helloworld-one`. Traffic to *EXTERNAL_IP/hello-world-two* is routed to the `aks-helloworld-two` service. Traffic to *EXTERNAL_IP/static* is routed to the service named `aks-helloworld-one` for static assets.
+In the following example, traffic to *EXTERNAL_IP/hello-world-one* is routed to the service named `aks-helloworld-one`. Traffic to *EXTERNAL_IP/hello-world-two* is routed to the `aks-helloworld-two` service. Traffic to *EXTERNAL_IP/static* is routed to the service named `aks-helloworld-one` for static assets.
Create a file named `hello-world-ingress.yaml` and copy in the following example YAML.
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
+
+ Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster by using Bicep
+description: Learn how to quickly create a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS)
++ Last updated : 08/11/2022+
+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
++
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Bicep
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you'll:
+
+* Deploy an AKS cluster using a Bicep file.
+* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
+++
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+
+## Prerequisites
++
+### [Azure CLI](#tab/azure-cli)
++
+* This article requires version 2.20.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+* If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount][connect-azaccount] cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell]. You'll also need Bicep CLI. For more information, see [Azure PowerShell](../../azure-resource-manager/bicep/install.md#azure-powershell). If using Azure Cloud Shell, the latest version is already installed.
+++
+* To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the Bicep file](#review-the-bicep-file) section.
+
+* The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+
+* To deploy a Bicep file, you need write access on the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a virtual machine, you need Microsoft.Compute/virtualMachines/write and Microsoft.Resources/deployments/* permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+
+### Create an SSH key pair
+
+To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command will overwrite any SSH key pair with the same name already existing in the given location.
+
+1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.
+
+1. Run the `ssh-keygen` command. The following example creates an SSH key pair using RSA encryption and a bit length of 4096:
+
+ ```console
+ ssh-keygen -t rsa -b 4096
+ ```
+
+For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure][ssh-keys].
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/aks/).
++
+The resource defined in the Bicep file:
+
+* [**Microsoft.ContainerService/managedClusters**](/azure/templates/microsoft.containerservice/managedclusters?tabs=bicep&pivots=deployment-language-bicep)
+
+For more AKS samples, see the [AKS quickstart templates][aks-quickstart-templates] site.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name myResourceGroup --location eastus
+ az deployment group create --resource-group myResourceGroup --template-file main.bicep --parameters clusterName=<cluster-name> dnsPrefix=<dns-prefix> linuxAdminUsername=<linux-admin-username> sshRSAPublicKey='<ssh-key>'
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name myResourceGroup -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./main.bicep -clusterName <cluster-name> -dnsPrefix <dns-prefix> -linuxAdminUsername <linux-admin-username> -sshRSAPublicKey "<ssh-key>"
+ ```
+
+
+
+ Provide the following values in the commands:
+
+ * **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
+ * **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*.
+ * **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
+ * **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
+
+ It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step.
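Before running the deployment, it can save a failed round-trip to sanity-check the parameter values locally. For example, the DNS prefix must be 1 to 54 characters of letters, digits, and hyphens, beginning and ending with a letter or digit (a constraint documented for recent AKS API versions; verify it against yours). A minimal sketch:

```python
import re

# AKS dnsPrefix constraints (per recent API versions; verify for your version):
# 1-54 characters, letters/digits/hyphens only, must begin and end with a
# letter or digit.
def valid_dns_prefix(prefix: str) -> bool:
    if not 1 <= len(prefix) <= 54:
        return False
    return re.fullmatch(r"[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?", prefix) is not None

print(valid_dns_prefix("myakscluster"))  # → True
print(valid_dns_prefix("-badprefix"))    # → False
```

An invalid prefix only surfaces as a deployment error minutes into the run, so a local check like this is a cheap guard.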
+
+## Validate the Bicep deployment
+
+### Connect to the cluster
+
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
+
+ ```azurecli
+ az aks install-cli
+ ```
+
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+
+ ```console
+ kubectl get nodes
+ ```
+
+ The following output example shows the three nodes created in the previous steps. Make sure the node status is *Ready*:
+
+ ```output
+    NAME                       STATUS   ROLES   AGE     VERSION
+    aks-agentpool-41324942-0   Ready    agent   6m44s   v1.12.6
+    aks-agentpool-41324942-1   Ready    agent   6m46s   v1.12.6
+    aks-agentpool-41324942-2   Ready    agent   6m45s   v1.12.6
+ ```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. Install `kubectl` locally using the [Install-AzAksKubectl][install-azakskubectl] cmdlet:
+
+ ```azurepowershell
+ Install-AzAksKubectl
+ ```
+
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
+
+ ```azurepowershell-interactive
+ Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+ ```
+
+3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+
+ ```azurepowershell-interactive
+ kubectl get nodes
+ ```
+
+ The following output example shows the three nodes created in the previous steps. Make sure the node status is *Ready*:
+
+ ```plaintext
+ NAME STATUS ROLES AGE VERSION
+ aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
+ aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6
+ aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6
+ ```
+
+---
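The *Ready* check shown in the node listing above is easy to automate. As a rough illustration (not part of the quickstart), this Python sketch parses the tabular `kubectl get nodes` output and confirms that every node reports *Ready*; in a real script the output would come from `subprocess.run(["kubectl", "get", "nodes"], capture_output=True, text=True).stdout` rather than a hard-coded sample:

```python
# Illustrative sketch: verify every node in `kubectl get nodes` output is Ready.
sample_output = """\
NAME                       STATUS   ROLES   AGE     VERSION
aks-agentpool-41324942-0   Ready    agent   6m44s   v1.12.6
aks-agentpool-41324942-1   Ready    agent   6m46s   v1.12.6
aks-agentpool-41324942-2   Ready    agent   6m45s   v1.12.6
"""

def all_nodes_ready(output: str) -> bool:
    rows = output.strip().splitlines()[1:]          # skip the header row
    return all(row.split()[1] == "Ready" for row in rows)

print(all_nodes_ready(sample_output))  # True
```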
+## Deploy the application
+
+A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+
+In this quickstart, you'll use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
+
+* The sample Azure Vote Python applications.
+* A Redis instance.
+
+Two [Kubernetes Services][kubernetes-service] are also created:
+
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
+
+1. Create a file named `azure-vote.yaml`.
+    * If you use the Azure Cloud Shell, you can create this file using `code`, `vi`, or `nano`, as if working on a virtual or physical system.
+1. Copy in the following YAML definition:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-back
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-back
+ template:
+ metadata:
+ labels:
+ app: azure-vote-back
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-back
+ image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ env:
+ - name: ALLOW_EMPTY_PASSWORD
+ value: "yes"
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 6379
+ name: redis
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-back
+ spec:
+ ports:
+ - port: 6379
+ selector:
+ app: azure-vote-back
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-front
+ template:
+ metadata:
+ labels:
+ app: azure-vote-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 80
+ env:
+ - name: REDIS
+ value: "azure-vote-back"
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-front
+ spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-vote-front
+ ```
+
+1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+ ```console
+ kubectl apply -f azure-vote.yaml
+ ```
+
+    The following example output shows the successfully created deployments and services:
+
+ ```output
+ deployment "azure-vote-back" created
+ service "azure-vote-back" created
+ deployment "azure-vote-front" created
+ service "azure-vote-front" created
+ ```
+
+### Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+
+Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
+
+```console
+kubectl get service azure-vote-front --watch
+```
+
+The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+```
+
+Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+
+```output
+azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+```
+
+To see the Azure Vote app in action, open a web browser to the external IP address of your service.
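If you're scripting this check rather than watching interactively, the **EXTERNAL-IP** column can be parsed out of the `kubectl get service` output. The following Python sketch is illustrative only (the sample text stands in for output captured via `subprocess`):

```python
# Illustrative sketch: read the EXTERNAL-IP column from `kubectl get service`
# output and decide whether the load balancer has finished provisioning.
sample_output = """\
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m
"""

def external_ip(output):
    header, row = output.strip().splitlines()
    value = row.split()[header.split().index("EXTERNAL-IP")]
    return None if value == "<pending>" else value  # None while still pending

print(external_ip(sample_output))  # 52.179.23.131
```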
++
+## Clean up resources
+
+### [Azure CLI](#tab/azure-cli)
+
+If you don't plan on going through the following tutorials, clean up your unnecessary resources to avoid Azure charges. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroup --yes --no-wait
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+If you don't plan on going through the following tutorials, clean up your unnecessary resources to avoid Azure charges. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myResourceGroup
+```
+
+---
+> [!NOTE]
+> In this quickstart, the AKS cluster was created with a system-assigned managed identity (the default identity option). This identity is managed by the platform and does not require removal.
+
+## Next steps
+
+In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it.
+
+To learn more about AKS and walk through a complete code-to-deployment example, continue to the Kubernetes cluster tutorial.
+
+> [!div class="nextstepaction"]
+> [AKS tutorial][aks-tutorial]
+
+<!-- LINKS - external -->
+[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
+[aks-quickstart-templates]: https://azure.microsoft.com/resources/templates/?term=Azure+Kubernetes+Service
+
+<!-- LINKS - internal -->
+[kubernetes-concepts]: ../concepts-clusters-workloads.md
+[aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
+[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
+[az-aks-browse]: /cli/azure/aks#az_aks_browse
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
+[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[az-group-create]: /cli/azure/group#az_group_create
+[az-group-delete]: /cli/azure/group#az_group_delete
+[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
+[azure-cli-install]: /cli/azure/install-azure-cli
+[install-azure-powershell]: /powershell/azure/install-az-ps
+[connect-azaccount]: /powershell/module/az.accounts/Connect-AzAccount
+[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
+[azure-portal]: https://portal.azure.com
+[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
+[kubernetes-service]: ../concepts-network.md#services
+[ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md
+[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster
description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS) Previously updated : 04/29/2021 Last updated : 08/17/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
If your environment meets the prerequisites and you're familiar with using ARM t
[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json)
+## Prerequisites
+ [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
-- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+* This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
### [Azure PowerShell](#tab/azure-powershell) -- If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount][connect-azaccount] cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell]. If using Azure Cloud Shell, the latest version is already installed.
+* If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount][connect-azaccount] cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell]. If using Azure Cloud Shell, the latest version is already installed.
-- To create an AKS cluster using a Resource Manager template, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the template](#review-the-template) section.
+* To create an AKS cluster using a Resource Manager template, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the template](#review-the-template) section.
-- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+* The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
-- To deploy a Bicep file or ARM template, you need write access on the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a virtual machine, you need Microsoft.Compute/virtualMachines/write and Microsoft.Resources/deployments/* permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+* To deploy a Bicep file or ARM template, you need write access on the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a virtual machine, you need Microsoft.Compute/virtualMachines/write and Microsoft.Resources/deployments/* permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
### Create an SSH key pair
For more information about creating SSH keys, see [Create and manage SSH keys fo
## Review the template
-The template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/aks/).
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/aks/).
++
+The following resource is defined in the ARM template:
+
+* [**Microsoft.ContainerService/managedClusters**](/azure/templates/microsoft.containerservice/managedclusters?pivots=deployment-language-arm-template)
For more AKS samples, see the [AKS quickstart templates][aks-quickstart-templates] site.
For more AKS samples, see the [AKS quickstart templates][aks-quickstart-template
[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json)
-2. Select or enter the following values.
+1. Select or enter the following values.
For this quickstart, leave the default values for the *OS Disk Size GB*, *Agent Count*, *Agent VM Size*, *OS Type*, and *Kubernetes Version*. Provide your own values for the following template parameters:
For more AKS samples, see the [AKS quickstart templates][aks-quickstart-template
:::image type="content" source="./media/quick-kubernetes-deploy-rm-template/create-aks-cluster-using-template-portal.png" alt-text="Screenshot of Resource Manager template to create an Azure Kubernetes Service cluster in the portal.":::
-3. Select **Review + Create**.
+1. Select **Review + Create**.
It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step.
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
az aks install-cli ```
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
-3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
```console kubectl get nodes
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
```output NAME STATUS ROLES AGE VERSION
- aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
+ aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6 aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6 ```
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
Install-AzAksKubectl ```
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
```azurepowershell-interactive Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ```
-3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
```azurepowershell-interactive kubectl get nodes
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
```plaintext NAME STATUS ROLES AGE VERSION
- aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
+ aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6 aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6 ```
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-power-platform.md
Previously updated : 07/27/2021 Last updated : 08/12/2022
This article walks through the steps in the Azure portal to create a custom Powe
1. Select an API to publish to the Power Platform. 1. Select a Power Platform environment to publish the API to. 1. Enter a display name, which will be used as the name of the custom connector.
+ 1. Optionally, if the API doesn't already require a subscription, select **Create subscription key connection parameter**.
1. Optionally, if the API is [protected by an OAuth 2.0 server](api-management-howto-protect-backend-with-aad.md), provide details including **Client ID**, **Client secret**, **Authorization URL**, **Token URL**, and **Refresh URL**. 1. Select **Create**.
Once the connector is created, navigate to your [Power Apps](https://make.powera
:::image type="content" source="media/export-api-power-platform/custom-connector-power-app.png" alt-text="Custom connector in Power Platform":::
+## Manage a custom connector
+
+You can manage your custom connector in your Power Apps or Power Platform environment. For details about settings, see [Create a custom connector from scratch](/connectors/custom-connectors/define-blank).
+
+1. Select your connector from the list of custom connectors.
+1. Select the pencil (Edit) icon to edit and test the custom connector.
+ > [!NOTE]
-> To call the API from the Power Apps test console, you need to add the "https://flow.microsoft.com" URL as an origin to the [CORS policy](api-management-cross-domain-policies.md#CORS) in your API Management instance.
+> To call the API from the Power Apps test console, you need to add the `https://flow.microsoft.com` URL as an origin to the [CORS policy](api-management-cross-domain-policies.md#CORS) in your API Management instance.
+
+## Update a custom connector
+
+From API Management, you can update a connector to target a different API or Power Apps environment, or to update authorization settings.
+
+1. Navigate to your API Management service in the Azure portal.
+1. In the menu, under **APIs**, select **Power Platform**.
+1. Select **Update a connector**.
+1. Select the API you want to update the connector for, update settings as needed, and select **Update**.
+ ## Next steps
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
When clustering is enabled, the JBoss EAP instances use the FILE_PING JGroups di
The Premium V3 and Isolated V2 App Service Plan types can optionally be distributed across Availability Zones to improve resiliency and reliability for your business-critical workloads. This architecture is also known as [zone redundancy](../availability-zones/migrate-app-service.md). The JBoss EAP clustering feature is compatible with the zone redundancy feature.
+#### Auto-Scale Rules
+
+When configuring autoscale rules for horizontal scaling, remove instances incrementally (one at a time) so that each removed instance can transfer its activity (such as handling a database transaction) to another member of the cluster. When configuring your autoscale rules in the portal to scale in, use the following options:
+
+- **Operation**: "Decrease count by"
+- **Cool down**: "5 minutes" or greater
+- **Instance count**: 1
+
+You don't need to add instances incrementally when scaling out; you can add multiple instances to the cluster at a time.
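Expressed in an autoscale setting (for example, in an ARM template or Bicep file), the scale-in guidance above corresponds roughly to a `scaleAction` like the sketch below. The field values are illustrative, and the accompanying `metricTrigger` that decides *when* to scale is omitted; consult the autoscale settings reference for the full schema.

```json
{
  "scaleAction": {
    "direction": "Decrease",
    "type": "ChangeCount",
    "value": "1",
    "cooldown": "PT5M"
  }
}
```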
+ ### JBoss EAP App Service Plans <a id="jboss-eap-hardware-options"></a>
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
For App Service, you then make the following modifications:
] ```
+## Serve static files for Flask apps
+
+If your Flask web app includes static front-end files, first follow the instructions on [managing static files](https://flask.palletsprojects.com/en/2.1.x/tutorial/static/) in the Flask documentation. For an example of serving static files in a Flask application, see the [quickstart sample Flask application](https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart) on GitHub.
+
+To serve static files directly from a route on your application, you can use the [`send_from_directory`](https://flask.palletsprojects.com/en/2.2.x/api/#flask.send_from_directory) method:
+
+```python
+from flask import send_from_directory
+
+@app.route('/reports/<path:path>')
+def send_report(path):
+ return send_from_directory('reports', path)
+```
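As a quick way to see the route above in action, you can exercise it with Flask's built-in test client. This sketch is self-contained and illustrative only: the `reports` directory is created in a temporary location and the file name is made up, so it differs slightly from the snippet above, which serves a directory relative to the application root.

```python
import pathlib
import tempfile

from flask import Flask, send_from_directory

# Hypothetical reports directory with one sample file.
reports_dir = pathlib.Path(tempfile.mkdtemp())
(reports_dir / "q1.txt").write_text("report contents")

app = Flask(__name__)

@app.route('/reports/<path:path>')
def send_report(path):
    # An absolute directory keeps this sketch self-contained; in an app you
    # would typically pass a path relative to the application root.
    return send_from_directory(reports_dir, path)

with app.test_client() as client:
    response = client.get('/reports/q1.txt')
    print(response.status_code)    # 200
    print(response.data.decode())  # report contents
```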
+ ## Container characteristics When deployed to App Service, Python apps run within a Linux Docker container that's defined in the [App Service Python GitHub repository](https://github.com/Azure-App-Service/python). You can find the image configurations inside the version-specific directories.
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md
> [!IMPORTANT] > Private Endpoint is available for Windows and Linux Web App, containerized or not, hosted on these App Service Plans : **Basic**, **Standard**, **PremiumV2**, **PremiumV3**, **IsolatedV2**, **Functions Premium** (sometimes referred to as the Elastic Premium plan).
-You can use Private Endpoint for your Azure Web App to allow clients located in your private network to securely access the app over Private Link. The Private Endpoint uses an IP address from your Azure VNet address space. Network traffic between a client on your private network and the Web App traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure from the public Internet.
+You can use Private Endpoint for your Azure Web App to allow clients located in your private network to securely access the app over Private Link. The Private Endpoint uses an IP address from your Azure virtual network address space. Network traffic between a client on your private network and the Web App traverses over the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure from the public Internet.
Using Private Endpoint for your Web App enables you to: - Secure your Web App by configuring the Private Endpoint, eliminating public exposure.-- Securely connect to Web App from on-premises networks that connect to the VNet using a VPN or ExpressRoute private peering.-- Avoid any data exfiltration from your VNet.
+- Securely connect to Web App from on-premises networks that connect to the virtual network using a VPN or ExpressRoute private peering.
+- Avoid any data exfiltration from your virtual network.
-If you just need a secure connection between your VNet and your Web App, a Service Endpoint is the simplest solution.
-If you also need to reach the web app from on-premises through an Azure Gateway, a regionally peered VNet, or a globally peered VNet, Private Endpoint is the solution.
+If you just need a secure connection between your virtual network and your Web App, a Service Endpoint is the simplest solution.
+If you also need to reach the web app from on-premises through an Azure Gateway, a regionally peered virtual network, or a globally peered virtual network, Private Endpoint is the solution.
For more information, see [Service Endpoints][serviceendpoint]. ## Conceptual overview
-A Private Endpoint is a special network interface (NIC) for your Azure Web App in a Subnet in your Virtual Network (VNet).
-When you create a Private Endpoint for your Web App, it provides secure connectivity between clients on your private network and your Web App. The Private Endpoint is assigned an IP Address from the IP address range of your VNet.
-The connection between the Private Endpoint and the Web App uses a secure [Private Link][privatelink]. Private Endpoint is only used for incoming flows to your Web App. Outgoing flows won't use this Private Endpoint. You can inject outgoing flows to your network in a different subnet through the [VNet integration feature][vnetintegrationfeature].
+A Private Endpoint is a special network interface (NIC) for your Azure Web App in a Subnet in your virtual network.
+When you create a Private Endpoint for your Web App, it provides secure connectivity between clients on your private network and your Web App. The Private Endpoint is assigned an IP Address from the IP address range of your virtual network.
+The connection between the Private Endpoint and the Web App uses a secure [Private Link][privatelink]. Private Endpoint is only used for incoming flows to your Web App. Outgoing flows won't use this Private Endpoint. You can inject outgoing flows to your network in a different subnet through the [virtual network integration feature][vnetintegrationfeature].
Each slot of an app is configured separately. You can plug up to 100 Private Endpoints per slot. You can't share a Private Endpoint between slots.
The Subnet where you plug the Private Endpoint can have other resources in it, y
You can also deploy the Private Endpoint in a different region than the Web App. > [!Note]
->The VNet integration feature cannot use the same subnet as Private Endpoint, this is a limitation of the VNet integration feature.
+>The virtual network integration feature cannot use the same subnet as Private Endpoint, this is a limitation of the virtual network integration feature.
From a security perspective: - By default, when you enable Private Endpoints to your Web App, you disable all public access.-- You can enable multiple Private Endpoints in others VNets and Subnets, including VNets in other regions.-- The IP address of the Private Endpoint NIC must be dynamic, but will remain the same until you delete the Private Endpoint.-- The Subnet that hosts the Private Endpoint can have an NSG associated, but you must disable the network policies enforcement for the Private Endpoint: see [Disable network policies for private endpoints][disablesecuritype]. As a result, you can't filter by any NSG the access to your Private Endpoint.-- By default, when you enable Private Endpoint to your Web App, the [access restrictions][accessrestrictions] configuration of the Web App isn't evaluated.-- You can eliminate the data exfiltration risk from the VNet by removing all NSG rules where destination is tag Internet or Azure services. When you deploy a Private Endpoint for a Web App, you can only reach this specific Web App through the Private Endpoint. If you have another Web App, you must deploy another dedicated Private Endpoint for this other Web App.
+- You can enable multiple Private Endpoints in other virtual networks and subnets, including virtual networks in other regions.
+- The access restrictions configuration of a Web App isn't evaluated for traffic through the Private Endpoint.
+- You can eliminate the data exfiltration risk from the virtual network by removing all NSG rules where destination is tag Internet or Azure services. When you deploy a Private Endpoint for a Web App, you can only reach this specific Web App through the Private Endpoint. If you have another Web App, you must deploy another dedicated Private Endpoint for this other Web App.
In the Web HTTP logs of your Web App, you'll find the client source IP. This feature is implemented using the TCP Proxy protocol, forwarding the client IP property up to the Web App. For more information, see [Getting connection Information using TCP Proxy v2][tcpproxy].
az appservice ase update --name myasename --allow-new-private-endpoint-connectio
## Specific requirements
-If the Virtual Network is in a different subscription than the app, you must ensure that the subscription with the Virtual Network is registered for the Microsoft.Web resource provider. You can explicitly register the provider [by following this documentation][registerprovider], but it will also automatically be registered when creating the first web app in a subscription.
+If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the Microsoft.Web resource provider. You can explicitly register the provider [by following this documentation][registerprovider], but it will also automatically be registered when creating the first web app in a subscription.
## Pricing
applied-ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-azure-function.md
Title: "Tutorial: Use an Azure Function to process stored documents"
-description: This guide shows you how to use an Azure function to trigger the processing of documents that are uploaded to an Azure blob storage container.
+description: This guide shows you how to use an Azure function to trigger the processing of documents that are uploaded to an Azure blob storage container.
Previously updated : 03/19/2021 Last updated : 08/23/2022 -+
-# Tutorial: Use an Azure Function to process stored documents
+# Tutorial: Use Azure Functions and Python to process stored documents
-You can use Form Recognizer as part of an automated data processing pipeline built with Azure Functions. This guide shows you how to use an Azure function to process documents that are uploaded to an Azure blob storage container. This workflow extracts table data from stored documents using the Form Recognizer Layout service and saves the table data in a .csv file in Azure. You can then display the data using Microsoft Power BI (not covered here).
+Form Recognizer can be used as part of an automated data processing pipeline built with Azure Functions. This guide will show you how to use Azure Functions to process documents that are uploaded to an Azure blob storage container. This workflow extracts table data from stored documents using the Form Recognizer layout model and saves the table data in a .csv file in Azure. You can then display the data using Microsoft Power BI (not covered here).
-> [!div class="mx-imgBorder"]
-> ![azure service workflow diagram](./media/tutorial-azure-function/workflow-diagram.png)
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create an Azure Storage account
-> * Create an Azure Functions project
-> * Extract layout data from uploaded forms
-> * Upload layout data to Azure Storage
+>
+> * Create an Azure Storage account.
+> * Create an Azure Functions project.
+> * Extract layout data from uploaded forms.
+> * Upload extracted layout data to Azure Storage.
## Prerequisites
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource <span class="docon docon-navigate-external x-hidden-focus"></span></a> in the Azure portal to get your Form Recognizer key and endpoint. After it deploys, select **Go to resource**.
- * You'll need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
- * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-* A local PDF document to analyze. You can download this [sample document](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample-layout.pdf) to use.
-* [Python 3.8.x](https://www.python.org/downloads/) installed.
-* [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) installed.
-* [Azure Functions Core Tools](../../azure-functions/functions-run-local.md?tabs=windows%2ccsharp%2cbash#install-the-azure-functions-core-tools) installed.
-* Visual Studio Code with the following extensions installed:
- * [Azure Functions extension](/azure/developer/python/tutorial-vs-code-serverless-python-01#visual-studio-code-python-and-the-azure-functions-extension)
- * [Python extension](https://code.visualstudio.com/docs/python/python-tutorial#_install-visual-studio-code-and-the-python-extension)
+* **Azure subscription** - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+
+* **A Form Recognizer resource**. Once you have your Azure subscription, create a [Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+
+ * After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the tutorial:
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+* [**Python 3.6.x, 3.7.x, 3.8.x, or 3.9.x**](https://www.python.org/downloads/) (Python 3.10.x isn't supported for this project).
+
+* The latest version of [**Visual Studio Code**](https://code.visualstudio.com/) (VS Code) with the following extensions installed:
+
+ * [**Azure Functions extension**](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). Once it's installed, you should see the Azure logo in the left-navigation pane.
+
+ * [**Azure Functions Core Tools**](/azure/azure-functions/functions-run-local?tabs=v3%2Cwindows%2Ccsharp%2Cportal%2Cbash) version 3.x (Version 4.x isn't supported for this project).
+
+ * [**Python Extension**](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code. For more information, *see* [Getting Started with Python in VS Code](https://code.visualstudio.com/docs/python/python-tutorial).
+
+* [**Azure Storage Explorer**](https://azure.microsoft.com/features/storage-explorer/) installed.
+
+* **A local PDF document to analyze**. You can use our [sample pdf document](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample-layout.pdf) for this project.
## Create an Azure Storage account
-[Create an Azure Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) on the Azure portal. Select **StorageV2** as the Account kind.
+1. [Create a general-purpose v2 Azure Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the Azure portal. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+
+ * [Create a storage account](../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
+ * [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window.
-On the left pane, select the **CORS** tab, and remove the existing CORS policy if any exists.
+1. On the left pane, select the **Resource sharing (CORS)** tab, and remove the existing CORS policy if any exists.
-Once that has deployed, create two empty blob storage containers, named **test** and **output**.
+1. Once your storage account has deployed, create two empty blob storage containers, named **input** and **output**.
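If you'd rather script this step, the same two containers can be created with the Azure Storage Blob SDK for Python. This is a minimal sketch, assuming the `azure-storage-blob` package is installed and that the connection string comes from an environment variable (`AZURE_STORAGE_CONNECTION_STRING` is a name chosen here for illustration):

```python
import os

# The two container names used throughout this tutorial.
CONTAINER_NAMES = ["input", "output"]

def create_containers(connection_string: str) -> None:
    # Imported inside the function so the sketch has no import-time dependency.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string(connection_string)
    for name in CONTAINER_NAMES:
        # create_container raises ResourceExistsError if the container exists.
        service.create_container(name)
```

Calling `create_containers(os.environ["AZURE_STORAGE_CONNECTION_STRING"])` with your account's connection string then provisions both containers.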
## Create an Azure Functions project
-Open Visual Studio Code. If you've installed the Azure Functions extension, you should see an Azure logo on the left navigation pane. Select it. Create a new project, and when prompted create a local folder **coa_new** to contain the project.
+1. Create a new folder named **functions-app** to contain the project and choose **Select**.
+
+1. Open Visual Studio Code and open the Command Palette (Ctrl+Shift+P). Search for and choose **Python: Select Interpreter** → choose an installed Python interpreter that is version 3.6.x, 3.7.x, 3.8.x, or 3.9.x. This selection adds the Python interpreter path to your project.
+
+1. Select the Azure logo from the left-navigation pane.
+
+ * You'll see your existing Azure resources in the Resources view.
+
+ * Select the Azure subscription that you're using for this project. Below it, you should see the Azure Function App.
+
+ :::image type="content" source="media/tutorial-azure-function/azure-extensions-visual-studio-code.png" alt-text="Screenshot of a list showing your Azure resources in a single, unified view.":::
+
+1. Select the Workspace (Local) section located below your listed resources. Select the plus symbol and choose the **Create Function** button.
+
+ :::image type="content" source="media/tutorial-azure-function/workspace-create-function.png" alt-text="Screenshot showing where to begin creating an Azure function.":::
+
+1. When prompted, choose **Create new project** and navigate to the **functions-app** directory. Choose **Select**.
+
+1. You'll be prompted to configure several settings:
+
+ * **Select a language** → choose Python.
+
+ * **Select a Python interpreter to create a virtual environment** → select the interpreter you set as the default earlier.
+
+ * **Select a template** → choose **Azure Blob Storage trigger** and give the trigger a name or accept the default name. Press **Enter** to confirm.
+
+ * **Select setting** → choose **➕Create new local app setting** from the dropdown menu.
+
+ * **Select subscription** → choose your Azure subscription with the storage account you created → select your storage account → then select the name of the storage input container (in this case, `input/{name}`). Press **Enter** to confirm.
-![VSCode create function button](./media/tutorial-azure-function/vs-code-create-function.png)
+ * **Select how you would like to open your project** → choose **Open the project in the current window** from the dropdown menu.
+1. Once you've completed these steps, VS Code will add a new Azure Function project with a *\_\_init\_\_.py* Python script. This script will be triggered when a file is uploaded to the **input** storage container:
-You'll be prompted to configure a number of settings:
-* In the **Select a language** prompt, select Python.
-* In the **Select a template** prompt, select Azure Blob Storage trigger. Then give the default trigger a name.
-* In the **Select setting** prompt, opt to create new local app settings.
-* Select your Azure subscription with the storage account you created. Then you need to enter the name of the storage container (in this case, `test/{name}`)
-* Opt to open the project in the current window.
+ ```python
+ import logging
-![VSCode create prompt example](./media/tutorial-azure-function/vs-code-prompt.png)
+ import azure.functions as func
-When you've completed these steps, VSCode will add a new Azure Function project with a *\_\_init\_\_.py* Python script. This script will be triggered when a file is uploaded to the **test** storage container, but it won't do anything.
+
+ def main(myblob: func.InputStream):
+ logging.info(f"Python blob trigger function processed blob \n"
+ f"Name: {myblob.name}\n"
+ f"Blob Size: {myblob.length} bytes")
+ ```
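For reference, the blob trigger itself is configured in the function's *function.json* file next to the script. A representative binding looks roughly like this; the `path` and `connection` values shown are illustrative and will reflect the choices you made above:

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "input/{name}",
      "connection": "yourstorageaccount_STORAGE"
    }
  ]
}
```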
## Test the function
-Press F5 to run the basic function. VSCode will prompt you to select a storage account to interface with. Select the storage account you created and continue.
+1. Press F5 to run the basic function. VS Code will prompt you to select a storage account to interface with.
-Open Azure Storage Explorer and upload a sample PDF document to the **Test** container. Then check the VSCode terminal. The script should log that it was triggered by the PDF upload.
+1. Select the storage account you created and continue.
-![VSCode terminal test](./media/tutorial-azure-function/vs-code-terminal-test.png)
+1. Open Azure Storage Explorer and upload the sample PDF document to the **input** container. Then check the VS Code terminal. The script should log that it was triggered by the PDF upload.
+ :::image type="content" source="media/tutorial-azure-function/visual-studio-code-terminal-test.png" alt-text="Screenshot of the VS Code terminal after uploading a new document.":::
-Stop the script before continuing.
+1. Stop the script before continuing.
## Add document processing code
-Next, you'll add your own code to the Python script to call the Form Recognizer service and parse the uploaded documents using the Form Recognizer [Layout API](concept-layout.md).
-
-In VSCode, navigate to the function's *requirements.txt* file. This defines the dependencies for your script. Add the following Python packages to the file:
-
-```
-cryptography
-azure-functions
-azure-storage-blob
-azure-identity
-requests
-pandas
-numpy
-```
-
-Then, open the *\_\_init\_\_.py* script. Add the following `import` statements:
-
-```Python
-import logging
-from azure.storage.blob import BlobServiceClient
-import azure.functions as func
-import json
-import time
-from requests import get, post
-import os
-from collections import OrderedDict
-import numpy as np
-import pandas as pd
-```
-
-You can leave the generated `main` function as-is. You'll add your custom code inside this function.
-
-```python
-# This part is automatically generated
-def main(myblob: func.InputStream):
- logging.info(f"Python blob trigger function processed blob \n"
- f"Name: {myblob.name}\n"
- f"Blob Size: {myblob.length} bytes")
-```
-
-The following code block calls the Form Recognizer [Analyze Layout](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeLayoutAsync) API on the uploaded document. Fill in your endpoint and key values.
-
-```Python
-# This is the call to the Form Recognizer endpoint
- endpoint = r"Your Form Recognizer Endpoint"
- apim_key = "Your Form Recognizer Key"
- post_url = endpoint + "/formrecognizer/v2.1/Layout/analyze"
- source = myblob.read()
-
- headers = {
- # Request headers
- 'Content-Type': 'application/pdf',
- 'Ocp-Apim-Subscription-Key': apim_key,
- }
-
- text1=os.path.basename(myblob.name)
-```
-
+Next, you'll add your own code to the Python script to call the Form Recognizer service and parse the uploaded documents using the Form Recognizer [layout model](concept-layout.md).
+
+1. In VS Code, navigate to the function's *requirements.txt* file. This file defines the dependencies for your script. Add the following Python packages to the file:
+
+ ```txt
+ cryptography
+ azure-functions
+ azure-storage-blob
+ azure-identity
+ requests
+ pandas
+ numpy
+ ```
+
+1. Then, open the *\_\_init\_\_.py* script. Add the following `import` statements:
+
+ ```Python
+ import logging
+ from azure.storage.blob import BlobServiceClient
+ import azure.functions as func
+ import json
+ import time
+ from requests import get, post
+ import os
+ from collections import OrderedDict
+ import numpy as np
+ import pandas as pd
+ ```
+
+1. You can leave the generated `main` function as-is. You'll add your custom code inside this function.
+
+ ```python
+ # This part is automatically generated
+ def main(myblob: func.InputStream):
+ logging.info(f"Python blob trigger function processed blob \n"
+ f"Name: {myblob.name}\n"
+ f"Blob Size: {myblob.length} bytes")
+ ```
+
+1. The following code block calls the Form Recognizer [Analyze Layout](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeLayoutAsync) API on the uploaded document. Fill in your endpoint and key values.
+
+ ```Python
+ # This is the call to the Form Recognizer endpoint
+ endpoint = r"Your Form Recognizer Endpoint"
+ apim_key = "Your Form Recognizer Key"
+ post_url = endpoint + "/formrecognizer/v2.1/layout/analyze"
+ source = myblob.read()
+
+ headers = {
+ # Request headers
+ 'Content-Type': 'application/pdf',
+ 'Ocp-Apim-Subscription-Key': apim_key,
+ }
+
+ text1=os.path.basename(myblob.name)
+ ```
+
+ > [!IMPORTANT]
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, *see* Cognitive Services [security](../../cognitive-services/cognitive-services-security.md).
+
+1. Next, add code to query the service and get the returned data.
+
+ ```Python
+    resp = post(url = post_url, data = source, headers = headers)
+ if resp.status_code != 202:
+ print("POST analyze failed:\n%s" % resp.text)
+ quit()
+ print("POST analyze succeeded:\n%s" % resp.headers)
+ get_url = resp.headers["operation-location"]
+
+ wait_sec = 25
+
+ time.sleep(wait_sec)
+ # The layout API is async therefore the wait statement
+
+    resp = get(url = get_url, headers = {"Ocp-Apim-Subscription-Key": apim_key})
+
+ resp_json = json.loads(resp.text)
+
+    status = resp_json["status"]
+
+ if status == "succeeded":
+        print("Layout analysis succeeded.")
+ results=resp_json
+ else:
+        print("GET layout results failed:\n%s" % resp.text)
+ quit()
-> [!IMPORTANT]
-> Go to the Azure portal. If the Form Recognizer resource you created in the **Prerequisites** section deployed successfully, click the **Go to Resource** button under **Next Steps**. You can find your key and endpoint in the resource's **key and endpoint** page, under **resource management**.
->
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, see the [Cognitive Services security](../../cognitive-services/cognitive-services-security.md) article.
-
-Next, add code to query the service and get the returned data.
--
-```Python
-resp = requests.post(url = post_url, data = source, headers = headers)
- if resp.status_code != 202:
- print("POST analyze failed:\n%s" % resp.text)
- quit()
- print("POST analyze succeeded:\n%s" % resp.headers)
- get_url = resp.headers["operation-location"]
-
- wait_sec = 25
-
- time.sleep(wait_sec)
- # The layout API is async therefore the wait statement
-
- resp =requests.get(url = get_url, headers = {"Ocp-Apim-Subscription-Key": apim_key})
-
- resp_json = json.loads(resp.text)
-
-
- status = resp_json["status"]
-
-
- if status == "succeeded":
- print("Layout Analysis succeeded:\n%s")
results=resp_json
- else:
- print("GET Layout results failed:\n%s")
- quit()
-
- results=resp_json
-```
-
-Then add the following code to connect to the Azure Storage **output** container. Fill in your own values for the storage account name and key. You can get the key on the **Access keys** tab of your storage resource in the Azure portal.
-
-```Python
-# This is the connection to the blob storage, with the Azure Python SDK
- blob_service_client = BlobServiceClient.from_connection_string("DefaultEndpointsProtocol=https;AccountName="Storage Account Name";AccountKey="storage account key";EndpointSuffix=core.windows.net")
- container_client=blob_service_client.get_container_client("output")
-```
-
-The following code parses the returned Form Recognizer response, constructs a .csv file, and uploads it to the **output** container.
--
-> [!IMPORTANT]
-> You will likely need to edit this code to match the structure of your own form documents.
-
-```python
- # The code below extracts the json format into tabular data.
- # Please note that you need to adjust the code below to your form structure.
- # It probably won't work out-of-the-box for your specific form.
- pages = results["analyzeResult"]["pageResults"]
-
- def make_page(p):
- res=[]
- res_table=[]
- y=0
- page = pages[p]
- for tab in page["tables"]:
- for cell in tab["cells"]:
- res.append(cell)
- res_table.append(y)
- y=y+1
-
- res_table=pd.DataFrame(res_table)
- res=pd.DataFrame(res)
- res["table_num"]=res_table[0]
- h=res.drop(columns=["boundingBox","elements"])
- h.loc[:,"rownum"]=range(0,len(h))
- num_table=max(h["table_num"])
- return h, num_table, p
-
- h, num_table, p= make_page(0)
-
- for k in range(num_table+1):
- new_table=h[h.table_num==k]
- new_table.loc[:,"rownum"]=range(0,len(new_table))
- row_table=pages[p]["tables"][k]["rows"]
- col_table=pages[p]["tables"][k]["columns"]
- b=np.zeros((row_table,col_table))
- b=pd.DataFrame(b)
- s=0
- for i,j in zip(new_table["rowIndex"],new_table["columnIndex"]):
- b.loc[i,j]=new_table.loc[new_table.loc[s,"rownum"],"text"]
- s=s+1
-
-```
-
-Finally, the last block of code uploads the extracted table and text data to your blob storage element.
-
-```Python
- # Here is the upload to the blob storage
- tab1_csv=b.to_csv(header=False,index=False,mode='w')
- name1=(os.path.splitext(text1)[0]) +'.csv'
- container_client.upload_blob(name=name1,data=tab1_csv)
-```
+ ```
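The fixed 25-second `time.sleep` above is a simplification: analysis can finish sooner or take longer. One alternative is to poll the operation status until it reaches a terminal state. This is a sketch, not part of the Form Recognizer SDK; `poll_until_done` takes any callable that returns the current status string, so it's independent of the HTTP library:

```python
import time

def poll_until_done(fetch_status, interval_sec=2, timeout_sec=120):
    """Call fetch_status() repeatedly until it returns a terminal status."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval_sec)
    raise TimeoutError("Layout analysis did not finish in time")
```

Here `fetch_status` would wrap the GET request above, for example `lambda: json.loads(get(url=get_url, headers={"Ocp-Apim-Subscription-Key": apim_key}).text)["status"]`.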
+
+1. Add the following code to connect to the Azure Storage **output** container. Fill in your own values for the storage account name and key. You can get the key on the **Access keys** tab of your storage resource in the Azure portal.
+
+ ```Python
+ # This is the connection to the blob storage, with the Azure Python SDK
+    blob_service_client = BlobServiceClient.from_connection_string("DefaultEndpointsProtocol=https;AccountName=<storage account name>;AccountKey=<storage account key>;EndpointSuffix=core.windows.net")
+ container_client=blob_service_client.get_container_client("output")
+ ```
+
+ The following code parses the returned Form Recognizer response, constructs a .csv file, and uploads it to the **output** container.
+
+ > [!IMPORTANT]
+ > You will likely need to edit this code to match the structure of your own form documents.
+
+ ```python
+ # The code below extracts the json format into tabular data.
+ # Please note that you need to adjust the code below to your form structure.
+ # It probably won't work out-of-the-box for your specific form.
+ pages = results["analyzeResult"]["pageResults"]
+
+ def make_page(p):
+ res=[]
+ res_table=[]
+ y=0
+ page = pages[p]
+ for tab in page["tables"]:
+ for cell in tab["cells"]:
+ res.append(cell)
+ res_table.append(y)
+ y=y+1
+
+ res_table=pd.DataFrame(res_table)
+ res=pd.DataFrame(res)
+ res["table_num"]=res_table[0]
+ h=res.drop(columns=["boundingBox","elements"])
+ h.loc[:,"rownum"]=range(0,len(h))
+ num_table=max(h["table_num"])
+ return h, num_table, p
+
+ h, num_table, p= make_page(0)
+
+ for k in range(num_table+1):
+ new_table=h[h.table_num==k]
+ new_table.loc[:,"rownum"]=range(0,len(new_table))
+ row_table=pages[p]["tables"][k]["rows"]
+ col_table=pages[p]["tables"][k]["columns"]
+ b=np.zeros((row_table,col_table))
+ b=pd.DataFrame(b)
+ s=0
+ for i,j in zip(new_table["rowIndex"],new_table["columnIndex"]):
+ b.loc[i,j]=new_table.loc[new_table.loc[s,"rownum"],"text"]
+ s=s+1
+
+ ```
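To make the cell-to-grid mapping above concrete, here is the same idea run on a tiny synthetic `pageResults` table (fabricated data, and plain Python instead of pandas, for brevity):

```python
# A minimal, made-up table in the shape the layout API returns.
page = {"tables": [{"rows": 2, "columns": 2, "cells": [
    {"rowIndex": 0, "columnIndex": 0, "text": "Item"},
    {"rowIndex": 0, "columnIndex": 1, "text": "Qty"},
    {"rowIndex": 1, "columnIndex": 0, "text": "Widget"},
    {"rowIndex": 1, "columnIndex": 1, "text": "3"},
]}]}

table = page["tables"][0]
# Build an empty rows-by-columns grid, then place each cell's text by index.
grid = [["" for _ in range(table["columns"])] for _ in range(table["rows"])]
for cell in table["cells"]:
    grid[cell["rowIndex"]][cell["columnIndex"]] = cell["text"]

csv_text = "\n".join(",".join(row) for row in grid)
print(csv_text)
# prints:
# Item,Qty
# Widget,3
```

This is the same tabular shape the pandas code writes to the .csv file.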
+
+1. Finally, the last block of code uploads the extracted table and text data to your blob storage element.
+
+ ```Python
+ # Here is the upload to the blob storage
+ tab1_csv=b.to_csv(header=False,index=False,mode='w')
+ name1=(os.path.splitext(text1)[0]) +'.csv'
+ container_client.upload_blob(name=name1,data=tab1_csv)
+ ```
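As a quick sanity check on the naming logic above, an uploaded blob named `input/sample-layout.pdf` (the path here is just an example) produces a .csv file with the same base name:

```python
import os

blob_name = "input/sample-layout.pdf"        # what myblob.name looks like
text1 = os.path.basename(blob_name)          # "sample-layout.pdf"
name1 = os.path.splitext(text1)[0] + ".csv"  # strip ".pdf", append ".csv"
print(name1)                                 # prints "sample-layout.csv"
```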
## Run the function
-Press F5 to run the function again. Use Azure Storage Explorer to upload a sample PDF form to the **Test** storage container. This action should trigger the script to run, and you should then see the resulting .csv file (displayed as a table) in the **output** container.
+1. Press F5 to run the function again.
+
+1. Use Azure Storage Explorer to upload a sample PDF form to the **input** storage container. This action should trigger the script to run, and you should then see the resulting .csv file (displayed as a table) in the **output** container.
You can connect this container to Power BI to create rich visualizations of the data it contains.
In this tutorial, you learned how to use an Azure Function written in Python to
> [Microsoft Power BI](https://powerbi.microsoft.com/integrations/azure-table-storage/)

* [What is Form Recognizer?](overview.md)
-* Learn more about the [Layout API](concept-layout.md)
+* Learn more about the [layout model](concept-layout.md)
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
**Form Recognizer REST API v3.0 is now generally available and ready for use in production applications!**
-#### The August release introduces the following performance updates:
+#### The August release introduces the following new capabilities and updates:
##### Form Recognizer Studio updates
* **Custom models**. The Studio now includes the ability to reorder labels in custom model projects to improve labeling efficiency.
-* **Copy Models** Custom models can be copied across Form Recognizer services from within the Studio. This enables the promotion of a trained model to other environments and regions.
+* **Copy models**. Custom models can be copied across Form Recognizer services from within the Studio. The operation enables the promotion of a trained model to other environments and regions.
* **Delete documents**. The Studio now supports deleting documents from labeled dataset within custom projects.
* [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields will now resolve to the existing fields TotalTax and Line/Tax respectively.
-* [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards as well as passport visa information.
+* [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards. Support for passport visa information.
* [**prebuilt-receipt**](concept-receipt.md). Expanded locale support for French (fr-FR), Spanish (es-ES), Portuguese (pt-PT), Italian (it-IT) and German (de-DE).
-* [**prebuilt-businessCard**](concept-business-card.md). Address parsing support to extract sub-fields for address components like address, city, state, country, and zip code.
+* [**prebuilt-businessCard**](concept-business-card.md). Address parsing support to extract subfields for address components like address, city, state, country, and zip code.
* **AI quality improvements**
This new release includes the following updates:
**Version 4.0.0-beta.4 (2022-06-08)**
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
-##### [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
-##### [**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
+[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
### [**Java**](#tab/java) **Version 4.0.0-beta.5 (2022-06-07)**
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
-##### [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
+ [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
-##### [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
### [**JavaScript**](#tab/javascript) **Version 4.0.0-beta.4 (2022-06-07)**
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-##### [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
-##### [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
+ [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
### [**Python**](#tab/python) **Version 3.2.0b5 (2022-06-07)**
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-##### [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
-##### [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
+ [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
The latest beta release version of the Azure Form Recognizer SDKs incorporates n
This new release includes the following updates:
-* [Custom Document models and modes](concept-custom.md):
+* [Custom Document models and modes](concept-custom.md):
  * [Custom template](concept-custom-template.md) (formerly custom form)
  * [Custom neural](concept-custom-neural.md).
  * [Custom model build mode](concept-custom.md#build-mode).
-* [W-2 prebuilt model](concept-w2.md) (prebuilt-tax.us.w2).
+* [W-2 prebuilt model](concept-w2.md) (prebuilt-tax.us.w2).
-* [Read prebuilt model](concept-read.md) (prebuilt-read).
+* [Read prebuilt model](concept-read.md) (prebuilt-read).
-* [Invoice prebuilt model (Spanish)](concept-invoice.md#supported-languages-and-locales) (prebuilt-invoice).
+* [Invoice prebuilt model (Spanish)](concept-invoice.md#supported-languages-and-locales) (prebuilt-invoice).
### [**C#**](#tab/csharp) **Version 4.0.0-beta.3 (2022-02-10)**
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta3-2022-02-10)
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta3-2022-02-10)
-##### [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.3)
+ [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.3)
-##### [**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer.documentanalysis?view=azure-dotnet-preview&preserve-view=true)
+ [**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer.documentanalysis?view=azure-dotnet-preview&preserve-view=true)
### [**Java**](#tab/java) **Version 4.0.0-beta.4 (2022-02-10)**
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta4-2022-02-10)
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta4-2022-02-10)
-##### [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.4/jar)
+ [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.4/jar)
-##### [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
### [**JavaScript**](#tab/javascript) **Version 4.0.0-beta.3 (2022-02-10)**
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#400-beta3-2022-02-10)
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#400-beta3-2022-02-10)
-##### [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.3)
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.3)
-##### [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
+ [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
### [**Python**](#tab/python) **Version 3.2.0b3 (2022-02-10)**
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#320b3-2022-02-10)
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#320b3-2022-02-10)
-##### [**Package (PyPI)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b3/)
+ [**Package (PyPI)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b3/)
-##### [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
+ [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
azure-arc Use Azure Policy Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy-flux-2.md
+
+ Title: "Apply Flux v2 configurations at-scale using Azure Policy"
+ Last updated : 8/23/2022+
+description: "Apply Flux v2 configurations at-scale using Azure Policy"
+keywords: "Kubernetes, K8s, Arc, AKS, Azure, containers, GitOps, Flux v2, policy"
++
+# Apply Flux v2 configurations at-scale using Azure Policy
+
+You can use Azure Policy to apply Flux v2 configurations (`Microsoft.KubernetesConfiguration/fluxConfigurations` resource type) at scale on Azure Arc-enabled Kubernetes (`Microsoft.Kubernetes/connectedClusters`) or AKS (`Microsoft.ContainerService/managedClusters`) clusters.
+
+To use Azure Policy, select a built-in policy definition and create a policy assignment. You can search for **flux** to find all of the Flux v2 policy definitions. When creating the policy assignment:
+1. Set the scope for the assignment.
+ * The scope can be a management group, a subscription (all resource groups within it), or specific resource groups.
+2. Set the parameters for the Flux v2 configuration that will be created.
+
+Once the assignment is created, the Azure Policy engine identifies all Azure Arc-enabled Kubernetes clusters located within the scope and applies the GitOps configuration to each cluster.
+
+To enable separation of concerns, you can create multiple policy assignments, each with a different Flux v2 configuration pointing to a different source. For example, one git repository may be used by cluster admins and other repositories may be used by application teams.
+
+> [!TIP]
+> There are built-in policy definitions for these scenarios:
+> * Flux extension install (required for all scenarios): `Configure installation of Flux extension on Kubernetes cluster`
+> * Flux configuration using public Git repository (generally a test scenario): `Configure Kubernetes clusters with Flux v2 configuration using public Git repository`
+> * Flux configuration using private Git repository with SSH auth: `Configure Kubernetes clusters with Flux v2 configuration using Git repository and SSH secrets`
+> * Flux configuration using private Git repository with HTTPS auth: `Configure Kubernetes clusters with Flux v2 configuration using Git repository and HTTPS secrets`
+> * Flux configuration using private Git repository with HTTPS CA cert auth: `Configure Kubernetes clusters with Flux v2 configuration using Git repository and HTTPS CA Certificate`
+> * Flux configuration using private Git repository with local K8s secret: `Configure Kubernetes clusters with Flux v2 configuration using Git repository and local secrets`
+> * Flux configuration using private Bucket source and KeyVault secrets: `Configure Kubernetes clusters with Flux v2 configuration using Bucket source and secrets in KeyVault`
+> * Flux configuration using private Bucket source and local K8s secret: `Configure Kubernetes clusters with specified Flux v2 Bucket source using local secrets`
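If you create the assignment with a script instead of the portal, the parameter values for the chosen definition (for example, the public Git repository definition above) can be supplied as a JSON file. This is a minimal sketch: the parameter names shown are assumptions, so check the built-in definition for its exact parameter names.

```shell
# Illustrative parameter values for a Flux v2 policy assignment, written to a
# file for use with `az policy assignment create --params`. The parameter
# names below are assumptions for this sketch, not the exact built-in names.
cat > flux-params.json <<'EOF'
{
  "configurationName": { "value": "cluster-config" },
  "sourceKind": { "value": "GitRepository" },
  "repositoryUrl": { "value": "https://github.com/example/gitops-repo" },
  "repositoryRefBranch": { "value": "main" }
}
EOF

# Sanity-check that the file is valid JSON before creating the assignment.
python3 -c "import json; json.load(open('flux-params.json')); print('params ok')"
```

The file would then be passed to `az policy assignment create --params @flux-params.json`, together with the definition ID and scope.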
+
+## Prerequisite
+
+Verify you have `Microsoft.Authorization/policyAssignments/write` permissions on the scope (management group, subscription, or resource group) where you'll create this policy assignment.
+
+## Create a policy assignment
+
+1. In the Azure portal, navigate to **Policy**.
+1. In the **Authoring** section of the sidebar, select **Definitions**.
+1. In the "Kubernetes" category, choose the Flux v2 built-in policy definition that fits your scenario (for example, `Configure Kubernetes clusters with Flux v2 configuration using public Git repository`).
+1. Select **Assign**.
+1. Set the **Scope** to the management group, subscription, or resource group to which the policy assignment will apply.
+ * If you want to exclude any resources from the policy assignment scope, set **Exclusions**.
+1. Give the policy assignment an easily identifiable **Name** and **Description**.
+1. Ensure **Policy enforcement** is set to **Enabled**.
+1. Select **Next**.
+1. Set the parameter values to be used while creating the `fluxConfigurations` resource.
+ * For more information about parameters, see the [tutorial on deploying Flux v2 configurations](./tutorial-use-gitops-flux2.md).
+1. Select **Next**.
+1. Enable **Create a remediation task**.
+1. Verify **Create a managed identity** is checked, and that the identity will have **Contributor** permissions.
+ * For more information, see the [Create a policy assignment quickstart](../../governance/policy/assign-policy-portal.md) and the [Remediate non-compliant resources with Azure Policy article](../../governance/policy/how-to/remediate-resources.md).
+1. Select **Review + create**.
+
+After creating the policy assignment, the configuration is applied to new Azure Arc-enabled Kubernetes or AKS clusters created within the scope of the policy assignment.
+
+For existing clusters, you may need to manually run a remediation task. The policy assignment typically takes 10 to 20 minutes to take effect.
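The remediation task above can also be triggered from the Azure CLI. Since the command needs a signed-in Azure session, this sketch only assembles and prints it; the assignment name is a placeholder.

```shell
# Sketch of triggering a remediation task with Azure CLI. The assignment name
# is illustrative; the command is printed rather than executed here because it
# requires a signed-in Azure CLI session.
ASSIGNMENT="apply-flux-config"   # your policy assignment name
REMEDIATE_CMD="az policy remediation create --name remediate-${ASSIGNMENT} --policy-assignment ${ASSIGNMENT} --resource-discovery-mode ReEvaluateCompliance"
echo "${REMEDIATE_CMD}"
```

`--resource-discovery-mode ReEvaluateCompliance` asks Azure Policy to re-evaluate compliance before remediating, which picks up clusters that existed before the assignment.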
+
+## Verify a policy assignment
+
+1. In the Azure portal, navigate to one of your Azure Arc-enabled Kubernetes or AKS clusters.
+1. In the **Settings** section of the sidebar, select **GitOps**.
+ * In the configurations list, you should see the configuration created by the policy assignment.
+1. In the **Kubernetes resources** section of the sidebar, select **Namespaces** and **Workloads**.
+ * You should see the namespace and artifacts that were created by the Flux configuration.
+ * You should see the objects described by the manifests in the Git repo deployed on the cluster.
+
+## Next steps
+
+[Set up Azure Monitor for Containers with Azure Arc-enabled Kubernetes clusters](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md).
azure-arc Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy.md
Title: "Apply configurations at-scale using Azure Policy"
+ Title: "Apply Flux v1 configurations at-scale using Azure Policy"
# Previously updated : 11/23/2021 Last updated : 8/23/2022
-description: "Apply configurations at-scale using Azure Policy"
-keywords: "Kubernetes, Arc, Azure, K8s, containers"
+description: "Apply Flux v1 configurations at-scale using Azure Policy"
+keywords: "Kubernetes, Arc, Azure, K8s, containers, GitOps, Flux v1, policy"
-# Apply configurations at-scale using Azure Policy
+# Apply Flux v1 configurations at-scale using Azure Policy
-You can use Azure Policy to apply configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations` resource type) at scale on Azure Arc-enabled Kubernetes clusters (`Microsoft.Kubernetes/connectedclusters`).
+You can use Azure Policy to apply Flux v1 configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations` resource type) at scale on Azure Arc-enabled Kubernetes clusters (`Microsoft.Kubernetes/connectedclusters`).
->[!NOTE]
->The built-in policies referenced in this article are for GitOps with Flux v1.
+> [!NOTE]
+> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the article for using policy with Flux v2](./use-azure-policy-flux-2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
To use Azure Policy, select a built-in GitOps policy definition and create a policy assignment. When creating the policy assignment: 1. Set the scope for the assignment.
Verify you have `Microsoft.Authorization/policyAssignments/write` permissions on
1. In the Azure portal, navigate to **Policy**. 1. In the **Authoring** section of the sidebar, select **Definitions**. 1. In the "Kubernetes" category, choose the "Configure Kubernetes clusters with specified GitOps configuration using no secrets" built-in policy definition.
-1. Click on **Assign**.
+1. Select **Assign**.
1. Set the **Scope** to the management group, subscription, or resource group to which the policy assignment will apply. * If you want to exclude any resources from the policy assignment scope, set **Exclusions**. 1. Give the policy assignment an easily identifiable **Name** and **Description**. 1. Ensure **Policy enforcement** is set to **Enabled**. 1. Select **Next**.
-1. Set the parameter values to be used while creating the `sourceControlConfiguration`.
+1. Set the parameter values to be used while creating the `sourceControlConfigurations` resource.
* For more information about parameters, see the [tutorial on deploying GitOps configurations](./tutorial-use-gitops-connected-cluster.md). 1. Select **Next**. 1. Enable **Create a remediation task**.
For existing clusters, you may need to manually run a remediation task. This tas
* In the list, you should see the policy assignment that you created earlier with the **Compliance state** set as *Compliant*. 1. In the **Settings** section of the sidebar, select **GitOps**. * In the configurations list, you should see the configuration created by the policy assignment.
-1. Use `kubectl` to interrogate the cluster.
- * You should see the namespace and artifacts that were created by the GitOps configuration.
- * You should see the objects described by the manifests in the Git repo getting deployed on the cluster.
+1. In the **Kubernetes resources** section of the sidebar, select **Namespaces** and **Workloads**.
+ * You should see the namespace and artifacts that were created by the Flux configuration.
+ * You should see the objects described by the manifests in the Git repo deployed on the cluster.
## Next steps
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 07/14/2022 Last updated : 08/24/2022
URLS:
|`https://*.dp.prod.appliances.azure.com`|Resource bridge data plane service| |`https://ecpacr.azurecr.io` |Resource bridge container image download | |`.blob.core.windows.net`<br> `*.dl.delivery.mp.microsoft.com`<br> `*.do.dsp.mp.microsoft.com` |Resource bridge image download |
+|`https://azurearcfork8sdev.azurecr.io` |Azure Arc for Kubernetes container image download |
+|`adhs.events.data.microsoft.com` |Required diagnostic data sent to Microsoft from control plane nodes|
+|`v20.events.data.microsoft.com` |Required diagnostic data sent to Microsoft from the Azure Stack HCI or Windows Server host|
+
+URLs used by other Arc agents:
+
+|Agent resource | Description |
+|||
+|`https://management.azure.com` |Azure Resource Manager|
+|`https://login.microsoftonline.com` |Azure Active Directory|
### Azure Arc resource bridge is unreachable
Azure Arc resource bridge must be configured for proxy so that it can connect to
There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the host and guest trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.
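To see which certificate your proxy actually presents, you can probe it with `openssl s_client` from the host. The proxy address in this sketch is a placeholder.

```shell
# Inspect the certificate your SSL proxy presents (placeholder host/port).
# Falls back to a message if the proxy is unreachable from this machine.
PROXY_HOST="myproxy.contoso.local"
PROXY_PORT="8443"
openssl s_client -connect "${PROXY_HOST}:${PROXY_PORT}" -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer 2>/dev/null \
  || echo "could not retrieve a certificate from ${PROXY_HOST}:${PROXY_PORT}"
```

If the subject and issuer printed here don't match the certificate you expect the host and guest to trust, that mismatch is a likely cause of SSL connection failures through the proxy.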
+### KVA timeout error
+
+Azure Arc resource bridge is a Kubernetes management cluster that is deployed in an appliance VM directly on the on-premises infrastructure. While trying to deploy Azure Arc resource bridge, a "KVA timeout error" may appear if there's a networking problem that prevents the Arc resource bridge appliance VM from communicating with the host, DNS, network, or internet. This error is typically displayed for the following reasons:
+
+- The appliance VM IP address doesn't have DNS resolution.
+- The appliance VM IP address doesn't have internet access to download the required image.
+- The host doesn't have routability to the appliance VM IP address.
+
+To resolve this error, ensure that all IP addresses assigned to the Arc resource bridge appliance VM can be resolved by DNS and have access to the internet, and that the host can successfully route to those IP addresses.
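The checks above can be scripted from the host. This is a rough triage sketch: the IP address is a placeholder for your appliance VM, and `ecpacr.azurecr.io` (from the URL table earlier) stands in for the required image-download endpoint.

```shell
# Triage sketch for the KVA timeout checks. The IP is a placeholder for your
# appliance VM address; each check prints ok/failed instead of aborting.
APPLIANCE_IP="192.168.0.10"

# DNS resolution for the appliance VM address
nslookup "${APPLIANCE_IP}" >/dev/null 2>&1 && echo "DNS: ok" || echo "DNS: check failed"

# Host -> appliance VM routability
ping -c 2 -W 2 "${APPLIANCE_IP}" >/dev/null 2>&1 && echo "route: ok" || echo "route: check failed"

# Outbound internet access for image download
curl -sI --max-time 10 https://ecpacr.azurecr.io >/dev/null 2>&1 && echo "internet: ok" || echo "internet: check failed"
```

Any "check failed" line points at the corresponding cause in the list above; fix that layer (DNS, routing, or outbound access) before redeploying.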
+ ## Azure-Arc enabled VMs on Azure Stack HCI issues For general help resolving issues related to Azure-Arc enabled VMs on Azure Stack HCI, see [Troubleshoot Azure Arc-enabled virtual machines](/azure-stack/hci/manage/troubleshoot-arc-enabled-vms).
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
Before starting, review the following requirements.
* System Center Operations Manager 2022 * System Center Operations Manager 2019 * System Center Operations Manager 2016
- * System Center Operations Manager 2012 SP1 UR6 or later
- * System Center Operations Manager 2012 R2 UR2 or later
-* Integrating System Center Operations Manager 2016 with US Government cloud requires the following:
+* Integrating System Center Operations Manager with US Government cloud requires the following:
* System Center Operations Manager 2022 * System Center Operations Manager 2019
- * System Center Operations Manager 2016 UR 2 or later
- * System Center Operations Manager 2012 R2 UR 3 or later
* All Operations Manager agents must meet minimum support requirements. Ensure that agents are at the minimum update, otherwise Windows agent communication may fail and generate errors in the Operations Manager event log. * A Log Analytics workspace. For further information, review [Log Analytics workspace overview](../logs/workspace-design.md). * You authenticate to Azure with an account that is a member of the [Log Analytics Contributor role](../logs/manage-access.md#azure-rbac).
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Title: Monitor Azure app services performance ASP.NET | Microsoft Docs description: Application performance monitoring for Azure app services using ASP.NET. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/05/2021 Last updated : 08/24/2022 ms.devlang: javascript
Enabling monitoring on your ASP.NET based web applications running on [Azure App
## Enable auto-instrumentation monitoring > [!NOTE]
-> The combination of APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression is not supported. For more info see the explanation in the [troubleshooting section](#appinsights_javascript_enabled-and-urlcompression-is-not-supported).
+> The combination of APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression is not supported. For more info see the explanation in the [troubleshooting section](#appinsights_javascript_enabled-and-urlcompression-isnt-supported).
1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**.
In order to enable telemetry collection with Application Insights, only the Appl
|App setting name | Definition | Value | |--|:|-:| |ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` |
-|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled in order to insure optimal performance. | `default` or `recommended`. |
+|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled in order to ensure optimal performance. | `default` or `recommended`. |
|InstrumentationEngine_EXTENSION_VERSION | Controls if the binary-rewrite engine `InstrumentationEngine` will be turned on. This setting has performance implications and impacts cold start/startup time. | `~1` |
-|XDT_MicrosoftApplicationInsights_BaseExtensions | Controls if SQL & Azure table text will be captured along with the dependency calls. Performance warning: application cold start up time will be affected. This setting requires the `InstrumentationEngine`. | `~1` |
+|XDT_MicrosoftApplicationInsights_BaseExtensions | Controls if SQL & Azure table text will be captured along with the dependency calls. Performance warning: application cold startup time will be affected. This setting requires the `InstrumentationEngine`. | `~1` |
[!INCLUDE [azure-web-apps-arm-automation](../../../includes/azure-monitor-app-insights-azure-web-apps-arm-automation.md)]
In order to enable telemetry collection with Application Insights, only the Appl
### Upgrade from versions 2.8.9 and up
-Upgrading from version 2.8.9 happens automatically, without any additional actions. The new monitoring bits are delivered in the background to the target app service, and on application restart they will be picked up.
+Upgrading from version 2.8.9 happens automatically, without any extra actions. The new monitoring bits are delivered in the background to the target app service, and on application restart they'll be picked up.
-To check which version of the extension you're running, go to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
+To check which version of the extension you're running, go to `https://scm.yoursitename.azurewebsites.net/ApplicationInsights`.
### Upgrade from versions 1.0.0 - 2.6.5
-Starting with version 2.8.9 the pre-installed site extension is used. If you are an earlier version, you can update via one of two ways:
+Starting with version 2.8.9 the pre-installed site extension is used. If you're an earlier version, you can update via one of two ways:
* [Upgrade by enabling via the portal](#enable-auto-instrumentation-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only **Enable** button. Behind the scenes, the old private site extension will be removed.)
If the upgrade is done from a version prior to 2.5.1, check that the Application
Below is our step-by-step troubleshooting guide for extension/agent based monitoring for ASP.NET based applications running on Azure App Services. 1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2".
-2. Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
+2. Browse to `https://scm.yoursitename.azurewebsites.net/ApplicationInsights`.
:::image type="content"source="./media/azure-web-apps/app-insights-sdk-status.png" alt-text="Screenshot of the link above results page."border ="false"::: - Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
- If it is not running, follow the [enable Application Insights monitoring instructions](#enable-auto-instrumentation-monitoring).
+ If it isn't running, follow the [enable Application Insights monitoring instructions](#enable-auto-instrumentation-monitoring).
- Confirm that the status source exists and looks like: `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`
- If a similar value is not present, it means the application is not currently running or is not supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
+ If a similar value isn't present, it means the application isn't currently running or isn't supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
- Confirm that `IKeyExists` is `true`
- If it is `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey guid to your application settings.
+ If not, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey guid to your application settings.
- Confirm that there are no entries for `AppAlreadyInstrumented`, `AppContainsDiagnosticSourceAssembly`, and `AppContainsAspNetTelemetryCorrelationAssembly`.
Below is our step-by-step troubleshooting guide for extension/agent based monito
#### Default website deployed with web apps does not support automatic client-side monitoring
-When you create a web app with the `ASP.NET` runtimes in Azure App Services it deploys a single static HTML page as a starter website. The static webpage also loads a ASP.NET managed web part in IIS. This allows for testing codeless server-side monitoring, but does not support automatic client-side monitoring.
+When you create a web app with the `ASP.NET` runtimes in Azure App Services, it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET managed web part in IIS. This allows for testing codeless server-side monitoring, but doesn't support automatic client-side monitoring.
-If you wish to test out codeless server and client-side monitoring for ASP.NET in an Azure App Services web app we recommend following the official guides for [creating an ASP.NET Framework web app](../../app-service/quickstart-dotnetcore.md?tabs=netframework48) and then use the instructions in the current article to enable monitoring.
+If you wish to test out codeless server and client-side monitoring for ASP.NET in an Azure App Services web app, we recommend following the official guides for [creating an ASP.NET Framework web app](../../app-service/quickstart-dotnetcore.md?tabs=netframework48) and then use the instructions in the current article to enable monitoring.
-### APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression is not supported
+### APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression isn't supported
If you use APPINSIGHTS_JAVASCRIPT_ENABLED=true in cases where content is encoded, you might get errors like: - 500 URL rewrite error-- 500.53 URL rewrite module error with message Outbound rewrite rules cannot be applied when the content of the HTTP response is encoded ('gzip').
+- 500.53 URL rewrite module error with message Outbound rewrite rules can't be applied when the content of the HTTP response is encoded ('gzip').
-This is due to the APPINSIGHTS_JAVASCRIPT_ENABLED application setting being set to true and content-encoding being present at the same time. This scenario is not supported yet. The workaround is to remove APPINSIGHTS_JAVASCRIPT_ENABLED from your application settings. Unfortunately this means that if client/browser-side JavaScript instrumentation is still required, manual SDK references are needed for your webpages. Follow the [instructions](https://github.com/Microsoft/ApplicationInsights-JS#snippet-setup-ignore-if-using-npm-setup) for manual instrumentation with the JavaScript SDK.
+This is due to the APPINSIGHTS_JAVASCRIPT_ENABLED application setting being set to true and content-encoding being present at the same time. This scenario isn't supported yet. The workaround is to remove APPINSIGHTS_JAVASCRIPT_ENABLED from your application settings. Unfortunately this means that if client/browser-side JavaScript instrumentation is still required, manual SDK references are needed for your webpages. Follow the [instructions](https://github.com/Microsoft/ApplicationInsights-JS#snippet-setup-ignore-if-using-npm-setup) for manual instrumentation with the JavaScript SDK.
For the latest information on the Application Insights agent/extension, check out the [release notes](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/app-insights-web-app-extensions-releasenotes.md).
The table below provides a more detailed explanation of what these values mean,
|`AppAlreadyInstrumented:true` | This value can also be caused by the presence of the above dlls in the app folder from a previous deployment. | Clean the app folder to ensure that these dlls are removed. Check both your local app's bin directory, and the wwwroot directory on the App Service. (To check the wwwroot directory of your App Service web app: Advanced Tools (Kudu) > Debug console > CMD > home\site\wwwroot). |`AppContainsAspNetTelemetryCorrelationAssembly: true` | This value indicates that extension detected references to `Microsoft.AspNet.TelemetryCorrelation` in the application, and will back-off. | Remove the reference. |`AppContainsDiagnosticSourceAssembly**:true`|This value indicates that extension detected references to `System.Diagnostics.DiagnosticSource` in the application, and will back-off.| For ASP.NET remove the reference.
-|`IKeyExists:false`|This value indicates that the instrumentation key is not present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: The values may have been accidentally removed, forgot to set the values in automation script, etc. | Make sure the setting is present in the App Service application settings.
+|`IKeyExists:false`|This value indicates that the instrumentation key isn't present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: the value was accidentally removed, or wasn't set in the automation script. | Make sure the setting is present in the App Service application settings.
## Release notes
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn about the steps required to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 08/22/2022 Last updated : 08/23/2022
This section walks through migrating a classic Application Insights resource to
![Migrate resource button](./media/convert-classic-resource/migrate.png)
-3. Choose the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription, or in a different subscription that shares the same Azure AD tenant. The Log Analytics workspace does not have to be in the same resource group as the Application Insights resource.
+3. Choose the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription, or in a different subscription that shares the same Azure AD tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource.
> [!NOTE] > Migrating to a workspace-based resource can take up to 24 hours, but is usually faster than that. Please rely on accessing data through your Application Insights resource while waiting for the migration process to complete. Once completed, you will start seeing new data stored in the Log Analytics workspace tables.
Once a workspace-based Application Insights resource has been created, you can m
From within the Application Insights resource pane, select **Properties** > **Change Workspace** > **Log Analytics Workspaces**.
+## Frequently asked questions
+
+### Is there any implication on the cost from migration?
+
+There's usually no difference, with a couple of exceptions.
+
+ - Migrated Application Insights resources can use [Log Analytics Commitment Tiers](../logs/cost-logs.md#commitment-tiers) to reduce cost if the data volumes in the workspace are high enough.
+ - Grandfathered Application Insights resources will no longer get 1 GB per month free from the original Application Insights pricing model.
+
+### How will telemetry capping work?
+
+You can set a [daily cap on the Log Analytics workspace](../logs/daily-cap.md#application-insights).
+
+There's no strict billing cap available.
+
+### How will ingestion-based sampling work?
+
+There are no changes to ingestion-based sampling.
+
+### Will there be any gap in data collected during migration?
+
+No. We merge data during query time.
+
+### Will my old logs queries continue to work?
+
+Yes, they'll continue to work.
+
+### Will my dashboards that have pinned metric and log charts continue to work after migration?
+
+Yes, they'll continue to work.
+
+### Will migration affect API access to Application Insights data?
+
+No, migration won't impact existing API access to data. After migration, you'll be able to access data directly from a workspace using a [slightly different schema](#workspace-based-resource-changes).
+
+### Will there be any impact on Live Metrics or other monitoring experiences?
+
+No, there's no impact to [Live Metrics](live-stream.md#live-metrics-monitor--diagnose-with-1-second-latency) or other monitoring experiences.
+
+### What happens with Continuous export after migration?
+
+Continuous export doesn't support workspace-based resources.
+
+You'll need to switch to [Diagnostic Settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
+ ## Troubleshooting ### Access mode
The legacy continuous export functionality isn't supported for workspace-based r
### Retention settings
-**Warning Message:** *Your customized Application Insights retention settings will not apply to data sent to the workspace. You'll need to reconfigure these separately.*
+**Warning Message:** *Your customized Application Insights retention settings won't apply to data sent to the workspace. You'll need to reconfigure these separately.*
You don't have to make any changes prior to migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you may want to modify the retention settings for your Log Analytics workspace prior to migrating and starting to ingest new data.
You can check your current retention settings for Log Analytics under **General*
## Workspace-based resource changes
-Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separate from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This simplifies your configuration by allowing you to analyze data across multiple solutions more easily, and to leverage the capabilities of workspaces.
+Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separate from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This simplifies your configuration by allowing you to analyze data across multiple solutions more easily, and to use the capabilities of workspaces.
### Classic data structure
-The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data is not stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
+The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data isn't stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
> [!NOTE] > The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](#table-structure), you must first go to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
The structure of a Log Analytics workspace is described in [Log Analytics worksp
The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries using legacy tables.
-Most of the columns have the same name with different capitalization. Since KQL is case-sensitive, you will need to change each column name along with the table names in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it is a workspace-based resource. The new property names are required for when querying from within the context of the Log Analytics workspace experience.
+Most of the columns have the same name with different capitalization. Since KQL is case-sensitive, you'll need to change each column name along with the table names in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it's a workspace-based resource. The new property names are required for when querying from within the context of the Log Analytics workspace experience.
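For example, a query over the classic `requests` table maps to the workspace-based `AppRequests` table with Pascal-cased columns. This is a representative pair, not the full mapping; see the tables below for the complete column lists.

```kusto
// Classic Application Insights resource
requests
| where success == false
| summarize count() by resultCode

// Same query against the workspace-based tables
AppRequests
| where Success == false
| summarize count() by ResultCode
```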
#### AppAvailabilityResults
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
-# Troubleshoot Azure Monitor's Change Analysis (preview)
+# Troubleshoot Azure Monitor's Change Analysis
## Trouble registering the Microsoft.ChangeAnalysis resource provider from the Change history tab
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
-# Visualizations for Change Analysis in Azure Monitor (preview)
+# Visualizations for Change Analysis in Azure Monitor
Change Analysis provides data for various management and troubleshooting scenarios to help you understand what changes to your application might have caused the issues. You can view the Change Analysis data through several channels:
You can view change data via the **Web App Down** and **Application Crashes** de
- The change types over time.
- Details on those changes.
-By default, the graph displays changes from within the past 24 hours help with immediate problems.
-
+By default, the graph displays changes from within the past 24 hours to help with immediate problems.
### Diagnose and solve problems tool for Virtual Machines
Use the [View change history](../essentials/activity-log.md#view-change-history)
1. From within your resource, select **Activity Log** from the side menu.
1. Select a change from the list.
-1. Select the **Change history (Preview)** tab.
-1. For the Azure Monitor Change Analysis service to scan for changes in users' subscriptions, a resource provider needs to be registered. Upon selecting the **Change history (Preview)** tab, the tool will automatically register **Microsoft.ChangeAnalysis** resource provider.
+1. Select the **Change history** tab.
+1. For the Azure Monitor Change Analysis service to scan for changes in users' subscriptions, a resource provider needs to be registered. Upon selecting the **Change history** tab, the tool will automatically register **Microsoft.ChangeAnalysis** resource provider.
1. Once registered, you can view changes from **Azure Resource Graph** immediately from the past 14 days.
   - Changes from other sources will be available ~4 hours after the subscription is onboarded.
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
ms.contributor: cawa Previously updated : 07/29/2022 Last updated : 08/23/2022
-# Use Change Analysis in Azure Monitor (preview)
+# Use Change Analysis in Azure Monitor
While standard monitoring solutions might alert you to a live site issue, outage, or component failure, they often don't explain the cause. For example, your site worked five minutes ago, and now it's broken. What changed in the last five minutes?
Building on the power of [Azure Resource Graph](../../governance/resource-graph/
- Increases observability. - Reduces mean time to repair (MTTR).
-> [!IMPORTANT]
-> Change Analysis is currently in preview. This version:
->
-> - Is provided without a service-level agreement.
-> - Is not recommended for production workloads.
-> - Includes unsupported features and might have constrained capabilities.
->
-> For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
> [!NOTE]
> Change Analysis is currently only available in Public Azure Cloud.
Azure Monitor Change Analysis service supports resource property level changes i
## Data sources

Azure Monitor's Change Analysis queries for:
-- Azure Resource Manager resource properties.
-- Configuration changes.
-- Web app in-guest changes.
+- [Azure Resource Manager resource properties.](#azure-resource-manager-resource-properties-changes)
+- [Resource configuration changes.](#resource-configuration-changes)
+- [App Service Function and Web App in-guest changes.](#changes-in-azure-app-services-function-and-web-apps-in-guest-changes)
-Change Analysis also tracks resource dependency changes to diagnose and monitor an application end-to-end.
+Change Analysis also tracks [resource dependency changes](#dependency-changes) to diagnose and monitor an application end-to-end.
### Azure Resource Manager resource properties changes
-Using [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis provides a historical record of how the Azure resources that host your application have changed over time. The following tracked settings can be detected:
+Using [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis provides a historical record of how the Azure resources that host your application have changed over time. The following basic configuration settings are set using Azure Resource Manager and tracked by Azure Resource Graph:
- Managed identities
- Platform OS upgrade
- Hostnames
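To illustrate, the property changes that Azure Resource Graph records can also be queried directly (a sketch using the documented `resourcechanges` table):

```kusto
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
         changeType = tostring(properties.changeType)
| where changeTime > ago(1d)
| project changeTime, changeType, resourceId = tostring(properties.targetResourceId)
| order by changeTime desc
```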
-### Azure Resource Manager configuration changes
+### Resource configuration changes
-Unlike Azure Resource Graph, Change Analysis securely queries and computes IP Configuration rules, TLS settings, and extension versions to provide more change details in the app.
+In addition to the settings set via Azure Resource Manager, you can set configuration settings using tools such as the CLI or Bicep. Examples include:
+- IP Configuration rules
+- TLS settings
+- Extension versions
-### Changes in web app deployment and configuration (in-guest changes)
+These setting changes are not captured by Azure Resource Graph. Change Analysis fills this gap by capturing snapshots of changes in those main configuration properties, like changes to the connection string. Snapshots of configuration changes and change details are captured up to every 6 hours. [See known limitations.](#limitations)
-Every 30 minutes, Change Analysis captures the deployment and configuration state of an application. For example, it can detect changes in the application environment variables. The tool computes the differences and presents the changes.
+### Changes in Azure App Services Function and Web Apps (in-guest changes)
-Unlike Azure Resource Manager changes, code deployment change information might not be available immediately in the Change Analysis tool. To view the latest changes in Change Analysis, select **Refresh**.
+Every 30 minutes, Change Analysis captures the configuration state of a web application. For example, it can detect changes in the application environment variables, configuration files, and WebJobs. The tool computes the differences and presents the changes.
-If you don't see changes within 30 minutes, refer to [our troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+If you don't see file changes within 30 minutes or configuration changes within 6 hours, refer to [our troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app). [See known limitations.](#limitations)
Currently, all text-based files under site root **wwwroot** with the following extensions are supported:
+
- *.json
- *.xml
- *.ini
Changes to resource dependencies can also cause issues in a resource. For exampl
As another example, if port 22 is closed in a virtual machine's Network Security Group, it will cause connectivity errors.
-#### Web App diagnose and solve problems navigator (Preview)
+#### Web App diagnose and solve problems navigator (preview)
To detect changes in dependencies, Change Analysis checks the web app's DNS record. In this way, it identifies changes in all app components that could cause issues.
-Currently the following dependencies are supported in **Web App Diagnose and solve problems | Navigator (Preview)**:
+Currently the following dependencies are supported in **Web App Diagnose and solve problems | Navigator**:
- Web Apps
- Azure Storage
- Azure SQL
-#### Related resources
-
-Change Analysis detects related resources. Common examples are:
-
-- Network Security Group
-- Virtual Network
-- Azure Monitor Gateway
-- Load Balancer related to a Virtual Machine.
-
-Network resources are usually provisioned in the same resource group as the resources using it. Filter the changes by resource group to show all changes for the virtual machine and its related networking resources.
+## Limitations
+- **OS environment**: For Azure App Services Function and Web App in-guest changes, Change Analysis currently only works with Windows environments, not Linux.
+- **Web app deployment changes**: Code deployment change information might not be available immediately in the Change Analysis tool. To view the latest changes in Change Analysis, select **Refresh**.
+- **App Services file changes**: File changes take up to 30 minutes to display.
+- **App Services configuration changes**: Due to the snapshot approach to configuration changes, timestamps of configuration changes could take up to 6 hours to display from when the change actually happened.
## Next steps
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
The [Azure Activity log](essentials/platform-logs-overview.md) includes service
| Destination | Description | Reference |
| -- | -- | -- |
| Azure Resource Manager control plane changes | Change Analysis provides a historical record of how the Azure resources that host your application have changed over time, using Azure Resource Graph. | [Resources \| Get Changes](../governance/resource-graph/how-to/get-resource-changes.md) |
-| Resource configurations and settings changes | Change Analysis securely queries and computes IP Configuration rules, TLS settings, and extension versions to provide more change details in the app. | [Azure Resource Manager configuration changes](./change/change-analysis.md#azure-resource-manager-configuration-changes) |
+| Resource configurations and settings changes | Change Analysis securely queries and computes IP Configuration rules, TLS settings, and extension versions to provide more change details in the app. | [Azure Resource Manager configuration changes](./change/change-analysis.md#azure-resource-manager-resource-properties-changes) |
| Web app in-guest changes | Every 30 minutes, Change Analysis captures the deployment and configuration state of an application. | [Diagnose and solve problems tool for Web App](./change/change-analysis-visualizations.md#diagnose-and-solve-problems-tool-for-web-app) |

## Azure resources
azure-monitor Activity Log Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log-insights.md
+
+ Title: Azure activity log insights
+description: Learn how to monitor changes to resources and resource groups in an Azure subscription with Azure Monitor activity log insights.
+++ Last updated : 08/24/2022+++
+#customer-intent: As an IT manager, I want to understand how I can use activity log insights to monitor changes to resources and resource groups in an Azure subscription.
++
+# Monitor changes to resources and resource groups with Azure Monitor activity log insights
+
+Activity log insights provide you with a set of dashboards that monitor the changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to onboard and view activity log insights in the Azure portal.
+
+Before you use activity log insights, you must [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
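As a sketch of that prerequisite step (this assumes the `az monitor diagnostic-settings subscription create` command; the names, location, and categories below are placeholders, so verify the exact parameters against the CLI reference):

```azurecli
# Sketch: route subscription activity logs to a Log Analytics workspace.
# Subscription ID, resource group, and workspace name are placeholders.
az monitor diagnostic-settings subscription create \
  --name "ExportActivityLog" \
  --location "westus2" \
  --workspace "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.OperationalInsights/workspaces/{workspace-name}" \
  --logs '[{"category": "Administrative", "enabled": true}, {"category": "Security", "enabled": true}]'
```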
+
+## How do activity log insights work?
+
+Azure Monitor stores all activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) in a table called `AzureActivity`.
+
+Activity log insights are a curated [Log Analytics workbook](../visualize/workbooks-overview.md) with dashboards that visualize the data in the `AzureActivity` table. For example, data might include which administrators deleted, updated, or created resources and whether the activities failed or succeeded.
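For example, a query similar to what the workbook charts might aggregate (a minimal sketch over documented `AzureActivity` columns):

```kusto
AzureActivity
| where TimeGenerated > ago(24h)
| summarize Count = count() by CategoryValue, ActivityStatusValue
| order by Count desc
```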
++
+## View resource group or subscription-level activity log insights
+
+To view activity log insights at the resource group or subscription level:
+
+1. In the Azure portal, select **Monitor** > **Workbooks**.
+1. In the **Insights** section, select **Activity Logs Insights**.
+
+ :::image type="content" source="media/activity-log/open-activity-log-insights-workbook.png" lightbox= "media/activity-log/open-activity-log-insights-workbook.png" alt-text="Screenshot that shows how to locate and open the Activity Logs Insights workbook on a scale level.":::
+
+1. At the top of the **Activity Logs Insights** page, select:
+
+ 1. One or more subscriptions from the **Subscriptions** dropdown.
+ 1. Resources and resource groups from the **CurrentResource** dropdown.
+ 1. A time range for which to view data from the **TimeRange** dropdown.
+
+## View resource-level activity log insights
+
+> [!NOTE]
+> Activity log insights does not currently support Application Insights resources.
+
+To view activity log insights at the resource level:
+
+1. In the Azure portal, go to your resource and select **Workbooks**.
+1. In the **Activity Logs Insights** section, select **Activity Logs Insights**.
+
+ :::image type="content" source="media/activity-log/activity-log-resource-level.png" lightbox= "media/activity-log/activity-log-resource-level.png" alt-text="Screenshot that shows how to locate and open the Activity Logs Insights workbook on a resource level.":::
+
+1. At the top of the **Activity Logs Insights** page, select a time range for which to view data from the **TimeRange** dropdown:
+
+ * **Azure Activity Log Entries** shows the count of activity log records in each activity log category.
+
+ :::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Screenshot that shows Azure activity logs by category value.":::
+
+ * **Activity Logs by Status** shows the count of activity log records in each status.
+
+ :::image type="content" source="media/activity-log/activity-logs-insights-status.png" lightbox= "media/activity-log/activity-logs-insights-status.png" alt-text="Screenshot that shows Azure activity logs by status.":::
+
+ * At the subscription and resource group level, **Activity Logs by Resource** and **Activity Logs by Resource Provider** show the count of activity log records for each resource and resource provider.
+
+ :::image type="content" source="media/activity-log/activity-logs-insights-resource.png" lightbox= "media/activity-log/activity-logs-insights-resource.png" alt-text="Screenshot that shows Azure activity logs by resource.":::
+
+## Next steps
+
+Learn more about:
+
+* [Activity logs](./activity-log.md)
+* [The activity log event schema](activity-log-schema.md)
+
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Title: Azure activity log description: View the Azure Monitor activity log and send it to Azure Monitor Logs, Azure Event Hubs, and Azure Storage.-+ Last updated 07/01/2022--++ # Azure Monitor activity log
Each event is stored in the PT1H.json file with the following format. This forma
## Legacy collection methods
+If you're collecting activity logs using the legacy collection method, we recommend you [export activity logs to your Log Analytics workspace](#send-to-log-analytics-workspace) and disable the legacy collection using the [Data Sources - Delete API](/rest/api/loganalytics/data-sources/delete?tabs=HTTP) as follows:
+
+1. List all data sources connected to the workspace using the [Data Sources - List By Workspace API](/rest/api/loganalytics/data-sources/list-by-workspace?tabs=HTTP#code-try-0) and filter for activity logs by setting `filter=kind='AzureActivityLog'`.
+
+ :::image type="content" source="media/activity-log/data-sources-list-by-workspace-api.png" alt-text="Screenshot showing the configuration of the Data Sources - List By Workspace API." lightbox="media/activity-log/data-sources-list-by-workspace-api.png":::
+
+1. Copy the name of the connection you want to disable from the API response.
+
+ :::image type="content" source="media/activity-log/data-sources-list-by-workspace-api-connection.png" alt-text="Screenshot showing the connection information you need to copy from the output of the Data Sources - List By Workspace API." lightbox="media/activity-log/data-sources-list-by-workspace-api-connection.png":::
+
+1. Use the [Data Sources - Delete API](/rest/api/loganalytics/data-sources/delete?tabs=HTTP) to stop collecting activity logs for the specific resource.
+
+ :::image type="content" source="media/activity-log/data-sources-delete-api.png" alt-text="Screenshot of the configuration of the Data Sources - Delete API." lightbox="media/activity-log/data-sources-delete-api.png":::
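The two API calls above can also be made with `az rest` (a sketch; the subscription, resource group, workspace, and data-source names are placeholders, the filter follows the syntax shown in step 1, and the API version is assumed to be `2020-08-01`):

```azurecli
# List data sources of kind AzureActivityLog connected to the workspace (placeholder names).
# The \$ escape keeps the shell from expanding $filter.
az rest --method get \
  --url "https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.OperationalInsights/workspaces/{workspace-name}/dataSources?\$filter=kind='AzureActivityLog'&api-version=2020-08-01"

# Delete a connection, using the name copied from the response above.
az rest --method delete \
  --url "https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.OperationalInsights/workspaces/{workspace-name}/dataSources/{data-source-name}?api-version=2020-08-01"
```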
+### Managing legacy log profiles
+ Log profiles are the legacy method for sending the activity log to storage or event hubs. If you're using this method, consider transitioning to diagnostic settings, which provide better functionality and consistency with resource logs.
-### [PowerShell](#tab/powershell)
+#### [PowerShell](#tab/powershell)
If a log profile already exists, you first must remove the existing log profile and then create a new one.
This sample PowerShell script creates a log profile that writes the activity log
Add-AzLogProfile -Name $logProfileName -Location $locations -StorageAccountId $storageAccountId -ServiceBusRuleId $serviceBusRuleId ```
-### [CLI](#tab/cli)
+#### [CLI](#tab/cli)
If a log profile already exists, you first must remove the existing log profile and then create a log profile.
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
Title: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger description: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger+++
+reviewer: cweining
Previously updated : 01/14/2021- Last updated : 08/18/2022+ # Configure Bring Your Own Storage (BYOS) for Application Insights Profiler and Snapshot Debugger
-## What is Bring Your Own Storage (BYOS) and why might I need it?
-When you use Application Insights Profiler or Snapshot Debugger, artifacts generated by your application are uploaded into Azure storage accounts over the public Internet. Those accounts are paid and controlled by Microsoft for processing and analysis. Microsoft controls the encryption-at-rest and lifetime management policies for those artifacts.
+## What is Bring Your Own Storage (BYOS) and why might I need it?
-With Bring Your Own Storage, these artifacts are uploaded into a storage account that you control. That means you control the encryption-at-rest policy, the lifetime management policy and network access. You will, however, be responsible for the costs associated with that storage account.
+When you use Application Insights Profiler or Snapshot Debugger, artifacts generated by your application are uploaded into Azure storage accounts over the public Internet. For these artifacts and storage accounts, Microsoft controls and covers the cost for:
+
+* Processing and analysis.
+* Encryption-at-rest and lifetime management policies.
+
+When you configure Bring Your Own Storage (BYOS), artifacts are uploaded into a storage account that you control. That means you control and are responsible for the cost of:
+
+* The encryption-at-rest policy and the lifetime management policy.
+* Network access.
> [!NOTE]
-> If you are enabling Private Link, Bring Your Own Storage is a requirement. For more information about Private Link for Application Insights, [see the documentation.](../logs/private-link-security.md)
->
-> If you are enabling Customer-Managed Keys, Bring Your Own Storage is a requirement. For more information about Customer-Managed Keys for Application Insights, [see the documentation.](../logs/customer-managed-keys.md).
+> BYOS is required if you are enabling Private Link or Customer-Managed Keys.
+
+> * [Learn more about Private Link for Application Insights](../logs/private-link-security.md).
+> * [Learn more about Customer-Managed Keys for Application Insights](../logs/customer-managed-keys.md).
## How will my storage account be accessed?
-1. Agents running in your Virtual Machines or App Service will upload artifacts (profiles, snapshots, and symbols) to blob containers in your account. This process involves contacting the Application Insights Profiler or Snapshot Debugger service to obtain a SAS (Shared Access Signature) token to a new blob in your storage account.
-1. The Application Insights Profiler or Snapshot Debugger service will analyze the incoming blob and write back the analysis results and log files into blob storage. Depending on available compute capacity, this process may occur anytime after upload.
-1. When you view the profiler traces, or snapshot debugger analysis, the service will fetch the analysis results from blob storage.
+
+1. Agents running in your Virtual Machines or App Service will upload artifacts (profiles, snapshots, and symbols) to blob containers in your account.
+
+ This process involves contacting the Profiler or Snapshot Debugger service to obtain a Shared Access Signature (SAS) token to a new blob in your storage account.
+
+1. The Profiler or Snapshot Debugger service will:
+
+ 1. Analyze the incoming blob.
+ 1. Write back the analysis results and log files into blob storage.
+
+ Depending on available compute capacity, this process may occur anytime after upload.
+
+1. When you view the Profiler traces or Snapshot Debugger analysis, the service fetches the analysis results from blob storage.
## Prerequisites
-* Make sure to create your Storage Account in the same location as your Application Insights Resource. Ex. If your Application Insights resource is in West US 2, your Storage Account must be also in West US 2.
-* Grant the "Storage Blob Data Contributor" role to the AAD application "Diagnostic Services Trusted Storage Access" in your storage account via the Access Control (IAM) UI.
-* If Private Link enabled, configure the additional setting to allow connection to our Trusted Microsoft Service from your Virtual Network.
-## How to enable BYOS
+* Create your Storage Account in the same location as your Application Insights resource.
-### Create Storage Account
-Create a brand-new Storage Account (if you don't have it) on the same location as your Application Insights resource.
-If your Application Insights resource it's on `West US 2`, then, your Storage Account must be in `West US 2`.
+ For example, if your Application Insights resource is in West US 2, your Storage Account must also be in West US 2.
+
+* Grant the `Storage Blob Data Contributor` role to the Azure AD application named `Diagnostic Services Trusted Storage Access` via the [Access Control (IAM)](../../role-based-access-control/role-assignments-portal.md) page in your storage account.
+* If Private Link is enabled, allow connection to our Trusted Microsoft Service from your virtual network.
+
+## Enable BYOS
### Grant Access to Diagnostic Services to your Storage Account
+
A BYOS storage account will be linked to an Application Insights resource. There may be only one storage account per Application Insights resource and both must be in the same location. You may use the same storage account with more than one Application Insights resource.
-First, the Application Insights Profiler, and Snapshot Debugger service needs to be granted access to the storage account. To grant access, add the role `Storage Blob Data Contributor` to the AAD application named `Diagnostic Services Trusted Storage Access` via the Access Control (IAM) page in your storage account as shown in Figure 1.0.
+First, the Application Insights Profiler and Snapshot Debugger services need to be granted access to the storage account. To grant access, add the role `Storage Blob Data Contributor` to the Azure AD application named `Diagnostic Services Trusted Storage Access` via the Access Control (IAM) page in your storage account, as shown in Figure 1.0.
-Steps:
+Steps:
1. Select **Access control (IAM)**.
1. Select **Add** > **Add role assignment** to open the Add role assignment page.
1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
+
| Setting | Value |
| -- | -- |
| Role | Storage Blob Data Contributor |
| Assign access to | User, group, or service principal |
| Members | Diagnostic Services Trusted Storage Access |
- ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="media/profiler-bring-your-own-storage/add-role-assignment-page.png" alt-text="Screenshot showing how to add role assignment page in Azure portal.":::
+ *Figure 1.0*
-After you added the role, it will appear under the "Role assignments" section, like the below Figure 1.1.
-_![Figure 1.1](media/profiler-bring-your-own-storage/figure-11.png)_
-_Figure 1.1_
+After you add the role, it appears under the **Role assignments** section, as shown in Figure 1.1 below.
+ :::image type="content" source="media/profiler-bring-your-own-storage/figure-11.png" alt-text="Screenshot showing the IAM screen after Role assignments.":::
+ *Figure 1.1*
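Alternatively, the same role assignment can be scripted with the Azure CLI (a sketch; the subscription, resource group, and storage account names are placeholders, and the service principal is looked up by its display name):

```azurecli
# Look up the object ID of the Diagnostic Services Trusted Storage Access service principal.
spId=$(az ad sp list --display-name "Diagnostic Services Trusted Storage Access" --query "[0].id" -o tsv)

# Grant Storage Blob Data Contributor on the storage account (placeholder scope).
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee-object-id "$spId" \
  --assignee-principal-type ServicePrincipal \
  --scope "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Storage/storageAccounts/{storage-account-name}"
```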
-If you're also using Private Link, it's required one additional configuration to allow connection to our Trusted Microsoft Service from your Virtual Network. Refer to the [Storage Network Security documentation](../../storage/common/storage-network-security.md#trusted-microsoft-services).
+If you're also using Private Link, one more configuration is required to allow connection to our Trusted Microsoft Service from your Virtual Network. For more information, see the [Storage Network Security documentation](../../storage/common/storage-network-security.md#trusted-microsoft-services).
### Link your Storage Account with your Application Insights resource
+
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are three options:
-* Using Azure PowerShell cmdlets
-* Using the Azure CLI
-* Using Azure Resource Manager templates
+* Using Azure PowerShell cmdlets.
+* Using the Azure CLI.
+* Using Azure Resource Manager templates.
-#### Configure using Azure PowerShell Cmdlets
+#### [PowerShell](#tab/azure-powershell)
1. Make sure you have installed Az PowerShell 4.2.0 or greater. To install Azure PowerShell, refer to the [Official Azure PowerShell documentation](/powershell/azure/install-az-ps).
1. Install the Application Insights PowerShell extension.
+
    ```powershell
    Install-Module -Name Az.ApplicationInsights -Force
    ```
-1. Sign in with your Azure Account
+1. Sign in with your Azure account, specifying your subscription.
+
    ```powershell
    Connect-AzAccount -Subscription "{subscription_id}"
    ```
- For more info of how to sign in, refer to the [Connect-AzAccount documentation](/powershell/module/az.accounts/connect-azaccount).
+ For more information on how to sign in, refer to the [Connect-AzAccount documentation](/powershell/module/az.accounts/connect-azaccount).
1. Remove the previous Storage Account linked to your Application Insights resource.

    Pattern:
+
    ```powershell
    $appInsights = Get-AzApplicationInsights -ResourceGroupName "{resource_group_name}" -Name "{application_insights_name}"
    Remove-AzApplicationInsightsLinkedStorageAccount -ResourceId $appInsights.Id
    ```

    Example:
+
    ```powershell
    $appInsights = Get-AzApplicationInsights -ResourceGroupName "byos-test" -Name "byos-test-westus2-ai"
    Remove-AzApplicationInsightsLinkedStorageAccount -ResourceId $appInsights.Id
    ```

1. Connect your Storage Account with your Application Insights resource.
-
+
    Pattern:
+
    ```powershell
    $storageAccount = Get-AzStorageAccount -ResourceGroupName "{resource_group_name}" -Name "{storage_account_name}"
    $appInsights = Get-AzApplicationInsights -ResourceGroupName "{resource_group_name}" -Name "{application_insights_name}"
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
    ```

    Example:
+
    ```powershell
    $storageAccount = Get-AzStorageAccount -ResourceGroupName "byos-test" -Name "byosteststoragewestus2"
    $appInsights = Get-AzApplicationInsights -ResourceGroupName "byos-test" -Name "byos-test-westus2-ai"
    New-AzApplicationInsightsLinkedStorageAccount -ResourceId $appInsights.Id -LinkedStorageAccountResourceId $storageAccount.Id
    ```
-#### Configure using Azure CLI
+#### [Azure CLI](#tab/azure-cli)
1. Make sure you have installed Azure CLI. To install Azure CLI, refer to the [Official Azure CLI documentation](/cli/azure/install-azure-cli).
1. Install the Application Insights CLI extension.
+
    ```azurecli
    az extension add -n application-insights
    ```
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
1. Connect your Storage Account with your Application Insights resource.

    Pattern:
+
    ```azurecli
    az monitor app-insights component linked-storage link --resource-group "{resource_group_name}" --app "{application_insights_name}" --storage-account "{storage_account_name}"
    ```
-
+
    Example:
+
    ```azurecli
    az monitor app-insights component linked-storage link --resource-group "byos-test" --app "byos-test-westus2-ai" --storage-account "byosteststoragewestus2"
    ```
-
+
Expected output:+ ```powershell { "id": "/subscriptions/{subscription}/resourcegroups/byos-test/providers/microsoft.insights/components/byos-test-westus2-ai/linkedstorageaccounts/serviceprofiler",
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
> [!NOTE]
> To update the Storage Accounts linked to your Application Insights resource, refer to the [Application Insights CLI documentation](/cli/azure/monitor/app-insights/component/linked-storage).
-#### Configure using Azure Resource Manager template
+#### [Resource Manager Template](#tab/azure-resource-manager)
+
+1. Create an Azure Resource Manager template file with the following content (*byos.template.json*):
-1. Create an Azure Resource Manager template file with the following content (byos.template.json).
```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
} ```
-1. Run the following PowerShell command to deploy previous template (create Linked Storage Account).
+1. Run the following PowerShell command to deploy the above template:
+
+ Syntax:
- Pattern:
    ```powershell
    New-AzResourceGroupDeployment -ResourceGroupName "{your_resource_name}" -TemplateFile "{local_path_to_arm_template}"
    ```

    Example:
+
    ```powershell
    New-AzResourceGroupDeployment -ResourceGroupName "byos-test" -TemplateFile "D:\Docs\byos.template.json"
    ```

1. Provide the following parameters when prompted in the PowerShell console:
-
+
| Parameter | Description |
|-|--|
- | application_insights_name | The name of the Application Insights resource to enable BYOS. |
- | storage_account_name | The name of the Storage Account resource that you'll use as your BYOS. |
-
+ | `application_insights_name` | The name of the Application Insights resource to enable BYOS. |
+ | `storage_account_name` | The name of the Storage Account resource that you'll use as your BYOS. |
+
Expected output:+ ```powershell Supply values for the following parameters: (Type !? for Help.)
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
storage_account_name: byosteststoragewestus2

DeploymentName          : byos.template
- ResourceGroupName : byos-test
+ ResourceGroupName : byos-test
ProvisioningState       : Succeeded
Timestamp               : 4/16/2020 1:24:57 AM
Mode                    : Incremental
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
DeploymentDebugLogLevel : ```
-1. Enable code-level diagnostics (Profiler/Debugger) on the workload of interest through the Azure portal. (App Service > Application Insights)
-_![Figure 2.0](media/profiler-bring-your-own-storage/figure-20.png)_
-_Figure 2.0_
+1. Enable code-level diagnostics (Profiler/Debugger) on the workload of interest through the Azure portal. In this example, **App Service** > **Application Insights**.
+
+ :::image type="content" source="media/profiler-bring-your-own-storage/figure-20.png" alt-text="Screenshot showing the code level diagnostics on Azure portal.":::
+ *Figure 2.0*
+
+## Troubleshoot
+
+### Template schema '{schema_uri}' isn't supported
-## Troubleshooting
-### Template schema '{schema_uri}' isn't supported.
* Make sure that the `$schema` property of the template is valid. It must follow the following pattern: `https://schema.management.azure.com/schemas/{schema_version}/deploymentTemplate.json#` * Make sure that the `schema_version` of the template is within valid values: `2014-04-01-preview, 2015-01-01, 2018-05-01, 2019-04-01, 2019-08-01`. Error message:+ ```powershell New-AzResourceGroupDeployment : 11:53:49 AM - Error: Code=InvalidTemplate; Message=Deployment template validation failed: 'Template schema 'https://schema.management.azure.com/schemas/2020-01-01/deploymentTemplate.json#' is not supported. Supported versions are '2014-04-01-preview,2015-01-01,2018-05-01,2019-04-01,2019-08-01'. Please see https://aka.ms/arm-template for usage details.'. ```
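For reference, a minimal template header that follows the required `$schema` pattern looks like the following (the schema version shown is one of the supported values listed above):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": []
}
```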
-### No registered resource provider found for location '{location}'.
+### No registered resource provider found for location '{location}'
+ * Make sure that the `apiVersion` of the resource `microsoft.insights/components` is `2015-05-01`. * Make sure that the `apiVersion` of the resource `linkedStorageAccount` is `2020-03-01-preview`. Error message:+ ```powershell New-AzResourceGroupDeployment : 6:18:03 PM - Resource microsoft.insights/components 'byos-test-westus2-ai' failed with message '{ "error": {
_Figure 2.0_
} }' ```
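As a sketch of how those `apiVersion` values fit together, the two resources would be declared as follows. Names are illustrative, and the full template earlier in this article includes additional required properties:

```json
[
  {
    "type": "microsoft.insights/components",
    "apiVersion": "2015-05-01",
    "name": "[parameters('application_insights_name')]",
    "location": "[resourceGroup().location]"
  },
  {
    "type": "microsoft.insights/components/linkedStorageAccounts",
    "apiVersion": "2020-03-01-preview",
    "name": "[concat(parameters('application_insights_name'), '/serviceprofiler')]",
    "dependsOn": [
      "[resourceId('microsoft.insights/components', parameters('application_insights_name'))]"
    ]
  }
]
```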
-### Storage account location should match AI component location.
+
+### Storage Account location should match AI component location
+ * Make sure that the location of the Application Insights resource is the same as the Storage Account. Error message:+ ```powershell New-AzResourceGroupDeployment : 1:01:12 PM - Resource microsoft.insights/components/linkedStorageAccounts 'byos-test-centralus-ai/serviceprofiler' failed with message '{ "error": {
_Figure 2.0_
For general Profiler troubleshooting, refer to the [Profiler Troubleshoot documentation](profiler-troubleshooting.md).
-For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](../app/snapshot-debugger-troubleshoot.md).
+For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](../app/snapshot-debugger-troubleshoot.md).
## FAQs
-* If I have Profiler or Snapshot enabled, and then I enabled BYOS, will my data be migrated into my Storage Account?
- _No, it won't._
-* Will BYOS work with Encryption at Rest and Customer-Managed Key?
- _Yes, to be precise, BYOS is a requisite to have profiler/debugger enabled with Customer-Manager Keys._
+### If I have enabled Profiler/Snapshot Debugger and BYOS, will my data be migrated into my Storage Account?
+
+ *No, it won't.*
+
+### Will BYOS work with encryption-at-rest and Customer-Managed Key?
+
+ *Yes. To be precise, BYOS is a prerequisite for enabling Profiler/Snapshot Debugger with Customer-Managed Keys.*
+
+### Will BYOS work in an environment isolated from the Internet?
+
+ *Yes, BYOS is a requirement for isolated network scenarios.*
-* Will BYOS work in an environment isolated from the Internet?
- _Yes. In fact, BYOS is a requirement for isolated network scenarios._
+### Will BYOS work with both Customer-Managed Keys and Private Link enabled?
+
+ *Yes, it's possible.*
-* Will BYOS work when, both, Customer-Managed Keys and Private Link were enabled?
- _Yes, it can be possible._
+### If I have enabled BYOS, can I go back to using Diagnostic Services storage accounts to store my collected data?
+
+ *Yes, you can, but we don't currently support data migration from your BYOS.*
-* If I have enabled BYOS, can I go back using Diagnostic Services storage accounts to store my data collected?
- _Yes, you can, but, right now we don't support data migration from your BYOS._
+### After enabling BYOS, will I take over all the related storage and networking costs?
-* After enabling BYOS, will I take over of all the related costs of it, which are Storage and Networking?
- _Yes_
+ *Yes.*
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler.md
To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps
> For more information about supported runtime, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
-## Pre-requisites
+## Prerequisites
- An [Azure App Services ASP.NET/ASP.NET Core app](../../app-service/quickstart-dotnetcore.md). - [Application Insights resource](../app/create-new-resource.md) connected to your App Service app.
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
Title: Enable Snapshot Debugger for .NET apps in Azure App Service | Microsoft Docs description: Enable Snapshot Debugger for .NET apps in Azure App Service Previously updated : 03/26/2019+++
+reviewer: cweining
Last updated : 08/18/2022+ # Enable Snapshot Debugger for .NET apps in Azure App Service Snapshot Debugger currently supports ASP.NET and ASP.NET Core apps that are running on Azure App Service on Windows service plans.
-We recommend you run your application on the Basic service tier, or higher, when using snapshot debugger.
+We recommend that you run your application on the Basic service tier, or higher, when using Snapshot Debugger.
For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots. ## <a id="installation"></a> Enable Snapshot Debugger
-To enable Snapshot Debugger for an app, follow the instructions below.
-If you're running a different type of Azure service, here are instructions for enabling Snapshot Debugger on other supported platforms:
-* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+Snapshot Debugger is pre-installed as part of the App Services runtime, but you need to turn it on to get snapshots for your App Service app. To enable Snapshot Debugger for an app, follow the instructions below:
+
+> [!NOTE]
+> If you're using a preview version of .NET Core, or your application references Application Insights SDK (directly or indirectly via a dependent assembly), follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md) to include the [`Microsoft.ApplicationInsights.SnapshotCollector`](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package with the application.
> [!NOTE]
-> If you're using a preview version of .NET Core, or your application references Application Insights SDK, directly or indirectly via a dependent assembly, follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) to include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package with the application, and then complete the rest of the instructions below.
->
> Codeless installation of Application Insights Snapshot Debugger follows the .NET Core support policy. > For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
-Snapshot Debugger is pre-installed as part of the App Services runtime, but you need to turn it on to get snapshots for your App Service app.
-
-Once you've deployed an app, follow the steps below to enable the snapshot debugger:
-
-1. Navigate to the Azure control panel for your App Service.
-2. Go to the **Settings > Application Insights** page.
+After you've deployed your .NET app:
- ![Enable App Insights on App Services portal](./media/snapshot-debugger/application-insights-app-services.png)
+1. Go to the Azure control panel for your App Service.
+1. Go to the **Settings** > **Application Insights** page.
-3. Either follow the instructions on the page to create a new resource or select an existing App Insights resource to monitor your app. Also make sure both switches for Snapshot Debugger are **On**.
+ :::image type="content" source="./media/snapshot-debugger/application-insights-app-services.png" alt-text="Screenshot showing the Enable App Insights on App Services portal.":::
- ![Add App Insights site extension][Enablement UI]
+1. Either follow the instructions on the page to create a new resource or select an existing App Insights resource to monitor your app.
+1. Switch the Snapshot Debugger toggles to **On**.
+
+ :::image type="content" source="./media/snapshot-debugger/enablement-ui.png" alt-text="Screenshot showing how to add App Insights site extension.":::
+
+1. Snapshot Debugger is now enabled using an App Services App Setting.
-4. Snapshot Debugger is now enabled using an App Services App Setting.
+ :::image type="content" source="./media/snapshot-debugger/snapshot-debugger-app-setting.png" alt-text="Screenshot showing App Setting for Snapshot Debugger.":::
- ![App Setting for Snapshot Debugger][snapshot-debugger-app-setting]
+If you're running a different type of Azure service, here are instructions for enabling Snapshot Debugger on other supported platforms:
+* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+
## Enable Snapshot Debugger for other clouds Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide) through the Application Insights Connection String.
-|Connection String Property | US Government Cloud | China Cloud |
+|Connection String Property | US Government Cloud | China Cloud |
|||-| |SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
Application Insights Snapshot Debugger supports Azure AD authentication for snap
As of today, Snapshot Debugger only supports Azure AD authentication when you reference and configure Azure AD using the Application Insights SDK in your application.
-Below you can find all the steps required to enable Azure AD for profiles ingestion:
+To turn on Azure AD for snapshot ingestion:
+ 1. Create and add the managed identity you want to use to authenticate against your Application Insights resource to your App Service.
- a. For System-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity)
+ 1. For System-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity).
- b. For User-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity)
+ 1. For User-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity).
-2. Configure and enable Azure AD in your Application Insights resource. For more information, see the following [documentation](../app/azure-ad-authentication.md?tabs=net#configuring-and-enabling-azure-ad-based-authentication)
-3. Add the following application setting, used to let Snapshot Debugger agent know which managed identity to use:
+1. Configure and turn on Azure AD in your Application Insights resource. For more information, see the following [documentation](../app/azure-ad-authentication.md?tabs=net#configuring-and-enabling-azure-ad-based-authentication).
+1. Add the following application setting to let the Snapshot Debugger agent know which managed identity to use:
For System-Assigned Identity: |App Setting | Value | ||-|
-|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD |
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD |
For User-Assigned Identity: |App Setting | Value | ||-|
-|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD;ClientId={Client id of the User-Assigned Identity} |
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD;ClientId={Client id of the User-Assigned Identity} |
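If you manage your App Service through an ARM template rather than the portal, the same setting can be expressed as an `appSettings` entry under `siteConfig`. The following is a sketch using the System-Assigned value; substitute the User-Assigned string where appropriate:

```json
{
  "name": "APPLICATIONINSIGHTS_AUTHENTICATION_STRING",
  "value": "Authorization=AAD"
}
```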
## Disable Snapshot Debugger
-Follow the same steps as for **Enable Snapshot Debugger**, but switch both switches for Snapshot Debugger to **Off**.
-
-We recommend you have Snapshot Debugger enabled on all your apps to ease diagnostics of application exceptions.
+To disable Snapshot Debugger, repeat the [steps for enabling](#installation), but switch the Snapshot Debugger toggles to **Off**.
## Azure Resource Manager template
-For an Azure App Service, you can set app settings within the Azure Resource Manager template to enable Snapshot Debugger and Profiler, see the below template snippet:
+For an Azure App Service, you can set app settings within the Azure Resource Manager template to enable Snapshot Debugger and Profiler. For example:
```json {
For an Azure App Service, you can set app settings within the Azure Resource Man
``` ## Not Supported Scenarios
-Below you can find scenarios where Snapshot Collector is not supported:
+
+Snapshot Collector isn't supported in the following scenarios:
|Scenario | Side Effects | Recommendation | ||--|-|
-|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advance option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, therefore, no Snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights feature "Interop", see the [documentation.](../app/azure-web-apps-net-core.md#troubleshooting) | If you are using the advance option "Interop", use the codeless Snapshot Collector injection (enabled thru the Azure Portal UX) |
+|You're using the Snapshot Collector SDK in your application directly (*.csproj*) and have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost and no Snapshots will be available. <br/> Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.` <br/> [Learn more about the Application Insights feature "Interop".](../app/azure-web-apps-net-core.md#troubleshooting) | If you're using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal). |
## Next steps -- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.-- See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.-- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+* Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
+* See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
[Enablement UI]: ./media/snapshot-debugger/enablement-ui.png
-[snapshot-debugger-app-setting]:./media/snapshot-debugger/snapshot-debugger-app-setting.png
+[snapshot-debugger-app-setting]:./media/snapshot-debugger/snapshot-debugger-app-setting.png
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-function-app.md
Title: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions | Microsoft Docs description: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions+++
+reviewer: cweining
Previously updated : 12/18/2020- Last updated : 08/18/2022+ # Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions Snapshot Debugger currently works for ASP.NET and ASP.NET Core apps that are running on Azure Functions on Windows Service Plans.
-We recommend you run your application on the Basic service tier or higher when using Snapshot Debugger.
+We recommend that you run your application on the Basic service tier or higher when using Snapshot Debugger.
For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots.
For most applications, the Free and Shared service tiers don't have enough memor
## Enable Snapshot Debugger
-If you're running a different type of Azure service, here are instructions for enabling Snapshot Debugger on other supported platforms:
-* [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-
-To enable Snapshot Debugger in your Function app, you have to update your `host.json` file by adding the property `snapshotConfiguration` as defined below and redeploy your function.
+To enable Snapshot Debugger in your Function app, add the `snapshotConfiguration` property to your *host.json* file and redeploy your function. For example:
```json {
To enable Snapshot Debugger in your Function app, you have to update your `host.
} ```
-Snapshot Debugger is pre-installed as part of the Azure Functions runtime, which by default it's disabled.
-
-Since Snapshot Debugger it's included in the Azure Functions runtime, it isn't needed to add extra NuGet packages nor application settings.
+Snapshot Debugger is pre-installed as part of the Azure Functions runtime and is disabled by default. Since it's included in the runtime, you don't need to add extra NuGet packages or application settings.
-Just as reference, for a simple Function app (.NET Core), below is how it will look the `.csproj`, `{Your}Function.cs`, and `host.json` after enabled Snapshot Debugger on it.
+In the simple .NET Core Function app example below, `.csproj`, `{Your}Function.cs`, and `host.json` have Snapshot Debugger enabled:
-Project csproj
+`Project.csproj`
```xml <Project Sdk="Microsoft.NET.Sdk">
Project csproj
</Project> ```
-Function class
+`{Your}Function.cs`
```csharp using System;
namespace SnapshotCollectorAzureFunction
} ```
-Host file
+`host.json`
```json {
Host file
Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide). Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:+ ```json { "version": "2.0",
Below are the supported overrides of the Snapshot Debugger agent endpoint:
## Disable Snapshot Debugger
-To disable Snapshot Debugger in your Function app, you just need to update your `host.json` file by setting to `false` the property `snapshotConfiguration.isEnabled`.
+To disable Snapshot Debugger in your Function app, update your `host.json` file by setting the `snapshotConfiguration.isEnabled` property to `false`.
```json {
To disable Snapshot Debugger in your Function app, you just need to update your
} ```
-We recommend you have Snapshot Debugger enabled on all your apps to ease diagnostics of application exceptions.
+We recommend that you have Snapshot Debugger enabled on all your apps to ease diagnostics of application exceptions.
## Next steps -- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.-- [View snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.-- Customize Snapshot Debugger configuration based on your use-case on your Function app. For more info, see [snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration).-- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+* Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
+* [View snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+* Customize Snapshot Debugger configuration based on your use-case on your Function app. For more information, see [snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration).
+* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
Title: Troubleshoot Azure Application Insights Snapshot Debugger description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Snapshot Debugger.+++
+reviewer: cweining
Previously updated : 03/07/2019- Last updated : 08/18/2022+ # <a id="troubleshooting"></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots+ If you enabled Application Insights Snapshot Debugger for your application, but aren't seeing snapshots for exceptions, you can use these instructions to troubleshoot. There can be many different reasons why snapshots aren't generated. You can start by running the snapshot health check to identify some of the possible common causes. ## Not Supported Scenarios
-Below you can find scenarios where Snapshot Collector is not supported:
+
+Snapshot Collector isn't supported in the following scenarios:
|Scenario | Side Effects | Recommendation | ||--|-|
-|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advance option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, therefore, no Snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights feature "Interop", see the [documentation.](../app/azure-web-apps-net-core.md#troubleshooting) | If you are using the advance option "Interop", use the codeless Snapshot Collector injection (enabled thru the Azure Portal UX) |
+|When using the Snapshot Collector SDK in your application directly (*.csproj*) and you have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost and no Snapshots will be available. <br/> Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor` <br/> For more information about the Application Insights feature "Interop", see the [documentation.](../app/azure-web-apps-net-core.md#troubleshooting) | If you're using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal UX) |
## Make sure you're using the appropriate Snapshot Debugger Endpoint
Currently the only regions that require endpoint modifications are [Azure Govern
For App Service and applications using the Application Insights SDK, you have to update the connection string using the supported overrides for Snapshot Debugger as defined below:
-|Connection String Property | US Government Cloud | China Cloud |
+|Connection String Property | US Government Cloud | China Cloud |
|||-| |SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
For more information about other connection overrides, see [Application Insights
For Function App, you have to update the `host.json` using the supported overrides below:
-|Property | US Government Cloud | China Cloud |
+|Property | US Government Cloud | China Cloud |
|||-| |AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` | Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:+ ```json { "version": "2.0",
Below is an example of the `host.json` updated with the US Government Cloud agen
``` ## Use the snapshot health check+ Several common problems result in the Open Debug Snapshot not showing up. Using an outdated Snapshot Collector, for example; reaching the daily upload limit; or perhaps the snapshot is just taking a long time to upload. Use the Snapshot Health Check to troubleshoot common problems. There's a link in the exception pane of the end-to-end trace view that takes you to the Snapshot Health Check.
-![Enter snapshot health check](./media/snapshot-debugger/enter-snapshot-health-check.png)
The interactive, chat-like interface looks for common problems and guides you to fix them.
-![Health Check](./media/snapshot-debugger/health-check.png)
If that doesn't solve the problem, then refer to the following manual troubleshooting steps. ## Verify the instrumentation key
-Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the ApplicationInsights.config file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal.
+Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the *ApplicationInsights.config* file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
-If you have an ASP.NET application that it is hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
+If you have an ASP.NET application that's hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
-[The Snapshot Debugger endpoint requires TLS version 1.2](snapshot-debugger-upgrade.md?toc=/azure/azure-monitor/toc.json). The set of SSL security protocols is one of the quirks enabled by the httpRuntime targetFramework value in the system.web section of web.config.
-If the httpRuntime targetFramework is 4.5.2 or lower, then TLS 1.2 isn't included by default.
+[The Snapshot Debugger endpoint requires TLS version 1.2](snapshot-debugger-upgrade.md?toc=/azure/azure-monitor/toc.json). The set of SSL security protocols is one of the quirks enabled by the `httpRuntime targetFramework` value in the `system.web` section of `web.config`.
+If the `httpRuntime targetFramework` is 4.5.2 or lower, then TLS 1.2 isn't included by default.
> [!NOTE]
-> The httpRuntime targetFramework value is independent of the target framework used when building your application.
+> The `httpRuntime targetFramework` value is independent of the target framework used when building your application.
-To check the setting, open your web.config file and find the system.web section. Ensure that the `targetFramework` for `httpRuntime` is set to 4.6 or above.
+To check the setting, open your *web.config* file and find the `system.web` section. Ensure that the `targetFramework` for `httpRuntime` is set to 4.6 or above.
```xml <system.web>
To check the setting, open your web.config file and find the system.web section.
``` > [!NOTE]
-> Modifying the httpRuntime targetFramework value changes the runtime quirks applied to your application and can cause other, subtle behavior changes. Be sure to test your application thoroughly after making this change. For a full list of compatibility changes, see [Retargeting changes](/dotnet/framework/migration-guide/application-compatibility#retargeting-changes).
+> Modifying the `httpRuntime targetFramework` value changes the runtime quirks applied to your application and can cause other, subtle behavior changes. Be sure to test your application thoroughly after making this change. For a full list of compatibility changes, see [Retargeting changes](/dotnet/framework/migration-guide/application-compatibility#retargeting-changes).
> [!NOTE]
-> If the targetFramework is 4.7 or above then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you are using your own virtual machine, you may need to enable TLS 1.2 in the OS.
+> If the `targetFramework` is 4.7 or above then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you're using your own virtual machine, you may need to enable TLS 1.2 in the OS.
## Preview Versions of .NET Core+ If you're using a preview version of .NET Core or your application references Application Insights SDK, directly or indirectly via a dependent assembly, follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json). ## Check the Diagnostic Services site extension' Status Page+ If Snapshot Debugger was enabled through the [Application Insights pane](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json) in the portal, it was enabled by the Diagnostic Services site extension. > [!NOTE]
> This domain will be the same as the Kudu management site for App Service.

This Status Page shows the installation state of the Profiler and Snapshot Collector agents. If there was an unexpected error, it will be displayed and show how to fix it. You can use the Kudu management site for App Service to get the base URL of this Status Page:

1. Open your App Service application in the Azure portal.
-2. Select **Advanced Tools**, or search for **Kudu**.
-3. Select **Go**.
-4. Once you are on the Kudu management site, in the URL, **append the following `/DiagnosticServices` and press enter**.
+1. Select **Advanced Tools**, or search for **Kudu**.
+1. Select **Go**.
+1. Once you're on the Kudu management site, **append `/DiagnosticServices` to the URL and press Enter**.
It will end like this: `https://<kudu-url>/DiagnosticServices`

## Upgrade to the latest version of the NuGet package

Based on how Snapshot Debugger was enabled, see the following options:

* If Snapshot Debugger was enabled through the [Application Insights pane in the portal](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json), then your application should already be running the latest NuGet package.
-* If Snapshot Debugger was enabled by including the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package, use Visual Studio's NuGet Package Manager to make sure you're using the latest version of Microsoft.ApplicationInsights.SnapshotCollector.
+* If Snapshot Debugger was enabled by including the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package, use Visual Studio's NuGet Package Manager to make sure you're using the latest version of `Microsoft.ApplicationInsights.SnapshotCollector`.
For the latest updates and bug fixes, [consult the release notes](./snapshot-collector-release-notes.md).

## Check the uploader logs
-After a snapshot is created, a minidump file (.dmp) is created on disk. A separate uploader process creates that minidump file and uploads it, along with any associated PDBs, to Application Insights Snapshot Debugger storage. After the minidump has uploaded successfully, it's deleted from disk. The log files for the uploader process are kept on disk. In an App Service environment, you can find these logs in `D:\Home\LogFiles`. Use the Kudu management site for App Service to find these log files.
+After a snapshot is created, a minidump file (*.dmp*) is created on disk. A separate uploader process creates that minidump file and uploads it, along with any associated PDBs, to Application Insights Snapshot Debugger storage. After the minidump has uploaded successfully, it's deleted from disk. The log files for the uploader process are kept on disk. In an App Service environment, you can find these logs in `D:\Home\LogFiles`. Use the Kudu management site for App Service to find these log files.
1. Open your App Service application in the Azure portal.
-2. Select **Advanced Tools**, or search for **Kudu**.
-3. Select **Go**.
-4. In the **Debug console** drop-down list box, select **CMD**.
-5. Select **LogFiles**.
+1. Select **Advanced Tools**, or search for **Kudu**.
+1. Select **Go**.
+1. In the **Debug console** drop-down list, select **CMD**.
+1. Select **LogFiles**.
You should see at least one file with a name that begins with `Uploader_` or `SnapshotUploader_` and a `.log` extension. Select the appropriate icon to download any log files or open them in a browser. The file name includes a unique suffix that identifies the App Service instance. If your App Service instance is hosted on more than one machine, there are separate log files for each machine. When the uploader detects a new minidump file, it's recorded in the log file. Here's an example of a successful snapshot and upload:
```
SnapshotUploader.exe Information: 0 : Deleted D:\local\Temp\Dumps\c12a605e73c443
```

> [!NOTE]
-> The example above is from version 1.2.0 of the Microsoft.ApplicationInsights.SnapshotCollector NuGet package. In earlier versions, the uploader process is called `MinidumpUploader.exe` and the log is less detailed.
+> The example above is from version 1.2.0 of the `Microsoft.ApplicationInsights.SnapshotCollector` NuGet package. In earlier versions, the uploader process is called `MinidumpUploader.exe` and the log is less detailed.
In the previous example, the instrumentation key is `c12a605e73c44346a984e00000000000`. This value should match the instrumentation key for your application. The minidump is associated with a snapshot with the ID `139e411a23934dc0b9ea08a626db16c5`. You can use this ID later to locate the associated exception record in Application Insights Analytics.
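If you're sifting through many uploader log lines, you can pull both values out of a minidump path programmatically. The following is an illustrative sketch only, not part of any official tooling; it assumes the `...\Dumps\<instrumentation-key>\<snapshot-id>.dmp` path layout implied by the log excerpt above:

```python
import re

def parse_minidump_path(path: str):
    """Extract the instrumentation key and snapshot ID from an uploader
    minidump path of the assumed form ...\\Dumps\\<ikey>\\<snapshot-id>.dmp."""
    m = re.search(r"\\Dumps\\([0-9a-f]{32})\\([0-9a-f]{32})\.dmp$", path, re.IGNORECASE)
    if not m:
        return None  # not a minidump path in the assumed layout
    return {"ikey": m.group(1), "snapshot_id": m.group(2)}

# Example using the IDs from the log excerpt above
path = (r"D:\local\Temp\Dumps"
        r"\c12a605e73c44346a984e00000000000"
        r"\139e411a23934dc0b9ea08a626db16c5.dmp")
print(parse_minidump_path(path))
```

The extracted `snapshot_id` is what you would then search for in Application Insights.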
```
SnapshotUploader.exe Information: 0 : Deleted PDB scan marker : D:\local\Temp\Du
DateTime=2018-03-09T01:47:19.4614027Z
```
-For applications that _aren't_ hosted in App Service, the uploader logs are in the same folder as the minidumps: `%TEMP%\Dumps\<ikey>` (where `<ikey>` is your instrumentation key).
+For applications that *aren't* hosted in App Service, the uploader logs are in the same folder as the minidumps: `%TEMP%\Dumps\<ikey>` (where `<ikey>` is your instrumentation key).
## Troubleshooting Cloud Services

In Cloud Services, the default temporary folder could be too small to hold the minidump files, leading to lost snapshots. The space needed depends on the total working set of your application and the number of concurrent snapshots.
For example, if your application uses 1 GB of total working set, you should make
Follow these steps to configure your Cloud Service role with a dedicated local resource for snapshots.

1. Add a new local resource to your Cloud Service by editing the Cloud Service definition (.csdef) file. The following example defines a resource called `SnapshotStore` with a size of 5 GB.

   ```xml
   <LocalResources>
     <LocalStorage name="SnapshotStore" cleanOnRoleRecycle="false" sizeInMB="5120" />
   </LocalResources>
   ```
-2. Modify your role's startup code to add an environment variable that points to the `SnapshotStore` local resource. For Worker Roles, the code should be added to your role's `OnStart` method:
+1. Modify your role's startup code to add an environment variable that points to the `SnapshotStore` local resource. For Worker Roles, the code should be added to your role's `OnStart` method:
+
   ```csharp
   public override bool OnStart()
   {
       // Point the collector at the SnapshotStore local resource defined in step 1.
       // The SNAPSHOTSTORE variable name is illustrative; use whatever name your
       // configuration references.
       Environment.SetEnvironmentVariable("SNAPSHOTSTORE",
           RoleEnvironment.GetLocalResource("SnapshotStore").RootPath.TrimEnd('\\'));
       return base.OnStart();
   }
   ```

   For Web Roles (ASP.NET), the code should be added to your web application's `Application_Start` method:

   ```csharp
   using Microsoft.WindowsAzure.ServiceRuntime;
   using System;

   // ...
   protected void Application_Start()
   {
       Environment.SetEnvironmentVariable("SNAPSHOTSTORE",
           RoleEnvironment.GetLocalResource("SnapshotStore").RootPath.TrimEnd('\\'));
       // ...
   }
   ```
-3. Update your role's ApplicationInsights.config file to override the temporary folder location used by `SnapshotCollector`
+1. Update your role's *ApplicationInsights.config* file to override the temporary folder location used by `SnapshotCollector`:
+
   ```xml
   <TelemetryProcessors>
     <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
       <!-- ... -->
       <TempFolder>%SNAPSHOTSTORE%</TempFolder>
     </Add>
   </TelemetryProcessors>
   ```

When the Snapshot Collector starts up, it tries to find a folder on disk that is suitable for running the Snapshot Uploader process. The chosen folder is known as the Shadow Copy folder. The Snapshot Collector checks a few well-known locations, making sure it has permissions to copy the Snapshot Uploader binaries. The following environment variables are used:
-If a suitable folder can't be found, Snapshot Collector reports an error saying _"Couldn't find a suitable shadow copy folder."_
+* Fabric_Folder_App_Temp
+* LOCALAPPDATA
+* APPDATA
+* TEMP
+
+If a suitable folder can't be found, Snapshot Collector reports an error saying *"Couldn't find a suitable shadow copy folder."*
If the copy fails, Snapshot Collector reports a `ShadowCopyFailed` error. If the uploader can't be launched, Snapshot Collector reports an `UploaderCannotStartFromShadowCopy` error. The body of the message often contains `System.UnauthorizedAccessException`. This error usually occurs because the application is running under an account with reduced permissions. The account has permission to write to the shadow copy folder, but it doesn't have permission to execute code.
-Since these errors usually happen during startup, they'll usually be followed by an `ExceptionDuringConnect` error saying _"Uploader failed to start."_
+Since these errors usually happen during startup, they're typically followed by an `ExceptionDuringConnect` error saying *"Uploader failed to start."*
-To work around these errors, you can specify the shadow copy folder manually via the `ShadowCopyFolder` configuration option. For example, using ApplicationInsights.config:
+To work around these errors, you can specify the shadow copy folder manually via the `ShadowCopyFolder` configuration option. For example, using *ApplicationInsights.config*:
```xml
<TelemetryProcessors>
  <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
    <!-- ... -->
    <ShadowCopyFolder>D:\SnapshotUploader</ShadowCopyFolder>
  </Add>
</TelemetryProcessors>
```
-Or, if you're using appsettings.json with a .NET Core application:
+Or, if you're using *appsettings.json* with a .NET Core application:
```json
{
  "SnapshotCollectorConfiguration": {
    "ShadowCopyFolder": "D:\\SnapshotUploader"
  }
}
```

When a snapshot is created, the throwing exception is tagged with a snapshot ID. That snapshot ID is included as a custom property when the exception is reported to Application Insights. Using **Search** in Application Insights, you can find all records with the `ai.snapshot.id` custom property.

1. Browse to your Application Insights resource in the Azure portal.
-2. Select **Search**.
-3. Type `ai.snapshot.id` in the Search text box and press Enter.
+1. Select **Search**.
+1. Type `ai.snapshot.id` in the Search text box and press Enter.
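In Analytics, an equivalent query might look like the following sketch. The `exceptions` table and `customDimensions` column are part of the standard Application Insights schema; the ID value reuses the example snapshot ID from the uploader log above:

```kusto
exceptions
| where tostring(customDimensions["ai.snapshot.id"]) == "139e411a23934dc0b9ea08a626db16c5"
```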
-![Search for telemetry with a snapshot ID in the portal](./media/snapshot-debugger/search-snapshot-portal.png)
If this search returns no results, then, no snapshots were reported to Application Insights in the selected time range.
To search for a specific snapshot ID from the Uploader logs, type that ID in the **Search** box:
1. Double-check that you're looking at the right Application Insights resource by verifying the instrumentation key.
-2. Using the timestamp from the Uploader log, adjust the Time Range filter of the search to cover that time range.
+1. Using the timestamp from the Uploader log, adjust the Time Range filter of the search to cover that time range.
If you still don't see an exception with that snapshot ID, then the exception record wasn't reported to Application Insights. This situation can happen if your application crashed after it took the snapshot but before it reported the exception record. In this case, check the App Service logs under **Diagnose and solve problems** to see if there were unexpected restarts or unhandled exceptions.
If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Snapshot Debugger service.
-The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
+The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
azure-monitor Snapshot Debugger Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-upgrade.md
Title: Upgrading Azure Application Insights Snapshot Debugger description: How to upgrade Snapshot Debugger for .NET apps to the latest version on Azure App Services, or via Nuget packages+++
+reviewer: cweining
Previously updated : 03/28/2019- Last updated : 08/18/2022+

# Upgrading the Snapshot Debugger
-To provide the best possible security for your data, Microsoft is moving away from TLS 1.0 and TLS 1.1, which have been shown to be vulnerable to determined attackers. If you're using an older version of the site extension, it will require an upgrade to continue working. This document outlines the steps needed to upgrade your Snapshot debugger to the latest version.
-There are two primary upgrade paths depending on if you enabled the Snapshot Debugger using a site extension or if you used an SDK/Nuget added to your application. Both upgrade paths are discussed below.
+To provide the best possible security for your data, Microsoft is moving away from TLS 1.0 and TLS 1.1, which have been shown to be vulnerable to determined attackers. If you're using an older version of the site extension, it will require an upgrade to continue working. This document outlines the steps needed to upgrade your Snapshot debugger to the latest version.
+
+You can follow two primary upgrade paths, depending on how you enabled the Snapshot Debugger:
+
+* Via site extension
+* Via an SDK/NuGet added to your application
+
+This article discusses both upgrade paths.
## Upgrading the site extension > [!IMPORTANT]
-> Older versions of Application Insights used a private site extension called _Application Insights extension for Azure App Service_. The current Application Insights experience is enabled by setting App Settings to light up a pre-installed site extension.
+> Older versions of Application Insights used a private site extension called *Application Insights extension for Azure App Service*. The current Application Insights experience is enabled by setting App Settings to light up a pre-installed site extension.
> To avoid conflicts, which may cause your site to stop working, it is important to delete the private site extension first. See step 4 below.

If you enabled the Snapshot debugger using the site extension, you can upgrade using the following procedure:

1. Sign in to the Azure portal.
-2. Navigate to your resource that has Application Insights and Snapshot debugger enabled. For example, for a Web App, navigate to the App Service resource:
+1. Go to your resource that has Application Insights and Snapshot debugger enabled. For example, for a Web App, go to the App Service resource:
+
+ :::image type="content" source="./media/snapshot-debugger-upgrade/app-service-resource.png" alt-text="Screenshot of individual App Service resource named DiagService01.":::
+
+1. After you've navigated to your resource, click on the **Extensions** blade and wait for the list of extensions to populate:
- ![Screenshot of individual App Service resource named DiagService01](./media/snapshot-debugger-upgrade/app-service-resource.png)
+ :::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-site-extension-to-be-deleted.png" alt-text="Screenshot of App Service Extensions showing Application Insights extension for Azure App Service installed.":::
-3. Once you've navigated to your resource, click on the Extensions blade and wait for the list of extensions to populate:
+1. If any version of *Application Insights extension for Azure App Service* is installed, select it and click **Delete**. Confirm **Yes** to delete the extension and wait for the delete to complete before moving to the next step.
- ![Screenshot of App Service Extensions showing Application Insights extension for Azure App Service installed](./media/snapshot-debugger-upgrade/application-insights-site-extension-to-be-deleted.png)
+ :::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-site-extension-delete.png" alt-text="Screenshot of App Service Extensions showing Application Insights extension for Azure App Service with the Delete button highlighted.":::
-4. If any version of _Application Insights extension for Azure App Service_ is installed, then select it and click Delete. Confirm **Yes** to delete the extension and wait for the delete to complete before moving to the next step.
+1. Go to the **Overview** blade of your resource and select **Application Insights**:
- ![Screenshot of App Service Extensions showing Application Insights extension for Azure App Service with the Delete button highlighted](./media/snapshot-debugger-upgrade/application-insights-site-extension-delete.png)
+ :::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-button.png" alt-text="Screenshot of three buttons. Center button with name Application Insights is selected.":::
-5. Go to the Overview blade of your resource and click on Application Insights:
+1. If this is the first time you've viewed the Application Insights blade for this App Service, you'll be prompted to turn on Application Insights. Select **Turn on Application Insights**.
- ![Screenshot of three buttons. Center button with name Application Insights is selected](./media/snapshot-debugger-upgrade/application-insights-button.png)
+ :::image type="content" source="./media/snapshot-debugger-upgrade/turn-on-application-insights.png" alt-text="Screenshot of the first-time experience for the Application Insights blade with the Turn on Application Insights button highlighted.":::
-6. If this is the first time you've viewed the Application Insights blade for this App Service, you'll be prompted to turn on Application Insights. Select **Turn on Application Insights**.
-
- ![Screenshot of the first-time experience for the Application Insights blade with the Turn on Application Insights button highlighted](./media/snapshot-debugger-upgrade/turn-on-application-insights.png)
+1. In the Application Insights settings blade, switch the Snapshot Debugger setting toggles to **On** and select **Apply**.
-7. The current Application Insights settings are displayed. Unless you want to take the opportunity to change your settings, you can leave them as is. The **Apply** button on the bottom of the blade isn't enabled by default and you'll have to toggle one of the settings to activate the button. You don't have to change any actual settings, rather you can change the setting and then immediately change it back. We recommend toggling the Profiler setting and then selecting **Apply**.
+ If you decide to change *any* Application Insights settings, the **Apply** button on the bottom of the blade will be activated.
- ![Screenshot of Application Insights App Service Configuration page with Apply button highlighted in red](./media/snapshot-debugger-upgrade/view-application-insights-data.png)
+ :::image type="content" source="./media/snapshot-debugger-upgrade/view-application-insights-data.png" alt-text="Screenshot of Application Insights App Service Configuration page with Apply button highlighted in red.":::
-8. Once you click **Apply**, you'll be asked to confirm the changes.
+1. After you click **Apply**, you'll be asked to confirm the changes.
> [!NOTE] > The site will be restarted as part of the upgrade process.
- ![Screenshot of App Service's apply monitoring prompt. Text box displays message: "We will now apply changes to your app settings and install our tools to link your Application Insights resource to the web app. This will restart the site. Do you want to continue?"](./media/snapshot-debugger-upgrade/apply-monitoring-settings.png)
+ :::image type="content" source="./media/snapshot-debugger-upgrade/apply-monitoring-settings.png" alt-text="Screenshot of App Service's apply monitoring prompt.":::
-9. Click **Yes** to apply the changes and wait for the process to complete.
+1. Click **Yes** to apply the changes and wait for the process to complete.
The site has now been upgraded and is ready to use.
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
Title: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines | Microsoft Docs description: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines+++
+reviewer: cweining
Previously updated : 03/07/2019- Last updated : 08/18/2022+

# Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines
-If your ASP.NET or ASP.NET core application runs in Azure App Service, it's highly recommended to [enable Snapshot Debugger through the Application Insights portal page](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json). However, if your application requires a customized Snapshot Debugger configuration, or a preview version of .NET core, then this instruction should be followed ***in addition*** to the instructions for [enabling through the Application Insights portal page](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json).
+If your ASP.NET or ASP.NET Core application runs in App Service and requires a customized Snapshot Debugger configuration, or a preview version of .NET Core, start with the [Enable Snapshot Debugger for App Services how-to guide](snapshot-debugger-app-service.md).
-If your application runs in Azure Service Fabric, Cloud Service, Virtual Machines, or on-premises machines, the following instructions should be used.
+If your application runs in Azure Service Fabric, Cloud Service, Virtual Machines, or on-premises machines, skip enabling Snapshot Debugger on App Service and follow this guide instead.
## Configure snapshot collection for ASP.NET applications
-1. [Enable Application Insights in your web app](../app/asp-net.md), if you haven't done it yet.
+### Prerequisite
-2. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+[Enable Application Insights in your web app](../app/asp-net.md).
-3. If needed, customized the Snapshot Debugger configuration added to [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md). The default Snapshot Debugger configuration is mostly empty and all settings are optional. Here is an example showing a configuration equivalent to the default configuration:
+1. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+
+1. If needed, customize the Snapshot Debugger configuration added to [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md).
+
+ The default Snapshot Debugger configuration is mostly empty and all settings are optional. Here's an example showing a configuration equivalent to the default configuration:
```xml
<TelemetryProcessors>
  <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
    <!-- ... -->
  </Add>
</TelemetryProcessors>
```
-4. Snapshots are collected only on exceptions that are reported to Application Insights. In some cases (for example, older versions of the .NET platform), you might need to [configure exception collection](../app/asp-net-exceptions.md#exceptions) to see exceptions with snapshots in the portal.
-
+1. Snapshots are collected only on exceptions that are reported to Application Insights. In some cases (for example, older versions of the .NET platform), you might need to [configure exception collection](../app/asp-net-exceptions.md#exceptions) to see exceptions with snapshots in the portal.
## Configure snapshot collection for applications using ASP.NET Core LTS or above
-1. [Enable Application Insights in your ASP.NET Core web app](../app/asp-net-core.md), if you haven't done it yet.
+### Prerequisites
- > [!NOTE]
- > Be sure that your application references version 2.1.1, or newer, of the Microsoft.ApplicationInsights.AspNetCore package.
+[Enable Application Insights in your ASP.NET Core web app](../app/asp-net-core.md), if you haven't done it yet.
+> [!NOTE]
+> Be sure that your application references version 2.1.1, or newer, of the `Microsoft.ApplicationInsights.AspNetCore` package.
-2. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+1. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
-3. Modify your application's `Startup` class to add and configure the Snapshot Collector's telemetry processor.
- 1. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.5 or above is used, then add the following using statements to `Startup.cs`.
+1. Modify your application's `Startup` class to add and configure the Snapshot Collector's telemetry processor.
+ 1. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.5 or above is used, then add the following using statements to *Startup.cs*:
   ```csharp
   using Microsoft.ApplicationInsights.SnapshotCollector;
   ```
- Add the following at the end of the ConfigureServices method in the `Startup` class in `Startup.cs`.
+ Add the following at the end of the `ConfigureServices` method in the `Startup` class in *Startup.cs*:
   ```csharp
   services.AddSnapshotCollector((configuration) => Configuration.Bind(nameof(SnapshotCollectorConfiguration), configuration));
   ```
- 2. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.4 or below is used, then add the following using statements to `Startup.cs`.
+
+ 1. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.4 or below is used, then add the following using statements to *Startup.cs*.
   ```csharp
   using Microsoft.ApplicationInsights.SnapshotCollector;
   // ...
   using Microsoft.ApplicationInsights.Extensibility;
   ```
- Add the following `SnapshotCollectorTelemetryProcessorFactory` class to `Startup` class.
+ Add the following `SnapshotCollectorTelemetryProcessorFactory` class to `Startup` class:
   ```csharp
   class Startup
   {
       // ...
   }
   ...
   ```

   Add the `SnapshotCollectorConfiguration` and `SnapshotCollectorTelemetryProcessorFactory` services to the startup pipeline:

   ```csharp
   // ...
   }
   ```
-4. If needed, customized the Snapshot Debugger configuration by adding a SnapshotCollectorConfiguration section to appsettings.json. All settings in the Snapshot Debugger configuration are optional. Here is an example showing a configuration equivalent to the default configuration:
+1. If needed, customize the Snapshot Debugger configuration by adding a `SnapshotCollectorConfiguration` section to *appsettings.json*.
+
+ All settings in the Snapshot Debugger configuration are optional. Here's an example showing a configuration equivalent to the default configuration:
   ```json
   {
     "SnapshotCollectorConfiguration": {
     }
   }
   ```
1. If your application isn't already instrumented with Application Insights, get started by [enabling Application Insights and setting the instrumentation key](../app/windows-desktop.md).
-2. Add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+1. Add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
-3. Snapshots are collected only on exceptions that are reported to Application Insights. You may need to modify your code to report them. The exception handling code depends on the structure of your application, but an example is below:
+1. Snapshots are collected only on exceptions that are reported to Application Insights. You may need to modify your code to report them. The exception handling code depends on the structure of your application, but an example is below:
```csharp
TelemetryClient _telemetryClient = new TelemetryClient();
// ...
    }
}
```

[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

## Next steps

- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
- See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
Title: Azure Application Insights Snapshot Debugger for .NET apps description: Debug snapshots are automatically collected when exceptions are thrown in production .NET apps+++
+reviewer: cweining
- Previously updated : 10/12/2021-+ Last updated : 08/18/2022

# Debug snapshots on exceptions in .NET apps
-When an exception occurs, you can automatically collect a debug snapshot from your live web application. The snapshot shows the state of source code and variables at the moment the exception was thrown. The Snapshot Debugger in [Azure Application Insights](../app/app-insights-overview.md) monitors exception telemetry from your web app. It collects snapshots on your top-throwing exceptions so that you have the information you need to diagnose issues in production. Include the [Snapshot collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application, and optionally configure collection parameters in [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md). Snapshots appear on [exceptions](../app/asp-net-exceptions.md) in the Application Insights portal.
-You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To get a more powerful debugging experience with source code, open snapshots with Visual Studio 2019 Enterprise. In Visual Studio, you can also [set Snappoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
+When an exception occurs, you can automatically collect a debug snapshot from your live web application. The debug snapshot shows the state of source code and variables at the moment the exception was thrown. The Snapshot Debugger in [Azure Application Insights](../app/app-insights-overview.md):
-Debug snapshots are stored for 15 days. This retention policy is set on a per-application basis. If you need to increase this value, you can request an increase by opening a support case in the Azure portal.
+* Monitors system-generated logs from your web app.
+* Collects snapshots on your top-throwing exceptions.
+* Provides information you need to diagnose issues in production.
+
+Include the [Snapshot collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application and configure collection parameters in [`ApplicationInsights.config`](../app/configuration-with-applicationinsights-config.md).
+
+Snapshots appear on [**Exceptions**](../app/asp-net-exceptions.md) in the Application Insights blade of the Azure portal.
+
+You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To get a more powerful debugging experience with source code, open snapshots with Visual Studio Enterprise. You can also [set snappoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
## Enable Application Insights Snapshot Debugger for your application

Snapshot collection is available for:

* .NET Framework and ASP.NET applications running .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or later.
* .NET Core and ASP.NET Core applications running .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) on Windows.
* .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications on Windows.
The following environments are supported:
* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later, or Windows 8.1 or later

> [!NOTE]
-> Client applications (for example, WPF, Windows Forms or UWP) are not supported.
+> Client applications (for example, WPF, Windows Forms or UWP) aren't supported.
If you've enabled Snapshot Debugger but aren't seeing snapshots, check our [Troubleshooting guide](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
Access to snapshots is protected by Azure role-based access control (Azure RBAC). To inspect a snapshot, you must first be added to the necessary role by a subscription owner.

> [!NOTE]
-> Owners and contributors do not automatically have this role. If they want to view snapshots, they must add themselves to the role.
-
-Subscription owners should assign the `Application Insights Snapshot Debugger` role to users who will inspect snapshots. This role can be assigned to individual users or groups by subscription owners for the target Application Insights resource or its resource group or subscription.
+> Owners and contributors don't automatically have this role. If they want to view snapshots, they must add themselves to the role.
-1. Assign the Debugger role to the **Application Insights Snapshot**.
+Subscription owners should assign the [Application Insights Snapshot Debugger](../../role-based-access-control/role-assignments-portal.md) role to users who will inspect snapshots. This role can be assigned to individual users or groups by subscription owners for the target Application Insights resource or its resource group or subscription.
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+Assign the **Application Insights Snapshot Debugger** role.
+For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
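For instance, the assignment can be sketched with Azure CLI; the assignee and scope values below are placeholders, not values from this article, and the command requires an authenticated Azure session:

```azurecli
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Application Insights Snapshot Debugger" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/components/<app-insights-name>"
```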
> [!IMPORTANT]
-> Please note that snapshots may contain personal data or other sensitive information in variable and parameter values. Snapshot data is stored in the same region as your App Insights resource.
+> Snapshots may contain personal data or other sensitive information in variable and parameter values. Snapshot data is stored in the same region as your Application Insights resource.
## View Snapshots in the Portal
-After an exception has occurred in your application and a snapshot has been created, you should have snapshots to view. It can take 5 to 10 minutes from an exception occurring to a snapshot ready and viewable from the portal. To view snapshots, in the **Failure** pane, select the **Operations** button when viewing the **Operations** tab, or select the **Exceptions** button when viewing the **Exceptions** tab:
+After an exception has occurred in your application and a snapshot has been created, you should have snapshots to view in the Azure portal within 5 to 10 minutes. To view snapshots, in the **Failure** pane, either:
+
+* Select the **Operations** button when viewing the **Operations** tab, or
+* Select the **Exceptions** button when viewing the **Exceptions** tab.
-![Failures Page](./media/snapshot-debugger/failures-page.png)
Select an operation or exception in the right pane to open the **End-to-End Transaction Details** pane, then select the exception event. If a snapshot is available for the given exception, an **Open Debug Snapshot** button appears on the right pane with details for the [exception](../app/asp-net-exceptions.md).
-![Open Debug Snapshot button on exception](./media/snapshot-debugger/e2e-transaction-page.png)
In the Debug Snapshot view, you see a call stack and a variables pane. When you select frames of the call stack in the call stack pane, you can view local variables and parameters for that function call in the variables pane.
-![View Debug Snapshot in the portal](./media/snapshot-debugger/open-snapshot-portal.png)
-Snapshots might include sensitive information, and by default they aren't viewable. To view snapshots, you must have the `Application Insights Snapshot Debugger` role assigned to you.
+Snapshots might include sensitive information. By default, you can only view snapshots if you've been assigned the `Application Insights Snapshot Debugger` role.
## View Snapshots in Visual Studio 2017 Enterprise or above

1. Click the **Download Snapshot** button to download a `.diagsession` file, which can be opened by Visual Studio Enterprise.
-2. To open the `.diagsession` file, you need to have the Snapshot Debugger Visual Studio component installed. The Snapshot Debugger component is a required component of the ASP.NET workload in Visual Studio and can be selected from the Individual Component list in the Visual Studio installer. If you're using a version of Visual Studio before Visual Studio 2017 version 15.5, you'll need to install the extension from the [Visual Studio Marketplace](https://aka.ms/snapshotdebugger).
+1. To open the `.diagsession` file, you need to have the Snapshot Debugger Visual Studio component installed. The Snapshot Debugger component is a required component of the ASP.NET workload in Visual Studio and can be selected from the Individual Component list in the Visual Studio installer. If you're using a version of Visual Studio before Visual Studio 2017 version 15.5, you'll need to install the extension from the [Visual Studio Marketplace](https://aka.ms/snapshotdebugger).
-3. After you open the snapshot file, the Minidump Debugging page in Visual Studio appears. Click **Debug Managed Code** to start debugging the snapshot. The snapshot opens to the line of code where the exception was thrown so that you can debug the current state of the process.
+1. After you open the snapshot file, the Minidump Debugging page in Visual Studio appears. Click **Debug Managed Code** to start debugging the snapshot. The snapshot opens to the line of code where the exception was thrown so that you can debug the current state of the process.
- ![View debug snapshot in Visual Studio](./media/snapshot-debugger/open-snapshot-visual-studio.png)
+ :::image type="content" source="./media/snapshot-debugger/open-snapshot-visual-studio.png" alt-text="Screenshot showing the debug snapshot in Visual Studio.":::
The downloaded snapshot includes any symbol files that were found on your web application server. These symbol files are required to associate snapshot data with source code. For App Service apps, make sure to enable symbol deployment when you publish your web apps.

## How snapshots work
-The Snapshot Collector is implemented as an [Application Insights Telemetry Processor](../app/configuration-with-applicationinsights-config.md#telemetry-processors-aspnet). When your application runs, the Snapshot Collector Telemetry Processor is added to your application's telemetry pipeline.
+The Snapshot Collector is implemented as an [Application Insights Telemetry Processor](../app/configuration-with-applicationinsights-config.md#telemetry-processors-aspnet). When your application runs, the Snapshot Collector Telemetry Processor is added to your application's system-generated logs pipeline.
Each time your application calls [TrackException](../app/asp-net-exceptions.md#exceptions), the Snapshot Collector computes a Problem ID from the type of exception being thrown and the throwing method.
-Each time your application calls TrackException, a counter is incremented for the appropriate Problem ID. When the counter reaches the `ThresholdForSnapshotting` value, the Problem ID is added to a Collection Plan.
+Each time your application calls `TrackException`, a counter is incremented for the appropriate Problem ID. When the counter reaches the `ThresholdForSnapshotting` value, the Problem ID is added to a Collection Plan.
The Snapshot Collector also monitors exceptions as they're thrown by subscribing to the [AppDomain.CurrentDomain.FirstChanceException](/dotnet/api/system.appdomain.firstchanceexception) event. When that event fires, the Problem ID of the exception is computed and compared against the Problem IDs in the Collection Plan.
-If there's a match, then a snapshot of the running process is created. The snapshot is assigned a unique identifier and the exception is stamped with that identifier. After the FirstChanceException handler returns, the thrown exception is processed as normal. Eventually, the exception reaches the TrackException method again where it, along with the snapshot identifier, is reported to Application Insights.
+If there's a match, then a snapshot of the running process is created. The snapshot is assigned a unique identifier and the exception is stamped with that identifier. After the `FirstChanceException` handler returns, the thrown exception is processed as normal. Eventually, the exception reaches the `TrackException` method again where it, along with the snapshot identifier, is reported to Application Insights.
-The main process continues to run and serve traffic to users with little interruption. Meanwhile, the snapshot is handed off to the Snapshot Uploader process. The Snapshot Uploader creates a minidump and uploads it to Application Insights along with any relevant symbol (.pdb) files.
+The main process continues to run and serve traffic to users with little interruption. Meanwhile, the snapshot is handed off to the Snapshot Uploader process. The Snapshot Uploader creates a minidump and uploads it to Application Insights along with any relevant symbol (*.pdb*) files.
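The Problem ID bookkeeping described above can be sketched as follows. This is illustrative Python modeling the documented behavior, not the collector's actual (C#) implementation; the class and function names are invented for the example:

```python
from collections import Counter

class SnapshotCollectorSketch:
    """Illustrative model of the Snapshot Collector's Problem ID bookkeeping."""

    def __init__(self, threshold_for_snapshotting=1):
        self.threshold = threshold_for_snapshotting
        self.counts = Counter()       # TrackException calls per Problem ID
        self.collection_plan = set()  # Problem IDs eligible for snapshots
        self.snapshots = []           # snapshot identifiers, in order

    @staticmethod
    def problem_id(exception_type, throwing_method):
        # A Problem ID is computed from the exception type and the throwing method.
        return f"{exception_type}@{throwing_method}"

    def on_first_chance_exception(self, exception_type, throwing_method):
        # Fires when the exception is thrown, before TrackException is reached.
        pid = self.problem_id(exception_type, throwing_method)
        if pid in self.collection_plan:
            # Snapshot the process and stamp the exception with an identifier.
            snapshot_id = f"snapshot-{len(self.snapshots) + 1}"
            self.snapshots.append(snapshot_id)
            return snapshot_id
        return None

    def on_track_exception(self, exception_type, throwing_method):
        # Reached later in the pipeline; increments the Problem ID counter.
        pid = self.problem_id(exception_type, throwing_method)
        self.counts[pid] += 1
        if self.counts[pid] >= self.threshold:
            self.collection_plan.add(pid)

def throw(collector, exception_type, throwing_method):
    # One thrown exception: FirstChanceException fires, then TrackException.
    snapshot_id = collector.on_first_chance_exception(exception_type, throwing_method)
    collector.on_track_exception(exception_type, throwing_method)
    return snapshot_id
```

This ordering is why, with the default `ThresholdForSnapshotting` of 1, the first throw only adds the Problem ID to the Collection Plan and the second identical throw is the first to produce a snapshot.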
> [!TIP]
-> - A process snapshot is a suspended clone of the running process.
-> - Creating the snapshot takes about 10 to 20 milliseconds.
-> - The default value for `ThresholdForSnapshotting` is 1. This is also the minimum value. Therefore, your app has to trigger the same exception **twice** before a snapshot is created.
-> - Set `IsEnabledInDeveloperMode` to true if you want to generate snapshots while debugging in Visual Studio.
-> - The snapshot creation rate is limited by the `SnapshotsPerTenMinutesLimit` setting. By default, the limit is one snapshot every ten minutes.
-> - No more than 50 snapshots per day may be uploaded.
+
+> * A process snapshot is a suspended clone of the running process.
+> * Creating the snapshot takes about 10 to 20 milliseconds.
+> * The default value for `ThresholdForSnapshotting` is 1. This is also the minimum value. Therefore, your app has to trigger the same exception **twice** before a snapshot is created.
+> * Set `IsEnabledInDeveloperMode` to true if you want to generate snapshots while debugging in Visual Studio.
+> * The snapshot creation rate is limited by the `SnapshotsPerTenMinutesLimit` setting. By default, the limit is one snapshot every ten minutes.
+> * No more than 50 snapshots per day may be uploaded.
## Limitations
-The default data retention period is 15 days. For each Application Insights instance, a maximum number of 50 snapshots are allowed per day.
+### Data retention
+
+Debug snapshots are stored for 15 days. The default data retention policy is set on a per-application basis. If you need to increase this value, you can request an increase by opening a support case in the Azure portal. For each Application Insights instance, a maximum number of 50 snapshots are allowed per day.
### Publish symbols

The Snapshot Debugger requires symbol files on the production server to decode variables and to provide a debugging experience in Visual Studio. Version 15.2 (or above) of Visual Studio 2017 publishes symbols for release builds by default when it publishes to App Service. In prior versions, you need to add the following line to your publish profile `.pubxml` file so that symbols are published in release mode:
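As a sketch, the property in question is `ExcludeGeneratedDebugSymbol` (this is the documented setting name; confirm it against your own publish profile before relying on it):

```xml
<PropertyGroup>
  <ExcludeGeneratedDebugSymbol>False</ExcludeGeneratedDebugSymbol>
</PropertyGroup>
```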
For Azure Compute and other types, make sure that the symbol files are in the same folder of the main application .dll (typically, `wwwroot/bin`) or are available on the current path.

> [!NOTE]
-> For more information on the different symbol options that are available, see the [Visual Studio documentation](/visualstudio/ide/reference/advanced-build-settings-dialog-box-csharp?view=vs-2019&preserve-view=true#output
-). For best results, we recommend using "Full", "Portable" or "Embedded".
+> For more information on the different symbol options that are available, see the [Visual Studio documentation](/visualstudio/ide/reference/advanced-build-settings-dialog-box-csharp). For best results, we recommend that you use "Full", "Portable" or "Embedded".
### Optimized builds

In some cases, local variables can't be viewed in release builds because of optimizations that are applied by the JIT compiler. However, in Azure App Services, the Snapshot Collector can deoptimize throwing methods that are part of its Collection Plan.

> [!TIP]
-> Install the Application Insights Site Extension in your App Service to get deoptimization support.
+> Install the Application Insights Site Extension in your App Service to get de-optimization support.
## Next steps

Enable Application Insights Snapshot Debugger for your application:

* [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)

Beyond Application Insights Snapshot Debugger:
-
+
* [Set snappoints in your code](/visualstudio/debugger/debug-live-azure-applications) to get snapshots without waiting for an exception.
* [Diagnose exceptions in your web apps](../app/asp-net-exceptions.md) explains how to make more exceptions visible to Application Insights.
* [Smart Detection](../alerts/proactive-diagnostics.md) automatically discovers performance anomalies.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 08/11/2022 Last updated : 08/24/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files volumes are designed to be contained in a special purpose sub
### Supported regions
-Azure NetApp Files standard network features are supported for the following regions:
+Azure NetApp Files Standard network features are supported for the following regions:
* Australia Central * Australia Central 2
Azure NetApp Files standard network features are supported for the following reg
* North Central US * North Europe * South Central US
+* Southeast Asia
* Switzerland North * UK South * West Europe
The following table describes what's supported for each network features configuration:
| Features | Standard network features | Basic network features | ||||
-| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | [Standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) | 1000 |
+| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) | 1000 |
| Azure NetApp Files delegated subnets per VNet | 1 | 1 | | [Network Security Groups](../virtual-network/network-security-groups-overview.md) (NSGs) on Azure NetApp Files delegated subnets | Yes | No | | [User-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) (UDRs) on Azure NetApp Files delegated subnets | Yes | No |
| Dual stack (IPv4 and IPv6) VNet | No <br> (IPv4 only supported) | No <br> (IPv4 only supported) | > [!IMPORTANT]
-> Upgrade from basic to standard network feature is not currently supported.
+> Upgrade from Basic to Standard network features is not currently supported.
+
+> [!IMPORTANT]
+> Conversion between Basic and Standard networking features is not currently supported.
### Supported network topologies
The following diagram illustrates an Azure-native environment with cross-region
:::image type="content" source="../media/azure-netapp-files/azure-native-cross-region-peering.png" alt-text="Diagram depicting Azure native environment setup with cross-region VNet peering." lightbox="../media/azure-netapp-files/azure-native-cross-region-peering.png":::
-With the standard network feature, VMs are able to connect to volumes in another region via global or cross-region VNet peering. The above diagram adds a second region to the configuration in the [local VNet peering section](#vnet-peering). For VNet 4 in this diagram, an Azure NetApp Files volume is created in a delegated subnet and can be mounted on VM5 in the application subnet.
+With Standard network features, VMs are able to connect to volumes in another region via global or cross-region VNet peering. The above diagram adds a second region to the configuration in the [local VNet peering section](#vnet-peering). For VNet 4 in this diagram, an Azure NetApp Files volume is created in a delegated subnet and can be mounted on VM5 in the application subnet.
In the diagram, VM2 in Region 1 can connect to Volume 3 in Region 2. VM5 in Region 2 can connect to Volume 2 in Region 1 via VNet peering between Region 1 and Region 2.
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 04/28/2022 Last updated : 08/24/2022 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Number of volumes per subscription | 500 | Yes | | Number of volumes per capacity pool | 500 | Yes | | Number of snapshots per volume | 255 | No |
-| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | 1000 | No |
+| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No |
| Minimum size of a single capacity pool | 4 TiB | No | | Maximum size of a single capacity pool | 500 TiB | No | | Minimum size of a single volume | 100 GiB | No |
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 04/21/2022 Last updated : 08/24/2022 # SMB FAQs for Azure NetApp Files
However, you can map multiple NetApp accounts that are under the same subscripti
Both [Azure Active Directory (AD) Domain Services](../active-directory-domain-services/overview.md) and [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) are supported. You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can reside in Azure as virtual machines, or on premises via ExpressRoute or S2S VPN. Azure NetApp Files doesn't support AD join for [Azure Active Directory](https://azure.microsoft.com/resources/videos/azure-active-directory-overview/) at this time.
-If you are using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account.
+If you're using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account.
## What versions of Windows Server Active Directory are supported?
Azure NetApp Files supports [`CHANGE_NOTIFY` response](/openspecs/windows_protoc
Azure NetApp Files also supports [`LOCK` response](/openspecs/windows_protocols/ms-smb2/e215700a-102c-450a-a598-7ec2a99cd82c). This response is for the client's request that comes in the form of a [`LOCK` request](/openspecs/windows_protocols/ms-smb2/6178b960-48b6-4999-b589-669f88e9017d).
+## What network authentication methods are supported for SMB volumes in Azure NetApp Files?
+
+NTLMv2 and Kerberos network authentication methods are supported with SMB volumes in Azure NetApp Files. NTLMv1 and LanManager are disabled and are not supported.
+
## What is the password rotation policy for the Active Directory machine account for SMB volumes?

The Azure NetApp Files service has a policy that automatically updates the password on the Active Directory machine account that is created for SMB volumes. This policy has the following properties:
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 02/01/2022 Last updated : 08/23/2022 # Azure Resource Manager template specs in Bicep
-A template spec is a resource type for storing an Azure Resource Manager template (ARM template) for later deployment. This resource type enables you to share ARM templates with other users in your organization. Just like any other Azure resource, you can use Azure role-based access control (Azure RBAC) to share the template spec. You can use Azure CLI or Azure PowerShell to create template specs by providing Bicep files. The Bicep files are transpiled into ARM JSON templates before they are stored. Currently, you can't import a Bicep file from the Azure portal to create a template spec resource.
+A template spec is a resource type for storing an Azure Resource Manager template (ARM template) for later deployment. This resource type enables you to share ARM templates with other users in your organization. Just like any other Azure resource, you can use Azure role-based access control (Azure RBAC) to share the template spec. You can use Azure CLI or Azure PowerShell to create template specs by providing Bicep files. The Bicep files are transpiled into ARM JSON templates before they're stored. Currently, you can't import a Bicep file from the Azure portal to create a template spec resource.
[Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) is the resource type for template specs. It consists of a main template and any number of linked templates. Azure securely stores template specs in resource groups. Both the main template and the linked templates must be in JSON. Template Specs support [versioning](#versioning).
When designing your deployment, always consider the lifecycle of the resources a
To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
+## Required permissions
+
+To create a template spec, you need **write** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`.
+
+To deploy a template spec, you need **read** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`. You also need **write** access to any resources deployed by the template spec, and access to `Microsoft.Resources/deployments/*`.
+ ## Why use template specs? Template specs provide the following benefits:
The JSON template embedded in the Bicep file needs to make these changes:
* To access the parameters and variables defined in the Bicep file, you can directly use the parameter names and the variable names. To access the parameters and variables defined in `mainTemplate`, you still need to use the ARM JSON template syntax. For example, **'name': '[parameters(&#92;'storageAccountType&#92;')]'**. * Use the Bicep syntax to call Bicep functions. For example, **'location': resourceGroup().location**.
-The size of a template spec is limited to approximated 2 MB. If a template spec size exceeds the limit, you will get the **TemplateSpecTooLarge** error code. The error message says:
+The size of a template spec is limited to approximately 2 MB. If a template spec size exceeds the limit, you'll get the **TemplateSpecTooLarge** error code. The error message says:
```error The size of the template spec content exceeds the maximum limit. For large template specs with many artifacts, the recommended course of action is to split it into multiple template specs and reference them modularly via TemplateLinks.
When you create a template spec, you provide a version name for it. As you itera
## Use tags
-[Tags](../management/tag-resources.md) help you logically organize your resources. You can add tags to template specs by using Azure PowerShell and Azure CLI:
+[Tags](../management/tag-resources.md) help you logically organize your resources. You can add tags to template specs by using Azure PowerShell and Azure CLI. The following example shows how to specify tags when creating the template spec:
# [PowerShell](#tab/azure-powershell)
az ts create \
+The next example shows how to apply tags when updating an existing template spec:
+ # [PowerShell](#tab/azure-powershell) ```azurepowershell
az ts update \
-When creating or modifying a template spec with the version parameter specified, but without the tag/tags parameter:
-
-* If the template spec exists and has tags, but the version doesn't exist, the new version inherits the same tags as the existing template spec.
-
-When creating or modifying a template spec with both the tag/tags parameter and the version parameter specified:
-
-* If both the template spec and the version don't exist, the tags are added to both the new template spec and the new version.
-* If the template spec exists, but the version doesn't exist, the tags are only added to the new version.
-* If both the template spec and the version exist, the tags only apply to the version.
+Both the template spec and its versions can have tags. The tags are applied or inherited depending on the parameters you specify.
-When modifying a template with the tag/tags parameter specified but without the version parameter specified, the tags is only added to the template spec.
+| Template spec | Version | Version parameter | Tag parameter | Tag values |
+| - | - | -- | - | |
+| Exists | N/A | Not specified | Specified | applied to the template spec |
+| Exists | New | Specified | Not specified | inherited from the template spec to the version |
+| New | New | Specified | Specified | applied to both template spec and version |
+| Exists | New | Specified | Specified | applied to the version |
+| Exists | Exists | Specified | Specified | applied to the version |
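For example, per the last row of the table above, creating a new version of an existing template spec with both the version and tag parameters specified applies the tags only to the version. A hedged Azure CLI sketch (all names and values are placeholders; the command requires an authenticated Azure session):

```azurecli
az ts create \
  --name myTemplateSpec \
  --resource-group myRG \
  --location westus2 \
  --template-file main.bicep \
  --version "2.0" \
  --tags env=prod
```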
## Link to template specs
azure-resource-manager Microsoft Common Textbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-textbox.md
Title: TextBox UI element
-description: Describes the Microsoft.Common.TextBox UI element for Azure portal. Use for adding unformatted text.
+description: Describes the Microsoft.Common.TextBox UI element for Azure portal that's used for adding unformatted text.
-- Previously updated : 03/03/2021 -+ Last updated : 08/23/2022 # Microsoft.Common.TextBox UI element
-A user-interface (UI) element that can be used to add unformatted text. The `Microsoft.Common.TextBox` element is a single-line input field, but supports multi-line input with the `multiLine` property.
+The `TextBox` user-interface (UI) element can be used to add unformatted text. The element is a single-line input field, but supports multi-line input with the `multiLine` property.
## UI sample The `TextBox` element uses a single-line or multi-line text box.
+Example of single-line text box.
+
+Example of multi-line text box.
+ ## Schema
The examples show how to use a single-line text box and a multi-line text box.
The following example uses a text box with the [Microsoft.Solutions.ArmApiControl](microsoft-solutions-armapicontrol.md) control to check the availability of a resource name.
+In this example, when you enter a storage account name and exit the text box, the control checks if the name is valid and if it's available. If the name is invalid or already exists, an error message is displayed. A storage account name that's valid and available is shown in the output.
+ ```json
-"basics": [
- {
+{
+ "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#",
+ "handler": "Microsoft.Azure.CreateUIDef",
+ "version": "0.1.2-preview",
+ "parameters": {
+ "basics": [
+ {
"name": "nameApi", "type": "Microsoft.Solutions.ArmApiControl", "request": {
- "method": "POST",
- "path": "[concat(subscription().id, '/providers/Microsoft.Storage/checkNameAvailability?api-version=2021-04-01')]",
- "body": {
- "name": "[basics('txtStorageName')]",
- "type": "Microsoft.Storage/storageAccounts"
- }
+ "method": "POST",
+ "path": "[concat(subscription().id, '/providers/Microsoft.Storage/checkNameAvailability?api-version=2021-09-01')]",
+ "body": {
+ "name": "[basics('txtStorageName')]",
+ "type": "Microsoft.Storage/storageAccounts"
+ }
}
- },
- {
+ },
+ {
"name": "txtStorageName", "type": "Microsoft.Common.TextBox", "label": "Storage account name", "constraints": {
- "validations": [
- {
- "isValid": "[basics('nameApi').nameAvailable]",
- "message": "[basics('nameApi').message]"
- }
- ]
+ "validations": [
+ {
+ "isValid": "[basics('nameApi').nameAvailable]",
+ "message": "[basics('nameApi').message]"
+ }
+ ]
}
+ }
+ ],
+ "steps": [],
+ "outputs": {
+ "textBox": "[basics('txtStorageName')]"
}
-]
+ }
+}
```

### Multi-line
This example uses the `multiLine` property to create a multi-line text box with
- For an introduction to creating UI definitions, see [CreateUiDefinition.json for Azure managed application's create experience](create-uidefinition-overview.md). - For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).
+- To learn more about functions, see [CreateUiDefinition functions](create-uidefinition-functions.md).
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types description: Lists the Azure resource types are used to extend the capabilities of other resource types. Previously updated : 08/10/2022 Last updated : 08/24/2022 # Resource types that extend capabilities of other resources
An extension resource is a resource that adds to another resource's capabilities
* registrationAssignments
* registrationDefinitions
+## Microsoft.Management
+
+* managementGroups
+
## Microsoft.Network

* cloudServiceSlots
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.Subscription
+* aliases
* policies

## microsoft.support
azure-resource-manager Template Tutorial Use Parameter File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-parameter-file.md
Title: Tutorial - use parameter file to deploy template description: Use parameter files that contain the values to use for deploying your Azure Resource Manager template (ARM template). Previously updated : 09/10/2020 Last updated : 08/22/2022
# Tutorial: Use parameter files to deploy your ARM template
-In this tutorial, you learn how to use [parameter files](parameter-files.md) to store the values you pass in during deployment. In the previous tutorials, you used inline parameters with your deployment command. This approach worked for testing your Azure Resource Manager template (ARM template), but when automating deployments it can be easier to pass a set of values for your environment. Parameter files make it easier to package parameter values for a specific environment. In this tutorial, you'll create parameter files for development and production environments. It takes about **12 minutes** to complete.
+In this tutorial, you learn how to use [parameter files](parameter-files.md) to store the values you pass in during deployment. In the previous tutorials, you used inline parameters with your deployment command. This approach worked for testing your Azure Resource Manager template (ARM template), but when automating deployments it can be easier to pass a set of values for your environment. Parameter files make it easier to package parameter values for a specific environment. In this tutorial, you create parameter files for development and production environments. This tutorial takes about **12 minutes** to complete.
## Prerequisites We recommend that you complete the [tutorial about tags](template-tutorial-add-tags.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure Command-Line Interface (CLI). For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
-Your template has many parameters you can provide during deployment. At the end of the previous tutorial, your template looked like:
+Your template has many parameters you can provide during deployment. At the end of the previous tutorial, your template contained the following JSON:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-tags/azuredeploy.json":::
This template works well, but now you want to easily manage the parameters that
## Add parameter files
-Parameter files are JSON files with a structure that is similar to your template. In the file, you provide the parameter values you want to pass in during deployment.
+Parameter files are JSON files with a structure that's similar to your template. In the file, you provide the parameter values you want to pass in during deployment.
-Within the parameter file, you provide values for the parameters in your template. The name of each parameter in your parameter file must match the name of a parameter in your template. The name is case-insensitive but to easily see the matching values we recommend that you match the casing from the template.
+Within the parameter file, you provide values for the parameters in your template. The name of each parameter in your parameter file needs to match the name of a parameter in your template. The name is case-insensitive but to easily see the matching values we recommend that you match the casing from the template.
You don't have to provide a value for every parameter. If an unspecified parameter has a default value, that value is used during deployment. If a parameter doesn't have a default value and isn't specified in the parameter file, you're prompted to provide a value during deployment.
-You can't specify a parameter name in your parameter file that doesn't match a parameter name in the template. You get an error when unknown parameters are provided.
+You can't specify a parameter name in your parameter file that doesn't match a parameter name in the template. You get an error when you provide unknown parameters.
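The matching rules above can be sketched in a few lines of Python. The template and parameter names here are hypothetical, and this isn't how Resource Manager itself validates; it only illustrates the case-insensitive name match and the unknown-parameter error:

```python
def validate_parameter_file(template: dict, parameter_file: dict) -> list:
    """Return parameter names in the parameter file that don't match
    any parameter declared in the template (case-insensitive)."""
    declared = {name.lower() for name in template.get("parameters", {})}
    supplied = parameter_file.get("parameters", {})
    return [name for name in supplied if name.lower() not in declared]

# Hypothetical template and parameter file fragments for illustration.
template = {"parameters": {"storagePrefix": {}, "storageSKU": {}}}
param_file = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storagePrefix": {"value": "contoso"},
        "storageSku": {"value": "Standard_LRS"},   # casing differs; still matches
        "unknownParam": {"value": 42},             # would cause a deployment error
    },
}

print(validate_parameter_file(template, param_file))  # ['unknownParam']
```

Any name the function returns is one that deployment would reject as an unknown parameter.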
-In Visual Studio Code, create a new file with following content. Save the file with the name _azuredeploy.parameters.dev.json_.
+In Visual Studio Code, create a new file with the following content. Save the file with the name _azuredeploy.parameters.dev.json_.
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-tags/azuredeploy.parameters.dev.json":::
Again, create a new file with the following content. Save the file with the name
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-tags/azuredeploy.parameters.prod.json":::
-This file is your parameter file for the production environment. Notice that it uses **Standard_GRS** for the storage account, names resources with a **contoso** prefix, and sets the `Environment` tag to **Production**. In a real production environment, you would also want to use an app service with a SKU other than free, but we'll continue to use that SKU for this tutorial.
+This file is your parameter file for the production environment. Notice that it uses **Standard_GRS** for the storage account, names resources with a **contoso** prefix, and sets the `Environment` tag to **Production**. In a real production environment, you would also want to use an app service with a SKU other than free, but we use that SKU for this tutorial.
## Deploy template
As a final test of your template, let's create two new resource groups. One for
For the template and parameter variables, replace `{path-to-the-template-file}`, `{path-to-azuredeploy.parameters.dev.json}`, `{path-to-azuredeploy.parameters.prod.json}`, and the curly braces `{}` with your template and parameter file paths.
-First, we'll deploy to the dev environment.
+First, let's deploy to the dev environment.
# [PowerShell](#tab/azure-powershell)
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli
templateFile="{path-to-the-template-file}"
az deployment group create \
-Now, we'll deploy to the production environment.
+Now, we deploy to the production environment.
# [PowerShell](#tab/azure-powershell)
az deployment group create \
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources you're creating. Use the `debug` switch to get more information for debugging.
## Verify deployment
You can verify the deployment by exploring the resource groups from the Azure po
1. Sign in to the [Azure portal](https://portal.azure.com).
1. From the left menu, select **Resource groups**.
-1. You see the two new resource groups you deployed in this tutorial.
+1. You see the two new resource groups you deployed in this tutorial.
1. Select either resource group and view the deployed resources. Notice that they match the values you specified in your parameter file for that environment.

## Clean up resources
-1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field. If you've completed this series, you have three resource groups to delete - **myResourceGroup**, **myResourceGroupDev**, and **myResourceGroupProd**.
-3. Select the resource group name.
-4. Select **Delete resource group** from the top menu.
+1. From the Azure portal, select **Resource groups** from the left menu.
+1. Select the hyperlinked resource group name next to the check box. If you completed this series, you have three resource groups to delete - **myResourceGroup**, **myResourceGroupDev**, and **myResourceGroupProd**.
+1. Select the **Delete resource group** icon from the top menu.
+
+ > [!CAUTION]
+ > Deleting a resource group is irreversible.
+
+1. Type the resource group name in the pop-up window that appears and select **Delete**.
## Next steps
-Congratulations, you've finished this introduction to deploying templates to Azure. Let us know if you have any comments and suggestions in the feedback section. Thanks!
+Congratulations. You've finished this introduction to deploying templates to Azure. Let us know if you have any comments and suggestions in the feedback section.
The next tutorial series goes into more detail about deploying templates.
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
-# Embed Video Analyzer for Media widgets in your apps
+# Embed Azure Video Indexer widgets in your apps
This article shows how you can embed Azure Video Indexer widgets in your apps. Azure Video Indexer supports embedding three types of widgets into your apps: *Cognitive Insights*, *Player*, and *Editor*.
batch Simplified Compute Node Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md
This document describes forthcoming changes with how the Azure Batch service com
> Support for simplified compute node communication in Azure Batch is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Opting in isn't required at this time. However, in the future, using simplified compute node communication will be required for all Batch accounts. At that time, an official retirement notice will be provided, with an opportunity to migrate your Batch pools before that happens.
+Opting in isn't required at this time. However, in the future, using simplified compute node communication will be required and become the default for all Batch accounts.
## Supported regions
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
After you've updated the engine version for your voice model, you need to [redep
For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).

> [!NOTE]
-> Custom Neural Voice training is only available in some regions. But you can easily copy a neural voice model from these regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
+> Custom Neural Voice training is currently only available in some regions. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
## Next steps
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
Once you've created an Azure account and a Speech service subscription, you'll n
1. Select your subscription and create a speech project.
1. If you want to switch to another Speech subscription, select the **cog** icon at the top.
-> [!IMPORTANT]
-> Custom Neural Voice training is currently only available in East US, Southeast Asia, UK South, with the S0 tier. Make sure you select the right Speech resource if you would like to create a neural voice.
+> [!NOTE]
+> Custom Neural Voice training is currently only available in some regions. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
## Create a project
If you're using the old version of Custom Voice (which is scheduled to be retire
- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md) - [How to record voice samples](record-custom-voice-samples.md) - [Train your voice model](how-to-custom-voice-create-voice.md)-- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
The custom endpoint is functionally identical to the standard endpoint that's us
You can copy your voice model to another project for the same region or another region. For example, you can copy a neural voice model that was trained in one region, to a project for another region.

> [!NOTE]
-> Custom neural voice training is only available in the these regions: East US, Southeast Asia, and UK South. But you can copy a neural voice model from those regions to other regions. For more information, see the [regions for custom neural voice](regions.md#speech-service).
+> Custom Neural Voice training is currently only available in some regions. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
To copy your custom neural voice model to another project:
cognitive-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md
Examples of phrases include:
Phrase lists are simple and lightweight: - **Just-in-time**: A phrase list is provided just before starting the speech recognition, eliminating the need to train a custom model. -- **Lightweight**: You don't need a large data set. Simply provide a word or phrase to give it importance.
+- **Lightweight**: You don't need a large data set. Simply provide a word or phrase to boost its recognition.
-You can use phrase lists with the [Speech Studio](speech-studio-overview.md), [Speech SDK](quickstarts/setup-platform.md), or [Speech Command Line Interface (CLI)](spx-overview.md). The Batch transcription API does not support phrase lists.
+You can use phrase lists with the [Speech Studio](speech-studio-overview.md), [Speech SDK](quickstarts/setup-platform.md), or [Speech Command Line Interface (CLI)](spx-overview.md).
There are some situations where [training a custom model](custom-speech-overview.md) that includes phrases is likely the best option to improve accuracy. In these cases you would not use a phrase list: - If you need to use a large list of phrases. A phrase list shouldn't have more than 500 phrases. -- If you need a phrase list for languages that are not currently supported. -- If you use a custom endpoint. Phrase lists can't be used with custom endpoints.
+- If you need a phrase list for languages that are not currently supported.
+- If you need to do batch transcription. The Batch transcription API does not support phrase lists.
+
+> [!TIP]
+> You can use phrase lists with both standard and custom speech.
## Try it in Speech Studio
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
Previously updated : 10/14/2021 Last updated : 08/24/2022 recommendations: false keywords: on-premises, Docker, container, identify
See the list of [languages supported](../language-support.md) when using Transla
> [!IMPORTANT] >
-> * Translator container is in gated preview and to use it you must submit an online request, and have it approved. See [Request approval to run container](#request-approval-to-run-container) below for more information.
-> * Translator container supports limited features compared to the cloud offerings. *See* [**Container translate methods**](translator-container-supported-parameters.md) for more details.
+> * Translator container is in gated preview and to use it you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container) below.
+> * Translator container supports limited features compared to the cloud offerings. For more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
<!-- markdownlint-disable MD033 -->
See the list of [languages supported](../language-support.md) when using Transla
To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-You'll also need the following:
+You'll also need to have:
| Required | Purpose |
|--|--|
| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
| Docker Engine | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource with region other than 'global', associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
-|||
|Optional|Purpose|
|--|--|
-|Azure CLI (command-line interface) |<ul><li> The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It is available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell.</li></ul> |
-|||
+|Azure CLI (command-line interface) |<ul><li> The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It's available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell.</li></ul> |
+ ## Required elements
All Cognitive Services containers require three primary elements:
## Container requirements and recommendations
-The following table describes the minimum and recommended specifications for Translator containers. At least 2 gigabytes (GB) of memory are required and each CPU must be at least 2.6 gigahertz (GHz) or faster. and memory, in gigabytes (GB), to allocate for each Translator. The following table describes the minimum and recommended allocation of resources for each Translator container.
+The following table describes the minimum and recommended CPU cores and memory to allocate for the Translator container.
| Container | Minimum | Recommended | Language Pair |
|--|--|--|--|
| Translator connected | 2 core, 2-GB memory | 4 core, 8-GB memory | 4 |
-|||
-For every language pair, it's recommended to have 2 GB of memory. By default, the Translator offline container has four language pairs. The core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+* Each core must be at least 2.6 gigahertz (GHz) or faster.
+
+* For every language pair, it's recommended to have 2 GB of memory. By default, the Translator offline container has four language pairs.
+
+* The core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
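The sizing guidance above can be sketched as a small helper that turns a language-pair count into the `--cpus` and `--memory` flags. The function name and the 2-GB-per-pair arithmetic are an illustration of the stated rule, not an official sizing tool:

```python
def docker_resource_flags(language_pairs: int, cpus: int) -> str:
    """Build docker run resource flags from the guidance above:
    roughly 2 GB of memory per language pair, 2 GB minimum."""
    memory_gb = max(2, 2 * language_pairs)
    return f"--cpus {cpus} --memory {memory_gb}g"

# The default offline container ships with four language pairs.
print(docker_resource_flags(language_pairs=4, cpus=4))  # --cpus 4 --memory 8g
```

For the default four language pairs this reproduces the recommended 4-core, 8-GB allocation from the table.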
> [!NOTE] >
You can have this container and a different Azure Cognitive Services container r
## Query the container's Translator endpoint
- The container provides a REST-based Translator endpoint API. Here is an example request:
+ The container provides a REST-based Translator endpoint API. Here's an example request:
```curl
curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS"
print(json.dumps(
Launch Visual Studio, and create a new console application. Edit the `*.csproj` file to add the `<LangVersion>7.1</LangVersion>` node, which specifies C# 7.1. Add the [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json/) NuGet package, version 11.0.2.
-In the `Program.cs` replace all the existing code with the following:
+In `Program.cs`, replace all the existing code with the following script:
```csharp using Newtonsoft.Json;
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
The following example shows the formatting of the `docker run` command you'll us
| Placeholder | Value | Format or example |
|-|-|-|
| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
-| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted. | `/volume/license:/path/to/license/directory` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted. | `/host/license:/path/to/license/directory` |
| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
| `{API_KEY}` | The key for your Text Analytics resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
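One way to see how these placeholders relate: `{LICENSE_MOUNT}` is a host-path-to-container-path pair passed to `docker run -v`, while `{CONTAINER_LICENSE_DIRECTORY}` is only the container-side half of that pair, which the `Mounts:License` setting needs. A small sketch, using the table's hypothetical example paths:

```python
def build_mount_args(host_dir: str, container_dir: str) -> tuple:
    """Return the -v volume argument and the container-side license path.

    The volume argument pairs a host directory with a container directory;
    Mounts:License only needs the container-side directory.
    """
    license_mount = f"{host_dir}:{container_dir}"
    return license_mount, container_dir

# Hypothetical paths matching the table's examples.
volume_arg, mounts_license = build_mount_args("/host/license", "/path/to/license/directory")
print(volume_arg)      # /host/license:/path/to/license/directory
print(mounts_license)  # /path/to/license/directory
```

Passing the full host:container pair to `Mounts:License` is the mistake the corrected command below avoids.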
```bash
-docker run {IMAGE} --rm -it -p 5000:5000 \
+docker run --rm -it -p 5000:5000 \
-v {LICENSE_MOUNT} \
+{IMAGE} \
eula=accept \ billing={ENDPOINT_URI} \ apikey={API_KEY} \ DownloadLicense=True \
-Mounts:License={LICENSE_MOUNT} \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
``` After you've configured the container, use the next section to run the container in your environment with the license, and appropriate memory and CPU allocations.
| Placeholder | Value | Format or example |
|-|-|-|
| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` |
| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
-| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/volume/license:/path/to/license/directory` |
+| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/host/license:/path/to/license/directory` |
| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
```bash
-docker run {IMAGE} --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
-v {LICENSE_MOUNT} \
-v {OUTPUT_PATH} \
+{IMAGE} \
eula=accept \
-Mounts:License={LICENSE_MOUNT}
-Mounts:Output={OUTPUT_PATH}
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
```

### Additional parameters and commands
If you're using the [Translator container](../translator/containers/translator-h
#### Speech-to-text and Neural text-to-speech containers
-The [speech-to-text](../speech-service/speech-container-howto.md?tabs=stt) and [neural text-to-speech](../speech-service/speech-container-howto.md?tabs=ntts) containers provide a default directory for writing the license file and billing log at runtime. When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container.
+The [speech-to-text](../speech-service/speech-container-howto.md?tabs=stt) and [neural text-to-speech](../speech-service/speech-container-howto.md?tabs=ntts) containers provide a default directory for writing the license file and billing log at runtime. The default directories are `/license` and `/output`, respectively.
+
+When you're mounting these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directory is set to `user:group nonroot:nonroot` before running the container.
Below is a sample command to set file/directory ownership.
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
Some of the benefits of confidential VMs include:
- Robust hardware-based isolation between virtual machines, hypervisor, and host management code. - Customizable attestation policies to ensure the host's compliance before deployment.-- Cloud-based full-disk encryption before the first boot.
+- Cloud-based Confidential OS disk encryption before the first boot.
- VM encryption keys that the platform or the customer (optionally) owns and manages. - Secure key release with cryptographic binding between the platform's successful attestation and the VM's encryption keys. - Dedicated virtual [Trusted Platform Module (TPM)](/windows/security/information-protection/tpm/trusted-platform-module-overview) instance for attestation and protection of keys and secrets in the virtual machine. - Secure boot capability similar to [Trusted launch for Azure VMs](../virtual-machines/trusted-launch.md)
-## Full-disk encryption
+## Confidential OS disk encryption
Azure confidential VMs offer a new and enhanced disk encryption scheme. This scheme protects all critical partitions of the disk. It also binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. These encryption keys can securely bypass Azure components, including the hypervisor and host operating system. To minimize the attack potential, a dedicated and separate cloud service also encrypts the disk during the initial creation of the VM. If the compute platform is missing critical settings for your VM's isolation, then during boot [Azure Attestation](https://azure.microsoft.com/services/azure-attestation/) won't attest to the platform's health. It will prevent the VM from starting. For example, this scenario happens if you haven't enabled SEV-SNP.
-Full-disk encryption is optional, because this process can lengthen the initial VM creation time. You can choose between:
+Confidential OS disk encryption is optional, because this process can lengthen the initial VM creation time. You can choose between:
+ - A confidential VM with Confidential OS disk encryption before VM deployment that uses platform-managed keys (PMK) or a customer-managed key (CMK).
+ - A confidential VM without Confidential OS disk encryption before VM deployment.
-For further integrity and protection, confidential VMs offer [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot) by default.
+For further integrity and protection, confidential VMs offer [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot) by default when confidential OS disk encryption is selected.
With Secure Boot, trusted publishers must sign OS boot components (including the boot loader, kernel, and kernel drivers). All compatible confidential VM images support Secure Boot.

### Encryption pricing differences
Confidential VMs *don't support*:
- Azure Backup - Azure Site Recovery - Azure Dedicated Host -- Microsoft Azure Virtual Machine Scale Sets with full OS disk encryption enabled
+- Microsoft Azure Virtual Machine Scale Sets with Confidential OS disk encryption enabled
- Limited Azure Compute Gallery support - Shared disks - Ultra disks
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
It's not possible to resize a non-confidential VM to a confidential VM.
### Disk encryption
-OS images for confidential VMs have to meet certain security and compatibility requirements. Qualified images support the secure mounting, attestation, optional [full-disk encryption](confidential-vm-overview.md#full-disk-encryption), and isolation from underlying cloud infrastructure. These images include:
+OS images for confidential VMs have to meet certain security and compatibility requirements. Qualified images support the secure mounting, attestation, optional [confidential OS disk encryption](confidential-vm-overview.md#confidential-os-disk-encryption), and isolation from underlying cloud infrastructure. These images include:
- Ubuntu 20.04 Gen 2 - Windows Server 2019 Gen 2
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
Remove-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/00000000-00
Scheduled exports are affected by the time and day of week when you initially create the export. When you create a scheduled export, every later run uses the same frequency. For example, a month-to-date cost export set to a daily frequency runs every day. Similarly, a weekly export runs every week on the same day it was scheduled. The exact delivery time of the export isn't guaranteed, and the exported data is available within four hours of run time.
+Exports are scheduled using Coordinated Universal Time (UTC). The Exports API always uses and displays UTC.
+ - When you create an export using the [Exports API](/rest/api/cost-management/exports/create-or-update?tabs=HTTP), specify the `recurrencePeriod` in UTC time. The API doesnΓÇÖt convert your local time to UTC. - Example - A weekly export is scheduled on Friday, August 19 with `recurrencePeriod` set to 2:00 PM. The API receives the input as 2:00 PM UTC, Friday, August 19. The weekly export will be scheduled to run every Friday. - When you create an export in the Azure portal, its start date time is automatically converted to the equivalent UTC time.
cost-management-billing Buy Vm Software Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/buy-vm-software-reservation.md
Previously updated : 03/17/2022 Last updated : 08/24/2022
You can buy virtual machine software reservation in the Azure portal. To buy a r
## Buy a virtual machine software reservation
-1. Select your desired plan from Azure Marketplace that has reservation pricing.
-2. Browse to Reservations blade, click on Add, select Virtual Machine software reservation, it will navigate to Azure Marketplace to show plans that have reservation pricing.
-3. Select the desired Virtual machine software reservation that you want to buy.
-Any virtual machine software reservation that matches the attributes of what you buy gets a discount. The actual number of deployments that get the discount depend on the scope and quantity selected.
-3. Select a subscription. It's used to pay for the plan.
-The subscription payment method is charged the upfront costs for the reservation. To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement.
- - For an enterprise subscription, these reservation purchase charges are not deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance. The charges are billed to the subscription's credit card or invoice payment method.
+There are two ways to purchase a virtual machine software reservation:
+
+**Option 1**
+
+1. Navigate to Reservations, select **Add**, and then select **Virtual Machine**. The Azure Marketplace shows offers that have reservation pricing.
+2. Select the desired Virtual machine software plan that you want to buy.
+   Any virtual machine software reservation that matches the attributes of what you buy gets a discount. The actual number of deployments that get the discount depends on the scope and quantity selected.
+3. Select a subscription. It's used to pay for the plan.
+ The subscription payment method is charged the upfront costs for the reservation. To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement.
+ - For an enterprise subscription, the reservation purchase charges aren't deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance. The charges are billed to the subscription's credit card or invoice payment method.
+ - For an individual subscription with pay-as-you-go pricing, the charges are billed to the subscription's credit card or invoice payment method.
+4. Select a scope. The scope can cover one subscription or multiple subscriptions (using a shared scope).
+ - Single subscription - The plan discount is applied to matching usage in the subscription.
+ - Shared - The plan discount is applied to matching instances in any subscription in your billing context. For enterprise customers, the billing context is the enrollment and includes all subscriptions in the enrollment. For individual plan with pay-as-you-go pricing customers, the billing context is all individual plans with pay-as-you-go pricing subscriptions created by the account administrator.
+ - Management group - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.
+ - Single resource group - Applies the reservation discount to the matching resources in the selected resource group only.
+5. Select a product to choose the VM size and the image type. The discount applies to matching resources, with instance size flexibility turned on.
+6. Select a one-year or three-year term.
+7. Choose a quantity, which is the number of prepaid VM instances that can get the billing discount.
+8. Add the product to the cart, review, and purchase.
+
+**Option 2**
+
+1. Browse to Marketplace to view offers that have reservation pricing. Apply a **Pricing** filter for **Reservation**.
+2. Select the desired Virtual machine software offer that you want to buy. Then select the desired plan.
+   Any virtual machine software reservation that matches the attributes of what you buy gets a discount. The actual number of deployments that get the discount depends on the scope and quantity selected.
+3. Select a subscription. It's used to pay for the plan.
+ The subscription payment method is charged the upfront costs for the reservation. To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement.
+ - For an enterprise subscription, these reservation purchase charges aren't deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance. The charges are billed to the subscription's credit card or invoice payment method.
- For an individual subscription with pay-as-you-go pricing, the charges are billed to the subscription's credit card or invoice payment method.
4. Select a scope. The scope can cover one subscription or multiple subscriptions (using a shared scope).
   - Single subscription - The plan discount is applied to matching usage in the subscription.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
You can create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device by using the Azure portal, templates, and Azure PowerShell cmdlets, and via the Azure CLI or Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge Pro GPU device by using the Azure portal.

> [!IMPORTANT]
-> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication click [here](/articles/active-directory/authentication/tutorial-enable-azure-mfa.md)
+> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication, see [Enable Azure AD Multi-Factor Authentication](/azure/active-directory/authentication/tutorial-enable-azure-mfa.md).
## VM deployment workflow
databox-online Azure Stack Edge Gpu Manage Virtual Machine Network Interfaces Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md
Previously updated : 08/02/2021 Last updated : 08/19/2022 # Customer intent: As an IT admin, I need to understand how to manage network interfaces on an Azure Stack Edge Pro device so that I can use it to run applications using Edge compute before sending it to Azure.<!--Does "it" refer to the device or to the virtual NICs?-->
This article explains how to add a network interface to an existing VM, change e
A network interface enables a virtual machine (VM) running on your Azure Stack Edge Pro device to communicate with Azure and on-premises resources. When you enable a port for compute network on your device, a virtual switch is created on that network interface. This virtual switch is then used to deploy compute workloads such as VMs or containerized applications on your device.
-Your device supports only one virtual switch but multiple virtual network interfaces. Each network interface on your VM has a static or a dynamic IP address assigned to it. With IP addresses assigned to multiple network interfaces on your VM, certain capabilities are enabled on your VM. For example, your VM can host multiple websites or services with different IP addresses and SSL certificates on a single server. A VM on your device can serve as a network virtual appliance, such as a firewall or a load balancer. <!--Is it possible to do that on ASE?-->
+Multiple network interfaces can be associated with one virtual switch. Each network interface on your VM has a static or a dynamic IP address assigned to it. With IP addresses assigned to multiple network interfaces on your VM, certain capabilities are enabled on your VM. For example, your VM can host multiple websites or services with different IP addresses and SSL certificates on a single server. A VM on your device can serve as a network virtual appliance, such as a firewall or a load balancer. <!--Is it possible to do that on ASE?-->
<!--There is a limit to how many virtual network interfaces can be created on the virtual switch on your device. See the Azure Stack Edge Pro limits article for details.-->
Before you begin to manage VMs on your device via the Azure portal, make sure th
1. You have access to an activated Azure Stack Edge Pro GPU device. You've enabled a network interface for compute on your device. This action creates a virtual switch on that network interface on your VM.
1. In the local UI of your device, go to **Compute**. Select the network interface that you will use to create a virtual switch.
- > [!IMPORTANT]
- > You can only configure one port for compute.
- 1. Enable compute on the network interface. Azure Stack Edge Pro GPU creates and manages a virtual switch corresponding to that network interface. 1. You have at least one VM deployed on your device. To create this VM, see the instructions in [Deploy VM on your Azure Stack Edge Pro via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Az
1. Verify the authorization was created successfully and store ExpressRoute Direct authorization into a variable: ```powershell
- $ERDirectAuthorization = Get-AzExpressRoutePortAuthorization -ExpressRoutePortObject $ERDirect
+ $ERDirectAuthorization = Get-AzExpressRoutePortAuthorization -ExpressRoutePortObject $ERPort -Name $Name
$ERDirectAuthorization ```
ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Az
CircuitResourceUri :on ```
-1. Redeem the authorization to create the ExpressRoute Direct circuit with the following command:
+1. Redeem the authorization to create the ExpressRoute Direct circuit in a different subscription or Azure Active Directory tenant with the following command:
```powershell
- New-AzExpressRouteCircuit -Name $Name -ResourceGroupName $RGName -ExpressRoutePort $ERDirect -Location $Location -SkuTier $SkuTier -SkuFamily $SkuFamily -BandwidthInGbps $BandwidthInGbps -Authorization $ERDirect.Authorization
+    New-AzExpressRouteCircuit -Name $Name -ResourceGroupName $RGName -Location $Location -SkuTier $SkuTier -SkuFamily $SkuFamily -BandwidthInGbps $BandwidthInGbps -AuthorizationKey $ERDirectAuthorization.AuthorizationKey
``` ## Next steps
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX, Deutsche Telekom AG, Equinix | | **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | Supported | Colt, Equinix, InterCloud, Megaport, Swisscom | | **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | Supported | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon, Zayo |
-| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | Supported | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel |
+| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | Supported | China Mobile International, China Telecom Global, Equinix, iAdvantage, Megaport, PCCW Global Limited, SingTel |
| **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | Supported | NTT Communications, Telin, XL Axiata | | **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | Supported | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Orange, Teraco, Vodacom | | **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom |
firewall Quick Create Multiple Ip Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-bicep.md
+
+ Title: 'Quickstart: Create an Azure Firewall with multiple public IP addresses - Bicep'
+description: In this quickstart, you learn how to use a Bicep file to create an Azure Firewall with multiple public IP addresses.
++++++ Last updated : 08/11/2022++
+# Quickstart: Create an Azure Firewall with multiple public IP addresses - Bicep
+
+In this quickstart, you use a Bicep file to deploy an Azure Firewall with multiple public IP addresses from a public IP address prefix. The deployed firewall has a NAT rule collection with rules that allow RDP connections to two Windows Server 2019 virtual machines.
++
+For more information about Azure Firewall with multiple public IP addresses, see [Deploy an Azure Firewall with multiple public IP addresses using Azure PowerShell](deploy-multi-public-ip-powershell.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates an Azure Firewall with two public IP addresses, along with the necessary resources to support the Azure Firewall.
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/fw-docs-qs).
++
+Multiple Azure resources are defined in the Bicep file:
+
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
+- [**Microsoft.Network/publicIPPrefix**](/azure/templates/microsoft.network/publicipprefixes)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines)
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts)
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces)
+- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls)
+- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables)
+
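+As a rough sketch of how the firewall's DNAT rules are expressed (property names assumed from the `Microsoft.Network/azureFirewalls` template schema; all values are placeholders, not the quickstart template's actual contents), a rule that forwards RDP to a backend VM looks like this inside the firewall resource's `properties`:
+
+```bicep
+natRuleCollections: [
+  {
+    name: 'rdpNatRuleCollection'
+    properties: {
+      priority: 100
+      action: {
+        type: 'Dnat'
+      }
+      rules: [
+        {
+          name: 'rdpToVm1'
+          protocols: [
+            'TCP'
+          ]
+          sourceAddresses: [
+            '*'
+          ]
+          destinationAddresses: [
+            firewallPublicIp.properties.ipAddress // one of the firewall's public IPs
+          ]
+          destinationPorts: [
+            '3389'
+          ]
+          translatedAddress: '10.0.2.4' // private IP of the backend VM
+          translatedPort: '3389'
+        }
+      ]
+    }
+  }
+]
+```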
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-username>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-username\>** with the admin username for the backend server.
+
+   You will be prompted to enter the admin password.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+In the Azure portal, review the deployed resources. Note the firewall public IP addresses.
+
+Use Remote Desktop Connection to connect to the firewall public IP addresses. A successful connection demonstrates that the firewall's NAT rules allow the connection to the backend servers.
+
+## Clean up resources
+
+When you no longer need the resources that you created with the firewall, delete the resource group. This removes the firewall and all the related resources.
+
+To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name "exampleRG"
+```
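+
+If you deployed with Azure CLI instead, you can delete the resource group with the equivalent command (same placeholder resource group name as above):
+
+```azurecli-interactive
+az group delete --name exampleRG --yes
+```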
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Deploy and configure Azure Firewall in a hybrid network using the Azure portal](tutorial-hybrid-portal.md)
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
Title: Apache Hadoop components and versions - Azure HDInsight 4.0
-description: Learn about the Apache Hadoop components and versions in Azure HDInsight 4.0.
+ Title: Open-source components and versions - Azure HDInsight 4.0
+description: Learn about the open-source components and versions in Azure HDInsight 4.0.
Previously updated : 06/10/2022 Last updated : 08/24/2022 # HDInsight 4.0 component versions
-In this article, you learn about the Apache Hadoop environment components and versions in Azure HDInsight 4.0.
+In this article, you learn about the open-source components and versions in Azure HDInsight 4.0.
-## Apache components available with HDInsight version 4.0
+## Open-source components available with HDInsight version 4.0
-The OSS component versions associated with HDInsight 4.0 are listed in the following table.
+The open-source component versions associated with HDInsight 4.0 are listed in the following table.
| Component | HDInsight 4.0 | |||
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
Title: Data conversion for Azure API for FHIR description: Use the $convert-data endpoint and customize-converter templates to convert data in Azure API for FHIR. -+ Last updated 06/03/2022-+ # Converting your data to FHIR for Azure API for FHIR
The `$convert-data` custom endpoint in the FHIR service is meant for data conver
## Use the $convert-data endpoint

The `$convert-data` operation is integrated into the FHIR service to run as part of the service. After enabling `$convert-data` in your server, you can make API calls to the server to convert your data into FHIR:

`https://<<FHIR service base URL>>/$convert-data`

### Parameter Resource
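As a hedged illustration of the request shape (the truncated HL7v2 message and template names below are placeholders, not working values), the call passes a FHIR `Parameters` resource in the request body:

```json
{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "inputData", "valueString": "MSH|^~\\&|..." },
    { "name": "inputDataType", "valueString": "Hl7v2" },
    { "name": "templateCollectionReference", "valueString": "microsofthealth/fhirconverter:default" },
    { "name": "rootTemplate", "valueString": "ADT_A01" }
  ]
}
```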
For more information about assigning roles in the Azure portal, see [Azure built
You can register the ACR server using the Azure portal, or using CLI. #### Registering the ACR server using Azure portal+ Browse to the **Artifacts** blade under **Data transformation** in your Azure API for FHIR instance. You'll see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance. #### Registering the ACR server using CLI+ You can register up to 20 ACR servers in the Azure API for FHIR. Install Azure Health Data Services CLI from Azure PowerShell if needed:
az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io" --resource-gr
```azurecli az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io fhiracr2020.azurecr.io" --resource-group fhir-test --resource-name fhirtest2021 ```+ ### Configure ACR firewall Select **Networking** of the Azure storage account from the portal. :::image type="content" source="media/convert-data/networking-container-registry.png" alt-text=" Screen image of the container registry."::: - Select **Selected networks**. Under the **Firewall** section, specify the IP address in the **Address range** box. Add IP ranges to allow access from the internet or your on-premises networks.
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/de-identified-export.md
Title: Exporting de-identified data for Azure API for FHIR description: This article describes how to set up and use de-identified export for Azure API for FHIR-+ Previously updated : 06/03/2022- Last updated : 08/24/2022+ # Exporting de-identified data for Azure API for FHIR
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Last updated 06/07/2022
-# Known issues: Azure Health Data Services
+# Known issues: Azure Health Data Services
-This article describes the currently known issues with Azure Health Data Services and its different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+This article describes the currently known issues with Azure Health Data Services and its different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
Refer to the table below to find details about resolution dates or possible workarounds. For more information about the different feature enhancements and bug fixes in Azure Health Data Services, see [Release notes: Azure Health Data Services](release-notes.md). - ## FHIR service |Issue | Date discovered | Status | Date resolved |
iot-central Howto Create Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-private-endpoint.md
Title: Create a private endpoint for IoT Central | Microsoft Docs
+ Title: Create a private endpoint for Azure IoT Central | Microsoft Docs
description: Learn how to create and configure a private endpoint for your IoT Central application. A private endpoint lets you securely connect your devices to IoT Central over a private virtual network.
To create a private endpoint on an existing IoT Central application:
1. Select the **Private endpoint connections** tab, and then select **+ Private endpoint**.
-1. On the **Basics** tab, enter add a name and select a region for your private endpoint. Then select **Next: Resource**.
+1. On the **Basics** tab, enter a name and select a region for your private endpoint. Then select **Next: Resource**.
1. The **Resource** tab is auto-populated for you. Select **Next: Virtual Network**. 1. On the **Virtual Network** tab, select the **Virtual network** and **Subnet** where you want to deploy your private endpoint.
-1. On the same tab, in the **Private DNS integration** section, select **Yes** for **Integrate with private DNS zone**. The private DNS resolves all the required endpoints to private IP addresses in your virtual network.
+1. On the same tab, in the **Private IP configuration** section, select **Dynamically allocate IP address**.
+
+1. Select **Next: DNS**.
+
+1. On the **DNS** tab, select **Yes** for **Integrate with private DNS zone.** The private DNS resolves all the required endpoints to private IP addresses in your virtual network.
:::image type="content" source="media/howto-create-private-endpoint/private-dns-integrationΓÇï.png" alt-text="Screenshot from Azure portal that shows private D N S integration.":::
In some situations, you may not be able to integrate with the private DNS zone o
1. In the Azure portal, navigate to your private endpoint, and select **DNS configuration**. On this page, you can find the required information for the IP address mapping to the DNS name. > [!WARNING] > This information lets you populate your custom DNS server with the necessary records. If at all possible, you should integrate with the private DNS Zones of the virtual network and not configure your own custom DNS server. Private endpoints for IoT Central applications differ from other Azure PaaS services. In some situations, such as IoT Central autoscaling, IoT Central scales out the number of IoT Hubs accessible through the private endpoint. If you choose to populate your own custom DNS server, it's your responsibility to update the DNS records whenever IoT Central autoscales, and later remove records when the number of IoT hubs scales in. ## Restrict public access
-To restrict public access for your devices to IoT Central, turn off access from public endpoints. After you turn off public access, devices can't connect to IoT Central from public networks and must use a private endpoint:
+To restrict public access for your devices to IoT Central, turn off access from public endpoints. After you turn off public access, devices can't connect to IoT Central from public networks, and must use a private endpoint:
1. In the Azure portal, navigate to your IoT Central application and then select **Networking**.
iot-hub-device-update Device Update Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-proxy-updates.md
With Proxy updates, you can (1) target over-the-air updates to multiple componen
* Targeting specific update files to different apps/components on the device
* Targeting specific update files to sensors connected to an IoT device. These sensors could be connected to the IoT device over a network protocol (for example, USB, CANbus, and so on).
-## Pre-requisite
+## Prerequisite
In order to update a component or components that are connected to a target IoT device, the device builder must register a custom **Component Enumerator Extension** that is built specifically for their IoT devices. The Component Enumerator Extension is required so that the Device Update Agent can map a **'child update'** to the specific component, or group of components, that the update is intended for. See [Contoso Component Enumerator](components-enumerator.md) for an example of how to implement and register a custom Component Enumerator extension.

> [!NOTE]
iot-hub Iot Hub Create Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md
This article describes how to create and manage IoT hubs using the [Azure portal](https://portal.azure.com).
-To use the steps in this tutorial, you need an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- ## Create an IoT hub [!INCLUDE [iot-hub-include-create-hub](../../includes/iot-hub-include-create-hub.md)]
To delete an IoT hub, find the IoT hub you want to delete, then choose **Delete*
Follow these links to learn more about managing Azure IoT Hub: * [Message routing with IoT Hub](tutorial-routing.md)
-* [Monitor your IoT hub](monitor-iot-hub.md)
+* [Monitor your IoT hub](monitor-iot-hub.md)
iot-hub Iot Hub Create Use Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-use-iot-toolkit.md
[!INCLUDE [iot-hub-resource-manager-selector](../../includes/iot-hub-resource-manager-selector.md)]
-This article shows you how to use the [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) to create an Azure IoT hub.
+This article shows you how to use the [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) to create an Azure IoT hub. You can create one without an existing IoT project or create one from an existing IoT project.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-To complete this article, you need the following:
--- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Prerequisites
- [Visual Studio Code](https://code.visualstudio.com/)
- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) installed for Visual Studio Code.
+## Create an IoT hub without an IoT Project
-## Create an IoT hub and device in an IoT Project
-
-The following steps show how you can create an IoT Hub and register a device to the hub within an IoT Project in Visual Studio Code.
-
-Instead of provisioning an Azure IoT Hub and device from the Azure portal. You can do it in the VS Code without leaving the development environment. The steps in this section show how to do this.
-
-1. In the new opened project window, click `F1` to open the command palette, type and select **Azure IoT Device Workbench: Provision Azure Services...**. Follow the step-by-step guide to finish provisioning your Azure IoT Hub and creating the IoT Hub device.
+The following steps show how to create an IoT Hub without an IoT Project in Visual Studio Code (VS Code).
- ![Provision command](media/iot-hub-create-use-iot-toolkit/provision.png)
+1. In VS Code, open the **Explorer** view.
- > [!NOTE]
- > If you have not signed in Azure. Follow the pop-up notification for signing in.
+2. At the bottom of the Explorer, expand the **Azure IoT Hub** section.
-1. Select the subscription you want to use.
+ :::image type="content" source="./media/iot-hub-create-use-iot-toolkit/azure-iot-hub-devices.png" alt-text="A screenshot that shows the location of the Azure IoT Hub section in VS Code." lightbox="./media/iot-hub-create-use-iot-toolkit/azure-iot-hub-devices.png":::
- ![Select sub](media/iot-hub-create-use-iot-toolkit/select-subscription.png)
+3. Select **Create IoT Hub** from the list in the **Azure IoT Hub** section.
-1. Then select and existing resource group or create a new [resource group](../azure-resource-manager/management/overview.md#terminology).
+ :::image type="content" source="./media/iot-hub-create-use-iot-toolkit/create-iot-hub.png" alt-text="A screenshot that shows the location of the Create IoT Hub list item in VS Code." lightbox="./media/iot-hub-create-use-iot-toolkit/create-iot-hub.png":::
- ![Select resource group](media/iot-hub-create-use-iot-toolkit/select-resource-group.png)
+5. If you're not signed in already, a pop-up appears in the bottom-right corner so you can sign in to Azure.
-1. In the resource group you specified, follow the prompts to select an existing IoT Hub or create a new Azure IoT Hub.
+6. From the command palette at the top of VS Code, select your Azure subscription.
- ![Select IoT Hub steps](media/iot-hub-create-use-iot-toolkit/iot-hub-provision.png)
+7. Select your resource group.
- ![Select IoT Hub](media/iot-hub-create-use-iot-toolkit/select-iot-hub.png)
+8. Select a location.
- ![Selected IoT Hub](media/iot-hub-create-use-iot-toolkit/iot-hub-selected.png)
+9. Select a pricing tier.
-1. In the output window, you will see the Azure IoT Hub provisioned.
+10. Enter a globally unique name for your IoT hub, then press **Enter**.
- ![IoT Hub Provisioned](media/iot-hub-create-use-iot-toolkit/iot-hub-provisioned.png)
+11. Wait a few minutes until the IoT hub is created. You'll see a confirmation in the output console.
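+
+If you prefer the command line, the same hub can be created with Azure CLI (a hedged sketch; the hub name, resource group, and SKU are placeholders you'd replace with your own values):
+
+```azurecli
+az iot hub create --name my-unique-hub-name --resource-group exampleRG --sku S1
+```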
-1. Select or create a new IoT Hub Device in the Azure IoT Hub you provisioned.
+## Create an IoT hub and device in an existing IoT project
- ![Select IoT Device steps](media/iot-hub-create-use-iot-toolkit/iot-device-provision.png)
+The following steps show how to create an IoT Hub and register a device with the hub from an existing IoT project in Visual Studio (VS) Code.
- ![Select IoT Device Provisioned](media/iot-hub-create-use-iot-toolkit/select-iot-device.png)
+This method allows you to provision in VS Code without leaving your development environment.
-1. Now you have Azure IoT Hub provisioned and device created in it. Also the device connection string will be saved in VS Code.
+1. In the newly opened project window, press `F1` to open the command palette, then type and select **Azure IoT Device Workbench: Provision Azure Services...**.
- ![Provision done](media/iot-hub-create-use-iot-toolkit/provision-done.png)
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/provision.png" alt-text="A screenshot that shows how to open the command palette in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/provision.png":::
+ > [!NOTE]
+ > If you haven't signed in to Azure, follow the pop-up notification to sign in.
+1. Select the subscription you want to use.
-## Create an IoT hub without an IoT Project
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/select-subscription.png" alt-text="A screenshot that shows how to choose your Azure subscription in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/select-subscription.png":::
-The following steps show how you can create an IoT Hub without an IoT Project in Visual Studio Code.
+1. Select an existing resource group or create a new [resource group](../azure-resource-manager/management/overview.md#terminology).
-1. In Visual Studio Code, open the **Explorer** view.
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/select-resource-group.png" alt-text="A screenshot that shows how to choose a resource group or create a new one in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/select-resource-group.png":::
-2. At the bottom of the Explorer, expand the **Azure IoT Hub** section.
+1. In the resource group you specified, follow the prompts to select an existing IoT Hub or create a new Azure IoT Hub.
- ![Expand Azure IoT Hub Devices](./media/iot-hub-create-use-iot-toolkit/azure-iot-hub-devices.png)
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/iot-hub-provision.png" alt-text="A screenshot that shows the first prompt in choosing an existing IoT Hub in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/iot-hub-provision.png":::
-3. Click on the **...** in the **Azure IoT Hub** section header. If you don't see the ellipsis, hover over the header.
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/select-iot-hub.png" alt-text="A screenshot that shows the second prompt in choosing an existing IoT Hub in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/select-iot-hub.png":::
-4. Choose **Create IoT Hub**.
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/iot-hub-selected.png" alt-text="A screenshot that shows the third prompt in choosing an existing IoT Hub in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/iot-hub-selected.png":::
-5. A pop-up will show in the bottom-right corner to let you sign in to Azure for the first time.
+1. In the output window, you'll see the Azure IoT Hub provisioned.
-6. Select Azure subscription.
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/iot-hub-provisioned.png" alt-text="A screenshot that shows the output window in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/iot-hub-provisioned.png":::
-7. Select resource group.
+1. Select or create a new IoT Hub Device in the Azure IoT Hub you provisioned.
-8. Select location.
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/iot-device-provision.png" alt-text="A screenshot that shows the fourth prompt in choosing an existing IoT Hub in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/iot-device-provision.png":::
-9. Select pricing tier.
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/select-iot-device.png" alt-text="A screenshot that shows an example of an existing IoT Hub in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/select-iot-device.png":::
-10. Enter a globally unique name for your IoT Hub.
+1. Now you have an Azure IoT Hub provisioned and a device created in it. The device connection string will be saved in VS Code.
-11. Wait a few minutes until the IoT Hub is created.
+ :::image type="content" source="media/iot-hub-create-use-iot-toolkit/provision-done.png" alt-text="A screenshot that shows IoT Hub details in the output window in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/provision-done.png":::
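The saved device connection string follows the usual `HostName=...;DeviceId=...;SharedAccessKey=...` shape. As a minimal sketch, you can split it into its fields like this (the sample values are placeholders, not real credentials):

```python
def parse_device_connection_string(conn_str: str) -> dict:
    """Split 'Key=Value;Key=Value' segments into a dict.

    partition() keeps everything after the first '=', so base64 keys
    that end in '=' survive intact.
    """
    fields = {}
    for segment in conn_str.split(";"):
        key, _, value = segment.partition("=")
        fields[key] = value
    return fields

# Placeholder values for illustration only.
sample = "HostName=contoso-hub.azure-devices.net;DeviceId=device-01;SharedAccessKey=c2FtcGxlLWtleQ=="
fields = parse_device_connection_string(sample)
print(fields["HostName"], fields["DeviceId"])
```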
## Next steps
-Now you have deployed an IoT hub using the Azure IoT Tools for Visual Studio Code. To explore further, check out the following articles:
+Now that you've deployed an IoT hub using the Azure IoT Tools for Visual Studio Code, explore these articles:
* [Use the Azure IoT Tools for Visual Studio Code to send and receive messages between your device and an IoT Hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
This region doesn't affect how the traffic will be routed. If a home region goes
* East Asia * US Gov Virginia * UK West
-* Uk South
+* UK South
> [!NOTE] > You can only deploy your cross-region load balancer or Public IP in Global tier in one of the regions above.
logic-apps Block Connections Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/block-connections-across-tenants.md
Title: Block access from other tenants
-description: Block connections shared by other tenants in Azure Logic Apps.
+ Title: Block access to and from other tenants
+description: Block connections between your tenant and other Azure Active Directory (Azure AD) tenants in Azure Logic Apps.
ms.suite: integration Last updated 08/01/2022
-# Customer intent: As a developer, I want to prevent shared connections with other Azure Active Directory tenants.
+# Customer intent: As a developer, I want to prevent access to and from other Azure Active Directory tenants.
-# Block connections shared from other tenants in Azure Logic Apps (Preview)
+# Block connections to and from other tenants in Azure Logic Apps (Preview)
> [!NOTE] > This capability is in preview and is subject to the
After the policy takes effect in a region, test the policy. You can try immediat
## Next steps
-[Block connector usage in Azure Logic Apps](block-connections-connectors.md)
+[Block connector usage in Azure Logic Apps](block-connections-connectors.md)
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 05/01/2022 Last updated : 08/19/2022
In the [Azure portal](https://portal.azure.com), add one or more authorization p
![Provide information for authorization policy](./media/logic-apps-securing-a-logic-app/set-up-authorization-policy.png)
- | Property | Required | Description |
- |-|-|-|
- | **Policy name** | Yes | The name that you want to use for the authorization policy |
- | **Claims** | Yes | The claim types and values that your logic app accepts from inbound calls. The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). Here are the available claim types: <p><p>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <p><p>At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/azuread-dev/v1-authentication-scenarios.md#claims-in-azure-ad-security-tokens). You can also specify your own claim type and value. |
+ | Property | Required | Type | Description |
+ |-|-||-|
+ | **Policy name** | Yes | String | The name that you want to use for the authorization policy |
+ | **Claims** | Yes | String | The claim types and values that your workflow accepts from inbound calls. Here are the available claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. <br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/azuread-dev/v1-authentication-scenarios.md#claims-in-azure-ad-security-tokens). You can also specify your own claim type and value. |
||| 1. To add another claim, select from these options:
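As a sketch, the claim requirements in the table above can be expressed as a small validator. The JWT claim name `iss` and the check logic are assumptions of this example, not part of the Azure AD policy API:

```python
# Azure AD issuer prefixes listed in the table above.
AZURE_AD_ISSUER_PREFIXES = (
    "https://sts.windows.net/",
    "https://login.microsoftonline.com/",
)

def validate_policy_claims(claims: dict) -> list:
    """Return a list of problems; an empty list means the claims pass this sketch."""
    problems = []
    issuer = claims.get("iss")
    if not isinstance(issuer, str):
        problems.append("the Claims list must include the Issuer claim")
    elif not issuer.startswith(AZURE_AD_ISSUER_PREFIXES):
        problems.append("the Issuer value must start with an Azure AD issuer URL")
    for name, value in claims.items():
        # Each claim must be a single string value, not an array of values.
        if not isinstance(value, str):
            problems.append(f"claim '{name}' must be a single string value")
    return problems
```

For example, a claim with **Role** set to `["Developer", "Program Manager"]` would be flagged, while **Role** set to `"Developer"` passes.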
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
Last updated 05/24/2022
# Data administration+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
+> * [v1](./v1/concept-network-data-access.md)
+> * [v2 (current version)](how-to-administrate-data-authentication.md)
+ Learn how to manage data access and how to authenticate in Azure Machine Learning [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] [!INCLUDE [CLI v2](../../includes/machine-learning-CLI-v2.md)]
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
# Read and write data in a job
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/how-to-train-with-datasets.md)
+> * [v2 (current version)](how-to-read-write-data-v2.md)
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] [!INCLUDE [CLI v2](../../includes/machine-learning-CLI-v2.md)]
machine-learning Concept Network Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-network-data-access.md
Last updated 11/19/2021
# Network data access with Azure Machine Learning studio
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> * [v1](concept-network-data-access.md)
+> * [v2 (current version)](../how-to-administrate-data-authentication.md)
++ Data access is complex and it's important to recognize that there are many pieces to it. For example, accessing data from Azure Machine Learning studio is different than using the SDK. When using the SDK on your local development environment, you're directly accessing data in the cloud. When using studio, you aren't always directly accessing the data store from your client. Studio relies on the workspace to access data on your behalf. > [!IMPORTANT]
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-datasets.md
-+ Last updated 10/21/2021
# Train models with Azure Machine Learning datasets
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](how-to-train-with-datasets.md)
+> * [v2 (current version)](../how-to-read-write-data-v2.md)
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] In this article, you learn how to work with [Azure Machine Learning datasets](/python/api/azureml-core/azureml.core.dataset%28class%29) to train machine learning models. You can use datasets in your local or remote compute target without worrying about connection strings or data paths.
marketplace Pc Saas Fulfillment Operations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-operations-api.md
description: Learn how to use the Operations APIs, which are part of the SaaS Fu
Previously updated : 03/07/2022 Last updated : 08/24/2022
This article describes version 2 of the SaaS fulfillment operations APIs.
+Operations are useful for responding to requests that come through the webhook as part of ChangePlan, ChangeQuantity, and Reinstate actions. They give you the opportunity to accept or reject a request by patching the webhook operation with Success or Failure using the APIs below.
+
+This applies only to webhook events, such as ChangePlan, ChangeQuantity, and Reinstate, that need an ACK. No action is needed from the independent software vendor (ISV) for Renew, Suspend, and Unsubscribe events because they're notify-only events.
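As an illustration, acknowledging an operation boils down to a PATCH against the fulfillment endpoint with a `Success` or `Failure` status. The sketch below only builds the request pieces; the endpoint shape and `api-version` value are assumptions of this example, and sending the request with an `Authorization: Bearer <token>` header is left to your HTTP client:

```python
import json

API_VERSION = "2018-08-31"  # assumed fulfillment API version for this sketch

def build_operation_patch(subscription_id: str, operation_id: str, status: str):
    """Return (url, body) for acknowledging a pending webhook operation."""
    if status not in ("Success", "Failure"):
        raise ValueError("status must be 'Success' or 'Failure'")
    url = (
        "https://marketplaceapi.microsoft.com/api/saasSubscriptions/"
        f"{subscription_id}/operations/{operation_id}?api-version={API_VERSION}"
    )
    return url, json.dumps({"status": status})

url, body = build_operation_patch("<subscription-id>", "<operation-id>", "Success")
```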
+ ## List outstanding operations Get list of the pending operations for the specified SaaS subscription. The publisher should acknowledge returned operations by calling the [Operation Patch API](#update-the-status-of-an-operation).
marketplace Pc Saas Fulfillment Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-webhook.md
description: Learn how to implement a webhook on the SaaS service by using the f
Previously updated : 06/14/2022 Last updated : 08/24/2022
When creating a transactable SaaS offer in Partner Center, the partner provides
* ChangeQuantity * Renew * Suspend
- * Unsubscribe
+ * Unsubscribe (notify only, no ACK needed)
* When SaaS subscription is in *Suspended* status: * Reinstate
- * Unsubscribe
+ * Unsubscribe (notify only, no ACK needed)
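A tiny helper capturing which actions from the lists above require an ACK and which are notify-only (the action names mirror the lists; the function itself is an illustrative sketch):

```python
# Per the lists above: these actions expect the publisher to patch the
# operation with Success or Failure...
ACK_REQUIRED = {"ChangePlan", "ChangeQuantity", "Reinstate"}
# ...while these are notify-only and need no ISV action.
NOTIFY_ONLY = {"Renew", "Suspend", "Unsubscribe"}

def ack_required(action: str) -> bool:
    """True when the webhook action must be acknowledged via the operations API."""
    if action in ACK_REQUIRED:
        return True
    if action in NOTIFY_ONLY:
        return False
    raise ValueError(f"unknown webhook action: {action}")
```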
The publisher must implement a webhook in the SaaS service to keep the SaaS subscription status consistent with the Microsoft side. The SaaS service is required to call the Get Operation API to validate and authorize the webhook call and payload data before taking action based on the webhook notification. The publisher should return HTTP 200 to Microsoft as soon as the webhook call is processed. This value acknowledges that the webhook call has been received successfully by the publisher.
The publisher must implement a webhook in the SaaS service to keep the SaaS subs
*Webhook payload example of unsubscribe event:*
+This is a notify-only event. There's no need to send an ACK for this event.
+ ```json { "id": "<guid>",
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-new.md
Previously updated : 08/01/2022 Last updated : 08/24/2022 # What's new in the Microsoft commercial marketplace
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | | |
+| Offers | ISVs can now publish 1-year and 3-year prices for their Virtual Machine plans to let customers save money when they commit to a long-term agreement. To learn more, see [Plan a virtual machine offer](azure-vm-plan-pricing-and-availability.md#configure-reservation-pricing-optional). | 2022-08-24 |
| Offers | Software as a service (SaaS) plans now support 2-year and 3-year billing term with upfront, monthly or annually payment options. To learn more, see [Plan a SaaS offer for the commercial marketplace](plan-saas-offer.md#saas-billing-terms-and-payment-options). | 2022-08-01 | | Offers | ISVs can now offer custom prices, terms, conditions, and pricing for a specific customer through private offers. See [ISV to customer private offers](isv-customer.md) and the [FAQ](isv-customer-faq.yml). | 2022-04-06 | | Offers | Publishers can now [change transactable offer and plan pricing](price-changes.md) without having to discontinue an offer and recreate it with new pricing (also see [this FAQ](price-changes-faq.yml)). | 2022-03-30 |
migrate How To Delete Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-delete-project.md
Title: Delete an Azure Migrate project description: In this article, learn how you can delete an Azure Migrate project by using the Azure portal.--++ ms. Last updated 10/22/2019
migrate Hyper V Migration Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/hyper-v-migration-architecture.md
Title: How does Hyper-V migration work in Azure Migrate? description: Learn about Hyper-V migration with Azure Migrate --++ ms. Last updated 11/19/2019
migrate Migrate Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-services-overview.md
Title: About Azure Migrate description: Learn about the Azure Migrate service.--++ ms. Last updated 04/15/2020
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
Title: Support for Hyper-V migration in Azure Migrate description: Learn about support for Hyper-V migration with Azure Migrate.--++ ms. Last updated 04/15/2020
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
Title: Azure Migrate support matrix description: Provides a summary of support settings and limitations for the Azure Migrate service.--++ ms. Last updated 07/23/2020
migrate Migrate V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-v1.md
Title: Work with the previous version of Azure Migrate description: Describes how to work with the previous version of Azure Migrate.--++ ms. Last updated 9/23/2021
migrate Prepare Isv Movere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-isv-movere.md
Title: Prepare Azure Migrate to work with an ISV tool/Movere description: This article describes how to prepare Azure Migrate to work with an ISV tool or Movere, and then how to start using the tool. --++ ms. Last updated 06/10/2020
migrate Resources Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/resources-faq.md
Title: Azure Migrate FAQ description: Get answers to common questions about the Azure Migrate service.--++ ms. Last updated 04/15/2020
migrate Troubleshoot General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-general.md
Title: Troubleshoot Azure Migrate issues | Microsoft Docs description: Provides an overview of known issues in the Azure Migrate service, as well as troubleshooting tips for common errors.--++ ms.
migrate Troubleshoot Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-project.md
Title: Troubleshoot Azure Migrate projects description: Helps you to troubleshoot issues with creating and managing Azure Migrate projects.--++ ms. Last updated 01/01/2020
mysql Quickstart Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-terraform.md
Previously updated : 5/27/2022 Last updated : 8/23/2022 # Quickstart: Use Terraform to create an Azure Database for MySQL - Flexible Server
Article tested with the following Terraform and Terraform provider versions:
[!INCLUDE [About Azure Database for MySQL - Flexible Server](../includes/azure-database-for-mysql-flexible-server-abstract.md)]
-In this article, you learn how to deploy an Azure MySQL Flexible Server Database in a virtual network (VNet) using Terraform.
+This article shows how to use Terraform to deploy an Azure MySQL Flexible Server Database in a virtual network (VNet).
+
+In this article, you learn how to:
> [!div class="checklist"]
In this article, you learn how to deploy an Azure MySQL Flexible Server Database
1. Create a file named `main.tf` and insert the following code:
- [!code-terraform[master](../../../terraform_samples/quickstart/201-mysql-fs-db/main.tf)]
+ [!code-terraform[master](~/terraform_samples/quickstart/201-mysql-fs-db/main.tf)]
1. Create a file named `mysql-fs-db.tf` and insert the following code:
- [!code-terraform[master](../../../terraform_samples/quickstart/201-mysql-fs-db/mysql-fs-db.tf)]
+ [!code-terraform[master](~/terraform_samples/quickstart/201-mysql-fs-db/mysql-fs-db.tf)]
1. Create a file named `variables.tf` and insert the following code:
- [!code-terraform[master](../../../terraform_samples/quickstart/201-mysql-fs-db/variables.tf)]
+ [!code-terraform[master](~/terraform_samples/quickstart/201-mysql-fs-db/variables.tf)]
-1. Create a file named `output.tf` and insert the following code:
+1. Create a file named `outputs.tf` and insert the following code:
- [!code-terraform[master](../../../terraform_samples/quickstart/201-mysql-fs-db/output.tf)]
+ [!code-terraform[master](~/terraform_samples/quickstart/201-mysql-fs-db/outputs.tf)]
## Initialize Terraform
orbital Overview Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview-analytics.md
Title: What is Azure Orbital Analytics? description: Azure Orbital Analytics are Azure capabilities that allow you to discover and distribute the most valuable insights from your spaceborne data.-+ Previously updated : 08/08/2022- Last updated : 08/18/2022+ # What is Azure Orbital Analytics?
Azure Orbital Analytics are Azure capabilities using spaceborne data and AI that
## What it provides
-Azure Orbital Analytics provides the ability to downlink spaceborne data from Azure Orbital Ground Station (AOGS), first- or third-party archives, or customer-acquired data directly into Azure. This data is efficiently stored using Azure Data Platform components. From there, you can convert raw spaceborne sensor data into analysis-ready data using Azure Orbital Analytics processing pipelines.
+Azure Orbital Analytics provides the ability to downlink spaceborne data from Azure Orbital Ground Station (AOGS), first- or third-party archives, or customer-acquired data directly into Azure. This data is efficiently stored using Azure Data and Storage services. From there, you can convert raw spaceborne sensor data into analysis-ready data using Azure Orbital Analytics processing pipelines.
## Integrations
-Derive insights on data by applying AI models, integrating applications, and more. Partner AI models and Microsoft tools extract the highest precision results. Finally, deliver data to various endpoints such as Microsoft Teams, Power Platform, or other open-source locations. Azure Orbital Analytics enables scenarios including land classification, asset monitoring, object detection, and more.
+Derive insights from data by applying AI models, integrating applications, and more. Partner AI models and Microsoft tools extract the highest-precision results. Finally, deliver data to destinations such as Microsoft Teams and Power Platform, or process it using open-source tools. Azure Orbital Analytics enables scenarios including land classification, asset monitoring, object detection, and more.
## Partnerships
-Azure Orbital Analytics is the pathway between satellite operators and Microsoft customers. Partnerships with Airbus, Blackshark, and Orbital Insight enable information extraction and publishing to Esri's ArcGIS workflows.
+Azure Orbital Analytics is the pathway between satellite operators and Microsoft customers. Partnerships with [Airbus](https://www.airbus.com/en), [Blackshark.ai](https://blackshark.ai/technology/), and [Orbital Insight](https://orbitalinsight.com/) enable information extraction and publishing to Esri's ArcGIS workflows.
Orbital Analytics for Azure Synapse applies artificial intelligence over satellite imagery at scale using Azure resources. ## Next steps - [Geospatial reference architecture](./geospatial-reference-architecture.md)-- [Spaceborne data analysis with Azure Synapse Analytics](/azure/architecture/industries/aerospace/geospatial-processing-analytics)
+- [Spaceborne data analysis with Azure Synapse Analytics](/azure/architecture/industries/aerospace/geospatial-processing-analytics)
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
Title: Create Dynatrace for Azure (preview) resource - Azure partner solutions
+ Title: Create Dynatrace for Azure resource - Azure partner solutions
description: This article describes how to use the Azure portal to create an instance of Dynatrace. Previously updated : 06/07/2022 Last updated : 08/24/2022 # QuickStart: Get started with Dynatrace
-In this quickstart, you create a new instance of Dynatrace for Azure (preview). You can either create a new Dynatrace environment or [link to an existing Dynatrace environment](dynatrace-link-to-existing.md#link-to-existing-dynatrace-environment).
+In this quickstart, you create a new instance of Dynatrace for Azure. You can either create a new Dynatrace environment or [link to an existing Dynatrace environment](dynatrace-link-to-existing.md#link-to-existing-dynatrace-environment).
When you use the integrated Dynatrace experience in Azure portal, the following entities are created and mapped for monitoring and billing purposes.
partner-solutions Dynatrace How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-configure-prereqs.md
Title: Configure pre-deployment to use Dynatrace with Azure (preview) - Azure partner solutions
+ Title: Configure pre-deployment to use Dynatrace with Azure - Azure partner solutions
description: This article describes how to complete the prerequisites for Dynatrace on the Azure portal. Previously updated : 06/07/2022 Last updated : 08/24/2022
This article describes the prerequisites that must be completed before you creat
## Access control
-To set up the Dynatrace for Azure (preview), you must have **Owner** or **Contributor** access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md) before starting the setup.
+To set up the Dynatrace for Azure, you must have **Owner** or **Contributor** access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md) before starting the setup.
## Add enterprise application
partner-solutions Dynatrace How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md
Title: Manage your Dynatrace for Azure (preview) integration - Azure partner solutions
+ Title: Manage your Dynatrace for Azure integration - Azure partner solutions
description: This article describes how to manage Dynatrace on the Azure portal. - ++ Previously updated : 06/07/2022 Last updated : 08/24/2022 # Manage the Dynatrace integration with Azure
-This article describes how to manage the settings for your Dynatrace for Azure (preview).
+This article describes how to manage the settings for Dynatrace for Azure.
## Resource overview
partner-solutions Dynatrace Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md
Title: Linking to an existing Dynatrace for Azure (preview) resource - Azure partner solutions
+ Title: Linking to an existing Dynatrace for Azure resource - Azure partner solutions
description: This article describes how to use the Azure portal to link to an instance of Dynatrace. Previously updated : 06/07/2022 Last updated : 08/24/2022
Last updated 06/07/2022
In this quickstart, you link an Azure subscription to an existing Dynatrace environment. After you link to the Dynatrace environment, you can monitor the linked Azure subscription and the resources in that subscription using the Dynatrace environment.
-When you use the integrated experience for Dynatrace in the Azure (preview) portal, your billing and monitoring for the following entities is tracked in the portal.
+When you use the integrated experience for Dynatrace in the Azure portal, your billing and monitoring for the following entities is tracked in the portal.
:::image type="content" source="media/dynatrace-link-to-existing/dynatrace-entities-linking.png" alt-text="Flowchart showing three entities: subscription 1 connected to subscription 1 and Dynatrace S A A S.":::
partner-solutions Dynatrace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md
Title: Dynatrace for Azure (preview) overview - Azure partner solutions
+ Title: Dynatrace for Azure overview - Azure partner solutions
description: Learn about using the Dynatrace Cloud-Native Observability Platform in the Azure Marketplace. Previously updated : 06/07/2022 Last updated : 08/24/2022
Last updated 06/07/2022
Dynatrace is a monitoring solution that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities in Azure.
-Dynatrace for Azure (preview) offering in the Azure Marketplace enables you to create and manage Dynatrace environments using the Azure portal with a seamlessly integrated experience. This enables you to use Dynatrace as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement, all the way to configuration and management.
+The Dynatrace for Azure offering in the Azure Marketplace enables you to create and manage Dynatrace environments from the Azure portal with a seamlessly integrated experience. You can use Dynatrace as a monitoring solution for your Azure workloads through a streamlined workflow, from procurement through configuration and management.
You can create and manage the Dynatrace resources using the Azure portal through a resource provider named `Dynatrace.Observability`. Dynatrace owns and runs the software as a service (SaaS) application including the Dynatrace environments created through this experience.
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
Title: Troubleshooting Dynatrace for Azure (preview) - Azure partner solutions
+ Title: Troubleshooting Dynatrace for Azure - Azure partner solutions
description: This article provides information about troubleshooting Dynatrace for Azure Previously updated : 06/07/2022 Last updated : 08/24/2022 # Troubleshoot Dynatrace for Azure
-This article describes how to contact support when working with a Dynatrace for Azure (preview) resource. Before contacting support, see [Fix common errors](#fix-common-errors).
+This article describes how to contact support when working with a Dynatrace for Azure resource. Before contacting support, see [Fix common errors](#fix-common-errors).
## Contact support
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Title: Offerings from partners - Azure partner solutions description: Learn about solutions offered by partners on Azure.++ Previously updated : 06/07/2022- Last updated : 08/24/2022 + # Extend Azure with solutions from partners
Partner solutions are available through the Marketplace.
| [Datadog](./datadog/overview.md) | Monitor your servers, clouds, metrics, and apps in one place. | | [Elastic](./elastic/overview.md) | Monitor the health and performance of your Azure environment. | | [Logz.io](./logzio/overview.md) | Monitor the health and performance of your Azure environment. |
-| [Dynatrace for Azure (preview)](./dynatrace/dynatrace-overview.md) | Use Dynatrace for Azure (preview) for monitoring your workflows using the Azure portal. |
+| [Dynatrace for Azure](./dynatrace/dynatrace-overview.md) | Use Dynatrace for Azure for monitoring your workloads using the Azure portal. |
| [NGINX for Azure (preview)](./nginx/nginx-overview.md) | Use NGINX for Azure (preview) as a reverse proxy within your Azure environment. |
postgresql Quickstart App Stacks Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-csharp.md
recommendations: false Previously updated : 08/11/2022 Last updated : 08/24/2022 # C# app to connect and query Hyperscale (Citus)
namespace Driver
} } ```
+## App retry during database request failures
++
+```csharp
+using System;
+using System.Data;
+using System.Threading;
+using System.Text;
+using Npgsql;
+
+namespace Driver
+{
+ public class Reconnect
+ {
+ static string connStr = new NpgsqlConnectionStringBuilder("Server = <host name>; Database = citus; Port = 5432; User Id = citus; Password = {Your Password}; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50;TrustServerCertificate = true").ToString();
+ static string executeRetry(string sql, int retryCount)
+ {
+ for (int i = 0; i < retryCount; i++)
+ {
+ try
+ {
+ using (var conn = new NpgsqlConnection(connStr))
+ {
+ conn.Open();
+ DataTable dt = new DataTable();
+ using (var _cmd = new NpgsqlCommand(sql, conn))
+ {
+ NpgsqlDataAdapter _dap = new NpgsqlDataAdapter(_cmd);
+ _dap.Fill(dt);
+ conn.Close();
+ if (dt != null)
+ {
+ if (dt.Rows.Count > 0)
+ {
+ StringBuilder sb = new StringBuilder();
+
+ for (int k = 0; k < dt.Rows.Count; k++)
+ {
+ for (int j = 0; j < dt.Columns.Count; j++)
+ {
+ sb.Append(dt.Rows[k][j] + ",");
+ }
+ sb.Remove(sb.Length - 1, 1);
+ sb.Append("\n");
+ }
+ return sb.ToString();
+ }
+ }
+ }
+ }
+ return null;
+ }
+ catch (Exception e)
+ {
+ Thread.Sleep(60000);
+ Console.WriteLine(e.Message);
+ }
+ }
+ return null;
+ }
+ static void Main(string[] args)
+ {
+ string result = executeRetry("select 1",5);
+ Console.WriteLine(result);
+ }
+ }
+}
+```
## Next steps
postgresql Quickstart App Stacks Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-java.md
recommendations: false Previously updated : 08/11/2022 Last updated : 08/24/2022 # Java app to connect and query Hyperscale (Citus)
Create a `src/main/resources/application.properties` file, and add:
``` properties
driver.class.name=org.postgresql.Driver
-url=jdbc:postgresql://<host>:5432/citus?ssl=true&sslmode=require
-user=citus
-password=<password>
+db.url=jdbc:postgresql://<host>:5432/citus?ssl=true&sslmode=require
+db.username=citus
+db.password=<password>
```

Replace \<host\> with the host name from the connection string that you gathered previously. Replace \<password\> with the password that you set for the database.
public class DButil {
datasource.setPassword(properties.getProperty(DB_PASSWORD)); datasource.setMinimumIdle(100); datasource.setMaximumPoolSize(1000000000);
- datasource.setAutoCommit(false);
+ datasource.setAutoCommit(true);
datasource.setLoginTimeout(3); } catch (IOException | SQLException e) { e.printStackTrace();
Executing the `main` class should now produce the following output:
The following code is an example for copying in-memory data to table. ```java
-private static void inMemory(Connection connection) throws SQLException,IOException {
- log.info("Copying in-memory data into table");
- String[] input = {"0,Target,Sunnyvale,California,94001"};
-
- Connection unwrap = connection.unwrap(Connection.class);
- BaseConnection connSec = (BaseConnection) unwrap;
-
- CopyManager copyManager = new CopyManager((BaseConnection) connSec);
- String copyCommand = "COPY pharmacy FROM STDIN with csv";
-
- for (String var : input)
+private static void inMemory(Connection connection) throws SQLException,IOException
{
- Reader reader = new StringReader(var);
+    log.info("Copying in-memory data into table");
+
+ final List<String> rows = new ArrayList<>();
+ rows.add("0,Target,Sunnyvale,California,94001");
+ rows.add("1,Apollo,Guntur,Andhra,94003");
+
+ final BaseConnection baseConnection = (BaseConnection) connection.unwrap(Connection.class);
+ final CopyManager copyManager = new CopyManager(baseConnection);
+
+ // COPY command can change based on the format of rows. This COPY command is for above rows.
+ final String copyCommand = "COPY pharmacy FROM STDIN with csv";
+
+ try (final Reader reader = new StringReader(String.join("\n", rows))) {
        copyManager.copyIn(copyCommand, reader);
    }
-    copyManager.copyIn(copyCommand);
}
```
Executing the main class should now produce the following output:
[INFO ] Closing database connection ```
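The same idea — joining the rows with newlines and streaming them to `COPY ... FROM STDIN` — can be sketched in Python with psycopg2 (the `pharmacy` table and a live connection are assumed, so the actual copy call is shown only as a comment):

```python
import io

# Rows in the same CSV shape as the sample above.
rows = [
    "0,Target,Sunnyvale,California,94001",
    "1,Apollo,Guntur,Andhra,94003",
]

# COPY ... FROM STDIN reads newline-separated CSV records.
payload = io.StringIO("\n".join(rows))
print(payload.getvalue())

# With a live psycopg2 cursor `cur`, the payload would be streamed as:
# cur.copy_expert("COPY pharmacy FROM STDIN WITH CSV", payload)
```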
+## App retry during database request failures
++
+```java
+package test.crud;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.util.logging.Logger;
+import com.zaxxer.hikari.HikariDataSource;
+
+public class DemoApplication
+{
+ private static final Logger log;
+
+ static
+ {
+ System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
+ log = Logger.getLogger(DemoApplication.class.getName());
+ }
+ private static final String DB_USERNAME = "citus";
+ private static final String DB_PASSWORD = "<Your Password>";
+ private static final String DB_URL = "jdbc:postgresql://<Server Name>:5432/citus?sslmode=require";
+ private static final String DB_DRIVER_CLASS = "org.postgresql.Driver";
+ private static HikariDataSource datasource;
+
+ private static String executeRetry(String sql, int retryCount) throws InterruptedException
+ {
+ Connection con = null;
+ PreparedStatement pst = null;
+ ResultSet rs = null;
+ for (int i = 1; i <= retryCount; i++)
+ {
+ try
+ {
+ datasource = new HikariDataSource();
+ datasource.setDriverClassName(DB_DRIVER_CLASS);
+ datasource.setJdbcUrl(DB_URL);
+ datasource.setUsername(DB_USERNAME);
+ datasource.setPassword(DB_PASSWORD);
+ datasource.setMinimumIdle(10);
+ datasource.setMaximumPoolSize(1000);
+ datasource.setAutoCommit(true);
+ datasource.setLoginTimeout(3);
+ log.info("Connecting to the database");
+ con = datasource.getConnection();
+ log.info("Connection established");
+ log.info("Read data");
+ pst = con.prepareStatement(sql);
+ rs = pst.executeQuery();
+ StringBuilder builder = new StringBuilder();
+ int columnCount = rs.getMetaData().getColumnCount();
+ while (rs.next())
+ {
+ for (int j = 0; j < columnCount;)
+ {
+ builder.append(rs.getString(j + 1));
+ if (++j < columnCount)
+ builder.append(",");
+ }
+ builder.append("\r\n");
+ }
+ return builder.toString();
+ }
+            catch (Exception e)
+            {
+                // Close the pool created for this attempt before retrying,
+                // so each retry doesn't leak a HikariDataSource.
+                if (datasource != null)
+                    datasource.close();
+                Thread.sleep(60000);
+                System.out.println(e.getMessage());
+            }
+ }
+ return null;
+ }
+
+ public static void main(String[] args) throws Exception
+ {
+ String result = executeRetry("select 1", 5);
+ System.out.print(result);
+ }
+}
+```
+ ## Next steps [!INCLUDE[app-stack-next-steps](includes/app-stack-next-steps.md)]
postgresql Quickstart App Stacks Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-nodejs.md
recommendations: false Previously updated : 08/18/2022 Last updated : 08/24/2022 # Node.js app to connect and query Hyperscale (Citus)
const pool = new Pool({
max: 300, connectionTimeoutMillis: 5000,
- host: 'c.citustest.postgres.database.azure.com',
+ host: '<host>',
port: 5432, user: 'citus',
- password: 'Password123$',
+ password: '<your password>',
database: 'citus', ssl: true, });
async function importInMemoryDatabase() {
})(); ```
+## App retry during database request failures
++
+```javascript
+const { Pool } = require('pg');
+const { sleep } = require('sleep');
+
+const pool = new Pool({
+ host: '<host>',
+ port: 5432,
+ user: 'citus',
+ password: '<your password>',
+ database: 'citus',
+ ssl: true,
+ connectionTimeoutMillis: 0,
+ idleTimeoutMillis: 0,
+ min: 10,
+ max: 20,
+});
+
+(async function() {
+  const res = await executeRetry('select nonexistent_thing;', 5);
+ console.log(res);
+ process.exit(res ? 0 : 1);
+})();
+
+async function executeRetry(sql,retryCount)
+{
+ for (let i = 0; i < retryCount; i++) {
+ try {
+      const result = await pool.query(sql);
+ return result;
+ } catch (err) {
+ console.log(err.message);
+ sleep(60);
+ }
+ }
+
+ // didn't succeed after all the tries
+ return null;
+}
+```
+ ## Next steps [!INCLUDE[app-stack-next-steps](includes/app-stack-next-steps.md)]
postgresql Quickstart App Stacks Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-python.md
recommendations: false Previously updated : 08/11/2022 Last updated : 08/24/2022 # Python app to connect and query Hyperscale (Citus)
with conn.cursor() as cur:
conn.commit() conn.close() ```
+## App retry during database request failures
++
+```python
+import psycopg2
+import time
+from psycopg2 import pool
+
+host = "<host>"
+dbname = "citus"
+user = "citus"
+password = "{your password}"
+sslmode = "require"
+
+conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(
+ host, user, dbname, password, sslmode)
+postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20, conn_string)
+
+def executeRetry(query, retryCount):
+    conn = None
+    for x in range(retryCount):
+        try:
+            if (postgreSQL_pool):
+                # Use getconn() to get a connection from the connection pool
+                conn = postgreSQL_pool.getconn()
+                cursor = conn.cursor()
+                cursor.execute(query)
+                rows = cursor.fetchall()
+                # Return the connection to the pool before returning results
+                postgreSQL_pool.putconn(conn)
+                return rows
+        except Exception as err:
+            print(err)
+            if conn:
+                postgreSQL_pool.putconn(conn)
+                conn = None
+            time.sleep(60)
+    return None
+
+print(executeRetry("select 1", 5))
+```
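The samples above sleep a fixed 60 seconds between attempts. A common refinement is exponential backoff with jitter; a minimal sketch (the `base` and `cap` values are illustrative, not from the original sample):

```python
import random

def backoff_delays(retry_count, base=1.0, cap=60.0):
    """Yield one delay (in seconds) per retry attempt: base * 2**attempt,
    capped at `cap`, with full jitter to de-synchronize concurrent clients."""
    for attempt in range(retry_count):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

# Example: pair each delay with time.sleep(delay) inside the retry loop.
for delay in backoff_delays(5):
    print(round(delay, 2))
```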
## Next steps
postgresql Quickstart App Stacks Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-ruby.md
recommendations: false Previously updated : 08/11/2022 Last updated : 08/24/2022 # Ruby app to connect and query Hyperscale (Citus)
ensure
connection.close if connection end ```
+## App retry during database request failures
++
+```ruby
+require 'pg'
+
+def executeretry(sql,retryCount)
+ begin
+ for a in 1..retryCount do
+ begin
+ # NOTE: Replace the host and password arguments in the connection string.
+ # (The connection string can be obtained from the Azure portal)
+ connection = PG::Connection.new("host=<Server Name> port=5432 dbname=citus user=citus password={Your Password} sslmode=require")
+ resultSet = connection.exec(sql)
+ return resultSet.each
+ rescue PG::Error => e
+ puts e.message
+ sleep 60
+ ensure
+ connection.close if connection
+ end
+ end
+ end
+ return nil
+end
+
+var = executeretry('select 1',5)
+
+if var != nil then
+ var.each do |row|
+ puts 'Data row = (%s)' % [row]
+ end
+end
+```
## Next steps
purview How To Data Share Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-share-faq.md
Data provider's source storage account can support up to 20 targets, and data co
To troubleshoot issues with sharing data, refer to the [Troubleshoot section of How to share data](how-to-share-data.md#troubleshoot). To troubleshoot issues with receiving share, refer to the [Troubleshoot section of How to receive share](how-to-receive-share.md#troubleshoot).
+## Is there support for private endpoints, VNet, and IP restrictions?
+Private endpoints, VNet, and IP restrictions are supported for data share for storage. Choose blob as the target sub-resource when creating a private endpoint for storage accounts.
+ ## Next steps * [Data sharing quickstart](quickstart-data-share.md)
remote-rendering Create An Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/create-an-account.md
This chapter guides you through the steps to create an account for the **Azure R
The following steps are needed to create an account for the Azure Remote Rendering service:
-1. Go to the [Mixed Reality Preview page](https://aka.ms/MixedRealityPrivatePreview)
+1. Go to the Azure portal [portal.azure.com](https://ms.portal.azure.com/)
1. Click the 'Create a resource' button
1. In the search field ("Search the marketplace"), type in "Remote Rendering" and hit 'enter'.
1. In the result list, click on the "Remote Rendering" tile
search Search Howto Connecting Azure Sql Iaas To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md
Previously updated : 06/07/2022 Last updated : 08/24/2022
-# Indexer connections to SQL Server on an Azure virtual machine
+# Indexer connections to a SQL Server instance on an Azure virtual machine
When configuring an [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) to extract content from a database on an Azure virtual machine, additional steps are required for secure connections.
-A connection from Azure Cognitive Search to SQL Server on a virtual machine is a public internet connection. In order for secure connections to succeed, complete the following steps:
+A connection from Azure Cognitive Search to a SQL Server instance on a virtual machine is a public internet connection. For secure connections to succeed, you'll need to satisfy the following requirements:
-+ Obtain a certificate from a [Certificate Authority provider](https://en.wikipedia.org/wiki/Certificate_authority#Providers) for the fully qualified domain name of the SQL Server instance on the virtual machine
++ Obtain a certificate from a [Certificate Authority provider](https://en.wikipedia.org/wiki/Certificate_authority#Providers) for the fully qualified domain name of the SQL Server instance on the virtual machine.
-+ Install the certificate on the virtual machine, and then enable and configure encrypted connections on the VM using the instructions in this article.
++ Install the certificate on the virtual machine.
+After you've installed the certificate on your VM, you're ready to complete the following steps in this article.
> [!NOTE] > [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
A connection from Azure Cognitive Search to SQL Server on a virtual machine is a
Azure Cognitive Search requires an encrypted channel for all indexer requests over a public internet connection. This section lists the steps to make this work.
-1. Check the properties of the certificate to verify the subject name is the fully qualified domain name (FQDN) of the Azure VM. You can use a tool like CertUtils or the Certificates snap-in to view the properties. You can get the FQDN from the VM service blade's Essentials section, in the **Public IP address/DNS name label** field, in the [Azure portal](https://portal.azure.com/).
-
- + For VMs created using the newer **Resource Manager** template, the FQDN is formatted as `<your-VM-name>.<region>.cloudapp.azure.com`
+1. Check the properties of the certificate to verify the subject name is the fully qualified domain name (FQDN) of the Azure VM.
- + For older VMs created as a **Classic** VM, the FQDN is formatted as `<your-cloud-service-name.cloudapp.net>`.
+ You can use a tool like CertUtils or the Certificates snap-in to view the properties. You can get the FQDN from the VM service blade's Essentials section, in the **Public IP address/DNS name label** field, in the [Azure portal](https://portal.azure.com/).
+
+   The FQDN is typically formatted as `<your-VM-name>.<region>.cloudapp.azure.com`.
1. Configure SQL Server to use the certificate using the Registry Editor (regedit).
- Although SQL Server Configuration Manager is often used for this task, you can't use it for this scenario. It won't find the imported certificate because the FQDN of the VM on Azure doesn't match the FQDN as determined by the VM (it identifies the domain as either the local computer or the network domain to which it is joined). When names don't match, use regedit to specify the certificate.
+ Although SQL Server Configuration Manager is often used for this task, you can't use it for this scenario. It won't find the imported certificate because the FQDN of the VM on Azure doesn't match the FQDN as determined by the VM (it identifies the domain as either the local computer or the network domain to which it's joined). When names don't match, use regedit to specify the certificate.
- + In regedit, browse to this registry key: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\[MSSQL13.MSSQLSERVER]\MSSQLServer\SuperSocketNetLib\Certificate`.
+ 1. In regedit, browse to this registry key: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\[MSSQL13.MSSQLSERVER]\MSSQLServer\SuperSocketNetLib\Certificate`.
- The `[MSSQL13.MSSQLSERVER]` part varies based on version and instance name.
+ The `[MSSQL13.MSSQLSERVER]` part varies based on version and instance name.
- + Set the value of the **Certificate** key to the **thumbprint** (without spaces) of the TLS/SSL certificate you imported to the VM.
+ 1. Set the value of the **Certificate** key to the **thumbprint** (without spaces) of the TLS/SSL certificate you imported to the VM.
- There are several ways to get the thumbprint, some better than others. If you copy it from the **Certificates** snap-in in MMC, you will probably pick up an invisible leading character [as described in this support article](https://support.microsoft.com/kb/2023869/), which results in an error when you attempt a connection. Several workarounds exist for correcting this problem. The easiest is to backspace over and then retype the first character of the thumbprint to remove the leading character in the key value field in regedit. Alternatively, you can use a different tool to copy the thumbprint.
+ There are several ways to get the thumbprint, some better than others. If you copy it from the **Certificates** snap-in in MMC, you'll probably pick up an invisible leading character [as described in this support article](https://support.microsoft.com/kb/2023869/), which results in an error when you attempt a connection. Several workarounds exist for correcting this problem. The easiest is to backspace over and then retype the first character of the thumbprint to remove the leading character in the key value field in regedit. Alternatively, you can use a different tool to copy the thumbprint.
1. Grant permissions to the service account.
- Make sure the SQL Server service account is granted appropriate permission on the private key of the TLS/SSL certificate. If you overlook this step, SQL Server will not start. You can use the **Certificates** snap-in or **CertUtils** for this task.
+ Make sure the SQL Server service account is granted appropriate permission on the private key of the TLS/SSL certificate. If you overlook this step, SQL Server won't start. You can use the **Certificates** snap-in or **CertUtils** for this task.
1. Restart the SQL Server service.
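Step 2 above notes that a thumbprint copied from the MMC Certificates snap-in may contain spaces and an invisible leading character. As an illustration, a hypothetical helper to sanitize the copied value could look like this:

```python
import re

def clean_thumbprint(raw):
    # Keep only hexadecimal digits; this drops spaces and invisible
    # characters (such as a leading left-to-right mark) picked up when
    # copying from the Certificates snap-in.
    return re.sub(r"[^0-9A-Fa-f]", "", raw).upper()

# "\u200e" simulates the invisible leading character described above.
print(clean_thumbprint("\u200e3f ab 12 cd"))  # 3FAB12CD
```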
-## Configure SQL Server connectivity in the VM
-
-After you set up the encrypted connection required by Azure Cognitive Search, there are additional configuration steps intrinsic to SQL Server on Azure VMs. If you haven't done so already, the next step is to finish configuration using either one of these articles:
+## Connect to SQL Server
-+ For a **Resource Manager** VM, see [Connect to a SQL Server Virtual Machine on Azure using Resource Manager](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
+After you set up the encrypted connection required by Azure Cognitive Search, you'll connect to the instance through its public endpoint. The following article explains the connection requirements and syntax:
-+ For a **Classic** VM, see [Connect to a SQL Server Virtual Machine on Azure Classic](/previous-versions/azure/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-sql-connect).
++ [Connect to SQL Server over the internet](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql#connect-to-sql-server-over-the-internet)
-In particular, review the section in each article for "connecting over the internet".
+## Configure the network security group
-## Configure the Network Security Group (NSG)
-
-It is not unusual to configure the NSG and corresponding Azure endpoint or Access Control List (ACL) to make your Azure VM accessible to other parties. Chances are you've done this before to allow your own application logic to connect to your SQL Azure VM. It's no different for an Azure Cognitive Search connection to your SQL Azure VM.
+It isn't unusual to configure the [network security group](../virtual-network/network-security-groups-overview.md) and corresponding Azure endpoint or Access Control List (ACL) to make your Azure VM accessible to other parties. Chances are you've done this before to allow your own application logic to connect to your SQL Azure VM. It's no different for an Azure Cognitive Search connection to your SQL Azure VM.
The links below provide instructions on NSG configuration for VM deployments. Use these instructions to ACL an Azure Cognitive Search endpoint based on its IP address.
-> [!NOTE]
-> For background, see [What is a Network Security Group?](../virtual-network/network-security-groups-overview.md)
+1. Obtain the IP address of your search service. See the [following section](#restrict-access-to-the-azure-cognitive-search) for instructions.
+
+1. Add the search IP address to the IP filter list of the security group. Either one of following articles explains the steps:
-+ For a **Resource Manager** VM, see [How to create NSGs for ARM deployments](../virtual-network/tutorial-filter-network-traffic.md).
+ + [Tutorial: Filter network traffic with a network security group using the Azure portal](/azure/virtual-network/tutorial-filter-network-traffic)
-+ For a **Classic** VM, see [How to create NSGs for Classic deployments](/previous-versions/azure/virtual-network/virtual-networks-create-nsg-classic-ps).
+ + [Create, change, or delete a network security group](/azure/virtual-network/manage-network-security-group)
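As an illustration, an inbound rule for the `AzureCognitiveSearch` service tag could be added with the Azure CLI along these lines (the resource group, NSG name, rule name, priority, and SQL port are placeholders, not values from this article):

```shell
# Sketch: allow inbound traffic from the AzureCognitiveSearch service tag
# to the SQL Server port on VMs behind this NSG. All names are placeholders.
az network nsg rule create \
  --resource-group my-resource-group \
  --nsg-name my-vm-nsg \
  --name AllowAzureCognitiveSearch \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureCognitiveSearch \
  --destination-port-ranges 1433
```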
-IP addressing can pose a few challenges that are easily overcome if you are aware of the issue and potential workarounds. The remaining sections provide recommendations for handling issues related to IP addresses in the ACL.
+IP addressing can pose a few challenges that are easily overcome if you're aware of the issue and potential workarounds. The remaining sections provide recommendations for handling issues related to IP addresses in the ACL.
### Restrict access to the Azure Cognitive Search We strongly recommend that you restrict the access to the IP address of your search service and the IP address range of `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags) in the ACL instead of making your SQL Azure VMs open to all connection requests.
-You can find out the IP address by pinging the FQDN (for example, `<your-search-service-name>.search.windows.net`) of your search service. Although it is possible for the search service IP address to change, it's unlikely that it will change. The IP address tends to be static for the lifetime of the service.
+You can find out the IP address by pinging the FQDN (for example, `<your-search-service-name>.search.windows.net`) of your search service. Although it's possible for the search service IP address to change, it's unlikely that it will change. The IP address tends to be static for the lifetime of the service.
You can find out the IP address range of `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags) by either using [Downloadable JSON files](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) or via the [Service Tag Discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api). The IP address range is updated weekly. ### Include the Azure Cognitive Search portal IP addresses
-If you are using the Azure portal to create an indexer, you must grant the portal inbound access to your SQL Azure virtual machine. An inbound rule in the firewall requires that you provide the IP address of the portal.
+If you're using the Azure portal to create an indexer, you must grant the portal inbound access to your SQL Azure virtual machine. An inbound rule in the firewall requires that you provide the IP address of the portal.
-To get the portal IP address, ping `stamp2.ext.search.windows.net`, which is the domain of the traffic manager. The request will time out, but the IP address be visible in the status message. For example, in the message "Pinging azsyrie.northcentralus.cloudapp.azure.com [52.252.175.48]", the IP address is "52.252.175.48".
+To get the portal IP address, ping `stamp2.ext.search.windows.net`, which is the domain of the traffic manager. The request will time out, but the IP address will be visible in the status message. For example, in the message "Pinging azsyrie.northcentralus.cloudapp.azure.com [52.252.175.48]", the IP address is "52.252.175.48".
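Scripted, the extraction from the ping status message could look like this hypothetical helper:

```python
import re

def ip_from_ping_message(message):
    # Pull the bracketed IPv4 address out of a ping status line.
    match = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", message)
    return match.group(1) if match else None

print(ip_from_ping_message(
    "Pinging azsyrie.northcentralus.cloudapp.azure.com [52.252.175.48]"))
# 52.252.175.48
```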
Clusters in different regions connect to different traffic managers. Regardless of the domain name, the IP address returned from the ping is the correct one to use when defining an inbound firewall rule for the Azure portal in your region. ## Next steps
-With configuration out of the way, you can now specify a SQL Server on Azure VM as the data source for an Azure Cognitive Search indexer. For more information, see [Connecting Azure SQL Database to Azure Cognitive Search using indexers](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
+With configuration out of the way, you can now specify a SQL Server on Azure VM as the data source for an Azure Cognitive Search indexer. For more information, see [Index data from Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
search Search Synapseml Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synapseml-cognitive-services.md
Title: Use Search with SynapseML
-description: Add full text search to big data on Apache Spark that's been loaded and transformed through the open source SynapseML library. In this walkthrough, you'll load invoice files into data frames, apply machine learning through SynapseML, then send it into a generated search index.
+description: Add full text search to big data on Apache Spark that's been loaded and transformed through the open-source library, SynapseML. In this walkthrough, you'll load invoice files into data frames, apply machine learning through SynapseML, then send it into a generated search index.
Previously updated : 08/09/2022 Last updated : 08/23/2022 # Add search to AI-enriched data from Apache Spark using SynapseML
In this Azure Cognitive Search article, learn how to add data exploration and fu
[SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/) is an open source library that supports massively parallel machine learning over big data. In SynapseML, one of the ways in which machine learning is exposed is through *transformers* that perform specialized tasks. Transformers tap into a wide range of AI capabilities. In this article, we'll focus on just those that call Cognitive Services and Cognitive Search.
-In this walkthrough, you'll set up a workbook that does the following:
+In this walkthrough, you'll set up a workbook that includes the following actions:
> [!div class="checklist"] > + Load various forms (invoices) into a data frame in an Apache Spark session
Although Azure Cognitive Search has native [AI enrichment](cognitive-search-conc
You'll need the `synapseml` library and several Azure resources. If possible, use the same subscription and region for your Azure resources and put everything into one resource group for simple cleanup later. The following links are for portal installs. The sample data is imported from a public site.
-+ [Azure Cognitive Search](search-create-service-portal.md) (any tier) <sup>1</sup>
-+ [Azure Cognitive Services](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#create-a-new-azure-cognitive-services-resource) (any tier) <sup>2</sup>
-+ [Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal?tabs=azure-portal) (any tier) <sup>3</sup>
++ [SynapseML package](https://microsoft.github.io/SynapseML/docs/getting_started/installation/#python) <sup>1</sup> ++ [Azure Cognitive Search](search-create-service-portal.md) (any tier) <sup>2</sup> ++ [Azure Cognitive Services](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#create-a-new-azure-cognitive-services-resource) (any tier) <sup>3</sup> ++ [Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal?tabs=azure-portal) (any tier) <sup>4</sup>
-<sup>1</sup> You can use the free tier for this walkthrough but [choose a higher tier](search-sku-tier.md) if data volumes are large. You'll need the [API key](search-security-api-keys.md#find-existing-keys) for this resource.
+<sup>1</sup> This article includes instructions for loading the package.
-<sup>2</sup> This walkthrough uses Azure Forms Recognizer and Azure Translator. In the instructions below, you'll provide a [Cognitive Services multi-service key](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#get-the-keys-for-your-resource) and the region, and it'll work for both services.
+<sup>2</sup> You can use the free tier for this walkthrough but [choose a higher tier](search-sku-tier.md) if data volumes are large. You'll need the [API key](search-security-api-keys.md#find-existing-keys) for this resource.
-<sup>3</sup> In this walkthrough, Azure Databricks provides the computing platform. You could also use Azure Synapse Analytics or any other computing platform supported by `synapseml`. The Azure Databricks article listed in the prerequisites includes multiple steps. For this walkthrough, follow only the instructions in "Create a workspace".
+<sup>3</sup> This walkthrough uses Azure Forms Recognizer and Azure Translator. In the instructions below, you'll provide a [Cognitive Services multi-service key](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#get-the-keys-for-your-resource) and the region, and it will work for both services.
+
+<sup>4</sup> In this walkthrough, Azure Databricks provides the computing platform. You could also use Azure Synapse Analytics or any other computing platform supported by `synapseml`. The Azure Databricks article listed in the prerequisites includes multiple steps. For this walkthrough, follow only the instructions in "Create a workspace".
> [!NOTE]
-> All of the above resources support security features in the Microsoft Identity platform. For simplicity, this walkthrough assumes key-based authentication, using endpoints and keys copied from the portal pages of each service. If you implement this workflow in a production environment, or share the solution with others, remember to replace hard-coded keys with integrated security or encrypted keys.
+> All of the above Azure resources support security features in the Microsoft Identity platform. For simplicity, this walkthrough assumes key-based authentication, using endpoints and keys copied from the portal pages of each service. If you implement this workflow in a production environment, or share the solution with others, remember to replace hard-coded keys with integrated security or encrypted keys.
## Create a Spark cluster and notebook
In this section, you'll create a cluster, install the `synapseml` library, and c
1. Select **Install**.
+ :::image type="content" source="media/search-synapseml-cognitive-services/install-library-from-maven.png" alt-text="Screenshot of Maven package specification." border="true":::
+ 1. On the left menu, select **Create** > **Notebook**.
+
+    :::image type="content" source="media/search-synapseml-cognitive-services/create-notebook.png" alt-text="Screenshot of the Create Notebook command." border="true":::
search_index = "placeholder-search-index-name"
Paste the following code into the second cell. No modifications are required, so run the code when you're ready.
-This code loads a small number of external files from an Azure storage account that's used for demo purposes. The files are various invoices, and they're read into a data frame.
+This code loads a few external files from an Azure storage account that's used for demo purposes. The files are various invoices, and they're read into a data frame.
```python
def blob_to_url(blob):
df2 = (spark.read.format("binaryFile")
display(df2)
```
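Only a fragment of that cell is shown above. As a hypothetical sketch (the names and URL layout are assumptions, not the tutorial's exact code), a `blob_to_url` helper could convert a `wasbs://` blob reference into a plain HTTPS URL like this:

```python
# Hypothetical sketch: turn 'wasbs://container@account.blob.core.windows.net/path'
# into an HTTPS URL. The notebook's own helper may differ.
def blob_to_url(blob: str) -> str:
    rest = blob.removeprefix("wasbs://")
    container, _, host_and_path = rest.partition("@")
    host, _, path = host_and_path.partition("/")
    return f"https://{host}/{container}/{path}"

print(blob_to_url("wasbs://invoices@demo.blob.core.windows.net/sample.pdf"))
# → https://demo.blob.core.windows.net/invoices/sample.pdf
```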
-## Apply form recognition
+## Add form recognition
Paste the following code into the third cell. No modifications are required, so run the code when you're ready.
analyzed_df = (AnalyzeInvoices()
display(analyzed_df)
```
-## Apply data restructuring
+The output from this step should look similar to the next screenshot. Notice how the forms analysis is packed into a densely structured column, which is difficult to work with. The next transformation resolves this issue by parsing the column into rows and columns.
++
+## Restructure form recognition output
Paste the following code into the fourth cell and run it. No modifications are required.
-This code loads [FormOntologyLearner](https://mmlspark.blob.core.windows.net/docs/0.10.0/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.FormOntologyTransformer), a transformer that analyzes the output of Form Recognizer transformers and infers a tabular data structure. The output of AnalyzeInvoices is dynamic and varies based on the features detected in your content. Furthermore, the AnalyzeInvoices transformer consolidates output into a single column. Because the output is dynamic and consolidated, it's difficult to use in downstream transformations that require more structure.
+This code loads [FormOntologyLearner](https://mmlspark.blob.core.windows.net/docs/0.10.0/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.FormOntologyTransformer), a transformer that analyzes the output of Form Recognizer transformers and infers a tabular data structure. The output of AnalyzeInvoices is dynamic and varies based on the features detected in your content. Furthermore, the transformer consolidates output into a single column. Because the output is dynamic and consolidated, it's difficult to use in downstream transformations that require more structure.
FormOntologyLearner extends the utility of the AnalyzeInvoices transformer by looking for patterns that can be used to create a tabular data structure. Organizing the output into multiple columns and rows makes the content consumable in other transformers, like AzureSearchWriter.
itemized_df = (FormOntologyLearner()
display(itemized_df)
```
-## Apply translations
+Notice how this transformation recasts the nested fields into a table, which enables the next two transformations. This screenshot is trimmed for brevity. If you're following along in your own notebook, you'll have 19 columns and 26 rows.
++
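The kind of reshaping FormOntologyLearner performs can be illustrated with a toy example in plain Python (this is only an analogy, not the transformer itself):

```python
# Toy analogy: flatten one nested invoice record into per-item rows,
# the tabular shape that downstream transformers expect.
nested = {
    "VendorName": "Contoso",
    "Items": [
        {"Description": "door", "Quantity": 2},
        {"Description": "window", "Quantity": 5},
    ],
}

rows = [{"VendorName": nested["VendorName"], **item} for item in nested["Items"]]
for row in rows:
    print(row)
```

Each nested line item becomes its own row that repeats the parent fields, which is what makes the output consumable by transformers like AzureSearchWriter.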
+## Add translations
Paste the following code into the fifth cell. No modifications are required, so run the code when you're ready.
display(translated_df)
> > :::image type="content" source="media/search-synapseml-cognitive-services/translated-strings.png" alt-text="Screenshot of table output, showing the Translations column." border="true":::
-## Apply search indexing
+## Add a search index with AzureSearchWriter
Paste the following code in the sixth cell and then run it. No modifications are required.
from synapse.ml.cognitive import *
))
```
+You can check the search service pages in the Azure portal to explore the index definition created by AzureSearchWriter.
+
+<!-- > [!NOTE]
+> If you can't use the default search index, you can provide an external custom definition in JSON, passing its URI as a string in the "indexJson" property. Generate the default index first so that you know which fields to specify, and then follow with customized properties if you need specific analyzers, for example. -->
+ ## Query the index
-Paste the following code into the seventh cell and then run it. No modifications are required, except that you might want to vary the [query syntax](query-simple-syntax.md) or [review these query examples](search-query-simple-examples.md) to further explore your content.
+Paste the following code into the seventh cell and then run it. No modifications are required, except that you might want to vary the syntax or try more examples to further explore your content:
+
++ [Query syntax](query-simple-syntax.md)
++ [Query examples](search-query-simple-examples.md)
+
+There's no transformer or module that issues queries. This cell is a simple call to the [Search Documents REST API](/rest/api/searchservice/search-documents).
-This code calls the [Search Documents REST API](/rest/api/searchservice/search-documents) that queries an index. This particular example is searching for the word "door". This query returns a count of the number of matching documents. It also returns just the contents of the "Description' and "Translations" fields. If you want to see the full list of fields, remove the "select" parameter.
+This particular example is searching for the word "door" (`"search": "door"`). It also returns a "count" of the number of matching documents, and selects just the contents of the "Description" and "Translations" fields for the results. If you want to see the full list of fields, remove the "select" parameter.
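The request body for such a query can be sketched as a simple dictionary before it's sent to the service (placeholder values; the actual cell supplies your own endpoint and key):

```python
# Sketch of the Search Documents request body described above.
# "search" holds the query term; "count" asks for a match count;
# "select" trims the fields returned for each hit.
query_payload = {
    "search": "door",
    "count": True,
    "select": "Description, Translations",
}
print(query_payload)
```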
```python
import requests
In this walkthrough, you learned about the [AzureSearchWriter](https://microsoft
As a next step, review the other SynapseML tutorials that produce transformed content you might want to explore through Azure Cognitive Search: > [!div class="nextstepaction"]
-> [Tutorial: Text Analytics with Cognitive Services](../synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md)
+> [Tutorial: Text Analytics with Cognitive Services](../synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md)
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
This table summarizes support for the cache storage account used by Site Recover
**Setting** | **Support** | **Details**
--- | --- | ---
-General purpose V2 storage accounts (Hot and Cool tier) | Supported | Usage of GPv2 is not recommended because transaction costs for V2 are substantially higher than V1 storage accounts.
+General purpose V2 storage accounts (Hot and Cool tier) | Supported | Usage of GPv2 is recommended because GPv1 does not support zone-redundant storage (ZRS).
Premium storage | Not supported | Standard storage accounts are used for cache storage, to help optimize costs.
Region | Same region as virtual machine | Cache storage account should be in the same region as the virtual machine being protected.
Subscription | Can be different from source virtual machines | Cache storage account need not be in the same subscription as the source virtual machine(s).
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
You can delete the Azure Spring Apps diagnostic settings by using Azure CLI:
### Which versions of Java runtime are supported in Azure Spring Apps?
-Azure Spring Apps supports Java LTS versions with the most recent builds, currently Java 8, Java 11, and Java17 are supported. For more information, see [Install the JDK for Azure and Azure Stack](/azure/developer/java/fundamentals/java-jdk-install).
+Azure Spring Apps supports Java LTS versions with the most recent builds; currently, Java 8, Java 11, and Java 17 are supported.
-### Who built these Java runtimes?
-
-Azul Systems. The Azul Zulu for Azure - Enterprise Edition JDK builds are a no-cost, multi-platform, production-ready distribution of the OpenJDK for Azure and Azure Stack backed by Microsoft and Azul Systems. They contain all the components for building and running Java SE applications.
-
-### How often will Java runtimes get updated?
-
-LTS and MTS JDK releases have quarterly security updates, bug fixes, and critical out-of-band updates and patches as needed. This support includes backports to Java 7 and 8 of security updates and bug fixes reported in newer versions of Java, like Java 11.
-
-### How long will Java 8 and Java 11 LTS versions be supported?
+### How long will Java 8, Java 11, and Java 17 LTS versions be supported?
See [Java long-term support for Azure and Azure Stack](/azure/developer/java/fundamentals/java-support-on-azure).
-* Java 8 LTS will be supported until December 2030.
-* Java 11 LTS will be supported until September 2027.
-
-### How can I download a supported Java runtime for local development?
-
-See [Install the JDK for Azure and Azure Stack](/azure/developer/java/fundamentals/java-jdk-install).
-
### What is the retirement policy for older Java runtimes?

Public notice will be sent out 12 months before any old runtime version is retired. You will have 12 months to migrate to a later version.
Public notice will be sent out at 12 months before any old runtime version is re
### How can I get support for issues at the Java runtime level?
-You can open a support ticket with Azure Support. See [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+See [Java long-term support for Azure and Azure Stack](/azure/developer/java/fundamentals/java-support-on-azure).
### What operating system is used to run my apps?
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
You can manage virtual network rules for storage accounts through the Azure port
4. Add a network rule for a virtual network and subnet. ```azurecli
- subnetid=$(az network vnet subnet show --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --query id --output tsv)
+ $subnetid=(az network vnet subnet show --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --query id --output tsv)
az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --subnet $subnetid ```
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
description: Learn how to migrate a StorSimple 8100 or 8600 appliance to Azure F
Previously updated : 10/22/2021 Last updated : 08/24/2022
When using the StorSimple Data Manager migration service, either an entire migra
|**Backup** |*Could not find a backup for the parameters specified* |The backup selected for the job run is not found at the time of "Estimation" or "Copy". Ensure that the backup is still present in the StorSimple backup catalog. Sometimes automatic backup retention policies delete backups between selecting them for migration and actually running the migration job for this backup. Consider disabling any backup retention schedules before starting a migration. |
|**Estimation </br> Configure compute** |*Installation of encryption keys failed* |Your *Service Data Encryption Key* is incorrect. Review the [encryption key section in this article](#storsimple-service-data-encryption-key) for more details and help retrieving the correct key. |
| |*Batch error* |It is possible that starting up all the internal infrastructure required to perform a migration runs into an issue. Multiple other services are involved in this process. These problems generally resolve themselves when you attempt to run the job again. |
-| |*StorSimple Manager encountered an internal error. Wait for a few minutes and then try the operation again. If the issue persists, contact Microsoft Support. (Error code: 1074161829)* |This generic error has multiple causes, but one possibility encountered is that the StorSimple device manager reached the limit of 50 appliances. Check if the most recently run jobs in the device manager have suddenly started to fail with this error, which would suggest this is the problem. The mitigation for this particular issue is to remove any offline StorSimple 8001 appliances created and used byt the Data Manager Service. You can file a support ticket or delete them manually in the portal. Make sure to only delete offline 8001 series appliances. |
+| |*StorSimple Manager encountered an internal error. Wait for a few minutes and then try the operation again. If the issue persists, contact Microsoft Support. (Error code: 1074161829)* |This generic error has multiple causes, but one possibility encountered is that the StorSimple device manager reached the limit of 50 appliances. Check if the most recently run jobs in the device manager have suddenly started to fail with this error, which would suggest this is the problem. The mitigation for this particular issue is to remove any offline StorSimple 8001 appliances created and used by the Data Manager Service. You can file a support ticket or delete them manually in the portal. Make sure to only delete offline 8001 series appliances. |
|**Estimating Files** |*Clone volume job failed* |This error most likely indicates that you specified a backup that was somehow corrupted. The migration service can't mount or read it. You can try out the backup manually or open a support ticket. |
-| |*Cannot proceed as volume is in non-NTFS format* |Only NTFS volume, non dedupe enabled, can be used by the migration service. If you have a differently formatted volume, like ReFS or a third party format, the migration service won't be able to migrate this volume. See the [Known limitations](#known-limitations) section. |
-| |*Contact support. No suitable partition found on the disk* |The StorSimple disk that is supposed to have the volume specified fr migration doesn't appear to have a partition for said volume. That is unusual and can indicate a corruption or management mis-alignment. Your only option to further investigate this issue to to file a support ticket. |
-| |*Timed out* |The estimation phase failing with a timeout is typically an issues with either the StorSimple appliance, or the source Volume Backup being slow and sometimes even corrupt. If re-running the backup doesn't work, then filing a support ticket is your best course of action. |
-| |*Could not find file &lt;path&gt; </br>Could not find a part of the path* |The job definition allows you to provide a source sub-path. This error is shown when that path does not exist. For instance: *\Share1 > \Share\Share1* </br> In this example you've specified \Share1 as a sub-path in the source, mapping to another sub-path in the target. However, the source path does not exist (was misspelled?). Note: Windows is case preserving but not case dependent. Meaning specifying *\Share1* and *\share1* is equivalent. Also: Target paths that don't exist, will be automatically created. |
+| |*Cannot proceed as volume is in non-NTFS format* |Only NTFS volumes, without deduplication enabled, can be used by the migration service. If you have a differently formatted volume, like ReFS or a third-party format, the migration service won't be able to migrate this volume. See the [Known limitations](#known-limitations) section. |
+| |*Contact support. No suitable partition found on the disk* |The StorSimple disk that is supposed to have the volume specified for migration doesn't appear to have a partition for said volume. That is unusual and can indicate corruption or management misalignment. Your only option to further investigate this issue is to file a support ticket. |
+| |*Timed out* |The estimation phase failing with a timeout is typically an issue with either the StorSimple appliance, or the source Volume Backup being slow and sometimes even corrupt. If re-running the backup doesn't work, then filing a support ticket is your best course of action. |
+| |*Could not find file &lt;path&gt; </br>Could not find a part of the path* |The job definition allows you to provide a source sub-path. This error is shown when that path does not exist. For instance: *\Share1 > \Share\Share1* </br> In this example you've specified *\Share1* as a sub-path in the source, mapping to another sub-path in the target. However, the source path does not exist (was misspelled?). Note: Windows is case preserving but not case dependent. Meaning specifying *\Share1* and *\share1* is equivalent. Also: Target paths that don't exist will be automatically created. |
| |*This request is not authorized to perform this operation* |This error shows when the source StorSimple storage account or the target storage account with the Azure file share has a firewall setting enabled. You must allow traffic over the public endpoint and not restrict it with further firewall rules. Otherwise the Data Transformation Service will be unable to access either storage account, even if you authorized it. Disable any firewall rules and re-run the job. |
-|**Copying Files** |*The account being accessed does not support HTTP* |This is an Azure FIles bug that is being fixed. The temporary mitigation is to disable internet routing on the target storage account or use the Microsoft routing endpoint. |
-| |*The specified share is full* |If the target is a premium Azure file share, ensure you have provisioned a sufficient capacity for the share. Temporary over-provisioning is a common practice. If the target is a standard file share, check that the target share has the "large file share" feature enabled. Standard storage is growing as you use the share. However, if you use a legacy storage account as a target, you might encounter a 5 TiB share limit. You will have to manually enable the ["Large file share"](storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account) feature. Fix the limits on the target and re-run the job. |
+|**Copying Files** |*The account being accessed does not support HTTP* |This is an Azure Files bug that is being fixed. The temporary mitigation is to disable internet routing on the target storage account or use the Microsoft routing endpoint. |
+| |*The specified share is full* |If the target is a premium Azure file share, ensure you have provisioned sufficient capacity for the share. Temporary over-provisioning is a common practice. If the target is a standard Azure file share, check that the target share has the "large file share" feature enabled. Standard storage is growing as you use the share. However, if you use a legacy storage account as a target, you might encounter a 5 TiB share limit. You will have to manually enable the ["Large file share"](storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account) feature. Fix the limits on the target and re-run the job. |
### Item level errors
-During the copy phase of a migration job run, individual namespace items (files and folders) can encounter errors. The following table lists the most common ones and suggests mitigation options when possible.
+During the copy phase of a migration job run, individual namespace items (files and folders) can encounter errors. The following table lists the most common errors and suggests mitigation options when possible.
|Phase |Error |Mitigation |
|-|--|--|
-|**Copy** |*-2146233088 </br>The server is busy.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
-| |*-2146233088 </br>Operation could not be completed within the specified time.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
-| |*Upload timed out or copy not started* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
-| |*-2146233029 </br>The operation was cancelled.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+|**Copy** |*-2146233088 </br>The server is busy.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*-2146233088 </br>Operation could not be completed within the specified time.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*Upload timed out or copy not started* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*-2146233029 </br>The operation was cancelled.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
| |*1920 </br>The file cannot be accessed by the system.* |This is a common error when the migration engine encounters a reparse point, link, or junction. They are not supported. These types of files can't be copied. Review the [Known limitations](#known-limitations) section and the [File fidelity](#file-fidelity) section in this article. |
| |*-2147024891 </br>Access is denied* |This is an error for files that are encrypted in a way that they can't be accessed on the disk. Files that can be read from disk but simply have encrypted content are not affected and can be copied. Your only option is to copy them manually. You can find such items by mounting the affected volume and running the following command: `get-childitem <path> [-Recurse] -Force -ErrorAction SilentlyContinue | Where-Object {$_.Attributes -ge "Encrypted"} | format-list fullname, attributes` |
-| |*Not a valid Win32 FileTime. Parameter name: fileTime* |In this case, the file can be accessed but can't be evaluated for copy because a timestamp the migration engine depends on is either corrupted or was written by an application in an incorrect format. There is not much you can do, because you can't change the timestamp in the backup. If retaining this file is important, perhaps on the latest version (last backup containing this file) you manually copy the file, fix the timestamp and then move it to the target Azure file share. This option doesn't scale very well but is an option for high-value files where you want to have at least one version retained in your target. |
-| |*-2146232798 </br>Safe handle has been closed* |Often a transient error. Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*Not a valid Win32 FileTime. Parameter name: fileTime* |In this case, the file can be accessed but can't be evaluated for copy because a timestamp the migration engine depends on is either corrupted or was written by an application in an incorrect format. There is not much you can do, because you can't change the timestamp in the backup. If retaining this file is important, perhaps on the latest version (last backup containing this file) you manually copy the file, fix the timestamp, and then move it to the target Azure file share. This option doesn't scale very well but is an option for high-value files where you want to have at least one version retained in your target. |
+| |*-2146232798 </br>Safe handle has been closed* |Often a transient error. Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
| |*-2147024413 </br>Fatal device hardware error* |This is a rare error and not actually reported for a physical device, but rather the 8001 series virtualized appliances used by the migration service. The appliance ran into an issue. Files with this error won't stop the migration from proceeding to the next backup. That makes it hard for you to perform a manual copy or retry the backup that contains files with this error. If the files left behind are very important or there is a large number of files, you may need to start the migration of all backups again. Open a support ticket for further investigation. |
|**Delete </br>(Mirror purging)** |*The specified directory is not empty.* |This error occurs when the migration mode is set to *mirror* and the process that removes items from the Azure file share ran into an issue that prevented it from deleting items. Deletion happens only in the live share, not from previous snapshots. The deletion is necessary because the affected files are not in the current backup and thus must be removed from the live share before the next snapshot. There are two options: Option 1: mount the target Azure file share and delete the files with this error manually. Option 2: you can ignore these errors and continue processing the next backup with an expectation that the target is not identical to source and has some extra items that weren't in the original StorSimple backup. |
-| |*Bad request* |This error indicates that the source file has certain characteristics that could not be copied to the Azure file share. Most notably there could be invisible control characters in a file name or 1 byte of a double byte character in the file name or file path. You can use the copy logs to get path names, copy the files to a temporary location, rename the paths to remove the unsupported characters and then robocopy again to the Azure file share. You can then resume the migration by skipping to the next backup to be processed. |
+| |*Bad request* |This error indicates that the source file has certain characteristics that could not be copied to the Azure file share. Most notably there could be invisible control characters in a file name or 1 byte of a double byte character in the file name or file path. You can use the copy logs to get path names, copy the files to a temporary location, rename the paths to remove the unsupported characters, and then robocopy again to the Azure file share. You can then resume the migration by skipping to the next backup to be processed. |
synapse-analytics Create External Table As Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-external-table-as-select.md
USE [mydbname];
GO

SELECT
- country_name, population
+ CountryName, PopulationCount
FROM PopulationCETAS WHERE
- [year] = 2019
+ [Year] = 2019
ORDER BY
- [population] DESC;
+ [PopulationCount] DESC;
```

## Remarks
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Previously updated : 12/07/2021 Last updated : 08/24/2022
In this article, we'll give you a brief overview of what kinds of identities and
## Identities
-Azure Virtual desktop supports different types of identities depending on which configuration you choose. This section explains which identities you can use for each configuration.
+Azure Virtual Desktop supports different types of identities depending on which configuration you choose. This section explains which identities you can use for each configuration.
### On-premises identity
-Since users must be discoverable through Azure Active Directory (Azure AD) to access the Azure Virtual Desktop, user identities that exist only in Active Directory Domain Services (AD DS) are not supported. This includes standalone Active Directory deployments with Active Directory Federation Services (AD FS).
+Since users must be discoverable through Azure Active Directory (Azure AD) to access Azure Virtual Desktop, user identities that exist only in Active Directory Domain Services (AD DS) aren't supported. This includes standalone Active Directory deployments with Active Directory Federation Services (AD FS).
### Hybrid identity
When accessing Azure Virtual Desktop using hybrid identities, sometimes the User
### Cloud-only identity
-Azure Virtual Desktop supports cloud-only identities when using [Azure AD-joined VMs](deploy-azure-ad-joined-vm.md).
+Azure Virtual Desktop supports cloud-only identities when using [Azure AD-joined VMs](deploy-azure-ad-joined-vm.md). These users are created and managed directly in Azure AD.
+
+### Third-party identity providers
+
+If you're using an Identity Provider (IdP) other than Azure AD to manage your user accounts, you must ensure that:
+
+- Your IdP is [federated with Azure AD](../active-directory/devices/azureadjoin-plan.md#federated-environment).
+- Your session hosts are Azure AD-joined or [Hybrid Azure AD-joined](../active-directory/devices/hybrid-azuread-join-plan.md).
+- You enable [Azure AD authentication](configure-single-sign-on.md) to the session host.
### External identity
Azure Virtual Desktop currently doesn't support [external identities](../active-
## Service authentication
-To access Azure Virtual Desktop resources, you must first authenticate to the service by signing in to an Azure AD account. Authentication happens when subscribing to a workspace to retrieve your resources or every time you connect to apps or desktops. You can use [third-party identity providers](../active-directory/devices/azureadjoin-plan.md#federated-environment) as long as they federate with Azure AD.
+To access Azure Virtual Desktop resources, you must first authenticate to the service by signing in with an Azure AD account. Authentication happens whenever you subscribe to a workspace to retrieve your resources and connect to apps or desktops. You can use [third-party identity providers](../active-directory/devices/azureadjoin-plan.md#federated-environment) as long as they federate with Azure AD.
### Multi-factor authentication

Follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md) to learn how to enforce Azure AD Multi-Factor Authentication for your deployment. That article will also tell you how to configure how often your users are prompted to enter their credentials. When deploying Azure AD-joined VMs, note the extra steps for [Azure AD-joined session host VMs](set-up-mfa.md#azure-ad-joined-session-host-vms).
+### Passwordless authentication
+
+You can use any authentication type supported by Azure AD, such as [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) and other [passwordless authentication options](../active-directory/authentication/concept-authentication-passwordless.md) (for example, FIDO keys), to authenticate to the service.
+ ### Smart card authentication
-To use a smart card to authenticate to Azure AD, you must first [configure AD FS for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication).
+To use a smart card to authenticate to Azure AD, you must first [configure AD FS for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) or [configure Azure AD certificate-based authentication](../active-directory/authentication/concept-certificate-based-authentication.md).
## Session host authentication
-If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host. These are the sign-in methods for the session host that the Azure Virtual Desktop clients currently support:
+If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host when launching a connection. The following list describes which types of authentication each Azure Virtual Desktop client currently supports.
-- Windows Desktop client
+- The Windows Desktop client supports the following authentication methods:
- Username and password
- - Smartcard
+ - Smart card
  - [Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust)
  - [Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs)
-- Windows Store client
+ - [Azure AD authentication](configure-single-sign-on.md)
+- The Windows Store client supports the following authentication method:
  - Username and password
-- Web client
+- The web client supports the following authentication method:
  - Username and password
-- Android
+- The Android client supports the following authentication method:
  - Username and password
-- iOS
+- The iOS client supports the following authentication method:
  - Username and password
-- macOS
+- The macOS client supports the following authentication method:
  - Username and password

>[!IMPORTANT]
->In order for authentication to work properly, your local machine must also be able to access the URLs in the [Remote Desktop clients](safe-url-list.md#remote-desktop-clients) section of our [required URL list](safe-url-list.md).
-
-Azure Virtual Desktop supports both NT LAN Manager (NTLM) and Kerberos for session host authentication. Smart card and Windows Hello for Business can only use Kerberos to sign in. To use Kerberos, the client needs to get Kerberos security tickets from a Key Distribution Center (KDC) service running on a domain controller. To get tickets, the client needs a direct networking line-of-sight to the domain controller. You can get a line-of-sight by connecting directly within your corporate network, using a VPN connection or setting up a [KDC Proxy server](key-distribution-center-proxy.md).
+>In order for authentication to work properly, your local machine must also be able to access the [required URLs for Remote Desktop clients](safe-url-list.md#remote-desktop-clients).
### Single sign-on (SSO)
-Azure Virtual Desktop supports [SSO using Active Directory Federation Services (ADFS)](configure-adfs-sso.md) for the Windows and web clients. SSO allows you to skip the session host authentication.
+SSO allows the connection to skip the session host credential prompt and automatically sign the user in to Windows. For session hosts that are Azure AD-joined or Hybrid Azure AD-joined, it's recommended to enable [SSO using Azure AD authentication](configure-single-sign-on.md). Azure AD authentication provides other benefits including passwordless authentication and support for third-party identity providers.
+
+Azure Virtual Desktop also supports [SSO using Active Directory Federation Services (AD FS)](configure-adfs-sso.md) for the Windows Desktop and web clients.
-Otherwise, the only way to avoid being prompted for your credentials for the session host is to save them in the client. We recommend you only do this with secure devices to prevent other users from accessing your resources.
+Without SSO, the client will prompt users for their session host credentials for every connection. The only way to avoid being prompted is to save the credentials in the client. We recommend you only save credentials on secure devices to prevent other users from accessing your resources.
+
+### Smart card and Windows Hello for Business
+
+Azure Virtual Desktop supports both NT LAN Manager (NTLM) and Kerberos for session host authentication; however, smart card and Windows Hello for Business can only use Kerberos to sign in. To use Kerberos, the client needs to get Kerberos security tickets from a Key Distribution Center (KDC) service running on a domain controller. To get tickets, the client needs a direct networking line-of-sight to the domain controller. You can get a line-of-sight by connecting directly within your corporate network, using a VPN connection, or setting up a [KDC Proxy server](key-distribution-center-proxy.md).
## In-session authentication

Once you're connected to your remote app or desktop, you may be prompted for authentication inside the session. This section explains how to use credentials other than username and password in this scenario.
-### Smart cards
+### In-session passwordless authentication (preview)
+
+> [!IMPORTANT]
+> In-session passwordless authentication is currently in public preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Virtual Desktop supports in-session passwordless authentication (preview) using [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or security devices like FIDO keys. Passwordless authentication is currently only available for certain versions of Windows Insider. When deploying new session hosts, choose one of the following images:
+
+- Windows 11 version 22H2 Enterprise (Preview) - X64 Gen 2.
+- Windows 11 version 22H2 Enterprise multi-session (Preview) - X64 Gen 2.
+
+Passwordless authentication is enabled by default when the local PC and session hosts use one of the supported operating systems above. You can disable it using the [WebAuthn redirection](configure-device-redirections.md#webauthn-redirection) RDP property.
+
+When enabled, all WebAuthn requests in the session are redirected to the local PC. You can use Windows Hello for Business or locally attached security devices to complete the authentication process.
-To use a smart card in your session, make sure you've installed the smart card drivers on the session host and enabled [smart card redirection](configure-device-redirections.md#smart-card-redirection) is enabled. Review the [client comparison chart](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare#other-redirection-devices-etc) to make sure your client supports smart card redirection.
+To access Azure AD resources with Windows Hello for Business or security devices, you must enable the FIDO2 Security Key as an authentication method for your users. To enable this method, follow the steps in [Enable FIDO2 security key method](../active-directory/authentication/howto-authentication-passwordless-security-key.md#enable-fido2-security-key-method).
-### FIDO2 and Windows Hello for Business
+### In-session smart card authentication
-Azure Virtual Desktop doesn't currently support in-session authentication with FIDO2 or Windows Hello for Business.
+To use a smart card in your session, make sure you've installed the smart card drivers on the session host and enabled [smart card redirection](configure-device-redirections.md#smart-card-redirection). Review the [client comparison chart](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare#other-redirection-devices-etc) to make sure your client supports smart card redirection.
## Next steps

- Curious about other ways to keep your deployment secure? Check out [Security best practices](security-guide.md).
-- Having issues connecting to Azure AD-joined VMs? [Troubleshoot connections to Azure AD-joined VMs](troubleshoot-azure-ad-connections.md).
-- Want to use smart cards from outside your corporate network? Review how to setup a [KDC Proxy server](key-distribution-center-proxy.md).
+- Having issues connecting to Azure AD-joined VMs? Look at [Troubleshoot connections to Azure AD-joined VMs](troubleshoot-azure-ad-connections.md).
+- Having issues with in-session passwordless authentication? See [Troubleshoot WebAuthn redirection](troubleshoot-device-redirections.md#webauthn-redirection).
+- Want to use smart cards from outside your corporate network? Review how to set up a [KDC Proxy server](key-distribution-center-proxy.md).
virtual-desktop Configure Device Redirections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-device-redirections.md
Title: Configure device redirections - Azure
-description: How to configure device redirections for Azure Virtual Desktop.
+ Title: Configure device redirection - Azure
+description: How to configure device redirection for Azure Virtual Desktop.
Previously updated : 08/01/2022 Last updated : 08/24/2022
-# Configure device redirections
+# Configure device redirection
-Configuring device redirections for your Azure Virtual Desktop environment allows you to use printers, USB devices, microphones and other peripheral devices in the remote session. Some device redirections require changes to both Remote Desktop Protocol (RDP) properties and Group Policy settings.
+Configuring device redirection for your Azure Virtual Desktop environment allows you to use printers, USB devices, microphones, and other peripheral devices in the remote session. Some device redirections require changes to both Remote Desktop Protocol (RDP) properties and Group Policy settings.
-## Supported device redirections
+## Supported device redirection
-Each client supports different device redirections. Check out [Compare the clients](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare) for the full list of supported device redirections for each client.
+Each client supports different kinds of device redirections. Check out [Compare the clients](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare) for the full list of supported device redirections for each client.
>[!IMPORTANT]
>You can only enable redirections with binary settings that apply both to and from the remote machine. The service doesn't currently support one-way blocking of redirections from only one side of the connection.
Each client supports different device redirections. Check out [Compare the clien
To learn more about customizing RDP properties for a host pool using PowerShell or the Azure portal, check out [RDP properties](customize-rdp-properties.md). For the full list of supported RDP properties, see [Supported RDP file settings](/windows-server/remote/remote-desktop-services/clients/rdp-files?context=%2fazure%2fvirtual-desktop%2fcontext%2fcontext).
-## Setup device redirections
+## Setup device redirection
-You can use the following RDP properties and Group Policy settings to configure device redirections.
+You can use the following RDP properties and Group Policy settings to configure device redirection.
### Audio input (microphone) redirection
Set the following RDP property to configure clipboard redirection:
- `redirectclipboard:i:1` enables clipboard redirection.
- `redirectclipboard:i:0` disables clipboard redirection.
-### COM port redirections
+### COM port redirection
Set the following RDP property to configure COM port redirection:
Set the following RDP property to configure smart card redirection:
- `redirectsmartcards:i:1` enables smart card redirection.
- `redirectsmartcards:i:0` disables smart card redirection.
+
+### WebAuthn redirection
+
+Set the following RDP property to configure WebAuthn redirection:
+
+- `redirectwebauthn:i:1` enables WebAuthn redirection.
+- `redirectwebauthn:i:0` disables WebAuthn redirection.
+
+When enabled, WebAuthn requests from the session are sent to the local PC to be completed using the local Windows Hello for Business or security devices like FIDO keys. For more information, see [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication-preview).
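As an illustrative sketch only, the property can be applied per host pool with the `Update-AzWvdHostPool` cmdlet the article series already uses. The resource group and host pool names below are placeholders, and note that `-CustomRdpProperty` replaces the host pool's entire custom property string, so include any other properties you've already configured:

```powershell
# Sketch: enable WebAuthn redirection on a host pool.
# "rg-avd" and "hp-example" are placeholder names for this example.
Update-AzWvdHostPool -ResourceGroupName "rg-avd" `
                     -Name "hp-example" `
                     -CustomRdpProperty "redirectwebauthn:i:1;"
```

To disable the redirection instead, set the value to `redirectwebauthn:i:0;` in the same way.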
virtual-desktop Configure Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md
+
+ Title: Configure single sign-on for Azure Virtual Desktop - Azure
+description: How to configure single sign-on for an Azure Virtual Desktop environment.
++++++ Last updated : 08/24/2022++
+# Configure single sign-on for Azure Virtual Desktop
+
+> [!IMPORTANT]
+> Single sign-on using Azure AD authentication is currently in public preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article will walk you through the process of configuring single sign-on (SSO) using Azure AD authentication for Azure Virtual Desktop (preview). When you enable SSO, you can use passwordless authentication and third-party identity providers that federate with Azure AD to sign in to your resources.
+
+> [!NOTE]
+> Azure Virtual Desktop (classic) doesn't support this feature.
+
+## Prerequisites
+
+Single sign-on is currently only available for certain versions of Windows Insider. When deploying new session hosts, you must choose one of the following images:
+
+ - Windows 11 version 22H2 Enterprise (Preview) - X64 Gen 2.
+ - Windows 11 version 22H2 Enterprise multi-session (Preview) - X64 Gen 2.
+
+You can enable SSO for connections to Azure Active Directory (Azure AD)-joined VMs. You can also use SSO to access Hybrid Azure AD-joined VMs, but only after creating a Kerberos Server object. Azure Virtual Desktop doesn't support this solution with VMs joined to Azure AD Domain Services.
+
+> [!NOTE]
+> Hybrid Azure AD-joined Windows Server 2019 VMs don't support SSO.
+
+Currently, the [Windows Desktop client](./user-documentation/connect-windows-7-10.md) is the only client that supports SSO. The local PC must be running Windows 10 or later. There's no domain join requirement for the local PC.
+
+SSO is currently supported in the Azure Public cloud.
+
+## Enable single sign-on
+
+If your host pool contains Hybrid Azure AD-joined session hosts, you must first enable Azure AD Kerberos in your environment by creating a Kerberos Server object. Azure AD Kerberos enables the authentication needed with the domain controller. We recommend you also enable Azure AD Kerberos for Azure AD-joined session hosts if you have a Domain Controller (DC). Azure AD Kerberos provides a single sign-on experience when accessing legacy Kerberos-based applications or network shares. To enable Azure AD Kerberos in your environment, follow the steps to [Create a Kerberos Server object](../active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md#create-a-kerberos-server-object) on your DC.
+
+To enable SSO on your host pool, you must [customize an RDP property](customize-rdp-properties.md). You can find the **Azure AD Authentication** property under the **Connection information** tab in the Azure portal or set the **enablerdsaadauth:i:1** property using PowerShell.
+
+> [!IMPORTANT]
+> If you enable SSO on your Hybrid Azure AD-joined VMs before you create the Kerberos server object, you won't be able to connect to the VMs, and you'll see an error message saying the specific log on session doesn't exist.
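Using PowerShell, a minimal sketch could look like the following. The resource names are placeholders, and because `-CustomRdpProperty` overwrites the host pool's existing custom property string, include any properties you already use alongside the SSO setting:

```powershell
# Sketch: enable Azure AD authentication (SSO) on a host pool.
# "rg-avd" and "hp-example" are placeholder names for this example.
Update-AzWvdHostPool -ResourceGroupName "rg-avd" `
                     -Name "hp-example" `
                     -CustomRdpProperty "enablerdsaadauth:i:1;"
```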
+
+### Allow remote desktop connection dialog
+
+When enabling single sign-on, you'll currently be prompted to authenticate to Azure AD and allow the Remote Desktop connection when launching a connection to a new host. Azure AD remembers up to 15 hosts for 30 days before prompting again. If you see this dialog, select **Yes** to connect.
+
+## Next steps
+
+- Check out [In-session passwordless authentication (preview)](authentication.md#in-session-passwordless-authentication-preview) to learn how to enable passwordless authentication.
+- If you're accessing Azure Virtual Desktop from our Windows Desktop client, see [Connect with the Windows Desktop client](./user-documentation/connect-windows-7-10.md).
+- If you encounter any issues, go to [Troubleshoot connections to Azure AD-joined VMs](troubleshoot-azure-ad-connections.md).
virtual-desktop Customize Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-rdp-properties.md
Title: Customize RDP properties with PowerShell - Azure
description: How to customize RDP Properties for Azure Virtual Desktop with PowerShell cmdlets. Previously updated : 08/11/2022 Last updated : 08/24/2022
>[!IMPORTANT]
>This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/customize-rdp-properties-2019.md).
-Customizing a host pool's Remote Desktop Protocol (RDP) properties, such as multi-monitor experience and audio redirection, lets you deliver an optimal experience for your users based on their needs. If you'd like to change the default RDP file properties, you can customize RDP properties in Azure Virtual Desktop by either using the Azure portal or by using the *-CustomRdpProperty* parameter in the **Update-AzWvdHostPool** cmdlet.
+You can customize a host pool's Remote Desktop Protocol (RDP) properties, such as multi-monitor experience and audio redirection, to deliver an optimal experience for your users based on their needs. If you'd like to change the default RDP file properties, you can customize RDP properties in Azure Virtual Desktop by either using the Azure portal or by using the *-CustomRdpProperty* parameter in the **Update-AzWvdHostPool** cmdlet.
See [supported RDP file settings](/windows-server/remote/remote-desktop-services/clients/rdp-files?context=%2fazure%2fvirtual-desktop%2fcontext%2fcontext) for a full list of supported properties and their default values.
RDP files have the following properties by default:
|RDP property|For both Desktop and RemoteApp|
|---|---|
|Multi-monitor mode|Enabled|
-|Drive redirections enabled|Drives, clipboard, printers, COM ports, smart cards, devices, and usbdevicestore|
+|Redirections enabled|Drives, clipboard, printers, COM ports, smart cards, devices, usbdevicestore, and WebAuthn|
|Remote audio mode|Play locally|
|VideoPlayback|Enabled|
|EnableCredssp|Enabled|
RDP files have the following properties by default:
>[!NOTE]
>- Multi-monitor mode is only enabled for Desktop app groups and will be ignored for RemoteApp app groups.
>- All default RDP file properties are exposed in the Azure portal.
->- By default, the CustomRdpProperty field is null in the Azure portal. A null CustomRdpProperty field will apply all default RDP properties to your host pool. An empty CustomRdpProperty field will not apply any default RDP properties to your host pool.
+>- A null CustomRdpProperty field will apply all default RDP properties to your host pool. An empty CustomRdpProperty field won't apply any default RDP properties to your host pool.
## Prerequisites
CustomRdpProperty : audiocapturemode:i:1;audiomode:i:0;
## Reset all custom RDP properties
-You can reset individual custom RDP properties to their default values by following the instructions in [Add or edit a single custom RDP property](#add-or-edit-a-single-custom-rdp-property), or you can reset all custom RDP properties for a host pool by running the following PowerShell cmdlet:
+You can reset individual custom RDP properties to their default values by following the instructions in [Add or edit a single custom RDP property](#add-or-edit-a-single-custom-rdp-property). You can also reset all custom RDP properties for a host pool by running the following PowerShell cmdlet:
```powershell
Update-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname> -CustomRdpProperty ""
```
CustomRdpProperty : <CustomRDPpropertystring>
## Next steps
-Now that you've customized the RDP properties for a given host pool, you can sign in to a Azure Virtual Desktop client to test them as part of a user session. These next how-to guides will tell you how to connect to a session using the client of your choice:
+Now that you've customized the RDP properties for a given host pool, you can sign in to an Azure Virtual Desktop client to test them as part of a user session. These next how-to guides will tell you how to connect to a session using the client of your choice:
- [Connect with the Windows Desktop client](./user-documentation/connect-windows-7-10.md)
- [Connect with the web client](./user-documentation/connect-web.md)
virtual-desktop Deploy Azure Ad Joined Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-ad-joined-vm.md
Title: Deploy Azure AD joined VMs in Azure Virtual Desktop - Azure
description: How to configure and deploy Azure AD joined VMs in Azure Virtual Desktop. -+ Previously updated : 04/27/2022 Last updated : 08/24/2022
User accounts can be cloud-only or synced users from the same Azure AD tenant.
## Known limitations
-The following known limitations may impact access to your on-premises or Active Directory domain-joined resources and should be considered when deciding whether Azure AD-joined VMs are right for your environment. We currently recommend Azure AD-joined VMs for scenarios where users only need access to cloud-based resources or Azure AD-based authentication.
+The following known limitations may affect access to your on-premises or Active Directory domain-joined resources and should be considered when deciding whether Azure AD-joined VMs are right for your environment. We currently recommend Azure AD-joined VMs for scenarios where users only need access to cloud-based resources or Azure AD-based authentication.
- Azure Virtual Desktop (classic) doesn't support Azure AD-joined VMs.
- Azure AD-joined VMs don't currently support external identities, such as Azure AD Business-to-Business (B2B) and Azure AD Business-to-Consumer (B2C).
- Azure AD-joined VMs can only access Azure Files file shares for synced users using Azure AD Kerberos.
- The Windows Store client doesn't currently support Azure AD-joined VMs.
-- Azure Virtual Desktop doesn't currently support single sign-on for Azure AD-joined VMs.

## Deploy Azure AD-joined VMs
To access Azure AD-joined VMs using the web, Android, macOS and iOS clients, you
You can use Azure AD Multi-Factor Authentication with Azure AD-joined VMs. Follow the steps to [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md) and note the extra steps for [Azure AD-joined session host VMs](set-up-mfa.md#azure-ad-joined-session-host-vms).
+### Single sign-on
+
+You can enable a single sign-on experience using Azure AD authentication when accessing Azure AD-joined VMs. Follow the steps to [Configure single sign-on](configure-single-sign-on.md) to provide a seamless connection experience.
+
## User profiles

You can use FSLogix profile containers with Azure AD-joined VMs when you store them on Azure Files while using synced user accounts. For more information, see [Create a profile container with Azure Files and Azure AD](create-profile-container-azure-ad.md).
While you don't need an Active Directory to deploy or access your Azure AD-joine
## Next steps
-Now that you've deployed some Azure AD joined VMs, you can sign in to a supported Azure Virtual Desktop client to test it as part of a user session. If you want to learn how to connect to a session, check out these articles:
+Now that you've deployed some Azure AD joined VMs, we recommend enabling single sign-on before connecting with a supported Azure Virtual Desktop client to test it as part of a user session. To learn more, check out these articles:
+- [Configure single sign-on](configure-single-sign-on.md)
+- [Create a profile container with Azure Files and Azure AD](create-profile-container-azure-ad.md)
- [Connect with the Windows Desktop client](user-documentation/connect-windows-7-10.md)
- [Connect with the web client](user-documentation/connect-web.md)
- [Troubleshoot connections to Azure AD-joined VMs](troubleshoot-azure-ad-connections.md)
virtual-desktop Set Up Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-mfa.md
Title: Enforce Azure Active Directory Multi-Factor Authentication for Azure Virt
description: How to enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access to help make it more secure. Previously updated : 05/27/2022 Last updated : 08/24/2022
> [!IMPORTANT]
> If you're visiting this page from the Azure Virtual Desktop (classic) documentation, make sure to [return to the Azure Virtual Desktop (classic) documentation](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md) once you're finished.
-Users can sign into Azure Virtual Desktop from anywhere using different devices and clients. However, there are certain measures you should take to help keep yourself and your users safe. Using Azure Active Directory (Azure AD) Multi-Factor Authentication with Azure Virtual Desktop prompts users during the sign-in process for an additional form of identification, in addition to their username and password. You can enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access and whether it applies for the web client or mobile apps and desktop clients, or both.
+Users can sign in to Azure Virtual Desktop from anywhere using different devices and clients. However, there are certain measures you should take to help keep yourself and your users safe. Using Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) with Azure Virtual Desktop prompts users during the sign-in process for another form of identification in addition to their username and password. You can enforce MFA for Azure Virtual Desktop using Conditional Access, and can also configure whether it applies to the web client, mobile apps, desktop clients, or all clients.
-How often a user is prompted to reauthenticate depends on [Azure AD session lifetime configuration settings](../active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md#azure-ad-session-lifetime-configuration-settings). For example, if their Windows client device is registered with Azure AD, it will receive a [Primary Refresh Token](../active-directory/devices/concept-primary-refresh-token.md) (PRT) to use single sign-on (SSO) across applications. Once issued, a PRT is valid for 14 days and is continuously renewed as long as the user actively uses the device.
+How often a user is prompted to reauthenticate depends on [Azure AD session lifetime configuration settings](../active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md#azure-ad-session-lifetime-configuration-settings). For example, if their Windows client device is registered with Azure AD, it will receive a [Primary Refresh Token](../active-directory/devices/concept-primary-refresh-token.md) (PRT) to use for single sign-on (SSO) across applications. Once issued, a PRT is valid for 14 days and is continuously renewed as long as the user actively uses the device.
While remembering credentials is convenient, it can also make deployments for Enterprise scenarios using personal devices less secure. To protect your users, you can make sure the client keeps asking for Azure AD Multi-Factor Authentication credentials more frequently. You can use Conditional Access to configure this behavior.
-Learn how to enforce Azure AD Multi-Factor Authentication for Azure Virtual Desktop and optionally configure sign-in frequency below.
+Learn how to enforce MFA for Azure Virtual Desktop and optionally configure sign-in frequency below.
## Prerequisites
Here's how to create a Conditional Access policy that requires multi-factor auth
1. Under the **Include** tab, select **Select apps**.
1. On the right, select one of the following apps based on which version of Azure Virtual Desktop you're using.
- - If you're using Azure Virtual Desktop (based on Azure Resource Manager), choose this app:
+ - If you're using Azure Virtual Desktop (based on Azure Resource Manager), you can configure MFA on two different apps:
- - **Azure Virtual Desktop** (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07)
+ - **Azure Virtual Desktop** (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07), which applies when the user subscribes to a feed and authenticates to the Azure Virtual Desktop Gateway during a connection.
      > [!TIP]
      > The app name was previously *Windows Virtual Desktop*. If you registered the *Microsoft.DesktopVirtualization* resource provider before the display name changed, the application will be named **Windows Virtual Desktop** with the same app ID as above.
- After that, go to step 10.
+ - **Microsoft Remote Desktop** (app ID a4a365df-50f1-4397-bc59-1a1564b8bb9c), which applies when the user authenticates to the session host when [single sign-on](configure-single-sign-on.md) is enabled.
- If you're using Azure Virtual Desktop (classic), choose these apps:
Here's how to create a Conditional Access policy that requires multi-factor auth
> [!TIP] > If you're using Azure Virtual Desktop (classic) and if the Conditional Access policy blocks all access excluding Azure Virtual Desktop app IDs, you can fix this by also adding the **Azure Virtual Desktop** (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07) to the policy. Not adding this app ID will block feed discovery of Azure Virtual Desktop (classic) resources.
- After that, skip ahead to step 11.
- > [!IMPORTANT] > Don't select the app called Azure Virtual Desktop Azure Resource Manager Provider (app ID 50e95039-b200-4007-bc97-8d5790743a63). This app is only used for retrieving the user feed and shouldn't have multi-factor authentication.
For connections to succeed, you must [disable the legacy per-user multi-factor a
## Next steps

- [Learn more about Conditional Access policies](../active-directory/conditional-access/concept-conditional-access-policies.md)
-
- [Learn more about user sign in frequency](../active-directory/conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency)
virtual-desktop Troubleshoot Azure Ad Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-azure-ad-connections.md
Title: Connections to Azure AD-joined VMs Azure Virtual Desktop - Azure
description: How to resolve issues while connecting to Azure AD-joined VMs in Azure Virtual Desktop. -+ Previously updated : 08/20/2021 Last updated : 08/24/2022

# Connections to Azure AD-joined VMs
Use this article to resolve issues with connections to Azure Active Directory (Azure AD)-joined VMs in Azure Virtual Desktop.
-## Provide feedback
-
-Visit the [Azure Virtual Desktop Tech Community](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/bd-p/AzureVirtualDesktopForum) to discuss the Azure Virtual Desktop service with the product team and active community members.
- ## All clients ### Your account is configured to prevent you from using this device If you come across an error saying **Your account is configured to prevent you from using this device. For more information, contact your system administrator**, ensure the user account was given the [Virtual Machine User Login role](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#azure-role-not-assigned) on the VMs.
-### I can't sign in, even though I'm using the right credentials
+### The user name or password is incorrect
-If you can't sign in and keep receiving an error message that says your credentials are incorrect, first make sure you're using the right credentials. If you keep seeing error messages, ask yourself the following questions:
+If you can't sign in and keep receiving an error message that says your credentials are incorrect, first make sure you're using the right credentials. If you keep seeing error messages, check to make sure you've fulfilled the following requirements:
-- Does your Conditional Access policy exclude multi-factor authentication requirements for the Azure Windows VM sign-in cloud application?-- Have you assigned the **Virtual Machine User Login** role-based access control (RBAC) permission to the VM or resource group for each user?
+- Have you assigned the **Virtual Machine User Login** role-based access control (RBAC) permission to the virtual machine (VM) or resource group for each user?
+- Does your Conditional Access policy exclude multi-factor authentication requirements for the **Azure Windows VM sign-in** cloud application?
-If you answered "no" to either of these questions, follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md#azure-ad-joined-session-host-vms) to reconfigure your multi-factor authentication.
+If you've answered "no" to either of those questions, you'll need to reconfigure your multi-factor authentication. To reconfigure your multi-factor authentication, follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md#azure-ad-joined-session-host-vms).
> [!WARNING] > VM sign-ins don't support per-user enabled or enforced Azure AD Multi-Factor Authentication. If you try to sign in with multi-factor authentication on a VM, you won't be able to sign in and will receive an error message.
If you come across an error saying **The logon attempt failed** on the Windows S
If you come across an error saying **The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator**, you have Conditional Access policies restricting access. Follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md#azure-ad-joined-session-host-vms) to enforce Azure Active Directory Multi-Factor Authentication for your Azure AD-joined VMs.
+### A specified logon session does not exist. It may already have been terminated.
+
+If you come across an error that says, **An authentication error occurred. A specified logon session does not exist. It may already have been terminated**, verify that you properly created and configured the Kerberos server object when [configuring single sign-on](configure-single-sign-on.md).
+ ## Web client ### Sign in failed. Please check your username and password and try again
If you come across an error saying **Oops, we couldn't connect to NAME. We could
If you come across an error saying **We couldn't connect to the remote PC because your credentials did not work. The remote machine is AADJ joined.** with error code 2607 when using the Android client, ensure that you [enabled connections from other clients](deploy-azure-ad-joined-vm.md#connect-using-the-other-clients).
+## Provide feedback
+
+Visit the [Azure Virtual Desktop Tech Community](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/bd-p/AzureVirtualDesktopForum) to discuss the Azure Virtual Desktop service with the product team and active community members.
+ ## Next steps - For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).
virtual-desktop Troubleshoot Device Redirections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-device-redirections.md
+
+ Title: Device redirections in Azure Virtual Desktop - Azure
+description: How to resolve issues with device redirections in Azure Virtual Desktop.
++++++ Last updated : 08/24/2022++
+# Troubleshoot device redirections for Azure Virtual Desktop
+
+>[!IMPORTANT]
+>This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects.
+
+Use this article to resolve issues with device redirections in Azure Virtual Desktop.
+
+## WebAuthn redirection
+
+If WebAuthn requests from the session aren't redirected to the local PC, check to make sure you've fulfilled the following requirements:
+
+- Are you using supported operating systems for [in-session passwordless authentication](authentication.md#in-session-passwordless-authentication-preview) on both the local PC and session host?
+- Have you enabled WebAuthn redirection as a [device redirection](configure-device-redirections.md#webauthn-redirection)?
+
+If you've answered "yes" to both of the earlier questions but still don't see the option to use Windows Hello for Business or security keys when accessing Azure AD resources, make sure you've enabled the FIDO2 security key method for the user account in Azure AD. To enable this method, follow the directions in [Enable FIDO2 security key method](../active-directory/authentication/howto-authentication-passwordless-security-key.md#enable-fido2-security-key-method).
+
+If a user signs in to the session host with a single-factor credential like username and password, then tries to access an Azure AD resource that requires MFA, they may not be able to use Windows Hello for Business. The user should follow these instructions to authenticate properly:
+
+1. If the user isn't prompted for a user account, they should first sign out.
+1. On the **account selection** page, select **Use another account**.
+1. Next, choose **Sign-in options** at the bottom of the window.
+1. After that, select **Sign in with Windows Hello or a security key**. They should see an option to select Windows Hello or security key authentication methods.
+
+## Provide feedback
+
+Visit the [Azure Virtual Desktop Tech Community](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/bd-p/AzureVirtualDesktopForum) to discuss the Azure Virtual Desktop service with the product team and active community members.
+
+## Next steps
+
+- For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).
+- To troubleshoot issues while creating an Azure Virtual Desktop environment and host pool in an Azure Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md).
+- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).
+- To troubleshoot issues related to the Azure Virtual Desktop agent or session connectivity, see [Troubleshoot common Azure Virtual Desktop Agent issues](troubleshoot-agent.md).
+- To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md).
+- To go through a troubleshooting tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md).
virtual-machines Availability Set Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/availability-set-overview.md
Last updated 02/18/2021
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
+> [!NOTE]
+> We recommend that new customers choose [virtual machine scale sets with flexible orchestration mode](../virtual-machine-scale-sets/overview.md) for high availability with the widest range of features. Virtual machine scale sets allow VM instances to be centrally managed, configured, and updated, and will automatically increase or decrease the number of VM instances in response to demand or a defined schedule. Availability sets only offer high availability.
+ This article provides you with an overview of the availability features of Azure virtual machines (VMs). ## What is an availability set?
This article provides you with an overview of the availability features of Azure
An availability set is a logical grouping of VMs that allows Azure to understand how your application is built to provide for redundancy and availability. We recommend creating two or more VMs within an availability set to provide a highly available application and to meet the [99.95% Azure SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/). There is no cost for the availability set itself; you only pay for each VM instance that you create. ## How do availability sets work?
-Each virtual machine in your availability set is assigned an **update domain** and a **fault domain** by the underlying Azure platform. Each availability set can be configured with up to three fault domains and twenty update domains. Update domains indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. When more than five virtual machines are configured within a single availability set with five update domains, the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on. The order of update domains being rebooted may not proceed sequentially during planned maintenance, but only one update domain is rebooted at a time. A rebooted update domain is given 30 minutes to recover before maintenance is initiated on a different update domain.
+Each virtual machine in your availability set is assigned an **update domain** and a **fault domain** by the underlying Azure platform. Each availability set can be configured with up to three fault domains and twenty update domains. These configurations cannot be changed once the availability set has been created. Update domains indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. When more than five virtual machines are configured within a single availability set with five update domains, the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on. The order of update domains being rebooted may not proceed sequentially during planned maintenance, but only one update domain is rebooted at a time. A rebooted update domain is given 30 minutes to recover before maintenance is initiated on a different update domain.
Fault domains define the group of virtual machines that share a common power source and network switch. By default, the virtual machines configured within your availability set are separated across up to three fault domains. While placing your virtual machines into an availability set does not protect your application from operating system or application-specific failures, it does limit the impact of potential physical hardware failures, network outages, or power interruptions.
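The round-robin placement described above is simple modulo arithmetic; a minimal sketch for illustration only (not platform code):

```python
def assign_domains(vm_index, update_domain_count=5, fault_domain_count=3):
    """Round-robin placement: the VM at zero-based index i lands in update domain
    i % update_domain_count and fault domain i % fault_domain_count."""
    return vm_index % update_domain_count, vm_index % fault_domain_count

# With five update domains, the sixth VM (index 5) shares an update domain
# with the first VM (index 0), the seventh (index 6) with the second, and so on.
for i in range(7):
    ud, fd = assign_domains(i)
    print(f"VM {i + 1} -> update domain {ud}, fault domain {fd}")
```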
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption-overview.md
There are several types of encryption available for your managed disks, includin
- **Encryption at host** ensures that data stored on the VM host hosting your VM is encrypted at rest and flows encrypted to the Storage clusters. For full details, see [Encryption at host - End-to-end encryption for your VM data](./disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). -- **Confidential disk encryption** binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. The TPM and VM guest state is always encrypted in attested code using keys released by a secure protocol that bypasses the hypervisor and host operating system. Currently only available for the OS disk. Encryption at host may be used for other disks on a Confidential VM in addition to Confidential Disk Encryption. For full details, see [DCasv5 and ECasv5 series confidential VMs](../confidential-computing/confidential-vm-overview.md#full-disk-encryption).
+- **Confidential disk encryption** binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. The TPM and VM guest state is always encrypted in attested code using keys released by a secure protocol that bypasses the hypervisor and host operating system. Currently only available for the OS disk. Encryption at host may be used for other disks on a Confidential VM in addition to Confidential Disk Encryption. For full details, see [DCasv5 and ECasv5 series confidential VMs](../confidential-computing/confidential-vm-overview.md#confidential-os-disk-encryption).
Encryption is part of a layered approach to security and should be used with other recommendations to secure Virtual Machines and their disks. For full details, see [Security recommendations for virtual machines in Azure](security-recommendations.md) and [Restrict import/export access to managed disks](disks-enable-private-links-for-import-export-portal.md).
Here's a comparison of SSE, ADE, encryption at host, and Confidential disk encry
- [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md) - [Server-side encryption of Azure Disk Storage](./disk-encryption.md) - [Encryption at host](./disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data)-- [DCasv5 and ECasv5 series confidential VMs](../confidential-computing/confidential-vm-overview.md#full-disk-encryption)
+- [DCasv5 and ECasv5 series confidential VMs](../confidential-computing/confidential-vm-overview.md#confidential-os-disk-encryption)
- [Azure Security Fundamentals - Azure encryption overview](../security/fundamentals/encryption-overview.md)
virtual-machines Vmaccess https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess.md
Last updated 05/10/2018-+ + # Manage administrative users, SSH, and check or repair disks on Linux VMs using the VMAccess Extension with the Azure CLI ## Overview The disk on your Linux VM is showing errors. You somehow reset the root password for your Linux VM or accidentally deleted your SSH private key. If that happened back in the days of the datacenter, you would need to drive there and then open the KVM to get at the server console. Think of the Azure VMAccess extension as that KVM switch that allows you to access the console to reset access to Linux or perform disk level maintenance.
The following examples use raw JSON files. Use [az vm extension set](/cli/azure/
### Reset user access If you have lost access to root on your Linux VM, you can launch a VMAccess script to update a user's SSH key or password.
-To update the SSH public key of a user, create a file named `update_ssh_key.json` and add settings in the following format. Substitute your own values for the `username` and `ssh_key` parameters:
+To update the SSH public key of a user, create a file named `update_ssh_key.json` and add settings in the following format. Replace `username` and `ssh_key` with your own information:
```json {
az vm extension set \
--protected-settings update_ssh_key.json ```
-To reset a user password, create a file named `reset_user_password.json` and add settings in the following format. Substitute your own values for the `username` and `password` parameters:
+To reset a user password, create a file named `reset_user_password.json` and add settings in the following format. Replace `username` and `password` with your own information:
```json {
az vm extension set \
``` ### Restart SSH
-To restart the SSH daemon and reset the SSH configuration to default values, create a file named `reset_sshd.json`. Add the following content:
+To restart the SSH daemon and reset the SSH configuration to default values, create a file named `reset_sshd.json`. Add the following text:
```json {
az vm extension set \
--protected-settings create_new_user.json ```
-To delete a user, create a file named `delete_user.json` and add the following content. Substitute your own value for the `remove_user` parameter:
+To delete a user, create a file named `delete_user.json` and add the following content. Change the data for `remove_user` to the user you're trying to delete:
```json {
az vm extension set \
### Check or repair the disk Using VMAccess you can also check and repair a disk that you added to the Linux VM.
-To check and then repair the disk, create a file named `disk_check_repair.json` and add settings in the following format. Substitute your own value for the name of `repair_disk`:
+To check and then repair the disk, create a file named `disk_check_repair.json` and add settings in the following format. Change the data for `repair_disk` to the disk you're trying to repair:
```json {
virtual-machines Create Ssh Keys Detailed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-ssh-keys-detailed.md
Last updated 08/18/2022 + # Detailed steps: Create and manage SSH keys for authentication to a Linux VM in Azure
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
By default the SAP System deployment uses the credentials from the SAP Workload
### Azure NetApp Files Support > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | - | --| -- | |
-> | `ANF_use_for_HANA_data` | Create Azure NetApp Files volume for HANA data. | Optional | |
-> | `ANF_use_existing_data_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes |
-> | `ANF_data_volume_name` | Azure NetApp Files volume name for HANA data. | Optional | |
-> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data. | Optional | default size 256 |
-> | | | | |
-> | `ANF_use_for_HANA_log` | Create Azure NetApp Files volume for HANA log. | Optional | |
-> | `ANF_use_existing_log_volume` | Use existing Azure NetApp Files volume for HANA log. | Optional | Use for pre-created volumes |
-> | `ANF_log_volume_name` | Azure NetApp Files volume name for HANA log. | Optional | |
-> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log. | Optional | default size 128 |
-> | | | | |
-> | `ANF_use_for_HANA_shared` | Create Azure NetApp Files volume for HANA shared. | Optional | |
-> | `ANF_use_existing_shared_volume` | Use existing Azure NetApp Files volume for HANA shared. | Optional | Use for pre-created volumes |
-> | `ANF_shared_volume_name` | Azure NetApp Files volume name for HANA shared. | Optional | |
-> | `ANF_HANA_shared_volume_size` | Azure NetApp Files volume size in GB for HANA shared. | Optional | default size 128 |
-> | | | | |
-> | `ANF_use_for_sapmnt` | Create Azure NetApp Files volume for sapmnt. | Optional | |
-> | `ANF_use_existing_sapmnt_volume` | Use existing Azure NetApp Files volume for sapmnt. | Optional | Use for pre-created volumes |
-> | `ANF_sapmnt_volume_name` | Azure NetApp Files volume name for sapmnt. | Optional | |
-> | `ANF_sapmnt_volume_size` | Azure NetApp Files volume size in GB for sapmnt. | Optional | default size 128 |
-> | | | | |
-> | `ANF_use_for_usrsap` | Create Azure NetApp Files volume for usrsap. | Optional | |
-> | `ANF_use_existing_usrsap_volume` | Use existing Azure NetApp Files volume for usrsap. | Optional | Use for pre-created volumes |
-> | `ANF_usrsap_volume_name` | Azure NetApp Files volume name for usrsap. | Optional | |
-> | `ANF_usrsap_volume_size` | Azure NetApp Files volume size in GB for usrsap. | Optional | default size 128 |
+> | Variable | Description | Type | Notes |
+> | -- | --| -- | |
+> | `ANF_HANA_data` | Create Azure NetApp Files volume for HANA data. | Optional | |
+> | `ANF_HANA_data_use_existing_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes |
+> | `ANF_HANA_data_volume_name` | Azure NetApp Files volume name for HANA data. | Optional | |
+> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data. | Optional | default size 256 |
+> | `ANF_HANA_data_volume_throughput` | Azure NetApp Files volume throughput for HANA data. | Optional | default is 128 MBs/s |
+> | | | | |
+> | `ANF_HANA_log` | Create Azure NetApp Files volume for HANA log. | Optional | |
+> | `ANF_HANA_log_use_existing` | Use existing Azure NetApp Files volume for HANA log. | Optional | Use for pre-created volumes |
+> | `ANF_HANA_log_volume_name` | Azure NetApp Files volume name for HANA log. | Optional | |
+> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log. | Optional | default size 128 |
+> | `ANF_HANA_log_volume_throughput` | Azure NetApp Files volume throughput for HANA log. | Optional | default is 128 MBs/s |
+> | | | | |
+> | `ANF_HANA_shared` | Create Azure NetApp Files volume for HANA shared. | Optional | |
+> | `ANF_HANA_shared_use_existing` | Use existing Azure NetApp Files volume for HANA shared. | Optional | Use for pre-created volumes |
+> | `ANF_HANA_shared_volume_name` | Azure NetApp Files volume name for HANA shared. | Optional | |
+> | `ANF_HANA_shared_volume_size` | Azure NetApp Files volume size in GB for HANA shared. | Optional | default size 128 |
+> | `ANF_HANA_shared_volume_throughput` | Azure NetApp Files volume throughput for HANA shared. | Optional | default is 128 MBs/s |
+> | | | | |
+> | `ANF_sapmnt` | Create Azure NetApp Files volume for sapmnt. | Optional | |
+> | `ANF_sapmnt_use_existing_volume` | Use existing Azure NetApp Files volume for sapmnt. | Optional | Use for pre-created volumes |
+> | `ANF_sapmnt_volume_name` | Azure NetApp Files volume name for sapmnt. | Optional | |
+> | `ANF_sapmnt_volume_size` | Azure NetApp Files volume size in GB for sapmnt. | Optional | default size 128 |
+> | `ANF_sapmnt_throughput` | Azure NetApp Files volume throughput for sapmnt. | Optional | default is 128 MBs/s |
+> | | | | |
+> | `ANF_usr_sap` | Create Azure NetApp Files volume for usrsap. | Optional | |
+> | `ANF_usr_sap_use_existing` | Use existing Azure NetApp Files volume for usrsap. | Optional | Use for pre-created volumes |
+> | `ANF_usr_sap_volume_name` | Azure NetApp Files volume name for usrsap. | Optional | |
+> | `ANF_usr_sap_volume_size` | Azure NetApp Files volume size in GB for usrsap. | Optional | default size 128 |
+> | `ANF_usr_sap_throughput` | Azure NetApp Files volume throughput for usrsap. | Optional | default is 128 MBs/s |
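As a hedged illustration of how the renamed HANA variables above might be used, a system configuration tfvars fragment could look like the following (values are examples only, not sizing recommendations):

```terraform
# Hypothetical tfvars fragment using the ANF variables documented above.
ANF_HANA_data                   = true
ANF_HANA_data_volume_size       = 512   # GB; default is 256
ANF_HANA_data_volume_throughput = 256   # MBs/s; default is 128

ANF_HANA_log                    = true
ANF_HANA_log_use_existing       = true  # reuse a pre-created volume
ANF_HANA_log_volume_name        = "HANA-log-volume"
```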
## Oracle parameters
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
use_private_endpoint = true
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | --| -- | |
-> | `ANF_account_arm_id` | Azure resource identifier for the Azure NetApp Files Account. | Optional | For brown field deployments. |
> | `ANF_account_name` | Name for the Azure NetApp Files Account. | Optional | | > | `ANF_service_level` | Service level for the Azure NetApp Files Capacity Pool. | Optional | |
-> | `ANF_use_existing_pool` | Use existing the Azure NetApp Files Capacity Pool. | Optional | |
> | `ANF_pool_size` | The size (in GB) of the Azure NetApp Files Capacity Pool. | Optional | |
+> | `ANF_qos_type` | The Quality of Service type of the pool (Auto or Manual). | Optional | |
+> | `ANF_use_existing_pool` | Use the existing Azure NetApp Files Capacity Pool. | Optional | |
> | `ANF_pool_name` | The name of the Azure NetApp Files Capacity Pool. | Optional | |
+> | `ANF_account_arm_id` | Azure resource identifier for the Azure NetApp Files Account. | Optional | For brown field deployments. |
> | | | | |
-> | `ANF_use_existing_transport_volume` | Defines if an existing transport volume is used. | Optional | |
-> | `ANF_transport_volume_name` | Defines the transport volume name. | Optional | |
+> | `ANF_transport_volume_use_existing` | Defines if an existing transport volume is used. | Optional | |
+> | `ANF_transport_volume_name` | Defines the transport volume name. | Optional | For brown field deployments. |
> | `ANF_transport_volume_size` | Defines the size of the transport volume in GB. | Optional | | > | `ANF_transport_volume_throughput` | Defines the throughput of the transport volume. | Optional | | > | | | | |
-> | `ANF_use_existing_install_volume` | Defines if an existing install volume is used. | Optional | |
-> | `ANF_install_volume_name` | Defines the install volume name. | Optional | |
-> | `ANF_install_volume_size` | Defines the size of the install volume in GB. | Optional | |
-> | `ANF_install_volume_throughput` | Defines the throughput of the install volume. | Optional | |
+> | `ANF_install_volume_use_existing` | Defines if an existing install volume is used. | Optional | |
+> | `ANF_install_volume_name` | Defines the install volume name. | Optional | For brown field deployments. |
+> | `ANF_install_volume_size` | Defines the size of the install volume in GB. | Optional | |
+> | `ANF_install_volume_throughput` | Defines the throughput of the install volume. | Optional | |
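For instance, a brown-field workload zone that reuses a pre-created capacity pool with manual QoS might reference the variables above like this (a hypothetical tfvars fragment; names come from the table, values are illustrative):

```terraform
# Hypothetical workload-zone tfvars fragment for a pre-created ANF capacity pool.
ANF_use_existing_pool           = true
ANF_pool_name                   = "existing-pool"
ANF_qos_type                    = "Manual"

ANF_transport_volume_size       = 128   # GB
ANF_transport_volume_throughput = 64    # MBs/s
```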
**Minimum required ANF definition**
virtual-machines Dbms_Guide_Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_ibm.md
Title: IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload | Microsoft Docs description: IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload- tags: azure-resource-manager keywords: 'Azure, Db2, SAP, IBM' Previously updated : 06/29/2022 Last updated : 08/24/2022
virtual-machines Dbms_Guide_Maxdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_maxdb.md
Title: SAP MaxDB, liveCache, and Content Server deployment on Azure VMs | Microsoft Docs description: SAP MaxDB, liveCache, and Content Server deployment on Azure- tags: azure-resource-manager
-keywords: ''
Previously updated : 07/12/2018 Last updated : 08/24/2022 - # SAP MaxDB, liveCache, and Content Server deployment on Azure VMs
virtual-machines Dbms_Guide_Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_oracle.md
Title: Oracle Azure Virtual Machines DBMS deployment for SAP workload | Microsoft Docs description: Oracle Azure Virtual Machines DBMS deployment for SAP workload- tags: azure-resource-manager keywords: 'SAP, Azure, Oracle, Data Guard' Previously updated : 07/18/2022 Last updated : 08/24/2022 - # Azure Virtual Machines Oracle DBMS deployment for SAP workload
virtual-machines Dbms_Guide_Sapase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_sapase.md
Title: SAP ASE Azure Virtual Machines DBMS deployment for SAP workload | Microsoft Docs description: SAP ASE Azure Virtual Machines DBMS deployment for SAP workload- tags: azure-resource-manager
-keywords: ''
Previously updated : 11/02/2021 Last updated : 08/24/2022 - + # SAP ASE Azure Virtual Machines DBMS deployment for SAP workload This document covers several different areas to consider when deploying SAP ASE in Azure IaaS. As a precondition to this document, you should have read the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md) and other guides in the [SAP workload on Azure documentation](./get-started.md). This document covers SAP ASE running on Linux and on Windows operating systems. The minimum supported release on Azure is SAP ASE 16.0.02 (Release 16 Support Pack 2). It's recommended to deploy the latest version of SAP ASE and the latest patch level; as a minimum, SAP ASE 16.0.03.07 (Release 16 Support Pack 3 Patch Level 7) is recommended. The most recent version of SAP ASE can be found in [Targeted ASE 16.0 Release Schedule and CR list Information](https://wiki.scn.sap.com/wiki/display/SYBASE/Targeted+ASE+16.0+Release+Schedule+and+CR+list+Information).
virtual-machines Dbms_Guide_Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_sqlserver.md
Title: SQL Server Azure Virtual Machines DBMS deployment for SAP workload | Microsoft Docs description: SQL Server Azure Virtual Machines DBMS deployment for SAP workload- tags: azure-resource-manager keywords: 'Azure, SQL Server, SAP, AlwaysOn, Always On' Previously updated : 03/30/2022 Last updated : 08/24/2022
virtual-machines Hana Li Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-li-portal.md
Title: Azure HANA Large Instances control through Azure portal | Microsoft Docs description: Describes the way how you can identify and interact with Azure HANA Large Instances through portal- tags: azure-resource-manager
-keywords: ''
Last updated 07/01/2021
virtual-machines Hana Vm Operations Netapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-netapp.md
Title: SAP HANA Azure virtual machine ANF configuration | Microsoft Docs description: Azure NetApp Files Storage recommendations for SAP HANA.- tags: azure-resource-manager keywords: 'SAP, Azure, ANF, HANA, Azure NetApp Files, snapshot' Last updated 02/07/2022 - # NFS v4.1 volumes on Azure NetApp Files for SAP HANA
virtual-machines Hana Vm Operations Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-storage.md
Title: SAP HANA Azure virtual machine storage configurations | Microsoft Docs description: Storage recommendations for VM that have SAP HANA deployed in them.- tags: azure-resource-manager keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage' Last updated 02/28/2022 - # SAP HANA Azure virtual machine storage configurations
virtual-machines Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations.md
Title: SAP HANA infrastructure configurations and operations on Azure | Microsoft Docs description: Operations guide for SAP HANA systems that are deployed on Azure virtual machines.- tags: azure-resource-manager
-keywords: ''
Last updated 06/06/2022 - # SAP HANA infrastructure configurations and operations on Azure
virtual-network-manager Create Virtual Network Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-cli.md
Previously updated : 11/16/2021 Last updated : 08/23/2022 ms.devlang: azurecli
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
To begin your configuration, sign in to your Azure account. If you use the Cloud Shell "Try It", you're signed in automatically. Use the following examples to help you connect:
-```azurecli-interactive
+```azurecli
az login ``` Select the subscription where network manager will be deployed.
-```azurecli-interactive
+```azurecli
az account set \
- --subscription "<subscription ID>"
+ --subscription "<subscription_id>"
```
+Update the Azure Virtual Network Manager extension for Azure CLI.
+```azurecli
+az extension update --name virtual-network-manager
+```
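+You can optionally confirm which version of the extension is installed with [az extension show](/cli/azure/extension#az-extension-show):
+
+```azurecli
+az extension show --name virtual-network-manager --query version
+```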
## Create a resource group
-Before you can deploy Azure Virtual Network Manager, you have to create a resource group to host the . Create a rnetwork manager esource group with [az group create](/cli/azure/group#az-group-create). This example creates a resource group named **myAVNMResourceGroup** in the **westus** location:
+Before you can deploy Azure Virtual Network Manager, you have to create a resource group to host the network manager. Create the resource group with [az group create](/cli/azure/group#az-group-create). This example creates a resource group named **myAVNMResourceGroup** in the **westus** location:
-```azurecli-interactive
+```azurecli
az group create \ --name "myAVNMResourceGroup" \ --location "westus"
az group create \
## Create a Virtual Network Manager
-Define the scope and access type this Network Manager instance will have. Create the scope by using [az network manager create](/cli/azure/network/manager#az-network-manager-create). Replace the value *{mgName}* with management group name or *{subscriptionId}* with subscriptions you want Virtual Network Manager to manage virtual networks for.
+Define the scope and access type for this Network Manager instance. Create the scope by using [az network manager create](/cli/azure/network/manager#az-network-manager-create). Replace *<subscription_id\>* with the subscription you want Virtual Network Manager to manage virtual networks for. For management groups, replace *<mgName\>* with the management group to manage.
-```azurecli-interactive
+```azurecli
az network manager create \ --location "westus" \ --name "myAVNM" \ --resource-group "myAVNMResourceGroup" \ --scope-accesses "Connectivity" "SecurityAdmin" \
- --network-manager-scopes management-groups="/Microsoft.Management/{mgName}" subscriptions="/subscriptions/{subscriptionId}"
+ --network-manager-scopes subscriptions="/subscriptions/<subscription_id>"
```
+## Create a network group
-## Create three virtual networks
+Virtual Network Manager applies configurations to groups of VNets by placing them in **network groups**. Create a network group with [az network manager group create](/cli/azure/network/manager/group#az-network-manager-group-create).
+
+```azurecli
+az network manager group create \
+ --name "myNetworkGroup" \
+ --network-manager-name "myAVNM" \
+ --resource-group "myAVNMResourceGroup" \
+ --description "Network Group for Production virtual networks"
+```
+## Create virtual networks
-Create three virtual networks with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates virtual networks named **VNetA**, **VNetB** and **VNetC** in the **westus** location. If you already have virtual networks you want create a mesh network with, you can skip to the next section.
+Create five virtual networks with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates virtual networks named **VNetA**, **VNetB**, **VNetC**, **VNetD**, and **VNetE** in the **West US** location. Each virtual network has a **NetworkType** tag used for dynamic membership. If you already have virtual networks you want to create a mesh network with, you can skip to the next section.
-```azurecli-interactive
+```azurecli
az network vnet create \ --name "VNetA" \ --resource-group "myAVNMResourceGroup" \
- --address-prefix "10.0.0.0/16"
+ --address-prefix "10.0.0.0/16" \
+ --tags "NetworkType=Prod"
az network vnet create \ --name "VNetB" \ --resource-group "myAVNMResourceGroup" \
- --address-prefix "10.1.0.0/16"
+ --address-prefix "10.1.0.0/16" \
+ --tags "NetworkType=Prod"
az network vnet create \ --name "VNetC" \ --resource-group "myAVNMResourceGroup" \
- --address-prefix "10.2.0.0/16"
-```
+ --address-prefix "10.2.0.0/16" \
+ --tags "NetworkType=Prod"
+
+az network vnet create \
+ --name "VNetD" \
+ --resource-group "myAVNMResourceGroup" \
+ --address-prefix "10.3.0.0/16" \
+ --tags "NetworkType=Test"
+az network vnet create \
+ --name "VNetE" \
+ --resource-group "myAVNMResourceGroup" \
+ --address-prefix "10.4.0.0/16" \
+ --tags "NetworkType=Test"
+```
### Add a subnet to each virtual network To complete the configuration of the virtual networks, add a /24 subnet to each one. Create a subnet configuration named **default** with [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create):
-```azurecli-interactive
+```azurecli
az network vnet subnet create \ --name "default" \ --resource-group "myAVNMResourceGroup" \
az network vnet subnet create \
--resource-group "myAVNMResourceGroup" \ --vnet-name "VNetC" \ --address-prefix "10.2.0.0/24"+
+az network vnet subnet create \
+ --name "default" \
+ --resource-group "myAVNMResourceGroup" \
+ --vnet-name "VNetD" \
+ --address-prefix "10.3.0.0/24"
+
+az network vnet subnet create \
+ --name "default" \
+ --resource-group "myAVNMResourceGroup" \
+ --vnet-name "VNetE" \
+ --address-prefix "10.4.0.0/24"
```
+## Define membership for a mesh configuration
-## Create a network group
+Azure Virtual Network Manager provides two methods for adding membership to a network group. Static membership involves manually adding virtual networks, and dynamic membership uses Azure Policy to add virtual networks automatically based on conditions. Choose the option you want to use for your mesh configuration membership:
-Create a network group using static membership with [az network manager group create](/cli/azure/network/manager/group#az-network-manager-group-create). Replace the value *{subscriptionId}* with the subscription the virtual network is in.
+### Static membership option
-```azurecli-interactive
-az network manager group create \
- --name "myNetworkGroup" \
- --network-manager-name "myAVNM" \
- --group-members resource-id="/subscriptions/{subscriptionId}/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/virtualNetworks/VNetA" \
- --group-members resource-id="/subscriptions/{subscriptionId}/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/virtualNetworks/VNetB" \
- --group-members resource-id="/subscriptions/{subscriptionId}/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/virtualNetworks/VNetC" \
- --resource-group "myAVNMResourceGroup"
+Using **static membership**, you'll manually add three VNets for your mesh configuration to your network group with [az network manager group static-member create](/cli/azure/network/manager/group/static-member#az-network-manager-group-static-member-create). Replace *<subscription_id\>* with the subscription these VNets were created under.
+
+```azurecli
+az network manager group static-member create \
+ --name "VNetA" \
+ --network-group "myNetworkGroup" \
+ --network-manager "myAVNM" \
+ --resource-group "myAVNMResourceGroup" \
+ --resource-id "/subscriptions/<subscription_id>/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/virtualnetworks/VNetA"
+```
+
+```azurecli
+az network manager group static-member create \
+ --name "VNetB" \
+ --network-group "myNetworkGroup" \
+ --network-manager "myAVNM" \
+ --resource-group "myAVNMResourceGroup" \
+ --resource-id "/subscriptions/<subscription_id>/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/virtualnetworks/VNetB"
+```
+
+```azurecli
+az network manager group static-member create \
+ --name "VNetC" \
+ --network-group "myNetworkGroup" \
+ --network-manager "myAVNM" \
+ --resource-group "myAVNMResourceGroup" \
+ --resource-id "/subscriptions/<subscription_id>/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/virtualnetworks/VNetC"
+```
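+As an optional check, you can confirm that all three virtual networks landed in the network group by listing its members with [az network manager group static-member list](/cli/azure/network/manager/group/static-member#az-network-manager-group-static-member-list). The parameter names below mirror the create commands above:
+
+```azurecli
+az network manager group static-member list \
+  --network-group "myNetworkGroup" \
+  --network-manager "myAVNM" \
+  --resource-group "myAVNMResourceGroup"
+```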
+### Dynamic membership option
+
+Using [Azure Policy](concept-azure-policy-integration.md), you'll dynamically add the three VNets that have a **NetworkType** tag value of *Prod* to the network group. These are the three virtual networks that become part of the mesh configuration.
+
+> [!NOTE]
+> Policies can be applied to a subscription or management group, and must always be defined *at or above* the level they're created. Only virtual networks within a policy scope are added to a Network Group.
+
+### Create a Policy definition
+Create a policy definition with [az policy definition create](/cli/azure/policy/definition#az-policy-definition-create) for virtual networks tagged **NetworkType: Prod**. Replace *<subscription_id\>* with the subscription you want to apply this policy to. To apply it to a management group instead, replace `--subscription <subscription_id>` with `--management-group <mgName>`.
+
+```azurecli
+az policy definition create \
+ --name "ProdVNets" \
+ --description "Choose Prod virtual networks only" \
+ --rules "{\"if\":{\"allOf\":[{\"field\":\"Name\",\"contains\":\"VNet\"},{\"field\":\"tags['NetworkType']\",\"equals\":\"Prod\"}]},\"then\":{\"effect\":\"addToNetworkGroup\",\"details\":{\"networkGroupId\":\"/subscriptions/<subscription_id>/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/networkManagers/myAVNM/networkGroups/myNetworkGroup\"}}}" \
+ --subscription <subscription_id> \
+ --mode "Microsoft.Network.Data"
+
+```
+### Apply a Policy definition
+
+Once a policy is defined, it must also be applied with [az policy assignment create](/cli/azure/policy/assignment#az-policy-assignment-create). Replace *<subscription_id\>* with the subscription you want to apply this policy to. To apply it to a management group instead, replace `--scope "/subscriptions/<subscription_id>"` with `--scope "/providers/Microsoft.Management/managementGroups/<mgName>"`, and replace *<mgName\>* with your management group.
+
+```azurecli
+az policy assignment create \
+ --name "ProdVNets" \
+ --description "Take only virtual networks tagged NetworkType:Prod" \
+ --scope "/subscriptions/<subscription_id>" \
+ --policy "/subscriptions/<subscription_id>/providers/Microsoft.Authorization/policyDefinitions/ProdVNets"
``` ## Create a configuration
-Create a mesh network topology configuration with [az network manager connect-config create](/cli/azure/network/manager/connect-config#az-network-manager-connect-config-create):
+Now that the network group is created and contains the correct VNets, create a mesh network topology configuration with [az network manager connect-config create](/cli/azure/network/manager/connect-config#az-network-manager-connect-config-create). Replace *<subscription_id\>* with your subscription.
-```azurecli-interactive
+```azurecli
az network manager connect-config create \ --configuration-name "connectivityconfig" \
- --description "CLI Mesh Connectivity Config Example" \
- --applies-to-groups network-group-id="/subscriptions/{subscriptionId}/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/networkManagers/myAVNM/networkGroups/myNetworkGroup" \
+ --description "Production Mesh Connectivity Config Example" \
+ --applies-to-groups network-group-id="/subscriptions/<subscription_id>/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/networkManagers/myAVNM/networkGroups/myNetworkGroup" \
--connectivity-topology "Mesh" \
- --delete-existing-peering true \
--network-manager-name "myAVNM" \ --resource-group "myAVNMResourceGroup" ```- ## Commit deployment
-Commit a connectivity configuration with [az network manager post-commit](/cli/azure/network/manager#az-network-manager-post-commit):
+For the configuration to take effect, commit the configuration to the target regions with [az network manager post-commit](/cli/azure/network/manager#az-network-manager-post-commit):
-```azurecli-interactive
+```azurecli
az network manager post-commit \ --network-manager-name "myAVNM" \ --commit-type "Connectivity" \
- --configuration-ids "/subscriptions/{subscriptionId}/resourceGroups/myANVMResourceGroup/providers/Microsoft.Network/networkManagers/myAVNM/connectivityConfigurations/connectivityconfig" \
+ --configuration-ids "/subscriptions/<subscription_id>/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/networkManagers/myAVNM/connectivityConfigurations/connectivityconfig" \
--target-locations "westus" \ --resource-group "myAVNMResourceGroup" ```
+## Verify configuration
+Verify that the configuration was applied to the virtual networks with [az network manager list-effective-connectivity-config](/cli/azure/network/manager#az-network-manager-list-effective-connectivity-config):
+
+```azurecli
+az network manager list-effective-connectivity-config \
+ --resource-group "myAVNMResourceGroup" \
+ --virtual-network-name "VNetA"
+
+az network manager list-effective-connectivity-config \
+ --resource-group "myAVNMResourceGroup" \
+ --virtual-network-name "VNetB"
+
+az network manager list-effective-connectivity-config \
+ --resource-group "myAVNMResourceGroup" \
+ --virtual-network-name "VNetC"
+
+az network manager list-effective-connectivity-config \
+ --resource-group "myAVNMResourceGroup" \
+ --virtual-network-name "VNetD"
+```
+For the virtual networks that are part of the connectivity configuration, you'll see an output similar to this:
+
+```json
+{
+ "skipToken": "",
+ "value": [
+ {
+ "appliesToGroups": [
+ {
+ "groupConnectivity": "None",
+ "isGlobal": "False",
+ "networkGroupId": "/subscriptions/<subscription_id>/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/networkManagers/myAVNM/networkGroups/myNetworkGroup",
+ "useHubGateway": "False"
+ }
+ ],
+ "configurationGroups": [
+ {
+ "description": "Network Group for Production virtual networks",
+ "id": "/subscriptions/<subscription_id>/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/networkManagers/myAVNM/networkGroups/myNetworkGroup",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myAVNMResourceGroup"
+ }
+ ],
+ "connectivityTopology": "Mesh",
+ "deleteExistingPeering": "False",
+ "description": "Production Mesh Connectivity Config Example",
+ "hubs": [],
+ "id": "/subscriptions/<subscription_id>/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/networkManagers/myAVNM/connectivityConfigurations/connectivityconfig",
+ "isGlobal": "False",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myAVNMResourceGroup"
+ }
+ ]
+}
+```
+For virtual networks that aren't part of the network group, like **VNetD**, you'll see an output similar to this:
+
+```json
+{
+ "skipToken": "",
+ "value": []
+}
+```
## Clean up resources If you no longer need the Azure Virtual Network Manager, you'll need to make sure all of the following are true before you can delete the resource:
If you no longer need the Azure Virtual Network Manager, you'll need to make sur
1. Remove the connectivity deployment by committing no configurations with [az network manager post-commit](/cli/azure/network/manager#az-network-manager-post-commit):
- ```azurecli-interactive
+ ```azurecli
az network manager post-commit \ --network-manager-name "myAVNM" \ --commit-type "Connectivity" \
If you no longer need the Azure Virtual Network Manager, you'll need to make sur
1. Remove the connectivity configuration with [az network manager connect-config delete](/cli/azure/network/manager/connect-config#az-network-manager-connect-config-delete):
- ```azurecli-interactive
+ ```azurecli
az network manager connect-config delete \ --configuration-name "connectivityconfig" \ --name "myAVNM" \
If you no longer need the Azure Virtual Network Manager, you'll need to make sur
1. Remove the network group with [az network manager group delete](/cli/azure/network/manager/group#az-network-manager-group-delete):
- ```azurecli-interactive
+ ```azurecli
az network manager group delete \ --name "myNetworkGroup" \ --network-manager-name "myAVNM" \
If you no longer need the Azure Virtual Network Manager, you'll need to make sur
1. Delete the network manager instance with [az network manager delete](/cli/azure/network/manager#az-network-manager-delete):
- ```azurecli-interactive
+ ```azurecli
az network manager delete \ --name "myAVNM" \ --resource-group "myAVNMResourceGroup"
If you no longer need the Azure Virtual Network Manager, you'll need to make sur
1. If you no longer need the resource created, delete the resource group with [az group delete](/cli/azure/group#az-group-delete):
- ```azurecli-interactive
+ ```azurecli
az group delete \ --name "myAVNMResourceGroup" ```
If you no longer need the Azure Virtual Network Manager, you'll need to make sur
After you've created the Azure Virtual Network Manager, continue on to learn how to block network traffic by using the security admin configuration: > [!div class="nextstepaction"]
-> [Block network traffic with security admin rules](how-to-block-network-traffic-portal.md)
+> [Block network traffic with security admin rules](how-to-block-network-traffic-portal.md)
+> [Create a secured hub and spoke network](tutorial-create-secured-hub-and-spoke.md)
virtual-network Add Dual Stack Ipv6 Vm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-cli.md
+
+ Title: Add a dual-stack network to an existing virtual machine - Azure CLI
+
+description: Learn how to add a dual-stack network to an existing virtual machine using the Azure CLI.
+++++ Last updated : 08/24/2022+
+ms.devlang: azurecli
++
+# Add a dual-stack network to an existing virtual machine using the Azure CLI
+
+In this article, you'll add IPv6 support to an existing virtual network. You'll configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network will support private IPv6 addresses. The existing virtual machine network configuration will contain public and private IPv4 and IPv6 addresses.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
++
+- This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+- An existing virtual network, public IP address, and virtual machine in your subscription that are configured for IPv4 support only. For more information about creating a virtual network, a public IP address, and a virtual machine, see [Quickstart: Create a Linux virtual machine with the Azure CLI](/azure/virtual-machines/linux/quick-create-cli).
+
+ - The example virtual network used in this article is named **myVNet**. Replace this value with the name of your virtual network.
+
+ - The example virtual machine used in this article is named **myVM**. Replace this value with the name of your virtual machine.
+
+ - The example public IP address used in this article is named **myPublicIP**. Replace this value with the name of your public IP address.
+
+## Add IPv6 to virtual network
+
+In this section, you'll add an IPv6 address space and subnet to your existing virtual network.
+
+Use [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update) to update the virtual network.
+
+```azurecli-interactive
+az network vnet update \
+ --address-prefixes 10.0.0.0/16 2404:f800:8000:122::/63 \
+ --resource-group myResourceGroup \
+ --name myVNet
+```
+
+Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to create the subnet.
+
+```azurecli-interactive
+az network vnet subnet update \
+ --address-prefixes 10.0.0.0/24 2404:f800:8000:122::/64 \
+ --name myBackendSubnet \
+ --resource-group myResourceGroup \
+ --vnet-name myVNet
+```
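+Optionally, you can verify that both the IPv4 and IPv6 ranges are now present on the virtual network with [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show):
+
+```azurecli-interactive
+az network vnet show \
+  --resource-group myResourceGroup \
+  --name myVNet \
+  --query "addressSpace.addressPrefixes"
+```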
+
+## Create IPv6 public IP address
+
+In this section, you'll create an IPv6 public IP address for the virtual machine.
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create the public IP address.
+
+```azurecli-interactive
+ az network public-ip create \
+ --resource-group myResourceGroup \
+ --name myPublicIP-Ipv6 \
+ --sku Standard \
+ --version IPv6 \
+ --zone 1 2 3
+```
+## Add IPv6 configuration to virtual machine
+
+Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-create) to create the IPv6 configuration for the NIC. The **`--nic-name`** used in the example is **myvm569**. Replace this value with the name of the network interface in your virtual machine.
+
+```azurecli-interactive
+ az network nic ip-config create \
+ --resource-group myResourceGroup \
+ --name Ipv6config \
+ --nic-name myvm569 \
+ --private-ip-address-version IPv6 \
+ --vnet-name myVNet \
+ --subnet myBackendSubnet \
+ --public-ip-address myPublicIP-IPv6
+```
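+To check that the new IPv6 configuration was added alongside the original IPv4 configuration, you can list the network interface's IP configurations with [az network nic ip-config list](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-list). As above, **myvm569** is an example; use your network interface name:
+
+```azurecli-interactive
+az network nic ip-config list \
+  --resource-group myResourceGroup \
+  --nic-name myvm569 \
+  --output table
+```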
+
+## Next steps
+
+In this article, you learned how to create an Azure Virtual machine with a dual-stack network.
+
+For more information about IPv6 and IP addresses in Azure, see:
+
+- [Overview of IPv6 for Azure Virtual Network.](ipv6-overview.md)
+
+- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
++
virtual-network Add Dual Stack Ipv6 Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-portal.md
+
+ Title: Add a dual-stack network to an existing virtual machine - Azure portal
+
+description: Learn how to add a dual stack network to an existing virtual machine using the Azure portal.
+++++ Last updated : 08/19/2022+++
+# Add a dual-stack network to an existing virtual machine using the Azure portal
+
+In this article, you'll add IPv6 support to an existing virtual network. You'll configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network will support private IPv6 addresses. The existing virtual machine network configuration will contain public and private IPv4 and IPv6 addresses.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing virtual network, public IP address, and virtual machine in your subscription that are configured for IPv4 support only. For more information about creating a virtual network, a public IP address, and a virtual machine, see [Quickstart: Create a Linux virtual machine in the Azure portal](/azure/virtual-machines/linux/quick-create-portal).
+
+ - The example virtual network used in this article is named **myVNet**. Replace this value with the name of your virtual network.
+
+ - The example virtual machine used in this article is named **myVM**. Replace this value with the name of your virtual machine.
+
+ - The example public IP address used in this article is named **myPublicIP**. Replace this value with the name of your public IP address.
+
+## Add IPv6 to virtual network
+
+In this section, you'll add an IPv6 address space and subnet to your existing virtual network.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **myVNet** in **Virtual networks**.
+
+4. Select **Address space** in **Settings**.
+
+5. Select the box **Add additional address range**. Enter **2404:f800:8000:122::/63**.
+
+6. Select **Save**.
+
+7. Select **Subnets** in **Settings**.
+
+8. In **Subnets**, select your subnet name from the list. In this example, the subnet name is **default**.
+
+9. In the subnet configuration, select the box **Add IPv6 address space**.
+
+10. In **IPv6 address space**, enter **2404:f800:8000:122::/64**.
+
+11. Select **Save**.
+
+## Create IPv6 public IP address
+
+In this section, you'll create an IPv6 public IP address for the virtual machine.
+
+1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results.
+
+2. Select **+ Create**.
+
+3. Enter or select the following information in **Create public IP address**.
+
+ | Setting | Value |
+ | - | -- |
+ | IP version | Select IPv6. |
+ | SKU | Select **Standard**. |
+ | **IPv6 IP Address Configuration** | |
+ | Name | Enter **myPublicIP-IPv6**. |
+ | Idle timeout (minutes) | Leave the default of **4**. |
+ | Subscription | Select your subscription. |
+ | Resource group | Select your resource group. In this example, the resource group is named **myResourceGroup**. |
+ | Location | Select your location. In this example, the location is **East US 2**. |
+ | Availability zone | Select **Zone-redundant**. |
+
+4. Select **Create**.
+
+## Add IPv6 configuration to virtual machine
+
+The virtual machine must be stopped before you can add the IPv6 configuration. You'll stop the virtual machine and then add the IPv6 configuration to its network interface.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM** or your existing virtual machine name.
+
+3. Stop **myVM**.
+
+4. Select **Networking** in **Settings**.
+
+5. Select your network interface name next to **Network Interface:**. In this example, the network interface is named **myvm404**.
+
+6. Select **IP configurations** in **Settings** of the network interface.
+
+7. In **IP configurations**, select **+ Add**.
+
+8. Enter or select the following information in **Add IP configuration**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **Ipv6config**. |
+ | IP version | Select **IPv6**. |
+ | **Private IP address settings** | |
+ | Allocation | Leave the default of **Dynamic**. |
+ | Public IP address | Select **Associate**. |
+ | Public IP address | Select **myPublicIP-IPv6**. |
+
+9. Select **OK**.
+
+10. Start **myVM**.
+
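+If you script this change instead of using the portal, the stop and start steps around the IP configuration update can be sketched with [az vm deallocate](/cli/azure/vm#az-vm-deallocate) and [az vm start](/cli/azure/vm#az-vm-start), assuming the resource names used in this article:
+
+```azurecli
+az vm deallocate \
+  --resource-group myResourceGroup \
+  --name myVM
+
+# Add the IPv6 IP configuration while the VM is deallocated, then restart it.
+az vm start \
+  --resource-group myResourceGroup \
+  --name myVM
+```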
+## Next steps
+
+In this article, you learned how to add a dual stack IP configuration to an existing virtual network and virtual machine.
+
+For more information about IPv6 and IP addresses in Azure, see:
+
+- [Overview of IPv6 for Azure Virtual Network.](ipv6-overview.md)
+
+- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
virtual-network Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-portal.md
For more information about Azure Bastion, see [Azure Bastion](~/articles/bastion
1. Complete the steps in [Connect to myVM1](#connect-to-myvm1), but connect to **myVM2**.
-1. Open PowerShell on **myVM2**, enter `ping myvm1`.
+1. Open PowerShell on **myVM2**, enter `ping myVM1`.
You'll receive a successful reply message like this:
virtual-network Update Virtual Network Peering Address Space https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/update-virtual-network-peering-address-space.md
In this section, you'll modify the address range prefix for an existing address
:::image type="content" source="media/update-virtual-network-peering-address-space/verify-address-space-thumb.png" alt-text="Image the Address Space page where you verify the address space has changed." lightbox="media/update-virtual-network-peering-address-space/verify-address-space-full.png"::: > [!NOTE]
-> When an update is made to the address space for a virtual network, you will need to sync the virtual network peer for each remote peered VNet to learn of the new address space updates.
+> When an update is made to the address space for a virtual network, you'll need to sync the virtual network peer for each remote peered VNet so it learns of the new address space. We recommend that you sync after every address space resize operation instead of performing multiple resize operations and then running a single sync operation.
> > The following actions will require a sync: > - Modifying the address range prefix of an existing address range (For example changing 10.1.0.0/16 to 10.1.0.0/18)
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
Addresses can be resized in the following ways:
- Adding address ranges to a virtual network - Deleting address ranges from a virtual network
-Synching of virtual network peers can be performed through the Azure portal or with Azure PowerShell.
-To learn how to update the address space for a peered virtual network, see [Updating the address space for a peered virtual network](./update-virtual-network-peering-address-space.md).
+Syncing of virtual network peers can be performed through the Azure portal or with Azure PowerShell. We recommend that you sync after every address space resize operation instead of performing multiple resize operations and then running a single sync operation. To learn how to update the address space for a peered virtual network, see [Updating the address space for a peered virtual network](./update-virtual-network-peering-address-space.md).
> [!IMPORTANT] > This feature doesn't support scenarios where the virtual network to be updated is peered with: > * A classic virtual network
virtual-wan Create Bgp Peering Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-portal.md
Title: Create a BGP peering with virtual hub - Azure portal
+ Title: 'Configure BGP peering to an NVA: Azure portal'
description: Learn how to create a BGP peering with Virtual WAN hub router.- - Previously updated : 07/20/2022 Last updated : 08/24/2022
-# How to create BGP peering with virtual hub- Azure portal
+# Configure BGP peering to an NVA - Azure portal
-This article helps you configure an Azure Virtual WAN hub router to peer with a Network Virtual Appliance (NVA) in your virtual network using the Azure portal. The virtual hub router learns routes from the NVA in a spoke VNet that is connected to a virtual WAN hub. The virtual hub router also advertises the virtual network routes to the NVA. For more information, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md).
+This article helps you configure an Azure Virtual WAN hub router to peer with a Network Virtual Appliance (NVA) in your virtual network by using BGP peering via the Azure portal. The virtual hub router learns routes from the NVA in a spoke VNet that's connected to a virtual WAN hub. The virtual hub router also advertises the virtual network routes to the NVA. For more information, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md).
:::image type="content" source="./media/create-bgp-peering-hub-portal/diagram.png" alt-text="Diagram of configuration."::: ## Prerequisites
-Verify that you have met the following criteria before beginning your configuration:
+Verify that you've met the following criteria before beginning your configuration:
[!INCLUDE [Before you begin](../../includes/virtual-wan-before-include.md)]
A hub is a virtual network that can contain gateways for site-to-site, ExpressRo
[!INCLUDE [Create a hub](../../includes/virtual-wan-hub-basics.md)]
+Once you have the settings configured, click **Review + Create** to validate, then click **Create**. The hub will begin provisioning. After the hub is created, go to the hub's **Overview** page. When provisioning is completed, the **Routing status** is **Provisioned**.
+ ## <a name="vnet"></a>Connect the VNet to the hub
-In this section, you create a connection between your hub and VNet.
+After your hub router status is provisioned, create a connection between your hub and VNet.
[!INCLUDE [Connect a VNet to a hub](../../includes/virtual-wan-connect-vnet-hub-include.md)] ## Configure a BGP peer
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. On the portal page for your virtual WAN, in the **Connectivity** section, select **Hubs** to view the list of hubs. Click a hub to configure a BGP peer.
-
- :::image type="content" source="./media/create-bgp-peering-hub-portal/hubs.png" alt-text="Screenshot of hubs.":::
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. On the **Virtual Hub** page, under the **Routing** section, select **BGP Peers** and click **+ Add** to add a BGP peer.
+1. On the portal page for your virtual WAN, in the left pane, select **Hubs** to view the list of hubs. Click a hub to configure a BGP peer.
- :::image type="content" source="./media/create-bgp-peering-hub-portal/bgp-peers.png" alt-text="3.":::
+1. On the **Virtual Hub** page, in the left pane, select **BGP Peers**. On the **BGP Peers** page, click **+ Add** to add a BGP peer.
-1. On the **Add BGP Peer** page, complete all the fields.
+ :::image type="content" source="./media/create-bgp-peering-hub-portal/bgp-peers.png" alt-text="Screenshot of BGP Peers page.":::
- :::image type="content" source="./media/create-bgp-peering-hub-portal/add-peer.png" alt-text="4.":::
+1. On the **Add BGP Peer** page, complete the following fields.
-   * **Name** – Resource name to identify a specific BGP peer.
+   * **Name** – Resource name to identify a specific BGP peer.
   * **ASN** – The ASN for the BGP peer.
   * **IPv4 address** – The IPv4 address of the BGP peer.
   * **Virtual Network connection** – Choose the connection identifier that corresponds to the virtual network that hosts the BGP peer.
-1. Click **Add** to complete the BGP peer configuration and view the peer.
-
- :::image type="content" source="./media/create-bgp-peering-hub-portal/view-peer.png" alt-text="Screenshot of peer added.":::
-
-## Modify a BGP peer
-
-1. On the **Virtual Hub** resource, click **BGP Peers** and select the BGP peer. Click **…** then **Edit**.
+1. Click **Add** to complete the BGP peer configuration. You can view the peer on the **BGP Peers** page.
- :::image type="content" source="./media/create-bgp-peering-hub-portal/modify.png" alt-text="Screenshot of edit.":::
+ :::image type="content" source="./media/create-bgp-peering-hub-portal/view-peer.png" alt-text="Screenshot of the BGP peers page with the new peer.":::
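The portal steps above can also be scripted. As a minimal sketch, assuming the Azure CLI `az network vhub bgp-connection create` command with these flags, and using hypothetical placeholder values for the resource group, hub, peer, and connection names, the equivalent call looks roughly like this (composed and printed here rather than executed, so no subscription is needed to try it):

```shell
#!/bin/sh
# Hypothetical placeholder values -- replace with your own resources.
RG="my-resource-group"
HUB="hub1"
PEER_NAME="nva-peer-1"
PEER_ASN=65510
PEER_IP="10.1.0.4"
VNET_CONN="vnet1-connection"   # name of the hub's virtual network connection

# Compose the CLI command that mirrors the Add BGP Peer portal fields.
# Printed instead of run so the sketch is safe to experiment with.
echo "az network vhub bgp-connection create \
 --resource-group $RG --vhub-name $HUB --name $PEER_NAME \
 --peer-asn $PEER_ASN --peer-ip $PEER_IP --vhub-conn $VNET_CONN"
```

Remove the `echo` (and sign in with `az login` first) to run the command against your subscription.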
-1. Once the BGP peer is modified, click **Add** to save.
+### Modify a BGP peer
-## Delete a BGP peer
+1. On the **Virtual Hub** resource, go to the **BGP Peers** page.
+1. Select the BGP peer.
+1. Click **…** at the end of the line for the peer, then select **Edit** from the dropdown.
+1. On the **Edit BGP Peer** page, make any necessary changes, then click **Add**.
-1. On the **Virtual Hub** resource, click **BGP Peers** and select the BGP peer. Click **…** then **Delete**.
+### Delete a BGP peer
- :::image type="content" source="./media/create-bgp-peering-hub-portal/delete.png" alt-text="Screenshot of deleting a peer.":::
+1. On the **Virtual Hub** resource, go to the **BGP Peers** page.
+1. Select the BGP peer.
+1. Click **…** at the end of the line for the peer, then select **Delete** from the dropdown.
+1. Click **Confirm** to confirm that you want to delete this resource.
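Deletion can likewise be scripted. A minimal sketch, assuming the Azure CLI `az network vhub bgp-connection delete` command and the same hypothetical resource names as before (the command is printed rather than executed):

```shell
#!/bin/sh
# Hypothetical placeholder names -- substitute your own resources.
RG="my-resource-group"
HUB="hub1"
PEER_NAME="nva-peer-1"

# CLI equivalent of the portal deletion steps; --yes is assumed to
# skip the confirmation prompt, matching the Confirm step above.
echo "az network vhub bgp-connection delete \
 --resource-group $RG --vhub-name $HUB --name $PEER_NAME --yes"
```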
## Next steps
-* For more information about BGP scenarios, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md).
+For more information about BGP scenarios, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md).
virtual-wan Virtual Wan Point To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-portal.md
Previously updated : 06/16/2022 Last updated : 08/24/2022