Updates from: 10/28/2022 01:09:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
An authentication strength Conditional Access policy works together with [MFA tr
- Email one-time pass (Guest) - Hardware-based OATH token -- **Conditional Access What-if tool** – When running the What-if tool, it correctly returns policies that require authentication strength. However, when you click the authentication strength name, a new page opens with additional information about the methods the user can use. This information may be incorrect.- - **Authentication strength is not enforced on Register security information user action** – If an authentication strength Conditional Access policy targets the **Register security information** user action, the policy doesn't apply. -- **Conditional Access audit log** – When a Conditional Access policy with the authentication strength grant control is created or updated in the Azure AD portal, the audit log includes details about the policy that was updated, but doesn't include details about which authentication strength the Conditional Access policy references. This issue doesn't exist when a policy is created or updated by using Microsoft Graph APIs.- - **Using 'Require one of the selected controls' with 'require authentication strength' control** - After you select the authentication strengths grant control and additional controls, all the selected controls must be satisfied in order to gain access to the resource. Using **Require one of the selected controls** isn't applicable, and will default to requiring all the controls in the policy. -- **Authentication loop** - When the user is required to use Microsoft Authenticator (Phone Sign-in) but isn't registered for this method, they're given instructions on how to set up Microsoft Authenticator that don't include how to enable passwordless sign-in. As a result, the user can get into an authentication loop. To avoid this issue, make sure the user is registered for the method before the Conditional Access policy is enforced. Phone Sign-in can be registered by using the steps outlined here: [Add your work or school account to the Microsoft Authenticator app](https://support.microsoft.com/en-us/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c)
+- **Authentication loop** can happen in one of the following scenarios:
+1. **Microsoft Authenticator (Phone Sign-in)** - When the user is required to use Microsoft Authenticator (Phone Sign-in) but isn't registered for this method, they're given instructions on how to set up Microsoft Authenticator that don't include how to enable passwordless sign-in. As a result, the user can get into an authentication loop. To avoid this issue, make sure the user is registered for the method before the Conditional Access policy is enforced. Phone Sign-in can be registered by using the steps outlined here: [Add your work or school account to the Microsoft Authenticator app ("Sign in with your credentials")](https://support.microsoft.com/en-us/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c)
+2. **Conditional Access policy is targeting all apps** - When the Conditional Access policy targets **All apps** but the user isn't registered for any of the methods required by the authentication strength, the user gets into an authentication loop. To avoid this issue, target specific applications in the Conditional Access policy, or make sure the user is registered for at least one of the authentication methods required by the authentication strength Conditional Access policy.
+ ## Limitations
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 10/21/2022 Last updated : 10/27/2022
Number matching is available for the following scenarios. When enabled, all scen
>[!NOTE] >For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-Number matching is available for sign-in for Azure Government. It's available for combined registration two weeks after General Availability. Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
+Number matching is available for sign-in for Azure Government. However, it's currently not available for Authenticator setup in combined registration. Number matching will be available for Authenticator setup in [combined registration](howto-registration-mfa-sspr-combined.md) by November 30, 2022 for Azure Government.
+
+Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
### Multifactor authentication
During self-service password reset, the Authenticator app notification will show
### Combined registration
-When a user goes through combined registration to set up the Authenticator app, the user is asked to approve a notification as part of adding the account. For users who are enabled for number matching, this notification will show a number that they need to type in their Authenticator app notification. Number matching will be available for combined registration in Azure Government two weeks after General Availability.
+When a user goes through combined registration to set up the Authenticator app, the user is asked to approve a notification as part of adding the account. For users who are enabled for number matching, this notification will show a number that they need to type in their Authenticator app notification. Number matching will be available for Authenticator setup in combined registration in Azure Government by November 30, 2022.
### AD FS adapter
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
These samples show how to write a single-page application secured with Microsoft
> | Angular | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call .NET Core web API](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call .NET Core web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/4-Deployment/README.md)| MSAL Angular | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) | > | Blazor WebAssembly | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/MyOrg/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/B2C/README.md)<br/>&#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-graph-user/Call-MSGraph/README.md)<br/>&#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/Deploy-to-Azure/README.md) | MSAL.js | Implicit Flow | > | JavaScript | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/1-call-api/README.md)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/2-call-api-b2c/README.md)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/1-call-api-graph/README.md)<br/>&#8226; [Call Node.js web API via OBO and CA](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/2-call-api-api-c)| MSAL.js | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Conditional Access |
-> | React | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/1-deploy-storage/README.md)<br/>&#8226; [Deploy to Azure Static Web Apps](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/2-deploy-static/README.md)| MSAL React | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Conditional Access |
+> | React | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/1-deploy-storage/README.md)<br/>&#8226; [Deploy to Azure Static Web Apps](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/2-deploy-static/README.md)<br/>&#8226; [Call Azure REST API and Azure Storage](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/2-call-arm)| MSAL React | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Conditional Access |
## Web applications
The following samples show public client desktop applications that access the Mi
> [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample(s) <br/> on GitHub | Auth<br/> libraries | Auth flow | > | - | -- | - | -- |
-> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Micrsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) | MSAL.NET |&#8226; Authorization code with PKCE <br/> &#8226; Device code |
+> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Microsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) <br/> &#8226; [Authenticate users with MSAL.NET in a WinUI desktop application](https://github.com/Azure-Samples/ms-identity-netcore-winui) | MSAL.NET |&#8226; Authorization code with PKCE <br/> &#8226; Device code |
> | .NET | [Invoke protected API with integrated Windows authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | MSAL.NET | Integrated Windows authentication | > | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Integrated-Windows-Auth-Flow) | MSAL Java | Integrated Windows authentication | > | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE |
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
The value of `{tenant}` varies based on the application's sign-in audience as sh
| `consumers` |Only users with a personal Microsoft account can sign in to the application. | | `8eaef023-2b34-4da1-9baa-8bc8c9d6a490` or `contoso.onmicrosoft.com` | Only users from a specific Azure AD tenant (directory members with a work or school account or directory guests with a personal Microsoft account) can sign in to the application. <br/><br/>The value can be the domain name of the Azure AD tenant or the tenant ID in GUID format. You can also use the consumer tenant GUID, `9188040d-6c67-4c5b-b112-36a304b66dad`, in place of `consumers`. |
+> [!TIP]
+> When you use the `common` or `consumers` authority for personal Microsoft accounts, the consuming resource application must be configured to support such account types, in accordance with [signInAudience](https://learn.microsoft.com/en-us/azure/active-directory/develop/supported-accounts-validation).
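For example, you can retrieve the discovery document for a given authority and inspect the endpoints it advertises. The following is a minimal PowerShell sketch, assuming the `common` authority; substitute a tenant ID or domain name to scope the sign-in audience:

```powershell
# Fetch the OpenID Connect discovery document for the 'common' authority.
$config = Invoke-RestMethod -Uri "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration"

# Endpoints the Microsoft identity platform advertises for this authority
$config.issuer
$config.authorization_endpoint
$config.token_endpoint
```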
+ You can also find your app's OpenID configuration document URI in its app registration in the Azure portal. To find the OIDC configuration document for your app, navigate to the [Azure portal](https://portal.azure.com) and then:
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
Previously updated : 02/15/2022 Last updated : 10/27/2022
Device administrators are assigned to all Azure AD joined devices. You can't s
Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you the granularity to configure distinct administrators for different groups of devices.
-Currently, there's no UI in Intune to manage these policies and they need to be configured using [Custom OMA-URI Settings](/mem/intune/configuration/custom-settings-windows-10). A few considerations for using this policy:
+Organizations can use Intune to manage these policies using [Custom OMA-URI Settings](/mem/intune/configuration/custom-settings-windows-10) or [Account protection policy](/mem/intune/protect/endpoint-security-account-protection-policy). A few considerations for using this policy:
- Adding Azure AD groups through the policy requires the group's SID, which can be obtained by calling the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID is defined by the `securityIdentifier` property in the API response, as shown in the sketches below.
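For illustration, the following is a minimal sketch of reading a group's `securityIdentifier` with a direct Microsoft Graph REST call from PowerShell. The group ID is a placeholder, and `$token` is assumed to already hold a valid Graph access token:

```powershell
# Placeholder values: replace with your group's object ID; $token must hold a valid Graph token.
$groupId = "00000000-0000-0000-0000-000000000000"
$headers = @{ Authorization = "Bearer $token" }

# Request only the needed properties; securityIdentifier holds the group's SID.
$uri = "https://graph.microsoft.com/v1.0/groups/$groupId" + '?$select=id,displayName,securityIdentifier'
$group = Invoke-RestMethod -Method Get -Headers $headers -Uri $uri

# The SID value to reference in the Local Users and Groups policy
$group.securityIdentifier
```

The SID can then be referenced in the policy payload for the Custom OMA-URI setting (`./Device/Vendor/MSFT/Policy/Config/LocalUsersAndGroups/Configure`). Below is a sketch of what that payload can look like, held here in a PowerShell here-string; the linked CSP documentation remains the authoritative schema reference:

```powershell
# Sketch of a LocalUsersAndGroups policy payload; the SID below is a placeholder.
# The 'U' action updates the local Administrators group rather than replacing it.
$localUsersAndGroupsXml = @'
<GroupConfiguration>
    <accessgroup desc="Administrators">
        <group action="U" />
        <add member="S-1-12-1-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX" />
    </accessgroup>
</GroupConfiguration>
'@
```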
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
Previously updated : 12/13/2021 Last updated : 10/19/2022 # Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication
-In this tutorial, you'll learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration.
+In this tutorial, you'll learn to implement secure hybrid access (SHA) with Single Sign-On (SSO) to Kerberos applications by using the F5 BIG-IP advanced configuration. Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
-Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
-
-* Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
-
-* Full SSO between Azure AD and BIG-IP published services.
-
-* Management of identities and access from a single control plane, the [Azure portal](https://azure.microsoft.com/features/azure-portal/)
-
-To learn about all of the benefits, see [Integrate F5 BIG-IP with Azure Active Directory](./f5-aad-integration.md) and [What is single sign-on in Azure Active Directory?](/azure/active-directory/active-directory-appssoaccess-whatis).
+* Improved [Zero Trust](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) governance through Azure AD pre-authentication, and use of the Conditional Access security policy enforcement solution.
+ * See, [What is Conditional Access?](../conditional-access/overview.md)
+* Full SSO between Azure AD and BIG-IP published services
+* Identity management and access from a single control plane, the [Azure portal](https://azure.microsoft.com/features/azure-portal/)
+To learn more about benefits, see [Integrate F5 BIG-IP with Azure Active Directory](./f5-aad-integration.md).
## Scenario description
-For this scenario, you'll configure a critical line-of-business application for *Kerberos authentication*, also known as *Integrated Windows Authentication*.
+For this scenario, you'll configure a line-of-business application for Kerberos authentication, also known as Integrated Windows Authentication.
-For you to integrate the application directly with Azure AD, it would need to support some form of federation-based protocol, such as Security Assertion Markup Language (SAML). But because modernizing the application introduces the risk of potential downtime, there are other options.
+Integrating the application directly with Azure AD requires it to support a federation-based protocol, such as Security Assertion Markup Language (SAML). Because modernizing the application introduces the risk of potential downtime, there are other options.
-While you're using Kerberos Constrained Delegation (KCD) for SSO, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) to access the application remotely. In this arrangement, you can achieve the protocol transitioning that's required to bridge the legacy application to the modern identity control plane.
+While you're using Kerberos Constrained Delegation (KCD) for SSO, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) to access the application remotely. You can achieve the protocol transitioning to bridge the legacy application to the modern identity control plane.
-Another approach is to use an F5 BIG-IP Application Delivery Controller. This approach enables overlay of the application with Azure AD pre-authentication and KCD SSO. It significantly improves the overall Zero Trust posture of the application.
+Another approach is to use an F5 BIG-IP Application Delivery Controller. This approach enables overlay of the application with Azure AD pre-authentication and KCD SSO. It improves the overall Zero Trust posture of the application.
## Scenario architecture
-The SHA solution for this scenario consists of the following elements:
+The SHA solution for this scenario has the following elements:
-- **Application**: Back-end Kerberos-based service that's externally published by BIG-IP and protected by SHA.
+- **Application**: Back-end Kerberos-based service externally published by BIG-IP and protected by SHA
-- **BIG-IP**: Reverse proxy functionality that enables publishing back-end applications. The Access Policy Manager (APM) then overlays published applications with SAML service provider (SP) and SSO functionality.
+- **BIG-IP**: Reverse proxy functionality for publishing back-end applications. The Access Policy Manager (APM) overlays published applications with SAML service provider (SP) and SSO functionality.
-- **Azure AD**: Identity provider (IdP) responsible for verifying user credentials, Azure AD Conditional Access, and SSO to the BIG-IP APM through SAML.
+- **Azure AD**: Identity provider (IdP) that verifies user credentials, Azure AD Conditional Access, and SSO to the BIG-IP APM through SAML
- **KDC**: Key Distribution Center role on a domain controller (DC). It issues Kerberos tickets.
The following image illustrates the SAML SP-initiated flow for this scenario, bu
![Diagram of the scenario architecture.](./media/f5-big-ip-kerberos-easy-button/scenario-architecture.png)
-| Step| Description |
-| -- |-|
-| 1| User connects to the application endpoint (BIG-IP). |
-| 2| BIG-IP access policy redirects the user to Azure AD (SAML IdP). |
-| 3| Azure AD pre-authenticates the user and applies any enforced Conditional Access policies. |
-| 4| User is redirected to BIG-IP (SAML SP), and SSO is performed via the issued SAML token. |
-| 5| BIG-IP authenticates the user and requests a Kerberos ticket from KDC. |
-| 6| BIG-IP sends the request to the back-end application, along with the Kerberos ticket for SSO. |
-| 7| Application authorizes the request and returns the payload. |
+## User flow
+
+1. User connects to the application endpoint (BIG-IP).
+2. BIG-IP access policy redirects the user to Azure AD (SAML IdP).
+3. Azure AD pre-authenticates the user and applies any enforced Conditional Access policies.
+4. User is redirected to BIG-IP (SAML SP), and SSO is performed via the issued SAML token.
+5. BIG-IP authenticates the user and requests a Kerberos ticket from KDC.
+6. BIG-IP sends the request to the back-end application, along with the Kerberos ticket for SSO.
+7. Application authorizes the request and returns the payload.
## Prerequisites
-Prior BIG-IP experience isn't necessary, but you will need:
+Prior BIG-IP experience isn't necessary. You need:
-* An Azure AD free subscription or higher-tier subscription.
+* An [Azure AD free](https://azure.microsoft.com/free/active-directory/) or higher-tier subscription
-* An existing BIG-IP, or [deploy BIG-IP Virtual Edition in Azure](../manage-apps/f5-bigip-deployment-guide.md).
+* A BIG-IP, or [deploy BIG-IP Virtual Edition in Azure](../manage-apps/f5-bigip-deployment-guide.md)
-* Any of the following F5 BIG-IP license offers:
+* Any of the following F5 BIG-IP licenses:
* F5 BIG-IP Best bundle * F5 BIG-IP APM standalone license
- * F5 BIG-IP APM add-on license on an existing BIG-IP Local Traffic Manager
+ * F5 BIG-IP APM add-on license on a BIG-IP Local Traffic Manager
- * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php)
+ * 90-day BIG-IP [Free Trial](https://www.f5.com/trial/big-ip-trial.php) license
-* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created directly within Azure AD and flowed back to your on-premises directory.
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created in Azure AD and flowed back to your on-premises directory
-* An account with Azure AD Application Administrator [permissions](../users-groups-roles/directory-assign-admin-roles.md).
+* An account with Azure AD Application Administrator [permissions](../users-groups-roles/directory-assign-admin-roles.md)
-* A web server [certificate](../manage-apps/f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certificates while testing.
+* A web server [certificate](../manage-apps/f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certificates while testing
-* An existing Kerberos application, or [set up an Internet Information Services (IIS) app](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) for KCD SSO.
+* A Kerberos application, or go to active-directory-wp.com to learn to configure [SSO with IIS on Windows](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html)
## BIG-IP configuration methods
-There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This article covers the advanced approach, which provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios that the guided configuration templates don't cover.
+This article covers the advanced configuration, a more flexible way of implementing SHA by manually creating BIG-IP configuration objects. You can use this approach for scenarios the Guided Configuration templates don't cover.
>[!NOTE]
-> All example strings or values in this article should be replaced with those for your actual environment.
+> Replace all example strings or values in this article with those for your actual environment.
## Register F5 BIG-IP in Azure AD
-Before BIG-IP can hand off pre-authentication to Azure AD, it must be registered in your tenant. This is the first step in establishing SSO between both entities. It's no different from making any IdP aware of a SAML relying party. In this case, the app that you create from the F5 BIG-IP gallery template is the relying party that represents the SAML SP for the BIG-IP published application.
-
-1. Sign in to the [Azure AD portal](https://portal.azure.com) by using an account with Application Administrator permissions.
+Before BIG-IP can hand off pre-authentication to Azure AD, register it in your tenant. This is the first step in establishing SSO between both entities. The app you create from the F5 BIG-IP gallery template is the relying party that represents the SAML SP for the BIG-IP published application.
+1. Sign in to the [Azure AD portal](https://portal.azure.com) with Application Administrator permissions.
2. From the left pane, select the **Azure Active Directory** service.-
-3. On the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant.
-
+3. On the left menu, select **Enterprise applications**. The **All applications** pane opens with a list of the applications in your Azure AD tenant.
4. On the **Enterprise applications** pane, select **New application**.-
-5. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons that indicate whether they support federated SSO and provisioning.
-
- Search for **F5** in the Azure gallery, and select **F5 BIG-IP APM Azure AD integration**.
-
-6. Provide a name for the new application to recognize the instance of the application. Select **Add/Create** to add it to your tenant.
+5. The **Browse Azure AD Gallery** pane opens with tiles for cloud platforms, on-premises applications, and featured applications. Applications in the **Featured applications** section have icons that indicate whether they support federated SSO and provisioning.
+6. Search for **F5** in the Azure gallery, and select **F5 BIG-IP APM Azure AD integration**.
+7. Enter a name for the new application to recognize the instance of the application.
+8. Select **Add/Create** to add it to your tenant.
## Enable SSO to F5 BIG-IP Next, configure the BIG-IP registration to fulfill SAML tokens that the BIG-IP APM requests:
-1. In the **Manage** section of the left menu, select **Single sign-on** to open the **Single sign-on** pane for editing.
-
+1. In the **Manage** section of the left menu, select **Single sign-on**. The **Single sign-on** pane appears.
2. On the **Select a single sign-on method** page, select **SAML** followed by **No, I'll save later** to skip the prompt.-
-3. On the **Set up single sign-on with SAML** pane, select the pen icon to edit **Basic SAML Configuration**. Make these edits:
-
- 1. Replace the predefined **Identifier** value with the full URL for the BIG-IP published application.
-
- 2. Replace the **Reply URL** value but retain the path for the application's SAML SP endpoint.
+3. On the **Set up single sign-on with SAML** pane, select the **pen** icon to edit **Basic SAML Configuration**.
+4. Replace the predefined **Identifier** value with the full URL for the BIG-IP published application.
+5. Replace the **Reply URL** value, but retain the path for the application's SAML SP endpoint.
- In this configuration, the SAML flow would operate in IdP-initiated mode. In that mode, Azure AD issues a SAML assertion before the user is redirected to the BIG-IP endpoint for the application.
-
- 3. To use SP-initiated mode, populate **Sign on URL** with the application URL.
+> [!NOTE]
+> In this configuration, the SAML flow operates in IdP-initiated mode. Azure AD issues a SAML assertion before the user is redirected to the BIG-IP endpoint for the application.
- 4. For **Logout Url**, enter the BIG-IP APM single logout (SLO) endpoint prepended by the host header of the service that's being published. This step ensures that the user's BIG-IP APM session ends after the user is signed out of Azure AD.
+6. To use SP-initiated mode, populate **Sign on URL** with the application URL.
+7. For **Logout Url**, enter the BIG-IP APM single logout (SLO) endpoint prepended by the host header of the service being published. This ensures the user's BIG-IP APM session ends after the user signs out of Azure AD.
![Screenshot for editing basic SAML configuration.](./media/f5-big-ip-kerberos-advanced/edit-basic-saml-configuration.png)
- > [!NOTE]
- > From TMOS v16, the SAML SLO endpoint has changed to **/saml/sp/profile/redirect/slo**.
-
-4. Select **Save** before closing the SAML configuration pane and skip the SSO test prompt.
-
-5. Note the properties of the **User Attributes & Claims** section. Azure AD will issue these properties to users for BIG-IP APM authentication and for SSO to the back-end application.
+> [!NOTE]
+> From TMOS v16, the SAML SLO endpoint has changed to **/saml/sp/profile/redirect/slo**.
-6. On the **SAML Signing Certificate** pane, select **Download** to save the **Federation Metadata XML** file to your computer.
+8. Before closing SAML configuration, select **Save**.
+9. Skip the SSO test prompt.
+10. Note the properties of the **User Attributes & Claims** section. Azure AD issues properties to users for BIG-IP APM authentication and for SSO to the back-end application.
+11. On the **SAML Signing Certificate** pane, select **Download** to save the Federation Metadata XML file to your computer.
![Screenshot that shows selections for editing a SAML signing certificate.](./media/f5-big-ip-kerberos-advanced/edit-saml-signing-certificate.png)
-SAML signing certificates that Azure AD creates have a lifespan of three years. For more information, see [Managed certificates for federated single sign-on](./manage-certificates-for-federated-single-sign-on.md).
+> [!NOTE]
+> SAML signing certificates that Azure AD creates have a lifespan of three years. For more information, see [Managed certificates for federated single sign-on](./manage-certificates-for-federated-single-sign-on.md).
-## Assign users and groups
+## Grant access to users and groups
-By default, Azure AD will issue tokens only for users who have been granted access to an application. To grant specific users and groups access to the application:
+By default, Azure AD issues tokens for users granted access to an application. To grant users and groups access to the application:
1. On the **F5 BIG-IP application's overview** pane, select **Assign Users and groups**.- 2. Select **+ Add user/group**. ![Screenshot that shows the button for assigning users and groups.](./media/f5-big-ip-kerberos-advanced/authorize-users-groups.png)
-3. Select users and groups, and then select **Assign** to assign them to your application.
+3. Select users and groups, and then select **Assign**.
-## Configure Active Directory KCD
+## Configure Active Directory Kerberos constrained delegation
-For the BIG-IP APM to perform SSO to the back-end application on behalf of users, KCD must be configured in the target Active Directory domain. Delegating authentication also requires that the BIG-IP APM is provisioned with a domain service account.
+For the BIG-IP APM to perform SSO to the back-end application on behalf of users, configure KCD in the target Active Directory domain. Delegating authentication also requires that the BIG-IP APM be provisioned with a domain service account.
-For the scenario in this article, the application is hosted on server **APP-VM-01** and is running in the context of a service account named **web_svc_account**, not the computer's identity. The delegating service account assigned to the APM is **F5-BIG-IP**.
+For this scenario, the application is hosted on server APP-VM-01 and runs in the context of a service account named web_svc_account, not the computer identity. The delegating service account assigned to the APM is F5-BIG-IP.
### Create a BIG-IP APM delegation account
-Because BIG-IP doesn't support group managed service accounts, create a standard user account to use as the APM service account:
+Because BIG-IP doesn't support group-managed service accounts, create a standard user account for the APM service account:
1. Enter the following PowerShell command. Replace the `UserPrincipalName` and `SamAccountName` values with those for your environment. ```New-ADUser -Name "F5 BIG-IP Delegation Account" -UserPrincipalName "host/f5-big-ip.contoso.com@contoso.com" -SamAccountName "f5-big-ip" -PasswordNeverExpires $true -Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password") ```
-2. Create a service principal name (SPN) for the APM service account to use when you're performing delegation to the web application's service account:
+2. Create a service principal name (SPN) for the APM service account to use during delegation to the web application service account:
```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @{Add="host/f5-big-ip.contoso.com"} ```
-3. Ensure that the SPN now shows against the APM service account:
+3. Ensure the SPN shows against the APM service account:
```Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
- 4. Before you specify the target SPN that the APM service account should delegate to for the web application, view its existing SPN configuration:
+4. Before you specify the target SPN the APM service account will delegate to for the web application, view its SPN configuration:
- 1. Check whether your web application is running in the computer context or a dedicated service account.
+ 1. Confirm your web application is running in the computer context or a dedicated service account.
2. Use the following command to query the account object in Active Directory to see its defined SPNs. Replace `<name_of_account>` with the account for your environment.
- ```Get-ADUser -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
-
-5. You can use any SPN that you see defined against a web application's service account. But in the interest of security, it's best to use a dedicated SPN that matches the host header of the application.
+ ```Get-ADUser -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
- For example, because the web application host header in this example is **myexpenses.contoso.com**, you would add `HTTP/myexpenses.contoso.com` to the application's service account object in Active Directory:
+5. Use an SPN defined against a web application service account. For better security, use a dedicated SPN that matches the host header of the application. For example, because the web application host header in this example is myexpenses.contoso.com, add `HTTP/myexpenses.contoso.com` to the application service account object in Active Directory:
```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
- Or if the app ran in the machine context, you would add the SPN to the object of the computer account in Active Directory:
+Or if the app ran in the machine context, add the SPN to the object of the computer account in Active Directory:
- ```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
+ ```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
-With the SPNs defined, you now need to establish trust for the APM service account delegate to that service. The configuration will vary depending on the topology of your BIG-IP instance and application server.
+With SPNs defined, establish trust for the APM service account to delegate to that service. The configuration varies depending on the topology of your BIG-IP instance and application server.
### Configure BIG-IP and the target application in the same domain
With the SPNs defined, you now need to establish trust for the APM service accou
```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true ```
-2. The APM service account then needs to know which target SPN it's trusted to delegate to. In other words, the APM service account needs to know which service it's allowed to request a Kerberos ticket for. Set the target SPN to the service account that's running your web application:
+2. The APM service account needs to know the target SPN it's trusted to delegate to. Set the target SPN to the service account running your web application:
```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')} ```
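To confirm both settings, you can query the account afterward. A sketch using this scenario's account names:

```powershell
# Verify protocol transition is enabled and the target SPN is listed for delegation.
Get-ADUser -Identity f5-big-ip -Properties TrustedToAuthForDelegation, 'msDS-AllowedToDelegateTo' |
    Select-Object TrustedToAuthForDelegation, 'msDS-AllowedToDelegateTo'
```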
-If you prefer, you can complete these tasks through the **Active Directory Users and Computers** Microsoft Management Console (MMC) snap-in on a domain controller.
+> [!NOTE]
> You can complete these tasks with the Active Directory Users and Computers Microsoft Management Console (MMC) snap-in on a domain controller.
### Configure BIG-IP and the target application in different domains
-Starting with Windows Server 2012, cross-domain KCD uses resource-based constrained delegation. The constraints for a service have been transferred from the domain administrator to the service administrator. This delegation allows the back-end service administrator to allow or deny SSO. It also introduces a different approach at configuration delegation, which is possible only when you use either PowerShell or ADSI Edit.
+Starting with Windows Server 2012, cross-domain KCD uses resource-based constrained delegation. The constraints for a service are transferred from the domain administrator to the service administrator. This delegation allows the back-end service administrator to allow or deny SSO. It introduces a different approach to configuring delegation, which is possible only when you use PowerShell or ADSI Edit.
-You can use the `PrincipalsAllowedToDelegateToAccount` property of the application's service account (computer or dedicated service account) to grant delegation from BIG-IP. For this scenario, use the following PowerShell command on a domain controller (Windows Server 2012 R2 or later) within the same domain as the application.
+You can use the `PrincipalsAllowedToDelegateToAccount` property of the application service account (computer or dedicated service account) to grant delegation from BIG-IP. For this scenario, use the following PowerShell command on a domain controller (Windows Server 2012 R2 or later) in the same domain as the application.
-If the **web_svc_account** service runs in context of a user account, use these commands:
+If the web_svc_account service runs in the context of a user account, use these commands:
```$bigip = Get-ADComputer -Identity f5-big-ip -Server dc.contoso.com``` ```Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount $bigip``` ```Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount```
-If the **web_svc_account** service runs in context of a computer account, use these commands:
+If the web_svc_account service runs in the context of a computer account, use these commands:
```$bigip = Get-ADComputer -Identity f5-big-ip -Server dc.contoso.com``` ```Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount $bigip```
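As in the user-account case, you can confirm the resulting delegation list. A sketch with this scenario's names:

```powershell
# Confirm BIG-IP now appears in the account's resource-based delegation list.
Get-ADComputer -Identity web_svc_account -Properties PrincipalsAllowedToDelegateToAccount |
    Select-Object -ExpandProperty PrincipalsAllowedToDelegateToAccount
```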
For more information, see [Kerberos Constrained Delegation across domains](/prev
## BIG-IP advanced configuration
-Now you can proceed with setting up the BIG-IP configurations.
+Use the following section to continue setting up the BIG-IP configurations.
### Configure SAML service provider settings
-SAML service provider settings define the SAML SP properties that the APM will use for overlaying the legacy application with SAML pre-authentication. To configure them:
+SAML service provider settings define the SAML SP properties APM uses for overlaying the legacy application with SAML pre-authentication. To configure them:
1. From a browser, sign in to the F5 BIG-IP management console.- 2. Select **Access** > **Federation** > **SAML Service Provider** > **Local SP Services** > **Create**. ![Screenshot that shows the button for creating a local SAML service provider service.](./media/f5-big-ip-kerberos-advanced/create-local-services-saml-service-provider.png)
-3. Provide the **Name** and **Entity ID** values that you saved when you configured SSO for Azure AD earlier.
+3. Provide the **Name** and **Entity ID** values you saved when you configured SSO for Azure AD earlier.
![Screenshot that shows name and entity I D values entered for a new SAML service provider service.](./media/f5-big-ip-kerberos-advanced/create-new-saml-sp-service.png)
-4. You don't need to specify **SP Name Settings** information if the SAML entity ID is an exact match with the URL for the published application.
-
- For example, if the entity ID is **urn:myexpenses:contosoonline**, you need to provide the **Scheme** and **Host** values as **https** and **myexpenses.contoso.com**. But if the entity ID is `https://myexpenses.contoso.com`, you don't need to provide this information.
+4. You can skip **SP Name Settings** if the SAML entity ID is an exact match of the URL for the published application. For example, if the entity ID is urn:myexpenses:contosoonline, the **Scheme** value is **https**; the **Host** value is **myexpenses.contoso.com**. If the entity ID is "https://myexpenses.contoso.com", you don't need to provide this information.
### Configure an external IdP connector
-A SAML IdP connector defines the settings that are required for the BIG-IP APM to trust Azure AD as its SAML IdP. These settings will map the SAML SP to a SAML IdP, establishing the federation trust between the APM and Azure AD. To configure the connector:
+A SAML IdP connector defines the settings for the BIG-IP APM to trust Azure AD as its SAML IdP. These settings map the SAML SP to a SAML IdP, establishing the federation trust between the APM and Azure AD. To configure the connector:
1. Scroll down to select the new SAML SP object, and then select **Bind/Unbind IdP Connectors**.
A SAML IdP connector defines the settings that are required for the BIG-IP APM t
![Screenshot that shows selections for creating new identity provider connector from metadata.](./media/f5-big-ip-kerberos-advanced/create-new-idp-connector-from-metadata.png)
-3. Browse to the federation metadata XML file that you downloaded earlier, and provide an **Identity Provider Name** value for the APM object that will represent the external SAML IdP. The following example shows **MyExpenses_AzureAD**.
+3. Browse to the federation metadata XML file you downloaded, and provide an **Identity Provider Name** for the APM object that represents the external SAML IdP. The following example shows **MyExpenses_AzureAD**.
![Screenshot that shows example values for the federation metadata X M L file and the identity provider name.](./media/f5-big-ip-kerberos-advanced/browse-federation-metadata-xml.png)
A SAML IdP connector defines the settings that are required for the BIG-IP APM t
![Screenshot that shows selections for choosing a new identity provider connector.](./media/f5-big-ip-kerberos-advanced/choose-new-saml-idp-connector.png)
-5. Select **OK** to save the settings.
+5. Select **OK**.
### Configure Kerberos SSO
-In this section, you create an APM SSO object for performing KCD SSO to back-end applications. To complete this step, you need the APM delegation account that you created earlier.
-
-Select **Access** > **Single Sign-on** > **Kerberos** > **Create** and provide the following information:
-
-* **Name**: You can use a descriptive name. After you create it, other published applications can also use the Kerberos SSO APM object. For example, **Contoso_KCD_sso** can be used for multiple published applications for the entire Contoso domain. But **MyExpenses_KCD_sso** can be used for a single application only.
-
-* **Username Source**: Specify the preferred source for user ID. You can specify any APM session variable as the source, but **session.saml.last.identity** is typically best because it contains the logged-in user's ID derived from the Azure AD claim.
-
-* **User Realm Source**: This source is required in scenarios where the user domain is different from the Kerberos realm that will be used for KCD. If users are in a separate trusted domain, you make the APM aware by specifying the APM session variable that contains the logged-in user's domain. An example is **session.saml.last.attr.name.domain**. You also do this in scenarios where the UPN of users is based on an alternative suffix.
+In this section, create an APM SSO object for KCD SSO to back-end applications. Use the APM delegation account that you created.
-* **Kerberos Realm**: Enter the user's domain suffix in uppercase.
-
-* **KDC**: Enter the IP address of a domain controller. (Or enter a fully qualified domain name if DNS is configured and efficient.)
-
-* **UPN Support**: Select this checkbox if the specified source for username is in UPN format, such as if you're using the **session.saml.last.identity** variable.
-
-* **Account Name** and **Account Password**: Provide APM service account credentials to perform KCD.
-
-* **SPN Pattern**: If you use **HTTP/%h**, APM then uses the host header of the client request to build the SPN that it's requesting a Kerberos token for.
+1. Select **Access** > **Single Sign-on** > **Kerberos** > **Create** and provide the following information:
+* **Name**: Enter a descriptive name. After you create the Kerberos SSO APM object, other published applications can use it. For example, use Contoso_KCD_sso for multiple published applications for the Contoso domain. Use MyExpenses_KCD_sso for a single application.
+* **Username Source**: Specify the user ID source. You can use any APM session variable as the source; **session.saml.last.identity** is advised because it contains the logged-in user ID from the Azure AD claim.
+* **User Realm Source**: This is required in scenarios where the user domain differs from the Kerberos realm used for KCD. If users are in a separate trusted domain, you make the APM aware by specifying the APM session variable that contains the logged-in user domain. An example is session.saml.last.attr.name.domain. You also do this in scenarios where the user UPN is based on an alternative suffix.
+* **Kerberos Realm**: User domain suffix in uppercase.
+* **KDC**: Domain controller IP address. Or enter a fully qualified domain name if DNS is configured and efficient.
+* **UPN Support**: Select this checkbox if the source for username is in UPN format, for instance the session.saml.last.identity variable.
+* **Account Name** and **Account Password**: APM service account credentials to perform KCD.
+* **SPN Pattern**: If you use HTTP/%h, APM uses the host header of the client request to build the SPN for which it's requesting a Kerberos token.
* **Send Authorization**: Disable this option for applications that prefer negotiating authentication, instead of receiving the Kerberos token in the first request (for example, Tomcat). ![Screenshot that shows selections for configuring Kerberos single sign-on.](./media/f5-big-ip-kerberos-advanced/configure-kerberos-sso.png)
-You can leave KDC undefined if the user realm is different from the back-end server realm. This rule also applies for multiple-domain realm scenarios. If you leave KDC undefined, BIG-IP will try to discover a Kerberos realm through a DNS lookup of SRV records for the back-end server's domain. So it expects the domain name to be the same as the realm name. If the domain name is different from the realm name, it must be specified in the [/etc/krb5.conf](https://support.f5.com/csp/article/K17976428) file.
+You can leave KDC undefined if the user realm is different from the back-end server realm. This rule applies to multiple-domain realm scenarios. If you leave KDC undefined, BIG-IP will try to discover a Kerberos realm through a DNS lookup of SRV records for the back-end server domain. It expects the domain name to be the same as the realm name. If the domain name differs, specify it in the [/etc/krb5.conf](https://support.f5.com/csp/article/K17976428) file.
+Kerberos SSO processing is fastest when a KDC is specified by IP address, slower when a KDC is specified by host name, and slowest when a KDC is undefined because of the additional DNS queries. Ensure your DNS is performing optimally before moving a proof of concept into production.
-Kerberos SSO processing is fastest when a KDC is specified by IP address. Kerberos SSO processing is slower when a KDC is specified by host name. Because of additional DNS queries, processing is even slower when a KDC is left undefined. For this reason, you should ensure that your DNS is performing optimally before moving a proof of concept into production.
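One way to sanity-check the DNS side is to query the Kerberos SRV records that advertise KDCs for the realm. A sketch from a domain-joined Windows host, assuming the contoso.com domain:

```powershell
# Slow or failing responses here predict slow Kerberos SSO when no KDC is defined.
Resolve-DnsName -Name "_kerberos._tcp.contoso.com" -Type SRV
```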
> [!NOTE]
-> If back-end servers are in multiple realms, you must create a separate SSO configuration object for each realm.
+> If back-end servers are in multiple realms, create a separate SSO configuration object for each realm.
-You can inject headers as part of the SSO request to the back-end application. Simply change the **General Properties** setting from **Basic** to **Advanced**.
+You can inject headers as part of the SSO request to the back-end application. Change the **General Properties** setting from **Basic** to **Advanced**.
-For more information on configuring an APM for KCD SSO, see the F5 article [Overview of Kerberos constrained delegation](https://support.f5.com/csp/article/K17976428).
+For more information on configuring an APM for KCD SSO, see the F5 article [K17976428: Overview of Kerberos constrained delegation](https://support.f5.com/csp/article/K17976428).
### Configure an access profile
-An *access profile* binds many APM elements that manage access to BIG-IP virtual servers. These elements include access policies, SSO configuration, and UI settings.
+An access profile binds APM elements that manage access to BIG-IP virtual servers. These elements include access policies, SSO configuration, and UI settings.
-1. Select **Access** > **Profiles / Policies** > **Access Profiles (Per-Session Policies)** > **Create** and provide these general properties:
+1. Select **Access** > **Profiles / Policies** > **Access Profiles (Per-Session Policies)** > **Create** and enter the following properties:
- * **Name**: For example, enter **MyExpenses**.
+ * **Name**: For example, enter MyExpenses.
* **Profile Type:** Select **All**.
- * **SSO Configuration:** Select the KCD SSO configuration object that you just created.
+ * **SSO Configuration:** Select the KCD SSO configuration object you created.
* **Accepted Language:** Add at least one language. ![Screenshot that shows selections for creating an access profile.](./media/f5-big-ip-kerberos-advanced/create-new-access-profile.png)
-2. Select **Edit** for the per-session profile that you just created.
+2. Select **Edit** for the per-session profile you created.
![Screenshot that shows the button for editing a per-session profile.](./media/f5-big-ip-kerberos-advanced/edit-per-session-profile.png)
-3. When the visual policy editor opens, select the plus sign (**+**) next to the fallback.
+3. The visual policy editor opens. Select the **plus sign** next to the fallback.
![Screenshot that shows the plus sign next to fallback.](./media/f5-big-ip-kerberos-advanced/select-plus-fallback.png)
-4. In the pop-up dialog, select **Authentication** > **SAML Auth** > **Add Item**.
+4. In the dialog, select **Authentication** > **SAML Auth** > **Add Item**.
![Screenshot that shows selections for adding a SAML authentication item.](./media/f5-big-ip-kerberos-advanced/add-item-saml-auth.png)
-5. In the **SAML authentication SP** configuration, set the **AAA Server** option to use the SAML SP object that you created earlier.
+5. In the **SAML authentication SP** configuration, set the **AAA Server** option to use the SAML SP object you created.
![Screenshot that shows the list box for configuring an A A A server.](./media/f5-big-ip-kerberos-advanced/configure-aaa-server.png)
-6. Select the link in the upper **Deny** box to change the **Successful** branch to **Allow**, and then select **Save**.
+6. Select the link in the upper **Deny** box to change the **Successful** branch to **Allow**.
+7. Select **Save**.
![Screenshot that shows changing the successful branch to Allow.](./media/f5-big-ip-kerberos-advanced/select-allow-successful-branch.png)

### Configure attribute mappings
-Although it's optional, adding a **LogonID_Mapping** configuration enables the BIG-IP active sessions list to display the UPN of the logged-in user instead of a session number. This information is useful when you're analyzing logs or troubleshooting.
+Although it's optional, you can add a **LogonID_Mapping** configuration to enable the BIG-IP active sessions list to display the UPN of the logged-in user, instead of a session number. This information is useful for analyzing logs or troubleshooting.
-1. Select the **+** symbol for the **SAML Auth Successful** branch.
+1. Select the **plus sign** for the **SAML Auth Successful** branch.
-2. In the pop-up dialog, select **Assignment** > **Variable Assign** > **Add Item**.
+2. In the dialog, select **Assignment** > **Variable Assign** > **Add Item**.
![Screenshot that shows the option for assigning custom variables.](./media/f5-big-ip-kerberos-advanced/configure-variable-assign.png)
-3. Enter **Name**.
+3. Enter a **Name**.
-4. On the **Variable Assign** pane, select **Add new entry** > **change**. The following example shows **LogonID_Mapping** in the **Name** box.
+4. On the **Variable Assign** pane, select **Add new entry** > **change**. The following example shows LogonID_Mapping in the Name box.
![Screenshot that shows selections for adding an entry for variable assignment.](./media/f5-big-ip-kerberos-advanced/add-new-entry-variable-assign.png)

5. Set both variables:
- * **Custom Variable**: Enter **session.logon.last.username**.
- * **Session Variable**: Enter **session.saml.last.identity**.
+ * **Custom Variable**: Enter session.logon.last.username.
+ * **Session Variable**: Enter session.saml.last.identity.
6. Select **Finished** > **Save**.
7. Select the **Deny** terminal of the access policy's **Successful** branch and change it to **Allow**. Then select **Save**.
-8. Commit those settings by selecting **Apply Access Policy**, and close the visual policy editor.
+8. Select **Apply Access Policy**, and close the editor.
![Screenshot of the button for applying an access policy.](./media/f5-big-ip-kerberos-advanced/apply-access-policy.png)

### Configure the back-end pool
-For BIG-IP to know where to forward client traffic, you need to create a BIG-IP node object that represents the back-end server that hosts your application. Then, place that node in a BIG-IP server pool.
+For BIG-IP to forward client traffic accurately, create a BIG-IP node object that represents the back-end server hosting your application. Then, place that node in a BIG-IP server pool.
-1. Select **Local Traffic** > **Pools** > **Pool List** > **Create** and provide a name for a server pool object. For example, enter **MyApps_VMs**.
+1. Select **Local Traffic** > **Pools** > **Pool List** > **Create** and provide a name for a server pool object. For example, enter MyApps_VMs.
![Screenshot that shows selections for creating an advanced back-end pool.](./media/f5-big-ip-kerberos-advanced/create-new-backend-pool.png)

2. Add a pool member object with the following resource details:
- * **Node Name**: Optional display name for the server that hosts the back-end web application.
- * **Address**: IP address of the server that hosts the application.
- * **Service Port**: HTTP/S port that the application is listening on.
+ * **Node Name**: Display name for the server hosting the back-end web application
+ * **Address**: IP address of the server hosting the application
+ * **Service Port**: HTTP/S port the application is listening on
![Screenshot that shows entries for adding a pool member object.](./media/f5-big-ip-kerberos-advanced/add-pool-member-object.png)

> [!NOTE]
-> The health monitors require [additional configuration](https://support.f5.com/csp/article/K13397) that this article doesn't cover.
+> The health monitors require additional configuration that this article doesn't cover. For more information, see [K13397: Overview of HTTP health monitor request formatting for the BIG-IP DNS system](https://support.f5.com/csp/article/K13397).
### Configure the virtual server
-A *virtual server* is a BIG-IP data plane object that's represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM access profile that's associated with the virtual server, before being directed according to the policy results and settings.
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Received traffic is processed and evaluated against the APM access profile associated with the virtual server, before being directed according to policy.
To configure a virtual server:

1. Select **Local Traffic** > **Virtual Servers** > **Virtual Server List** > **Create**.
-2. Provide the virtual server with a **Name** value and an IPv4/IPv6 address that isn't already allocated to an existing BIG-IP object or device on the connected network. The IP address will be dedicated to receiving client traffic for the published back-end application. Then set **Service Port** to **443**.
+2. Enter a **Name** and an IPv4/IPv6 address not allocated to a BIG-IP object or device on the connected network. The IP address is dedicated to receiving client traffic for the published back-end application.
+3. Set **Service Port** to **443**.
![Screenshot that shows selections and entries for configuring a virtual server.](./media/f5-big-ip-kerberos-advanced/configure-new-virtual-server.png)
-3. Set **HTTP Profile (Client)** to **http**.
-
-4. Enable a virtual server for Transport Layer Security to allow services to be published over HTTPS. For **SSL Profile (Client)**, select the profile that you created as part of the prerequisites. (Or leave the default if you're testing.)
+4. Set **HTTP Profile (Client)** to **http**.
+5. Enable a virtual server for Transport Layer Security (TLS) to allow services to be published over HTTPS.
+6. For **SSL Profile (Client)**, select the profile you created as part of the prerequisites, or leave the default if you're testing.
![Screenshot that shows selections for H T T P profile and S S L profile for the client.](./media/f5-big-ip-kerberos-advanced/update-http-profile-client.png)
-5. Change **Source Address Translation** to **Auto Map**.
+7. Change **Source Address Translation** to **Auto Map**.
![Screenshot to change source address translation](./media/f5-big-ip-kerberos-advanced/change-auto-map.png)
-6. Under **Access Policy**, set **Access Profile** based on the profile that you created earlier. This step binds the Azure AD SAML pre-authentication profile and KCD SSO policy to the virtual server.
+
+8. Under **Access Policy**, set **Access Profile** based on the profile you created. This binds the Azure AD SAML pre-authentication profile and KCD SSO policy to the virtual server.
![Screenshot that shows the box for setting an access profile for an access policy.](./media/f5-big-ip-kerberos-advanced/set-access-profile-for-access-policy.png)
-7. Set **Default Pool** to use the back-end pool objects that you created in the previous section. Then select **Finished**.
+9. Set **Default Pool** to use the back-end pool objects you created in the previous section.
+10. Select **Finished**.
![Screenshot that shows selecting a default pool.](./media/f5-big-ip-kerberos-advanced/set-default-pool-use-backend-object.png)

### Configure session management settings
-BIG-IP's session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. You can create your own policy here. Go to **Access Policy** > **Access Profiles** > **Access Profile** and select your application from the list.
+BIG-IP session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. You can create policy here.
+
+Go to **Access Policy** > **Access Profiles** > **Access Profile** and select your application from the list.
-If you've defined a **Single Logout URI** value in Azure AD, it will ensure that an IdP-initiated sign-out from the MyApps portal also ends the session between the client and the BIG-IP APM. The imported application's federation metadata XML file provides the APM with the Azure AD SAML logout endpoint for SP-initiated sign-outs. But for this to be truly effective, the APM needs to know exactly when a user signs out.
+If you defined a Single Logout URI value in Azure AD, it ensures an IdP-initiated sign-out from the MyApps portal ends the session between the client and the BIG-IP APM. The imported application federation metadata XML file provides the APM with the Azure AD SAML log-out endpoint for SP-initiated sign-outs. For this to be effective, the APM needs to know when a user signs out.
-Consider a scenario where a BIG-IP web portal is not used. The user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, BIG-IP is technically oblivious to this, so the application session could easily be reinstated through SSO. For this reason, SP-initiated sign-out needs careful consideration to ensure that sessions are securely terminated when they're no longer required.
+Consider a scenario when a BIG-IP web portal is not used. The user can't instruct the APM to sign out. Even if the user signs out of the application, BIG-IP is oblivious to this, so the application session could be reinstated through SSO. SP-initiated sign-out needs consideration to ensure sessions terminate securely.
-One way to achieve this is by adding an SLO function to your application's sign-out button. This function can redirect your client to the Azure AD SAML sign-out endpoint. You can find this SAML sign-out endpoint at **App Registrations** > **Endpoints**.
+> [!NOTE]
+> You can add an SLO function to your application sign-out button. This function redirects your client to the Azure AD SAML sign-out endpoint. Find the SAML sign-out endpoint at **App Registrations** > **Endpoints**.
+
+If you can't change the app, consider having BIG-IP listen for the app sign-out call. When it detects the request, it triggers SLO.
-If you can't change the app, consider having BIG-IP listen for the app's sign-out call. When it detects the request, it should trigger SLO.
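For reference, the SAML sign-out endpoint listed under **App Registrations** > **Endpoints** typically follows this pattern, with your tenant ID as the placeholder:

```
https://login.microsoftonline.com/<tenant-id>/saml2
```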
+For more information, see the F5 articles:
-For more information, see the F5 articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+* [K42052145: Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145)
+* [K12056: Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
-Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals. The application should also be visible as a target resource in [Azure AD Conditional Access](../conditional-access/concept-conditional-access-policies.md).
+Your application is published and accessible via SHA, by its URL or through Microsoft application portals. The application is visible as a target resource in [Azure AD Conditional Access](../conditional-access/concept-conditional-access-policies.md).
-For increased security, organizations that use this pattern can also consider blocking all direct access to the application. Blocking all direct access forces a strict path through BIG-IP.
+For increased security, organizations that use this pattern can block direct access to the application, which forces a strict path through BIG-IP.
## Next steps
-As a user, open a browser and connect to the application's external URL. You can also select the application's icon from the [Microsoft MyApps portal](https://myapps.microsoft.com/). After you authenticate against your Azure AD tenant, you'll be redirected to the BIG-IP endpoint for the application and automatically signed in via SSO.
+As a user, open a browser and connect to the application external URL. You can select the application icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After you authenticate against your Azure AD tenant, you are redirected to the BIG-IP endpoint for the application and signed in via SSO.
![Screenshot of an example application's website.](./media/f5-big-ip-kerberos-advanced/app-view.png)

### Azure AD B2B guest access
-SHA also supports [Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md). Guest identities are synchronized from your Azure AD tenant to your target Kerberos domain. It's necessary to have a local representation of guest objects for BIG-IP to perform KCD SSO to the back-end application.
-
-## Troubleshoot
+SHA supports [Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md). Guest identities are synchronized from your Azure AD tenant to your target Kerberos domain. A local representation of guest objects is required for BIG-IP to perform KCD SSO to the back-end application.
-There can be many reasons for failure to access a SHA-protected application, including a misconfiguration. Consider the following points while troubleshooting any problem:
+## Troubleshooting
-* Kerberos is time sensitive. It requires that servers and clients are set to the correct time and, where possible, synchronized to a reliable time source.
+Consider the following points while troubleshooting:
-* Ensure that the host names for the domain controller and web application are resolvable in DNS.
-
-* Ensure that there are no duplicate SPNs in your environment by running the following query at the command line: `setspn -q HTTP/my_target_SPN`.
+* Kerberos is time sensitive. It requires that servers and clients are set to the correct time and, where possible, synchronized to a reliable time source.
+* Ensure the host names for the domain controller and web application are resolvable in DNS.
+* Ensure there are no duplicate SPNs in your environment. Run the following query at the command line: `setspn -q HTTP/my_target_SPN`. A scripted version of these checks appears after this list.
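The following is a minimal command-line sketch of these checks from a domain-joined Windows host. The host names are placeholders for illustration; substitute your own domain controller and application FQDNs.

```
# Verify this host's clock is synchronized; Kerberos tolerates only a small clock skew (5 minutes by default).
w32tm /query /status

# Verify the domain controller and web application host names resolve in DNS (placeholder names).
Resolve-DnsName dc.contoso.com
Resolve-DnsName myexpenses.contoso.com

# Query a specific SPN, then scan the forest for duplicate SPNs (the scan can take time in large forests).
setspn -q HTTP/myexpenses.contoso.com
setspn -X
```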
> [!NOTE]
-> To validate that an IIS application is configured appropriately for KCD, see [Troubleshoot Kerberos constrained delegation configurations for Application Proxy](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md). F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+> To validate an IIS application is configured for KCD, see [Troubleshoot Kerberos constrained delegation configurations for Application Proxy](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md). See also the AskF5 article, [Kerberos Single Sign-On Method](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html).
-### Authentication and SSO problems
+**Increase log verbosity**
BIG-IP logs are a reliable source of information. To increase the log verbosity level:

1. Go to **Access Policy** > **Overview** > **Event Logs** > **Settings**.
+2. Select the row for your published application.
+3. Select **Edit** > **Access System Logs**.
+4. Select **Debug** from the SSO list.
+5. Select **OK**.
-2. Select the row for your published application. Then, select **Edit** > **Access System Logs**.
+Reproduce your problem before you look at the logs. Revert this setting when you're finished.
-3. Select **Debug** from the SSO list, and then select **OK**. Reproduce your problem before you look at the logs, but remember to switch this back when finished.
+**BIG-IP error**
-If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible that the problem relates to SSO from Azure AD to BIG-IP. To find out:
+If a BIG-IP error appears after Azure AD pre-authentication, the problem might relate to SSO from Azure AD to BIG-IP.
1. Go to **Access** > **Overview** > **Access reports**.
+2. Run the report for the last hour to see whether the logs provide any clues. Use the **View session variables** link for your session to confirm the APM receives the expected claims from Azure AD.
-2. Run the report for the last hour to see if logs provide any clues. The **View session variables** link for your session will also help you understand if the APM is receiving the expected claims from Azure AD.
+**Back-end request**
-If you don't see a BIG-IP error page, the problem is probably more related to the back-end request or related to SSO from BIG-IP to the application. To find out:
+If no BIG-IP error appears, the problem is probably related to the back-end request, or related to SSO from BIG-IP to the application.
1. Go to **Access Policy** > **Overview** > **Active Sessions**.
+2. Select the link for your active session. Use the **View Variables** link to help determine the root cause of KCD problems, particularly if the BIG-IP APM fails to get the right user and domain identifiers.
-2. Select the link for your active session. The **View Variables** link in this location might also help you determine root-cause KCD problems, particularly if the BIG-IP APM fails to get the right user and domain identifiers.
-
-For help with diagnosing KCD-related problems, see the F5 BIG-IP deployment guide [Configuring Kerberos Constrained Delegation](https://www.f5.com/pdf/deployment-guides/kerberos-constrained-delegation-dg.pdf).
-
-## Additional resources
+For help diagnosing KCD-related problems, see the archived F5 BIG-IP deployment guide [Configuring Kerberos Constrained Delegation](https://www.f5.com/pdf/deployment-guides/kerberos-constrained-delegation-dg.pdf).
-* [Active Directory Authentication](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html) (F5 article about BIG-IP advanced configuration)
+## Resources
+* AskF5 article, [Active Directory Authentication](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html)
* [Forget passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless)
* [What is Conditional Access?](../conditional-access/overview.md)
* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
Title: Configure F5 BIG-IP Easy Button for Kerberos SSO
-description: Learn to implement Secure Hybrid Access (SHA) with Single Sign-on to Kerberos applications using F5ΓÇÖs BIG-IP Easy Button guided configuration..
+description: Learn to implement secure hybrid access (SHA) with Single Sign-on to Kerberos applications using F5's BIG-IP Easy Button guided configuration.
Previously updated : 12/20/2021 Last updated : 10/19/2022
-# Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO
+# Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on
-In this article, learn to secure Kerberos-based applications with Azure Active Directory (Azure AD), through F5ΓÇÖs BIG-IP Easy Button guided configuration.
+In this article, learn to secure Kerberos-based applications with Azure Active Directory (Azure AD), through F5 BIG-IP Easy Button Guided Configuration 16.1.
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
-* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
-
+* Improved governance: See [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) and learn more about Azure AD pre-authentication. See also [What is Conditional Access?](../conditional-access/overview.md) to learn how it helps enforce organizational policies.
* Full SSO between Azure AD and BIG-IP published services
* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
-To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn more about benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md).
## Scenario description
-This scenario looks at the classic legacy application using **Kerberos authentication**, also known as **Integrated Windows Authentication (IWA)**, to gate access to protected content.
+This scenario covers the classic legacy application that uses Kerberos authentication, also known as Integrated Windows Authentication (IWA), to gate access to protected content.
-Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+Because it's legacy, the application lacks modern protocols to support direct integration with Azure AD. You can modernize the application, but it's costly, requires planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) bridges the gap between the legacy application and the modern ID control plane, through protocol transitioning.
-Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and headers-based SSO, significantly improving the overall security posture of the application.
+A BIG-IP in front of the application enables overlay of the service with Azure AD pre-authentication and headers-based SSO, improving the security posture of the application.
> [!NOTE]
-> Organizations can also gain remote access to this type of application with [Azure AD Application Proxy](../app-proxy/application-proxy.md)
+> Organizations can gain remote access to this type of application with [Azure AD Application Proxy](../app-proxy/application-proxy.md)
## Scenario architecture
-The SHA solution for this scenario is made up of the following:
+The secure hybrid access (SHA) solution for this scenario has the following components:
-**Application:** BIG-IP published service to be protected by and Azure AD SHA. The application host is domain-joined and so is integrated with Active Directory (AD).
+**Application:** BIG-IP published service to be protected by Azure AD SHA. The application host is domain-joined and therefore integrated with Active Directory (AD).
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verifying user credentials, Conditional Access (CA), and SAML-based SSO to the BIG-IP. Through SSO, Azure AD provides BIG-IP with required session attributes.
-**KDC:** Key Distribution Center (KDC) role on a Domain Controller (DC), issuing Kerberos tickets.
+**KDC:** Key Distribution Center (KDC) role on a Domain Controller (DC), issuing Kerberos tickets
-**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing Kerberos-based SSO to the backend application.
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing Kerberos-based SSO to the back-end application.
-SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+SHA for this scenario supports SP- and IdP-initiated flows. The following image illustrates the SP flow.
![Scenario architecture](./media/f5-big-ip-kerberos-easy-button/scenario-architecture.png)
-| Steps| Description|
-| -- |-|
-| 1| User connects to application endpoint (BIG-IP) |
-| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
-| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
-| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
-| 5| BIG-IP requests Kerberos ticket from KDC |
-| 6| BIG-IP sends request to backend application, along with Kerberos ticket for SSO |
-| 7| Application authorizes request and returns payload |
+## User flow
+
+1. User connects to application endpoint (BIG-IP).
+2. BIG-IP APM access policy redirects user to Azure AD (SAML IdP).
+3. Azure AD pre-authenticates user and applies any enforced Conditional Access policies.
+4. User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token.
+5. BIG-IP requests Kerberos ticket from KDC.
+6. BIG-IP sends request to backend application, along with Kerberos ticket for SSO.
+7. Application authorizes request and returns payload.
## Prerequisites
-Prior BIG-IP experience isnΓÇÖt necessary, but you will need:
-* An Azure AD free subscription or above
+Prior BIG-IP experience isn't necessary, but you need:
-* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
+* An [Azure AD free](https://azure.microsoft.com/free/active-directory/) subscription or above
-* Any of the following F5 BIG-IP license offers
+* A BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
+
+* Any of the following F5 BIG-IP licenses
* F5 BIG-IP® Best bundle
- * F5 BIG-IP APM standalone license
+ * F5 BIG-IP APM standalone
- * F5 BIG-IP APM add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+ * F5 BIG-IP APM add-on license on a BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
- * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+ * 90-day BIG-IP [Free Trial](https://www.f5.com/trial/big-ip-trial.php) license
-* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created directly within Azure AD and flowed back to your on-premises directory
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created in Azure AD and flowed back to your on-premises directory
* An account with Azure AD Application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-* An [SSL Web certificate](./f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certs while testing
+* An [SSL Web certificate](./f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certificates while testing
-* An existing Kerberos application or [setup an IIS (Internet Information Services) app](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) for KCD SSO
+* A Kerberos application, or see [SSO with IIS on Windows](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) to configure one. A minimal IIS setup sketch appears after this list.
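If you need a test application, the following is a minimal sketch for enabling IIS with Windows (Kerberos) authentication on a domain-joined Windows Server. It assumes the Default Web Site and isn't part of the linked guide:

```
# Install IIS along with the Windows Authentication module.
Install-WindowsFeature Web-Server, Web-Windows-Auth

# Enable Windows authentication and disable anonymous access on the Default Web Site.
Import-Module WebAdministration
Set-WebConfigurationProperty -Filter /system.webServer/security/authentication/windowsAuthentication -Name enabled -Value $true -PSPath 'IIS:\' -Location 'Default Web Site'
Set-WebConfigurationProperty -Filter /system.webServer/security/authentication/anonymousAuthentication -Name enabled -Value $false -PSPath 'IIS:\' -Location 'Default Web Site'
```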
## BIG-IP configuration methods
-There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1 offering an Easy button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APMΓÇÖs Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly, easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+This tutorial covers the latest Guided Configuration 16.1 with an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled by the APM Guided Configuration wizard and Microsoft Graph. This integration between BIG-IP APM and Azure AD ensures applications support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
>[!NOTE]
-> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+> Example strings or values in this guide should be replaced with those for your actual environment.
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](../develop/quickstart-register-app.md)
-
-This first step creates a tenant app registration that will be used to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP.
-
-1. Sign-in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrative rights
-
-2. From the left navigation pane, select the **Azure Active Directory** service
-
-3. Under Manage, select **App registrations > New registration**
-
-4. Enter a display name for your application. For example, *F5 BIG-IP Easy Button*
-
-5. Specify who can use the application > **Accounts in this organizational directory only**
-
-6. Select **Register** to complete the initial app registration
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md). This action creates a tenant app registration to authorize Easy Button access to Graph. Through these permissions, the BIG-IP pushes the configurations to establish a trust between a SAML SP instance for the published application, and Azure AD as the SAML IdP.
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrative rights.
+2. From the left navigation pane, select the **Azure Active Directory** service.
+3. Under Manage, select **App registrations > New registration**.
+4. Enter a display name for your application. For example, F5 BIG-IP Easy Button.
+5. Specify who can use the application > **Accounts in this organizational directory only**.
+6. Select **Register**.
7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
   * Application.Read.All
This first step creates a tenant app registration that will be used to authorize
   * Policy.ReadWrite.ConditionalAccess
   * User.Read.All
-8. Grant admin consent for your organization
-
-9. In the **Certificates & Secrets** blade, generate a new **client secret** and note it down
-
-10. From the **Overview** blade, note the **Client ID** and **Tenant ID**
+8. Grant admin consent for your organization.
+9. On **Certificates & Secrets**, generate a new client secret. Make a note of this secret.
+10. From **Overview**, note the Client ID and Tenant ID. A scripted sketch of these steps appears after this list.
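As an illustrative alternative to the portal steps above, a minimal sketch using the Microsoft Graph PowerShell SDK (an assumption, not part of the wizard) creates the registration and a client secret. Granting the Graph application permissions and admin consent still happens in the portal:

```
# Requires the Microsoft Graph PowerShell SDK (Install-Module Microsoft.Graph).
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Create the registration, limited to accounts in this organizational directory only.
$app = New-MgApplication -DisplayName "F5 BIG-IP Easy Button" -SignInAudience "AzureADMyOrg"

# Add a client secret; the secret text is returned only once, so record it now.
$secret = Add-MgApplicationPassword -ApplicationId $app.Id
Write-Output "Client ID: $($app.AppId)"
Write-Output "Client secret: $($secret.SecretText)"
```

Your Tenant ID is shown by `Get-MgContext` after you connect.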
## Configure Easy Button
-Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
+Initiate the APM Guided Configuration to launch the Easy Button template.
1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**. ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
-2. Review the list of configuration steps and select **Next**
+2. Review the configuration steps and select **Next**
![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
-3. Follow the sequence of steps required to publish your application.
+3. Follow the steps to publish your application.
![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)

### Configuration Properties
-The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client you registered in your Azure AD tenant earlier, as an application. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
-
-Some of these are global settings so can be re-used for publishing more applications, further reducing deployment time and effort.
-
-1. Provide a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. The **Azure Service Account Details** section represents the client you registered in your Azure AD tenant earlier, as an application. These settings allow a BIG-IP OAuth client to register a SAML SP in your tenant, along with the SSO properties you would otherwise configure manually. Easy Button does this action for every BIG-IP service published and enabled for SHA.
-2. Enable **Single Sign-On (SSO) & HTTP Headers**
+Some settings are global, so can be reused for publishing more applications, reducing deployment time and effort.
+1. Provide a unique **Configuration Name**.
+2. Enable **Single Sign-On (SSO) & HTTP Headers**.
3. Enter the **Tenant Id, Client ID,** and **Client Secret** you noted when registering the Easy Button client in your tenant. ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-kerberos-easy-button/azure-configuration-properties.png)
-Before you select **Next**, confirm the BIG-IP can successfully connect to your tenant.
+4. Confirm the BIG-IP connects to your tenant.
+5. Select **Next**.
-### Service Provider
+### Service Provider settings
-The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
+The Service Provider settings are the properties for the SAML SP instance of the application protected through SHA.
-1. Enter **Host**. This is the public FQDN of the application being secured
-
-2. Enter **Entity ID.** This is the identifier Azure AD will use to identify the SAML SP requesting a token
+1. Enter **Host**, the public FQDN of the application being secured.
+2. Enter **Entity ID**, the identifier Azure AD uses to identify the SAML SP requesting a token.
![Screenshot for Service Provider settings](./media/f5-big-ip-kerberos-easy-button/service-provider.png)
-The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens canΓÇÖt be intercepted, and personal or corporate data be compromised.
+The optional **Security Settings** specify whether Azure AD encrypts issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides more assurance that token content can't be intercepted and that personal or corporate data can't be compromised.
-3. From the **Assertion Decryption Private Key** list, select **Create New**
+3. From the **Assertion Decryption Private Key** list, select **Create New**.
![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
-4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
-
-6. Select **PKCS 12 (IIS) ** to import your certificate and private key. Once provisioned close the browser tab to return to the main tab.
+4. Select **OK**. The **Import SSL Certificate and Keys** dialog appears.
+5. Select **PKCS 12 (IIS)** to import your certificate and private key.
+6. After provisioning, close the browser tab to return to the main tab.
![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
-6. Check **Enable Encrypted Assertion**
-
-8. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
-
-10. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions.
+7. Check **Enable Encrypted Assertion**.
+8. If you enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This private key is for the certificate that BIG-IP APM uses to decrypt Azure AD assertions.
+9. If you enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. BIG-IP uploads this certificate to Azure AD to encrypt the issued SAML assertions.
![Screenshot for Service Provider security settings](./media/f5-big-ip-kerberos-easy-button/service-provider-security-settings.png)

### Azure Active Directory
-This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP as well as generic SHA template for any other apps. For this scenario select **F5 BIG-IP APM Azure AD Integration > Add.**
+This section defines properties used to manually configure a new BIG-IP SAML application in your Azure AD tenant. Easy Button has application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP and an SHA template for other apps.
+
+For this scenario, select **F5 BIG-IP APM Azure AD Integration > Add.**
![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-kerberos-easy-button/azure-config-add-app.png)

#### Azure Configuration
-1. Enter **Display Name** of app that the BIG-IP creates in your Azure AD tenant, and the icon that the users will see in [MyApps portal](https://myapplications.microsoft.com/).
-
+1. Enter a **Display Name** for the app that BIG-IP creates in your Azure AD tenant, and the icon that users see in the [MyApps portal](https://myapplications.microsoft.com/).
2. Leave the **Sign On URL (optional)** blank to enable IdP initiated sign-on. ![Screenshot for Azure configuration add display info](./media/f5-big-ip-kerberos-easy-button/azure-config-display-name.png)
-3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
-
-5. Enter the certificateΓÇÖs password in **Signing Key Passphrase**
-
-6. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+3. Select the **refresh** icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported.
+4. Enter the certificate password in **Signing Key Passphrase**.
+5. Enable **Signing Option** (optional) to ensure BIG-IP only accepts tokens and claims signed by Azure AD.
![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
-7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing, otherwise all access will be denied
+6. **User and User Groups** are dynamically queried from your Azure AD tenant and authorize access to the application. Add a user or group for testing; otherwise, all access is denied.
![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-kerberos-easy-button/azure-configuration-add-user-groups.png)

#### User Attributes & Claims
-When a user successfully authenticates to Azure AD, it issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
+When a user authenticates to Azure AD, it issues a SAML token with a default set of claims and attributes identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. Use it to configure more claims.
-As our AD infrastructure is based on a .com domain suffix used both, internally and externally, we donΓÇÖt require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](./f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or userΓÇÖs login using an alternate suffix.
+The AD infrastructure is based on a .com domain suffix used internally and externally. More attributes aren't required to achieve a functional KCD SSO implementation. See the [advanced tutorial](./f5-big-ip-kerberos-advanced.md) for multiple domains or user sign-in using an alternate suffix.
![Screenshot for user attributes and claims](./media/f5-big-ip-kerberos-easy-button/user-attributes-claims.png)

#### Additional User Attributes
-The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories, for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+The **Additional User Attributes** tab supports various distributed systems requiring attributes stored in other directories, for session augmentation. Attributes fetched from an LDAP source can be injected as SSO headers to help control access based on roles, Partner IDs, etc.
![Screenshot for additional user attributes](./media/f5-big-ip-kerberos-easy-button/additional-user-attributes.png)
The **Additional User Attributes** tab can support a variety of distributed syst
#### Conditional Access Policy
-CA policies are enforced post Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+CA policies are enforced after Azure AD pre-authentication to control access based on device, application, location, and risk signals.
-The **Available Policies** view, by default, will list all CA policies that do not include user based actions.
+The **Available Policies** view shows all CA policies without user-based actions.
-The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+The **Selected Policies** view shows all policies targeting all cloud apps. You can't deselect these policies or move them to the Available Policies list because they're enforced at a tenant level.
-To select a policy to be applied to the application being published:
+To select a policy to apply to the application being published:
-1. Select the desired policy in the **Available Policies** list
-2. Select the right arrow and move it to the **Selected Policies** list
+1. Select a policy in the **Available Policies** list.
+2. Select the **right arrow** and move it to the **Selected Policies** list.
-Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
+Selected policies need an **Include** or **Exclude** option checked. If both options are checked, the selected policy isn't enforced.
![Screenshot for CA policies](./media/f5-big-ip-kerberos-easy-button/conditional-access-policy.png)

>[!NOTE]
->The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+>The policy list is enumerated once, when you first switch to this tab. You can use the **refresh** button to manually force the wizard to query your tenant, but this button appears only after the application is deployed.
### Virtual Server Properties
-A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the appllication itself. Using a test PC's localhost DNS is fine for testing.
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to policy.
-2. Enter **Service Port** as *443* for HTTPS
-
-3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
-
-4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites or leave the default whilst testing
+1. Enter a **Destination Address**, an available IPv4/IPv6 address the BIG-IP can use to receive client traffic. A corresponding record should exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is acceptable for testing.
+2. For **Service Port** enter 443 for HTTPS.
+3. Check **Enable Redirect Port** and then enter **Redirect Port**, which redirects incoming HTTP client traffic to HTTPS.
+4. The Client SSL Profile enables the virtual server for HTTPS, so client connections are encrypted over TLS. Select the **Client SSL Profile** you created for prerequisites, or leave the default if you're testing.
![Screenshot for Virtual server](./media/f5-big-ip-kerberos-easy-button/virtual-server.png)

### Pool Properties
-The **Application Pool tab** details the services behind a BIG-IP, represented as a pool containing one or more application servers.
-
-1. Choose from **Select a Pool.** Create a new pool or select an existing one
+The **Application Pool** tab details the services behind a BIG-IP, represented as a pool with application servers.
-2. Choose the **Load Balancing Method** as *Round Robin*
-
-3. For **Pool Servers** select an existing server node or specify an IP and port for the backend node hosting the header-based application
+1. Choose from **Select a Pool**. Create a new pool or select one.
+2. Choose the **Load Balancing Method**, such as Round Robin.
+3. For **Pool Servers** select a server node, or specify an IP and port for the back-end node hosting the header-based application.
![Screenshot for Application pool](./media/f5-big-ip-oracle/application-pool.png)
-Our backend application runs on HTTP port 80. You can switch this to 443 if your application runs on HTTPS.
+The back-end application runs on HTTP port 80. You can switch the port to 443, if your application runs on HTTPS.
-#### Single Sign-On & HTTP Headers
+#### Single sign-on and HTTP Headers
-Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO. You will need the Kerberos delegation account created earlier to complete this step.
+Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The Easy Button wizard supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO. Use the Kerberos delegation account you created to complete this step.
Enable **Kerberos** and **Show Advanced Setting** to enter the following:
-* **Username Source:** Specifies the preferred username to cache for SSO. You can provide any session variable as the source of the user ID, but *session.saml.last.identity* tends to work best as it holds the Azure AD claim containing the logged in user ID
+* **Username Source:** The preferred username to cache for SSO. You can provide a session variable as the source of the user ID, but *session.saml.last.identity* tends to work best because it holds the Azure AD claim containing the logged-in user ID.
-* **User Realm Source:** Required if the user domain is different to the BIG-IPΓÇÖs kerberos realm. In that case, the APM session variable would contain the logged in user domain. For example,*session.saml.last.attr.name.domain*
+* **User Realm Source:** Required if the user domain differs from the BIG-IP Kerberos realm. In that case, the APM session variable contains the logged-in user domain. For example, *session.saml.last.attr.name.domain*
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-kerberos-easy-button/sso-headers.png)
-* **KDC:** IP of a Domain Controller (Or FQDN if DNS is configured & efficient)
+* **KDC:** IP of a Domain Controller, or FQDN if DNS is configured and efficient
-* **UPN Support:** Enable for the APM to use the UPN for kerberos ticketing
+* **UPN Support:** Enable this option for the APM to use the UPN for Kerberos ticketing
-* **SPN Pattern:** Use HTTP/%h to inform the APM to use the host header of the client request and build the SPN that it is requesting a kerberos token for.
+* **SPN Pattern:** Use HTTP/%h to inform the APM to use the host header of the client request, and build the SPN for which it's requesting a Kerberos token. For example, a client request for myexpenses.contoso.com produces a ticket request for the SPN HTTP/myexpenses.contoso.com.
-* **Send Authorization:** Disable for applications that prefer negotiating authentication instead of receiving the kerberos token in the first request. For example, *Tomcat.*
+* **Send Authorization:** Disable for applications that negotiate authentication instead of receiving the Kerberos token in the first request. For example, *Tomcat*.
![Screenshot for SSO method configuration](./media/f5-big-ip-kerberos-easy-button/sso-method-config.png)
-### Session Management
-The BIG-IPs session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5's docs](https://support.f5.com/csp/article/K18390492) for details on these settings.
+### Session management
+
+The BIG-IP session management settings define the conditions under which user sessions terminate or continue, limits for users and IP addresses, and corresponding user info. Refer to the AskF5 article [K18390492: Security | BIG-IP APM operations guide](https://support.f5.com/csp/article/K18390492) for settings details.
-What isnΓÇÖt covered here however is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout Url with the APMΓÇÖs SLO endpoint. That way IdP initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+What isn't covered is Single Log Out (SLO) functionality, which ensures sessions between the IdP, the BIG-IP, and the user agent terminate when users sign out. When the Easy Button instantiates a SAML application in your Azure AD tenant, it populates the sign out URL with the APM SLO endpoint. An IdP-initiated sign out from the Azure AD MyApps portal terminates the session between the BIG-IP and a client.
-Along with this the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP initiated sign outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs-out of the application.
+The SAML federation metadata for the published application is imported from your tenant, providing the APM with the SAML sign out endpoint for Azure AD. This action ensures an SP-initiated sign out terminates the session between a client and Azure AD. The APM needs to know when a user signs out of the application.
-If the BIG-IP webtop portal is used to access published applications then a sign-out from there would be processed by the APM to also call the Azure AD sign-out endpoint. But consider a scenario where the BIG-IP webtop portal isnΓÇÖt used, then the user has no way of instructing the APM to sign out. Even if the user signs-out of the application itself, the BIG-IP is technically oblivious to this. So for this reason, SP initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this would be to add an SLO function to your applications sign out button, so that it can redirect your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+If the BIG-IP webtop portal accesses published applications, then a sign out is processed by the APM to call the Azure AD sign out endpoint. But consider a scenario where the BIG-IP webtop portal isn't used; then the user can't instruct the APM to sign out. Even if the user signs out of the application, the BIG-IP is oblivious. Therefore, SP-initiated sign out needs consideration to ensure sessions terminate securely. You can add an SLO function to your application's Sign out button, so it redirects your client to the Azure AD SAML, or the BIG-IP sign out endpoint.
-If making a change to the app is a no go, then consider having the BIG-IP listen for the application's sign-out call, and upon detecting the request have it trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](./f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP irules to achieve this. More details on using BIG-IP iRules to achieve this is available in the F5 knowledge article [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+The URL for SAML sign out endpoint for your tenant is found in **App Registrations > Endpoints**.
+
+If you can't change the app, then consider having the BIG-IP listen for the application sign out call, and upon detecting the request, it triggers SLO. Refer to [Oracle PeopleSoft SLO guidance](./f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) to learn about BIG-IP iRules. For more information about using BIG-IP iRules, see:
+
+* [K42052145: Configuring automatic session termination (log out) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145)
+* [K12056: Overview of the Log-out URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
-This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenants list of Enterprise applications.
+This section is a breakdown of your configurations.
+
+Select **Deploy** to commit settings and verify the application is in your tenant's list of Enterprise applications.
## Active Directory KCD configurations
-For the BIG-IP APM to perform SSO to the backend application on behalf of users, KCD must be configured in the target AD domain. Delegating authentication also requires that the BIG-IP APM be provisioned with a domain service account.
+For the BIG-IP APM to perform SSO to the back-end application on behalf of users, configure KCD in the target AD domain. Delegating authentication requires that the BIG-IP APM be provisioned with a domain service account.
-Skip this section if your APM service account and delegation are already setup, otherwise log into a domain controller with an admin account.
+Skip this section if your APM service account and delegation are set up; otherwise, sign in to a domain controller with an admin account.
-For our scenario, the application is hosted on server **APP-VM-01** and is running in the context of a service account named **web_svc_account**, not the computer's identity. The delegating service account assigned to the APM will be called **F5-BIG-IP**.
+For this scenario, the application is hosted on server APP-VM-01 and runs in the context of a service account named web_svc_account, not the computer identity. The delegating service account assigned to the APM is F5-BIG-IP.
### Create a BIG-IP APM delegation account
-As the BIG-IP doesn't support group Managed Service Accounts (gMSA), create a standard user account to use as the APM service account:
+The BIG-IP doesn't support group Managed Service Accounts (gMSA), so create a standard user account for the APM service account.
-1. Replace the **UserPrincipalName** and **SamAccountName** values with those for your environment.
+1. Replace the **UserPrincipalName** and **SamAccountName** values with the values needed for your environment.
```New-ADUser -Name "F5 BIG-IP Delegation Account" -UserPrincipalName host/f5-big-ip.contoso.com@contoso.com -SamAccountName "f5-big-ip" -PasswordNeverExpires $true -Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password") ```
-2. Create a **Service Principal Name (SPN)** for the APM service account to use when performing delegation to the web application's service account.
+2. Create a **Service Principal Name (SPN)** for the APM service account for performing delegation to the web application service account.
```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @{Add="host/f5-big-ip.contoso.com"} ```
-3. Ensure the SPN now shows against the APM service account.
+3. Ensure the SPN shows against the APM service account.
```Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
As the BIG-IP doesn't support group Managed Service Accounts (gMSA), create a
```Get-ADUser -identity <name_of _account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
-5. You can use any SPN you see defined against a web application's service account, but in the interest of security it's best to use a dedicated SPN matching the host header of the application. For example, as our web application host header is myexpenses.contoso.com we would add HTTP/myexpenses.contoso.com to the applications service account object in AD.
+5. You can use an SPN defined against a web application's service account, but for better security, use a dedicated SPN matching the host header of the application. For example, the web application host header is myexpenses.contoso.com. You can add HTTP/myexpenses.contoso.com to the application's service account object in AD.
```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
- Or if the app ran in the machine context, we would add the SPN to the object of the computer account in AD.
+Or if the app ran in the machine context, add the SPN to the object of the computer account in AD.
- ```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
+ ```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
-With the SPNs defined, the APM service account now needs trusting to delegate to that service. The configuration will vary depending on the topology of your BIG-IP and application server.
+With the SPNs defined, the APM service account needs to be trusted to delegate to that service. The configuration varies depending on the topology of your BIG-IP and application server.
-### Configure BIG-IP and target application in same domain
+### Configure BIG-IP and target application in the same domain
-1. Set trust for the APM service account to delegate authentication
+1. Set trust for the APM service account to delegate authentication.
```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true ```
-2. The APM service account then needs to know which target SPN it's trusted to delegate to, Or in other words which service is it allowed to request a Kerberos ticket for. Set target SPN to the service account running your web application.
+2. The APM service account needs to know the target SPN it's trusted to delegate to; in other words, the service for which it's allowed to request a Kerberos ticket. Set the target SPN to the service account running your web application.
```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')} ```
-If preferred, you can also complete these tasks through the Active Directory Users and Computers MMC (Microsoft Management Console) on a domain controller.
+>[!NOTE]
+>You can complete these tasks with the Active Directory Users and Computers Microsoft Management Console (MMC) on a domain controller.
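To confirm both settings took effect, you can run a quick check; a minimal sketch assuming the ActiveDirectory PowerShell module:

```
# Verify the APM account is trusted for protocol transition and lists the target SPN
Get-ADUser -Identity f5-big-ip -Properties TrustedToAuthForDelegation, 'msDS-AllowedToDelegateTo' |
  Select-Object TrustedToAuthForDelegation, 'msDS-AllowedToDelegateTo'
```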
### BIG-IP and application in different domains
-Starting with Windows Server 2012, cross domain KCD uses Resource-based constrained delegation (RCD). The constraints for a service have been transferred from the domain administrator to the service administrator. This allows the back-end service administrator to allow or deny SSO. This also introduces a different approach at configuration delegation, which is only possible using either PowerShell or ADSIEdit.
+From Windows Server 2012 onward, cross-domain KCD uses resource-based constrained delegation (RCD). The constraints for a service are transferred from the domain administrator to the service administrator, so the back-end service administrator can allow or deny SSO. This change introduces a different approach to configuring delegation, which is possible only by using PowerShell or ADSIEdit.
-The **PrincipalsAllowedToDelegateToAccount** property of the applications service account (computer or dedicated service account) can be used to grant delegation from the BIG-IP. For this scenario, use the following PowerShell command on a Domain Controller DC (2012 R2+) within the same domain as the application.
+You can use the **PrincipalsAllowedToDelegateToAccount** property of the application's service account (computer or dedicated service account) to grant delegation from the BIG-IP. For this scenario, use the following PowerShell command on a domain controller (2012 R2+) in the same domain as the application.
-If the **web_svc_account** service runs in context of a user account:
+If the web_svc_account service runs in the context of a user account:
```${big-ip} = Get-ADComputer -Identity f5-big-ip -Server dc.contoso.com ``` ```Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ${big-ip} ``` ```Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount ```
-If the **web_svc_account** service runs in context of a computer account:
+If the web_svc_account service runs in the context of a computer account:
```${big-ip} = Get-ADComputer -Identity f5-big-ip -Server dc.contoso.com ``` ```Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ${big-ip} ```
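As with the user account variant, you can verify the result; a minimal sketch:

```
# Confirm the BIG-IP account is now allowed to delegate to the application's account
Get-ADComputer web_svc_account -Properties PrincipalsAllowedToDelegateToAccount |
  Select-Object -ExpandProperty PrincipalsAllowedToDelegateToAccount
```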
For more information, see [Kerberos Constrained Delegation across domains](/prev
## Next steps
-From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+From a browser, connect to the application external URL or select the **application** icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you're redirected to the BIG-IP virtual server for the application and signed in through SSO.
![Screenshot for App views](./media/f5-big-ip-kerberos-easy-button/app-view.png)
-For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+For increased security, organizations using this pattern can consider blocking direct access to the application, thereby forcing a strict path through the BIG-IP.
### Azure AD B2B guest access
-[Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md) is supported for this scenario, by having guest identities flowed down from your Azure AD tenant to the directory the application uses for authorisation. Without a local representation of a guest object in AD, the BIG-IP would fail to recieve a kerberos ticket for KCD SSO to the backend application.
+[Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md) is supported for this scenario, with guest identities flowing down from your Azure AD tenant to the directory the application uses for authorization. Without a local representation of a guest object in AD, the BIG-IP fails to receive a Kerberos ticket for KCD SSO to the back-end application.
## Advanced deployment
-There may be cases where the Guided Configuration templates lacks the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for kerberos-based SSO](./f5-big-ip-kerberos-advanced.md).
+The Guided Configuration templates can lack the flexibility to achieve some requirements. For those scenarios, see [Advanced Configuration for kerberos-based SSO](./f5-big-ip-kerberos-advanced.md).
-Alternatively, the BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though bulk of your configurations are automated through the wizard-based templates.
+Alternatively, in BIG-IP you can disable the Guided Configuration strict management mode. You can manually change your configurations, although the bulk of your configurations are automated through the wizard-based templates.
-You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
+You can navigate to **Access > Guided Configuration** and select the small **padlock** icon on the far-right of the row for your application's configs.
![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
-At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+At this point, changes with the wizard UI aren't possible, but all BIG-IP objects associated with the published instance of the application are unlocked for management.
>[!NOTE]
->Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the advanced configuration method for production services.
+>Re-enabling strict mode and deploying a configuration overwrites settings performed outside the Guided Configuration UI. Therefore, we recommend the advanced configuration method for production services.
## Troubleshooting
-Failure to access a SHA protected application can be due to any number of factors. If troubleshooting kerberos SSO issues, be aware of the following.
-
-* Kerberos is time sensitive, so requires that servers and clients be set to the correct time and where possible synchronized to a reliable time source
+When troubleshooting Kerberos SSO issues, be aware of the following:
+* Kerberos is time sensitive, so it requires that servers and clients are set to the correct time and, when possible, synchronized to a reliable time source
* Ensure the hostnames for the domain controller and web application are resolvable in DNS
+* Ensure there are no duplicate SPNs in your AD environment. You can execute the following query at the command line on a domain PC: `setspn -q HTTP/my_target_SPN`. See the example checks after this list.
-* Ensure there are no duplicate SPNs in your AD environment by executing the following query at the command line on a domain PC: setspn -q HTTP/my_target_SPN
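The following commands, run from a PowerShell prompt on a domain-joined machine, cover these checks; the hostnames are examples from this scenario:

```
w32tm /query /status                     # confirm the clock is synchronized to a reliable time source
Resolve-DnsName dc.contoso.com           # confirm the domain controller resolves in DNS
Resolve-DnsName myexpenses.contoso.com   # confirm the web application host resolves in DNS
setspn -Q HTTP/myexpenses.contoso.com    # search the forest for duplicate SPN registrations
```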
+You can refer to our [App Proxy guidance](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md) to validate an IIS application is configured for KCD. See also the AskF5 article, [Kerberos single sign on method](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html).
-You can refer to our [App Proxy guidance](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md) to validate an IIS application is configured appropriately for KCD. F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+### Log analysis: increase verbosity
-### Log analysis
-
-BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
-
-1. Navigate to **Access Policy > Overview > Event Logs > Settings**
-
-2. Select the row for your published application, then **Edit > Access System Logs**
+Use BIG-IP logging to isolate issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**.
+2. Select the row for your published application, then **Edit > Access System Logs**.
3. Select **Debug** from the SSO list, and then select **OK**.
-Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data.
+Reproduce your issue and inspect the logs. When complete, revert this setting, because verbose mode generates a large amount of data.
-If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+### BIG-IP error page
-1. Navigate to **Access > Overview > Access reports**
+If a BIG-IP error appears after Azure AD pre-authentication, the issue might relate to SSO from Azure AD to the BIG-IP.
-2. Run the report for the last hour to see logs provide any clues. The **View session variables** link for your session will also help understand if the APM is receiving the expected claims from Azure AD.
+1. Navigate to **Access > Overview > Access reports**.
+2. Run the report for the last hour to check the logs for clues. Use the **View session variables** link to help understand if the APM receives the expected claims from Azure AD.
-If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+### Back-end request
-1. Navigate to **Access Policy > Overview > Active Sessions**
+If no error page appears, the issue is probably related to the back-end request, or SSO from the BIG-IP to the application.
+1. Navigate to **Access Policy > Overview > Active Sessions**.
2. Select the link for your active session. The **View Variables** link in this location may also help determine the root cause of KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables.
-See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+For more information, see:
+
+* DevCentral: [APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107)
+* AskF5: [Session Variables](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html)
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
Previously updated : 11/22/2021 Last updated : 10/19/2022
-# Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO
+# Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP single sign-on
-In this article, learn to secure header & LDAP based applications using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+In this article, you can learn to secure header and LDAP-based applications using Azure Active Directory (Azure AD), by using the F5 BIG-IP Easy Button Guided Configuration 16.1. Integrating a BIG-IP with Azure AD provides many benefits:
-Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
+* Improved governance: See [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) and learn more about Azure AD pre-authentication. See also [What is Conditional Access?](../conditional-access/overview.md) to learn how it helps enforce organizational policies.
+* Full single sign-on (SSO) between Azure AD and BIG-IP published services
+* Manage identities and access from one control plane, the [Azure portal](https://portal.azure.com/)
-* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
-
-* Full SSO between Azure AD and BIG-IP published services
-
-* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
-
-To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn about more benefits, see [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md).
## Scenario description
-This scenario looks at the classic legacy application using **HTTP authorization headers** sourced from LDAP directory attributes, to manage access to protected content.
+This scenario focuses on the classic, legacy application using **HTTP authorization headers** sourced from LDAP directory attributes, to manage access to protected content.
-Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+Because it's legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it's costly, requires planning, and introduces risk of potential downtime. Instead, you can use an F5 BIG-IP Application Delivery Controller (ADC) to bridge the gap between the legacy application and the modern ID control plane, with protocol transitioning.
-Having a BIG-IP in front of the app enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+Having a BIG-IP in front of the app enables overlay of the service with Azure AD pre-authentication and header-based SSO, improving the overall security posture of the application.
## Scenario architecture
-The secure hybrid access solution for this scenario is made up of:
+The secure hybrid access solution for this scenario has:
-**Application:** BIG-IP published service to be protected by Azure AD SHA.
+**Application:** BIG-IP published service to be protected by Azure AD secure hybrid access (SHA).
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) that verifies user credentials, Conditional Access (CA), and SAML-based SSO to the BIG-IP. With SSO, Azure AD provides the BIG-IP with required session attributes.
-**HR system:** LDAP based employee database acting as source of truth for fine grained application permissions.
+**HR system:** LDAP-based employee database as the source of truth for application permissions.
-**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the backend application.
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the back-end application.
-SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+SHA for this scenario supports SP and IdP initiated flows. The following image illustrates the SP initiated flow.
![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-ldap/sp-initiated-flow.png)
-| Steps| Description |
-| -- |-|
-| 1| User connects to application endpoint (BIG-IP) |
-| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
-| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
-| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
-| 5| BIG-IP requests additional attributes from LDAP based HR system |
-| 6| BIG-IP injects Azure AD and HR system attributes as headers in request to application |
-| 7| Application authorizes access with enriched session permissions |
+## User flow
+
+1. User connects to application endpoint (BIG-IP).
+2. BIG-IP APM access policy redirects user to Azure AD (SAML IdP).
+3. Azure AD pre-authenticates user and applies enforced Conditional Access policies.
+4. User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token.
+5. BIG-IP requests more attributes from LDAP based HR system.
+6. BIG-IP injects Azure AD and HR system attributes as headers in request to application.
+7. Application authorizes access with enriched session permissions.
## Prerequisites+ Prior BIG-IP experience isn't necessary, but you'll need: -- An Azure AD free subscription or above
+- An [Azure AD free](https://azure.microsoft.com/products/active-directory/?OCID=AIDcmm5edswduu_SEM_6706bbf3ede61902c886d8bcef5f7616:G:s&ef_id=6706bbf3ede61902c886d8bcef5f7616:G:s&msclkid=6706bbf3ede61902c886d8bcef5f7616) subscription or above
-- An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in
+- A BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in
Azure](./f5-bigip-deployment-guide.md) - Any of the following F5 BIG-IP license SKUs
Prior BIG-IP experience isn't necessary, but you'll need:
- F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
- - 90-day BIG-IP full feature [trial
- license](https://www.f5.com/trial/big-ip-trial.php).
+ - 90-day BIG-IP product [Free Trial](https://www.f5.com/trial/big-ip-trial.php)
- User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD - An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator) -- An [SSL Web certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
+- An [SSL Web certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certificates while testing
-- An existing header-based application or [setup a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing
+- A header-based application or [set up a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing
- A user directory that supports LDAP, such as Windows Active Directory Lightweight Directory Services (AD LDS), OpenLDAP etc. ## BIG-IP configuration methods
-There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1 offering an Easy button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly, easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+There are many methods to configure BIG-IP. This tutorial covers the latest Guided Configuration 16.1 with an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled between the APM Guided Configuration wizard and Microsoft Graph. This integration between BIG-IP APM and Azure AD ensures applications support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
>[!NOTE]
->All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+>Replace example strings or values in this guide with those for your actual environment.
## Register Easy Button Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
-This first step creates a tenant app registration that will be used to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP.
-
-1. Sign-in to the [Azure AD portal](https://portal.azure.com) using an account with Application Administrative rights
-
-2. From the left navigation pane, select the **Azure Active Directory** service
-
-3. Under Manage, select **App registrations > New registration**
-
-4. Enter a display name for your application. For example, *F5 BIG-IP Easy Button*
-
-5. Specify who can use the application > **Accounts in this organizational directory only**
-
-6. Select **Register** to complete the initial app registration
+This first step creates a tenant app registration to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP can push the configurations to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP.
+1. Sign in to the [Azure AD portal](https://portal.azure.com) using an account with Application Administrative rights.
+2. From the left navigation pane, select the **Azure Active Directory** service.
+3. Under Manage, select **App registrations > New registration**.
+4. Enter a display name for your application. For example, F5 BIG-IP Easy Button.
+5. Specify who can use the application > **Accounts in this organizational directory only**.
+6. Select **Register**.
7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**: * Application.Read.All
This first step creates a tenant app registration that will be used to authorize
* Policy.ReadWrite.ConditionalAccess * User.Read.All
-8. Grant admin consent for your organization
-
-9. In the **Certificates & Secrets** blade, generate a new **client secret** and note it down
-
-10. From the **Overview** blade, note the **Client ID** and **Tenant ID**
+8. Grant admin consent for your organization.
+9. On **Certificates & Secrets**, generate a new **client secret**. Make a note of this secret.
+10. On **Overview**, note the **Client ID** and **Tenant ID**.
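If you prefer to script the registration, a minimal sketch using the Microsoft Graph PowerShell SDK follows; the display name is the example above, and you still need to add the API permissions, admin consent, and client secret as described in these steps:

```
# Sketch: create the Easy Button app registration (single-tenant)
Connect-MgGraph -Scopes "Application.ReadWrite.All"
New-MgApplication -DisplayName "F5 BIG-IP Easy Button" -SignInAudience "AzureADMyOrg"
```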
## Configure Easy Button
-Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
+Initiate the APM **Guided Configuration** to launch the **Easy Button** template.
1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**. ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
-2. Review the list of configuration steps and select **Next**
+2. Review the list of steps and select **Next**
![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
-3. Follow the sequence of steps required to publish your application.
+3. Follow the steps to publish your application.
![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox) ### Configuration Properties
-The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client you registered in your Azure AD tenant earlier, as an application. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
-
-Some of these are global settings so can be re-used for publishing more applications, further reducing deployment time and effort.
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. The **Azure Service Account Details** section represents the client you registered in your Azure AD tenant earlier, as an application. These settings allow a BIG-IP's OAuth client to register a SAML SP in your tenant, with the SSO properties you would configure manually. Easy Button does this action for every BIG-IP service published and enabled for SHA.
-1. Enter a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations.
-
-2. Enable **Single Sign-On (SSO) & HTTP Headers**
+Some of these settings are global, so they can be reused to publish more applications, reducing deployment time and effort.
+1. Enter a unique **Configuration Name** so admins can distinguish between Easy Button configurations.
+2. Enable **Single Sign-On (SSO) & HTTP Headers**.
3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** you noted when registering the Easy Button client in your tenant.-
-5. Confirm the BIG-IP can successfully connect to your tenant, and then select **Next**
+4. Confirm the BIG-IP can connect to your tenant.
+5. Select **Next**.
![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-ldap/config-properties.png) ### Service Provider
-The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
-1. Enter **Host**. This is the public FQDN of the application being secured
-
-2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
+1. Enter **Host**, the public FQDN of the application being secured.
+2. Enter **Entity ID**, the identifier Azure AD uses to identify the SAML SP requesting a token.
![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-ldap/service-provider.png)
-The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data be compromised.
+Use the optional **Security Settings** to specify whether Azure AD encrypts issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides assurance that token content can't be intercepted, and that personal or corporate data can't be compromised.
3. From the **Assertion Decryption Private Key** list, select **Create New** ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
-4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
-
-6. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned close the browser tab to return to the main tab.
+4. Select **OK**. The **Import SSL Certificate and Keys** dialog opens in a new tab.
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. After provisioning, close the browser tab to return to the main tab.
![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png) 6. Check **Enable Encrypted Assertion**.-
-8. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions.
-
-9. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions.
+7. If you enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. BIG-IP APM uses this certificate private key to decrypt Azure AD assertions.
+8. If you enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. BIG-IP uploads this certificate to Azure AD to encrypt the issued SAML assertions.
![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png) ### Azure Active Directory
-This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP as well as generic SHA template for any other apps. For this scenario select **F5 BIG-IP APM Azure AD Integration > Add**.
+This section contains properties you use to manually configure a new BIG-IP SAML application in your Azure AD tenant. Easy Button has application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP and an SHA template for other apps.
+
+For this scenario, select **F5 BIG-IP APM Azure AD Integration > Add**.
![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-ldap/azure-config-add-app.png) #### Azure Configuration
-1. Enter **Display Name** of app that the BIG-IP creates in your Azure AD tenant, and the icon that the users will see on [MyApps portal](https://myapplications.microsoft.com/)
-
-2. Do not enter anything in the **Sign On URL (optional)** to enable IdP initiated sign-on
+1. Enter **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and the icon that users see on [MyApps portal](https://myapplications.microsoft.com/).
+2. Enter nothing in the **Sign On URL (optional)** to enable IdP-initiated sign-on.
![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-ldap/azure-configuration-properties.png)
-3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
-
-5. Enter the certificateΓÇÖs password in **Signing Key Passphrase**
-
-6. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+3. Select the **Refresh** icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported.
+4. Enter the certificate password in **Signing Key Passphrase**.
+5. Enable **Signing Option** (optional) to ensure BIG-IP accepts tokens and claims signed by Azure AD.
![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
-7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing, otherwise all access will be denied
+6. **User and User Groups** are dynamically queried from your Azure AD tenant and authorize access to the application. Add a user or group for testing; otherwise, access will be denied.
![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png) #### User Attributes & Claims
-When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
+When a user authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims.
-For this example, you can include one more attribute:
+For this example, include one more attribute:
-1. Enter **Header Name** as *employeeid*
-
-2. Enter **Source Attribute** as *user.employeeid*
+1. For **Header Name** enter **employeeid**.
+2. For **Source Attribute** enter **user.employeeid**.
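Before testing, it can help to confirm the source attribute is populated in Azure AD. A minimal sketch using the Microsoft Graph PowerShell SDK; the UPN is an example:

```
# Confirm employeeId is set for a test user; an empty value yields an empty claim
Connect-MgGraph -Scopes "User.Read.All"
Get-MgUser -UserId "testuser@contoso.com" -Property "employeeId" | Select-Object EmployeeId
```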
![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png) #### Additional User Attributes
-In the **Additional User Attributes tab**, you can enable session augmentation required by various distributed systems such as Oracle, SAP, and other JAVA based implementations requiring attributes stored in other directories. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
-
-1. Enable the **Advanced Settings** option
-
-2. Check the **LDAP Attributes** check box
-
-3. Choose **Create New** in Choose Authentication Server
+On the **Additional User Attributes** tab, you can enable session augmentation for distributed systems such as Oracle, SAP, and other JAVA-based implementations requiring attributes stored in other directories. Attributes fetched from an LDAP source can be injected as more SSO headers to control access based on roles, Partner IDs, etc.
-4. Depending on your setup, select either **Use pool** or **Direct** Server Connection mode to provide the **Server Address** of the target LDAP service. If using a single LDAP server, choose *Direct*
-
-5. Enter **Service Port** as 389, 636 (Secure), or any other port your LDAP service uses
-
-6. Enter the **Base Search DN** to the exact distinguished name of the location containing the account the APM will authenticate with for LDAP service queries
+1. Enable the **Advanced Settings** option.
+2. Check the **LDAP Attributes** check box.
+3. Choose **Create New** in Choose Authentication Server.
+4. Depending on your setup, select either **Use pool** or **Direct** Server Connection mode to provide the **Server Address** of the target LDAP service. If using a single LDAP server, choose *Direct*.
+5. Enter **Service Port** as 389, 636 (Secure), or another port your LDAP service uses.
+6. Set the **Base Search DN** to the exact distinguished name of the location containing the account the APM will authenticate with for LDAP service queries.
![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-ldap/additional-user-attributes.png)
-7. Set the **Base Search DN** to the exact distinguished name of the location containing the user account objects that the APM will query via LDAP
-
-8. Set both membership options to **None** and add the name of the user object attribute that must be returned from the LDAP directory. For our scenario, this is **eventroles**
+7. Set the **Base Search DN** to the distinguished name of the location containing the user account objects that the APM queries via LDAP.
+8. Set both membership options to **None** and add the name of the user object attribute to be returned from the LDAP directory. For this scenario: **eventroles**.
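To sanity-check the LDAP side before deploying, you can run a query like the following; a sketch in PowerShell where the server, port, base DN, filter, and attribute name are all examples to adjust for your directory:

```
# Sketch: query the LDAP directory for a test user's eventroles attribute
$searcher = [ADSISearcher]::new()
$searcher.SearchRoot = [ADSI]"LDAP://ldap.contoso.com:389/DC=contoso,DC=com"   # example server and base DN
$searcher.Filter = "(sAMAccountName=testuser)"                                 # example filter
[void]$searcher.PropertiesToLoad.Add("eventroles")
($searcher.FindOne()).Properties["eventroles"]                                 # expected: the role values to inject
```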
![Screenshot for LDAP query properties](./media/f5-big-ip-easy-button-ldap/user-properties-ldap.png) #### Conditional Access Policy
-CA policies are enforced post Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+CA policies are enforced after Azure AD pre-authentication to control access based on device, application, location, and risk signals.
-The **Available Policies** view, by default, will list all CA policies that do not include user based actions.
+The **Available Policies** view lists CA policies that don't include user actions.
-The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+The **Selected Policies** view shows policies targeting all cloud apps. These policies can't be deselected or moved to the Available Policies list because they're enforced at a tenant level.
To select a policy to be applied to the application being published:
-1. Select the desired policy in the **Available Policies** list.
+1. Select a policy in the **Available Policies** list.
2. Select the right arrow and move it to the **Selected Policies** list. -
-Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
+>[!NOTE]
+>Selected policies have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
![Screenshot for CA policies](./media/f5-big-ip-kerberos-easy-button/conditional-access-policy.png) >[!NOTE]
->The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+>The policy list is enumerated once when you initially select this tab. Use the **Refresh** button to manually force the wizard to query your tenant. This button appears when the application is deployed.
### Virtual Server Properties
-A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for clients requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to policy.
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the appllication itself. Using a test PC's localhost DNS is fine for testing.
-
-2. Enter **Service Port** as *443* for HTTPS
-
-3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
-
-4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites or leave the default whilst testing
+1. Enter **Destination Address**, an available IPv4/IPv6 address the BIG-IP can use to receive client traffic. There should be a corresponding record in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is acceptable for testing.
+2. For **Service Port**, enter 443 for HTTPS.
+3. Check **Enable Redirect Port** and then enter **Redirect Port** to redirect incoming HTTP client traffic to HTTPS.
+4. The Client SSL Profile enables the virtual server for HTTPS, so client connections are encrypted over TLS. Select the **Client SSL Profile** you created or leave the default while testing.
![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png) ### Pool Properties
-The **Application Pool tab** details the services behind a BIG-IP that are represented as a pool, containing one or more application servers.
-
-1. Choose from **Select a Pool**. Create a new pool or select an existing one
-
-2. Choose the **Load Balancing Method** as *Round Robin*
+The **Application Pool** tab has the services behind a BIG-IP represented as a pool, with one or more application servers.
-3. For **Pool Servers** select an existing node or specify an IP and port for the server hosting the header-based application
+1. Choose from **Select a Pool**. Create a new pool or select one.
+2. Choose the **Load Balancing Method** such as Round Robin.
+3. For **Pool Servers** select a node or specify an IP and port for the server hosting the header-based application.
![Screenshot for Application pool](./media/f5-big-ip-oracle/application-pool.png)
-Our backend application sits on HTTP port 80 but obviously switch to 443 if yours is HTTPS.
+>[!NOTE]
+>Our back-end application sits on HTTP port 80. Switch to 443 if yours is HTTPS.
-#### Single Sign-On & HTTP Headers
+### Single sign-on and HTTP Headers
-Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO, the latter of which we'll enable to configure the following.
+Enabling SSO allows users to access BIG-IP published services without entering credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO, the latter of which we'll enable to configure the following options.
* **Header Operation:** Insert * **Header Name:** upn
Enabling SSO allows users to access BIG-IP published services without having to
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-ldap/sso-headers.png) >[!NOTE]
->APM session variables defined within curly brackets are CASE sensitive. For example, if you enter OrclGUID when the Azure AD attribute name is being defined as orclguid, it will cause an attribute mapping failure
+>APM session variables in curly brackets are case-sensitive. For example, if you enter OrclGUID and the Azure AD attribute name is orclguid, an attribute mapping failure occurs.
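For illustration, the header-to-variable mappings for this scenario might look like the following; the session variable names are assumptions based on the attributes configured earlier, so confirm them against your own session's variables:

```
# Header Name   Header Value (APM session variable)
# upn           %{session.saml.last.identity}
# employeeid    %{session.saml.last.attr.name.employeeid}
# eventroles    %{session.ldap.last.attr.eventroles}
```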
+
+### Session management settings
-### Session Management
+The BIG-IP's session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to the F5 article [K18390492: Security | BIG-IP APM operations guide](https://support.f5.com/csp/article/K18390492) for details on these settings.
-The BIG-IPs session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5's docs](https://support.f5.com/csp/article/K18390492) for details on these settings.
+What isn't covered is Single Log Out (SLO) functionality, which ensures sessions between the IdP, the BIG-IP, and the user agent terminate as users sign out. When the Easy Button instantiates a SAML application in your Azure AD tenant, it populates the sign out URL with the APM SLO endpoint. An IdP-initiated sign out from the Azure AD MyApps portal terminates the session between the BIG-IP and a client.
-What isn't covered here however is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way IdP initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+The SAML federation metadata for the published application is imported from your tenant, which provides the APM with the SAML sign out endpoint for Azure AD. This action ensures an SP-initiated sign out terminates the session between a client and Azure AD. The APM needs to know when a user signs out of the application.
-Along with this the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP initiated sign outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs-out of the application.
+If the BIG-IP webtop portal is used to access published applications, then a sign out is processed by the APM to call the Azure AD sign-out endpoint. But consider a scenario wherein the BIG-IP webtop portal isn't used. The user can't instruct the APM to sign out. Even if the user signs out of the application, the BIG-IP is oblivious. Therefore, SP-initiated sign-out needs consideration to ensure sessions terminate securely. You can add an SLO function to an application's Sign-out button, so it can redirect your client to the Azure AD SAML or BIG-IP sign out endpoint. The URL for the SAML sign out endpoint for your tenant is in **App Registrations > Endpoints**.
-If the BIG-IP webtop portal is used to access published applications then a sign-out from there would be processed by the APM to also call the Azure AD sign-out endpoint. But consider a scenario where the BIG-IP webtop portal isnΓÇÖt used, then the user has no way of instructing the APM to sign out. Even if the user signs-out of the application itself, the BIG-IP is technically oblivious to this. So for this reason, SP initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this would be to add an SLO function to your applications sign out button, so that it can redirect your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+If you can't make a change to the app, then consider having the BIG-IP listen for the application sign-out call, and upon detecting the request have it trigger SLO. Refer to the [Oracle PeopleSoft SLO guidance](./f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) to learn about BIG-IP iRules. For more information about using BIG-IP iRules, see:
-If making a change to the app is a no go, then consider having the BIG-IP listen for the application's sign-out call, and upon detecting the request have it trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](./f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP irules to achieve this. More details on using BIG-IP iRules to achieve this is available in the F5 knowledge article [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+* [K42052145: Configuring automatic session termination (log-out) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145)
+* [K12056: Overview of the Log-out URI Include option](https://support.f5.com/csp/article/K12056)
## Summary
-This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenants list of 'Enterprise applications'.
+This last step provides a breakdown of your configurations.
+
+Select **Deploy** to commit settings and verify the application is in your tenant list of Enterprise applications.
-Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals. For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+Your application should be published and accessible via SHA, either via its URL or through Microsoft application portals. For increased security, organizations using this pattern can block direct access to the application, thereby forcing a strict path through the BIG-IP.
## Next steps
-From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapplications.microsoft.com/). After authenticating against Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+From a browser, connect to the application's external URL or select the application icon in the [Microsoft MyApps portal](https://myapplications.microsoft.com/). After authenticating against Azure AD, you're redirected to the BIG-IP virtual server for the application and signed in through SSO.
-This shows the output of the injected headers displayed by our headers-based application.
+See the following screenshot for output of the injected headers in our headers-based application.
![Screenshot for App views](./media/f5-big-ip-easy-button-ldap/app-view.png)
-For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+For increased security, organizations using this pattern can block direct access to the application, thereby forcing a strict path through the BIG-IP.
## Advanced deployment
-There may be cases where the Guided Configuration templates lacks the flexibility to achieve more specific requirements.
+The Guided Configuration templates can lack flexibility to achieve specific requirements.
-The BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though bulk of your configurations are automated through the wizard-based templates.
+In BIG-IP, you can disable **Guided Configuration's strict management mode**. You can then manually change your configurations, although the bulk of your configurations are automated through the wizard-based templates.
-You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
+You can navigate to **Access > Guided Configuration** and select the small **padlock** icon on the far-right of the row for your application's configurations.
![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
-At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+At this point, changes with the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application are unlocked for direct management.
> [!NOTE]
-> Re-enabling strict mode and deploying a configuration overwrites any settings performed outside of the Guided Configuration UI. We recommend the advanced configuration method for production services.
-
+> Re-enabling strict mode and deploying a configuration overwrites any settings performed outside the Guided Configuration UI. We recommend the advanced configuration method for production services.
## Troubleshooting
-Failure to access a SHA protected application can be due to any number of factors. BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+**BIG-IP logging**
+
+BIG-IP logging can help isolate issues with connectivity, SSO, policy violations, or misconfigured variable mappings.
-1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+To troubleshoot, you can increase the log verbosity level.
-2. Select the row for your published application then **Edit > Access System Logs**
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**.
+2. Select the row for your published application then **Edit > Access System Logs**.
+3. Select **Debug** from the SSO list then **OK**.
-3. Select **Debug** from the SSO list then **OK**
+Reproduce your issue, then inspect the logs, but revert this setting when finished. Verbose mode generates significant amounts of data.
-Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data.
+**BIG-IP error page**
-If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+If a BIG-IP error appears after Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
-1. Navigate to **Access > Overview > Access reports**
+1. Navigate to **Access > Overview > Access reports**.
+2. Run the report for the last hour to see if the logs provide any clues. Use the **View Variables** link for your session to understand if the APM is receiving the expected claims from Azure AD.
-2. Run the report for the last hour to see if the logs provide any clues. The **View session** variables link for your session will also help understand if the APM is receiving the expected claims from Azure AD
+**Back-end request**
-If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+If there's no error page, then the issue is probably related to the back-end request, or SSO from the BIG-IP to the application.
-1. In which case head to **Access Policy > Overview > Active Sessions** and select the link for your active session
+1. Navigate to **Access Policy > Overview > Active Sessions** and select the link for your active session.
+2. Use the **View Variables** link to help root-cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source.
-2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source
+**Validate the APM service account**
-The following command can also be used from the BIG-IP bash shell to validate the APM service account used for LDAP queries and can successfully authenticate and query a user object:
+Use the following command from the BIG-IP bash shell to validate the APM service account used for LDAP queries. Confirm the account can authenticate and query a user object.
```
ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=partners,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
```
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
+For more information, see the F5 article [K11072: Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). For a BIG-IP reference table to help diagnose LDAP-related issues, see the AskF5 document [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
active-directory Migrate Applications From Okta To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-applications-from-okta-to-azure-active-directory.md
Previously updated : 09/01/2021 Last updated : 10/19/2022
In this tutorial, you'll learn how to migrate your applications from Okta to Azu
## Create an inventory of current Okta applications
-Before you begin the migration, you should document the current environment and application settings. You can use the Okta API to collect this information from a centralized location. To use the API, you'll need an API explorer tool such as [Postman](https://www.postman.com/).
+Before migration, document the current environment and application settings. You can use the Okta API to collect this information. Use an API explorer tool such as [Postman](https://www.postman.com/).
-Follow these steps to create an application inventory:
+To create an application inventory:
-1. Install the Postman app. Then generate an API token from the Okta admin console.
-
-1. On the API dashboard, under **Security**, select **Tokens** > **Create Token**.
+1. Install the Postman app, and then generate an API token from the Okta admin console.
+2. On the API dashboard, under **Security**, select **Tokens** > **Create Token**.
![Screenshot that shows the button for creating a token.](media/migrate-applications-from-okta-to-azure-active-directory/token-creation.png)
-1. Insert a token name and then select **Create Token**.
+3. Insert a token name and then select **Create Token**.
![Screenshot that shows where to name the token.](media/migrate-applications-from-okta-to-azure-active-directory/token-created.png)
-1. Record the token value and save it. It won't be accessible after you select **OK, got it**.
+4. Record the token value and save it. It's not accessible after you select **OK, got it**.
![Screenshot that shows the Token Value box.](media/migrate-applications-from-okta-to-azure-active-directory/record-created.png)
-1. In the Postman app, in the workspace, select **Import**.
+5. In the Postman app, in the workspace, select **Import**.
![Screenshot that shows the Import A P I.](media/migrate-applications-from-okta-to-azure-active-directory/import-api.png)
-1. On the **Import** page, select **Link**. Then insert the following link to import the API:
+6. On the **Import** page, select **Link**. Then insert the following link to import the API:
+ `https://developer.okta.com/docs/api/postman/example.oktapreview.com.environment` ![Screenshot that shows the link to import.](media/migrate-applications-from-okta-to-azure-active-directory/link-to-import.png)
- >[!NOTE]
- >Don't modify the link with your tenant values.
+>[!NOTE]
+>Don't modify the link with your tenant values.
-1. Continue by selecting **Import**.
+7. Select **Import**.
![Screenshot that shows the next Import page.](media/migrate-applications-from-okta-to-azure-active-directory/next-import-menu.png)
-1. After the API is imported, change the **Environment** selection to **{yourOktaDomain}**.
-
- :::image type="content" source="media/migrate-applications-from-okta-to-azure-active-directory/change-environment.png" alt-text="Screenshot that shows how to change the environment." lightbox="media/migrate-applications-from-okta-to-azure-active-directory/change-environment.png":::
-
-1. Edit your Okta environment by selecting the eye icon. Then select **Edit**.
+8. After the API is imported, change the **Environment** selection to **{yourOktaDomain}**.
+9. To edit your Okta environment, select the **eye** icon. Then select **Edit**.
![Screenshot that shows how to edit the Okta environment.](media/migrate-applications-from-okta-to-azure-active-directory/edit-environment.png)
-1. Update the values for the URL and API key in the **Initial Value** and **Current Value** fields. Change the name to reflect your environment. Then save the values.
+10. Update the values for the URL and API key in the **Initial Value** and **Current Value** fields. Change the name to reflect your environment.
+11. Save the values.
![Screenshot that shows how to update values for the A P I.](media/migrate-applications-from-okta-to-azure-active-directory/update-values-for-api.png)
-1. [Load the API into Postman](https://app.getpostman.com/run-collection/377eaf77fdbeaedced17).
+12. [Load the API into Postman](https://app.getpostman.com/run-collection/377eaf77fdbeaedced17).
+13. Select **Apps** > **Get List Apps** > **Send**.
-1. Select **Apps** > **Get List Apps** > **Send**.
+>[!NOTE]
+>You can print the applications in your Okta tenant. The list is in JSON format.
- Now you can print all the applications in your Okta tenant. The list is in JSON format.
+ ![Screenshot that shows a list of applications in the Okta tenant.](media/migrate-applications-from-okta-to-azure-active-directory/list-of-applications.png)
- ![Screenshot that shows a list of applications in the Okta tenant.](media/migrate-applications-from-okta-to-azure-active-directory/list-of-applications.png)
+We recommend you copy and convert this JSON list to a CSV format:
-We recommend that you copy and convert this JSON list to a CSV format. You can use a public converter such as [Konklone](https://konklone.io/json/). Or for PowerShell, use [ConvertFrom-Json](/powershell/module/microsoft.powershell.utility/convertfrom-json)
-and [ConvertTo-CSV](/powershell/module/microsoft.powershell.utility/convertto-csv).
+* Use a public converter such as [Konklone](https://konklone.io/json/)
+* Or for PowerShell, use [ConvertFrom-Json](/powershell/module/microsoft.powershell.utility/convertfrom-json) and [ConvertTo-CSV](/powershell/module/microsoft.powershell.utility/convertto-csv), as shown in the sketch after this list
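For example, here's a minimal PowerShell sketch of the ConvertFrom-Json option. The file name okta-apps.json and the selected properties are illustrative assumptions; adjust them to match the JSON your tenant returns.

```powershell
# A minimal sketch, assuming the JSON list returned in Postman was saved as okta-apps.json.
# The selected properties (id, label, name, status, signOnMode) are common fields in the
# Okta /api/v1/apps response; adjust them to the fields you want to record.
$apps = Get-Content -Raw -Path .\okta-apps.json | ConvertFrom-Json
$apps |
    Select-Object id, label, name, status, signOnMode |
    ConvertTo-Csv -NoTypeInformation |
    Set-Content -Path .\okta-apps.csv
```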
-Download the CSV to keep a record of the applications in your Okta tenant for future reference.
+>[!NOTE]
+>Download the CSV to have a record of the applications in your Okta tenant.
## Migrate a SAML application to Azure AD
-To migrate a SAML 2.0 application to Azure AD, first configure the application in your Azure AD tenant for application access. In this example, we'll convert a Salesforce instance. Follow [this tutorial](../saas-apps/salesforce-tutorial.md) to configure the applications.
+To migrate a SAML 2.0 application to Azure AD, configure the application in your Azure AD tenant for application access. In this example, we convert a Salesforce instance. Follow the [Salesforce tutorial](../saas-apps/salesforce-tutorial.md) to configure the applications.
-To complete the migration, repeat the configuration steps for all applications discovered in the Okta tenant.
+To complete the migration, repeat the configuration for all applications in the Okta tenant.
1. In the [Azure AD portal](https://aad.portal.azure.com), select **Azure Active Directory** > **Enterprise applications** > **New application**. ![Screenshot that shows a list of new applications.](media/migrate-applications-from-okta-to-azure-active-directory/list-of-new-applications.png)
-1. In **Azure AD Gallery**, search for **Salesforce**, select the application, and then select **Create**.
+2. In **Azure AD Gallery**, search for **Salesforce**, select the application, and then select **Create**.
![Screenshot that shows the Salesforce application in Azure A D Gallery.](media/migrate-applications-from-okta-to-azure-active-directory/salesforce-application.png)
-1. After the application is created, on the **Single sign-on** (SSO) tab, select **SAML**.
+3. After the application is created, on the **Single sign-on** (SSO) tab, select **SAML**.
![Screenshot that shows the SAML application.](media/migrate-applications-from-okta-to-azure-active-directory/saml-application.png)
-1. Download the **Certificate (Raw)** and **Federation Metadata XML** to import it into Salesforce.
+4. Download the **Certificate (Raw)** and **Federation Metadata XML** to import it into Salesforce.
![Screenshot that shows where to download federation metadata.](media/migrate-applications-from-okta-to-azure-active-directory/federation-metadata.png)
-1. On the Salesforce admin console, select **Identity** > **Single Sign-On Settings** > **New from Metadata File**.
+5. On the Salesforce admin console, select **Identity** > **Single Sign-On Settings** > **New from Metadata File**.
![Screenshot that shows the Salesforce admin console.](media/migrate-applications-from-okta-to-azure-active-directory/salesforce-admin-console.png)
-1. Upload the XML file that you downloaded from the Azure AD portal. Then select **Create**.
-
- :::image type="content" source="media/migrate-applications-from-okta-to-azure-active-directory/upload-xml-file.png" alt-text="Screenshot that shows where to upload the XML file." lightbox="media/migrate-applications-from-okta-to-azure-active-directory/upload-xml-file.png":::
-
-1. Upload the certificate that you downloaded from Azure. Then select **Save** to create the SAML provider in Salesforce.
+6. Upload the XML file you downloaded from the Azure AD portal. Then select **Create**.
+7. Upload the certificate you downloaded from Azure. Select **Save** to create the SAML provider in Salesforce.
![Screenshot that shows how to create the SAML provider in Salesforce.](media/migrate-applications-from-okta-to-azure-active-directory/create-saml-provider.png)
-1. Record the values in the following fields. You'll use these values in Azure.
+8. Record the values in the following fields. You use these values in Azure.
+ * **Entity ID**
+ * **Login URL**
+ * **Logout URL**
- Then select **Download Metadata**.
+9. Select **Download Metadata**.
![Screenshot that shows the values you should record for use in Azure.](media/migrate-applications-from-okta-to-azure-active-directory/record-values-for-azure.png)
-1. On the Azure AD **Enterprise applications** page, in the SAML SSO settings, select **Upload metadata file** to upload the file to the Azure AD portal. Before you save, make sure that the imported values match the recorded values.
+10. On the Azure AD **Enterprise applications** page, in the SAML SSO settings, select **Upload metadata file** to upload the file to the Azure AD portal. Ensure the imported values match the recorded values. Select **Save**.
![Screenshot that shows how to upload the metadata file in Azure A D.](media/migrate-applications-from-okta-to-azure-active-directory/upload-metadata-file.png)
-1. In the Salesforce administration console, select **Company Settings** > **My Domain**. Go to **Authentication Configuration** and then select **Edit**.
+11. In the Salesforce administration console, select **Company Settings** > **My Domain**. Go to **Authentication Configuration** and then select **Edit**.
![Screenshot that shows how to edit company settings.](media/migrate-applications-from-okta-to-azure-active-directory/edit-company-settings.png)
-1. For a sign-in option, select the new SAML provider you configured earlier. Then select **Save**.
+12. For a sign-in option, select the new SAML provider you configured. Select **Save**.
![Screenshot that shows where to save the SAML provider option.](media/migrate-applications-from-okta-to-azure-active-directory/save-saml-provider.png)
-1. In Azure AD, on the **Enterprise applications** page, select **Users and groups**. Then add test users.
+13. In Azure AD, on the **Enterprise applications** page, select **Users and groups**. Then add test users.
![Screenshot that shows added test users.](media/migrate-applications-from-okta-to-azure-active-directory/add-test-user.png)
-1. To test the configuration, sign in as one of the test users. Go to your Microsoft [apps gallery](https://aka.ms/myapps) and then select **Salesforce**.
+14. To test the configuration, sign in as a test user. Go to the Microsoft [apps gallery](https://aka.ms/myapps) and then select **Salesforce**.
![Screenshot that shows how to open Salesforce from the app gallery.](media/migrate-applications-from-okta-to-azure-active-directory/test-user-sign-in.png)
-1. Select the newly configured identity provider (IdP) to sign in.
+15. Select the configured identity provider (IdP) to sign in.
![Screenshot that shows where to sign in.](media/migrate-applications-from-okta-to-azure-active-directory/new-identity-provider.png)
- If everything has been correctly configured, the test user will land on the Salesforce home page. For troubleshooting help, see the [debugging guide](../manage-apps/debug-saml-sso-issues.md).
+>[!NOTE]
+>If configuration is correct, the test user lands on the Salesforce home page. For troubleshooting help, see the [debugging guide](../manage-apps/debug-saml-sso-issues.md).
-1. On the **Enterprise applications** page, assign the remaining users to the Salesforce application with the correct roles.
+16. On the **Enterprise applications** page, assign the remaining users to the Salesforce application with the correct roles.
- >[!NOTE]
- >After you add the remaining users to the Azure AD application, the users should test the connection to ensure they have access. Test the connection before you move on to the next step.
+>[!NOTE]
+>After you add the remaining users to the Azure AD application, users can test the connection to ensure they have access. Test the connection before the next step.
-1. On the Salesforce administration console, select **Company Settings** > **My Domain**.
+17. On the Salesforce administration console, select **Company Settings** > **My Domain**.
-1. Under **Authentication Configuration**, select **Edit**. Clear the selection for **Okta** as an authentication service.
+18. Under **Authentication Configuration**, select **Edit**. For authentication service, clear the selection for **Okta**.
![Screenshot that shows where to clear the selection for Okta as an authentication service.](media/migrate-applications-from-okta-to-azure-active-directory/deselect-okta.png)
-Salesforce is now successfully configured with Azure AD for SSO.
-
-## Migrate an OIDC/OAuth 2.0 application to Azure AD
-
-To migrate an OpenID Connect (OIDC) or OAuth 2.0 application to Azure AD, in your Azure AD tenant, first configure the application for access. In this example, we'll convert a custom OIDC app.
+## Migrate an OpenID Connect or OAuth 2.0 application to Azure AD
-To complete the migration, repeat the following configuration steps for all applications that are discovered in the Okta tenant.
+To migrate an OpenID Connect (OIDC) or OAuth 2.0 application to Azure AD, in your Azure AD tenant, configure the application for access. In this example, we convert a custom OIDC app.
-1. In the [Azure AD portal](https://aad.portal.azure.com), select **Azure Active Directory** > **Enterprise applications**. Under **All applications**, select **New application**.
+To complete the migration, repeat configuration for all applications in the Okta tenant.
-1. Select **Create your own application**. On the menu that appears, name the OIDC app and then select **Register an application you're working on to integrate with Azure AD**. Then select **Create**.
-
- :::image type="content" source="media/migrate-applications-from-okta-to-azure-active-directory/new-oidc-application.png" alt-text="Screenshot that shows how to create an O I D C application." lightbox="media/migrate-applications-from-okta-to-azure-active-directory/new-oidc-application.png":::
-
-1. On the next page, set up the tenancy of your application registration. For more information, see [Tenancy in Azure Active Directory](../develop/single-and-multi-tenant-apps.md).
-
- In this example, we'll choose **Accounts in any organizational directory (Any Azure AD directory - Multitenant)** > **Register**.
+1. In the [Azure AD portal](https://aad.portal.azure.com), select **Azure Active Directory** > **Enterprise applications**.
+2. Under **All applications**, select **New application**.
+3. Select **Create your own application**.
+4. On the menu that appears, name the OIDC app and then select **Register an application you're working on to integrate with Azure AD**.
+5. Select **Create**.
+6. On the next page, set up the tenancy of your application registration. For more information, see [Tenancy in Azure Active Directory](../develop/single-and-multi-tenant-apps.md). Select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**, and then select **Register**.
![Screenshot that shows how to select Azure A D directory multitenant.](media/migrate-applications-from-okta-to-azure-active-directory/multitenant-azure-ad-directory.png)
-1. On the **App registrations** page, under **Azure Active Directory**, open the newly created registration.
+7. On the **App registrations** page, under **Azure Active Directory**, open the created registration.
- Depending on the [application scenario](../develop/authentication-flows-app-scenarios.md), various configuration actions might be needed. Most scenarios require an app client secret, so we'll cover those scenarios.
+>[!NOTE]
+>Depending on the [application scenario](../develop/authentication-flows-app-scenarios.md), there are various configuration actions. Most scenarios require an app client secret.
-1. On the **Overview** page, record the **Application (client) ID**. You'll use this ID in your application.
+8. On the **Overview** page, record the **Application (client) ID**. You use this ID in your application.
![Screenshot that shows the application client I D.](media/migrate-applications-from-okta-to-azure-active-directory/application-client-id.png)
-1. On the left, select **Certificates & secrets**. Then select **New client secret**. Name the client secret and set its expiration.
+9. On the left, select **Certificates & secrets**. Then select **New client secret**. Name the client secret and set its expiration.
![Screenshot that shows the new client secret.](media/migrate-applications-from-okta-to-azure-active-directory/new-client-secret.png)
-1. Record the value and ID of the secret.
-
- >[!NOTE]
- >If you lose the client secret, you can't retrieve it. Instead, you'll need to regenerate a secret.
-
-1. On the left, select **API permissions**. Then grant the application access to the OIDC stack.
+10. Record the value and ID of the secret.
-1. Select **Add permission** > **Microsoft Graph** > **Delegated permissions**.
+>[!NOTE]
+>If you misplace the client secret, you can't retrieve it. Instead, regenerate a secret.
-1. In the **OpenId permissions** section, select **email**, **openid**, and **profile**. Then select **Add permissions**.
-
- :::image type="content" source="media/migrate-applications-from-okta-to-azure-active-directory/add-openid-permission.png" alt-text="Screenshot that shows where to add Open I D permissions." lightbox="media/migrate-applications-from-okta-to-azure-active-directory/add-openid-permission.png":::
-
-1. To improve user experience and suppress user consent prompts, select **Grant admin consent for Tenant Domain Name**. Then wait for the **Granted** status to appear.
+11. On the left, select **API permissions**. Then grant the application access to the OIDC stack.
+12. Select **Add permission** > **Microsoft Graph** > **Delegated permissions**.
+13. In the **OpenId permissions** section, select **email**, **openid**, and **profile**. Then select **Add permissions**.
+14. To improve user experience and suppress user consent prompts, select **Grant admin consent for Tenant Domain Name**. Wait for the **Granted** status to appear.
![Screenshot that shows where to grant admin consent.](media/migrate-applications-from-okta-to-azure-active-directory/grant-admin-consent.png)
-1. If your application has a redirect URI, enter the appropriate URI. If the reply URL targets the **Authentication** tab, followed by **Add a platform** and **Web**, enter the appropriate URL. Select **Access tokens** and **ID tokens**. Then select **Configure**.
-
- :::image type="content" source="media/migrate-applications-from-okta-to-azure-active-directory/configure-tokens.png" alt-text="Screenshot that shows how to configure tokens." lightbox="media/migrate-applications-from-okta-to-azure-active-directory/configure-tokens.png":::
-
- On the **Authentication** menu, under **Advanced settings** and **Allow public client flows**, if necessary, select **Yes**.
+15. If your application has a redirect URI, add it: on the **Authentication** tab, select **Add a platform** > **Web**, then enter the redirect URL.
+16. Select **Access tokens** and **ID tokens**.
+17. Select **Configure**.
+18. If needed, on the **Authentication** menu, under **Advanced settings** and **Allow public client flows**, select **Yes**.
![Screenshot that shows how to allow public client flows.](media/migrate-applications-from-okta-to-azure-active-directory/allow-client-flows.png)
-1. In your OIDC-configured application, import the application ID and client secret before you test. Follow the preceding steps to configure your application with settings such as client ID, secret, and scopes.
+19. In your OIDC-configured application, import the application ID and client secret before you test.
+
+>[!NOTE]
+>Use the previous steps to configure your application with settings such as client ID, secret, and scopes.
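As a quick sanity check of the registration, you can request a token with the OAuth 2.0 client credentials flow. This sketch isn't part of the tutorial's steps, and the tenant ID, client ID, and secret placeholders are assumptions you replace with the values recorded earlier.

```powershell
# A minimal sketch with placeholder values: request a token from the Azure AD v2.0
# token endpoint using the client credentials flow to confirm the client ID and
# secret are valid.
$tenantId = "<tenant-id>"
$body = @{
    client_id     = "<application-client-id>"   # from the Overview page
    client_secret = "<client-secret-value>"     # from Certificates & secrets
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
}
$response = Invoke-RestMethod -Method Post -Body $body `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$response.token_type   # "Bearer" indicates a token was issued
```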
## Migrate a custom authorization server to Azure AD

Okta authorization servers map one-to-one to application registrations that [expose an API](../develop/quickstart-configure-app-expose-web-apis.md#add-a-scope).
-The default Okta authorization server should be mapped to Microsoft Graph scopes or permissions.
+Map the default Okta authorization server to Microsoft Graph scopes or permissions.
![Screenshot that shows the default Okta authorization.](media/migrate-applications-from-okta-to-azure-active-directory/default-okta-authorization.png)
active-directory Silverfort Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
Previously updated : 9/13/2021 Last updated : 10/19/2022 # Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort
-[Silverfort](https://www.silverfort.com/) uses innovative agent-less and proxy-less technology to connect all your assets on-premises and in the cloud to Azure AD. This solution enables organizations to apply identity protection, visibility, and user experience across all environments in Azure AD. It enables universal risk-based monitoring and assessment of authentication activity for on-premises and cloud environments, and proactively prevents threats.
+[Silverfort](https://www.silverfort.com/) uses innovative agent-less and proxy-less technology to connect your assets on-premises and in the cloud to Azure Active Directory (Azure AD). This solution enables organizations to apply identity protection, visibility, and user experience across environments in Azure AD. It enables universal risk-based monitoring and assessment of authentication activity for on-premises and cloud environments, and proactively prevents threats.
-In this tutorial, learn how to integrate your existing on premises Silverfort implementation with Azure Active Directory (Azure AD) for [hybrid access](../devices/concept-azure-ad-join-hybrid.md).
+In this tutorial, learn how to integrate your on-premises Silverfort implementation with Azure AD for [hybrid access](../devices/concept-azure-ad-join-hybrid.md).
-Silverfort seamlessly connects assets with Azure AD. These **bridged** assets appear as regular applications in Azure AD and can be protected with Conditional Access, single-sign-on (SSO), multifactor authentication, auditing and more. Use Silverfort to connect assets including:
+Silverfort connects assets with Azure AD. These bridged assets appear as regular applications in Azure AD and can be protected with Conditional Access, single-sign-on (SSO), multifactor authentication (MFA), auditing and more. Use Silverfort to connect assets including:
- Legacy and homegrown applications
- Remote desktop and Secure Shell (SSH)
- Command-line tools and other admin access
- File shares and databases
- Infrastructure and industrial systems
-Silverfort integrates your corporate assets and third-party Identity and Access Management (IAM) platforms. This includes Active Directory, Active Directory Federation Services (ADFS), and Remote Authentication Dial-In User Service (RADIUS) on Azure AD, including hybrid and multi-cloud environments.
+Silverfort integrates your corporate assets and third-party Identity and Access Management (IAM) platforms. This includes Active Directory, Active Directory Federation Services (ADFS), and Remote Authentication Dial-In User Service (RADIUS) on Azure AD, including hybrid and multicloud environments.
-Follow the steps in this tutorial to configure and test the Silverfort Azure AD bridge in your Azure AD tenant to communicate with your existing Silverfort implementation. Once configured, you can create Silverfort authentication policies that bridge authentication requests from various identity sources to Azure AD for SSO. After an application is bridged, it can be managed in Azure AD.
+Use this tutorial to configure and test the Silverfort Azure AD bridge in your Azure AD tenant to communicate with your Silverfort implementation. After configuration, you can create Silverfort authentication policies that bridge authentication requests from identity sources to Azure AD for SSO. After an application is bridged, you can manage it in Azure AD.
-## Silverfort with Azure AD Authentication Architecture
+## Silverfort with Azure AD authentication architecture
The following diagram describes the authentication architecture orchestrated by Silverfort in a hybrid environment. ![image shows the architecture diagram](./media/silverfort-azure-ad-integration/silverfort-architecture-diagram.png)
-| Step | Description|
-|:|:|
-| 1. | User sends authentication request to the original Identity provider (IdP) through protocols such as Kerberos, SAML, NTLM, OIDC, and LDAP(s).|
-| 2. | The response is routed as-is to Silverfort for validation to check authentication state.|
-| 3. | Silverfort provides visibility, discovery, and bridging to Azure AD.|
-| 4. | If the application is configured as **bridged**, the authentication decision is passed on to Azure AD. Azure AD evaluates Conditional Access policies and validates authentication.|
-| 5. | The authentication state response is then released and sent as-is to the IdP by Silverfort. |
-| 6.| IdP grants or denies access to the resource.|
-| 7. | User is notified if access request is granted or denied. |
+### User flow
+
+1. User sends authentication request to the original Identity provider (IdP) through protocols such as Kerberos, SAML, NTLM, OIDC, and LDAP(s).
+2. The response is routed as-is to Silverfort for validation to check authentication state.
+3. Silverfort provides visibility, discovery, and bridging to Azure AD.
+4. If the application is bridged, the authentication decision is passed to Azure AD. Azure AD evaluates Conditional Access policies and validates authentication.
+5. The authentication state response goes as-is to the IdP by Silverfort.
+6. IdP grants or denies access to the resource.
+7. User is notified if access request is granted or denied.
## Prerequisites
-You must already have Silverfort deployed in your tenant or infrastructure in order to perform this tutorial. To deploy Silverfort in your tenant or infrastructure, [contact Silverfort](https://www.silverfort.com/). You will need to install Silverfort Desktop app on relevant workstations.
+You need Silverfort deployed in your tenant or infrastructure to perform this tutorial. To deploy Silverfort in your tenant or infrastructure, go to [Silverfort](https://www.silverfort.com/). Install the Silverfort Desktop app on relevant workstations.
This tutorial requires you to set up Silverfort Azure AD Adapter in your Azure AD tenant. You'll need:
-- An Azure account with an active subscription. You can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- One of the following roles in your Azure account - Global administrator, Cloud application administrator, Application administrator, or Owner of the service principal.
-- The Silverfort Azure AD Adapter application in the Azure AD gallery is pre-configured to support SSO. You'll need to add Silverfort Azure AD Adapter to your tenant as an Enterprise application from the gallery.
+- An Azure account with an active subscription
+ - You can create an [Azure free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- One of the following roles in your Azure account:
+ - Global Administrator
+ - Cloud Application Administrator
+ - Application Administrator
+ - Service Principal Owner
+- The Silverfort Azure AD Adapter application in the Azure AD gallery is pre-configured to support SSO. Add the Silverfort Azure AD Adapter to your tenant as an Enterprise application, from the gallery.
## Configure Silverfort and create a policy
-1. From a browser, log in to the **Silverfort admin console**.
-
-2. In the main menu, navigate to **Settings** and then scroll to
- **Azure AD Bridge Connector** in the General section. Confirm your tenant ID, and then select **Authorize**.
+1. From a browser, sign in to the Silverfort admin console.
+2. In the main menu, navigate to **Settings** and then scroll to **Azure AD Bridge Connector** in the General section.
+3. Confirm your tenant ID, and then select **Authorize**.
![image shows azure ad bridge connector](./media/silverfort-azure-ad-integration/azure-ad-bridge-connector.png) ![image shows registration confirmation](./media/silverfort-azure-ad-integration/grant-permission.png)
-3. A registration confirmation is shown in a new tab. Close this tab.
+4. A registration confirmation appears in a new tab. Close this tab.
![image shows registration completed](./media/silverfort-azure-ad-integration/registration-completed.png)
-4. In the **Settings** page, select **Save changes**
+5. On the **Settings** page, select **Save Changes**.
![image shows the azure ad adapter](./media/silverfort-azure-ad-integration/silverfort-azure-ad-adapter.png)
- Log in to your Azure AD console. You'll see **Silverfort Azure AD Adapter** application registered as an Enterprise application.
+6. Sign in to your Azure AD console. You'll see **Silverfort Azure AD Adapter** application registered as an Enterprise application.
![image shows enterprise application](./media/silverfort-azure-ad-integration/enterprise-application.png)
-5. In the Silverfort admin console, navigate to the **Policies** page and select **Create Policy**.
-
-6. The **New Policy** dialog will appear. Enter a **Policy Name** that would indicate the application name that will be created in Azure. For example, if you're adding multiple servers or applications under this policy, name it to reflect the resources covered by the policy. In the example, we'll create a policy for the *SL-APP1* server.
+7. In the Silverfort admin console, navigate to the **Policies** page and select **Create Policy**. The **New Policy** dialog appears.
+8. Enter a **Policy Name** that indicates the application name to be created in Azure. For example, if adding multiple servers or applications under this policy, name it to reflect the resources covered by the policy. In the example, we create a policy for the SL-APP1 server.
![image shows define policy](./media/silverfort-azure-ad-integration/define-policy.png)
-7. Select appropriate **Authentication** type, and **Protocol**.
+9. Select the **Authentication** type, and **Protocol**.
-8. In the **Users and Groups** field, select the edit icon to configure users that will be affected by the policy. These users' authentication will be bridged to Azure AD.
+10. In the **Users and Groups** field, select the **edit** icon to configure users affected by the policy. These users' authentication will be bridged to Azure AD.
![image shows user and groups](./media/silverfort-azure-ad-integration/user-groups.png)
-9. Search and select users, groups, or Organization units (OUs).
+11. Search and select users, groups, or Organization Units (OUs).
![image shows search users](./media/silverfort-azure-ad-integration/search-users.png)
- Selected users will be listed in the SELECTED box.
+12. Selected users appear in the **SELECTED** box.
![image shows selected user](./media/silverfort-azure-ad-integration/select-user.png)
-10. Select the **Source** for which the policy will apply. In this example, *All Devices* are selected.
+13. Select the **Source** for which the policy will apply. In this example, All Devices are selected.
![image shows source](./media/silverfort-azure-ad-integration/source.png)
-11. Set the **Destination** to *SL-App1*. You can select the edit button to change or add more resources or groups of resources (optional).
+14. Set the **Destination** to SL-App1. Optional: You can select the **edit** button to change or add more resources or groups of resources.
![image shows destination](./media/silverfort-azure-ad-integration/destination.png)
-12. Select the Action to **AZURE AD BRIDGE**.
+15. For Action, select **AZURE AD BRIDGE**.
![image shows save azure ad bridge](./media/silverfort-azure-ad-integration/save-azure-ad-bridge.png)
-13. Select **SAVE** to save the new policy. You'll be prompted to enable or activate it.
+16. Select **Save** to save the policy. You're prompted to enable or activate it.
![image shows change status](./media/silverfort-azure-ad-integration/change-status.png)
- The policy will appear in the Policies page, in the Azure AD Bridge section:
+17. The policy appears on the Policies page, in the Azure AD Bridge section.
![image shows add policy](./media/silverfort-azure-ad-integration/add-policy.png)
-14. Return to the Azure AD console, and navigate to **Enterprise applications**. The new Silverfort application should now appear. This application can now be included in [Conditional Access policies](../authentication/tutorial-enable-azure-mfa.md?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json%23create-a-conditional-access-policy).
+18. Return to the Azure AD console, and navigate to **Enterprise applications**. The new Silverfort application appears. You can include this application in [Conditional Access policies](../authentication/tutorial-enable-azure-mfa.md?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json%23create-a-conditional-access-policy).
## Next steps

- [Silverfort Azure AD adapter](https://azuremarketplace.microsoft.com/marketplace/apps/aad.silverfortazureadadapter?tab=overview)
- [Silverfort resources](https://www.silverfort.com/resources/)
-- [Contact Silverfort](https://www.silverfort.com/company/contact/)
+- [Silverfort, company contact](https://www.silverfort.com/company/contact/)
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
na Previously updated : 09/12/2022 Last updated : 10/27/2022 -+
When you need to take on an Azure resource role, you can request activation by u
Privileged Identity Management supports Azure Resource Manager (ARM) API commands to manage Azure resource roles, as documented in the [PIM ARM API reference](/rest/api/authorization/roleeligibilityschedulerequests). For the permissions required to use the PIM API, see [Understand the Privileged Identity Management APIs](pim-apis.md).
+To activate an eligible Azure role assignment and gain activated access, use the [Role Assignment Schedule Requests - Create REST API](/rest/api/authorization/role-assignment-schedule-requests/create?tabs=HTTP) to create a new request and specify the security principal, role definition, requestType = SelfActivate and scope. To call this API, you must have an eligible role assignment on the scope.
+
+Use a GUID tool to generate a unique identifier for the role assignment schedule request name. The identifier has the format 00000000-0000-0000-0000-000000000000.
+
+Replace {roleAssignmentScheduleRequestName} in the PUT request below with the GUID you generated.
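For example, a minimal PowerShell sketch; the subscription ID is the placeholder value from the sample request below.

```powershell
# A minimal sketch: generate the GUID used as {roleAssignmentScheduleRequestName}
# and build the request URI for the sample PUT request below.
$requestName = (New-Guid).Guid
$uri = "https://management.azure.com/providers/Microsoft.Subscription/subscriptions/" +
       "dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/" +
       "roleAssignmentScheduleRequests/${requestName}?api-version=2020-10-01"
```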
+
+For more details on managing eligible roles for Azure resources, see this [PIM ARM API tutorial](/rest/api/authorization/privileged-role-assignment-rest-sample?source=docs#activate-an-eligible-role-assignment).
+ The following is a sample HTTP request to activate an eligible assignment for an Azure role.

### Request

````HTTP
-PUT https://management.azure.com/providers/Microsoft.Subscription/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleAssignmentScheduleRequests/fea7a502-9a96-4806-a26f-eee560e52045?api-version=2020-10-01
+PUT https://management.azure.com/providers/Microsoft.Subscription/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleAssignmentScheduleRequests/{roleAssignmentScheduleRequestName}?api-version=2020-10-01
````

### Request body
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
Previously updated : 08/04/2022 Last updated : 10/27/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Your Google Cloud / G Suite Connector by Microsoft application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Google Cloud / G Suite Connector by Microsoft expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
- ![image](common/default-attributes.png)
+ ![image](common/default-attributes.png)
+
+ > [!NOTE]
+ > Ensure that the SAML response doesn't include any non-standard ASCII characters in the DisplayName and Surname attributes.
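As a quick way to check a value before troubleshooting, here's a small sketch; the sample display name is hypothetical.

```powershell
# A minimal sketch, using a hypothetical display name: flag characters outside the
# printable ASCII range (0x20-0x7E) that can break the SAML assertion.
$displayName = "Jörg Müller"
if ($displayName -match '[^\x20-\x7E]') {
    "DisplayName contains non-standard ASCII characters"
}
```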
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
active-directory Workday Inbound Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-tutorial.md
In this section, you will configure how user data flows from Workday to Active D
| **WorkerID** | EmployeeID | **Yes** | Written on create only | | **PreferredNameData** | cn | | Written on create only | | **SelectUniqueValue( Join("\@", Join(".", \[FirstName\], \[LastName\]), "contoso.com"), Join("\@", Join(".", Mid(\[FirstName\], 1, 1), \[LastName\]), "contoso.com"), Join("\@", Join(".", Mid(\[FirstName\], 1, 2), \[LastName\]), "contoso.com"))** | userPrincipalName | | Written on create only
-| `Replace(Mid(Replace(\[UserID\], , "(\[\\\\/\\\\\\\\\\\\\[\\\\\]\\\\:\\\\;\\\\\|\\\\=\\\\,\\\\+\\\\\*\\\\?\\\\&lt;\\\\&gt;\])", , "", , ), 1, 20), , "([\\\\.)\*\$](file:///\\.)*$)", , "", , )` | sAMAccountName | | Written on create only |
+| `Replace(Mid(Replace([UserID], , "([\\/\\\\\\[\\]\\:\\;\\|\\=\\,\\+\\*\\?\\<\\>])", , "", , ), 1, 20), , "(\\.)*$", , "", , )` | sAMAccountName | | Written on create only |
| **Switch(\[Active\], , "0", "True", "1", "False")** | accountDisabled | | Create + update | | **FirstName** | givenName | | Create + update | | **LastName** | sn | | Create + update |
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
- *Can I customize Cilium configuration?*
- No, the Cilium configuration is managed by AKS can't be modified. We recommend that customers who require more control use [AKS BYO CNI](./use-byo-cni.md) and install Cilium manually.
+ No, the Cilium configuration is managed by AKS and can't be modified. We recommend that customers who require more control use [AKS BYO CNI](./use-byo-cni.md) and install Cilium manually.
- *Can I use `CiliumNetworkPolicy` custom resources instead of Kubernetes `NetworkPolicy` resources?*
Learn more about networking in AKS in the following articles:
[aks-ingress-tls]: ingress-tls.md [aks-ingress-static-tls]: ingress-static-ip.md [aks-http-app-routing]: http-application-routing.md
-[aks-ingress-internal]: ingress-internal-ip.md
+[aks-ingress-internal]: ingress-internal-ip.md
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
The maxPod per node setting can be defined when you create a new node pool. If y
When you create an AKS cluster, the following parameters are configurable for Azure CNI networking:
-**Virtual network**: The virtual network into which you want to deploy the Kubernetes cluster. If you want to create a new virtual network for your cluster, select *Create new* and follow the steps in the *Create virtual network* section. For information about the limits and quotas for an Azure virtual network, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).
+**Virtual network**: The virtual network into which you want to deploy the Kubernetes cluster. If you want to create a new virtual network for your cluster, select *Create new* and follow the steps in the *Create virtual network* section. If you want to select an existing virtual network, make sure it is in the same location and Azure subscription as your Kubernetes cluster. For information about the limits and quotas for an Azure virtual network, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).
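For example, a quick PowerShell check before selecting an existing virtual network; the resource names are placeholders.

```powershell
# A minimal sketch with placeholder names: confirm an existing virtual network is in
# the same location and subscription you plan to use for the AKS cluster.
$vnet = Get-AzVirtualNetwork -Name myVnet -ResourceGroupName myResourceGroup
$vnet.Location                      # must match the AKS cluster's location
(Get-AzContext).Subscription.Id     # must be the subscription hosting the cluster
```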
**Subnet**: The subnet within the virtual network where you want to deploy the cluster. If you want to create a new subnet in the virtual network for your cluster, select *Create new* and follow the steps in the *Create subnet* section. For hybrid connectivity, the address range shouldn't overlap with any other virtual networks in your environment.
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
Limitations:
* Using the same route table with multiple AKS clusters isn't supported.

> [!NOTE]
-> To create and use your own VNet and route table with `kubelet` network plugin, you need to use [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. For system-assigned control plane identity, the identity ID cannot be retrieved before creating a cluster, which causes a delay during role assignment.
+> To create and use your own VNet and route table with `kubenet` network plugin, you need to use [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. For system-assigned control plane identity, the identity ID cannot be retrieved before creating a cluster, which causes a delay during role assignment.
+>
> To create and use your own VNet and route table with `azure` network plugin, both system-assigned and user-assigned managed identities are supported. But user-assigned managed identity is more recommended for BYO scenarios. After creating a custom route table and associating it with a subnet in your virtual network, you can create a new AKS cluster specifying your route table with a user-assigned managed identity.
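For example, a minimal sketch of creating the user-assigned identity up front; the resource names and location are placeholders.

```powershell
# A minimal sketch with placeholder names: create the user-assigned control plane
# identity before the cluster, so its resource ID is available for role assignments
# on your route table and subnet before cluster creation.
$identity = New-AzUserAssignedIdentity -ResourceGroupName myResourceGroup `
    -Name myKubenetIdentity -Location eastus
$identity.Id   # pass this resource ID when you create the cluster
```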
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
Last updated 05/18/2022
-#Customer intent: As a cluster operator or developer, I want to use TLS with an ingress controller to handle the flow of incoming traffic and secure my apps using my own or automatically generated certificates
+#Customer intent: As a cluster operator or developer, I want to use TLS with an ingress controller to handle the flow of incoming traffic and secure my apps using my own certificates or automatically generated certificates.
# Use TLS with an ingress controller on Azure Kubernetes Service (AKS)
-Transport layer security (TLS) is a protocol for providing security in communication, such as encryption, authentication, and integrity, by using certificates. Using TLS with an ingress controller on AKS allows you to secure communication between your applications, while also having the benefits of an ingress controller.
+The transport layer security (TLS) protocol uses certificates to provide security for communication, encryption, authentication, and integrity. Using TLS with an ingress controller on AKS allows you to secure communication between your applications and experience the benefits of an ingress controller.
-You can bring your own certificates and integrate them with the Secrets Store CSI driver. Alternatively, you can also use [cert-manager][cert-manager], which is used to automatically generate and configure [Let's Encrypt][lets-encrypt] certificates. Finally, two applications are run in the AKS cluster, each of which is accessible over a single IP address.
+You can bring your own certificates and integrate them with the Secrets Store CSI driver. Alternatively, you can use [cert-manager][cert-manager], which automatically generates and configures [Let's Encrypt][lets-encrypt] certificates. Two applications run in the AKS cluster, each of which is accessible over a single IP address.
> [!NOTE]
-> There are two open source ingress controllers for Kubernetes based on Nginx: one is maintained by the Kubernetes community ([kubernetes/ingress-nginx][nginx-ingress]), and one is maintained by NGINX, Inc. ([nginxinc/kubernetes-ingress]). This article will be using the Kubernetes community ingress controller.
+> There are two open source ingress controllers for Kubernetes based on Nginx: one is maintained by the Kubernetes community ([kubernetes/ingress-nginx][nginx-ingress]), and one is maintained by NGINX, Inc. ([nginxinc/kubernetes-ingress]). This article uses the Kubernetes community ingress controller.
## Before you begin
-This article also assumes that you have an ingress controller and applications set up. If you need an ingress controller or example applications, see [Create an ingress controller][aks-ingress-basic].
+* This article assumes you have an ingress controller and applications set up. If you need an ingress controller or example applications, see [Create an ingress controller][aks-ingress-basic].
-This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you're using the latest release of Helm and have access to the `ingress-nginx` and `jetstack` Helm repositories. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes.
+* This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure you're using the latest release of Helm and have access to the `ingress-nginx` and `jetstack` Helm repositories. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes.
-For more information on configuring and using Helm, see [Install applications with Helm in Azure Kubernetes Service (AKS)][use-helm]. For upgrade instructions, see the [Helm install docs][helm-install].
+ * For more information on configuring and using Helm, see [Install applications with Helm in Azure Kubernetes Service (AKS)][use-helm]. For upgrade instructions, see the [Helm install docs][helm-install].
-### [Azure CLI](#tab/azure-cli)
-
-In addition, this article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
-
-This article also requires that you're running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-In addition, this article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+* This article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
-This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-
+* If you're using Azure PowerShell, this article requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
## Use TLS with your own certificates with Secrets Store CSI Driver
-To use TLS with your own certificates with Secrets Store CSI Driver, you'll need an AKS cluster with the Secrets Store CSI Driver configured, and an Azure Key Vault instance. For more information, see [Set up Secrets Store CSI Driver to enable NGINX Ingress Controller with TLS][aks-nginx-tls-secrets-store].
+To use TLS with your own certificates with Secrets Store CSI Driver, you need an AKS cluster with the Secrets Store CSI Driver configured and an Azure Key Vault instance. For more information, see [Set up Secrets Store CSI Driver to enable NGINX Ingress Controller with TLS][aks-nginx-tls-secrets-store].
## Use TLS with Let's Encrypt certificates
-To use TLS with Let's Encrypt certificates, you'll deploy [cert-manager][cert-manager], which is used to automatically generate and configure [Let's Encrypt][lets-encrypt] certificates.
+To use TLS with [Let's Encrypt][lets-encrypt] certificates, you'll deploy [cert-manager][cert-manager], which automatically generates and configures Let's Encrypt certificates.
### Import the cert-manager images used by the Helm chart into your ACR

### [Azure CLI](#tab/azure-cli)
-Use `az acr import` to import those images into your ACR.
+Use `az acr import` to import the following images into your ACR.
```azurecli REGISTRY_NAME=<REGISTRY_NAME>
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGE
### [Azure PowerShell](#tab/azure-powershell)
-Use `Import-AzContainerRegistryImage` to import those images into your ACR.
+Use `Import-AzContainerRegistryImage` to import the following images into your ACR.
```azurepowershell $RegistryName = "<REGISTRY_NAME>"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageWebhook}:${CertManagerTag}" Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageCaInjector}:${CertManagerTag}" ```+ > [!NOTE]
-> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure Container Registry][acr-helm].
+> In addition to importing container images into your ACR, you can import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure Container Registry][acr-helm].
## Ingress controller configuration options
-By default, an NGINX ingress controller is created with a new public IP address assignment. This public IP address is only static for the life-span of the ingress controller, and is lost if the controller is deleted and re-created.
+An NGINX ingress controller is created with a new public IP address assignment by default. This public IP address is only static for the lifespan of the ingress controller. If you delete the ingress controller, the public IP address assignment will be lost. If you create another ingress controller, a new public IP address will be assigned.
+
+You can configure your ingress controller using one of the following methods:
-You have the option of choosing one of the following methods:
* Using a dynamic public IP address. * Using a static public IP address.
You have the option of choosing one of the following methods:
A common configuration requirement is to provide the NGINX ingress controller an existing static public IP address. The static public IP address remains if the ingress controller is deleted.
-The commands below create an IP address that will be deleted if you delete your AKS cluster.
+Use the following commands to create an IP address that will be deleted if you delete your AKS cluster.
### [Azure CLI](#tab/azure-cli)
- First get the resource group name of the AKS cluster with the [az aks show][az-aks-show] command:
+Get the resource group name of the AKS cluster with the [az aks show][az-aks-show] command.
```azurecli-interactive az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv ```
-Next, create a public IP address with the *static* allocation method using the [az network public-ip create][az-network-public-ip-create] command. The following example creates a public IP address named *myAKSPublicIP* in the AKS cluster resource group obtained in the previous step:
+Next, create a public IP address with the *static* allocation method using the [az network public-ip create][az-network-public-ip-create] command. The following example creates a public IP address named *myAKSPublicIP* in the AKS cluster resource group obtained in the previous step.
```azurecli-interactive az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv
az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eas
### [Azure PowerShell](#tab/azure-powershell)
-First get the resource group name of the AKS cluster with the [Get-AzAksCluster][get-az-aks-cluster] command:
+Get the resource group name of the AKS cluster with the [Get-AzAksCluster][get-az-aks-cluster] command:
```azurepowershell-interactive (Get-AzAksCluster -ResourceGroupName $ResourceGroup -Name myAKSCluster).NodeResourceGroup
Next, create a public IP address with the *static* allocation method using the [
-Alternatively, you can create an IP address in a different resource group, which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the following are true:
-
-* The cluster identity used by the AKS cluster has delegated permissions to the resource group, such as *Network Contributor*.
-* Add the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"="<RESOURCE_GROUP>"` parameter. Replace `<RESOURCE_GROUP>` with the name of the resource group where the IP address resides.
+> [!NOTE]
+> Alternatively, you can create an IP address in a different resource group, which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the following are true:
+>
+> * The cluster identity used by the AKS cluster has delegated permissions to the resource group, such as *Network Contributor*.
+> * Add the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"="<RESOURCE_GROUP>"` parameter. Replace `<RESOURCE_GROUP>` with the name of the resource group where the IP address resides.
+>
-When you update the ingress controller, you must pass a parameter to the Helm release so the ingress controller is made aware of the static IP address of the load balancer to be allocated to the ingress controller service. For the HTTPS certificates to work correctly, a DNS name label is used to configure an FQDN for the ingress controller IP address.
+You must pass a parameter to the Helm release when you upgrade the ingress controller. This ensures that the ingress controller service is made aware of the static IP address of the load balancer that will be allocated to it. For the HTTPS certificates to work correctly, a DNS name label is used to configure a fully qualified domain name (FQDN) for the ingress controller IP address.
-1. Add the `--set controller.service.loadBalancerIP="<EXTERNAL_IP>"` parameter. Specify your own public IP address that was created in the previous step.
1. Add the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="<DNS_LABEL>"` parameter. The DNS label can be set either when the ingress controller is first deployed, or it can be configured later.
+2. Add the `--set controller.service.loadBalancerIP="<STATIC_IP>"` parameter. Specify your own public IP address that was created in the previous step.
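
For illustration, a minimal sketch of the combined upgrade command, assuming the controller was installed as the Helm release *nginx-ingress* in the *ingress-basic* namespace; `<STATIC_IP>` and `<DNS_LABEL>` are placeholders for your own values:

```console
# Re-run the Helm upgrade with both parameters so the service gets the static IP and DNS label
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="<DNS_LABEL>" \
  --set controller.service.loadBalancerIP="<STATIC_IP>"
```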
### [Azure CLI](#tab/azure-cli)
For more information, see [Use a static public IP address and DNS label with the
## Use a dynamic IP address
-When the ingress controller is created, an Azure public IP address is created for the ingress controller. This public IP address is static for the life-span of the ingress controller. If you delete the ingress controller, the public IP address assignment is lost. If you then create another ingress controller, a new public IP address is assigned.
+An Azure public IP address is created for the ingress controller upon creation. This public IP address is static for the lifespan of the ingress controller. If you delete the ingress controller, the public IP address assignment will be lost. If you create another ingress controller, a new public IP address will be assigned.
To get the public IP address, use the `kubectl get service` command.
To get the public IP address, use the `kubectl get service` command.
kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller ```
-The example output shows the details about the ingress controller:
+The example output shows the details about the ingress controller.
-```
+```console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.133 EXTERNAL_IP 80:32486/TCP,443:30953/TCP 44s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx ```
-If you're using a custom domain, you'll need to add an A record to your DNS zone. Otherwise, you'll need to configure the public IP address with a fully qualified domain name (FQDN).
- ### Add an A record to your DNS zone
+If you're using a custom domain, you need to add an A record to your DNS zone. Otherwise, you need to configure the public IP address with an FQDN.
+ ### [Azure CLI](#tab/azure-cli) Add an *A* record to your DNS zone with the external IP address of the NGINX service using [az network dns record-set a add-record][az-network-dns-record-set-a-add-record].
New-AzDnsRecordSet -Name "*" `
### Configure an FQDN for the ingress controller
-Optionally, you can configure an FQDN for the ingress controller IP address instead of a custom domain. Your FQDN will be of the form `<CUSTOM LABEL>.<AZURE REGION NAME>.cloudapp.azure.com`.
+Optionally, you can configure an FQDN for the ingress controller IP address instead of a custom domain. Your FQDN will be of the form `<CUSTOM LABEL>.<AZURE REGION NAME>.cloudapp.azure.com`. You can configure it using one of the following methods:
-#### Method 1: Set the DNS label using the Azure CLI
-This sample is for a Bash shell.
+* Setting the DNS label using the Azure CLI or Azure PowerShell
+* Setting the DNS label using Helm chart settings
+
+#### Method 1: Set the DNS label using the Azure CLI or Azure PowerShell
### [Azure CLI](#tab/azure-cli)
IP="MY_EXTERNAL_IP"
# Name to associate with public IP address DNSNAME="demo-aks-ingress"
-# Get the resource-id of the public ip
+# Get the resource-id of the public IP
PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)
-# Update public ip address with DNS name
+# Update public IP address with DNS name
az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME # Display the FQDN
az network public-ip show --ids $PUBLICIPID --query "[dnsSettings.fqdn]" --outpu
# Public IP address of your ingress controller $AksIpAddress = "MY_EXTERNAL_IP"
-# Get the Public IP Address for the ingress controller
+# Get the public IP address for the ingress controller
$PublicIp = Get-AzPublicIpAddress | Where-Object {$_.IpAddress -eq $AksIpAddress}
-# Update public ip address with DNS name
+# Update public IP address with DNS name
$PublicIp.DnsSettings = @{"DomainNameLabel" = "demo-aks-ingress"} $UpdatedPublicIp = Set-AzPublicIpAddress -PublicIpAddress $publicIp
Write-Output $UpdatedPublicIp.DnsSettings.Fqdn
-#### Method 2: Set the DNS label using helm chart settings
-You can pass an annotation setting to your helm chart configuration by using the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"` parameter. This parameter can be set either when the ingress controller is first deployed, or it can be configured later.
+#### Method 2: Set the DNS label using Helm chart settings
+
+You can pass an annotation setting to your Helm chart configuration by using the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"` parameter. This parameter can be set either when the ingress controller is first deployed, or it can be configured later.
+ The following example shows how to update this setting after the controller has been deployed. ### [Azure CLI](#tab/azure-cli)
NAMESPACE="ingress-basic"
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \ --namespace $NAMESPACE \ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL- ``` ### [Azure PowerShell](#tab/azure-powershell)
$Namespace = "ingress-basic"
helm upgrade nginx-ingress ingress-nginx/ingress-nginx ` --namespace $Namespace ` --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel- ``` ## Install cert-manager
-The NGINX ingress controller supports TLS termination. There are several ways to retrieve and configure certificates for HTTPS. This article demonstrates using [cert-manager][cert-manager], which provides automatic [Lets Encrypt][lets-encrypt] certificate generation and management functionality.
+The NGINX ingress controller supports TLS termination. There are several ways to retrieve and configure certificates for HTTPS. This article uses [cert-manager][cert-manager], which provides automatic [Let's Encrypt][lets-encrypt] certificate generation and management functionality.
-To install the cert-manager controller:
+To install the cert-manager controller, use the following commands.
### [Azure CLI](#tab/azure-cli)
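As an illustration, a minimal sketch of a cert-manager installation with Helm, assuming the Jetstack chart repository and the image names imported into your ACR earlier; `$ACR_URL` and `$CERT_MANAGER_TAG` are placeholders for your registry login server and chart version:

```console
# Add the Jetstack Helm repository and refresh the local chart cache
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install cert-manager with its CRDs, pulling the images from your ACR
helm install cert-manager jetstack/cert-manager \
  --namespace ingress-basic \
  --version $CERT_MANAGER_TAG \
  --set installCRDs=true \
  --set image.repository=$ACR_URL/jetstack/cert-manager-controller \
  --set image.tag=$CERT_MANAGER_TAG \
  --set webhook.image.repository=$ACR_URL/jetstack/cert-manager-webhook \
  --set webhook.image.tag=$CERT_MANAGER_TAG \
  --set cainjector.image.repository=$ACR_URL/jetstack/cert-manager-cainjector \
  --set cainjector.image.tag=$CERT_MANAGER_TAG
```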
For more information on cert-manager configuration, see the [cert-manager projec
## Create a CA cluster issuer
-Before certificates can be issued, cert-manager requires an [Issuer][cert-manager-issuer] or [ClusterIssuer][cert-manager-cluster-issuer] resource. These Kubernetes resources are identical in functionality, however `Issuer` works in a single namespace, and `ClusterIssuer` works across all namespaces. For more information, see the [cert-manager issuer][cert-manager-issuer] documentation.
+Before certificates can be issued, cert-manager requires one of the following:
+
+* An [Issuer][cert-manager-issuer] resource, which works in a single namespace.
+* A [ClusterIssuer][cert-manager-cluster-issuer] resource, which works across all namespaces.
-Create a cluster issuer, such as `cluster-issuer.yaml`, using the following example manifest. Update the email address with a valid address from your organization:
+For more information, see the [cert-manager issuer][cert-manager-issuer] documentation.
+
+Create a cluster issuer, such as `cluster-issuer.yaml`, using the following example manifest. Replace `MY_EMAIL_ADDRESS` with a valid address from your organization.
```yaml apiVersion: cert-manager.io/v1
kubectl apply -f cluster-issuer.yaml
## Update your ingress routes
-You'll need to update your ingress routes to handle traffic to your FQDN or custom domain.
+You need to update your ingress routes to handle traffic to your FQDN or custom domain.
+
+In the following example, traffic is routed as such:
-In the following example, traffic to the address *hello-world-ingress.MY_CUSTOM_DOMAIN* is routed to the *aks-helloworld-one* service. Traffic to the address *hello-world-ingress.MY_CUSTOM_DOMAIN/hello-world-two* is routed to the *aks-helloworld-two* service. Traffic to *hello-world-ingress.MY_CUSTOM_DOMAIN/static* is routed to the service named *aks-helloworld-one* for static assets.
+* Traffic to *hello-world-ingress.MY_CUSTOM_DOMAIN* is routed to the *aks-helloworld-one* service.
+* Traffic to *hello-world-ingress.MY_CUSTOM_DOMAIN/hello-world-two* is routed to the *aks-helloworld-two* service.
+* Traffic to *hello-world-ingress.MY_CUSTOM_DOMAIN/static* is routed to the service named *aks-helloworld-one* for static assets.
> [!NOTE]
-> If you configured an FQDN for the ingress controller IP address instead of a custom domain, use the FQDN instead of *hello-world-ingress.MY_CUSTOM_DOMAIN*. For example if your FQDN is *demo-aks-ingress.eastus.cloudapp.azure.com*, replace *hello-world-ingress.MY_CUSTOM_DOMAIN* with *demo-aks-ingress.eastus.cloudapp.azure.com* in `hello-world-ingress.yaml`.
+> If you configured an FQDN for the ingress controller IP address instead of a custom domain, use the FQDN instead of *hello-world-ingress.MY_CUSTOM_DOMAIN*.
+>
+> For example, if your FQDN is *demo-aks-ingress.eastus.cloudapp.azure.com*, replace *hello-world-ingress.MY_CUSTOM_DOMAIN* with *demo-aks-ingress.eastus.cloudapp.azure.com* in `hello-world-ingress.yaml`.
+>
-Create or update the `hello-world-ingress.yaml` file using below example YAML. Update the `spec.tls.hosts` and `spec.rules.host` to the DNS name you created in a previous step.
+Create or update the `hello-world-ingress.yaml` file using the following example YAML file. Update the `spec.tls.hosts` and `spec.rules.host` to the DNS name you created in a previous step.
```yaml apiVersion: networking.k8s.io/v1
kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
## Verify a certificate object has been created
-Next, a certificate resource must be created. The certificate resource defines the desired X.509 certificate. For more information, see [cert-manager certificates][cert-manager-certificates]. Cert-manager has automatically created a certificate object for you using ingress-shim, which is automatically deployed with cert-manager since v0.2.2. For more information, see the [ingress-shim documentation][ingress-shim].
+Next, a certificate resource must be created. The certificate resource defines the desired X.509 certificate. For more information, see [cert-manager certificates][cert-manager-certificates]. Cert-manager automatically creates a certificate object for you using ingress-shim, which is automatically deployed with cert-manager since v0.2.2. For more information, see the [ingress-shim documentation][ingress-shim].
-To verify that the certificate was created successfully, use the `kubectl get certificate --namespace ingress-basic` command and verify *READY* is *True*, which may take several minutes.
+To verify that the certificate was created successfully, use the `kubectl get certificate --namespace ingress-basic` command and verify *READY* is *True*. This may take several minutes.
```console kubectl get certificate --namespace ingress-basic ```
-The example output below shows the certificate's status:
+The following output shows the certificate's status.
``` NAME READY SECRET AGE
tls-secret True tls-secret 11m
## Test the ingress configuration
-Open a web browser to *hello-world-ingress.MY_CUSTOM_DOMAIN* or the FQDN of your Kubernetes ingress controller. Notice you're redirected to use HTTPS and the certificate is trusted and the demo application is shown in the web browser. Add the */hello-world-two* path and notice the second demo application with the custom title is shown.
+Open a web browser to *hello-world-ingress.MY_CUSTOM_DOMAIN* or the FQDN of your Kubernetes ingress controller. Ensure the following are true:
+
+* You're redirected to use HTTPS.
+* The certificate is *trusted*.
+* The demo application is shown in the web browser.
+* Add */hello-world-two* to the end of the domain and ensure the second demo application with the custom title is shown.
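
As a quick check from the command line, a minimal `curl` sketch, assuming your DNS name resolves to the ingress controller's public IP address:

```console
# Expect an HTTP 301/308 redirect to HTTPS
curl -I http://hello-world-ingress.MY_CUSTOM_DOMAIN

# Expect HTTP 200 over TLS with the issued certificate
curl -I https://hello-world-ingress.MY_CUSTOM_DOMAIN
```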
## Clean up resources
-This article used Helm to install the ingress components, certificates, and sample apps. When you deploy a Helm chart, many Kubernetes resources are created. These resources include pods, deployments, and services. To clean up these resources, you can either delete the entire sample namespace, or the individual resources.
+This article used Helm to install the ingress components, certificates, and sample apps. When you deploy a Helm chart, many Kubernetes resources are created. These resources include pods, deployments, and services. To clean up these resources, you can either delete the entire sample namespace or the individual resources.
### Delete the sample namespace and all resources
kubectl delete namespace ingress-basic
### Delete resources individually
-Alternatively, a more granular approach is to delete the individual resources created. First, remove the cluster issuer resources:
+Alternatively, you can delete the resources individually. First, remove the cluster issuer resources.
```console kubectl delete -f cluster-issuer.yaml --namespace ingress-basic ```
-List the Helm releases with the `helm list` command. Look for charts named *nginx* and *cert-manager*, as shown in the following example output:
+List the Helm releases with the `helm list` command. Look for charts named *nginx* and *cert-manager*, as shown in the following example output.
```console $ helm list --namespace ingress-basic
release "cert-manager" uninstalled
release "nginx" uninstalled ```
-Next, remove the two sample applications:
+Next, remove the two sample applications.
```console kubectl delete -f aks-helloworld-one.yaml --namespace ingress-basic kubectl delete -f aks-helloworld-two.yaml --namespace ingress-basic ```
-Remove the ingress route that directed traffic to the sample apps:
+Remove the ingress route that directed traffic to the sample apps.
```console kubectl delete -f hello-world-ingress.yaml --namespace ingress-basic ```
-Finally, you can delete the itself namespace. Use the `kubectl delete` command and specify your namespace name:
+Finally, you can delete the namespace itself. Use the `kubectl delete` command and specify your namespace name.
```console kubectl delete namespace ingress-basic
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
spec:
The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com`. Creating an ingress object with this class activates the add-on. To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command. ```azurecli-interactive
-az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> query "id" --output tsv
+az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
``` Create a file named **ingress.yaml** and copy in the following YAML.
spec:
The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com`. Creating an ingress object with this class activates the add-on. To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command. ```azurecli-interactive
-az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> query "id" --output tsv
+az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
``` Create a file named **ingress.yaml** and copy in the following YAML.
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
az webapp connection create postgres \
--target-resource-group $RESOURCE_GROUP \ --server $POSTGRESQL_HOST \ --database $DATABASE_NAME \
- --system-assigned-identity
+ --system-identity
``` This command creates a connection between your web app and your PostgreSQL server, and manages authentication through a system-assigned managed identity.
az webapp browse \
Learn more about running Java apps on App Service on Linux in the developer guide. > [!div class="nextstepaction"]
-> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
+> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
application-gateway Ingress Controller Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-annotations.md
Previously updated : 3/18/2022 Last updated : 10/26/2022
-# Annotations for Application Gateway Ingress Controller
+# Annotations for Application Gateway Ingress Controller
-## Introductions
-
-The Kubernetes Ingress resource can be annotated with arbitrary key/value pairs. AGIC relies on annotations to program Application Gateway features, which are not configurable via the Ingress YAML. Ingress annotations are applied to all HTTP setting, backend pools, and listeners derived from an ingress resource.
+The Kubernetes Ingress resource can be annotated with arbitrary key/value pairs. AGIC relies on annotations to program Application Gateway features, which aren't configurable using the Ingress YAML. Ingress annotations are applied to all HTTP settings, backend pools, and listeners derived from an ingress resource.
## List of supported annotations
-For an Ingress resource to be observed by AGIC, it **must be annotated** with `kubernetes.io/ingress.class: azure/application-gateway`. Only then AGIC will work with the Ingress resource in question.
+For an Ingress resource to be observed by AGIC, it **must be annotated** with `kubernetes.io/ingress.class: azure/application-gateway`. Only then does AGIC work with the Ingress resource in question.
-| Annotation Key | Value Type | Default Value | Allowed Values
+| Annotation Key | Value Type | Default Value | Allowed Values |
| -- | -- | -- | -- |
-| [appgw.ingress.kubernetes.io/backend-path-prefix](#backend-path-prefix) | `string` | `nil` | |
+| [appgw.ingress.kubernetes.io/backend-path-prefix](#backend-path-prefix) | `string` | `nil` ||
| [appgw.ingress.kubernetes.io/ssl-redirect](#tls-redirect) | `bool` | `false` | |
-| [appgw.ingress.kubernetes.io/connection-draining](#connection-draining) | `bool` | `false` | |
-| [appgw.ingress.kubernetes.io/connection-draining-timeout](#connection-draining) | `int32` (seconds) | `30` | |
-| [appgw.ingress.kubernetes.io/cookie-based-affinity](#cookie-based-affinity) | `bool` | `false` | |
-| [appgw.ingress.kubernetes.io/request-timeout](#request-timeout) | `int32` (seconds) | `30` | |
-| [appgw.ingress.kubernetes.io/use-private-ip](#use-private-ip) | `bool` | `false` | |
+| [appgw.ingress.kubernetes.io/connection-draining](#connection-draining) | `bool` | `false` ||
+| [appgw.ingress.kubernetes.io/connection-draining-timeout](#connection-draining) | `int32` (seconds) | `30` ||
+| [appgw.ingress.kubernetes.io/cookie-based-affinity](#cookie-based-affinity) | `bool` | `false` ||
+| [appgw.ingress.kubernetes.io/request-timeout](#request-timeout) | `int32` (seconds) | `30` ||
+| [appgw.ingress.kubernetes.io/use-private-ip](#use-private-ip) | `bool` | `false` ||
| [appgw.ingress.kubernetes.io/backend-protocol](#backend-protocol) | `string` | `http` | `http`, `https` |
-| [appgw.ingress.kubernetes.io/rewrite-rule-set](#rewrite-rule-set) | `string` | `nil` | |
+| [appgw.ingress.kubernetes.io/rewrite-rule-set](#rewrite-rule-set) | `string` | `nil` ||
## Backend Path Prefix
-This annotation allows the backend path specified in an ingress resource to be rewritten with prefix specified in this annotation. This allows users to expose services whose endpoints are different than endpoint names used to expose a service in an ingress resource.
+The following annotation allows the backend path specified in an ingress resource to be rewritten with the prefix specified in this annotation. It allows users to expose services whose endpoints are different from the endpoint names used to expose a service in an ingress resource.
### Usage
appgw.ingress.kubernetes.io/backend-path-prefix: <path prefix>
### Example ```yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress metadata: name: go-server-ingress-bkprefix
spec:
- http: paths: - path: /hello/
+ pathType: Exact
backend:
- serviceName: go-server-service
- servicePort: 80
+ service:
+ name: go-server-service
+ port:
+ number: 80
```
-In the example above, we have defined an ingress resource named `go-server-ingress-bkprefix` with an annotation `appgw.ingress.kubernetes.io/backend-path-prefix: "/test/"` . The annotation tells application gateway to create an HTTP setting, which will have a path prefix override for the path `/hello` to `/test/`.
-> [!NOTE]
-> In the above example we have only one rule defined. However, the annotations are applicable to the entire ingress resource, so if a user had defined multiple rules, the backend path prefix would be set up for each of the paths specified. Thus, if a user wants different rules with different path prefixes (even for the same service) they would need to define different ingress resources.
+In the previous example, you've defined an ingress resource named `go-server-ingress-bkprefix` with an annotation `appgw.ingress.kubernetes.io/backend-path-prefix: "/test/"`. The annotation tells Application Gateway to create an HTTP setting, which has a path prefix override for the path `/hello` to `/test/`.
+
+> [!NOTE]
+> In the above example, only one rule is defined. However, the annotations are applicable to the entire ingress resource, so if a user defined multiple rules, the backend path prefix would be set up for each of the paths specified. If a user wants different rules with different path prefixes (even for the same service), they would need to define different ingress resources.
## TLS Redirect Application Gateway [can be configured](./redirect-overview.md) to automatically redirect HTTP URLs to their HTTPS counterparts. When this annotation is present and TLS is properly configured, Kubernetes Ingress
-controller will create a [routing rule with a redirection configuration](./redirect-http-to-https-portal.md#add-a-routing-rule-with-a-redirection-configuration)
-and apply the changes to your Application Gateway. The redirect created will be HTTP `301 Moved Permanently`.
+controller creates a [routing rule with a redirection configuration](./redirect-http-to-https-portal.md#add-a-routing-rule-with-a-redirection-configuration)
+and applies the changes to your Application Gateway. The redirect created will be HTTP `301 Moved Permanently`.
### Usage
appgw.ingress.kubernetes.io/ssl-redirect: "true"
### Example ```yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress metadata: name: go-server-ingress-redirect
spec:
http: paths: - backend:
- serviceName: websocket-repeater
- servicePort: 80
+ service:
+ name: websocket-repeater
+ port:
+ number: 80
``` ## Connection Draining
-`connection-draining`: This annotation allows users to specify whether to enable connection draining.
-`connection-draining-timeout`: This annotation allows users to specify a timeout after which Application Gateway will terminate the requests to the draining backend endpoint.
+`connection-draining`: This annotation allows you to specify whether to enable connection draining.
+`connection-draining-timeout`: This annotation allows you to specify a timeout, after which Application Gateway terminates the requests to the draining backend endpoint.
### Usage
appgw.ingress.kubernetes.io/connection-draining-timeout: "60"
### Example ```yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress metadata: name: go-server-ingress-drain
spec:
- http: paths: - path: /hello/
+ pathType: Exact
backend:
- serviceName: go-server-service
- servicePort: 80
+ service:
+ name: go-server-service
+ port:
+ number: 80
``` ## Cookie Based Affinity
-This annotation allows to specify whether to enable cookie based affinity.
+The following annotation allows you to specify whether to enable cookie-based affinity.
### Usage
appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
### Example ```yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress metadata: name: go-server-ingress-affinity
spec:
- http: paths: - path: /hello/
+ pathType: Exact
backend:
- serviceName: go-server-service
- servicePort: 80
+ service:
+ name: go-server-service
+ port:
+ number: 80
``` ## Request Timeout
-This annotation allows to specify the request timeout in seconds after which Application Gateway will fail the request if response is not received.
+The following annotation allows you to specify the request timeout in seconds, after which Application Gateway fails the request if a response isn't received.
### Usage
appgw.ingress.kubernetes.io/request-timeout: "20"
### Example ```yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress metadata: name: go-server-ingress-timeout
spec:
- http: paths: - path: /hello/
+ pathType: Exact
backend:
- serviceName: go-server-service
- servicePort: 80
+ service:
+ name: go-server-service
+ port:
+ number: 80
``` ## Use Private IP
-This annotation allows us to specify whether to expose this endpoint on Private IP of Application Gateway.
+The following annotation allows you to specify whether to expose this endpoint on the private IP of Application Gateway.
> [!NOTE]
-> * Application Gateway doesn't support multiple IPs on the same port (example: 80/443). Ingress with annotation `appgw.ingress.kubernetes.io/use-private-ip: "false"` and another with `appgw.ingress.kubernetes.io/use-private-ip: "true"` on `HTTP` will cause AGIC to fail in updating the Application Gateway.
-> * For Application Gateway that doesn't have a private IP, Ingresses with `appgw.ingress.kubernetes.io/use-private-ip: "true"` will be ignored. This will reflected in the controller logs and ingress events for those ingresses with `NoPrivateIP` warning.
-
+> * Application Gateway doesn't support multiple IPs on the same port (example: 80/443). Ingress with annotation `appgw.ingress.kubernetes.io/use-private-ip: "false"` and another with `appgw.ingress.kubernetes.io/use-private-ip: "true"` on `HTTP` will cause AGIC to fail while updating the Application Gateway.
+> * For an Application Gateway that doesn't have a private IP, Ingresses with `appgw.ingress.kubernetes.io/use-private-ip: "true"` are ignored. This is reflected in the controller logs and in ingress events for those ingresses with a `NoPrivateIP` warning.
### Usage+ ```yaml appgw.ingress.kubernetes.io/use-private-ip: "true" ``` ### Example+ ```yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress metadata: name: go-server-ingress-timeout
spec:
- http: paths: - path: /hello/
+ pathType: Exact
backend:
- serviceName: go-server-service
- servicePort: 80
+ service:
+ name: go-server-service
+ port:
+ number: 80
``` ## Backend Protocol
-This annotation allows us to specify the protocol that Application Gateway should use while talking to the Pods. Supported Protocols: `http`, `https`
+The following annotation allows you to specify the protocol that Application Gateway should use while communicating with the pods. Supported protocols are `http` and `https`.
> [!NOTE]
-> * While self-signed certificates are supported on Application Gateway, currently, AGIC only support `https` when Pods are using certificate signed by a well-known CA.
-> * Make sure to not use port 80 with HTTPS and port 443 with HTTP on the Pods.
+> While self-signed certificates are supported on Application Gateway, currently AGIC only supports `https` when pods are using a certificate signed by a well-known CA.
+>
+> Don't use port 80 with HTTPS and port 443 with HTTP on the pods.
### Usage+ ```yaml appgw.ingress.kubernetes.io/backend-protocol: "https" ``` ### Example+ ```yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress metadata: name: go-server-ingress-timeout
spec:
- http: paths: - path: /hello/
+ pathType: Exact
backend:
- serviceName: go-server-service
- servicePort: 443
+ service:
+ name: go-server-service
+ port:
+ number: 443
``` ## Rewrite Rule Set
-This annotation allows you to assign an existing rewrite rule set to the corresponding request routing rule.
+The following annotation allows you to assign an existing rewrite rule set to the corresponding request routing rule.
### Usage
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Title: Form Recognizer general document model
+ Title: General key-value extraction - Form Recognizer
-description: Concepts related to data extraction and analysis using prebuilt general document v3.0 model
+description: Extract key-value pairs, tables, selection marks, and text from your documents with Form Recognizer
recommendations: false
<!-- markdownlint-disable MD033 -->
-# Form Recognizer general document model
+# General key-value extraction with General Document model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Previously updated : 10/14/2022 Last updated : 10/27/2022
-monikerRange: '>=form-recog-2.1.0'
recommendations: false+ <!-- markdownlint-disable MD033 --> # Identity document (ID) processing + ## What is identity document (ID) processing
The Form Recognizer Identity document (ID) model combines Optical Character Reco
## Development options The following tools are supported by Form Recognizer v3.0: | Feature | Resources | Model ID | |-|-|--| |**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-idDocument**|
-### Try Identity document (ID) extraction
The following tools are supported by Form Recognizer v2.1: | Feature | Resources | |-|-| |**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>| Extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio. You'll need the following resources:
Extract data, including name, birth date, machine-readable zone, and expiration
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio
+
+#### Form Recognizer Studio
> [!NOTE] > Form Recognizer studio is available with the v3.0 API (API version 2022-08-31 generally available (GA) release)
Extract data, including name, birth date, machine-readable zone, and expiration
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)++
+#### Form Recognizer sample labeling tool
+
+1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select **Use prebuilt model to get data**.
+
+ :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Analyze results of Form Recognizer Layout":::
+
+1. Select the **Form Type** to analyze from the dropdown window.
+
+1. Choose a URL for the file you would like to analyze from the below options:
+
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
+ * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
+ * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
+ * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
+
+1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
+
+1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
+
+ :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot: select form type dropdown window.":::
+
+1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
+
+1. View the results: the extracted key-value pairs, line items, highlighted extracted text, and detected tables.
+
+ :::image type="content" source="media/id-example-drivers-license.jpg" alt-text="Analyze Results of Form Recognizer invoice model":::
+
+1. Download the JSON output file to view the detailed results.
+
+ * The "readResults" node contains every line of text with its respective bounding box placement on the page.
+ * The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected".
+ * The "pageResults" section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted.
+ * The "documentResults" field contains key/value pairs information and line items information for the most relevant parts of the document.
## Input requirements
Extract data, including name, birth date, machine-readable zone, and expiration
| Model | LanguageΓÇöLocale code | Default | |--|:-|:|
-|ID document| <ul><li>English (United States)ΓÇöen-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li><li>English (United States)ΓÇöen-US (state ID)</li><li>English (United States)ΓÇöen-US (social security card)</li><li>English (United States)ΓÇöen-US (Residence permit card)</li></ul></br>|English (United States)ΓÇöen-US|
+|ID document| <ul><li>English (United States)ΓÇöen-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li><li>English (United States)ΓÇöen-US (state ID)</li><li>English (United States)ΓÇöen-US (social security card)</li><li>English (United States)ΓÇöen-US (permanent resident card)</li></ul></br>|English (United States)ΓÇöen-US|
+ ## Field extractions
-Below are the fields extracted per document type. The Azure Form Recognizer ID model `prebuilt-idDocument` extracts the below fields in the `documents.*.fields`. It also extracts all the text in the documents, words, lines and styles which will be included in the JSON output in the different sections.
- * `pages.*.words`
- * `pages.*.lines`
- * `paragraphs`
- * `styles`
- * `documents`
- * `documents.*.fields`
+Below are the fields extracted per document type. The Azure Form Recognizer ID model `prebuilt-idDocument` extracts the fields below in `documents.*.fields`. It also extracts all the text in the documents; the words, lines, and styles are included in the JSON output in the following sections, as shown in the sketch after this list.
+
+* `pages.*.words`
+* `pages.*.lines`
+* `paragraphs`
+* `styles`
+* `documents`
+* `documents.*.fields`
+
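+For illustration, a minimal `jq` sketch pulling the per-document fields out of a v3.0 analyze result, assuming the JSON response was saved to a hypothetical `result.json`:
+
+```console
+# documents[].fields holds the typed fields for each detected document
+jq '.analyzeResult.documents[] | {docType, fields}' result.json
+```
+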
+### Document type - `idDocument.driverLicense` fields extracted
-#### Document type - `idDocument.driverLicense` fields extracted:
| Field | Type | Description | Example | |:|:--|:|:--| |`CountryRegion`|`countryRegion`|Country or region code|USA|
Below are the fields extracted per document type. The Azure Form Recognizer ID m
|`Restrictions`|`string`|Restrictions|B| |`VehicleClassifications`|`string`|Vehicle classification|D|
-#### Document type - `idDocument.passport` fields extracted:
+### Document type - `idDocument.passport` fields extracted
+ | Field | Type | Description | Example | |:|:--|:|:--| |`DocumentNumber`|`string`|Passport number|340020013|
Below are the fields extracted per document type. The Azure Form Recognizer ID m
|`MachineReadableZone.DateOfExpiration`|`date`|Date of expiration|2019-05-05| |`MachineReadableZone.Sex`|`string`|Sex|F|
-#### Document type - `idDocument.nationalIdentityCard` fields extracted:
+### Document type - `idDocument.nationalIdentityCard` fields extracted
+ | Field | Type | Description | Example | |:|:--|:|:--| |`CountryRegion`|`countryRegion`|Country or region code|USA|
Below are the fields extracted per document type. The Azure Form Recognizer ID m
|`Weight`|`string`|Weight|185LB| |`Sex`|`string`|Sex|M|
-#### Document type - `idDocument.residencePermit` fields extracted:
+### Document type - `idDocument.residencePermit` fields extracted
+ | Field | Type | Description | Example | |:|:--|:|:--| |`CountryRegion`|`countryRegion`|Country or region code|USA|
Below are the fields extracted per document type. The Azure Form Recognizer ID m
|`PlaceOfBirth`|`string`|Place of birth|Germany| |`Category`|`string`|Permit category|DV2|
-#### Document type - `idDocument.usSocialSecurityCard` fields extracted:
+### Document type - `idDocument.usSocialSecurityCard` fields extracted
+ | Field | Type | Description | Example | |:|:--|:|:--| |`DocumentNumber`|`string`|Social security card number|WDLABCD456DG|
Below are the fields extracted per document type. The Azure Form Recognizer ID m
|`LastName`|`string`|Surname|TALBOT| |`DateOfIssue`|`date`|Date of issue|08/12/2012| ++
+### ID document field extractions
+
+|Name| Type | Description | Standardized output|
+|:--|:-|:-|:-|
+| DateOfIssue | Date | Issue date | yyyy-mm-dd |
+| Height | String | Height of the holder. | |
+| Weight | String | Weight of the holder. | |
+| EyeColor | String | Eye color of the holder. | |
+| HairColor | String | Hair color of the holder. | |
+| DocumentDiscriminator | String | Document discriminator is a security code that identifies where and when the license was issued. | |
+| Endorsements | String | Additional driving privileges granted to a driver, such as Motorcycle or School bus. | |
+| Restrictions | String | Restricted driving privileges applicable to suspended or revoked licenses.| |
+| VehicleClassification | String | Types of vehicles that can be driven by a driver. ||
+| CountryRegion | countryRegion | Country or region code compliant with ISO 3166 standard | |
+| DateOfBirth | Date | DOB | yyyy-mm-dd |
+| DateOfExpiration | Date | Expiration date | yyyy-mm-dd |
+| DocumentNumber | String | Relevant passport number, driver's license number, etc. | |
+| FirstName | String | Extracted given name and middle initial if applicable | |
+| LastName | String | Extracted surname | |
+| Nationality | countryRegion | Country or region code compliant with ISO 3166 standard (Passport only) | |
+| Sex | String | Possible extracted values include "M", "F" and "X" | |
+| MachineReadableZone | Object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
+| DocumentType | String | Document type, for example, Passport, Driver's License, Social security card and more | "passport" |
+| Address | String | Extracted address, address is also parsed to its components - address, city, state, country, zip code ||
+| Region | String | Extracted region, state, province, etc. (Driver's License only) | |
+
+### Migration guide and REST API v3.0
+
+* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
+ ## Next steps
-* Try the prebuilt ID model in the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument). Use the sample documents or bring your own documents.
-* Complete a Form Recognizer quickstart:
+* [Learn how to process your own forms and documents](quickstarts/try-v3-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
- > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-* Explore our REST API:
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+
+* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Form Recognizer sample labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
recommendations: false
<!-- markdownlint-disable MD033 -->
-# Form Recognizer invoice model
+# Automated invoice processing
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
applied-ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-sas-tokens.md
Title: Create SAS tokens for containers and blobs with the Azure portal
-description: Learn how to create shared access signature (SAS) tokens for containers using Azure portal, or Azure Explorer
+ Title: Create shared access signature (SAS) tokens for your storage containers and blobs
+description: How to create shared access signature (SAS) tokens for containers and blobs with Microsoft Storage Explorer and the Azure portal.
Previously updated : 10/20/2022 Last updated : 10/26/2022 monikerRange: '>=form-recog-2.1.0' recommendations: false
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
recommendations: false
* For the usage with [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [Form Recognizer REST API](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md) and [Sample Labeling Tool](https://fott-2-1.azurewebsites.net/):
-| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
+|Quota|Free (F0)<sup>1</sup>|Standard (S0)|
|--|--|--|
-| **Concurrent Request limit** | 1 | 15 (default value) |
+| **Transactions Per Second limit** | 1 | 15 (default value) |
| Adjustable | No | Yes<sup>2</sup> | | **Max document size** | 4 MB | 500 MB | | Adjustable | No | No |
recommendations: false
::: moniker range="form-recog-3.0.0"
-| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
+|Quota|Free (F0)<sup>1</sup>|Standard (S0)|
|--|--|--| | **Compose Model limit** | 5 | 200 (default value) | | Adjustable | No | No |
-| **Training dataset size - Template** | 50MB | 50MB (default value) |
+| **Training dataset size - Template** | 50 MB | 50 MB (default value) |
| Adjustable | No | No |
-| **Training dataset size - Neural** | 1GB | 1GB (default value) |
+| **Training dataset size - Neural** | 1 GB | 1 GB (default value) |
| Adjustable | No | No | | **Max number of pages (Training) - Template** | 500 | 500 (default value) | | Adjustable | No | No | | **Max number of pages (Training) - Neural** | 50,000 | 50,000 (default value) | | Adjustable | No | No | | **Custom neural model train** | 10 per month | 10 per month |
-| Adjustable | No | Yes<sup>3</sup> |
+| Adjustable | No |Yes<sup>3</sup>|
<sup>3</sup> Open a support request to increase the monthly training limit.
recommendations: false
## Detailed description, Quota adjustment, and best practices
-Before requesting a quota increase (where applicable), ensure that it's necessary. Form Recognizer service uses autoscaling to bring the required computational resources in "on-demand" and at the same time to keep the customer costs low, deprovision unused resources by not maintaining an excessive amount of hardware capacity. Every time your application receives a Response Code 429 ("Too many requests") while your workload is within the defined limits (see [Quotas and Limits quick reference](#form-recognizer-service-quotas-and-limits)) the most likely explanation is that the Service is scaling up to your demand and didn't reach the required scale yet, thus it doesn't immediately have enough resources to serve the request. This state is transient and shouldn't last long.
+Before requesting a quota increase (where applicable), ensure that it's necessary. The Form Recognizer service uses autoscaling to bring in the required computational resources on demand, while keeping customer costs low by deprovisioning unused resources instead of maintaining an excessive amount of hardware capacity. If your application receives Response Code 429 ("Too many requests") while your workload is within the defined limits, the most likely explanation is that the service is scaling up to meet your demand but hasn't yet reached the required scale, so it doesn't immediately have enough resources to serve the request. This state is transient and shouldn't last long. For more information, see the [Quotas and Limits quick reference](#form-recognizer-service-quotas-and-limits).
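+
+For illustration, a hedged retry sketch with exponential backoff for 429 responses, assuming a hypothetical endpoint, key, and input document; the analyze route shown matches the v3.0 REST API referenced in this article:
+
+```console
+# Hypothetical endpoint, key, and input; adjust to your resource
+ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
+KEY="<your-key>"
+DELAY=1
+for attempt in 1 2 3 4 5; do
+  STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
+    -X POST "$ENDPOINT/formrecognizer/documentModels/prebuilt-idDocument:analyze?api-version=2022-08-31" \
+    -H "Ocp-Apim-Subscription-Key: $KEY" \
+    -H "Content-Type: application/json" \
+    -d '{"urlSource": "https://example.com/id.jpg"}')
+  if [ "$STATUS" != "429" ]; then
+    echo "Request returned HTTP $STATUS"
+    break
+  fi
+  echo "Throttled (429); retrying in ${DELAY}s"
+  sleep "$DELAY"
+  DELAY=$((DELAY * 2))   # exponential backoff between retries
+done
+```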
### General best practices to mitigate throttling during autoscaling
Jump to [Form Recognizer: increasing concurrent request limit](#create-and-submi
### Increasing transactions per second request limit
-By default the number of concurrent requests is limited to 15 transactions per second for a Form Recognizer resource. For the Standard pricing tier, this amount can be increased. Before submitting the request, ensure you're familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#example-of-a-workload-pattern-best-practice).
+By default, the number of transactions per second is limited to 15 for a Form Recognizer resource. For the Standard pricing tier, this amount can be increased. Before submitting the request, ensure you're familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#example-of-a-workload-pattern-best-practice).
Increasing the Concurrent Request limit does **not** directly affect your costs. Form Recognizer service uses a "Pay only for what you use" model. The limit defines how high the service may scale before it starts throttling your requests. The existing value of the Concurrent Request limit parameter is **not** visible via the Azure portal, command-line tools, or API requests. To verify the existing value, create an Azure support request.
+If you would like to increase your transactions per second, you can enable autoscaling on your resource. For more information, see [enable auto scaling](../../cognitive-services/autoscale.md). You can also submit a support request to increase your TPS limit.
+ #### Have the required information ready -- Form Recognizer Resource ID-- Region
+* Form Recognizer Resource ID
+* Region
-- **How to get information (Base model)**:
- - Go to [Azure portal](https://portal.azure.com/)
- - Select the Form Recognizer Resource for which you would like to increase the transaction limit
- - Select *Properties* (*Resource Management* group)
- - Copy and save the values of the following fields:
- - **Resource ID**
- - **Location** (your endpoint Region)
+* **How to get information (Base model)**:
+ * Go to [Azure portal](https://portal.azure.com/)
+ * Select the Form Recognizer Resource for which you would like to increase the transaction limit
+ * Select *Properties* (*Resource Management* group)
+ * Copy and save the values of the following fields:
+ * **Resource ID**
+ * **Location** (your endpoint Region)
#### Create and submit support request
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md). +
+## October 2022
+
+### Guidance for Disaster Recovery of Azure Automation account
+
+Azure Automation now provides guidance to help you build your own disaster recovery strategy to handle a region-wide or zone-wide failure. [Learn more](https://learn.microsoft.com/azure/automation/automation-disaster-recovery).
+ ## September 2022 ### Availability zones support for Azure Automation Azure Automation now supports [Azure availability zones](../availability-zones/az-overview.md#availability-zones) to provide improved resiliency and high availability to a service instance in a specific Azure region. [Learn more](https://learn.microsoft.com/azure/automation/automation-availability-zones). + ## July 2022 ### Support for Run As accounts
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
Title: Deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance
-description: Explains how to deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance
+ Title: Deploy Active Directory-integrated Azure Arc-enabled SQL Managed Instance
+description: Learn how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory authentication.
Last updated 10/11/2022
-# Deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance
+# Deploy Active Directory-integrated Azure Arc-enabled SQL Managed Instance
-This article explains how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory (AD) authentication.
-
-Before you proceed, complete the steps explained in [Customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a system-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md)
+In this article, learn how to deploy Azure Arc-enabled Azure SQL Managed Instance with Active Directory authentication.
## Prerequisites
-Before you proceed, verify that you have:
+Before you begin your SQL Managed Instance deployment, make sure you have these prerequisites:
-* An Active Directory (AD) Domain
-* An instance of data controller deployed
-* An instance of Active Directory connector deployed
+- An Active Directory domain
+- A deployed Azure Arc data controller
+- A deployed Active Directory connector with a [customer-managed keytab](deploy-customer-managed-keytab-active-directory-connector.md) or [system-managed keytab](deploy-system-managed-keytab-active-directory-connector.md)
-### Specific requirements for different modes
+## Connector requirements
-#### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
+The customer-managed keytab Active Directory connector and the system-managed keytab Active Directory connector are different deployment modes, each with its own requirements and steps. Select the tab for the connector you use.
-The following instructions expect that the users can bring in the Active Directory domain and provide to the AD customer-managed keytab deployment.
+### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
-* An Active Directory user account for SQL
-* Service Principal Names (SPNs) under the user account
-* DNS A (forward) record for the primary (and optionally, secondary) endpoint of SQL
+For an Active Directory customer-managed keytab deployment, you must provide:
-#### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+- An Active Directory user account for SQL
+- Service principal names (SPNs) under the user account
+- DNS A (forward) record for the primary endpoint of SQL (and optionally, a secondary endpoint)
-The following instructions expect that the users can bring in the Active Directory domain and provide to the AD system-managed keytab deployment.
+### [System-managed keytab mode](#tab/system-managed-keytab-mode)
-* A unique name of an Active Directory user account for SQL
-* DNS A (forward) record for the primary (and optionally, secondary) endpoint of SQL
+For an Active Directory system-managed keytab deployment, you must provide:
-
+- A unique name of an Active Directory user account for SQL
+- DNS A (forward) record for the primary endpoint of SQL (and optionally, a secondary endpoint)
-## Before you deploy SQL Managed Instance
+
-1. Identify a DNS name for the SQL endpoints.
+## Prepare for deployment
- Choose unique DNS names for the SQL endpoints that clients will connect to from outside the Kubernetes cluster.
+Depending on your deployment mode, complete the following steps to prepare to deploy SQL Managed Instance.
- These DNS names should be in the Active Directory domain or its descendant domains.
+### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
- The examples in these instructions use `sqlmi-primary.contoso.local` for the primary DNS name and `sqlmi-secondary.contoso.local` for the secondary DNS name.
+To prepare for deployment in customer-managed keytab mode:
-2. Identify the port numbers for the SQL endpoints.
+1. **Identify a DNS name for the SQL endpoints**: Choose unique DNS names for the SQL endpoints that clients will connect to from outside the Kubernetes cluster.
- You provide a port number for each of the SQL endpoints.
+ - The DNS names should be in the Active Directory domain or in its descendant domains.
+ - The examples in this article use `sqlmi-primary.contoso.local` for the primary DNS name and `sqlmi-secondary.contoso.local` for the secondary DNS name.
- These port numbers must be in the acceptable range of port numbers for Kubernetes cluster.
+1. **Identify the port numbers for the SQL endpoints**: Enter a port number for each of the SQL endpoints.
- The examples in these instructions use `31433` for the primary port number and `31434` for the secondary port number.
+ - The port numbers must be in the acceptable range of port numbers for your Kubernetes cluster.
+ - The examples in this article use `31433` for the primary port number and `31434` for the secondary port number.
-### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
+1. **Create an Active Directory account for the managed instance**: Choose a name for the Active Directory account to represent your managed instance.
-3. Create an Active Directory account for the SQL managed instance.
+ - The name must be unique in the Active Directory domain.
+ - The examples in this article use `sqlmi-account` for the Active Directory account name.
- Choose a name for the Active Directory account that will represent your SQL. This name should be unique in the Active Directory domain.
+ To create the account:
- Open `Active Directory Users and Computers` tool on the Domain Controller and create an account that will represent this SQL Managed Instance.
+ 1. On the domain controller, open the Active Directory Users and Computers tool. Create an account to represent the managed instance.
+ 1. Enter an account password that complies with the Active Directory domain password policy. You'll use this password in some of the steps in the next sections.
+ 1. Ensure that the account is enabled. The account doesn't need any special permissions.
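+
+ As an alternative to the Active Directory Users and Computers tool, you can create the account with PowerShell on the domain controller. The following is a minimal sketch; it assumes the ActiveDirectory PowerShell module is available and uses this article's example account name:
+
+ ```powershell
+ # Create and enable the Active Directory account for the managed instance.
+ # The password you enter must satisfy the domain password policy.
+ $password = Read-Host -AsSecureString "Enter password"
+ New-ADUser -Name "sqlmi-account" -AccountPassword $password -Enabled $true
+ ```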
- Provide a complex password to this account that is acceptable to the Active Directory domain password policy. This password will be needed in some of the next steps.
+1. **Create DNS records for the SQL endpoints in the Active Directory DNS servers**: In one of the Active Directory DNS servers, create A records (forward lookup records) for the DNS name you chose in step 1.
- The account does not need any special permissions. Ensure that the account is enabled.
+ - The DNS records should point to the IP address that the SQL endpoint will listen on for connections from outside the Kubernetes cluster.
+ - You don't need to create reverse-lookup Pointer (PTR) records in association with the A records.
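+
+ If your Active Directory DNS runs on Windows Server, you can create the A record with the DnsServer PowerShell module. The following is a minimal sketch that uses this article's example names; the IP address is a placeholder for your environment:
+
+ ```powershell
+ # Create a forward (A) record for the primary SQL endpoint.
+ # Use the IP address the endpoint listens on outside the Kubernetes cluster.
+ Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "sqlmi-primary" -IPv4Address "<IP address>"
+ ```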
- The examples in these instructions use `sqlmi-account` for the AD account name.
+1. **Create SPNs**: For SQL to be able to accept Active Directory authentication against the SQL endpoints, you must register two SPNs under the Active Directory account you created earlier. Two SPNs are required for the primary endpoint. If you want Active Directory authentication for the secondary endpoint, you must also register two SPNs for the secondary endpoint.
-### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+ To create and register SPNs:
-3. Choose an Active Directory account name for SQL.
+ 1. Use the following format to create the SPNs:
- Choose a name for the Active Directory account that will represent your SQL. This name should be unique in the Active Directory domain and the account must NOT pre-exist in the domain. The system will generate this account in the domain.
+ ```output
+ MSSQLSvc/<DNS name>
+ MSSQLSvc/<DNS name>:<port>
+ ```
- The examples in these instructions use `sqlmi-account` for the AD account name.
+ 1. On one of the domain controllers, run the following commands to register the SPNs:
-
+ ```console
+ setspn -S MSSQLSvc/<DNS name> <account>
+ setspn -S MSSQLSvc/<DNS name>:<port> <account>
+ ```
-4. Create DNS records for the SQL endpoints in the Active Directory DNS servers.
+ Your commands might look like the following example:
- In one of the Active Directory DNS servers, create A records (forward lookup records) for the DNS names chosen in step 1. These DNS records should point to the IP address that the SQL endpoint will listen on for connections from outside the Kubernetes cluster.
+ ```console
+ setspn -S MSSQLSvc/sqlmi-primary.contoso.local sqlmi-account
+ setspn -S MSSQLSvc/sqlmi-primary.contoso.local:31433 sqlmi-account
+ ```
- You do not need to create PTR records (reverse lookup records) in association with the A records.
+ 1. If you want Active Directory authentication on the secondary endpoint, run the same commands to add SPNs for the secondary endpoint:
-### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
-
-5. Create Service Principal Names (SPNs)
+ ```console
+ setspn -S MSSQLSvc/<DNS name> <account>
+ setspn -S MSSQLSvc/<DNS name>:<port> <account>
+ ```
+
+ Your commands might look like the following example:
- In order for SQL to be able to accept AD authentication against the SQL endpoints, we need to register two SPNs under the account generated in the previous step. SPNs must be registered for the primary endpoint and optionally for the secondary endpoint if AD authentication is desired on the secondary endpoint. The SPNs should be of the following format:
+ ```console
+ setspn -S MSSQLSvc/sqlmi-secondary.contoso.local sqlmi-account
+ setspn -S MSSQLSvc/sqlmi-secondary.contoso.local:31434 sqlmi-account
+ ```
- ```output
- MSSQLSvc/<DNS name>
- MSSQLSvc/<DNS name>:<port>
- ```
+1. **Generate a keytab file that has entries for the account and SPNs**: For SQL to be able to authenticate itself to Active Directory and accept authentication from Active Directory users, provide a keytab file by using a Kubernetes secret.
- To register the SPNs, run the following commands on one of the domain controllers.
+ - The keytab file contains encrypted entries for the Active Directory account that's generated for the managed instance and the SPNs.
+ - SQL Server uses this file as its credential against Active Directory.
+ - You can choose from multiple tools to generate a keytab file:
- ```console
- setspn -S MSSQLSvc/<DNS name> <account>
- setspn -S MSSQLSvc/<DNS name>:<port> <account>
- ```
+ - `adutil`: Available for Linux (see [Introduction to adutil](/sql/linux/sql-server-linux-ad-auth-adutil-introduction))
+ - `ktutil`: Available on Linux
+ - `ktpass`: Available on Windows
+ - Custom scripts
+
+ To generate the keytab file specifically for the managed instance:
- With the chosen example primary endpoint DNS name, port number and the account name in this document, the commands should look like the following:
+ 1. Use one of these custom scripts:
- ```console
- setspn -S MSSQLSvc/sqlmi-primary.contoso.local sqlmi-account
- setspn -S MSSQLSvc/sqlmi-primary.contoso.local:31433 sqlmi-account
- ```
+ - Linux: [create-sql-keytab.sh](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.sh)
+ - Windows Server: [create-sql-keytab.ps1](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.ps1)
- Additionally, if AD authentication is needed on the secondary endpoint, the following commands will add SPNs for the secondary endpoint using the chosen example DNS name and port number:
+ The scripts accept several parameters and generate a keytab file and a YAML specification file for the Kubernetes secret that contains the keytab.
- ```console
- setspn -S MSSQLSvc/sqlmi-secondary.contoso.local sqlmi-account
- setspn -S MSSQLSvc/sqlmi-secondary.contoso.local:31434 sqlmi-account
- ```
+ 1. In your script, replace the parameter values with values for your managed instance deployment.
-6. Generate a keytab file containing entries for the account and SPNs
+ For the input parameters, use the following values:
- For SQL to be able to authenticate itself to Active Directory and accept authentication from Active Directory users, provide a keytab file using a Kubernetes secret.
+ - `--realm`: The Active Directory domain in uppercase. Example: `CONTOSO.LOCAL`
+ - `--account`: The Active Directory account where the SPNs are registered. Example: `sqlmi-account`
+ - `--port`: The primary SQL endpoint port number. Example: `31433`
+ - `--dns-name`: The DNS name for the primary SQL endpoint.
+ - `--keytab-file`: The path to the keytab file.
+ - `--secret-name`: The name of the keytab secret to generate a specification for.
+ - `--secret-namespace`: The Kubernetes namespace that contains the keytab secret.
+ - `--secondary-port`: The secondary SQL endpoint port number (optional). Example: `31434`
+ - `--secondary-dns-name`: The DNS name for the secondary SQL endpoint (optional).
- The keytab file contains encrypted entries for the Active Directory account generated for the managed instance and the SPNs.
+ Choose a name for the Kubernetes secret that hosts the keytab. Use the namespace where the managed instance is deployed.
- SQL Server will use this file as its credential against Active Directory.
+ 1. Run the following command to create a keytab:
- There are multiple tools available to generate a keytab file.
+ ```console
+ AD_PASSWORD=<password> ./create-sql-keytab.sh --realm <Active Directory domain in uppercase> --account <Active Directory account name> --port <endpoint port> --dns-name <endpoint DNS name> --keytab-file <keytab file name/path> --secret-name <keytab secret name> --secret-namespace <keytab secret namespace>
+ ```
- - `adutil`: This tool is available for Linux. See [Introduction to `adutil` - Active Directory utility](/sql/linux/sql-server-linux-ad-auth-adutil-introduction).
- - `ktutil`: This tool is available on Linux
- - `ktpass`: This tool is available on Windows
-
- To generate the keytab file specifically for the managed instance, use a bash shell script we have published. It wraps `ktutil` and `adutil` together. It is for use on Linux.
+ Your command might look like the following example:
- A bash script works on Linux-based OS can be found here: [create-sql-keytab.sh](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.sh).
- A PowerShell script works on Windows server based OS can be found here: [create-sql-keytab.ps1](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.ps1).
+ ```console
+ AD_PASSWORD=<password> ./create-sql-keytab.sh --realm CONTOSO.LOCAL --account sqlmi-account --port 31433 --dns-name sqlmi.contoso.local --keytab-file sqlmi.keytab --secret-name sqlmi-keytab-secret --secret-namespace sqlmi-ns
+ ```
- This script accepts several parameters and will output a keytab file and a yaml specification file for the Kubernetes secret containing the keytab.
+ 1. Run the following command to verify that the keytab is correct:
- Use the following command to run the script after replacing the parameter values with the ones for your managed instance deployment.
+ ```console
+ klist -kte <keytab file>
+ ```
- ```console
- AD_PASSWORD=<password> ./create-sql-keytab.sh --realm <AD domain in uppercase> --account <AD account name> --port <endpoint port> --dns-name <endpoint DNS name> --keytab-file <keytab file name/path> --secret-name <keytab secret name> --secret-namespace <keytab secret namespace>
- ```
+1. **Deploy the Kubernetes secret for the keytab**: Use the Kubernetes secret specification file that you generated in the preceding step to deploy the secret.
- The input parameters are expecting the following values:
- * `--realm` expects the uppercase of the AD domain, such as CONTOSO.LOCAL
- * `--account` expects the AD account under where the SPNs are registered, such as sqlmi-account
- * `--port` expects the primary SQL endpoint port number, such as 31433
- * `--dns-name` expects the DNS name for the primary SQL endpoint
- * `--keytab-file` expects the path to the keytab file
- * `--secret-name` expects the name of the keytab secret to generate a specification for
- * `--secret-namespace` expects the Kubernetes namespace containing the keytab secret
- * `--secondary-port` expects the secondary SQL endpoint port number, such as 31434 (optional)
- * `--secondary-dns-name` expects the DNS name for the secondary SQL endpoint (optional)
+ The specification file looks similar to this example:
- Choose a name for the Kubernetes secret hosting the keytab. The namespace should be the same as what SQL will be deployed in.
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ type: Opaque
+ metadata:
+ name: <secret name>
+ namespace: <secret namespace>
+ data:
+ keytab: <keytab content in Base64>
+ ```
+
+ To deploy the Kubernetes secret, run this command:
+
+ ```console
+ kubectl apply -f <file>
+ ```
+
+ Your command might look like this example:
+
+ ```console
+ kubectl apply -f sqlmi-keytab-secret.yaml
+ ```
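+
+ Optionally, verify that the secret exists in the target namespace. With this article's example names:
+
+ ```console
+ kubectl get secret sqlmi-keytab-secret -n sqlmi-ns
+ ```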
- The following command creates a keytab. It uses values that this article describes:
+### [System-managed keytab mode](#tab/system-managed-keytab-mode)
- ```console
- AD_PASSWORD=<password> ./create-sql-keytab.sh --realm CONTOSO.LOCAL --account sqlmi-account --port 31433 --dns-name sqlmi.contoso.local --keytab-file sqlmi.keytab --secret-name sqlmi-keytab-secret --secret-namespace sqlmi-ns
- ```
+To prepare for deployment in system-managed keytab mode:
- To verify that the keytab is correct, you may run the following command:
+1. **Identify a DNS name for the SQL endpoints**: Choose unique DNS names for the SQL endpoints that clients will connect to from outside the Kubernetes cluster.
- ```console
- klist -kte <keytab file>
- ```
+ - The DNS names should be in the Active Directory domain or its descendant domains.
+ - The examples in this article use `sqlmi-primary.contoso.local` for the primary DNS name and `sqlmi-secondary.contoso.local` for the secondary DNS name.
-## Deploy Kubernetes secret for the keytab
+1. **Identify the port numbers for the SQL endpoints**: Enter a port number for each of the SQL endpoints.
-Use the Kubernetes secret specification file generated in the previous step to deploy the secret.
-The specification file should look like the following:
+ - The port numbers must be in the acceptable range of port numbers for your Kubernetes cluster.
+ - The examples in this article use `31433` for the primary port number and `31434` for the secondary port number.
-```yaml
-apiVersion: v1
-kind: Secret
-type: Opaque
-metadata:
- name: <secret name>
- namespace: <secret namespace>
-data:
- keytab: <keytab content in base64>
-```
+1. **Choose an Active Directory account name for SQL**: Choose a name for the Active Directory account that will represent your managed instance.
-Deploy the Kubernetes secret with `kubectl apply -f <file>`. For example:
+ - This name should be unique in the Active Directory domain, and the account must *not* already exist in the domain. This account is automatically generated in the domain.
+ - The examples in this article use `sqlmi-account` for the Active Directory account name.
-```console
-kubectl apply –f sqlmi-keytab-secret.yaml
-```
-### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+1. **Create DNS records for the SQL endpoints in the Active Directory DNS servers**: In one of the Active Directory DNS servers, create A records (forward lookup records) for the DNS names chosen in step 1.
-These steps do not apply to the system-managed keytab mode.
+ - The DNS records should point to the IP address that the SQL endpoint will listen on for connections from outside the Kubernetes cluster.
+ - You don't need to create reverse-lookup Pointer (PTR) records in association with the A records.
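+
+ To confirm that a record resolves as expected, you can query it from a domain-joined machine. The following is a minimal sketch that uses the `Resolve-DnsName` cmdlet and this article's example DNS name:
+
+ ```powershell
+ # Verify that the A record for the primary endpoint resolves
+ Resolve-DnsName -Name "sqlmi-primary.contoso.local" -Type A
+ ```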
-## Azure Arc-enabled SQL Managed Instance specification for Active Directory Authentication
+## Set properties for Active Directory authentication
-To deploy an Azure Arc-enabled SQL Managed Instance for Azure Arc Active Directory Authentication, the deployment specification needs to reference the Active Directory connector instance it wants to use. Referencing the Active Directory connector in SQL specification will automatically set up SQL to perform Active Directory authentication.
-
-To support Active Directory authentication on SQL, the deployment specification uses the following fields:
+To deploy an Azure Arc-enabled SQL Managed Instance for Azure Arc Active Directory authentication, update your deployment specification file to reference the Active Directory connector instance to use. Referencing the Active Directory connector in the SQL specification file automatically sets up SQL for Active Directory authentication.
### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) -- **Required** (For AD authentication)
- - `spec.security.activeDirectory.connector.name`
- Name of the pre-existing Active Directory connector custom resource to join for AD authentication. When provided, system will assume that AD authentication is desired.
- - `spec.security.activeDirectory.accountName`
- Name of the Active Directory account for this managed instance.
- - `spec.security.activeDirectory.keytabSecret`
- Name of the Kubernetes secret hosting the pre-created keytab file by users. This secret must be in the same namespace as the managed instance. This parameter is only required for the AD deployment in customer-managed keytab mode.
- - `spec.services.primary.dnsName`
- You provide a DNS name for the primary SQL endpoint.
- - `spec.services.primary.port`
- You provide a port number for the primary SQL endpoint.
--- **Optional**
- - `spec.security.activeDirectory.connector.namespace`
- Kubernetes namespace of the pre-existing Active Directory connector to join for AD authentication. When not provided, system will assume the same namespace as SQL.
- - `spec.services.readableSecondaries.dnsName`
- You provide a DNS name for the secondary SQL endpoint.
- - `spec.services.readableSecondaries.port`
- You provide a port number for the secondary SQL endpoint.
+To support Active Directory authentication on SQL in customer-managed keytab mode, set the following properties in your deployment specification file. Some properties are required and some are optional.
+
+#### Required
+
+- `spec.security.activeDirectory.connector.name`: The name of the preexisting Active Directory connector custom resource to join for Active Directory authentication. If you enter a value for this property, Active Directory authentication is implemented.
+- `spec.security.activeDirectory.accountName`: The name of the Active Directory account for the managed instance.
+- `spec.security.activeDirectory.keytabSecret`: The name of the Kubernetes secret that hosts the pre-created keytab file for users. This secret must be in the same namespace as the managed instance. This parameter is required only for the Active Directory deployment in customer-managed keytab mode.
+- `spec.services.primary.dnsName`: Enter a DNS name for the primary SQL endpoint.
+- `spec.services.primary.port`: Enter a port number for the primary SQL endpoint.
+
+#### Optional
+
+- `spec.security.activeDirectory.connector.namespace`: The Kubernetes namespace of the preexisting Active Directory connector to join for Active Directory authentication. If you don't enter a value, the SQL namespace is used.
+- `spec.services.readableSecondaries.dnsName`: Enter a DNS name for the secondary SQL endpoint.
+- `spec.services.readableSecondaries.port`: Enter a port number for the secondary SQL endpoint.
### [System-managed keytab mode](#tab/system-managed-keytab-mode) -- **Required** (For AD authentication)
- - `spec.security.activeDirectory.connector.name`
- Name of the pre-existing Active Directory connector custom resource to join for AD authentication. When provided, system will assume that AD authentication is desired.
- - `spec.security.activeDirectory.accountName`
- Name of the Active Directory (AD) account for this SQL. This account will be automatically generated for this SQL by the system and must not exist in the domain before deploying SQL.
- - `spec.services.primary.dnsName`
- You provide a DNS name for the primary SQL endpoint.
- - `spec.services.primary.port`
- You provide a port number for the primary SQL endpoint.
--- **Optional**
- - `spec.security.activeDirectory.connector.namespace`
- Kubernetes namespace of the pre-existing Active Directory connector to join for AD authentication. When not provided, system will assume the same namespace as SQL.
- - `spec.security.activeDirectory.encryptionTypes`
- List of Kerberos encryption types to allow for the automatically generated AD account provided in `spec.security.activeDirectory.accountName`. Accepted values are RC4, AES128 and AES256. It defaults to allow all encryption types when there is no value provided. You can disable RC4 by providing only AES128 and AES256 as encryption types.
- - `spec.services.readableSecondaries.dnsName`
- You provide a DNS name for the secondary SQL endpoint.
- - `spec.services.readableSecondaries.port`
- You provide a port number for the secondary SQL endpoint.
+To support Active Directory authentication on SQL in system-managed keytab mode, set the following properties in your deployment specification file. Some properties are required and some are optional.
+
+#### Required
+
+- `spec.security.activeDirectory.connector.name`: The name of the preexisting Active Directory connector custom resource to join for Active Directory authentication. If you enter a value for this property, Active Directory authentication is implemented.
+- `spec.security.activeDirectory.accountName`: The name of the Active Directory account for the managed instance. This account is automatically generated for this managed instance and must not exist in the domain before you deploy SQL.
+- `spec.services.primary.dnsName`: Enter a DNS name for the primary SQL endpoint.
+- `spec.services.primary.port`: Enter a port number for the primary SQL endpoint.
+
+#### Optional
+
+- `spec.security.activeDirectory.connector.namespace`: The Kubernetes namespace of the preexisting Active Directory connector to join for Active Directory authentication. If you don't enter a value, the SQL namespace is used.
+- `spec.security.activeDirectory.encryptionTypes`: A list of Kerberos encryption types to allow for the automatically generated Active Directory account provided in `spec.security.activeDirectory.accountName`. Accepted values are `RC4`, `AES128`, and `AES256`. If you don't enter an encryption type, all encryption types are allowed. You can disable RC4 by entering only `AES128` and `AES256` as encryption types.
+- `spec.services.readableSecondaries.dnsName`: Enter a DNS name for the secondary SQL endpoint.
+- `spec.services.readableSecondaries.port`: Enter a port number for the secondary SQL endpoint.
-### Prepare deployment specification for SQL Managed Instance for Azure Arc
+## Prepare your deployment specification file
-Prepare the following .yaml specification to deploy SQL. Set the fields described in the spec.
+Next, prepare a YAML specification file to deploy SQL Managed Instance. For the mode you use, enter your deployment values in the specification file.
> [!NOTE]
-> The *admin-login-secret* in the yaml example is used for basic authentication. You can use it to login into the SQL managed instance, and then create logins for AD users and groups. Check out [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md) for further details.
+> In the specification file for both modes, the `admin-login-secret` value in the YAML example provides basic authentication. You can use the parameter value to log in to the managed instance, and then create logins for Active Directory users and groups. For more information, see [Connect to Active Directory-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
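+
+To produce the Base64-encoded username and password values for `admin-login-secret`, you can use PowerShell. The following is a minimal sketch; the username and password are placeholders:
+
+```powershell
+# Base64-encode the admin username and password for the Kubernetes secret
+[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("<username>"))
+[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("<password>"))
+```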
### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
+The following example shows a specification file for customer-managed keytab mode:
+ ```yaml apiVersion: v1 data:
- password: <your base64 encoded password>
- username: <your base64 encoded username>
+ password: <your Base64-encoded password>
+ username: <your Base64-encoded username>
kind: Secret metadata: name: admin-login-secret
spec:
adminLoginSecret: admin-login-secret activeDirectory: connector:
- name: <AD connector name>
- namespace: <AD connector namespace>
- accountName: <AD account name>
- keytabSecret: <Keytab secret name>
+ name: <Active Directory connector name>
+ namespace: <Active Directory connector namespace>
+ accountName: <Active Directory account name>
+ keytabSecret: <keytab secret name>
primary: type: LoadBalancer
- dnsName: <Primary Endpoint DNS name>
- port: <Primary Endpoint port number>
+ dnsName: <primary endpoint DNS name>
+ port: <primary endpoint port number>
readableSecondaries: type: LoadBalancer
- dnsName: <Secondary Endpoint DNS name>
- port: <Secondary Endpoint port number>
+ dnsName: <secondary endpoint DNS name>
+ port: <secondary endpoint port number>
storage: data: volumes:
spec:
### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+The following example shows a specification file for system-managed keytab mode:
+ ```yaml apiVersion: v1 data:
- password: <your base64 encoded password>
- username: <your base64 encoded username>
+ password: <your Base64-encoded password>
+ username: <your Base64-encoded username>
kind: Secret metadata: name: admin-login-secret
spec:
adminLoginSecret: admin-login-secret activeDirectory: connector:
- name: <AD connector name>
- namespace: <AD connector namespace>
- accountName: <AD account name>
+ name: <Active Directory connector name>
+ namespace: <Active Directory connector namespace>
+ accountName: <Active Directory account name>
primary: type: LoadBalancer
- dnsName: <Primary Endpoint DNS name>
- port: <Primary Endpoint port number>
+ dnsName: <primary endpoint DNS name>
+ port: <primary endpoint port number>
readableSecondaries: type: LoadBalancer
- dnsName: <Secondary Endpoint DNS name>
- port: <Secondary Endpoint port number>
+ dnsName: <secondary endpoint DNS name>
+ port: <secondary endpoint port number>
storage: data: volumes:
spec:
-### Deploy a managed instance
+## Deploy the managed instance
-To deploy a managed instance using the prepared specification:
+For both customer-managed keytab mode and system-managed keytab mode, deploy the managed instance by using the prepared specification YAML file:
-1. Save the file. The example uses the name `sqlmi.yaml`. Use any name.
-1. Run the following command to deploy the instance according to the specification:
+1. Save the file. The example in the next step uses *sqlmi.yaml* for the specification file name, but you can choose any file name.
-```console
-kubectl apply -f sqlmi.yaml
-```
+1. Run the following command to deploy the instance by using the specification:
-## Next steps
+ ```console
+ kubectl apply -f <specification file name>
+ ```
+
+ Your command might look like the following example:
-* [Connect to Active Directory integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
+ ```console
+ kubectl apply -f sqlmi.yaml
+ ```
+
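+To monitor the deployment, you can list the SQL managed instances in the namespace. The following sketch assumes the `sqlmi` short name for the Azure Arc-enabled SQL Managed Instance custom resource:
+
+```console
+kubectl get sqlmi -n <namespace>
+```
+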
+## Next steps
+- [Connect to Active Directory-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md)
+- [Upgrade your Active Directory connector](upgrade-active-directory-connector.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge (preview) overview description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 07/14/2022 Last updated : 10/27/2022
Azure Arc resource bridge (preview) hosts other components such as [custom locat
Azure Arc resource bridge (preview) can host other Azure services or solutions running on-premises. For this preview, there are two objects hosted on the Arc resource bridge (preview):
-* Cluster extension: The Azure service deployed to run on-premises. For the preview release, it supports two
+* Cluster extension: The Azure service deployed to run on-premises. For the preview release, it supports three
* Azure Arc-enabled VMware
* Azure Arc-enabled Azure Stack HCI
+ * Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)
* Custom locations: A deployment target where you can create Azure resources. It maps to different resource for different Azure services. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Arc-enabled Azure Stack HCI, it maps to an HCI cluster instance.
-Custom locations and cluster extension are both Azure resources, which are linked to the Azure Arc resource bridge (preview) resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, and that routes that *create action* to the mapped vCenter or Azure Stack HCI cluster.
+Custom locations and cluster extension are both Azure resources, which are linked to the Azure Arc resource bridge (preview) resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, which routes the *create action* to the mapped vCenter, Azure Stack HCI cluster, or SCVMM.
Some resources are unique to the infrastructure. For example, vCenter has a resource pool, network, and template resources. During VM creation, these resources need to be specified. With Azure Stack HCI, you just need to select the custom location, network and template to create a VM.
By registering resource pools, networks, and VM templates, you can represent a s
You can provision and manage on-premises Windows and Linux virtual machines (VMs) running on Azure Stack HCI clusters.
+### System Center Virtual Machine Manager (SCVMM)
+
+You can connect an SCVMM management server to Azure by deploying Azure Arc resource bridge (preview) in the VMM environment. Azure Arc resource bridge (preview) enables you to represent the SCVMM resources (clouds, VMs, templates, and so on) in Azure and perform various operations on them:
+
+* Start, stop, and restart a virtual machine
+* Control access and add Azure tags
+* Add, remove, and update network interfaces
+* Add, remove, and update disks and update VM size (CPU cores and memory)
+ ## Prerequisites [Azure CLI](/cli/azure/install-azure-cli) is required to deploy the Azure Arc resource bridge on supported private cloud environments.
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
Compare _active-passive_ to _active-active_, where you can write to either side
With passive geo-replication, the cache instances are typically located in different Azure regions, though that isn't required. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagates changes to the secondary.
-Failover is not automatic. For more information and information on how to use failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+Failover is not automatic. For more information on how to use failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
> [!NOTE] > Geo-replication is designed as a disaster-recovery solution.
After geo-replication is configured, the following restrictions apply to your li
- You can't [Import](cache-how-to-import-export-data.md#import) into the secondary linked cache. - You can't delete either linked cache, or the resource group that contains them, until you unlink the caches. For more information, see [Why did the operation fail when I tried to delete my linked cache?](#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - If the caches are in different regions, network egress costs apply to the data moved across regions. For more information, see [How much does it cost to replicate my data across Azure regions?](#how-much-does-it-cost-to-replicate-my-data-across-azure-regions)-- Failover is not automatic. You must start the failover from the primary to the secondary inked cache. For more information and information on how to use failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+- Failover is not automatic. You must start the failover from the primary to the secondary linked cache. For more information on how to use failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
- Private links can't be added to caches that are already geo-replicated. To add a private link to a geo-replicated cache: 1. Unlink the geo-replication. 2. Add a Private Link. 3. Last, relink the geo-replication. ## Add a geo-replication link
-1. To link two caches together for geo-replication, fist select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from the working pane.
+1. To link two caches together for geo-replication, first select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from the working pane.
:::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Screenshot showing the cache's Geo-replication menu.":::
azure-cache-for-redis Cache Moving Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-moving-resources.md
After geo-replication is configured, the following restrictions apply to your li
- You can't [Import](cache-how-to-import-export-data.md#import) into the secondary linked cache. - You can't delete either linked cache, or the resource group that contains them, until you unlink the caches. For more information, see [Why did the operation fail when I tried to delete my linked cache?](cache-how-to-geo-replication.md#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - If the caches are in different regions, network egress costs apply to the data moved across regions. For more information, see [How much does it cost to replicate my data across Azure regions?](cache-how-to-geo-replication.md#how-much-does-it-cost-to-replicate-my-data-across-azure-regions)-- Failover is not automatic. You must start the failover from the primary to the secondary inked cache. For more information and information on how to failover a client application, see [Initiate a failover from geo-primary to geo-secondary (preview)](cache-how-to-geo-replication.md#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+- Failover is not automatic. You must start the failover from the primary to the secondary linked cache. For more information on how to failover a client application, see [Initiate a failover from geo-primary to geo-secondary (preview)](cache-how-to-geo-replication.md#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
### Move
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 09/29/2022 Last updated : 10/2/2022 # What's New in Azure Cache for Redis
+## October 2022
+
+### Enhancements for passive geo-replication
+
+Several enhancements have been made to the passive geo-replication functionality offered on the Premium tier of Azure Cache for Redis.
+
+- New metrics are available for customers to better track the health and status of their geo-replication link, including statistics around the amount of data that is waiting to be replicated. For more information, see [Monitor Azure Cache for Redis](cache-how-to-monitor.md).
+
+ - Geo Replication Connectivity Lag (preview)
+ - Geo Replication Data Sync Offset (preview)
+ - Geo Replication Full Sync Event Finished (preview)
+ - Geo Replication Full Sync Event Started (preview)
+
+- Customers can now initiate a failover between geo-primary and geo-replica caches with a single selection or CLI command, eliminating the hassle of manually unlinking and relinking caches. For more information, see [Initiate a failover from geo-primary to geo-secondary (preview)](cache-how-to-geo-replication.md#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+
+- A global cache URL is also now offered. It automatically updates its DNS records after a geo-failover is triggered, so your application needs to manage only one cache address. For more information, see [Geo-primary URLs (preview)](cache-how-to-geo-replication.md#geo-primary-urls-preview).
+ ## September 2022 ### Upgrade your Azure Cache for Redis instances to use Redis version 6 by June 30, 2023
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
+
+ Title: How to create Azure Maps applications using the C# REST SDK
+
+description: How to develop applications that incorporate Azure Maps using the C# SDK Developers Guide.
+Last updated : 10/31/2021
+# C# REST SDK Developers Guide
+
+The Azure Maps C# SDK supports all of the functionality provided in the [Azure Maps REST API][Rest API], like searching for an address, routing between different coordinates, and getting the geo-location of a specific IP address. This article helps you get started building location-aware applications that incorporate the power of Azure Maps.
+
+> [!NOTE]
+> Azure Maps C# SDK supports any .NET version that is compatible with [.NET Standard 2.0][.NET standard]. For an interactive table, see [.NET Standard versions][.NET Standard versions].
+
+## Prerequisites
+
+- [Azure Maps account][Azure Maps account].
+- [Subscription key][Subscription key] or another form of [authentication][authentication].
+- [.NET standard][.NET standard] version 2.0 or higher.
+
+> [!TIP]
+> You can create an Azure Maps account programmatically. Here's an example that uses the Azure CLI:
+>
+> ```azurecli
+> az maps account create --kind "Gen2" --account-name "myMapAccountName" --resource-group "<resource group>" --sku "G2"
+> ```
+
+## Create a .NET project
+
+The following code snippet demonstrates how to use PowerShell to create a console program named `MapsDemo` with .NET 7.0. You can use any .NET Standard 2.0-compatible version as the framework.
+
+```powershell
+dotnet new console -lang C# -n MapsDemo -f net7.0
+cd MapsDemo
+```
+
+### Install required packages
+
+To use the Azure Maps C# SDK, install the required packages. Each of the Azure Maps services, including search, routing, rendering, and geolocation, is in its own package. Because the Azure Maps C# SDK is in public preview, you need to add the `--prerelease` flag:
+
+```powershell
+dotnet add package Azure.Maps.Rendering --prerelease
+dotnet add package Azure.Maps.Routing --prerelease
+dotnet add package Azure.Maps.Search --prerelease
+dotnet add package Azure.Maps.Geolocation --prerelease
+```
+
+#### Azure Maps services
+
+| Service Name  | NuGet package  | Samples  |
+||-|--|
+| [Search][search readme] | [Azure.Maps.Search][search package] | [search samples][search sample] |
+| [Routing][routing readme] | [Azure.Maps.Routing][routing package] | [routing samples][routing sample] |
+| [Rendering][rendering readme]| [Azure.Maps.Rendering][rendering package]|[rendering sample][rendering sample] |
+| [Geolocation][geolocation readme]|[Azure.Maps.Geolocation][geolocation package]|[geolocation sample][geolocation sample]|
+
+### Fuzzy search an entity
+
+The following code snippet demonstrates how to import the `Azure.Maps.Search` package and perform a fuzzy search on "Starbucks" near Seattle in a simple console application. In `Program.cs`:
+
+```csharp
+using Azure;
+using Azure.Core.GeoJson;
+using Azure.Maps.Search;
+using Azure.Maps.Search.Models;
+
+// Use Azure Maps subscription key authentication
+var credential = new AzureKeyCredential("Azure_Maps_Subscription_key");
+var client = new MapsSearchClient(credential);
+
+SearchAddressResult searchResult = client.FuzzySearch(
+ "Starbucks", new FuzzySearchOptions
+ {
+ Coordinates = new GeoPosition(-122.31, 47.61),
+ Language = SearchLanguage.EnglishUsa
+ });
++
+// Print the search results
+foreach (var result in searchResult.Results)
+{
+ Console.WriteLine($"""
+ * {result.PointOfInterest.Name}
+ {result.Address.StreetNumber} {result.Address.StreetName}
+ {result.Address.Municipality} {result.Address.CountryCode} {result.Address.PostalCode}
+ Coordinate: ({result.Position.Latitude:F4}, {result.Position.Longitude:F4})
+ """);
+}
+```
+
+In the above code snippet, you create a `MapsSearchClient` object using your Azure credentials, then use that Search Client's [FuzzySearch][FuzzySearch] method passing in the point of interest (POI) name "_Starbucks_" and coordinates _GeoPosition(-122.31, 47.61)_. This all gets wrapped up by the SDK and sent to the Azure Maps REST endpoints. When the search results are returned, they're written out to the screen using `Console.WriteLine`.
+
+The following libraries are used:
+
+1. `Azure.Maps.Search` is required for the `MapsSearchClient` class.
+1. `Azure.Maps.Search.Models` is required for the `SearchAddressResult` class.
+1. `Azure.Core.GeoJson` is required for the `GeoPosition` struct used by the `FuzzySearchOptions` class.
+
+To run your application, go to the project folder and execute `dotnet run` in PowerShell:
+
+```powershell
+dotnet run
+```
+
+You should see a list of Starbucks address and coordinate results:
+
+```text
+* Starbucks
+ 1600, East Jefferson Street
+ Seattle US 98122
+ Coordinate: (47.6065, -122.3110)
+* Starbucks
+ 800, 12th Avenue
+ Seattle US 98122
+ Coordinate: (47.6093, -122.3165)
+* Starbucks
+ 2201, East Madison Street
+ Seattle US 98112
+ Coordinate: (47.6180, -122.3036)
+* Starbucks
+ 101, Broadway East
+ Seattle US 98102
+ Coordinate: (47.6189, -122.3213)
+* Starbucks
+ 2300, South Jackson Street
+ Seattle US 98144
+ Coordinate: (47.5995, -122.3020)
+* Starbucks
+ 1600, East Olive Way
+ Seattle US 98102
+ Coordinate: (47.6195, -122.3251)
+* Starbucks
+ 1730, Howell Street
+ Seattle US 98101
+ Coordinate: (47.6172, -122.3298)
+* Starbucks
+ 505, 5Th Ave S
+ Seattle US 98104
+ Coordinate: (47.5977, -122.3285)
+* Starbucks
+ 121, Lakeside Avenue South
+ Seattle US 98122
+ Coordinate: (47.6020, -122.2851)
+* Starbucks Regional Office
+ 220, 1st Avenue South
+ Seattle US 98104
+ Coordinate: (47.6003, -122.3338)
+```
+
+## Search an address
+
+Call the `SearchAddress` method to get the coordinate of an address. Modify the Main program from the sample as follows:
+
+```csharp
+// Use Azure Maps subscription key authentication
+var credential = new AzureKeyCredential("Azure_Maps_Subscription_key");
+var client = new MapsSearchClient(credential);
+
+SearchAddressResult searchResult = client.SearchAddress(
+ "1301 Alaskan Way, Seattle, WA 98101, US");
+
+if (searchResult.Results.Count > 0)
+{
+ SearchAddressResultItem result = searchResult.Results.First();
+ Console.WriteLine($"The Coordinate: ({result.Position.Latitude:F4}, {result.Position.Longitude:F4})");
+}
+```
+
+Results returned by the `SearchAddress` method are ordered by confidence score. Because `searchResult.Results.First()` is used, only the coordinates of the first result are returned.
+
+## Batch reverse search
+
+Azure Maps Search also provides some batch query methods. These methods return long-running operation (LRO) objects. The requests might not return all the results immediately, so you can choose to wait until completion or query the results periodically. The following example demonstrates how to call the batched reverse search methods:
+
+```csharp
+var queries = new List<ReverseSearchAddressQuery>()
+{
+ new ReverseSearchAddressQuery(new ReverseSearchOptions()
+ {
+ Coordinates = new GeoPosition(2.294911, 48.858561)
+ }),
+ new ReverseSearchAddressQuery(new ReverseSearchOptions()
+ {
+ Coordinates = new GeoPosition(-122.127896, 47.639765),
+ RadiusInMeters = 5000
+ })
+};
+```
+
+In the above example, two queries are passed to the batched reverse search request. To get the LRO results, you have a few options. The first option is to pass `WaitUntil.Completed` to the method. The request waits until all requests are finished and then returns the results:
+
+```csharp
+// Wait until the LRO returns the batch results
+ReverseSearchAddressBatchOperation waitUntilCompletedResults = client.ReverseSearchAddressBatch(WaitUntil.Completed, queries);
+
+// Print the result addresses
+printReverseBatchAddresses(waitUntilCompletedResults.Value);
+```
+
+Another option is to pass `WaitUntil.Started`. The request will return immediately, and you'll need to manually poll the results:
+
+```csharp
+// Manually poll the batch results
+ReverseSearchAddressBatchOperation manualPollingOperation = client.ReverseSearchAddressBatch(WaitUntil.Started, queries);
+
+// Keep polling until the operation completes
+while (true)
+{
+    manualPollingOperation.UpdateStatus();
+    if (manualPollingOperation.HasCompleted) break;
+    // Wait one second before polling again
+    Task.Delay(1000).Wait();
+}
+printReverseBatchAddresses(manualPollingOperation.Value);
+```
+
+You can also call `WaitForCompletion()` to explicitly wait for the result:
+
+```csharp
+Response<ReverseSearchAddressBatchResult> manualPollingResult = manualPollingOperation.WaitForCompletion();
+
+printReverseBatchAddresses(manualPollingResult.Value);
+```
+
+The third option uses the operation ID to get the results. The results are cached on the server side for 14 days:
+
+```csharp
+ReverseSearchAddressBatchOperation longRunningOperation = client.ReverseSearchAddressBatch(WaitUntil.Started, queries);
+
+// Get batch results by ID
+string operationId = longRunningOperation.Id;
+
+// After the LRO completes, create a new operation
+// to get the results from the server
+ReverseSearchAddressBatchOperation newOperation = new ReverseSearchAddressBatchOperation(client, operationId);
+Response<ReverseSearchAddressBatchResult> newOperationResult = newOperation.WaitForCompletion();
+
+printReverseBatchAddresses(newOperationResult.Value);
+```
+
+The complete code for reverse address batch search with operation ID:
+
+```csharp
+using Azure;
+using Azure.Core.GeoJson;
+using Azure.Maps.Search;
+using Azure.Maps.Search.Models;
+
+// Use Azure Maps subscription key authentication
+var credential = new AzureKeyCredential("Azure_Maps_Subscription_key");
+var client = new MapsSearchClient(credential);
+
+var queries = new List<ReverseSearchAddressQuery>()
+{
+ new ReverseSearchAddressQuery(new ReverseSearchOptions()
+ {
+ Coordinates = new GeoPosition(2.294911, 48.858561)
+ }),
+ new ReverseSearchAddressQuery(new ReverseSearchOptions()
+ {
+ Coordinates = new GeoPosition(-122.127896, 47.639765),
+ RadiusInMeters = 5000
+ })
+};
+
+// Manual polling the batch results
+ReverseSearchAddressBatchOperation longRunningOperation = client.ReverseSearchAddressBatch(WaitUntil.Started, queries);
+
+// Get batch results by ID
+string operationId = longRunningOperation.Id;
+
+// A few days later, create a new operation and get the result from the server
+ReverseSearchAddressBatchOperation newOperation = new ReverseSearchAddressBatchOperation(client, operationId);
+Response<ReverseSearchAddressBatchResult> newOperationResult = newOperation.WaitForCompletion();
+printReverseBatchAddresses(newOperationResult.Value);
+
+void printReverseBatchAddresses(ReverseSearchAddressBatchResult batchResult)
+{
+ // Print the search results
+ for (int i = 0; i < batchResult.Results.Count; i++)
+ {
+ Console.WriteLine($"Possible addresses for query {i}:");
+ var result = batchResult.Results[i];
+ foreach (var address in result.Addresses)
+ {
+ Console.WriteLine($"{address.Address.FreeformAddress}");
+ }
+ }
+}
+```
+
+## Additional information
+
+For more information, see the [Azure.Maps Namespace][Azure.Maps Namespace] in the .NET documentation.
+
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+
+[authentication]: azure-maps-authentication.md
+[.NET standard]: /dotnet/standard/net-standard?tabs=net-standard-2-0
+[Rest API]: /rest/api/maps/
+[.NET Standard versions]: https://dotnet.microsoft.com/platform/dotnet-standard#versions
+[search package]: https://www.nuget.org/packages/Azure.Maps.Search
+[search readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Search/README.md
+[search sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Search/samples
+[routing package]: https://www.nuget.org/packages/Azure.Maps.Routing
+[routing readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Routing/README.md
+[routing sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Routing/samples
+[rendering package]: https://www.nuget.org/packages/Azure.Maps.Rendering
+[rendering readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Rendering/README.md
+[rendering sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Rendering/samples
+[geolocation package]: https://www.nuget.org/packages/Azure.Maps.Geolocation
+[geolocation readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Geolocation/README.md
+[geolocation sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Geolocation/samples
+[FuzzySearch]: /dotnet/api/azure.maps.search.mapssearchclient.fuzzysearch
+[Azure.Maps Namespace]: /dotnet/api/azure.maps
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
+
+ Title: REST SDK Developer Guide
+
+description: How to develop applications that incorporate Azure Maps using the various SDK Developer how-to articles.
+Last updated : 10/31/2021
+# REST SDK Developer Guide
+
+You can call the Azure Maps [REST API][Rest API] directly from any programming language, but doing so can be error prone and require extra effort. To make incorporating Azure Maps into your applications easier and less error prone, the Azure Maps team has encapsulated its REST API in SDKs for C# (.NET), Python, JavaScript/TypeScript, and Java.
+
+This article lists the libraries currently available for each SDK with links to how-to articles to help you get started.
+
+## C# SDK
+
+The Azure Maps C# SDK supports any .NET version that is compatible with [.NET Standard 2.0][.NET Standard versions].
+
+| Service Name  | NuGet package  | Samples  |
+||-|--|
+| [Search][C# search readme] | [Azure.Maps.Search][C# search package] | [search samples][C# search sample] |
+| [Routing][C# routing readme] | [Azure.Maps.Routing][C# routing package] | [routing samples][C# routing sample] |
+| [Rendering][C# rendering readme]| [Azure.Maps.Rendering][C# rendering package]|[rendering sample][C# rendering sample] |
+| [Geolocation][C# geolocation readme]|[Azure.Maps.Geolocation][C# geolocation package]|[geolocation sample][C# geolocation sample] |
+
+For more information, see the [C# SDK Developers Guide](how-to-dev-guide-csharp-sdk.md).
+
+## Python SDK
+
+The Azure Maps Python SDK supports Python version 3.7 or later. For more details on future Python versions, see the [Azure SDK for Python version support policy][Python-version-support-policy].
+
+| Service Name  | PyPi package  | Samples  |
+||-|--|
+| [Search][py search readme] | [azure-maps-search][py search package] | [search samples][py search sample] |
+| [Routing][py routing readme] | [azure-maps-routing][py routing package] | [routing samples][py routing sample] |
+| [Rendering][py rendering readme]| [azure-maps-rendering][py rendering package]|[rendering sample][py rendering sample] |
+| [Geolocation][py geolocation readme]|[azure-maps-geolocation][py geolocation package]|[geolocation sample][py geolocation sample] |
+
+<!--For more information, see the [python SDK Developers Guide](how-to-dev-guide-py-sdk.md).-->
+
+## JavaScript/TypeScript
+
+Azure Maps JavaScript/TypeScript SDK supports LTS versions of [Node.js][Node.js] including versions in Active status and Maintenance status.
+
+| Service Name  | NPM package  | Samples  |
+|--|--|--|
+| [Search][js search readme] | [azure-maps-search][js search package] | [search samples][js search sample] |
+
+<!--For more information, see the [JavaScript/TypeScript SDK Developers Guide](how-to-dev-guide-js-sdk.md).-->
+
+## Java
+
+Azure Maps Java SDK supports [Java 8][Java 8] or above.
+
+| Service Name  | Maven package  | Samples  |
+|--|--|--|
+| [Search][java search readme] | [azure-maps-search][java search package] | [search samples][java search sample] |
+| [Routing][java routing readme] | [azure-maps-routing][java routing package] | [routing samples][java routing sample] |
+| [Rendering][java rendering readme]| [azure-maps-rendering][java rendering package]|[rendering sample][java rendering sample] |
+| [Geolocation][java geolocation readme]|[azure-maps-geolocation][java geolocation package]|[geolocation sample][java geolocation sample] |
+| [TimeZone][java timezone readme] | [azure-maps-timezone][java timezone package] | [TimeZone samples][java timezone sample] |
+| [Elevation][java elevation readme] | [azure-maps-elevation][java elevation package] | [Elevation samples][java elevation sample] |
+
+<!--For more information, see the [Java SDK Developers Guide](how-to-dev-guide-java-sdk.md).-->
+
+<!-- C# SDK Developers Guide -->
+[Rest API]: /rest/api/maps/
+[.NET Standard versions]: https://dotnet.microsoft.com/platform/dotnet-standard#versions
+[C# search package]: https://www.nuget.org/packages/Azure.Maps.Search
+[C# search readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Search/README.md
+[C# search sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Search/samples
+[C# routing package]: https://www.nuget.org/packages/Azure.Maps.Routing
+[C# routing readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Routing/README.md
+[C# routing sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Routing/samples
+[C# rendering package]: https://www.nuget.org/packages/Azure.Maps.Rendering
+[C# rendering readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Rendering/README.md
+[C# rendering sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Rendering/samples
+[C# geolocation package]: https://www.nuget.org/packages/Azure.Maps.Geolocation
+[C# geolocation readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Geolocation/README.md
+[C# geolocation sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Geolocation/samples
+
+<!-- Python SDK Developers Guide -->
+[Python-version-support-policy]: https://github.com/Azure/azure-sdk-for-python/wiki/Azure-SDKs-Python-version-support-policy
+[py search package]: https://pypi.org/project/azure-maps-search
+[py search readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-search/README.md
+[py search sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-search/samples
+[py routing package]: https://pypi.org/project/azure-maps-route
+[py routing readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-routing/README.md
+[py routing sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-routing/samples
+[py rendering package]: https://pypi.org/project/azure-maps-render
+[py rendering readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-rendering/README.md
+[py rendering sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-rendering/samples
+[py geolocation package]: https://pypi.org/project/azure-maps-geolocation
+[py geolocation readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-geolocation/README.md
+[py geolocation sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-geolocation/samples
+
+<!-- JavaScript/TypeScript SDK Developers Guide -->
+[Node.js]: https://nodejs.org/en/download/
+[js search package]: https://www.npmjs.com/package/@azure/maps-search
+[js search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search/README.md
+[js search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search/samples/v1-beta/javascript
+
+<!-- Java SDK Developers Guide -->
+[Java 8]: https://www.java.com/en/download/java8_update.jsp
+[java search package]: https://repo1.maven.org/maven2/com/azure/azure-maps-search
+[java search readme]: https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/maps/azure-maps-search/README.md
+[java search sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-search/src/samples/java/com/azure/maps/search/samples
+[java routing package]: https://repo1.maven.org/maven2/com/azure/azure-maps-route
+[java routing readme]: https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/maps/azure-maps-route/README.md
+[java routing sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-route/src/samples/java/com/azure/maps/route/samples
+[java rendering package]: https://repo1.maven.org/maven2/com/azure/azure-maps-render
+[java rendering readme]: https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/maps/azure-maps-render/README.md
+[java rendering sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-render/src/samples/java/com/azure/maps/render/samples
+[java geolocation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-geolocation
+[java geolocation readme]: https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/maps/azure-maps-geolocation/README.md
+[java geolocation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-geolocation/src/samples/java/com/azure/maps/geolocation/samples
+[java timezone package]: https://repo1.maven.org/maven2/com/azure/azure-maps-timezone
+[java timezone readme]: https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/maps/azure-maps-timezone/README.md
+[java timezone sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-timezone/src/samples/java/com/azure/maps/timezone/samples
+[java elevation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-elevation
+[java elevation readme]: https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/maps/azure-maps-elevation/README.md
+[java elevation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-elevation/src/samples/java/com/azure/maps/elevation/samples
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
3. **Verify that the agent is running**: 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR: ```Kusto
- Heartbeat | where Category == "Azure Monitor Agent" and 'Computer' == "<computer-name>" | take 10
+ Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
``` 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up. 3. If not, check if you see any errors in core agent logs located at `C:\Resources\Directory\AMADataStore\Configuration` on your machine
azure-monitor Alerts Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-logic-apps.md
+
+ Title: Customize alert notifications using Logic Apps
+description: Learn how to create a logic app to process Azure Monitor alerts.
++ Last updated : 09/07/2022+++
+# Customer intent: As an administrator I want to create a logic app that is triggered by an alert so that I can send emails or Teams messages when an alert is fired.
+++
+# Customize alert notifications using Logic Apps
+
+This article shows you how to create a Logic App and integrate it with an Azure Monitor Alert.
+
+[Azure Logic Apps](https://docs.microsoft.com/azure/logic-apps/logic-apps-overview) allows you to build and customize workflows for integration. Use Logic Apps to customize your alert notifications.
+
+- Customize the alerts email, using your own email subject and body format.
+- Customize the alert metadata by looking up tags for affected resources or fetching a log query search result.
+- Integrate with external services using existing connectors like Outlook, Microsoft Teams, Slack, and PagerDuty, or by configuring the Logic App for your own services.
+In this example, we'll create a Logic App that uses the [common alerts schema](./alerts-common-schema.md) to send details from the alert. The example uses the following steps:
+
+1. [Create a Logic App](#create-a-logic-app) for sending an email or a Teams post.
+1. [Create an alert action group](#create-an-action-group) that triggers the logic app.
+1. [Create a rule](#create-a-rule-using-your-action-group) that uses the action group.
+
+## Create a Logic App
+
+1. Create a new Logic App. Set the **Logic App name**, and select **Consumption** as the plan type.
+1. Select **Review + create**, then select **Create**.
+1. Select **Go to resource** when the deployment is complete.
+1. On the Logic Apps Designer page, select **When a HTTP request is received**.
+
+1. Paste the following JSON for the common alert schema into the **Request Body JSON Schema** field:
+ ```json
+ {
+ "type": "object",
+ "properties": {
+ "schemaId": {
+ "type": "string"
+ },
+ "data": {
+ "type": "object",
+ "properties": {
+ "essentials": {
+ "type": "object",
+ "properties": {
+ "alertId": {
+ "type": "string"
+ },
+ "alertRule": {
+ "type": "string"
+ },
+ "severity": {
+ "type": "string"
+ },
+ "signalType": {
+ "type": "string"
+ },
+ "monitorCondition": {
+ "type": "string"
+ },
+ "monitoringService": {
+ "type": "string"
+ },
+ "alertTargetIDs": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ }
+ },
+ "originAlertId": {
+ "type": "string"
+ },
+ "firedDateTime": {
+ "type": "string"
+ },
+ "resolvedDateTime": {
+ "type": "string"
+ },
+ "description": {
+ "type": "string"
+ },
+ "essentialsVersion": {
+ "type": "string"
+ },
+ "alertContextVersion": {
+ "type": "string"
+ }
+ }
+ },
+ "alertContext": {
+ "type": "object",
+ "properties": {}
+ }
+ }
+ }
+ }
+ }
+ ```
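+
+   For reference, a sample payload that conforms to this schema might look like the following sketch (the values are illustrative placeholders):
+
+   ```json
+   {
+     "schemaId": "azureMonitorCommonAlertSchema",
+     "data": {
+       "essentials": {
+         "alertId": "/subscriptions/<subscription-id>/providers/Microsoft.AlertsManagement/alerts/<alert-guid>",
+         "alertRule": "cpu-high",
+         "severity": "Sev3",
+         "signalType": "Metric",
+         "monitorCondition": "Fired",
+         "monitoringService": "Platform",
+         "alertTargetIDs": [
+           "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.compute/virtualmachines/<vm-name>"
+         ],
+         "originAlertId": "<origin-alert-id>",
+         "firedDateTime": "2022-10-27T15:45:00.000Z",
+         "description": "CPU usage exceeded the threshold.",
+         "essentialsVersion": "1.0",
+         "alertContextVersion": "1.0"
+       },
+       "alertContext": {}
+     }
+   }
+   ```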
+
+1. Select the **+** icon to insert a new step.
+
+1. Send an email or post a Teams message.
+
+## [Send an email](#tab/send-email)
+
+1. In the search field, search for *outlook*.
+1. Select **Office 365 Outlook**.
+ :::image type="content" source="./media/alerts-logic-apps/choose-operation-outlook.png" alt-text="A screenshot showing add action page of the logic apps designer with Office 365 Outlook selected.":::
+1. Select **Send an email (V2)** from the list of actions.
+1. Sign into Office 365 when prompted to create a connection.
+1. Create the email **Body** by entering static text and including content taken from the alert payload by choosing fields from the **Dynamic content** list.
+For example:
+ - Enter *An alert has monitoring condition:* then select **monitorCondition** from the **Dynamic content** list.
+ - Then enter *Date fired:* and select **firedDateTime** from the **Dynamic content** list.
+ - Enter *Affected resources:* and select **alertTargetIDs** from the **Dynamic content** list.
+
+1. In the **Subject** field, create the subject text by entering static text and including content taken from the alert payload by choosing fields from the **Dynamic content** list.
+For example:
+ - Enter *Alert:* and select **alertRule** from the **Dynamic content** list.
+ - Then enter *with severity:* and select **severity** from the **Dynamic content** list.
+ - Enter *has condition:* and select **monitorCondition** from the **Dynamic content** list.
+
+1. Enter the email address to send the alert to in the **To** field.
+1. Select **Save**.
+
+ :::image type="content" source="./media/alerts-logic-apps/configure-email.png" alt-text="A screenshot showing the parameters tab for the send email action.":::
+
+You've created a Logic App that will send an email to the specified address, with details from the alert that triggered it.
+
+The next step is to create an action group to trigger your Logic App.
+
+## [Post a Teams message](#tab/send-teams-message)
+
+1. In the search field, search for *Microsoft Teams*.
+
+1. Select **Microsoft Teams**
+ :::image type="content" source="./media/alerts-logic-apps/choose-operation-teams.png" alt-text="A screenshot showing add action page of the logic apps designer with Microsoft Teams selected.":::
+1. Select **Post a message in a chat or channel** from the list of actions.
+1. Sign into Teams when prompted to create a connection.
+1. Select *User* from the **Post as** dropdown.
+1. Select *Group chat* from the **Post in** dropdown.
+1. Select your group from the **Group chat** dropdown.
+1. Create the message text in the **Message** field by entering static text and including content taken from the alert payload by choosing fields from the **Dynamic content** list.
+ For example:
+ - Enter *Alert:* then select **alertRule** from the **Dynamic content** list.
+ - Enter *with severity:* and select **severity** from the **Dynamic content** list.
+ - Enter *was fired at:* and select **firedDateTime** from the **Dynamic content** list.
+ - Add more fields according to your requirements.
+1. Select **Save**
+ :::image type="content" source="./media/alerts-logic-apps/configure-teams-message.png" alt-text="A screenshot showing the parameters tab for the post a message in a chat or channel action.":::
+
+You've created a Logic App that will send a Teams message to the specified group, with details from the alert that triggered it.
+
+The next step is to create an action group to trigger your Logic App.
+++
+## Create an action group
+
+To trigger your Logic app, create an action group, then create an alert that uses that action group.
+
+1. Go to the Azure Monitor page and select **Alerts** from the sidebar.
+
+1. Select **Action groups**, then select **Create**.
+1. Select a **Subscription**, **Resource group** and **Region**.
+1. Enter an **Actions group name** and **Display name**.
+1. Select the **Actions** tab.
+1. In the **Actions** tab under **Action type**, select **Logic App**.
+1. In the **Logic App** section, select your logic app from the dropdown.
+1. Set **Enable common alert schema** to *Yes*. If you select *No*, the alert type will determine which alert schema is used. For more information about alert schemas, see [Context specific alert schemas](./alerts-non-common-schema-definitions.md).
+1. Select **OK**.
+1. Enter a name in the **Name** field.
+1. Select **Review + create**, then select **Create**.
+
+## Test your action group
+
+1. Select your action group.
+1. In the **Logic App** section, select **Test action group (preview)**.
+1. Select a **Sample alert type** from the dropdown.
+1. Select **Test**.
+
+
+The following email will be sent to the specified account:
+++
+## Create a rule using your action group
+
+1. [Create a rule](./alerts-create-new-alert-rule.md) for one of your resources.
+
+1. In the actions section of your rule, select **Select action groups**.
+1. Select your action group from the list.
+1. Select **Select**.
+1. Finish the creation of your rule.
+ :::image type="content" source="./media/alerts-logic-apps/select-action-groups.png" alt-text="A screenshot showing the actions tab of the create rules page and the select action groups blade.":::
+
+## Next steps
+
+* [Learn more about action groups](./action-groups.md).
+* [Learn more about the common alert schema](./alerts-common-schema.md).
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
description: Monitor ASP.NET Core web applications for availability, performance
ms.devlang: csharp Previously updated : 10/12/2021 Last updated : 10/27/2022 # Application Insights for ASP.NET Core applications
Run your application and make requests to it. Telemetry should now flow to Appli
### ILogger logs
-The default configuration collects `ILogger` `Warning` logs and more severe logs. Review the FAQ to [customize this configuration](../faq.yml).
+The default configuration collects `ILogger` `Warning` logs and more severe logs. Review [How do I customize ILogger logs collection?](#how-do-i-customize-ilogger-logs-collection) for more information.
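+
+For example, to also collect `Information`-level `ILogger` logs, you can raise the capture level for the Application Insights provider in `appsettings.json` (a minimal sketch using the standard ASP.NET Core logging configuration):
+
+```json
+{
+  "Logging": {
+    "ApplicationInsights": {
+      "LogLevel": {
+        "Default": "Information"
+      }
+    }
+  }
+}
+```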
### Dependencies
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Download the [applicationinsights-agent-3.4.2.jar](https://github.com/microsoft/
> > Starting from 3.3.0: >
-> - `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
+> - `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logging-level-as-a-custom-dimension)
> - Exception records are no longer recorded for failed dependencies, they are only recorded for failed requests. > > Starting from 3.2.0:
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Connection string is required. You can find your connection string in your Appli
You can also set the connection string by using the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`. It then takes precedence over the connection string specified in the JSON configuration.
+Or you can set the connection string by using the Java system property `applicationinsights.connection.string`. It also takes precedence over the connection string specified in the JSON configuration.
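+
+For example, a sketch of a JVM launch command (the file names and the connection string value are placeholders):
+
+```
+java -javaagent:applicationinsights-agent-3.4.2.jar \
+  -Dapplicationinsights.connection.string="InstrumentationKey=00000000-0000-0000-0000-000000000000" \
+  -jar your-app.jar
+```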
+ You can also set the connection string by specifying a file to load the connection string from. If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.2.jar` is located.
Sampling is also based on trace ID to help ensure consistent sampling decisions
### Rate-limited sampling
-Starting from 3.4.2, rate-limited sampling is available and is now the default.
+Starting from 3.4.0, rate-limited sampling is available and is now the default.
If no sampling has been configured, the default is now rate-limited sampling configured to capture at most (approximately) 5 requests per second, along with all the dependencies and logs on those requests.
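
For example, a sketch of lowering the limit to one request per second in `applicationinsights.json` (the `requestsPerSecond` setting name here is an assumption based on the rate-limited sampling configuration; verify it against the sampling section):

```json
{
  "sampling": {
    "requestsPerSecond": 1.0
  }
}
```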
Starting from version 3.2.0, if you want to set a custom dimension programmatica
## Connection string overrides (preview)
-This feature is in preview, starting from 3.4.2.
+This feature is in preview, starting from 3.4.0.
Connection string overrides allow you to override the [default connection string](#connection-string). For example, you can:
You can use these valid `level` values to specify in the `applicationinsights.js
> | project timestamp, message, itemType > ```
-### Log markers for Logback and Log4j 2 (preview)
-
-Log markers are disabled by default.
-
-You can enable the `Marker` property for Logback and Log4j 2:
+### Log markers (preview)
-```json
-{
- "preview": {
- "captureLogbackMarker": true
- }
-}
-```
+Starting from 3.4.2, you can capture the log markers for Logback and Log4j 2:
```json { "preview": {
+ "captureLogbackMarker": true,
"captureLog4jMarker": true } } ```
-This feature is in preview, starting from 3.4.2.
+### Additional log attributes for Logback (preview)
-You can enable code properties, such as `FileName`, `ClassName`, `MethodName`, and `LineNumber`, for Logback:
+Starting from 3.4.2, you can capture `FileName`, `ClassName`, `MethodName`, and `LineNumber`, for Logback:
```json {
You can enable code properties, such as `FileName`, `ClassName`, `MethodName`, a
> [!WARNING] >
-> This feature could add a performance overhead.
-
-This feature is in preview, starting from 3.4.2.
+> Capturing these additional log attributes can add a performance overhead.
-### LoggingLevel
+### Logging level as a custom dimension
Starting from version 3.3.0, `LoggingLevel` isn't captured by default as part of the Traces custom dimension because that data is already captured in the `SeverityLevel` field.
To disable auto-collection of Micrometer metrics and Spring Boot Actuator metric
Literal values in JDBC queries are masked by default to avoid accidentally capturing sensitive data.
-Starting from 3.4.2, this behavior can be disabled. For example:
+Starting from 3.4.0, this behavior can be disabled. For example:
```json {
Starting from 3.4.2, this behavior can be disabled. For example:
Literal values in Mongo queries are masked by default to avoid accidentally capturing sensitive data.
-Starting from 3.4.2, this behavior can be disabled. For example:
+Starting from 3.4.0, this behavior can be disabled. For example:
```json {
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
const appInsights = new ApplicationInsights({ config: { // Application Insights
> [!NOTE]
-> There are two distributed tracing modes/protocols: AI (Classic) and [W3C TraceContext](https://www.w3.org/TR/trace-context/) (New). In version 2.6.0 and later, they are _both_ enabled by default. For older versions, users need to [explicitly opt in to WC3 mode](../app/correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).
+> There are two distributed tracing modes/protocols: AI (Classic) and [W3C TraceContext](https://www.w3.org/TR/trace-context/) (New). In version 2.6.0 and later, they are _both_ enabled by default. For older versions, users need to [explicitly opt in to W3C mode](../app/correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).
### Route tracking
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
For example, a particular blob might have a name similar to:
insights-logs-networksecuritygrouprulecounter/resourceId=/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/y=2020/m=06/d=08/h=18/m=00/PT1H.json ```
-Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL, for example, h=12. During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00 because resource log events are broken into individual blobs per hour.
+Each PT1H.json blob contains a JSON object with events from log files that were received during the hour specified in the blob URL. During the present hour, events are appended to the PT1H.json file as they are received, regardless of when they were generated. The minute value in the URL, `m=00`, is always `00` because blobs are created on a per-hour basis.
Each event is stored in the PT1H.json file with the following format. This format uses a common top-level schema but is otherwise unique for each category, as described in [Activity log schema](activity-log-schema.md).
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md
scrape_configs:
- <job-y> ```
-Any other unsupported sections need to be removed from the config before applying as a configmap. Otherwise the custom configuration will fail validation and won't be applied.
+Any other unsupported sections need to be removed from the config before applying as a configmap. Otherwise the custom configuration will fail validation and won't be applied.
+
+Refer to the [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the Prometheus config.
> [!NOTE] > When custom scrape configuration fails to apply due to validation errors, default scrape configuration will continue to be used.
scrape_configs:
- action: labelmap regex: __meta_kubernetes_pod_label_(.+) ```-
+Refer to the [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the Prometheus config.
## Next steps
azure-monitor Prometheus Metrics Scrape Validate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-validate.md
Title: Create and validate custom configuration file for Prometheus metrics in Azure Monitor (preview)
+ Title: Create, validate and troubleshoot custom configuration file for Prometheus metrics in Azure Monitor (preview)
description: Describes how to create custom configuration file Prometheus metrics in Azure Monitor and use validation tool before applying to Kubernetes cluster.
In addition to the default scrape targets that Azure Monitor Prometheus agent scrapes by default, use the following steps to provide additional scrape config to the agent using a configmap. The Azure Monitor Prometheus agent doesn't understand or process operator [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) for scrape configuration, but instead uses the native Prometheus configuration as defined in [Prometheus configuration](https://aka.ms/azureprometheus-promioconfig-scrape).
+The two configmaps that can be used for custom target scraping are:
+- `ama-metrics-prometheus-config` - When a configmap with this name is created, scraping of custom targets is done by the replicaset.
+- `ama-metrics-prometheus-config-node` - When a configmap with this name is created, scraping of custom targets is done by each daemonset. See [Advanced Setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset) for more details, and the sketch after this list for creating the configmap.
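+
+For example, assuming your scrape config file is named `prometheus-config` (as described below), a sketch of creating the daemonset-level configmap mirrors the replicaset command shown later in this article:
+
+```azurecli
+kubectl create configmap ama-metrics-prometheus-config-node --from-file=prometheus-config -n kube-system
+```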
+ ## Create Prometheus configuration file Create a Prometheus scrape configuration file named `prometheus-config`. See the [configuration tips and examples](prometheus-metrics-scrape-configuration.md#prometheus-configuration-tips-and-examples) for more details on authoring scrape config for Prometheus. You can also refer to [Prometheus.io](https://aka.ms/azureprometheus-promio) scrape configuration [reference](https://aka.ms/azureprometheus-promioconfig-scrape). Your config file will list the scrape configs under the section `scrape_configs` and can optionally use the global section for setting the global `scrape_interval`, `scrape_timeout`, and `external_labels`.
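+
+For example, a minimal `prometheus-config` sketch (the job name and pod label selector are placeholders for your own workload):
+
+```yaml
+global:
+  scrape_interval: 30s
+scrape_configs:
+  - job_name: my-app
+    kubernetes_sd_configs:
+      - role: pod
+    relabel_configs:
+      # Keep only pods labeled app=my-app
+      - source_labels: [__meta_kubernetes_pod_label_app]
+        regex: my-app
+        action: keep
+```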
Your custom Prometheus configuration file is consumed as a field named `promethe
kubectl create configmap ama-metrics-prometheus-config --from-file=prometheus-config -n kube-system ```
-*Ensure that the Prometheus config file is named `prometheus-metrics` before running the following command since the file name is used as the configmap setting name.*
+*Ensure that the Prometheus config file is named `prometheus-config` before running the following command since the file name is used as the configmap setting name.*
This will create a configmap named `ama-metrics-prometheus-config` in `kube-system` namespace. The Azure Monitor metrics pod will then restart to apply the new config. To see if there any issues with the config validation, processing, or merging, you can look at the `ama-metrics` pods. A sample of the `ama-metrics-prometheus-config` configmap is [here](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-prometheus-config-configmap.yaml). -
+### Troubleshooting
+If you successfully created the configmap (`ama-metrics-prometheus-config` or `ama-metrics-prometheus-config-node`) in the **kube-system** namespace and still don't see the custom targets being scraped, check for errors in the **replicaset pod** logs (for the **ama-metrics-prometheus-config** configmap) or the **daemonset pod** logs (for the **ama-metrics-prometheus-config-node** configmap) using *kubectl logs*, and make sure there are no errors in the *Start Merging Default and Custom Prometheus Config* section with the prefix *prometheus-config-merger*.
## Next steps
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-managed-identity.md
This step isn't required if you're using an AKS identity since it will already h
```
+## Verify remote write is working correctly
+
+You can verify that Prometheus data is being sent into your Azure Monitor workspace in a couple of ways.
+
+1. By viewing your container log using kubectl commands:
+
+ ```azurecli
+ kubectl logs <Prometheus-Pod-Name> <Azure-Monitor-Side-Car-Container-Name>
+ # example: kubectl logs prometheus-prometheus-kube-prometheus-prometheus-0 prom-remotewrite
+ ```
+ Expected output: time="2022-10-19T22:11:58Z" level=info msg="Metric packets published in last 1 minute" avgBytesPerRequest=19809 avgRequestDuration=0.17153638698214294 failedPublishingToAll=0 successfullyPublishedToAll=112 successfullyPublishedToSome=0
+
+ You can confirm that the data is flowing via remote write if the above output has non-zero values for `avgBytesPerRequest` and `avgRequestDuration`.
+
+2. By performing PromQL queries on the data and verifying results
+ This can be done via Grafana. Refer to our documentation for [getting Grafana setup with Managed Prometheus](prometheus-grafana.md).
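+ For example, a simple query such as the following should return recent samples if remote write is flowing (`up` is a standard Prometheus metric reported for every scrape target):
+
+ ```
+ count by (job) (up)
+ ```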
+
+## Troubleshooting remote write setup
+
+1. If the data is not flowing
+You can run the following commands to view errors from the container that prevent the data from flowing.
+
+ ```azurecli
+ kubectl --namespace <Namespace> describe pod <Prometheus-Pod-Name>
+ ```
+These logs should indicate any errors in the remote write container.
+
+2. If the container is restarting constantly
+This is likely due to misconfiguration of the container. To view the configuration values set for the container, run the following command:
+ ```azurecli
+ kubectl get po <Prometheus-Pod-Name> -o json | jq -c '.spec.containers[] | select(.name | contains("<Azure-Monitor-Side-Car-Container-Name>"))'
+ ```
+Output:
+
+```json
+{"env":[{"name":"INGESTION_URL","value":"https://rwtest-eus2-qu4m.eastus2-1.metrics.ingest.monitor.azure.com/dataCollectionRules/dcr-90b2d5e5feac43f486311dff33c3c116/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview"},{"name":"LISTENING_PORT","value":"8081"},{"name":"IDENTITY_TYPE","value":"userAssigned"},{"name":"AZURE_CLIENT_ID","value":"fe9b242a-1cdb-4d30-86e4-14e432f326de"}],"image":"mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221012.2","imagePullPolicy":"Always","name":"prom-remotewrite","ports":[{"containerPort":8081,"name":"rw-port","protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-vbr9d","readOnly":true}]}
+```
+
+Verify the configuration values, especially `AZURE_CLIENT_ID` and `IDENTITY_TYPE`.
## Next steps
+- [Set up Grafana to use Managed Prometheus as a data source](prometheus-grafana.md).
- [Learn more about Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md).
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
The blob for a network security group might have a name similar to this example:
insights-logs-networksecuritygrouprulecounter/resourceId=/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUP/TESTNSG/y=2016/m=08/d=22/h=18/m=00/PT1H.json ```
-Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL, for example, h=12. During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00 because resource log events are broken into individual blobs per hour.
+Each PT1H.json blob contains a JSON object with events from log files that were received during the hour specified in the blob URL. During the present hour, events are appended to the PT1H.json file as they are received, regardless of when they were generated. The minute value in the URL, `m=00`, is always `00` because blobs are created on a per-hour basis.
Within the PT1H.json file, each event is stored in the following format. It uses a common top-level schema but is unique for each Azure service, as described in [Resource logs schema](./resource-logs-schema.md). > [!NOTE]
-> Logs are written to the blob relevant to the time that the log was generated, not the time that it was received. So, at the turn of the hour, both the previous hour and current hour blobs could be receiving new writes.
+> Logs are written to blobs based on the time that the log was received, regardless of the time it was generated. This means that a given blob can contain log data that is outside the hour specified in the blob's URL. Where a data source, like Application Insights, supports uploading stale telemetry, a blob can contain data from the previous 48 hours.
+> At the start of a new hour, it's possible that existing logs are still being written to the previous hour's blob while new logs are written to the new hour's blob.
```json {"time": "2016-07-01T00:00:37.2040000Z","systemId": "46cdbb41-cb9c-4f3d-a5b4-1d458d827ff1","category": "NetworkSecurityGroupRuleCounter","resourceId": "/SUBSCRIPTIONS/s1id1234-5679-0123-4567-890123456789/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/TESTNSG","operationName": "NetworkSecurityGroupCounters","properties": {"vnetResourceGuid": "{12345678-9012-3456-7890-123456789012}","subnetPrefix": "10.3.0.0/24","macAddress": "000123456789","ruleName": "/subscriptions/ s1id1234-5679-0123-4567-890123456789/resourceGroups/testresourcegroup/providers/Microsoft.Network/networkSecurityGroups/testnsg/securityRules/default-allow-rdp","direction": "In","type": "allow","matchedConnections": 1988}}
azure-monitor Resource Manager Vminsights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/resource-manager-vminsights.md
resource mmaExtension 'Microsoft.Compute/virtualMachines/extensions@2021-11-01'
typeHandlerVersion: MmaExtensionVersion autoUpgradeMinorVersion: true settings: {
- workspaceId: reference(WorkspaceResourceId, '@2021-12-01-preview').customerId
+ workspaceId: reference(WorkspaceResourceId, '2021-12-01-preview').customerId
azureResourceId: VmResourceId stopOnMultipleConnections: true } protectedSettings: {
- workspaceKey: listKeys(WorkspaceResourceId, '@2021-12-01-preview').primarySharedKey
+ workspaceKey: listKeys(WorkspaceResourceId, '2021-12-01-preview').primarySharedKey
} } }
resource mmaExtension 'Microsoft.Compute/virtualMachines/extensions@2021-11-01'
"typeHandlerVersion": "[variables('MmaExtensionVersion')]", "autoUpgradeMinorVersion": true, "settings": {
- "workspaceId": "[reference(parameters('WorkspaceResourceId'), '@2021-12-01-preview').customerId]",
+ "workspaceId": "[reference(parameters('WorkspaceResourceId'), '2021-12-01-preview').customerId]",
"azureResourceId": "[parameters('VmResourceId')]", "stopOnMultipleConnections": true }, "protectedSettings": {
- "workspaceKey": "[listKeys(parameters('WorkspaceResourceId'), '@2021-12-01-preview').primarySharedKey]"
+ "workspaceKey": "[listKeys(parameters('WorkspaceResourceId'), '2021-12-01-preview').primarySharedKey]"
} }, "dependsOn": [
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
You can create an Azure support request to increase the adjustable limits from t
- [Regional capacity quota for Azure NetApp Files](regional-capacity-quota.md) - [Request region access for Azure NetApp Files](request-region-access.md) - [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)--
azure-netapp-files Faq Application Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md
Previously updated : 06/09/2022 Last updated : 10/27/2022 # Application resilience FAQs for Azure NetApp Files
The scale-out architecture would be comprised of multiple IBM MQ multi-instance
>[!NOTE] > This section contains references to the terms *slave* and *master*, terms that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-If you're running the Apache ActiveMQ, it is recommended to deploy [ActiveMQ High Availability with Pluggable Storage Lockers](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq).
+If you're running the Apache ActiveMQ, it's recommended to deploy [ActiveMQ High Availability with Pluggable Storage Lockers](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq).
ActiveMQ high availability (HA) models ensure that a broker instance is always online and able to process message traffic. The two most common ActiveMQ HA models involve sharing a filesystem over a network. The purpose is to provide either LevelDB or KahaDB to the active and passive broker instances. These HA models require that an OS-level lock be obtained and maintained on a file in the LevelDB or KahaDB directories, called "lock". There are some problems with this ActiveMQ HA model. They can lead to a "no-master" situation, where the "slave" isn't aware that it can lock the file. They can also lead to a "master-master" configuration that results in index or journal corruption and ultimately message loss. Most of these problems stem from factors outside of ActiveMQ's control. For instance, a poorly optimized NFS client can cause locking data to become stale under load, leading to "no-master" downtime during failover.
Because most problems with this HA solution stem from inaccurate OS-level file l
The general industry recommendation is to [not run your KahaDB shared storage on CIFS/SMB](https://www.openlogic.com/blog/activemq-community-deprecates-leveldb-what-you-need-know). If you're having trouble maintaining accurate lock state, check out the JDBC Pluggable Storage Locker, which can provide a more reliable locking mechanism. For support or consultancy on ActiveMQ HA architectures and deployments, you should [contact OpenLogic by Perforce](https://www.openlogic.com/contact-us).
+## IΓÇÖm running Boomi on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events?
+
+If you're running Boomi, it's recommended you follow the [Boomi Best Practices for Run Time High Availability and Disaster Recovery](https://community.boomi.com/s/article/bestpracticesforruntimehighavailabilityanddisasterrecovery).
+
+Boomi recommends using Boomi Molecule to implement high availability for Boomi Atom. The [Boomi Molecule system requirements](https://help.boomi.com/bundle/integration/page/r-atm-Molecule_system_requirements.html) state that either NFS with NFS locking enabled (NLM support) or SMB file shares can be used. In the context of Azure NetApp Files, NFSv4.1 volumes have NLM support.
+
+Boomi recommends using SMB file shares with Windows VMs; for NFS, Boomi recommends Linux VMs.
+
+>[!NOTE]
+>[Azure NetApp Files Continuous Availability Shares](enable-continuous-availability-existing-smb.md) are not supported with Boomi.
+ ## Next steps - [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)
The general industry recommendation is to [not run your KahaDB shared storage on
- [Data migration and protection FAQs](faq-data-migration-protection.md) - [Azure NetApp Files backup FAQs](faq-backup.md) - [Integration FAQs](faq-integration.md)-
+- [Mount NFS volumes for Linux or Windows VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md)
+- [Mount SMB volumes for Windows VMs](mount-volumes-vms-smb.md)
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
This section lists the most common service limits you might encounter as you use
[!INCLUDE [sentinel-service-limits](../../../includes/sentinel-limits-machine-learning.md)]
+## Multi-workspace limits
++ ### Notebook limits [!INCLUDE [sentinel-service-limits](../../../includes/sentinel-limits-notebooks.md)]
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
This section describes languages supported by Azure Video Indexer API.
| Urdu | `ur-PK` | | | | Γ£ö | | | Vietnamese | `vi-VN` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | |
-\*By default, languages marked by * are supported by LID or/and MLID auto-detection. When [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with API, you can specify to use other supported languages (see the table above) to auto-detect one or more languages by language identification (LID) or multi-language identification (MLID) by using `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by language identification (LID) or multi-language identification (MLID).
+\*By default, languages marked with * (in the table above) are supported by language identification (LID) and/or multi-language identification (MLID) auto-detection. When [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with the API, you can specify other supported languages, from the table above, by using the `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID.
> [!NOTE]
-> To change the default languages, set the `customLanguages` parameter. Setting the parameter, will replace the default languages supported by language identification (LID) and by multi-language identification (MLID).
+> To change the default languages to auto-detect one or more languages by LID or MLID, set the `customLanguages` parameter.
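+
+For example, a hypothetical upload request passing the parameter as a comma-separated list might look like the following sketch (the URL shape and the other parameters follow the upload API linked above; all values here are placeholders):
+
+```
+POST https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos?name=my-video&accessToken={accessToken}&customLanguages=en-US,de-DE,fr-FR
+```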
## Language support in frontend experiences
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Azure Backup provides several ways to restore a VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
>[!Tip]
chaos-studio Chaos Studio Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-permissions-security.md
All user interactions with Chaos Studio happen through Azure Resource Manager. I
* Service-direct AKS Chaos Mesh faults - Service-direct faults for Azure Kubernetes Service that use Chaos Mesh require that the AKS cluster has a publicly exposed Kubernetes API server. [You can learn how to limit AKS network access to a set of IP ranges here.](../aks/api-server-authorized-ip-ranges.md)
* Agent-based faults - Agent-based faults require agent access to the Chaos Studio agent service. A virtual machine or virtual machine scale set must have outbound access to the agent service endpoint for the agent to connect successfully. The agent service endpoint is `https://acs-prod-<region>.chaosagent.trafficmanager.net`, replacing `<region>` with the region where your virtual machine is deployed, for example, `https://acs-prod-eastus.chaosagent.trafficmanager.net` for a virtual machine in East US.
-Azure Chaos Studio doesn't support Private Link.
+Azure Chaos Studio doesn't support Private Link for agent-based scenarios.
## Data encryption
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
+
+ Title: Integration of VNet Injection with Chaos Studio
+description: Chaos Studio supports VNet Injections
+++ Last updated : 10/26/2022+++
+# VNet Injection in Chaos Studio
+VNet is the fundamental building block for your private network in Azure. VNet enables many Azure resources to securely communicate with each other, the internet, and on-premises networks. VNet is like a traditional network you would operate in your own data center. However, VNet also has the benefits of Azure infrastructure, scale, availability, and isolation.
+
+## How VNet Injection works in Chaos Studio
+VNet injection allows the Chaos resource provider to inject containerized workloads into your VNet, which means that resources without public internet access can be reached via a private IP address on the VNet. Below are the steps you can follow for VNet injection:
+1. Register the Microsoft.ContainerInstance resource provider with your subscription (if applicable).
+2. Re-register the Microsoft.Chaos resource provider with your subscription.
+3. Create a subnet named ChaosStudioSubnet in the VNet you want to inject into.
+4. Set the `properties.subnetId` property when you create or update the Target resource. The value should be the resource ID of the subnet created in step 3.
+5. Start the experiment.
+
+## Limitations
+* At present, VNet injection is only possible in subscriptions/regions where Azure Container Instances and Azure Relay are available.
+* When you create a Target resource that you would like to enable with VNet injection, you will need Microsoft.Network/virtualNetworks/subnets/write access to the virtual network. For example, if the AKS cluster is deployed to VNet_A, then you must have permissions to create subnets in VNet_A in order to enable VNet injection for the AKS cluster. You will have to specify a subnet (in VNet_A) that the container will be deployed to.
+
+Request body when creating a Target resource with VNet injection enabled:
+
+![Target resource with VNet Injection](images/chaos-studio-rp-vnet-injection.png)
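+
+For illustration, a minimal sketch of the relevant part of such a request body might look like the following (the `properties.subnetId` property follows step 4 above; the resource IDs are placeholders):
+
+```json
+{
+  "properties": {
+    "subnetId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/ChaosStudioSubnet"
+  }
+}
+```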
+
+## Next steps
+Now that you understand how VNet Injection can be achieved for Chaos Studio, you're ready to:
+- [Create and run your first experiment](chaos-studio-tutorial-service-direct-portal.md)
+- [Create and run your first Azure Kubernetes Service experiment](chaos-studio-tutorial-aks-portal.md)
cognitive-services Overview Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/overview-multivariate.md
- Title: What is Multivariate Anomaly Detection?-
-description: Overview of new Anomaly Detector preview multivariate APIs.
------ Previously updated : 07/06/2022--
-keywords: anomaly detection, machine learning, algorithms
--
-# What is Multivariate Anomaly Detector? (Public Preview)
-
-The **multivariate anomaly detection** APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically counted as key factors. This new capability helps you to proactively protect your complex systems such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
-
-![Multiple time series line graphs for variables of: rotation, optical filter, pressure, bearing with anomalies highlighted in orange](./media/multivariate-graph.png)
-
-Imagine 20 sensors from an auto engine generating 20 different signals like rotation, fuel pressure, bearing, etc. The readings of those signals individually may not tell you much about system level issues, but together they can represent the health of the engine. When the interaction of those signals deviates outside the usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The underlying AI models are trained and customized using your data such that it understands the unique needs of your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for complex enterprise software, or business intelligence tools.
--
-## Sample Notebook
-
-To learn how to call the Multivariate Anomaly Detector API, try this [Notebook](https://github.com/Azure-Samples/AnomalyDetector/blob/master/ipython-notebook/API%20Sample/Multivariate%20API%20Demo%20Notebook.ipynb). To run the Notebook, you only need a valid Anomaly Detector API **subscription key** and an **API endpoint**. In the notebook, add your valid Anomaly Detector API subscription key to the `subscription_key` variable, and change the `endpoint` variable to your endpoint.
-
-Multivariate Anomaly Detector includes three main steps, **data preparation**, **training** and **inference**.
-
-### Data preparation
-For data preparation, you should prepare two parts of data, **training data** and **inference data**. As for training data, you should upload your data to Blob Storage and generate an SAS url which will be used in training API. As for inference data, you could either use the same data format as training data, or send the data into API header, which will be formatted as JSON. This depends on what API you choose to use in the inference process.
-
-### Training
-When training a model, you should call an asynchronous API on your training data, which means you won't get the model status immediately after calling this API, you should request another API to get the model status.
-
-### Inference
-In the inference process, you have two options to choose, an asynchronous API or a synchronous API. If you would like to do a batch validation, you are suggested to use the asynchronous API. If you want to do streaming in a short granularity and get the inference result immediately after each API request, you are suggested to use the synchronous API.
-* As for the asynchronous API, you won't get the inference result immediately like training process, which means you should use another API to request the result after some time. Data preparation is similar with the training process.
-* As for synchronized API, you could get the inference result immediately after you request, and you should send your data in a JSON format into the API body.
-
-## Region support
-
-The preview of Multivariate Anomaly Detector is currently available in 26 Azure regions.
-
-| Geography | Regions |
-| - | - |
-| Africa | South Africa North |
-| Asia Pacific | Southeast Asia, East Asia|
-| Australia | Australia East |
-| Brazil |Brazil South|
-|Canada | Canada Central |
-| Europe | North Europe, West Europe, Switzerland North |
-|France |France Central |
-|Germany| Germany West Central |
-|India| Jio India West, Central India |
-|Japan | Japan East |
-|Korea | Korea Central |
-|Norway | Norway East|
-|United Arab Emirates| UAE North |
-| United Kingdom | UK South |
-| United States | East US, East US 2, South Central US, West US, West US 2, West US 3, Central US, North Central US|
----
-## Algorithms
-
-See the following technical documents for information about the algorithms used:
-
-* Blog: [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679)
-* Paper: [Multivariate time series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040)
--
-> [!VIDEO https://www.youtube.com/embed/FwuI02edclQ]
--
-## Join the Anomaly Detector community
-
-Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates!
-
-## Next steps
-- [Tutorial](./tutorials/learn-multivariate-anomaly-detection.md): This article is an end-to-end tutorial on how to use the multivariate APIs.
-- [Quickstarts](./quickstarts/client-libraries-multivariate.md).
-- [Best Practices](./concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
cognitive-services Overview Univariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/overview-univariate.md
- Title: What is the Univariate Anomaly Detector?
-description: Use the Anomaly Detector univariate API's algorithms to apply anomaly detection on your time series data.
- Previously updated : 10/18/2022
-keywords: anomaly detection, machine learning, algorithms
---
-# What is Univariate Anomaly Detector?
-
-The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
-
-![Detect pattern changes in service requests](./media/anomaly_detection2.png)
-
-Using Anomaly Detector doesn't require any prior experience in machine learning, and the REST API enables you to easily integrate the service into your applications and processes.
-
-## Features
-
-With the Univariate Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real-time.
-
-|Feature |Description |
-|||
-|Anomaly detection in real-time. | Detect anomalies in your streaming data by using previously seen data points to determine if your latest one is an anomaly. This operation generates a model using the data points you send, and determines if the target point is an anomaly. By calling the API with each new data point you generate, you can monitor your data as it's created. |
-|Detect anomalies throughout your data set as a batch. | Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed with the same model (see the sketch after this table). |
-|Detect change points throughout your data set as a batch. | Use your time series to detect any trend change points that exist in your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
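For example, the batch detection operation above maps to a single REST call. This sketch uses the v1.0 univariate route; the key, endpoint, and series values are placeholders:

```python
import requests

# Placeholders; copy the real key and endpoint from your resource.
subscription_key = "<your-anomaly-detector-key>"
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
detect_url = f"{endpoint}/anomalydetector/v1.0/timeseries/entire/detect"

body = {
    "granularity": "daily",
    "series": [
        {"timestamp": "2022-01-01T00:00:00Z", "value": 32.0},
        {"timestamp": "2022-01-02T00:00:00Z", "value": 31.5},
        {"timestamp": "2022-01-03T00:00:00Z", "value": 95.0},  # likely anomaly
        # ... the API requires at least 12 points in total
    ],
}
headers = {"Ocp-Apim-Subscription-Key": subscription_key}
result = requests.post(detect_url, headers=headers, json=body).json()
print(result["isAnomaly"])  # one boolean per input point
```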
-
-## Demo
-
-Check out this [interactive demo](https://aka.ms/adDemo) to understand how Anomaly Detector works.
-To run the demo, you need to create an Anomaly Detector resource and get the API key and endpoint.
-
-## Notebook
-
-To learn how to call the Anomaly Detector API, try this [Notebook](https://aka.ms/adNotebook). This Jupyter Notebook shows you how to send an API request and visualize the result.
-
-To run the Notebook, you should get a valid Anomaly Detector API **subscription key** and an **API endpoint**. In the notebook, add your valid Anomaly Detector API subscription key to the `subscription_key` variable, and change the `endpoint` variable to your endpoint.
-
-<!-- ## Workflow
-
-The Anomaly Detector API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON.
---
-After signing up:
-
-1. Take your time series data and convert it into a valid JSON format. Use [best practices](concepts/anomaly-detection-best-practices.md) when preparing your data to get the best results.
-1. Send a request to the Anomaly Detector API with your data.
-1. Process the API response by parsing the returned JSON message.
-
-## Algorithms
-
-* See the following technical blogs for information about the algorithms used:
- * [Introducing Azure Anomaly Detector API](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Introducing-Azure-Anomaly-Detector-API/ba-p/490162)
- * [Overview of SR-CNN algorithm in Azure Anomaly Detector](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Overview-of-SR-CNN-algorithm-in-Azure-Anomaly-Detector/ba-p/982798)
-
-You can read the paper [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) (accepted by KDD 2019) to learn more about the SR-CNN algorithms developed by Microsoft.
-
-> [!VIDEO https://www.youtube.com/embed/ERTaAnwCarM]
-
-## Service availability and redundancy
-
-### Is the Anomaly Detector service zone resilient?
-
-Yes. The Anomaly Detector service is zone-resilient by default.
-
-### How do I configure the Anomaly Detector service to be zone-resilient?
-
-No customer configuration is necessary to enable zone-resiliency. Zone-resiliency for Anomaly Detector resources is available by default and managed by the service itself.
-
-## Deploy on premises using Docker containers
-
-[Use Univariate Anomaly Detector containers](anomaly-detector-container-howto.md) to deploy API features on-premises. Docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
-
-## Join the Anomaly Detector community
-
-Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates!
-
-## Next steps
-
-* [Quickstart: Detect anomalies in your time series data using the Univariate Anomaly Detector](quickstarts/client-libraries.md)
-* [What's multivariate anomaly detection?](./overview-multivariate.md)
-* The Anomaly Detector API [online demo](https://github.com/Azure-Samples/AnomalyDetector/tree/master/ipython-notebook)
-* The Anomaly Detector [REST API reference](https://aka.ms/anomaly-detector-rest-api-ref)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/overview.md
Previously updated : 10/17/2022 Last updated : 10/27/2022 keywords: anomaly detection, machine learning, algorithms

# What is Anomaly Detector?
-Anomaly Detector is an AI service with a set of APIs, which enables you to monitor and detect anomalies in your time series data with little ML knowledge, either batch validation or real-time inference.
+Anomaly Detector is an AI service with a set of APIs that enable you to monitor and detect anomalies in your time series data with little machine learning (ML) knowledge, through either batch validation or real-time inference.
This documentation contains the following types of articles:
-* The [quickstarts](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./how-to/identify-anomalies.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles](./concepts/anomaly-detection-best-practices.md) provide in-depth explanations of the service's functionality and features.
-* The [tutorials](./tutorials/batch-anomaly-detection-powerbi.md) are longer guides that show you how to use this service as a component in broader business solutions.
+* [**Quickstarts**](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* [**Interactive demo**](https://aka.ms/adDemo) can help you understand how Anomaly Detector works through easy operations.
+* [**How-to guides**](./how-to/identify-anomalies.md) contain instructions for using the service in more specific or customized ways.
+* [**Tutorials**](./tutorials/batch-anomaly-detection-powerbi.md) are longer guides that show you how to use this service as a component in broader business solutions.
+* [**Code samples**](https://github.com/Azure-Samples/AnomalyDetector/tree/master/ipython-notebook) demonstrate how to use Anomaly Detector.
+* [**Conceptual articles**](./concepts/anomaly-detection-best-practices.md) provide in-depth explanations of the service's functionality and features.
-## Features
+## Anomaly Detector capabilities
-With the Anomaly Detector, you can either detect anomalies in one variable using Univariate Anomaly Detector, or detect anomalies in multiple variables with Multivariate Anomaly Detector.
+With Anomaly Detector, you can either detect anomalies in one variable using Univariate Anomaly Detector, or detect anomalies in multiple variables with Multivariate Anomaly Detector.
|Feature |Description |
|||
-|Univariate Anomaly Detector | Detect anomalies in one variable, like revenue, cost, etc. The model is selected automatically based on your data pattern. |
-|Multivariate Anomaly Detector| Detect anomalies in multiple variables with correlations, which are usually gathered from equipment or other complex system. The underlying model used is a Graph Attention Network (GAT).|
+|Univariate Anomaly Detection | Detect anomalies in one variable, like revenue, cost, etc. The model is selected automatically based on your data pattern. |
+|Multivariate Anomaly Detection| Detect anomalies in multiple variables with correlations, which are usually gathered from equipment or other complex systems. The underlying model used is a Graph Attention Network.|
-### When to use **Univariate Anomaly Detector** v.s. **Multivariate Anomaly Detector**
+### Univariate Anomaly Detection
-If your goal is to detect anomalies out of a normal pattern on each individual time series purely based on their own historical data, use univariate anomaly detection APIs. For example, you want to detect daily revenue anomalies based on revenue data itself, or you want to detect a CPU spike purely based on CPU data.
+The Univariate Anomaly Detection API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
-If your goal is to detect system level anomalies from a group of time series data, use multivariate anomaly detection APIs. Particularly, when any individual time series won't tell you much, and you have to look at all signals (a group of time series) holistically to determine a system level issue. For example, you have an expensive physical asset like aircraft, equipment on an oil rig, or a satellite. Each of these assets has tens or hundreds of different types of sensors. You would have to look at all those time series signals from those sensors to decide whether there is a system level issue.
+![Line graph of detect pattern changes in service requests.](./media/anomaly_detection2.png)
-## Demo
+Using the Anomaly Detector doesn't require any prior experience in machine learning, and the REST API enables you to easily integrate the service into your applications and processes.
-Check out this [interactive demo](https://aka.ms/adDemo) to understand how Anomaly Detector works.
-To run the demo, you need to create an Anomaly Detector resource and get the API key and endpoint.
+With the Univariate Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real-time.
-## Notebook
+|Feature |Description |
+|||
+| Streaming detection| Detect anomalies in your streaming data by using previously seen data points to determine if your latest one is an anomaly. This operation generates a model using the data points you send, and determines if the target point is an anomaly. By calling the API with each new data point you generate, you can monitor your data as it's created. |
+| Batch detection | Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
+| Change points detection | Use your time series to detect any trend change points that exist in your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
-To learn how to call the Anomaly Detector API, try this [Notebook](https://aka.ms/adNotebook). This Jupyter Notebook shows you how to send an API request and visualize the result.
+### Multivariate Anomaly Detection
-To run the Notebook, you should get a valid Anomaly Detector API **subscription key** and an **API endpoint**. In the notebook, add your valid Anomaly Detector API subscription key to the `subscription_key` variable, and change the `endpoint` variable to your endpoint.
+The **Multivariate Anomaly Detection** APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically accounted for as key factors. This new capability helps you to proactively protect your complex systems such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
-## Service availability and redundancy
+![Line graph for multiple variables including: rotation, optical filter, pressure, bearing with anomalies highlighted in orange.](./media/multivariate-graph.png)
-### Is the Anomaly Detector service zone resilient?
+Imagine 20 sensors from an auto engine generating 20 different signals like rotation, fuel pressure, bearing, etc. The readings of those signals individually may not tell you much about system-level issues, but together they can represent the health of the engine. When the interaction of those signals deviates outside the usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The underlying AI models are trained and customized using your data so that they understand the unique needs of your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for complex enterprise software, or business intelligence tools.
-Yes. The Anomaly Detector service is zone-resilient by default.
+## Join the Anomaly Detector community
-### How do I configure the Anomaly Detector service to be zone-resilient?
+Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates!
-No customer configuration is necessary to enable zone-resiliency. Zone-resiliency for Anomaly Detector resources is available by default and managed by the service itself.
+## Algorithms
+* Blogs and papers:
+ * [Introducing Azure Anomaly Detector API](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Introducing-Azure-Anomaly-Detector-API/ba-p/490162)
+ * [Overview of SR-CNN algorithm in Azure Anomaly Detector](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Overview-of-SR-CNN-algorithm-in-Azure-Anomaly-Detector/ba-p/982798)
+ * [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679)
+ * [Multivariate time series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040)
+ * [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) (accepted by KDD 2019)
+* Videos:
+ > [!VIDEO https://www.youtube.com/embed/ERTaAnwCarM]
+
+ > [!VIDEO https://www.youtube.com/embed/FwuI02edclQ]
+
## Next steps
-* [What is Univariate Anomaly Detector?](./overview-univariate.md)
-* [What is Multivariate Anomaly Detector?](./overview-multivariate.md)
-* Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates!
+* [Quickstart: Detect anomalies in your time series data using Univariate Anomaly Detection](quickstarts/client-libraries.md)
+* [Quickstart: Detect anomalies in your time series data using Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md)
+* The Anomaly Detector [REST API reference](https://aka.ms/ad-api)
cognitive-services Client Libraries Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/quickstarts/client-libraries-multivariate.md
zone_pivot_groups: anomaly-detector-quickstart-multivariate
Previously updated : 04/21/2021 Last updated : 10/27/2022 keywords: anomaly detection, algorithms ms.devlang: csharp, java, javascript, python
-# Quickstart: Use the Anomaly Detector multivariate client library
+# Quickstart: Use the Multivariate Anomaly Detector client library
::: zone pivot="programming-language-csharp"
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/quickstarts/client-libraries.md
zone_pivot_groups: anomaly-detector-quickstart
Previously updated : 09/22/2020 Last updated : 10/27/2022 keywords: anomaly detection, algorithms ms.devlang: csharp, javascript, python
recommendations: false
-# Quickstart: Use the Anomaly Detector univariate client library
+# Quickstart: Use the Univariate Anomaly Detector client library
::: zone pivot="programming-language-csharp"
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/regions.md
+
+ Title: Regions - Anomaly Detector service
+
+description: A list of available regions and endpoints for the Anomaly Detector service, including Univariate Anomaly Detection and Multivariate Anomaly Detection.
+ Last updated : 11/1/2022
+# Anomaly Detector service supported regions
+
+The Anomaly Detector service provides anomaly detection technology on your time series data. The service is available in multiple regions with unique endpoints for the Anomaly Detector SDK and REST APIs.
+
+Keep in mind the following points:
+
+* If your application uses one of the Anomaly Detector service REST APIs, the region is part of the endpoint URI you use when making requests.
+* Keys created for a region are valid only in that region. If you attempt to use them with other regions, you will get authentication errors (see the sketch below).
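For example, the region identifier from the tables below plugs directly into the endpoint host name. This is a sketch with placeholder values; resources created with a custom subdomain use that subdomain instead:

```python
# A key created in West US 2 only works against a West US 2 endpoint; using it
# with another region's endpoint returns an authentication error.
region = "westus2"
endpoint = f"https://{region}.api.cognitive.microsoft.com"
detect_url = f"{endpoint}/anomalydetector/v1.0/timeseries/entire/detect"
```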
+
+> [!NOTE]
+> The Anomaly Detector service doesn't store or process customer data outside the region the customer deploys the service instance in.
+
+## Univariate Anomaly Detection
+
+The following regions are supported for Univariate Anomaly Detection. The geographies are listed in alphabetical order.
+
+| Geography | Region | Region identifier |
+| -- | -- | -- |
+| Africa | South Africa North | `southafricanorth` |
+| Asia Pacific | East Asia | `eastasia` |
+| Asia Pacific | Southeast Asia | `southeastasia` |
+| Asia Pacific | Australia East | `australiaeast` |
+| Asia Pacific | Central India | `centralindia` |
+| Asia Pacific | Japan East | `japaneast` |
+| Asia Pacific | Japan West | `japanwest` |
+| Asia Pacific | Jio India West | `jioindiawest` |
+| Asia Pacific | Korea Central | `koreacentral` |
+| Canada | Canada Central | `canadacentral` |
+| China | China East 2 | `chinaeast2` |
+| China | China North 2 | `chinanorth2` |
+| Europe | North Europe | `northeurope` |
+| Europe | West Europe | `westeurope` |
+| Europe | France Central | `francecentral` |
+| Europe | Germany West Central | `germanywestcentral` |
+| Europe | Norway East | `norwayeast` |
+| Europe | Switzerland North | `switzerlandnorth` |
+| Europe | UK South | `uksouth` |
+| Middle East | UAE North | `uaenorth` |
+| Qatar | Qatar Central | `qatarcentral` |
+| South America | Brazil South | `brazilsouth` |
+| Sweden | Sweden Central | `swedencentral` |
+| US | Central US | `centralus` |
+| US | East US | `eastus` |
+| US | East US 2 | `eastus2` |
+| US | North Central US | `northcentralus` |
+| US | South Central US | `southcentralus` |
+| US | West Central US | `westcentralus` |
+| US | West US | `westus`|
+| US | West US 2 | `westus2` |
+| US | West US 3 | `westus3` |
+
+## Multivariate Anomaly Detection
+
+The following regions are supported for Multivariate Anomaly Detection. The geographies are listed in alphabetical order.
+
+| Geography | Region | Region identifier |
+| -- | -- | -- |
+| Africa | South Africa North | `southafricanorth` |
+| Asia Pacific | East Asia | `eastasia` |
+| Asia Pacific | Southeast Asia | `southeastasia` |
+| Asia Pacific | Australia East | `australiaeast` |
+| Asia Pacific | Central India | `centralindia` |
+| Asia Pacific | Japan East | `japaneast` |
+| Asia Pacific | Jio India West | `jioindiawest` |
+| Asia Pacific | Korea Central | `koreacentral` |
+| Canada | Canada Central | `canadacentral` |
+| Europe | North Europe | `northeurope` |
+| Europe | West Europe | `westeurope` |
+| Europe | France Central | `francecentral` |
+| Europe | Germany West Central | `germanywestcentral` |
+| Europe | Norway East | `norwayeast` |
+| Europe | Switzerland North | `switzerlandnorth` |
+| Europe | UK South | `uksouth` |
+| Middle East | UAE North | `uaenorth` |
+| South America | Brazil South | `brazilsouth` |
+| US | Central US | `centralus` |
+| US | East US | `eastus` |
+| US | East US 2 | `eastus2` |
+| US | North Central US | `northcentralus` |
+| US | South Central US | `southcentralus` |
+| US | West Central US | `westcentralus` |
+| US | West US | `westus`|
+| US | West US 2 | `westus2` |
+| US | West US 3 | `westus3` |
+
+## Next steps
+
+* [Quickstart: Detect anomalies in your time series data using Univariate Anomaly Detection](quickstarts/client-libraries.md)
+* [Quickstart: Detect anomalies in your time series data using Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md)
+* The Anomaly Detector [REST API reference](https://aka.ms/ad-api)
cognitive-services Multivariate Anomaly Detection Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md
If you have the need to run training code and inference code in separate noteboo
### About Anomaly Detector
-* Learn about [what is Multivariate Anomaly Detector](../overview-multivariate.md).
+* Learn about [what is Multivariate Anomaly Detector](../overview.md).
* SynapseML documentation with [Multivariate Anomaly Detector feature](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
* Recipe: [Cognitive Services - Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Multivariate%20Anomaly%20Detection/).
* Need support? [Join the Anomaly Detector Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2Ci-wb6-iNDoBoNxrnEk9VURjNXUU1VREpOT0U1UEdURkc0OVRLSkZBNC4u).
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
Title: What is Optical character recognition?
+ Title: What is Optical Character Recognition (OCR)?
description: The optical character recognition (OCR) service extracts print and handwritten text from images.
-# What is Optical character recognition?
+# What is Optical Character Recognition (OCR)?
-Optical character recognition (OCR) allows you to extract printed or handwritten text from images, such as posters, street signs and product labels, as well as from documents like articles, reports, forms, and invoices.
+OCR, or Optical Character Recognition, is also referred to as text recognition or text extraction. Machine-learning based OCR techniques allow you to extract printed or handwritten text from images, such as posters, street signs, and product labels, as well as from documents like articles, reports, forms, and invoices. The text is typically extracted as words, text lines, and paragraphs or text blocks, enabling access to a digital version of the scanned text. This eliminates or significantly reduces the need for manual data entry.
-## How is OCR related to intelligent document processing (IDP)?
+## How is OCR related to Intelligent Document Processing (IDP)?
-OCR typically refers to the foundational technology focusing on extracting text while delegating the extraction of structure, relationships, key-values, entities, and other document-centric insights to intelligent document processing service like [Form Recognizer](../../applied-ai-services/form-recognizer/overview.md). Form Recognizer includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Form Recognizer Read OCR](../../applied-ai-services/form-recognizer/concept-read.md).
+Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Form Recognizer](../../applied-ai-services/form-recognizer/overview.md). Form Recognizer includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Form Recognizer Read OCR](../../applied-ai-services/form-recognizer/concept-read.md).
## Read OCR engine

Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). This allows them to extract printed and handwritten text, including mixed languages and writing styles. **Read** is available as a cloud service and an on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
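As a rough illustration of the asynchronous Read pattern, here is a sketch using the Python Computer Vision SDK; the endpoint, key, and image URL are placeholders, and the exact SDK surface may differ between versions:

```python
import time
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

# Placeholders for your own Computer Vision resource and input image.
client = ComputerVisionClient(
    "https://<your-resource-name>.cognitiveservices.azure.com",
    CognitiveServicesCredentials("<your-computer-vision-key>"),
)

# Read is asynchronous: submit the image, then poll the returned operation.
read_op = client.read(url="https://example.com/sample-receipt.jpg", raw=True)
operation_id = read_op.headers["Operation-Location"].split("/")[-1]

while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running, OperationStatusCodes.not_started):
        break
    time.sleep(1)

# Print the extracted text lines, page by page.
if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)
```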
cognitive-services Custom Neural Voice Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice-lite.md
+
+ Title: Custom Neural Voice Lite - Speech service
+
+description: Use Custom Neural Voice Lite to demo and evaluate Custom Neural Voice before investing in professional recordings to create a higher-quality voice.
+ Last updated : 10/27/2022
+# Custom Neural Voice Lite (preview)
+
+Speech Studio provides two Custom Neural Voice (CNV) project types: CNV Lite and CNV Pro.
+
+- Custom Neural Voice (CNV) Pro allows you to upload your training data collected through professional recording studios and create a higher-quality voice that is nearly indistinguishable from its human samples. CNV Pro access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
+- Custom Neural Voice (CNV) Lite is a project type in public preview. You can demo and evaluate Custom Neural Voice before investing in professional recordings to create a higher-quality voice. No application is required. Microsoft restricts and selects the recording and testing samples for use with CNV Lite. You must apply for full access to CNV Pro in order to deploy and use the CNV Lite model for business purposes.
+
+With a CNV Lite project, you record your voice online by reading 20-50 pre-defined scripts provided by Microsoft. After you've recorded at least 20 samples, you can start to train a model. Once the model is trained successfully, you can review the model and check out 20 output samples produced with another set of pre-defined scripts.
+
+See the [supported languages](language-support.md?tabs=stt-tts) for Custom Neural Voice.
+
+## Compare project types
+
+The following table summarizes key differences between the CNV Lite and CNV Pro project types.
+
+|**Items**|**Lite (Preview)**| **Pro**|
+||||
+|Target scenarios |Demonstration or evaluation |Professional scenarios like brand and character voices for chat bots, or audio content reading.|
+|Training data |Record online using Speech Studio |Bring your own data. Recording in a professional studio is recommended. |
+|Scripts for recording |Provided in Speech Studio |Use your own scripts that match the use case scenario. Microsoft provides [example scripts](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) for reference. |
+|Required data size |20-50 utterances |300-2000 utterances|
+|Training time |Less than one compute hour| Approximately 20-40 compute hours |
+|Voice quality |Moderate quality|High quality |
+|Availability |Anyone can record samples online and train a model for demo and evaluation purpose. Full access to Custom Neural Voice is required if you want to deploy the CNV Lite model for business use. |Data upload isn't restricted, but you can only train and deploy a CNV Pro model after access is approved. CNV Pro access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).|
+|Pricing |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |
+
+## Create a Custom Neural Voice Lite project
+
+To create a Custom Neural Voice Lite project, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select the subscription and Speech resource to work with.
+
+ > [!IMPORTANT]
+ > Custom Neural Voice training is currently only available in some regions. See footnotes in the [regions](regions.md#speech-service) table for more information.
+
+1. Select **Custom Voice** > **Create a project**.
+1. Select **Custom Neural Voice Lite** > **Next**.
+
+ > [!NOTE]
+ > To create a Custom Neural Voice Pro project, see [Create a project for Custom Neural Voice](how-to-custom-voice.md).
+
+1. Follow the instructions provided by the wizard to create your project.
+1. Select the new project by name or select **Go to project**. You'll see these menu items in the left panel: **Record and build**, **Review model**, and **Deploy model**.
+ :::image type="content" source="media/custom-voice/lite/lite-project-get-started.png" alt-text="Screenshot with an overview of the CNV Lite record, train, test, and deploy workflow.":::
+
+The CNV Lite project expires after 90 days unless the [verbal statement](#submit-verbal-statement) recorded by the voice talent is submitted.
+
+## Record and build a CNV Lite model
+
+Record at least 20 voice samples (up to 50) with provided scripts online. Voice samples recorded here will be used to create a synthetic version of your voice.
+
+Here are some tips to help you record your voice samples:
+- Use a good microphone. Increase the clarity of your samples by using a high-quality microphone. Speak about 8 inches away from the microphone to avoid mouth noises.
+- Avoid background noise. Record in a quiet room without background noise or echoing.
+- Relax and speak naturally. Allow yourself to express emotions as you read the sentences.
+- Record in one take. To keep a consistent energy level, record all sentences in one session.
+- Pronounce each word correctly, and speak clearly.
+
+To record and build a CNV Lite model, follow these steps:
+
+1. Select **Custom Voice** > Your project name > **Record and build**.
+1. Select **Get started**.
+1. Read the Voice talent terms of use carefully. Select the checkbox to acknowledge the terms of use.
+1. Select **Accept**.
+1. Press the microphone icon to start the noise check. This noise check will take only a few seconds, and you won't need to speak during it.
+1. If noise was detected, you can select **Check again** to repeat the noise check. If no noise was detected, you can select **Done** to proceed to the next step.
+ :::image type="content" source="media/custom-voice/lite/cnv-record-noise-check.png" alt-text="Screenshot of the noise check results when noise was detected.":::
+1. Review the recording tips and select **Got it**. For the best results, go to a quiet area without background noise before recording your voice samples.
+1. Press the microphone icon to start recording.
+ :::image type="content" source="media/custom-voice/lite/cnv-record-sample.png" alt-text="Screenshot of the record sample dashboard.":::
+1. Press the stop icon to stop recording.
+1. Review quality metrics. After recording each sample, check its quality metric before continuing to the next one.
+1. Record more samples. Although you can create a model with just 20 samples, it's recommended that you record up to 50 to get better quality.
+1. Select **Train model** to start the training process.
+
+The training process takes approximately one compute hour. You can check the progress of the training process in the **Review model** page.
+
+## Review model
+
+To review the CNV Lite model and listen to your own synthetic voice, follow these steps:
+
+1. Select **Custom Voice** > Your project name > **Review model**. Here you can review the voice model name, model language, sample data size, and training progress. The voice name is composed of the word "Neural" appended to your project name.
+1. Select the voice model name to review the model details and listen to the sample text-to-speech results.
+1. Select the play icon to hear your voice speak each script.
+ :::image type="content" source="media/custom-voice/lite/lite-review-model.png" alt-text="Screenshot of the review sample output dashboard.":::
+
+## Submit verbal statement
+
+A verbal statement recorded by the voice talent is required before you can [deploy the model](#deploy-model) for your business use.
+
+To submit the voice talent verbal statement, follow these steps:
+
+1. Select **Custom Voice** > Your project name > **Deploy model** > **Manage your voice talent**.
+ :::image type="content" source="media/custom-voice/lite/lite-voice-talent-consent.png" alt-text="Screenshot of the record voice talent consent dashboard.":::
+1. Select the model.
+1. Enter the voice talent name and company name.
+1. Read and record the statement. Select the microphone icon to start recording. Select the stop icon to stop recording.
+1. Select **Submit** to submit the statement.
+1. Check the processing status in the script table at the bottom of the dashboard. Once the status is **Succeeded**, you can [deploy the model](#deploy-model).
+
+## Deploy model
+
+To deploy your voice model and use it in your applications, you must get the full access to Custom Neural Voice. Request access on the [intake form](https://aka.ms/customneural). Within approximately 10 business days, you'll receive an email with the approval status. A [verbal statement](#submit-verbal-statement) recorded by the voice talent is also required before you can deploy the model for your business use.
+
+To deploy a CNV Lite model, follow these steps:
+
+1. Select **Custom Voice** > Your project name > **Deploy model** > **Deploy model**.
+1. Select a voice model name and then select **Next**.
+1. Enter a name and description for your endpoint and then select **Next**.
+1. Select the checkbox to agree to the terms of use and then select **Next**.
+1. Select **Deploy** to deploy the model.
+
+From here, you can use the CNV Lite voice model similarly as you would use a CNV Pro voice model. For example, you can [suspend or resume](how-to-deploy-and-use-endpoint.md) an endpoint after it's created, to limit spend and conserve resources that aren't in use. You can also access the voice in the [Audio Content Creation](how-to-audio-content-creation.md) tool in the [Speech Studio](https://aka.ms/speechstudio/audiocontentcreation).
+
+## Next steps
+
+* [Create a CNV Pro project](how-to-custom-voice.md)
+* [Try the text-to-speech quickstart](get-started-text-to-speech.md)
+* [Learn more about speech synthesis](how-to-speech-synthesis.md)
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Previously updated : 08/01/2022 Last updated : 10/27/2022

# What is Custom Neural Voice?
-Custom Neural Voice is a text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data. If you're looking for ready-to-use options, check out our [text-to-speech](text-to-speech.md) service.
-
-Based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=stt-tts) for Custom Neural Voice.
+Custom Neural Voice (CNV) is a text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data.
> [!IMPORTANT]
-> Custom Neural Voice access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
-
-## The basics of Custom Neural Voice
-
-Custom Neural Voice consists of three major components: the text analyzer, the neural acoustic
-model, and the neural vocoder. To generate natural synthetic speech from text, text is first input into the text analyzer, which provides output in the form of phoneme sequence. A *phoneme* is a basic unit of sound that distinguishes one word from another in a particular language. A sequence of phonemes defines the pronunciations of the words provided in the text.
-
-Next, the phoneme sequence goes into the neural acoustic model to predict acoustic features that define speech signals. Acoustic features include the timbre, the speaking style, speed, intonations, and stress patterns. Finally, the neural vocoder converts the acoustic features into audible waves, so that synthetic speech is generated.
-
-![Flowchart that shows the components of Custom Neural Voice.](./media/custom-voice/cnv-intro.png)
-
-Neural text-to-speech voice models are trained by using deep neural networks based on
-the recording samples of human voices. For more information, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911). To learn more about how a neural vocoder is trained, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
-
-You can adapt the neural text-to-speech engine to fit your needs. To create a custom neural voice, use [Speech Studio](https://aka.ms/speechstudio/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Custom Neural Voice can use text provided by the user to convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [web portal](https://speech.microsoft.com/audiocontentcreation).
-
-## Custom Neural Voice project types
-
-Speech Studio provides two Custom Neural Voice (CNV) project types: CNV Lite and CNV Pro.
+> Custom Neural Voice access is [limited](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
-The following table summarizes key differences between the CNV Lite and CNV Pro project types.
+Out of the box, [text-to-speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=stt-tts). The prebuilt neural voices work very well in most text-to-speech scenarios.
-|**Items**|**Lite (Preview)**| **Pro**|
-||||
-|Target scenarios |Demonstration or evaluation |Professional scenarios like brand and character voices for chat bots, or audio content reading.|
-|Training data |Record online using Speech Studio |Bring your own data. Recording in a professional studio is recommended. |
-|Scripts for recording |Provided in Speech Studio |Use your own scripts that match the use case scenario. Microsoft provides [example scripts](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) for reference. |
-|Required data size |20-50 utterances |300-2000 utterances|
-|Training time |Less than 1 compute hour| Approximately 20-40 compute hours |
-|Voice quality |Moderate quality|High quality |
-|Availability |Anyone can record samples online and train a model for demo and evaluation purpose. Full access to Custom Neural Voice is required if you want to deploy the CNV Lite model for business use. |Data upload is not restricted, but you can only train and deploy a CNV Pro model after access is approved. CNV Pro access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).|
-|Pricing |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |
+Custom Neural Voice is based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model. You can create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=stt-tts) for Custom Neural Voice.
-### Custom Neural Voice Lite (preview)
+## How does it work?
-Custom Neural Voice (CNV) Lite is a new project type in public preview. You can demo and evaluate Custom Neural Voice before investing in professional recordings to create a higher-quality voice.
+To create a custom neural voice, use [Speech Studio](https://aka.ms/speechstudio/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint.
-With a CNV Lite project, you record your voice online by reading 20-50 pre-defined scripts provided by Microsoft. After you've recorded at least 20 samples, you can start to train a model. Once the model is trained successfully, you can review the model and check out 20 output samples produced with another set of pre-defined scripts.
+> [!TIP]
+> Try [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
-Full access to Custom Neural Voice is required if you want to deploy a CNV Lite model and use it beyond reading the pre-defined scripts. A verbal statement recorded by the voice talent is also required before you can deploy the model for your business use.
+Creating a great custom neural voice requires careful quality control in each step, from voice design and data preparation, to the deployment of the voice model to your system.
-### Custom Neural Voice Pro
+Before you get started in Speech Studio, here are some considerations:
-Custom Neural Voice (CNV) Pro allows you to upload your training data collected through professional recording studios and create a higher-quality voice that is nearly indistinguishable from its human samples. Training a voice in a CNV Pro project is restricted to those who are approved.
+- [Design a persona](record-custom-voice-samples.md#choose-your-voice-talent) of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.
+- [Select the recording script](record-custom-voice-samples.md#script-selection-criteria) to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
-Review these CNV Pro articles to learn more and get started.
+Here's an overview of the steps to create a Custom Neural Voice in Speech Studio:
-* To prepare and upload your audio data, see [Prepare training data](how-to-custom-voice-prepare-data.md).
-* To train your model, see [Train your voice model](how-to-custom-voice-create-voice.md).
-* To deploy your model and use it in your apps, see [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md).
-* To learn how to prepare the script and record your voice samples, see [How to record voice samples](record-custom-voice-samples.md).
+1. [Create a project](how-to-custom-voice.md) to contain your data, voice models, tests, and endpoints. Each project is specific to a country and language.
+1. [Set up voice talent](how-to-custom-voice.md). Before you can train a neural voice, you must submit a recording of the voice talent's consent statement. The voice talent statement is a recording of the voice talent reading a statement that they consent to the usage of their speech data to train a custom voice model.
+1. [Prepare training data](how-to-custom-voice-prepare-data.md) in the right [format](how-to-custom-voice-training-data.md). It's a good idea to capture the audio recordings in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required.
+1. [Train your voice model](how-to-custom-voice-create-voice.md). Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
+1. [Test your voice](how-to-custom-voice-create-voice.md#test-your-voice-model). Prepare test scripts for your voice model that cover the different use cases for your apps. It's a good idea to use scripts within and outside the training dataset, so you can test the quality more broadly for different content.
+1. [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md) in your apps.
-## Terms and definitions
+You can tune, adjust, and use your custom voice just as you would a prebuilt neural voice. Convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [Speech Studio](https://speech.microsoft.com/audiocontentcreation).
-| **Term** | **Definition** |
-|||
-| Voice model | A text-to-speech model that can mimic the unique vocal characteristics of a target speaker. A *voice model* is also known as a *voice font* or *synthetic voice*. A voice model is a set of parameters in binary format that isn't human readable and doesn't contain audio recordings. It can't be reverse engineered to derive or construct the audio of a human voice. |
-| Voice talent | Individuals or target speakers whose voices are recorded and used to create voice models. These voice models are intended to sound like the voice talent's voice.|
-| Standard text-to-speech | The standard, or "traditional," method of text-to-speech. This method breaks down spoken language into phonetic snippets so that they can be remixed and matched by using classical programming or statistical methods.|
-| Neural text-to-speech | This method synthesizes speech by using deep neural networks. These networks have "learned" the way phonetics are combined in natural human speech, rather than using procedural programming or statistical methods. In addition to the recordings of a target voice talent, neural text-to-speech uses a source library or base model that is built with voice recordings from many different speakers. |
-| Training data | A Custom Neural Voice training dataset that includes the audio recordings of the voice talent, and the associated text transcriptions.|
-| Persona | A persona describes who you want this voice to be. A good persona design will inform all voice creation. This might include choosing an available voice model already created, or starting from scratch by casting and recording a new voice talent.|
-| Script | A script is a text file that contains the utterances to be spoken by your voice talent. (The term *utterances* encompasses both full sentences and shorter phrases.)|
+The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the text-to-speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
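For example, here is a minimal sketch of such an adjustment with the Python Speech SDK; the key, region, deployment ID, and voice name are placeholders for your own custom voice:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders; the endpoint_id points at your custom voice deployment.
speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="<your-region>")
speech_config.endpoint_id = "<your-custom-voice-deployment-id>"

# SSML lets you adjust rate and pitch when calling your voice model.
ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="YourCustomNeuralVoiceName">
    <prosody rate="-10%" pitch="+5%">
      This sentence is rendered a little slower and slightly higher in pitch.
    </prosody>
  </voice>
</speak>
"""

# By default the synthesized audio plays through the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_ssml_async(ssml).get()
```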
-## The process for creating a professional custom neural voice
+## Components sequence
-Creating a great custom neural voice requires careful quality control in each step, from voice design and data preparation, to the deployment of the voice model to your system. The following sections discuss some key steps you'll take when you're creating a custom neural voice for your organization.
-
-### Persona design
-
-First, [design a persona](record-custom-voice-samples.md#choose-your-voice-talent) of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.
-
-### Script selection
-
-Carefully [select the recording script](record-custom-voice-samples.md#script-selection-criteria) to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
-
-### Preparing training data
-
-It's a good idea to capture the audio recordings in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required.
-
-After the recordings are ready, [prepare the training data](how-to-custom-voice-prepare-data.md) in the right format.
-
-### Training
-
-After you've prepared the training data, go to [Speech Studio](https://aka.ms/speechstudio/customvoice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
+Custom Neural Voice consists of three major components: the text analyzer, the neural acoustic
+model, and the neural vocoder. To generate natural synthetic speech from text, text is first input into the text analyzer, which provides output in the form of phoneme sequence. A *phoneme* is a basic unit of sound that distinguishes one word from another in a particular language. A sequence of phonemes defines the pronunciations of the words provided in the text.
-### Testing
+Next, the phoneme sequence goes into the neural acoustic model to predict acoustic features that define speech signals. Acoustic features include the timbre, the speaking style, speed, intonations, and stress patterns. Finally, the neural vocoder converts the acoustic features into audible waves, so that synthetic speech is generated.
-Prepare test scripts for your voice model that cover the different use cases for your apps. It's a good idea to use scripts within and outside the training dataset, so you can test the quality more broadly for different content.
+![Flowchart that shows the components of Custom Neural Voice.](./media/custom-voice/cnv-intro.png)
-### Tuning and adjustment
+Neural text-to-speech voice models are trained by using deep neural networks based on
+the recording samples of human voices. For more information, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911). To learn more about how a neural vocoder is trained, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
-The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech.
+## Migrate to Custom Neural Voice
-SSML is the markup language used to communicate with the text-to-speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
+If you're using the old version of Custom Voice (which is scheduled to be retired in February 2024), see [How to migrate to Custom Neural Voice](how-to-migrate-to-custom-neural-voice.md).
## Responsible use of AI
To learn how to use Custom Neural Voice responsibly, check the following article
## Next steps
-> [!div class="nextstepaction"]
-> [Create a Project](how-to-custom-voice.md)
+* [Create a project](how-to-custom-voice.md)
+* [Prepare training data](how-to-custom-voice-prepare-data.md)
+* [Train model](how-to-custom-voice-create-voice.md)
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
# Speech synthesis with the Audio Content Creation tool
-You can use the [Audio Content Creation](https://aka.ms/audiocontentcreation) tool in Speech Studio for text-to-speech synthesis without writing any code. You can use the output audio as-is, or as a starting point for further customization.
+You can use the [Audio Content Creation](https://speech.microsoft.com/portal/audiocontentcreation) tool in Speech Studio for text-to-speech synthesis without writing any code. You can use the output audio as-is, or as a starting point for further customization.
Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can efficiently fine-tune text-to-speech voices and design customized audio experiences.
Users now visit or refresh the [Audio Content Creation](https://aka.ms/audiocont
If they can't find the available Speech resource, they can check to ensure that they're in the right directory. To do so, they select the account profile at the upper right and then select **Switch** next to **Current directory**. If there's more than one directory available, it means they have access to multiple directories. They can switch to different directories and go to **Settings** to see whether the right Speech resource is available.
-Users who are in the same Speech resource will see each other's work in Audio Content Creation studio. If you want each individual user to have a unique and private workplace in Audio Content Creation, [create a new Speech resource](#step-2-create-a-speech-resource) for each user and give each user the unique access to the Speech resource.
+Users who are in the same Speech resource will see each other's work in the Audio Content Creation tool. If you want each individual user to have a unique and private workplace in Audio Content Creation, [create a new Speech resource](#step-2-create-a-speech-resource) for each user and give each user the unique access to the Speech resource.
### Remove users from a Speech resource
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
To upload your own datasets in Speech Studio, follow these steps:
1. Enter the dataset name and description, and then select **Next**.
1. Review your settings, and then select **Save and close**.
-After your dataset is uploaded, go to the **Train custom models** page to [train a custom model](how-to-custom-speech-train-model.md)
+After your dataset is uploaded, go to the **Train custom models** page to [train a custom model](how-to-custom-speech-train-model.md).
::: zone-end
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Title: Create a custom voice - Speech service
+ Title: Train your custom voice model - Speech service
-description: When you're ready to upload your data, go to the Custom Voice portal. Create or select a Custom Voice project. The project must share the right language, locale, and gender properties as the data you intend to use for your voice training.
+description: Learn how to train a custom neural voice through the Speech Studio portal.
- Previously updated : 08/01/2022
+ Last updated : 10/27/2022
# Train your voice model
-In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned about the different data types you can use to train a custom neural voice, and the different format requirements. After you've prepared your data and the voice talent verbal statement, you can start to upload them to [Speech Studio](https://aka.ms/custom-voice-portal). In this article, you learn how to train a custom neural voice through the Speech Studio portal.
+In this article, you learn how to train a custom neural voice through the Speech Studio portal.
-> [!NOTE]
-> See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects. This article focuses on the creation of a professional Custom Neural Voice using the Pro project.
-
-## Set up voice talent
-
-A *voice talent* is an individual or target speaker whose voices are recorded and used to create neural voice models. Before you create a voice, define your voice persona and select a right voice talent. For details on recording voice samples, see [the tutorial](record-custom-voice-samples.md).
-
-To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent, consenting to the usage of their speech data to train a custom voice model. When you prepare your recording script, make sure you include the statement sentence. You can find the statement in multiple languages on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording.
-
-Upload this audio file to the Speech Studio as shown in the following screenshot. You create a voice talent profile, which is used to verify against your training data when you create a voice model. For more information, see [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
--
-The following steps assume that you've prepared the voice talent verbal consent files. Go to [Speech Studio](https://aka.ms/custom-voice-portal) to select a Custom Neural Voice project, and then follow these steps to create a voice talent profile.
-
-1. Go to **Text-to-Speech** > **Custom Voice** > **select a project**, and select **Set up voice talent**.
-
-1. Select **Add voice talent**.
-
-1. Next, to define voice characteristics, select **Target scenario**. Then describe your **Voice characteristics**. The scenarios you provide must be consistent with what you've applied for in the application form.
-
-1. Go to **Upload voice talent statement**, and follow the instruction to upload the voice talent statement you've prepared beforehand. Make sure the verbal statement is recorded in the same settings as your training data, including the recording environment and speaking style.
-
-1. Go to **Review and create**, review the settings, and select **Submit**.
+> [!IMPORTANT]
+> Custom Neural Voice training is currently only available in some regions. After your voice model is trained in a supported region, you can [copy](#copy-your-voice-model-to-another-project) it to a Speech resource in another region as needed. See footnotes in the [regions](regions.md#speech-service) table for more information.
-## Upload your data
-
-When you're ready to upload your data, go to the **Prepare training data** tab to add your first training set and upload data. A *training set* is a set of audio utterances and their mapping scripts used for training a voice model. You can use a training set to organize your training data. The service checks data readiness per each training set. You can import multiple data to a training set.
-
-You can do the following to create and review your training data:
-
-1. Select **Prepare training data** > **Add training set**.
-1. Enter **Name** and **Description**, and then select **Create** to add a new training set.
-
- When the training set is successfully created, you can start to upload your data.
-
-1. Select **Upload data** > **Choose data type** > **Upload data**. Then select **Specify the target training set**.
-1. Enter the name and description for your data, review the settings, and select **Submit**.
+Training duration varies depending on how much data you're training. It takes about 40 compute hours on average to train a custom neural voice. Standard subscription (S0) users can train four voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
> [!NOTE]
->- Duplicate audio names are removed from the training. Make sure the data you select don't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicates, they're rejected.
-
-All data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Speech service. Go to [Prepare training data](how-to-custom-voice-prepare-data.md), and confirm that your data is correctly formatted.
-
-> [!NOTE]
-> - Standard subscription (S0) users can upload five data files simultaneously. If you reach the limit, wait until at least one of your data files finishes importing. Then try again.
-> - The maximum number of data files allowed to be imported per subscription is 500 .zip files for standard subscription (S0) users. Please see out [Speech service quotas and limits](speech-services-quotas-and-limits.md#custom-neural-voice) for more details.
-
-Data files are automatically validated when you select **Submit**. Data validation includes series of checks on the audio files to verify their file format, size, and sampling rate. If there are any errors, fix them and submit again.
-
-After you upload the data, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your data. The pronunciation score ranges from 0-100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
-
-A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 35+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice.
-
-Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your data.
-
-### Typical data issues
-
-On **Data details**, you can check the data details of the training set. If there are any typical issues with the data, follow the instructions in the message that appears, to fix them before training.
-
-The issues are divided into three types. Refer to the following tables to check the respective types of errors.
-
-**Auto-rejected**
-
-Data with these errors won't be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can resubmit the corrected data for training.
-
-| Category | Name | Description |
-| | -- | |
-| Script | Invalid separator| You must separate the utterance ID and the script content with a Tab character.|
-| Script | Invalid script ID| The script line ID must be numeric.|
-| Script | Duplicated script|Each line of the script content must be unique. The line is duplicated with {}.|
-| Script | Script too long| The script must be less than 1,000 characters.|
-| Script | No matching audio| The ID of each utterance (each line of the script file) must match the audio ID.|
-| Script | No valid script| No valid script is found in this dataset. Fix the script lines that appear in the detailed issue list.|
-| Audio | No matching script| No audio files match the script ID. The name of the .wav files must match with the IDs in the script file.|
-| Audio | Invalid audio format| The audio format of the .wav files is invalid. Check the .wav file format by using an audio tool like [SoX](http://sox.sourceforge.net/).|
-| Audio | Low sampling rate| The sampling rate of the .wav files can't be lower than 16 KHz.|
-| Audio | Too long audio| Audio duration is longer than 30 seconds. Split the long audio into multiple files. It's a good idea to make utterances shorter than 15 seconds.|
-| Audio | No valid audio| No valid audio is found in this dataset. Check your audio data and upload again.|
-| Mismatch | Low scored utterance| Sentence-level pronunciation score is lower than 70. Review the script and the audio content to make sure they match.|
-
-**Auto-fixed**
-
-The following errors are fixed automatically, but you should review and confirm the fixes are made correctly.
-
-| Category | Name | Description |
-| | -- | |
-| Mismatch |Silence auto fixed |The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. |
-| Mismatch |Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it.|
-
-**Manual check required**
-
-Unresolved errors listed in the next table affect the quality of training, but data with these errors won't be excluded during training. For higher-quality training, it's a good idea to fix these errors manually.
-
-| Category | Name | Description |
-| | -- | |
-| Script | Non-normalized text|This script contains digits. Expand them to normalized words, and match with the audio. For example, normalize *123* to *one hundred and twenty-three*.|
-| Script | Non-normalized text|This script contains symbols. Normalize the symbols to match the audio. For example, normalize *50%* to *fifty percent*.|
-| Script | Not enough question utterances| At least 10 percent of the total utterances should be question sentences. This helps the voice model properly express a questioning tone.|
-| Script | Not enough exclamation utterances| At least 10 percent of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone.|
-| Script | No valid end punctuation| Add one of the following at the end of the line: full stop (half-width '.' or full-width '。'), exclamation point (half-width '!' or full-width '!' ), or question mark ( half-width '?' or full-width '?').|
-| Audio| Low sampling rate for neural voice | It's recommended that the sampling rate of your .wav files should be 24 KHz or higher for creating neural voices. If it's lower, it will be automatically raised to 24 KHz.|
-| Volume |Overall volume too low|Volume shouldn't be lower than -18 dB (10 percent of max volume). Control the volume average level within proper range during the sample recording or data preparation.|
-| Volume | Volume overflow| Overflowing volume is detected at {}s. Adjust the recording equipment to avoid the volume overflow at its peak value.|
-| Volume | Start silence issue | The first 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the first 100 ms at the start silent.|
-| Volume| End silence issue| The last 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the last 100 ms at the end silent.|
-| Mismatch | Low scored words|Review the script and the audio content to make sure they match, and control the noise floor level. Reduce the length of long silence, or split the audio into multiple utterances if it's too long.|
-| Mismatch | Start silence issue |Extra audio was heard before the first word. Review the script and the audio content to make sure they match, control the noise floor level, and make the first 100 ms silent.|
-| Mismatch | End silence issue| Extra audio was heard after the last word. Review the script and the audio content to make sure they match, control the noise floor level, and make the last 100 ms silent.|
-| Mismatch | Low signal-noise ratio | Audio SNR level is lower than 20 dB. At least 35 dB is recommended.|
-| Mismatch | No score available |Failed to recognize speech content in this audio. Check the audio and the script content to make sure the audio is valid, and matches the script.|
-
-### Fix data issues online
-
-You can fix the utterances with issues individually on **Data details** page.
-
-1. On the **Data details** page, select individual utterances you want to edit, then click **Edit**.
-
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset.png" alt-text="Screenshot of selecting edit button on the Data details page.":::
-
-1. Edit window will be displayed.
-
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset-editscript.png" alt-text="Screenshot of displaying Edit transcript and recording file window.":::
-
-1. Update transcript or recording file according to issue description on the edit window.
-
- You can edit transcript in the text box, then click **Done**
-
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset-scriptedit-done.png" alt-text="Screenshot of selecting Done button on the Edit transcript and recording file window.":::
+> Although the total number of hours required per [training method](#choose-a-training-method) will vary, the same unit price applies to each. For more information, see the [Custom Neural training pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
- If you need to update recording file, select **Update recording file**, then upload the fixed recording file (.wav).
-
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset-upload-recording.png" alt-text="Screenshot that shows how to upload recording file on the Edit transcript and recording file window.":::
+## Choose a training method
-1. After the data in a training set are updated, you need to check the data quality by clicking **Analyze data** before using this training set for training.
+After you validate your data files, you can use them to build your Custom Neural Voice model. When you create a custom neural voice, you can choose to train it with one of the following methods:
- You can't select this training set for training model before the analysis is complete.
+- [Neural](?tabs=neural#train-your-custom-neural-voice-model): To create a voice in the same language as your training data, select the **Neural** method.
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset-analyze.png" alt-text="Screenshot of selecting Analyze data on Data details page.":::
+- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) (Preview): Create a secondary language for your voice model to speak a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US`. The language of the training data and the target language must both be one of the [languages that are supported](language-support.md?tabs=stt-tts) for cross lingual training. You don't need to prepare training data in the target language, but your test script must be in the target language.
- You can also delete utterances with issues by selecting them and clicking **Delete**.
+- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model) (Preview): Create a custom neural voice that speaks in multiple styles/emotions, without adding new training data. Multi-style voices are particularly useful for video game characters, conversational chatbots, audiobook and content readers, and more. To create a multi-style voice, you just need to prepare a set of general training data (at least 300 utterances), and select one or more of the preset target speaking styles. You can also create up to 10 custom styles by providing style samples as additional training data for the same voice.
## Train your Custom Neural Voice model
-After you validate your data files, you can use them to build your Custom Neural Voice model.
+To create a custom neural voice in Speech Studio, follow these steps for one of the following [methods](#choose-a-training-method):
+
+# [Neural](#tab/neural)
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
+1. Select **Neural** as the [training method](#choose-a-training-method) for your model and then select **Next**. To use a different training method, see [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) or [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model).
+ :::image type="content" source="media/custom-voice/cnv-train-neural.png" alt-text="Screenshot that shows how to select neural training.":::
+1. Select a version of the training recipe for your model. The latest version is selected by default. The supported features and training time can vary by version. Normally, the latest version is recommended for the best results. In some cases, you can choose an older version to reduce training time.
+1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list.
+1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
+1. Select **Next**.
+1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+1. Enter a **Name** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
+1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
+1. Select **Next**.
+1. Review the settings and check the box to accept the terms of use.
+1. Select **Submit** to start training the model.
+
+# [Neural - cross lingual](#tab/crosslingual)
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
+1. Select **Neural - cross lingual** (Preview) as the [training method](#choose-a-training-method) for your model. To use a different training method, see [Neural](?tabs=neural#train-your-custom-neural-voice-model) or [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model).
+ :::image type="content" source="media/custom-voice/cnv-train-neural-cross-lingual.png" alt-text="Screenshot that shows how to select neural cross lingual training.":::
+1. Select the **Target language** that will be the secondary language for your voice model. Only one target language can be selected for a voice model.
+1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list.
+1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
+1. Select **Next**.
+1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+1. Enter a **Name** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
+1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
+1. Select **Next**.
+1. Review the settings and check the box to accept the terms of use.
+1. Select **Submit** to start training the model.
+
+# [Neural - multi style](#tab/multistyle)
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
+1. Select **Neural - multi style** (Preview) as the [training method](#choose-a-training-method) for your model. To use a different training method, see [Neural](?tabs=neural#train-your-custom-neural-voice-model) or [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model).
+ :::image type="content" source="media/custom-voice/cnv-train-neural-multi-style.png" alt-text="Screenshot that shows how to select neural multi style training.":::
+1. Select one or more preset speaking styles to train.
+1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list.
+1. Select **Next**.
+1. Optionally, you can add up to 10 custom speaking styles. Select **Add a custom style** and enter a custom style name of your choice. Select style samples as training data.
+1. Select **Next**.
+1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
+1. Select **Next**.
+1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audios for the default style and 20 for each preset style automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+1. Enter a **Name** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
+1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
+1. Select **Next**.
+1. Review the settings and check the box to accept the terms of use.
+1. Select **Submit** to start training the model.
+
+
+
+The **Train model** table displays a new entry that corresponds to this newly created model. The status reflects the process of converting your data to a voice model, as described in this table:
+
+| State | Meaning |
+| -- | - |
+| Processing | Your voice model is being created. |
+| Succeeded | Your voice model has been created and can be deployed. |
+| Failed | Your voice model has failed in training. The cause of the failure might be, for example, unseen data problems or network issues. |
+| Canceled | The training for your voice model was canceled. |
+
+While the model status is **Processing**, you can select **Cancel training** to cancel your voice model. You're not charged for this canceled training.
++
+After you finish training the model successfully, you can review the model details and [test the model](#test-your-voice-model).
+
+You can use the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation) to create audio and fine-tune your deployed voice. If applicable for your voice, you can also select one of multiple styles.
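If you'd rather call your deployed voice from code than from the Audio Content Creation tool, the following is a minimal sketch with the Speech SDK for Python. The key, region, endpoint ID, and voice name are placeholders; see [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md) for the authoritative steps.

```python
# Minimal sketch of calling a deployed custom neural voice from code.
# "MyVoiceNeural", the key, region, and endpoint ID are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSpeechKey", region="YourServiceRegion")
speech_config.endpoint_id = "YourEndpointId"  # shown on the deployment page
speech_config.speech_synthesis_voice_name = "MyVoiceNeural"  # the model name chosen at training

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("It was Janet Maslin.").get()
if result.reason != speechsdk.ResultReason.SynthesizingAudioCompleted:
    print(f"Synthesis didn't complete: {result.reason}")
```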
-1. On the **Train model** tab, select **Train a new model** to create a voice model with the data you've uploaded.
-
-1. Select the training method for your model.
-
- If you want to create a voice in the same language of your training data, select **Neural** method. For the **Neural** method, you can select different versions of the training recipe for your model. The versions vary according to the features supported and model training time. Normally new versions are enhanced ones with bugs fixed and new features supported. The latest version is selected by default.
-
- You can also select **Neural - cross lingual** and **Target language** to create a secondary language for your voice model. Only one target language can be selected for a voice model. You don't need to prepare additional data in the target language for training, but your test script needs to be in the target language. For the languages supported by cross lingual feature, see [supported languages](language-support.md?tabs=stt-tts).
-
- The same unit price applies to both **Neural** and **Neural - cross lingual**. Check [the pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for training.
-
-1. Choose the data you want to use for training, and specify a speaker file.
-
- >[!NOTE]
- >- To create a custom neural voice, select at least 300 utterances.
- >- To train a neural voice, you must specify a voice talent profile. This profile must provide the audio consent file of the voice talent, acknowledging to use their speech data to train a custom neural voice model.
-
-1. Choose your test script. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script, including up to 100 utterances. The test script must exclude the filenames (the ID of each utterance). Otherwise, these IDs are spoken. Here's an example of how the utterances are organized in one .txt file:
-
- ```
- This is the waistline, and it's falling.
- We have trouble scoring.
- It was Janet Maslin.
- ```
-
- Each paragraph of the utterance results in a separate audio. If you want to combine all sentences into one audio, make them a single paragraph.
-
- >[!NOTE]
- >- The test script must be a .txt file, less than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE.
- >- The generated audios are a combination of the uploaded test script and the default test script.
-
-1. Enter a **Name** and **Description** to help you identify this model. Choose a name carefully. The name you enter here will be the name you use to specify the voice in your request for speech synthesis as part of the SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
-
- A common use of the **Description** field is to record the names of the data that you used to create the model.
-
-1. Review the settings, then select **Submit** to start training the model.
-
- Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files.
-
- The **Train model** table displays a new entry that corresponds to this newly created model.
-
- When the model is training, you can select **Cancel training** to cancel your voice model. You're not charged for this canceled training.
-
- :::image type="content" source="media/custom-voice/cnv-cancel-training.png" alt-text="Screenshot that shows how to cancel training for a model.":::
+### Rename your model
- The table displays the status: processing, succeeded, failed, and canceled. The status reflects the process of converting your data to a voice model, as shown in this table:
+If you want to rename the model you built, you can select **Clone model** to create a clone of the model with a new name in the current project.
- | State | Meaning |
- | -- | - |
- | Processing | Your voice model is being created. |
- | Succeeded | Your voice model has been created and can be deployed. |
- | Failed | Your voice model has failed in training. The cause of the failure might be, for example, unseen data problems or network issues. |
- | Canceled | The training for your voice model was canceled. |
- Training duration varies depending on how much data you're training. It takes about 40 compute hours on average to train a custom neural voice.
+Enter the new name in the **Clone voice model** window, then select **Submit**. The text 'Neural' will be automatically added as a suffix to your new model name.
- > [!NOTE]
- > Standard subscription (S0) users can train four voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
-1. After you finish training the model successfully, you can review the model details.
+### Test your voice model
After your voice model is successfully built, you can use the generated sample audio files to test it before deploying it for use.
The quality of the voice depends on many factors, such as:
- The accuracy of the transcript file.
- How well the recorded voice in the training data matches the personality of the designed voice for your intended use case.
-### Rename your model
+Select **DefaultTests** under **Testing** to listen to the sample audios. The default test samples include 100 sample audios generated automatically during training to help you test the model. In addition to these 100 audios provided by default, your own test script (at most 100 utterances) provided during training is also added to the **DefaultTests** set. You're not charged for the testing with **DefaultTests**.
-If you want to rename the model you built, you can select **Clone model** to create a clone of the model with a new name in the current project.
+If you want to further test your model, select **Add test scripts** to upload your own test scripts.
-Enter the new name on the **Clone voice model** window, then click **Submit**. The text 'Neural' will be automatically added as a suffix to your new model name.
+Before uploading a test script, check the [test script requirements](#test-script-requirements). You'll be charged for the additional testing with batch synthesis, based on the number of billable characters. See the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-### Test your voice model
+In the **Add test scripts** window, select **Browse for a file** to choose your own script, then select **Add** to upload it.
-After you've trained your voice model, you can test the model on the model details page. Select **DefaultTests** under **Testing** to listen to the sample audios. The default test samples include 100 sample audios generated automatically during training to help you test the model. In addition to these 100 audios provided by default, your own test script (at most 100 utterances) provided during training are also added to **DefaultTests** set. You're not charged for the testing with **DefaultTests**.
+### Test script requirements
-If you want to upload your own test scripts to further test your model, select **Add test scripts** to upload your own test script.
+The test script must be a .txt file, less than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE.
+Unlike the [training transcription files](how-to-custom-voice-training-data.md#transcription-data-for-individual-utterances--matching-transcript), the test script should exclude the utterance IDs (the filename of each utterance). Otherwise, these IDs are spoken.
-Before uploading test script, check the [test script requirements](#train-your-custom-neural-voice-model). You'll be charged for the additional testing with the batch synthesis based on the number of billable characters. See [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+Here's an example set of utterances in one .txt file:
-On **Add test scripts** window, click **Browse for a file** to select your own script, then select **Add** to upload it.
+```text
+This is the waistline, and it's falling.
+We have trouble scoring.
+It was Janet Maslin.
+```
+Each paragraph in the file results in a separate audio file. If you want to combine all sentences into one audio file, make them a single paragraph.
+
+>[!NOTE]
+> The generated audio files are a combination of the automatic test scripts and custom test scripts.
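As an illustrative pre-upload check (not an official tool), a short script can verify these requirements locally before you submit a test script:

```python
# Illustrative pre-upload check for a custom test script; not an official tool.
# Verifies: .txt extension, under 1 MB, a supported encoding, at most 100
# utterances, and warns when a line still carries a training-style utterance ID.
import os
import re
import sys

MAX_BYTES = 1024 * 1024  # the test script must be less than 1 MB
# Approximate codec list covering ANSI/ASCII, UTF-8, UTF-8-BOM, and UTF-16 with BOM.
ENCODINGS = ["ascii", "utf-8-sig", "utf-8", "utf-16"]

def check_test_script(path: str) -> None:
    assert path.endswith(".txt"), "the test script must be a .txt file"
    assert os.path.getsize(path) < MAX_BYTES, "the test script must be less than 1 MB"
    raw = open(path, "rb").read()
    for encoding in ENCODINGS:
        try:
            text = raw.decode(encoding)
            break
        except UnicodeDecodeError:
            continue
    else:
        sys.exit("unsupported encoding")
    utterances = [line for line in text.splitlines() if line.strip()]
    assert len(utterances) <= 100, "provide at most 100 utterances"
    for line in utterances:
        # Training transcripts use "<numeric ID><tab><text>"; test scripts must
        # not, because a leading ID would be spoken.
        if re.match(r"^\d+\t", line):
            print("warning: remove the utterance ID from:", line[:40])

check_test_script("my-test-script.txt")  # hypothetical file name
```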
### Update engine version for your voice model
When a new engine is available, you're prompted to update your neural voice mode
:::image type="content" source="media/custom-voice/cnv-engine-update-prompt.png" alt-text="Screenshot of displaying engine update message." lightbox="media/custom-voice/cnv-engine-update-prompt.png":::
-Go to the model details page, click **Update** at the top to display **Update** window.
+Go to the model details page and select **Update** at the top to display the **Update** window.
:::image type="content" source="media/custom-voice/cnv-engine-update.png" alt-text="Screenshot of selecting Update menu at the top of page." lightbox="media/custom-voice/cnv-engine-update.png":::
-Then click **Update** to update your model to the latest engine version.
+Then select **Update** to update your model to the latest engine version.
:::image type="content" source="media/custom-voice/cnv-engine-update-done.png" alt-text="Screenshot of selecting Update button to update engine.":::
-You're not charged for engine update. The previous versions are still kept. You can check all engine versions for this model from **Engine version** drop-down list, or remove one if you don't need it anymore.
+You're not charged for the engine update. The previous versions are still kept. You can check all engine versions for the model in the **Engine version** drop-down list, or remove a version that you don't need anymore.
:::image type="content" source="media/custom-voice/cnv-engine-version.png" alt-text="Screenshot of displaying Engine version drop-down list.":::
-The updated version is automatically set as default. But you can change the default version by selecting a version from the drop-down list and clicking **Set as default**.
+The updated version is automatically set as the default. But you can change the default version by selecting a version from the drop-down list and then selecting **Set as default**.
:::image type="content" source="media/custom-voice/cnv-engine-set-default.png" alt-text="Screenshot that shows how to set a version as default.":::
After you've updated the engine version for your voice model, you need to [redep
For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
+## Copy your voice model to another project
+
+You can copy your voice model to another project in the same region or another region. For example, you can copy a neural voice model that was trained in one region to a project in another region.
> [!NOTE]
> Custom Neural Voice training is currently only available in some regions. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
+To copy your custom neural voice model to another project:
+
+1. On the **Train model** tab, select a voice model that you want to copy, and then select **Copy to project**.
+
+ :::image type="content" source="media/custom-voice/cnv-model-copy.png" alt-text="Screenshot of the copy to project option.":::
+
+1. Select the **Region**, **Speech resource**, and **Project** where you want to copy the model. If you don't already have a Speech resource and project in the target region, create them first.
+
+ :::image type="content" source="media/custom-voice/cnv-model-copy-dialog.png" alt-text="Screenshot of the copy voice model dialog.":::
+
+1. Select **Submit** to copy the model.
+1. Select **View model** under the notification message for copy success.
+
+Navigate to the project where you copied the model to [deploy the model copy](how-to-deploy-and-use-endpoint.md).
## Next steps

- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
Title: "How to prepare data for Custom Voice - Speech service"
-description: "Create a custom voice for your brand with the Speech service. You provide studio recordings and the associated scripts, the service generates a unique voice model tuned to the recorded voice. Use this voice to synthesize speech in your products, tools, and applications."
+description: "Learn how to provide studio recordings and the associated scripts that will be used to train your Custom Neural Voice."
- Previously updated : 08/01/2022
+ Last updated : 10/27/2022
-# Prepare training data
+# Prepare training data for Custom Neural Voice
-When you're ready to create a custom Text-to-Speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
+When you're ready to create a custom Text-to-Speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. For details on recording voice samples, see [the tutorial](record-custom-voice-samples.md). The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
-> [!NOTE]
-> This article focuses on the creation of a professional Custom Neural Voice using the Pro project. See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects.
-
-## Voice talent verbal statement
-
-Before you can train your own Text-to-Speech voice model, you'll need [audio recordings](record-custom-voice-samples.md) and the [associated text transcriptions](how-to-custom-voice-prepare-data.md#types-of-training-data). On this page, we'll review data types, how they're used, and how to manage each.
-
-> [!IMPORTANT]
-> To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence. You can find the statement in multiple languages [here](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model. Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
->
- :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Upload voice talent statement":::
->
-> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply the access here](https://aka.ms/customneural).
-
-## Types of training data
-
-A voice training dataset includes audio recordings, and a text file with the associated transcriptions. Each audio file should contain a single utterance (a single sentence or a single turn for a dialog system), and be less than 15 seconds long.
-
-In some cases, you may not have the right dataset ready and will want to test the custom neural voice training with available audio files, short or long, with or without transcripts. We provide options (beta) to help you segment your audio into utterances and prepare transcripts using the [Batch Transcription API](batch-transcription.md).
-
-This table lists data types and how each is used to create a custom Text-to-Speech voice model.
-
-| Data type | Description | When to use | Additional processing required |
-| | -- | -- | |
-| **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. |
-| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (.wav or .mp3, longer than 20 seconds, at most 1000 audio files), paired with a collection (.zip) of transcripts that contains all spoken words. | You have audio files and matching transcripts, but they aren't segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. |
-| **Audio only (beta)** | A collection (.zip) of audio files (.wav or .mp3, at most 1000 audio files) without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.|
-
-Files should be grouped by type into a dataset and uploaded as a zip file. Each dataset can only contain a single data type.
+All data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Speech service. To confirm that your data is correctly formatted, see [Training data types](how-to-custom-voice-training-data.md).
> [!NOTE]
-> The maximum number of datasets allowed to be imported per subscription is 500 zip files for standard subscription (S0) users.
->
-> For the two beta options, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
+> - Standard subscription (S0) users can upload five data files simultaneously. If you reach the limit, wait until at least one of your data files finishes importing. Then try again.
+> - The maximum number of data files allowed to be imported per subscription is 500 .zip files for standard subscription (S0) users. See [Speech service quotas and limits](speech-services-quotas-and-limits.md#custom-neural-voice) for more details.
-## Individual utterances + matching transcript
+## Upload your data
-You can prepare recordings of individual utterances and the matching transcript in two ways. Either [write a script and have it read by a voice talent](record-custom-voice-samples.md) or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
+When you're ready to upload your data, go to the **Prepare training data** tab to add your first training set and upload data. A *training set* is a set of audio utterances and their mapping scripts used for training a voice model. You can use a training set to organize your training data. The service checks data readiness for each training set. You can import multiple datasets to a training set.
-To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
+To upload training data, follow these steps:
-For data format examples, refer to the sample training set on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/Sample%20Data). The sample training set includes the sample script and the associated audios.
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Prepare training data** > **Upload data**.
+1. In the **Upload data** wizard, choose a [data type](how-to-custom-voice-training-data.md) and then select **Next**.
+1. Select local files from your computer or enter the Azure Blob storage URL to upload data.
+1. Under **Specify the target training set**, select an existing training set or create a new one. If you created a new training set, make sure it's selected in the drop-down list before you continue.
+1. Select **Next**.
+1. Enter a name and description for your data and then select **Next**.
+1. Review the upload details, and select **Submit**.
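As a concrete illustration of the "individual utterances + matching transcript" data type described in [Training data types](how-to-custom-voice-training-data.md), the following sketch packages a training set for upload. The file names and utterance text are examples only, and the .wav files are assumed to already exist locally.

```python
# Minimal sketch of packaging an "individual utterances + matching transcript"
# training set: a .zip of .wav files plus a tab-separated transcript (.txt).
# File names and utterance text are examples; the recordings must already exist.
import zipfile

utterances = {
    "0000000001": "This is the waistline, and it's falling.",
    "0000000002": "We have trouble scoring.",
    "0000000003": "It was Janet Maslin.",
}

# One line per utterance: "<ID><tab><transcription>". Each ID must match the
# name of a .wav file, and IDs must be unique, or the utterance is rejected.
with open("transcript.txt", "w", encoding="utf-8") as script:
    for utterance_id, text in utterances.items():
        script.write(f"{utterance_id}\t{text}\n")

# Group the audio files into a single .zip. Keep audio names unique within
# the .zip file and across multiple .zip files.
with zipfile.ZipFile("audio.zip", "w") as archive:
    for utterance_id in utterances:
        archive.write(f"{utterance_id}.wav")
```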
-> [!TIP]
-> To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [record voice samples to create a custom neural voice](record-custom-voice-samples.md).
+> [!NOTE]
+> Duplicate IDs are not accepted. Utterances with the same ID will be removed.
+>
+> Duplicate audio names are removed from the training. Make sure the data you select don't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicates, they're rejected.
-### Audio files
+Data files are automatically validated when you select **Submit**. Data validation includes a series of checks on the audio files to verify their file format, size, and sampling rate. If there are any errors, fix them and submit again.
-Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text-to-Speech voices aren't supported, with the exception of the Chinese-English bi-lingual. Each audio file must have a unique filename with the filename extension .wav.
+After you upload the data, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your data files. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
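The portal's validation and SNR measurements are authoritative, but if you want a rough local pre-screen of recordings before upload, a sketch like the following (using `numpy`, with a deliberately crude SNR heuristic that is not the service's metric) can flag the most common rejections:

```python
# Rough local pre-screen for training audio; illustrative only. The service's
# own validation and SNR measurement are authoritative. Flags the common
# rejections (sampling rate below 16 kHz, non-16-bit samples, utterances over
# 15 seconds) and prints a crude SNR estimate from frame energies.
import wave
import numpy as np

def prescreen(path: str) -> None:
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        width = wav.getsampwidth()
        frames = wav.getnframes()
        data = wav.readframes(frames)
    if rate < 16000:
        print(f"{path}: sampling rate {rate} Hz is below 16 kHz and will be rejected")
    elif rate < 24000:
        print(f"{path}: {rate} Hz will be up-sampled; recording at 24 kHz is recommended")
    if frames / rate >= 15:
        print(f"{path}: {frames / rate:.1f} s long; keep each utterance under 15 seconds")
    if width != 2:
        print(f"{path}: expected at least 16-bit PCM samples")
        return
    samples = np.frombuffer(data, dtype=np.int16).astype(np.float64)
    # Crude SNR: treat the loudest 20 ms frames as speech and the quietest as
    # the noise floor. A real measurement would separate speech from silence.
    frame = rate // 50
    rms = np.array([np.sqrt(np.mean(chunk ** 2)) + 1e-9
                    for chunk in np.array_split(samples, max(1, len(samples) // frame))])
    snr = 20 * np.log10(np.percentile(rms, 95) / np.percentile(rms, 10))
    print(f"{path}: estimated SNR {snr:.1f} dB (35+ dB is recommended; below 20 dB sounds noisy)")

prescreen("0000000001.wav")  # hypothetical utterance file
```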
-Follow these guidelines when preparing audio.
+## Resolve data issues online
-| Property | Value |
-| -- | -- |
-| File format | RIFF (.wav), grouped into a .zip file |
-| File name | File name characters supported by Windows OS, with .wav extension.<br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
-| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. |
-| Sample format | PCM, at least 16-bit |
-| Audio length | Shorter than 15 seconds |
-| Archive format | .zip |
-| Maximum archive size | 2048 MB |
+After upload, you can check the data details of the training set. Before continuing to [train your voice model](how-to-custom-voice-create-voice.md), you should try to resolve any data issues.
-> [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. If a .zip file contains .wav files with different sample rates, only those equal to or higher than 16,000 Hz will be imported. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
+You can resolve data issues per utterance in Speech Studio.
-### Transcripts
+1. On the **Data details** page, select the individual utterances you want to edit, then select **Edit**.
-The transcription file is a plain text file. Use these guidelines to prepare your transcriptions.
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset.png" alt-text="Screenshot of selecting edit button on the Data details page.":::
-| Property | Value |
-| -- | -- |
-| File format | Plain text (.txt) |
-| Encoding format | ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
-| # of utterances per line | **One** - Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t). |
-| Maximum file size | 2048 MB |
+1. The edit window is displayed.
-Below is an example of how the transcripts are organized utterance by utterance in one .txt file:
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-editscript.png" alt-text="Screenshot of displaying Edit transcript and recording file window.":::
-```
-0000000001[tab] This is the waistline, and it's falling.
-0000000002[tab] We have trouble scoring.
-0000000003[tab] It was Janet Maslin.
-```
-It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts will introduce quality loss during the training.
+1. Update the transcript or the recording file according to the issue description in the edit window.
-## Long audio and transcript (beta)
+ You can edit the transcript in the text box, then select **Done**.
-In some cases, you may not have segmented audio available. We provide a service (beta) through the Speech Studio to help you segment long audio files and create transcriptions. Keep in mind, this service will be charged toward your speech-to-text subscription usage.
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-scriptedit-done.png" alt-text="Screenshot of selecting Done button on the Edit transcript and recording file window.":::
-> [!NOTE]
-> The long-audio segmentation service will leverage the batch transcription feature of speech-to-text, which only supports standard subscription (S0) users. During the processing of the segmentation, your audio files and the transcripts will also be sent to the Custom Speech service to refine the recognition model so the accuracy can be improved for your data. No data will be retained during this process. After the segmentation is done, only the utterances segmented and their mapping transcripts will be stored for your downloading and training.
+ If you need to update the recording file, select **Update recording file**, then upload the fixed recording file (.wav).
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-upload-recording.png" alt-text="Screenshot that shows how to upload recording file on the Edit transcript and recording file window.":::
-### Audio files
+1. After the data in a training set is updated, check the data quality by selecting **Analyze data** before using this training set for training.
-Follow these guidelines when preparing audio for segmentation.
+ You can't select this training set for model training until the analysis is complete.
-| Property | Value |
-| -- | -- |
-| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
-| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
-| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. |
-| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
-| Audio length | Longer than 20 seconds |
-| Archive format | .zip |
-| Maximum archive size | 2048 MB, at most 1000 audio files included |
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-analyze.png" alt-text="Screenshot of selecting Analyze data on Data details page.":::
-> [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
-
-All audio files should be grouped into a zip file. It's OK to put .wav files and .mp3 files into one audio zip. For example, you can upload a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another audio file named 'queenstory.mp3', 200 seconds long. All .mp3 files will be transformed into the .wav format after processing.
+ You can also delete utterances with issues by selecting them and then selecting **Delete**.
-### Transcripts
+### Typical data issues
-Transcripts must be prepared to the specifications listed in this table. Each audio file must be matched with a transcript.
+The issues are divided into three types. Refer to the following tables for details about each type of error.
-| Property | Value |
-| -- | -- |
-| File format | Plain text (.txt), grouped into a .zip |
-| File name | Use the same name as the matching audio file |
-| Encoding format |ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
-| # of utterances per line | No limit |
-| Maximum file size | 2048 MB |
+**Auto-rejected**
-All transcripts files in this data type should be grouped into a zip file. For example, you've uploaded a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another one named 'queenstory.mp3', 200 seconds long. You'll need to upload another zip file containing two transcripts, one named 'kingstory.txt', the other one 'queenstory.txt'. Within each plain text file, you'll provide the full correct transcription for the matching audio.
+Data with these errors won't be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can resubmit the corrected data for training.
-After your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on the transcript provided. You can check the segmented utterances and the matching transcripts by downloading the dataset. Unique IDs will be assigned to the segmented utterances automatically. It's important that you make sure the transcripts you provide are 100% accurate. Errors in the transcripts can reduce the accuracy during the audio segmentation and further introduce quality loss in the training phase that comes later.
+| Category | Name | Description |
+| | -- | |
+| Script | Invalid separator| You must separate the utterance ID and the script content with a Tab character.|
+| Script | Invalid script ID| The script line ID must be numeric.|
+| Script | Duplicated script|Each line of the script content must be unique. The line is duplicated with {}.|
+| Script | Script too long| The script must be less than 1,000 characters.|
+| Script | No matching audio| The ID of each utterance (each line of the script file) must match the audio ID.|
+| Script | No valid script| No valid script is found in this dataset. Fix the script lines that appear in the detailed issue list.|
+| Audio | No matching script| No audio files match the script ID. The name of the .wav files must match with the IDs in the script file.|
+| Audio | Invalid audio format| The audio format of the .wav files is invalid. Check the .wav file format by using an audio tool like [SoX](http://sox.sourceforge.net/).|
+| Audio | Low sampling rate| The sampling rate of the .wav files can't be lower than 16 KHz.|
+| Audio | Too long audio| Audio duration is longer than 30 seconds. Split the long audio into multiple files. It's a good idea to make utterances shorter than 15 seconds.|
+| Audio | No valid audio| No valid audio is found in this dataset. Check your audio data and upload again.|
+| Mismatch | Low scored utterance| Sentence-level pronunciation score is lower than 70. Review the script and the audio content to make sure they match.|
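+
+If you author scripts programmatically, a quick pre-upload check can catch most of the auto-rejected script errors listed above. The following Python sketch is illustrative only and isn't part of the Speech Studio tooling; the file name `script.txt` is a placeholder:
+
+```python
+# Illustrative pre-upload check for common auto-rejected script errors.
+# Assumes one "ID<tab>text" utterance per line, as required for training.
+seen_scripts = set()
+
+with open("script.txt", encoding="utf-8") as script_file:
+    for line_number, line in enumerate(script_file, start=1):
+        line = line.rstrip("\n")
+        if "\t" not in line:
+            print(f"Line {line_number}: invalid separator (use a Tab character)")
+            continue
+        utterance_id, script = line.split("\t", 1)
+        if not utterance_id.isdigit():
+            print(f"Line {line_number}: script ID must be numeric")
+        if len(script) >= 1000:
+            print(f"Line {line_number}: script too long (must be under 1,000 characters)")
+        if script in seen_scripts:
+            print(f"Line {line_number}: duplicated script content")
+        seen_scripts.add(script)
+```
+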
-## Audio only (beta)
+**Auto-fixed**
-If you don't have transcriptions for your audio recordings, use the **Audio only** option to upload your data. Our system can help you segment and transcribe your audio files. Keep in mind, this service will be charged toward your speech-to-text subscription usage.
+The following errors are fixed automatically, but you should review and confirm the fixes are made correctly.
-Follow these guidelines when preparing audio.
+| Category | Name | Description |
+| | -- | |
+| Mismatch |Silence auto fixed |The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. |
+| Mismatch |Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it.|
-> [!NOTE]
-> The long-audio segmentation service will leverage the batch transcription feature of speech-to-text, which only supports standard subscription (S0) users.
-
-| Property | Value |
-| -- | -- |
-| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
-| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
-| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. |
-| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
-| Audio length | No limit |
-| Archive format | .zip |
-| Maximum archive size | 2048 MB, at most 1000 audio files included |
+**Manual check required**
-> [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
+Unresolved errors listed in the next table affect the quality of training, but data with these errors won't be excluded during training. For higher-quality training, it's a good idea to fix these errors manually.
-All audio files should be grouped into a zip file. Once your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on our speech batch transcription service. Unique IDs will be assigned to the segmented utterances automatically. Matching transcripts will be generated through speech recognition. All .mp3 files will be transformed into the .wav format after processing. You can check the segmented utterances and the matching transcripts by downloading the dataset.
+| Category | Name | Description |
+| | -- | |
+| Script | Non-normalized text|This script contains digits. Expand them to normalized words, and match with the audio. For example, normalize *123* to *one hundred and twenty-three*.|
+| Script | Non-normalized text|This script contains symbols. Normalize the symbols to match the audio. For example, normalize *50%* to *fifty percent*.|
+| Script | Not enough question utterances| At least 10 percent of the total utterances should be question sentences. This helps the voice model properly express a questioning tone.|
+| Script | Not enough exclamation utterances| At least 10 percent of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone.|
+| Script | No valid end punctuation| Add one of the following at the end of the line: full stop (half-width '.' or full-width '。'), exclamation point (half-width '!' or full-width '！'), or question mark (half-width '?' or full-width '？').|
+| Audio| Low sampling rate for neural voice | It's recommended that the sampling rate of your .wav files be 24 KHz or higher for creating neural voices. If it's lower, it will be automatically raised to 24 KHz.|
+| Volume |Overall volume too low|Volume shouldn't be lower than -18 dB (10 percent of max volume). Control the volume average level within proper range during the sample recording or data preparation.|
+| Volume | Volume overflow| Overflowing volume is detected at {}s. Adjust the recording equipment to avoid the volume overflow at its peak value.|
+| Volume | Start silence issue | The first 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the first 100 ms at the start silent.|
+| Volume| End silence issue| The last 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the last 100 ms at the end silent.|
+| Mismatch | Low scored words|Review the script and the audio content to make sure they match, and control the noise floor level. Reduce the length of long silence, or split the audio into multiple utterances if it's too long.|
+| Mismatch | Start silence issue |Extra audio was heard before the first word. Review the script and the audio content to make sure they match, control the noise floor level, and make the first 100 ms silent.|
+| Mismatch | End silence issue| Extra audio was heard after the last word. Review the script and the audio content to make sure they match, control the noise floor level, and make the last 100 ms silent.|
+| Mismatch | Low signal-noise ratio | Audio SNR level is lower than 20 dB. At least 35 dB is recommended.|
+| Mismatch | No score available |Failed to recognize speech content in this audio. Check the audio and the script content to make sure the audio is valid, and matches the script.|
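+
+As a rough, unofficial way to screen for the volume issues above before uploading, you can inspect each .wav file's peak and average level locally. This sketch assumes 16-bit PCM audio and uses NumPy; the file name `utterance.wav` is a placeholder, and the -18 dB threshold mirrors the guidance in the table:
+
+```python
+import wave
+
+import numpy as np
+
+with wave.open("utterance.wav", "rb") as wav_file:
+    assert wav_file.getsampwidth() == 2, "sketch assumes 16-bit PCM"
+    samples = np.frombuffer(wav_file.readframes(wav_file.getnframes()), dtype=np.int16)
+
+peak = int(np.abs(samples.astype(np.int32)).max())
+rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
+rms_db = 20 * np.log10(max(rms, 1e-9) / 32768)  # level relative to full scale
+
+if rms_db < -18:
+    print(f"Overall volume too low: {rms_db:.1f} dB")
+if peak >= 32767:
+    print("Possible volume overflow (clipping) at peak value")
+```
+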
## Next steps
cognitive-services How To Custom Voice Talent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-talent.md
+
+ Title: "Set up voice talent for custom neural voice - Speech service"
+
+description: Create a voice talent profile with an audio file recorded by the voice talent, consenting to the usage of their speech data to train a custom voice model.
++++++ Last updated : 10/27/2022+++
+# Set up voice talent for Custom Neural Voice
+
+A voice talent is an individual or target speaker whose voice is recorded and used to create neural voice models.
+
+Before you can train a neural voice, you must submit a recording of the voice talent's consent statement. The voice talent statement is a recording of the voice talent reading a statement in which they consent to the usage of their speech data to train a custom voice model. The consent statement is also used to verify that the voice talent is the same person as the speaker in the training data.
+
+> [!TIP]
+> Before you get started in Speech Studio, define your voice [persona and choose the right voice talent](record-custom-voice-samples.md#choose-your-voice-talent).
+
+You can find the verbal consent statement in multiple languages on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. See also the [disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context).
+
+## Add voice talent
+
+To add a voice talent profile and upload their consent statement, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Set up voice talent** > **Add voice talent**.
+1. In the **Add new voice talent** wizard, describe the characteristics of the voice you're going to create. The scenarios that you specify here must be consistent with what you provided in the application form.
+1. Select **Next**.
+1. On the **Upload voice talent statement** page, follow the instructions to upload the voice talent statement you've recorded beforehand. Make sure the verbal statement was [recorded](record-custom-voice-samples.md) with the same settings, environment, and speaking style as your training data.
+ :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Screenshot of the voice talent statement upload dialog.":::
+1. Enter the voice talent name and company name. The voice talent name must be the name of the person who recorded the consent statement. The company name must match the company name that was spoken in the recorded statement.
+1. Select **Next**.
+1. Review the voice talent and persona details, and select **Submit**.
+
+After the voice talent status is *Succeeded*, you can proceed to [train your custom voice model](how-to-custom-voice-create-voice.md).
+
+## Next steps
+
+- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
cognitive-services How To Custom Voice Training Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-training-data.md
+
+ Title: "Training data for Custom Neural Voice - Speech service"
+
+description: "Learn about the data types that you can use to train a Custom Neural Voice."
++++++ Last updated : 10/27/2022+++
+# Training data for Custom Neural Voice
+
+When you're ready to create a custom Text-to-Speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
+
+> [!TIP]
+> To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [record voice samples to create a custom neural voice](record-custom-voice-samples.md).
+
+## Types of training data
+
+A voice training dataset includes audio recordings, and a text file with the associated transcriptions. Each audio file should contain a single utterance (a single sentence or a single turn for a dialog system), and be less than 15 seconds long.
+
+In some cases, you may not have the right dataset ready and will want to test the custom neural voice training with available audio files, short or long, with or without transcripts.
+
+This table lists data types and how each is used to create a custom Text-to-Speech voice model.
+
+| Data type | Description | When to use | Extra processing required |
+| | -- | -- | |
+| [Individual utterances + matching transcript](#individual-utterances--matching-transcript) | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. |
+| [Long audio + transcript](#long-audio--transcript-preview) | A collection (.zip) of long, unsegmented audio files (.wav or .mp3, longer than 20 seconds, at most 1000 audio files), paired with a collection (.zip) of transcripts that contains all spoken words. | You have audio files and matching transcripts, but they aren't segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation wherever required. |
+| [Audio only (Preview)](#audio-only-preview) | A collection (.zip) of audio files (.wav or .mp3, at most 1000 audio files) without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation wherever required.|
+
+Files should be grouped by type into a dataset and uploaded as a zip file. Each dataset can only contain a single data type.
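+
+For example, here's one way (a minimal sketch, not required tooling) to group a folder of .wav files into a single zip archive for upload; the folder name `wavs` and archive name `audio.zip` are hypothetical:
+
+```python
+import zipfile
+from pathlib import Path
+
+# Group all .wav files in the "wavs" folder into one dataset archive.
+with zipfile.ZipFile("audio.zip", "w", compression=zipfile.ZIP_DEFLATED) as archive:
+    for wav_path in sorted(Path("wavs").glob("*.wav")):
+        archive.write(wav_path, arcname=wav_path.name)
+```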
+
+> [!NOTE]
+> The maximum number of datasets allowed to be imported per subscription is 500 zip files for standard subscription (S0) users.
+
+## Individual utterances + matching transcript
+
+You can prepare recordings of individual utterances and the matching transcript in two ways. Either [write a script and have it read by a voice talent](record-custom-voice-samples.md) or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
+
+To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
+
+For data format examples, refer to the sample training set on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/Sample%20Data). The sample training set includes the sample script and the associated audio.
+
+### Audio data for Individual utterances + matching transcript
+
+Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text-to-Speech voices aren't supported, except for Chinese-English bilingual voices. Each audio file must have a unique filename with the filename extension .wav.
+
+Follow these guidelines when preparing audio.
+
+| Property | Value |
+| -- | -- |
+| File format | RIFF (.wav), grouped into a .zip file |
+| File name | File name characters supported by Windows OS, with .wav extension.<br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
+| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
+| Sample format | PCM, at least 16-bit |
+| Audio length | Shorter than 15 seconds |
+| Archive format | .zip |
+| Maximum archive size | 2048 MB |
+
+> [!NOTE]
+> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. If a .zip file contains .wav files with different sample rates, only those equal to or higher than 16,000 Hz will be imported. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
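+
+To avoid rejected or up-sampled files, you can check sample rates locally before upload. This is an unofficial sketch using the Python standard library; the folder name `training-data` is a placeholder:
+
+```python
+import wave
+from pathlib import Path
+
+# Report any .wav file that the service would reject or up-sample.
+for wav_path in sorted(Path("training-data").glob("*.wav")):
+    with wave.open(str(wav_path), "rb") as wav_file:
+        rate = wav_file.getframerate()
+    if rate < 16000:
+        print(f"{wav_path.name}: {rate} Hz - will be rejected")
+    elif rate < 24000:
+        print(f"{wav_path.name}: {rate} Hz - will be up-sampled to 24,000 Hz")
+```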
+
+### Transcription data for Individual utterances + matching transcript
+
+The transcription file is a plain text file. Use these guidelines to prepare your transcriptions.
+
+| Property | Value |
+| -- | -- |
+| File format | Plain text (.txt) |
+| Encoding format | ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
+| # of utterances per line | **One** - Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t). |
+| Maximum file size | 2048 MB |
+
+Below is an example of how the transcripts are organized utterance by utterance in one .txt file:
+
+```
+0000000001[tab] This is the waistline, and it's falling.
+0000000002[tab] We have trouble scoring.
+0000000003[tab] It was Janet Maslin.
+```
+It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts will introduce quality loss during the training.
+
+## Long audio + transcript (Preview)
+
+> [!NOTE]
+> For **Long audio + transcript (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
+
+In some cases, you may not have segmented audio available. The Speech Studio can help you segment long audio files and create transcriptions. The long-audio segmentation service will use the [Batch Transcription API](batch-transcription.md) feature of speech-to-text.
+
+During segmentation processing, your audio files and transcripts will also be sent to the Custom Speech service to refine the recognition model so that accuracy can be improved for your data. No data will be retained during this process. After the segmentation is done, only the segmented utterances and their matching transcripts will be stored for you to download and use for training.
+
+> [!NOTE]
+> This service will be charged toward your speech-to-text subscription usage. The long-audio segmentation service is only supported with standard (S0) Speech resources.
+
+### Audio data for Long audio + transcript
+
+Follow these guidelines when preparing audio for segmentation.
+
+| Property | Value |
+| -- | -- |
+| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
+| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
+| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
+| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
+| Audio length | Longer than 20 seconds |
+| Archive format | .zip |
+| Maximum archive size | 2048 MB, at most 1000 audio files included |
+
+> [!NOTE]
+> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
+
+All audio files should be grouped into a zip file. It's OK to put .wav files and .mp3 files into one audio zip. For example, you can upload a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another audio file named 'queenstory.mp3', 200 seconds long. All .mp3 files will be transformed into the .wav format after processing.
+
+### Transcription data for Long audio + transcript
+
+Transcripts must be prepared to the specifications listed in this table. Each audio file must be matched with a transcript.
+
+| Property | Value |
+| -- | -- |
+| File format | Plain text (.txt), grouped into a .zip |
+| File name | Use the same name as the matching audio file |
+| Encoding format |ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
+| # of utterances per line | No limit |
+| Maximum file size | 2048 MB |
+
+All transcript files in this data type should be grouped into a zip file. For example, suppose you've uploaded a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another one named 'queenstory.mp3', 200 seconds long. You'll need to upload another zip file containing two transcripts, one named 'kingstory.txt' and the other 'queenstory.txt'. Within each plain text file, provide the full, correct transcription for the matching audio. A quick way to confirm that every audio file has a matching transcript is sketched after this paragraph.
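+
+The following Python sketch (unofficial; `audio.zip` and `transcripts.zip` are placeholder file names) compares the base file names in the audio and transcript archives:
+
+```python
+import zipfile
+from pathlib import PurePath
+
+def base_names(zip_path: str, extensions: tuple[str, ...]) -> set[str]:
+    """Return the file names (without extension) inside a zip archive."""
+    with zipfile.ZipFile(zip_path) as archive:
+        return {PurePath(name).stem for name in archive.namelist()
+                if name.lower().endswith(extensions)}
+
+audio = base_names("audio.zip", (".wav", ".mp3"))
+transcripts = base_names("transcripts.zip", (".txt",))
+
+print("Audio without a transcript:", sorted(audio - transcripts))
+print("Transcripts without audio:", sorted(transcripts - audio))
+```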
+
+After your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on the transcript provided. You can check the segmented utterances and the matching transcripts by downloading the dataset. Unique IDs will be assigned to the segmented utterances automatically. It's important that you make sure the transcripts you provide are 100% accurate. Errors in the transcripts can reduce segmentation accuracy and introduce further quality loss in the later training phase.
+
+## Audio only (Preview)
+
+> [!NOTE]
+> For **Audio only (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
+
+If you don't have transcriptions for your audio recordings, use the **Audio only** option to upload your data. Our system can help you segment and transcribe your audio files. Keep in mind, this service will be charged toward your speech-to-text subscription usage.
+
+Follow these guidelines when preparing audio.
+
+> [!NOTE]
+> The long-audio segmentation service will leverage the batch transcription feature of speech-to-text, which only supports standard subscription (S0) users.
+
+| Property | Value |
+| -- | -- |
+| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
+| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
+| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
+| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
+| Audio length | No limit |
+| Archive format | .zip |
+| Maximum archive size | 2048 MB, at most 1000 audio files included |
+
+> [!NOTE]
+> The default sampling rate for a custom neural voice is 24,000 Hz. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
+
+All audio files should be grouped into a zip file. Once your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on our speech batch transcription service. Unique IDs will be assigned to the segmented utterances automatically. Matching transcripts will be generated through speech recognition. All .mp3 files will be transformed into the .wav format after processing. You can check the segmented utterances and the matching transcripts by downloading the dataset.
+
+## Next steps
+
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
+- [How to record voice samples](record-custom-voice-samples.md)
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
Title: Get started with Custom Neural Voice - Speech service
+ Title: Create a project for Custom Neural Voice - Speech service
-description: Custom Neural Voice is a set of online tools that you use to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions."
+description: Learn how to create a Custom Neural Voice project that contains data, models, tests, and endpoints in Speech Studio.
- Previously updated : 08/01/2022+ Last updated : 10/27/2022
-# Create a Project
+# Create a project for Custom Neural Voice
-[Custom Neural Voice](https://aka.ms/customvoice) is a set of online tools that you use to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md?tabs=stt-tts) and [region](regions.md#speech-service).
+Content for [Custom Neural Voice](https://aka.ms/customvoice) like data, models, tests, and endpoints are organized into projects in Speech Studio. Each project is specific to a country and language, and the gender of the voice you want to create. For example, you might create a project for a female voice for your call center's chat bots that use English in the United States.
-> [!IMPORTANT]
-> Custom Neural Voice Pro can be used to create higher-quality models that are indistinguishable from human recordings. For access you must commit to using it in alignment with our responsible AI principles. Learn more about our [policy on limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply here](https://aka.ms/customneural).
->
-> With [Custom Neural Voice Lite](custom-neural-voice.md#custom-neural-voice-project-types) (public preview), you can create a model for demonstration and evaluation purpose. No application is required. Microsoft restricts and selects the recording and testing samples for use with Custom Neural Voice Lite. You must apply the full access to Custom Neural Voice in order to deploy and use the Custom Neural Voice Lite model for business purpose.
-
-## Set up your Azure account
+> [!TIP]
+> Try [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
-A Speech resource is required before you can use Custom Neural Voice. Follow these instructions to create a Speech resource in Azure. If you don't have an Azure account, you can sign up for a new one.
+All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md?tabs=stt-tts) and [region](regions.md#speech-service).
-Once you've created an Azure account and a Speech resource, you'll need to sign in to Speech Studio and connect your subscription.
+## Create a Custom Neural Voice Pro project
-1. Get your Speech resource key from the Azure portal.
-1. Sign in to [Speech Studio](https://aka.ms/speechstudio), and then select **Custom Voice**.
-1. Select your subscription and create a speech project.
-1. If you want to switch to another Speech subscription, select the **cog** icon at the top.
+To create a Custom Neural Voice Pro project, follow these steps:
-> [!NOTE]
-> Custom Neural Voice training is currently only available in some regions. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select the subscription and Speech resource to work with.
-## Create a project
+ > [!IMPORTANT]
+ > Custom Neural Voice training is currently only available in some regions. After your voice model is trained in a supported region, you can copy it to a Speech resource in another region as needed. See footnotes in the [regions](regions.md#speech-service) table for more information.
-Content like data, models, tests, and endpoints are organized into projects in Speech Studio. Each project is specific to a country and language, and the gender of the voice you want to create. For example, you might create a project for a female voice for your call center's chat bots that use English in the United States.
+1. Select **Custom Voice** > **Create a project**.
+1. Select **Custom Neural Voice Pro** > **Next**.
+1. Follow the instructions provided by the wizard to create your project.
-To create a custom voice project:
-
-1. Sign in to [Speech Studio](https://aka.ms/speechstudio).
-1. Select **Text-to-Speech** > **Custom Voice** > **Create project**.
-
- See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Lite and Custom Neural Voice Pro projects.
-
-1. After you've created a CNV Pro project, click your project's name and you'll see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. See [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md) to set up the voice talent, and proceed to training data.
-
-## Cross lingual feature
-
-With cross lingual feature (public preview), you can create a different language for your voice model. If the language of your training data is supported by cross lingual feature, you can create a voice that speaks a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US` or any of the languages supported by cross lingual feature. For details, see [supported languages](language-support.md?tabs=stt-tts). You don't need to prepare additional data in the target language for training, but your test script needs to be in the target language.
-
-For how to create a different language from your training data, select the training method **Neural-cross lingual** during training. See [how to train your custom neural voice model](how-to-custom-voice-create-voice.md#train-your-custom-neural-voice-model).
-
-After the voice is created, you can use the Audio Content Creation tool to fine-tune your deployed voice, with richer voice tuning supports. Sign in to the Audio Content Creation of [Speech Studio]( https://aka.ms/speechstudio/) with your Azure account, and select your created voice from the target language to start tuning experience.
-
-## Migrate to Custom Neural Voice
-
-If you're using the old version of Custom Voice (which is scheduled to be retired in February 2024), see [How to migrate to Custom Neural Voice](how-to-migrate-to-custom-neural-voice.md).
+Select the new project by name or select **Go to project**. You'll see these menu items in the left panel: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**.
## Next steps
+- [Set up voice talent](how-to-custom-voice-talent.md)
- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)-- [How to record voice samples](record-custom-voice-samples.md) - [Train your voice model](how-to-custom-voice-create-voice.md) - [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
Previously updated : 08/01/2022 Last updated : 10/27/2022 zone_pivot_groups: programming-languages-set-nineteen
zone_pivot_groups: programming-languages-set-nineteen
# Deploy and use your voice model
-After you've successfully created and trained your voice model, you deploy it to a custom neural voice endpoint. Use the custom neural voice endpoint instead of the usual text-to-speech endpoint for requests with the REST API. Use the speech studio to create a custom neural voice endpoint. Use the REST API to suspend or resume a custom neural voice endpoint.
+After you've successfully created and [trained](how-to-custom-voice-create-voice.md) your voice model, you deploy it to a custom neural voice endpoint.
+
+Use the Speech Studio to [add a deployment endpoint](#add-a-deployment-endpoint) for your custom neural voice. You can use either the Speech Studio or text-to-speech REST API to [suspend or resume](#suspend-and-resume-an-endpoint) a custom neural voice endpoint.
> [!NOTE]
-> See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects. This article focuses on the creation of a professional Custom Neural Voice using the Pro project.
+> You can create up to 50 endpoints with a standard (S0) Speech resource, each with its own custom neural voice.
+
+To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same Speech resource to pass through the authentication of the text-to-speech service.
-## Create a custom neural voice endpoint
+## Add a deployment endpoint
To create a custom neural voice endpoint:
-1. On the **Deploy model** tab, select **Deploy model**.
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Deploy model** > **Deploy model**.
1. Select a voice model that you want to associate with this endpoint.
1. Enter a **Name** and **Description** for your custom endpoint.
1. Select **Deploy** to create your endpoint.
-In the endpoint table, you now see an entry for your new endpoint. It might take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
-
-You can suspend and resume your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL is retained, so you don't need to change your code in your apps.
+After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
-You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update.
+## Application settings
-> [!NOTE]
->- You can create up to 50 endpoints with a standard (S0) Speech resource, each with its own custom neural voice.
->- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same Speech resource to pass through the authentication of the text-to-speech service.
+The application settings that you use as REST API [request parameters](#request-parameters) are available on the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
-After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code.
-The custom endpoint is functionally identical to the standard endpoint that's used for text-to-speech requests. For more information, see the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).
+* The **Endpoint key** shows the Speech resource key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header.
+* The **Endpoint URL** shows your service region. Use the value that precedes `voice.speech.microsoft.com` as your service region request parameter. For example, use `eastus` if the endpoint URL is `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1`.
+* The **Endpoint URL** shows your endpoint ID. Use the value appended to the `?deploymentId=` query parameter as the value of your endpoint ID request parameter.
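+
+For example, given a hypothetical endpoint URL in the format described above, you could extract both request parameters like this (a sketch, not an official utility):
+
+```python
+from urllib.parse import urlparse, parse_qs
+
+# Hypothetical endpoint URL copied from the Deploy model tab.
+endpoint_url = (
+    "https://eastus.voice.speech.microsoft.com/cognitiveservices/v1"
+    "?deploymentId=00000000-0000-0000-0000-000000000000"
+)
+
+parsed = urlparse(endpoint_url)
+region = parsed.hostname.split(".")[0]                   # "eastus"
+endpoint_id = parse_qs(parsed.query)["deploymentId"][0]  # your endpoint ID
+
+print(region, endpoint_id)
+```
+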
-[Audio Content Creation](https://speech.microsoft.com/audiocontentcreation) is a tool that allows you to fine-tune audio output by using a friendly UI.
+## Use your custom voice
-## Copy your voice model to another project
+The custom endpoint is functionally identical to the standard endpoint that's used for text-to-speech requests.
-You can copy your voice model to another project for the same region or another region. For example, you can copy a neural voice model that was trained in one region, to a project for another region.
+One difference is that the `EndpointId` must be specified to use the custom voice via the Speech SDK. You can start with the [text-to-speech quickstart](get-started-text-to-speech.md) and then update the code with the `EndpointId` and `SpeechSynthesisVoiceName`.
-> [!NOTE]
-> Custom Neural Voice training is currently only available in some regions. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
+```csharp
+var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
+speechConfig.SpeechSynthesisVoiceName = "YourCustomVoiceName";
+speechConfig.EndpointId = "YourEndpointId";
+```
-To copy your custom neural voice model to another project:
+```cpp
+auto speechConfig = SpeechConfig::FromSubscription(speechKey, speechRegion);
+speechConfig->SetSpeechSynthesisVoiceName("YourCustomVoiceName");
+speechConfig->SetEndpointId("YourEndpointId");
+```
-1. On the **Train model** tab, select a voice model that you want to copy, and then select **Copy to project**.
+```java
+SpeechConfig speechConfig = SpeechConfig.fromSubscription(speechKey, speechRegion);
+speechConfig.setSpeechSynthesisVoiceName("YourCustomVoiceName");
+speechConfig.setEndpointId("YourEndpointId");
+```
- :::image type="content" source="media/custom-voice/cnv-model-copy.png" alt-text="Copy to project":::
+```ObjectiveC
+SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithSubscription:speechKey region:speechRegion];
+speechConfig.speechSynthesisVoiceName = @"YourCustomVoiceName";
+speechConfig.EndpointId = @"YourEndpointId";
+```
-1. Select the **Region**, **Speech resource**, and **Project** where you want to copy the model. You must have a speech resource and project in the target region, otherwise you need to create them first.
+```Python
+speech_config = speechsdk.SpeechConfig(subscription=os.environ.get('SPEECH_KEY'), region=os.environ.get('SPEECH_REGION'))
+speech_config.endpoint_id = "YourEndpointId"
+speech_config.speech_synthesis_voice_name = "YourCustomVoiceName"
+```
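+
+For example, a minimal end-to-end sketch in Python (assuming the same placeholder endpoint ID and voice name as above, credentials in the `SPEECH_KEY` and `SPEECH_REGION` environment variables, and the `azure-cognitiveservices-speech` package installed) might look like this:
+
+```python
+import os
+
+import azure.cognitiveservices.speech as speechsdk
+
+speech_config = speechsdk.SpeechConfig(
+    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"]
+)
+speech_config.endpoint_id = "YourEndpointId"
+speech_config.speech_synthesis_voice_name = "YourCustomVoiceName"
+
+# Synthesize to the default speaker.
+synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
+result = synthesizer.speak_text_async("This is the text that is spoken.").get()
+
+if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
+    print("Speech synthesized with the custom voice.")
+```
+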
- :::image type="content" source="media/custom-voice/cnv-model-copy-dialog.png" alt-text="Copy voice model":::
+To use a custom neural voice via [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md#choose-a-voice-for-text-to-speech), specify the model name as the voice name. This example uses the `YourCustomVoiceName` voice.
-1. Select **Submit** to copy the model.
-1. Select **View model** under the notification message for copy success.
-1. On the **Train model** tab, select the newly copied model and then select **Deploy model**.
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="YourCustomVoiceName">
+ This is the text that is spoken.
+ </voice>
+</speak>
+```
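+
+To send that SSML through the Speech SDK instead of the REST API, you can pass it to `speak_ssml_async`. This is a sketch with the same placeholder names and environment-variable credentials as the earlier Python example:
+
+```python
+import os
+
+import azure.cognitiveservices.speech as speechsdk
+
+speech_config = speechsdk.SpeechConfig(
+    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"]
+)
+speech_config.endpoint_id = "YourEndpointId"
+synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
+
+ssml = """<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+  <voice name="YourCustomVoiceName">
+    This is the text that is spoken.
+  </voice>
+</speak>"""
+
+# The voice name in the SSML selects your custom voice on the custom endpoint.
+result = synthesizer.speak_ssml_async(ssml).get()
+```
+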
## Switch to a new voice model in your product
Once you've updated your voice model to the latest engine version, or if you wan
## Suspend and resume an endpoint
-You can suspend or resume an endpoint, to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can use the same endpoint URL in your application to synthesize speech.
+You can suspend or resume an endpoint, to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can continue to use the same endpoint URL in your application to synthesize speech.
You can suspend and resume an endpoint in Speech Studio or via the REST API.
This section describes how to suspend or resume a custom neural voice endpoint i
This section will show you how to [get](#get-endpoint), [suspend](#suspend-endpoint), or [resume](#resume-endpoint) a custom neural voice endpoint via REST API.
-#### Application settings
-
-The application settings that you use as REST API [request parameters](#request-parameters) are available on the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
--
-* The **Endpoint key** shows the Speech resource key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header.
-* The **Endpoint URL** shows your service region. Use the value that precedes `voice.speech.microsoft.com` as your service region request parameter. For example, use `eastus` if the endpoint URL is `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1`.
-* The **Endpoint URL** shows your endpoint ID. Use the value appended to the `?deploymentId=` query parameter as the value of your endpoint ID request parameter.
#### Get endpoint
Status code: 202 Accepted
The HTTP status code for each response indicates success or common errors.
-| HTTP status code | Description | Possible reason |
-| - | -- | |
-| 200 | OK | The request was successful. |
-| 202 | Accepted | The request has been accepted and is being processed. |
+| HTTP status code | Description | Possible reason |
+| - | -- | - |
+| 200 | OK | The request was successful. |
+| 202 | Accepted | The request has been accepted and is being processed. |
| 400 | Bad Request | The value of a parameter is invalid, or a required parameter is missing, empty, or null. One common issue is a header that is too long. |
-| 401 | Unauthorized | The request isn't authorized. Check to make sure your Speech resource key or [token](rest-speech-to-text-short.md#authentication) is valid and in the correct region. |
-| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your Speech resource. |
-| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
-
-## Use your custom voice
-
-The difference between Custom voice sample codes and [Text-to-speech quickstart codes](get-started-speech-to-text.md) is that `EndpointId` must be filled in Custom Voice. So you should first build and run demo quickly by quickstart codes and then check following Custom voice sample codes to see how to set `EndpointId`.
-
-```csharp
-var speechConfig = SpeechConfig.FromSubscription(YourResourceKey, YourResourceRegion);
-speechConfig.SpeechSynthesisVoiceName = "YourCustomVoiceName";
-speechConfig.EndpointId = "YourEndpointId";
-```
-
-```cpp
-auto speechConfig = SpeechConfig::FromSubscription(YourResourceKey, YourResourceRegion);
-speechConfig->SetSpeechSynthesisVoiceName("YourCustomVoiceName");
-speechConfig->SetEndpointId("YourEndpointId");
-```
-
-```java
-SpeechConfig speechConfig = SpeechConfig.fromSubscription(YourResourceKey, YourResourceRegion);
-speechConfig.setSpeechSynthesisVoiceName("YourCustomVoiceName");
-speechConfig.setEndpointId("YourEndpointId");
-```
-
-```ObjectiveC
-SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithSubscription:speechKey region:serviceRegion];
-speechConfig.speechSynthesisVoiceName = @"YourCustomVoiceName";
-speechConfig.EndpointId = @"YourEndpointId";
-```
-
-```Python
-speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
-speech_config.endpoint_id = "YourEndpointId"
-speech_config.speech_synthesis_voice_name = "YourCustomVoiceName"
-```
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your Speech resource key or [token](rest-speech-to-text-short.md#authentication) is valid and in the correct region. |
+| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your Speech resource. |
+| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers.|
## Next steps
cognitive-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md
Before you can migrate to custom neural voice, your [application](https://aka.ms
> Even without an Azure account, you can listen to voice samples in [Speech Studio](https://aka.ms/customvoice) and determine the right voice for your business needs. 1. Learn more about our [policy on the limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and then [apply here](https://aka.ms/customneural).
-2. Once your application is approved, you will be provided with the access to the "neural" training feature. Make sure you log in to [Speech Studio](https://aka.ms/speechstudio/customvoice) using the same Azure subscription that you provide in your application.
- > [!IMPORTANT]
- > To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence. You can find the statement in multiple languages [here](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model. Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
-
- :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Upload voice talent statement":::
-
-3. After the custom neural voice model is created, deploy the voice model to a new endpoint. To create a new custom voice endpoint with your neural voice model, go to **Text-to-Speech > Custom Voice > Deploy model**. Select **Deploy models** and enter a **Name** and **Description** for your custom endpoint. Then select the custom neural voice model you would like to associate with this endpoint and confirm the deployment.
-4. Update your code in your apps if you have created a new endpoint with a new model.
+1. Once your application is approved, you'll be provided access to the "neural" training feature. Make sure you sign in to [Speech Studio](https://aka.ms/speechstudio/customvoice) using the same Azure subscription that you provided in your application.
+1. Before you can [train](how-to-custom-voice-create-voice.md) and [deploy](how-to-deploy-and-use-endpoint.md) a custom voice model, you must [create a voice talent profile](how-to-custom-voice-talent.md). The profile requires an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model.
+1. Update your code in your apps if you have created a new endpoint with a new model.
## Custom voice details (deprecated)
cognitive-services How To Speech Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis.md
keywords: text to speech
## Next steps
+* [Try the text-to-speech quickstart](get-started-text-to-speech.md)
* [Get started with Custom Neural Voice](how-to-custom-voice.md) * [Improve synthesis with SSML](speech-synthesis-markup.md)
-* [Synthesize from long-form text](long-audio-api.md) like books and news articles
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
Title: "Record custom voice samples - Speech service"
+ Title: "Recording custom voice samples - Speech service"
description: Make a production-quality custom voice by preparing a robust script, hiring good voice talent, and recording professionally.
Previously updated : 08/01/2022 Last updated : 10/14/2022
-# How to record voice samples for Custom Neural Voice
+# Recording voice samples for Custom Neural Voice
This article provides instructions on preparing high-quality voice samples for creating a professional voice model using the Custom Neural Voice Pro project.
-> [!NOTE]
-> See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects.
- Creating a high-quality production custom neural voice from scratch isn't a casual undertaking. The central component of a custom neural voice is a large collection of audio samples of human speech. It's vital that these audio recordings be of high quality. Choose a voice talent who has experience making these kinds of recordings, and have them recorded by a recording engineer using professional equipment. Before you can make these recordings, though, you need a script: the words that will be spoken by your voice talent to create the audio samples.
Print three copies of the script: one for the voice talent, one for the recordin
### Voice talent statement
-To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence.
-
-You can find the statement in multiple languages on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model.
--
-Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
+To train a neural voice, you must [create a voice talent profile](how-to-custom-voice-talent.md) with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence.
### Legalities
You can refer to below specification to prepare for the audio samples as best pr
> [!Note] > You can record at a higher sampling rate and bit depth, for example, in the format of 48 KHz 24-bit PCM. During custom neural voice training, we'll down-sample it to 24 KHz 16-bit PCM automatically.
+A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 35+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice.
+
+Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your data.
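+
+If you want a rough, unofficial estimate of SNR before submitting data, you can compare the level of the leading silence with the level of the whole recording. This sketch assumes 16-bit PCM with at least 100 ms of room tone at the start; `utterance.wav` is a placeholder file name:
+
+```python
+import wave
+
+import numpy as np
+
+with wave.open("utterance.wav", "rb") as wav_file:
+    rate = wav_file.getframerate()
+    samples = np.frombuffer(
+        wav_file.readframes(wav_file.getnframes()), dtype=np.int16
+    ).astype(np.float64)
+
+noise = samples[: rate // 10]  # first 100 ms, assumed to be room tone only
+signal_rms = np.sqrt(np.mean(samples ** 2))
+noise_rms = max(np.sqrt(np.mean(noise ** 2)), 1e-9)  # avoid division by zero
+
+print(f"Approximate SNR: {20 * np.log10(signal_rms / noise_rms):.1f} dB (aim for 35+)")
+```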
+
### Typical audio errors

For high-quality training results, avoiding audio errors is highly recommended. Audio errors usually fall within the following categories:
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The `voice` element is required. It's used to specify the voice that's used for
| Attribute | Description | Required or optional | | - | - | -- |
-| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md?tabs=stt-tts). | Required |
+| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported prebuilt voices, see [Language support](language-support.md?tabs=stt-tts). To use your [custom neural voice](how-to-deploy-and-use-endpoint.md#use-your-custom-voice), specify the model name as the voice name in SSML.| Required |
**Example**
-This example uses the `en-US-JennyNeural` voice. For a complete list of supported voices, see [Language support](language-support.md?tabs=stt-tts).
+This example uses the `en-US-JennyNeural` voice.
```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
Apply here for initial access or for a production review:
All solutions using the Azure OpenAI service are also required to go through a use case review before they can be released for production use, and are evaluated on a case-by-case basis. In general, the more sensitive the scenario the more important risk mitigation measures will be for approval.
+## Comparing Azure OpenAI and OpenAI
+
+Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
+
+With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering.
+ ## Key concepts ### Prompts & Completions
communication-services About Call Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/about-call-types.md
When a user of your application calls another user of your application over an i
### Public switched telephone network (PSTN)
-Any time your users interact with a traditional telephone number, calls are facilitated by PSTN (Public Switched Telephone Network) voice calling. To make and receive PSTN calls, you need to add telephony capabilities to your Azure Communication Services resource. In this case, signaling and media use a combination of IP-based and PSTN-based technologies to connect your users.
+Anytime your users interact with a traditional telephone number, calls are facilitated by PSTN (Public Switched Telephone Network) voice calling. To make and receive PSTN calls, you need to add telephony capabilities to your Azure Communication Services resource. In this case, signaling and media use a combination of IP-based and PSTN-based technologies to connect your users.
### One-to-one call
A one-to-one call on Azure Communication Services happens when one of your users
A group call on Azure Communication Services happens when three or more participants connect to one another. Any combination of VoIP and PSTN-connected users can be present on a group call. A one-to-one call can be converted into a group call by adding more participants to the call. One of those participants can be a bot.
+### Rooms call
+
+A call that takes place within the context of a Room. A Room is a container that manages activity between Azure Communication Services end-users. A Room offers application developers better control over *who* can join a call, *when* they meet and *how* they collaborate. To learn more about Rooms, see the [conceptual documentation](../rooms/room-concept.md).
+ ### Supported video standards We support H.264 (MPEG-4). ### Video quality
-We support up to Full HD 1080p on the native (iOS, Android) SDKs. For Web (JS) SDK we support Standard HD 720p. The quality depends on the available bandwidth.
+We support up to Full HD 1080p on the native (iOS, Android) SDKs. For Web (JS) SDK, we support Standard HD 720p. The quality depends on the available bandwidth.
## Next steps
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
The following example shows how to create a CPU scaling rule.
"minReplicas": "1", "maxReplicas": "10", "rules": [{
- "name": "cpuScalingRule",
+ "name": "cpu-scaling-rule",
"custom": { "type": "cpu", "metadata": {
The following example shows how to create a memory scaling rule.
"minReplicas": "1", "maxReplicas": "10", "rules": [{
- "name": "memoryScalingRule",
+ "name": "memory-scaling-rule",
"custom": { "type": "memory", "metadata": {
cosmos-db Timestamptodatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/timestamptodatetime.md
Previously updated : 08/18/2020 Last updated : 10/27/2022 + # TimestampToDateTime (Azure Cosmos DB)+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)] Converts the specified timestamp value to a DateTime.
TimestampToDateTime (<Timestamp>)
## Arguments
-*Timestamp*
+### Timestamp
A signed numeric value, the current number of milliseconds that have elapsed since the Unix epoch. In other words, the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
Returns the UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh
|.fffffff|seven-digit fractional seconds| |Z|UTC (Coordinated Universal Time) designator|
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
## Remarks
TimestampToDateTime will return `undefined` if the timestamp value specified is
## Examples
-The following example converts the timestamp to a DateTime:
+The following example converts the value `1,594,227,912,345` from milliseconds to a date and time of **July 8, 2020, 5:05 PM UTC**.
```sql SELECT TimestampToDateTime(1594227912345) AS DateTime
SELECT TimestampToDateTime(1594227912345) AS DateTime
```json [
- {
- "DateTime": "2020-07-08T17:05:12.3450000Z"
- }
+ {
+ "DateTime": "2020-07-08T17:05:12.3450000Z"
+ }
+]
+```
+
+This next example uses the timestamp from an existing item in a container. The item's timestamp is expressed in seconds.
+
+```json
+{
+ "id": "8cc56bd4-5b8d-450b-a576-449836171398",
+ "type": "reading",
+ "data": "temperature",
+ "value": 35.726545156,
+ "_ts": 1605862991
+}
+```
+
+To use the `_ts` value, you must multiply the value by 1,000 since the timestamp is expressed in seconds.
+
+```sql
+SELECT
+ TimestampToDateTime(r._ts * 1000) AS timestamp,
+ r.id
+FROM
+ readings r
+```
+
+```json
+[
+ {
+ "timestamp": "2020-11-20T09:03:11.0000000Z",
+ "id": "8cc56bd4-5b8d-450b-a576-449836171398"
+ }
]
-```
+```
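As a small addition illustrating the Remarks above, a non-numeric argument is one case where `TimestampToDateTime` returns `undefined`; this sketch assumes Cosmos DB's usual behavior of omitting undefined projections from the result:

```sql
SELECT TimestampToDateTime("1594227912345") AS badInput
```

```json
[
    {}
]
```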
## Next steps
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-get-started.md
dotnet build
## Connect to Azure Cosmos DB for Table
-To connect to the API for Table of Azure Cosmos DB, create an instance of the [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) class. This class is the starting point to perform all operations against tables. There are two primary ways to connect to a API for Table account using the **TableServiceClient** class:
+To connect to the API for Table of Azure Cosmos DB, create an instance of the [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) class. This class is the starting point to perform all operations against tables. There are two primary ways to connect to an API for Table account using the **TableServiceClient** class:
* [Connect with an API for Table connection string](#connect-with-a-connection-string)
-* [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
### Connect with a connection string
Create a new instance of the **TableServiceClient** class with the ``COSMOS_CONN
:::code language="csharp" source="~/azure-cosmos-db-table-dotnet-v12/101-client-connection-string/Program.cs" id="connection_string" highlight="3":::
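For readers without access to the referenced sample, here's a minimal sketch of what that snippet does; the `COSMOS_CONN_STRING` variable name comes from the surrounding text, while the exact shape of the sample is an assumption:

```csharp
using System;
using Azure.Data.Tables;

// Read the API for Table connection string from an environment variable
// (COSMOS_CONN_STRING is the name used in this walkthrough).
string connectionString = Environment.GetEnvironmentVariable("COSMOS_CONN_STRING");

// TableServiceClient is the starting point for all table operations.
TableServiceClient client = new TableServiceClient(connectionString);
```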
-### Connect using the Microsoft Identity Platform
-
-To connect to your API for Table account using the Microsoft Identity Platform and Azure AD, use a security principal. The exact type of principal will depend on where you host your application code. The table below serves as a quick reference guide.
-
-| Where the application runs | Security principal
-|--|--||
-| Local machine (developing and testing) | User identity or service principal |
-| Azure | Managed identity |
-| Servers or clients outside of Azure | Service principal |
-
-#### Import Azure.Identity
-
-The **Azure.Identity** NuGet package contains core authentication functionality that is shared among all Azure SDK libraries.
-
-Import the [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) NuGet package using the ``dotnet add package`` command.
-
-```dotnetcli
-dotnet add package Azure.Identity
-```
-
-Rebuild the project with the ``dotnet build`` command.
-
-```dotnetcli
-dotnet build
-```
-
-In your code editor, add using directives for ``Azure.Core`` and ``Azure.Identity`` namespaces.
--
-#### Create TableServiceClient with default credential implementation
-
-If you're testing on a local machine, or your application will run on Azure services with direct support for managed identities, obtain an OAuth token by creating a [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) instance.
-
-For this example, we saved the instance in a variable of type [``TokenCredential``](/dotnet/api/azure.core.tokencredential) as that's a more generic type that's reusable across SDKs.
--
-Create a new instance of the **TableServiceClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
--
-#### Create TableServiceClient with a custom credential implementation
-
-If you plan to deploy the application out of Azure, you can obtain an OAuth token by using other classes in the [Azure.Identity client library for .NET](/dotnet/api/overview/azure/identity-readme). These other classes also derive from the ``TokenCredential`` class.
-
-For this example, we create a [``ClientSecretCredential``](/dotnet/api/azure.identity.clientsecretcredential) instance by using client and tenant identifiers, along with a client secret.
--
-You can obtain the client ID, tenant ID, and client secret when you register an application in Azure Active Directory (AD). For more information about registering Azure AD applications, see [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
-
-Create a new instance of the **TableServiceClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
-- ## Build your application As you build your application, your code will primarily interact with four types of resources:
The following guides show you how to use each of these classes to build your app
## Next steps
-Now that you've connected to a API for Table account, use the next guide to create and manage tables.
+Now that you've connected to an API for Table account, use the next guide to create and manage tables.
> [!div class="nextstepaction"] > [Create a table in Azure Cosmos DB for Table using .NET](how-to-dotnet-create-table.md)
cost-management-billing Save Compute Costs Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/save-compute-costs-reservations.md
For more information, see [Self-service exchanges and refunds for Azure Reservat
## Charges covered by reservation - **Reserved Virtual Machine Instance** - A reservation only covers the virtual machine and cloud services compute costs. It doesn't cover additional software, Windows, networking, or storage charges.-- **Azure Storage reserved capacity** - A reservation covers storage capacity for standard storage accounts for Blob storage or Azure Data Lake Gen2 storage. The reservation doesn't cover bandwidth or transaction rates.
+- **Azure Blob storage reserved capacity** - A reservation covers storage capacity for Blob storage and Azure Data Lake Gen2 storage. The reservation doesn't cover bandwidth or transaction rates.
+- **Azure Files reserved capacity** - A reservation covers storage capacity for Azure Files. Reservations for hot and cool tiers don't cover bandwidth or transaction rates.
- **Azure Cosmos DB reserved capacity** - A reservation covers throughput provisioned for your resources. It doesn't cover the storage and networking charges. - **Azure Data Factory data flows** - A reservation covers integration runtime cost for the compute type and number of cores that you buy. - **SQL Database reserved vCore** - Covers both SQL Managed Instance and SQL Database Elastic Pool/single database. Only the compute costs are included with a reservation. The SQL license is billed separately.
data-share How To Share From Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/how-to-share-from-sql.md
Title: Share and receive data from Azure SQL Database and Azure Synapse Analytics description: Learn how to share and receive data from Azure SQL Database and Azure Synapse Analytics--++ Previously updated : 02/02/2022 Last updated : 10/27/2022 # Share and receive data from Azure SQL Database and Azure Synapse Analytics
To share data snapshots from your Azure SQL resources, you first need to prepare
- An Azure subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - An [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md) with tables and views that you want to share. - [An Azure Data Share account](share-your-data-portal.md#create-a-data-share-account).-- Your data recipient's Azure sign in e-mail address (using their e-mail alias won't work).
+- Your data recipient's Azure sign-in e-mail address (using their e-mail alias won't work).
- If your Azure SQL resource is in a different Azure subscription than your Azure Data Share account, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where your source Azure SQL resource is located. ### Source-specific prerequisites
You can use one of these methods to authenticate with Azure SQL Database or Azur
These prerequisites cover the authentication you'll need so Azure Data Share can connect with your Azure SQL Database: -- You'll need permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
+- You'll need permission to write to the databases on the SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
- SQL Server **Azure Active Directory Admin** permissions. - SQL Server Firewall access: 1. In the [Azure portal](https://portal.azure.com/), navigate to your SQL server. Select *Firewalls and virtual networks* from left navigation.
You can follow the [step by step demo video](https://youtu.be/hIE-TjJD8Dc) to co
1. Navigate to your Data Share Overview page.
- ![Share your data](./media/share-receive-data.png "Share your data")
+ :::image type="content" source="./media/share-receive-data.png" alt-text="Screenshot showing the data share overview.":::
1. Select **Start sharing your data**.
You can follow the [step by step demo video](https://youtu.be/hIE-TjJD8Dc) to co
1. Fill out the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
- ![EnterShareDetails](./media/enter-share-details.png "Enter Share details")
+ :::image type="content" source="./media/enter-share-details.png" alt-text="Screenshot of the share creation page in Azure Data Share, showing the share name, type, description, and terms of use filled out.":::
1. Select **Continue**. 1. To add Datasets to your share, select **Add Datasets**.
- ![Add Datasets to your share](./media/datasets.png "Datasets")
+ :::image type="content" source="./media/datasets.png" alt-text="Screenshot of the datasets page in share creation, the add datasets button is highlighted.":::
1. Select the dataset type that you would like to add. There will be a different list of dataset types depending on the share type (snapshot or in-place) you selected in the previous step.
- ![AddDatasets](./media/add-datasets.png "Add Datasets")
+ :::image type="content" source="./media/add-datasets.png" alt-text="Screenshot showing the available dataset types.":::
1. Select your SQL server or Synapse workspace. If you're using Azure Active Directory authentication and the checkbox **Allow Data Share to run the above 'create user' SQL script on my behalf** appears, check the checkbox. If you're using SQL authentication, provide credentials, and be sure you've followed the prerequisites so that you have permissions. Select **Next** to navigate to the object you would like to share and select 'Add Datasets'. You can select tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL DW), or tables from Azure Synapse Analytics (workspace) dedicated SQL pool.
- ![SelectDatasets](./media/select-datasets-sql.png "Select Datasets")
+ :::image type="content" source="./media/select-datasets-sql.png" alt-text="Screenshot showing the Azure SQL Database dataset window with a sql server selected.":::
-1. In the Recipients tab, enter in the email addresses of your Data Consumer by selecting '+ Add Recipient'. The email address needs to be recipient's Azure sign in email.
+1. In the Recipients tab, enter the email addresses of your Data Consumer by selecting '+ Add Recipient'. The email address needs to be the recipient's Azure sign-in email.
- ![AddRecipients](./media/add-recipient.png "Add recipients")
+ :::image type="content" source="./media/add-recipient.png" alt-text="Screenshot of the recipients page, showing a recipient added.":::
1. Select **Continue**. 1. If you have selected snapshot share type, you can configure snapshot schedule to provide updates of your data to your data consumer.
- ![EnableSnapshots](./media/enable-snapshots.png "Enable snapshots")
+ :::image type="content" source="./media/enable-snapshots.png" alt-text="Screenshot of the settings page, showing the snapshot toggle enabled.":::
1. Select a start time and recurrence interval.
Select your resource type and follow the steps:
If you choose to receive data into Azure Storage, complete these prerequisites before accepting a data share: - An [Azure Storage account](../storage/common/storage-account-create.md).-- Permission to write to the storage account: *Microsoft.Storage/storageAccounts/write*. This permission exists in the Azure RBAC **Contributor** role.-- Permission to add role assignment of the Data Share resource's managed identity to the storage account: which is present in *Microsoft.Authorization/role assignments/write*. This permission exists in the Azure RBAC **Owner** role.
+- Permission to write to the storage account: *Microsoft.Storage/storageAccounts/write*. This permission exists in the **Contributor** role.
+- Permission to add a role assignment for the Data Share resource's managed identity to the storage account: *Microsoft.Authorization/role assignments/write*. This permission exists in the **Owner** role.
<a id="prerequisitesforreceivingtoazuresqlorsynapse"></a> ### Prerequisites for receiving data into Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)
If you choose to receive data into Azure Storage, complete these prerequisites b
For a SQL server where you're the **Azure Active Directory admin** of the SQL server, complete these prerequisites before accepting a data share: - An [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).-- Permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
+- Permission to write to the databases on the SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
- SQL Server Firewall access: 1. In the [Azure portal](https://portal.azure.com/), navigate to your SQL server. Select **Firewalls and virtual networks** from left navigation. 1. Select **Yes** for *Allow Azure services and resources to access this server*.
For a SQL server where you're **not** the **Azure Active Directory admin**, comp
You can follow the [step by step demo video](https://youtu.be/aeGISgK1xro), or the steps below to configure prerequisites. - An [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).-- Permission to write to databases on the SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
+- Permission to write to databases on the SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
- Permission for the Data Share resource's managed identity to access the Azure SQL Database or Azure Synapse Analytics: 1. In the [Azure portal](https://portal.azure.com/), navigate to the SQL server and set yourself as the **Azure Active Directory Admin**. 1. Connect to the Azure SQL Database/Data Warehouse using the [Query Editor](/azure/azure-sql/database/connect-query-portal#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
You can follow the [step by step demo video](https://youtu.be/aeGISgK1xro), or t
### Prerequisites for receiving data into Azure Synapse Analytics (workspace) SQL pool - An Azure Synapse Analytics (workspace) dedicated SQL pool. Receiving data into serverless SQL pool isn't currently supported.-- Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the Azure RBAC **Contributor** role.
+- Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the **Contributor** role.
- Permission for the Data Share resource's managed identity to access the Synapse workspace SQL pool: 1. In the [Azure portal](https://portal.azure.com/), navigate to Synapse workspace. 1. Select SQL Active Directory admin from left navigation and set yourself as the **Azure Active Directory admin**.
To open an invitation from Azure portal directly, search for **Data Share Invita
If you're a guest user on a tenant, you'll need to verify your email address for the tenant before viewing a Data Share invitation for the first time. Once verified, your email is valid for 12 months.
-![List of Invitations](./media/invitations.png "List of invitations")
Then, select the share you would like to view.
Then, select the share you would like to view.
1. Make sure all fields are reviewed, including the **Terms of Use**. If you agree to the terms of use, you'll be required to check the box to indicate you agree.
- ![Terms of use](./media/terms-of-use.png "Terms of use")
+ :::image type="content" source="./media/terms-of-use.png" alt-text="Screenshot of the invitation acceptance page, showing the terms of use highlighted and the agreement selected.":::
1. Under *Target Data Share Account*, select the Subscription and Resource Group that you'll be deploying your Data Share into.
Then, select the share you would like to view.
1. Once you've agreed to the terms of use and specified a Data Share account to manage your received share, select **Accept and configure**. A share subscription will be created.
- ![Accept options](./media/accept-options.png "Accept options")
+ :::image type="content" source="./media/accept-options.png" alt-text="Screenshot of the acceptance page, showing the target data share account information filled out.":::
If you don't want to accept the invitation, select *Reject*.
Follow the steps below to configure where you want to receive data.
1. Select the **Datasets** tab. Check the box next to the dataset you'd like to assign a destination to. Select **+ Map to target** to choose a target data store.
- ![Map to target](./media/dataset-map-target.png "Map to target")
+ :::image type="content" source="./media/dataset-map-target.png" alt-text="Screenshot of the received shares page with the map to target button highlighted.":::
1. Select the target resource to store the shared data. Any data files or tables in the target data store with the same path and name will be overwritten. If you're receiving data into a SQL store and the **Allow Data Share to run the above 'create user' SQL script on my behalf** checkbox appears, check the checkbox. Otherwise, follow the instructions in the prerequisites to run the script that appears on the screen. This gives the Data Share resource write permission to your target SQL DB.
- ![Target storage account](./media/dataset-map-target-sql.png "Target Data Store")
+ :::image type="content" source="./media/dataset-map-target-sql.png" alt-text="Screenshot of the map datasets to target window, showing available targets in the dropdown.":::
1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular updates to the data, you can also enable snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**. > [!NOTE] > The first scheduled snapshot will start within one minute of the schedule time and the next snapshots will start within seconds of the scheduled time.
- ![Enable snapshot schedule](./media/enable-snapshot-schedule.png "Enable snapshot schedule")
+ :::image type="content" source="./media/enable-snapshot-schedule.png" alt-text="Screenshot showing the snapshot schedule tab with the enable button selected.":::
### Trigger a snapshot
These steps only apply to snapshot-based sharing.
1. You can trigger a snapshot by selecting **Details** tab followed by **Trigger snapshot**. Here, you can trigger a full snapshot of your data. If it's your first time receiving data from your data provider, select full copy. When a snapshot is executing, the next snapshots won't start until the previous one is complete.
- ![Trigger snapshot](./media/trigger-snapshot.png "Trigger snapshot")
+ :::image type="content" source="./media/trigger-snapshot.png" alt-text="Screenshot of the received shares page, showing the trigger snapshot dropdown selected and the full copy option highlighted.":::
1. When the last run status is *successful*, go to target data store to view the received data. Select **Datasets**, and select the link in the Target Path.
- ![Consumer datasets](./media/consumer-datasets.png "Consumer dataset mapping")
+ :::image type="content" source="./media/consumer-datasets.png" alt-text="Screenshot of the datasets tab showing a successful dataset selected.":::
### View history
data-share How To Share From Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/how-to-share-from-storage.md
Title: Share and receive data from Azure Blob Storage and Azure Data Lake Storage description: Learn how to share and receive data from Azure Blob Storage and Azure Data Lake Storage.--++ Previously updated : 02/02/2022 Last updated : 10/27/2022 # Share and receive data from Azure Blob Storage and Azure Data Lake Storage
Existing files that have the same name are overwritten during a snapshot. A file
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [An Azure Data Share account](share-your-data-portal.md#create-a-data-share-account).-- Your data recipient's Azure sign in e-mail address (using their e-mail alias won't work).
+- Your data recipient's Azure sign-in e-mail address (using their e-mail alias won't work).
- If your Azure SQL resource is in a different Azure subscription than your Azure Data Share account, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where your source Azure SQL resource is located. ### Prerequisites for the source storage account
Existing files that have the same name are overwritten during a snapshot. A file
1. Provide the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
- ![Screenshot showing data share details.](./media/enter-share-details.png "Enter the data share details.")
+ :::image type="content" source="./media/enter-share-details.png" alt-text="Screenshot of the share creation page in Azure Data Share, showing the share name, type, description, and terms of use filled out.":::
1. Select **Continue**. 1. To add datasets to your share, select **Add Datasets**.
- ![Screenshot showing how to add datasets to your share.](./media/datasets.png "Datasets.")
+ :::image type="content" source="./media/datasets.png" alt-text="Screenshot of the datasets page in share creation, the add datasets button is highlighted.":::
1. Select a dataset type to add. The list of dataset types depends on whether you selected snapshot-based sharing or in-place sharing in the previous step.
- ![Screenshot showing where to select a dataset type.](./media/add-datasets.png "Add datasets.")
+ :::image type="content" source="./media/add-datasets.png" alt-text="Screenshot showing the available dataset types.":::
1. Go to the object you want to share. Then select **Add Datasets**.
- ![Screenshot showing how to select an object to share.](./media/select-datasets.png "Select datasets.")
+ :::image type="content" source="./media/select-datasets.png" alt-text="Screenshot of the select datasets page, showing a folder selected.":::
1. On the **Recipients** tab, add the email address of your data consumer by selecting **Add Recipient**.
- ![Screenshot showing how to add recipient email addresses.](./media/add-recipient.png "Add recipients.")
+ :::image type="content" source="./media/add-recipient.png" alt-text="Screenshot of the recipients page, showing a recipient added.":::
1. Select **Continue**. 1. If you selected a snapshot share type, you can set up the snapshot schedule to update your data for the data consumer.
- ![Screenshot showing the snapshot schedule settings.](./media/enable-snapshots.png "Enable snapshots.")
+ :::image type="content" source="./media/enable-snapshots.png" alt-text="Screenshot of the settings page, showing the snapshot toggle enabled.":::
1. Select a start time and recurrence interval.
You can open an invitation from email or directly from the [Azure portal](https:
If you're a guest user of a tenant, you'll be asked to verify your email address for the tenant prior to viewing Data Share invitation for the first time. Once verified, it's valid for 12 months.
- ![Screenshot showing the list of invitations in the Azure portal.](./media/invitations.png "List of invitations.")
+ :::image type="content" source="./media/invitations.png" alt-text="Screenshot of the invitations page, showing a pending invitation.":::
1. Select the share you want to view.
You can open an invitation from email or directly from the [Azure portal](https:
1. Review all of the fields, including the **Terms of use**. If you agree to the terms, select the check box.
- ![Screenshot showing the Terms of use area.](./media/terms-of-use.png "Terms of use.")
+ :::image type="content" source="./media/terms-of-use.png" alt-text="Screenshot of the invitation acceptance page, showing the terms of use highlighted and the agreement selected.":::
1. Under **Target Data Share account**, select the subscription and resource group where you'll deploy your Data Share. Then fill in the following fields:
You can open an invitation from email or directly from the [Azure portal](https:
1. Select **Accept and configure**. A share subscription is created.
- ![Screenshot showing where to accept the configuration options.](./media/accept-options.png "Accept options")
+ :::image type="content" source="./media/accept-options.png" alt-text="Screenshot of the acceptance page, showing the target data share account information filled out.":::
The received share appears in your Data Share account.
You can open an invitation from email or directly from the [Azure portal](https:
1. On the **Datasets** tab, select the check box next to the dataset where you want to assign a destination. Select **Map to target** to choose a target data store.
- ![Screenshot showing how to map to a target.](./media/dataset-map-target.png "Map to target.")
+ :::image type="content" source="./media/dataset-map-target.png" alt-text="Screenshot of the received shares page with the map to target button highlighted.":::
1. Select a target data store for the data. Files in the target data store that have the same path and name as files in the received data will be overwritten.
- ![Screenshot showing where to select a target storage account.](./media/map-target.png "Target storage.")
+ :::image type="content" source="./media/map-target.png" alt-text="Screenshot of the map datasets to target window, showing a filesystem name given.":::
1. For snapshot-based sharing, if the data provider uses a snapshot schedule to regularly update the data, you can enable the schedule from the **Snapshot Schedule** tab. Select the box next to the snapshot schedule. Then select **Enable**. The first scheduled snapshot will start within one minute of the schedule time and subsequent snapshots will start within seconds of the scheduled time.
- ![Screenshot showing how to enable a snapshot schedule.](./media/enable-snapshot-schedule.png "Enable snapshot schedule.")
+ :::image type="content" source="./media/enable-snapshot-schedule.png" alt-text="Screenshot showing the snapshot schedule tab with the enable button selected.":::
### Trigger a snapshot
The steps in this section apply only to snapshot-based sharing.
1. You can trigger a snapshot from the **Details** tab. On the tab, select **Trigger snapshot**. You can choose to trigger a full snapshot or incremental snapshot of your data. If you're receiving data from your data provider for the first time, select **Full copy**. When a snapshot is executing, subsequent snapshots won't start until the previous one completes.
- ![Screenshot showing the Trigger snapshot selection.](./media/trigger-snapshot.png "Trigger snapshot.")
+ :::image type="content" source="./media/trigger-snapshot.png" alt-text="Screenshot of the received shares page, showing the trigger snapshot dropdown selected and the full copy option highlighted.":::
1. When the last run status is *successful*, go to the target data store to view the received data. Select **Datasets**, and then select the target path link.
- ![Screenshot showing a consumer dataset mapping.](./media/consumer-datasets.png "Consumer dataset mapping.")
+ :::image type="content" source="./media/consumer-datasets.png" alt-text="Screenshot of the datasets tab showing a successful dataset selected.":::
### View history
data-share Supported Data Stores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/supported-data-stores.md
Title: Supported data stores in Azure Data Share description: Learn about the data stores that are supported for use in Azure Data Share. --++ Previously updated : 09/10/2021 Last updated : 10/27/2022 # Supported data stores in Azure Data Share Azure Data Share provides open and flexible data sharing, including the ability to share from and to different data stores. Data providers can share data from one type of data store, and data consumers can choose a data store to receive the data.
-In this article, you'll learn about the rich set of Azure data stores that Azure Data Share supports. You'll also learn about how data providers and data consumers can combine different data stores.
+In this article, you'll learn about the set of Azure data stores that Azure Data Share supports. You'll also learn about how data providers and data consumers can combine different data stores.
## Supported data stores
The following table explains the combinations and options that data consumers ca
| Data Explorer ||||||| ✓ | ## Share from a storage account
-Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. You can share block, append, or page blobs, and they are received as block blobs.
+
+Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. You can share block, append, or page blobs, and they're received as block blobs.
When file systems, containers, or folders are shared in snapshot-based sharing, data consumers can choose to make a full copy of the shared data. Or they can use the incremental snapshot capability to copy only new files or updated files.
An incremental snapshot is based on the last-modified time of the files. Existin
If a snapshot is interrupted and fails, for example, due to a cancel action, networking issue, or disaster, the next incremental snapshot copies files that have a last-modified time greater than the time of the last successful snapshot.
-For more information, see [Share and receive data from Azure Blob Storage and Azure Data Lake Storage](how-to-share-from-storage.md).
+For more information, see [Share and receive data from Azure Blob Storage and Azure Data Lake Storage](how-to-share-from-storage.md).
## Share from a SQL-based source
-Azure Data Share supports the sharing of both tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL Data Warehouse). It supports the sharing of tables from Azure Synapse Analytics (workspace) dedicated SQL pool. Sharing from Azure Synapse Analytics (workspace) serverless SQL pool isn't currently supported.
+
+Azure Data Share supports the sharing of both tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL Data Warehouse). It supports the sharing of tables from Azure Synapse Analytics (workspace) dedicated SQL pool. Sharing from Azure Synapse Analytics (workspace) serverless SQL pool isn't currently supported.
Data consumers can choose to accept the data into Azure Data Lake Storage Gen2 or Azure Blob Storage as a CSV file or parquet file. They can also accept data as tables into Azure SQL Database and Azure Synapse Analytics.
When consumers accept data into Azure Data Lake Storage Gen2 or Azure Blob Stora
If a snapshot is interrupted and fails, for example, due to a cancel action, networking issue, or disaster, the next snapshot copies the entire table or view again.
-For more information, see [Share and receive data from Azure SQL Database and Azure Synapse Analytics](how-to-share-from-sql.md).
+For more information, see [Share and receive data from Azure SQL Database and Azure Synapse Analytics](how-to-share-from-sql.md).
## Share from Data Explorer
-Azure Data Share supports the ability to share databases in-place from Azure Data Explorer clusters. A data provider can share at the level of the database or the cluster. If you are using Data Share API to share data, you can also share specific tables.
+
+Azure Data Share supports the ability to share databases in-place from Azure Data Explorer clusters. A data provider can share at the level of the database or the cluster. If you're using Data Share API to share data, you can also share specific tables.
When data is shared at the database level, data consumers can access only the databases that the data provider shared. When a provider shares data at the cluster level, data consumers can access all of the databases from the provider's cluster, including any future databases that the data provider creates.
To access shared databases, data consumers need their own Azure Data Explorer cl
When a sharing relationship is established, Azure Data Share creates a symbolic link between the provider's cluster and the consumer's cluster. Data that's ingested into the source cluster by using batch mode appears on the target cluster within a few minutes.
-For more information, see [Share and receive data from Azure Data Explorer](/azure/data-explorer/data-share).
+For more information, see [Share and receive data from Azure Data Explorer](/azure/data-explorer/data-share).
## Next steps
databox-online Azure Stack Edge Zero Touch Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-zero-touch-provisioning.md
+
+ Title: Use a config file to deploy an Azure Stack Edge device | Microsoft Docs
+description: Describes how to use PowerShell to provision and activate Azure Stack Edge devices.
+++++ Last updated : 10/26/2022++
+# Use a config file to deploy an Azure Stack Edge device
++
+This article describes how to automate initial device configuration and activation of Azure Stack Edge devices using PowerShell. You can automate and standardize device configuration of one or more devices before they're activated.
+
+Use this method as an alternative to the local web user interface setup sequence. You can run as many rounds of device configuration as necessary, until the device is activated. After device activation, use the Azure portal user interface or the device local web user interface to modify device configuration.
+
+## Usage considerations
+
+- You can apply configuration changes to a device until it's activated. To change device configuration after activation or to manage devices using the local web user interface, see [Connect to Azure Stack Edge Pro with GPU](azure-stack-edge-gpu-deploy-connect.md?pivots=single-node).
+- You can't change device authentication using this method. To change device authentication settings, see [Change device password](azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#change-device-password).
+- You can only provision single-node devices using this method. Two-node cluster configuration isn't supported.
+- You can apply individual configuration changes to a device using PowerShell cmdlets, or you can apply bulk configuration changes using a JSON file.
+
+## About device setup and configuration
+
+Device setup and configuration declarations define the configuration for that device using a root-level "Device" identifier. Declarations supported for Azure Stack Edge devices include:
+- Device endpoint
+- Password
+- Certificates
+- Encryption at rest
+- Web proxy
+- Network
+- Time
+- Update
+- Activation
+
+A device configuration operation doesn't have to include every declaration; you can include only the declarations that create a desired configuration for your device.
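For example, a package that changes only the time zone could be built from a single declaration. A sketch using the cmdlets listed below; the time zone value is a placeholder:

```azurepowershell
# Build a package containing only a Time declaration; all other settings are left unchanged.
$time = New-Object PSObject -Property @{ TimeZone = "Pacific Standard Time" }
$pkg = New-Package -Time $time
```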
+
+The following PowerShell cmdlets are supported to configure Azure Stack Edge devices:
+
+|Cmdlet|Description|
+|||
+|Set-Login|First-time sign-in; set or change sign-in credentials to access the device.|
+|Get-DeviceConfiguration|Fetch the current device configuration.|
+|Set-DeviceConfiguration|Change the device configuration.|
+|New-Package|Prepare a device setup configuration package to apply to one or more devices.|
+|Get-DeviceConfigurationStatus|Fetch the status of in-flight configuration changes being applied to the device to determine whether the operation succeeded, failed, or is still in progress.|
+|Get-DeviceDiagnostic|Fetch diagnostic status of the device.|
+|Start-DeviceDiagnostic|Start a new diagnostic run to verify status after a device setup configuration package is applied.|
+|To-json|A utility command that formats the cmdlet response as JSON.|
+
+## Prerequisites
+
+Before you begin, make sure that you:
+
+1. Have a client running Windows 10 or later, or Windows Server 2016 or later.
+1. Are running PowerShell version 5.1 or later.
+1. Are connected to the local web UI of an Azure Stack Edge device. For more information, see [Connect to Azure Stack Edge Pro with GPU](azure-stack-edge-gpu-deploy-connect.md?pivots=single-node).
+1. Have downloaded the [PowerShell module](https://aka.ms/aseztp-ps).
+
+## Import the module and sign into the device
+
+Use the following steps to import the PowerShell module and sign into the device.
+
+1. Run PowerShell as an administrator.
+1. Import the PowerShell module.
+
+ ```azurepowershell
+ Import-Module "<Local path to PowerShell module>"\ZtpRestHelpers.ps1
+ ```
+
+1. Sign in to the device using the ```Set-Login``` cmdlet. The first sign-in to the device requires a password reset.
+
+ ```azurepowershell
+ Set-Login "https://<IP address>" "<Password1>" "<NewPassword>"
+ ```
+
+## Change password and fetch the device configuration
+
+Use the following steps to sign into a device, change the password, and fetch the device configuration:
+
+1. Sign into the device and change the device password.
+
+ ```azurepowershell
+ Set-Login "https://<IP address>" "<CurrentPassword>" "<NewPassword>"
+ ```
+
+1. Fetch the device configuration.
+
+ ```azurepowershell
+ Get-DeviceConfiguration | To-json
+ ```
+
+## Apply initial configuration to a device
+
+Use the following steps to create a device configuration package in PowerShell and then apply the configuration to one or more devices.
+
+Run the following cmdlets in PowerShell:
+
+1. Sign into the device.
+
+ ```azurepowershell
+ Set-Login "https://<IP address>" "<Password>"
+ ```
+
+1. Set the time object properties.
+
+ ```azurepowershell
+ $time = New-Object PSObject -Property @{ TimeZone = "Hawaiian Standard Time" }
+ ```
+
+1. Set the update object properties.
+
+ ```azurepowershell
+ $update = New-Object PSObject -Property @{ ServerType = "MicrosoftUpdate" }
+ ```
+
+1. Create a package with the new time and update settings.
+
+ ```azurepowershell
+ $pkg = New-Package -Time $time -Update $update
+ ```
+
+1. Run the package.
+
+ ```azurepowershell
+ $newCfg = Set-DeviceConfiguration -DesiredDeviceConfig $pkg
+ ```
+
+1. Verify that the operation is complete.
+
+ ```azurepowershell
+ Get-DeviceConfigurationStatus | To-json
+ ```
+ Here's an example output:
+
+ ```output
+ PS C:\> Get-DeviceConfigurationStatus | To-json
+ {
+ "deviceConfiguration": {
+ "status": "Complete",
+ "results": [
+ {
+ "declarationName": "Time",
+ "resultCode": "Success",
+ "errorCode": "None",
+ "message": null
+ },
+ {
+ "declarationName": "Update",
+ "resultCode": "Success",
+ "errorCode": "None",
+ "message": null
+ }
+ ]
+ }
+ }
+ PS C:\>
+
+ ```
+
+1. After the operation is complete, fetch the new device configuration.
+
+ ```azurepowershell
+ Get-DeviceConfiguration | To-json
+ ```
+
+1. Save the device configuration as a JSON file.
+
+ ```azurepowershell
+ Get-DeviceConfiguration | To-json | Out-File "<Local path>\TestConfig2.json"
+ ```
+
+1. After saving device configuration settings to a JSON file, you can use steps in the following section to apply those device configuration settings to one or more devices that aren't yet activated.
+
+## Apply a configuration to a device using a JSON file, without device activation
+
+Once a config.json file with the desired configuration has been created, as in the previous example, use the JSON file to change configuration settings on one or more devices that aren't yet activated.
+
+> [!NOTE]
+> Use a config.json file that meets the needs of your organization. A [sample config.json file is available here](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/tree/master/ZTP/).
+
+This sequence of PowerShell cmdlets signs into the device, applies device configuration settings from a JSON file, verifies completion of the operation, and then fetches the new device configuration.
+
+Run the following cmdlets in PowerShell:
+
+1. Sign into the device.
+
+ ```azurepowershell
+ Set-Login "https://<IP address>" "<Password>"
+ ```
+
+1. Before you run the device configuration operation, ensure that the JSON file uses the node.id of the device to be changed.
+
+ > [!NOTE]
+ > Each device has a unique node.id. To change device configuration settings, the node.id in the JSON file must match the node.id of the device to be changed.
+
+ Fetch the node.id from the device with the following command in PowerShell:
+
+ ```azurepowershell
+ Get-DeviceConfiguration | To-json
+ ```
+
+ Here's an example of output showing node.id for the device:
+
+ ```output
+
+ PS C:\> Get-DeviceConfiguration | To-json
+ {
+ "device": {
+ "deviceInfo": {
+ "model": "Azure Stack Edge",
+ "softwareVersion": "2.2.2075.5523",
+ "serialNumber": "1HXQG13",
+ "isActivated": false,
+ "nodes": [
+ {
+ "id": "d0d8cb16-60d4-4970-bb65-b9d254d1a289",
+ "name": "1HXQG13"
+ }
+ ]
+ },
+ ```
+
+1. Create a package that uses a local JSON file for device configuration settings.
+
+ ```azurepowershell
+ $p = Get-Content -Path "<Local path>\<ConfigFileName.json>" | ConvertFrom-json
+ ```
+
+1. Run the package.
+
+ ```azurepowershell
+ $newCfg = Set-DeviceConfiguration -DesiredDeviceConfig $p
+ ```
+
+1. Monitor status of the operation. It may take 10 minutes or more for the changes to complete.
+
+ ```azurepowershell
+ Get-DeviceConfigurationStatus | To-json
+ ```
+
+1. After the operation is complete, fetch the new device configuration.
+
+ ```azurepowershell
+ Get-DeviceConfiguration | To-json
+ ```
+
+## Activate a device
+
+Use the following steps to activate an Azure Stack Edge device. Note that activation can't be undone, and a device activation key can't be reused or applied to a different device.
+
+1. Retrieve the activation key for your device. For detailed steps, see [Create a management resource, and Get the activation key](azure-stack-edge-gpu-deploy-prep.md#create-a-management-resource-for-each-device) sections.
+
+1. Sign into the device.
+
+ ```azurepowershell
+ Set-Login "https://<IP address>" "Password"
+ ```
+
+1. Set the ActivationKey property.
+
+ ```azurepowershell
+ $ActivationKey = "<Activation key>"
+ ```
+1. Create an activation object and set the activationKey property.
+
+ ```azurepowershell
+ $activation = New-Object PsObject -Property @{ActivationKey=$ActivationKey; ServiceEncryptionKey=""}
+ ```
+
+1. Create a package with the activation object and activation key.
+
+ ```azurepowershell
+ $p = New-Package -Activation $activation
+ ```
+
+1. Run the package.
+
+ ```azurepowershell
+ $newCfg = Set-DeviceConfiguration -DesiredDeviceConfig $p
+ ```
+
+1. Monitor status of the operation. It may take 10 minutes or more for the changes to complete.
+
+ ```azurepowershell
+ Get-DeviceConfigurationStatus | To-json
+ ```
+
+1. After the operation is complete, fetch the new device configuration.
+
+ ```azurepowershell
+ Get-DeviceConfiguration | To-json
+ ```
+
+ Here's an example of output showing device activation status:
+
+ ```output
+ PS C:\> Get-DeviceConfiguration | To-json
+ {
+ "device": {
+ "deviceInfo": {
+ "model": "Azure Stack Edge",
+ "softwareVersion": "2.2.2075.5523",
+ "serialNumber": "1HXQJ23",
+ "isActivated": true,
+ "nodes": [
+ {
+ "id": "d0d8ca16-60d4-4970-bb65-b9d254d1a289",
+ "name": "1HXQG13"
+ }
+ ]
+ },
+
+ ```
+
+## Quickly fetch or change device configuration settings
+
+Use the following steps to sign in to the device, fetch the status of the webProxy properties, set the webProxy property to "isEnabled = true" and set the webProxy URI, and then fetch the status of the changed webProxy properties. After running the package, verify the new device configuration.
+
+1. Sign into the device.
+
+ ```azurepowershell
+ Set-Login "https://<IP address>" "Password"
+ ```
+
+1. Fetch the current device configuration into a variable.
+
+ ```azurepowershell
+ $p = Get-DeviceConfiguration
+ ```
+
+1. Fetch the status of the webProxy properties.
+
+ ```azurepowershell
+ $p.device.webproxy
+ ```
+
+ Here's a sample output:
+
+ ```output
+ PS C:\> $p.device.webproxy
+
+ isEnabled : False
+ connectionURI : null
+ authentication : None
+ username :
+ password :
+ ```
+
+1. Set the webProxy property to "isEnabled = true" and set the webProxy URI.
+
+ ```azurepowershell
+ $p.device.webproxy.isEnabled = $true
+ $p.device.webproxy.connectionURI = "<specify a URI depending on the geographic location of the device>"
+ ```
+
+1. Fetch the status of the updated webProxy properties.
+
+ ```azurepowershell
+ $p.device.webproxy
+ ```
+
+ Here's a sample output showing the updated properties:
+
+ ```output
+ PS C:\> $p.device.webproxy
+
+ isEnabled : True
+ connectionURI : http://10.57.48.82:8080
+ authentication : None
+ username :
+ password :
+ ```
+
+1. Run the package with updated webProxy properties.
+
+ ```azurepowershell
+ $newCfg = Set-DeviceConfiguration -DesiredDeviceConfig $p
+ ```
+
+1. Monitor status of the operation. It may take 10 minutes or more for the changes to complete.
+
+ ```azurepowershell
+ Get-DeviceConfigurationStatus | To-json
+ ```
+
+1. After the operation is complete, fetch the new device configuration.
+
+ ```azurepowershell
+ Get-DeviceConfiguration | To-json
+ ```
+
+ Here's an example of output showing the updated webProxy properties:
+
+ ```output
+ "webProxy": {
+ "isEnabled": true,
+ "connectionURI": "http://10.57.48.82:8080",
+ "authentication": "None",
+ "username": null,
+ "password": null
+ }
+ ```
+
+## Run device diagnostics
+
+Use the following steps to sign into the device and run device diagnostics to verify status after you apply a device configuration package.
+
+1. Sign into the device.
+
+ ```azurepowershell
+ Set-Login "https://<IP address>" "Password"
+ ```
+
+1. Run device diagnostics.
+
+ ```azurepowershell
+ Start-DeviceDiagnostic
+ ```
+1. Fetch the status of the device diagnostics operation.
+
+ ```azurepowershell
+ Get-DeviceDiagnostic | To-json
+ ```
+ Here's an example of output showing device diagnostics:
+
+ ```output
+ PS C:\> Get-DeviceDiagnostic | To-json
+ {
+ "lastRefreshTime": "2022-09-27T20:12:10.643768Z",
+ "status": "Complete",
+ "diagnostics": [
+ {
+ "test": "System software",
+ "category": "Software",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Disks",
+ "category": "Hardware, Disk",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Power Supply Units",
+ "category": "Hardware",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Network interfaces",
+ "category": "Hardware",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "CPUs",
+ "category": "Hardware",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Network settings ",
+ "category": "Logical, Network",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Internet connectivity",
+ "category": "Logical, Network",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Web proxy",
+ "category": "Logical, Network",
+ "status": "NotApplicable",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Time sync ",
+ "category": "Logical, Time",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Azure portal connectivity",
+ "category": "Logical, Network, AzureConnectivity",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Azure storage account credentials",
+ "category": "Logical, AzureConnectivity",
+ "status": "NotApplicable",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Software update readiness",
+ "category": "Logical, Update",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "User passwords",
+ "category": "Logical, PasswordExpiry",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Azure consistent services health check",
+ "category": "ACS",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Certificates",
+ "category": "Certificates",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Azure container read/write",
+ "category": "Logical, Network, AzureConnectivity",
+ "status": "NotApplicable",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Azure Edge compute runtime",
+ "category": "Logical, AzureEdgeCompute",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ },
+ {
+ "test": "Compute acceleration",
+ "category": "Hardware, Logical",
+ "status": "Succeeded",
+ "recommendedActions": ""
+ }
+ ]
+ }
+
+ ```
+
+## Troubleshooting
+
+- [Run diagnostics or collect logs to troubleshoot Azure Stack Edge device issues](azure-stack-edge-gpu-troubleshoot.md).
+
+## Next steps
+
+- [Troubleshoot device activation issues](azure-stack-edge-gpu-troubleshoot-activation.md).
+- [Troubleshoot Azure Resource Manager issues](azure-stack-edge-gpu-troubleshoot-azure-resource-manager.md).
+- [Troubleshoot Blob storage issues](azure-stack-edge-gpu-troubleshoot-blob-storage.md).
databox Data Box Disk Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-ordered.md
Previously updated : 07/10/2022 Last updated : 10/21/2022 # Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure. # Tutorial: Order an Azure Data Box Disk
-Azure Data Box Disk is a hybrid cloud solution that allows you to import your on-premises data into Azure in a quick, easy, and reliable way. You transfer your data to solid-state disks (SSDs) supplied by Microsoft and ship the disks back. This data is then uploaded to Azure.
+Azure Data Box Disk is a hybrid cloud solution that allows you to import your on-premises data into Azure in a quick, easy, and reliable way. You transfer your data to solid-state disks (SSDs) supplied by Microsoft and ship the disks back. This data is then uploaded to Azure.
This tutorial describes how you can order an Azure Data Box Disk. In this tutorial, you learn about:
This tutorial describes how you can order an Azure Data Box Disk. In this tutori
> > * Order a Data Box Disk > * Track the order
-> * Cancel the order
+> * Cancel the order
## Prerequisites
Take the following steps to order Data Box Disk.
|Setting|Value| |||
- |Subscription|Select the subscription for which Data Box service is enabled.<br> The subscription is linked to your billing account. |
|Transfer type| Import to Azure|
+ |Subscription|Select the subscription for which Data Box service is enabled.<br> The subscription is linked to your billing account. |
+ |Resource group| Select the resource group you want to use to order a Data Box. <br> A resource group is a logical container for the resources that can be managed or deployed together.|
|Source country/region | Select the country/region where your data currently resides.| |Destination Azure region|Select the Azure region where you want to transfer data.|
Take the following steps to order Data Box Disk.
![Select Data Box Disk option 2](media/data-box-disk-deploy-ordered/select-data-box-sku-zoom.png)
-5. In **Order**, specify the **Order details**. Enter or select the following information.
+5. In **Order**, specify the **Order details** in the **Basics** tab. Enter or select the following information.
+ |Setting|Value| |||
- |Name|Provide a friendly name to track the order.<br> The name can have between 3 and 24 characters that can be letters, numbers, and hyphens. <br> The name must start and end with a letter or a number. |
- |Resource group| Use an existing or create a new one. <br> A resource group is a logical container for the resources that can be managed or deployed together. |
- |Destination Azure region| Select a region for your storage account.<br> Currently, storage accounts in all regions in US, West and North Europe, Canada, and Australia are supported. |
- |Estimated data size in TB| Enter an estimate in TB. <br>Based on the data size, Microsoft sends you an appropriate number of 8 TB SSDs (7 TB usable capacity). <br>The maximum usable capacity of 5 disks is up to 35 TB. |
+ |Subscription| The subscription is automatically populated based on your earlier selection. |
+ |Resource group| The resource group you selected previously. |
+ |Import order name|Provide a friendly name to track the order.<br> The name can have between 3 and 24 characters that can be letters, numbers, and hyphens. <br> The name must start and end with a letter or a number. |
+ |Number of disks per order| Enter the number of disks you would like to order. <br> There can be a maximum of 5 disks per order (1 disk = 7 TB). |
|Disk passkey| Supply the disk passkey if you check **Use custom key instead of Azure generated passkey**. <br> Provide a 12 to 32-character alphanumeric key that has at least one numeric and one special character. The allowed special characters are `@?_+`. <br> You can choose to skip this option and use the Azure generated passkey to unlock your disks.|
- |Storage destination | Choose from storage account or managed disks or both. <br> Based on the specified Azure region, select a storage account from the filtered list of an existing storage account. Data Box Disk can be linked with only 1 storage account. <br> You can also create a new **General-purpose v1**, **General-purpose v2**, or **Blob storage account**. <br>Storage accounts with virtual networks are supported. To allow Data Box service to work with secured storage accounts, enable the trusted services within the storage account network firewall settings. For more information, see how to [Add Azure Data Box as a trusted service](../storage/common/storage-network-security.md#exceptions).|
- If using storage account as the storage destination, you see the following screenshot:
+ ![Screenshot of order details](media/data-box-disk-deploy-ordered/data-box-disk-order.png)
+
+6. On the **Data destination** screen, select the **Data destination** - either storage accounts or managed disks (or both).
+
+ |Setting|Value|
+ |||
+ |Data destination |Choose from storage account or managed disks or both.<br> Based on the specified Azure region, select a storage account from the filtered list of an existing storage account. Data Box Disk can be linked with only 1 storage account.<br> You can also create a new General-purpose v1, General-purpose v2, or Blob storage account.<br> Storage accounts with virtual networks are supported. To allow Data Box service to work with secured storage accounts, enable the trusted services within the storage account network firewall settings. For more information, see how to Add Azure Data Box as a trusted service.|
+ |Destination Azure region| Select a region for your storage account. <br> Currently, storage accounts in all regions in US, West and North Europe, Canada, and Australia are supported. |
+ |Resource group| If using Data Box Disk to create managed disks from the on-premises VHDs, you need to provide the resource group.<br> Create a new resource group if you intend to create managed disks from on-premises VHDs. Use an existing resource group only if it was created for Data Box Disk order for managed disk by Data Box service.<br> Only one resource group is supported.|
+
+ ![Screenshot of Data Box Disk data destination.](media/data-box-disk-deploy-ordered/data-box-disk-order-destination.png)
+
 The storage account specified for managed disks is used as a staging storage account. The Data Box service uploads the VHDs to the staging storage account, converts them into managed disks, and moves them to the resource groups. For more information, see Verify data upload to Azure.
+
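If your storage account sits behind a network firewall, the trusted services exception described above can also be enabled from the Azure CLI. A minimal sketch, assuming hypothetical resource names:

```azurecli
# Allow trusted Azure services, such as the Data Box service, through
# the storage account's network firewall.
az storage account update \
    --resource-group myResourceGroup \
    --name mydataboxstorage \
    --bypass AzureServices
```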
+7. Select **Next: Security>** to continue.
+
+ The **Security** screen lets you use your own encryption key.
+
+ All settings on the **Security** screen are optional. If you don't change any settings, the default settings will apply.
+
+8. If you want to use your own customer-managed key to protect the unlock passkey for your new resource, expand **Encryption type**.
+
+ ![Screenshot of Data Box Disk encryption type.](media/data-box-disk-deploy-ordered/data-box-disk-encryption.png)
+
+ Configuring a customer-managed key for your Azure Data Box Disk is optional. By default, Data Box uses a Microsoft managed key to protect the unlock passkey.
+
+ A customer-managed key doesn't affect how data on the device is encrypted. The key is only used to encrypt the device unlock passkey.
+
+ If you don't want to use a customer-managed key, skip to Step 14.
+
+1. To use a customer-managed key, select **Customer managed key** as the key type. Then choose **Select a key vault and key**.
+
+ ![Screenshot of Customer managed key selection.](media/data-box-disk-deploy-ordered/data-box-disk-customer-key.png)
+
+1. In the **Select key from Azure Key Vault** blade:
+
+ - The **Subscription** is automatically populated.
+
+ - For **Key vault**, you can select an existing key vault from the dropdown list.
+
+ ![Screenshot of existing key vault.](media/data-box-disk-deploy-ordered/data-box-disk-select-key-vault.png)
+
+ Or select **Create new key vault** if you want to create a new key vault.
+
+ ![Screenshot of new key vault.](media/data-box-disk-deploy-ordered/data-box-disk-create-new-key-vault.png)
+
+ Then, on the **Create key vault** screen, enter the resource group and a key vault name. Ensure that **Soft delete** and **Purge protection** are enabled. Accept all other defaults, and select **Review + Create**.
+
+ ![Screenshot of Create key vault blade.](media/data-box-disk-deploy-ordered/data-box-disk-key-vault-blade.png)
+
    Review the information for your key vault, and select **Create**. Wait a few minutes for key vault creation to complete.
+
+ ![Screenshot of Review + create.](media/data-box-disk-deploy-ordered/data-box-disk-create-key-vault.png)
+
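A key vault that meets these requirements can also be created from the Azure CLI. A minimal sketch with hypothetical names; soft delete is enabled by default on newly created vaults:

```azurecli
# Create a key vault with purge protection enabled, as required
# for customer-managed keys.
az keyvault create \
    --resource-group myResourceGroup \
    --name myDataBoxKeyVault \
    --location eastus \
    --enable-purge-protection true
```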
+1. The **Select a key** blade will display your selected key vault.
+
+ ![Screenshot of new key vault 2.](media/data-box-disk-deploy-ordered/data-box-disk-new-key-vault.png)
+
    If you want to create a new key, select **Create new key**. You must use an **RSA key** with a size of 2048 bits or greater. Enter a name for your new key, accept the other defaults, and select **Create**.
+
+ ![Screenshot of Create new key.](media/data-box-disk-deploy-ordered/data-box-disk-new-key.png)
+
+ You'll be notified when the key has been created in your key vault. Your new key will be selected and displayed on the **Select a key** blade.
+
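The equivalent key creation from the Azure CLI, reusing the hypothetical vault name from the previous sketch:

```azurecli
# Create a 2048-bit RSA key to protect the device unlock passkey.
az keyvault key create \
    --vault-name myDataBoxKeyVault \
    --name myDataBoxKey \
    --kty RSA \
    --size 2048
```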
+1. Select the **Version** of the key to use, and then choose **Select**.
+
+ ![Screenshot of key version.](media/data-box-disk-deploy-ordered/data-box-disk-key-version.png)
+
+ If you want to create a new key version, select **Create new version**.
+
+ ![Screenshot of new key version.](media/data-box-disk-deploy-ordered/data-box-disk-new-key-version.png)
+
+ Choose settings for the new key version, and select **Create**.
+
+ ![Screenshot of new key version settings.](media/data-box-disk-deploy-ordered/data-box-disk-new-key-settings.png)
+
+ The **Encryption type** settings on the **Security** screen show your key vault and key.
+
+ ![Screenshot of encryption type settings.](media/data-box-disk-deploy-ordered/data-box-disk-encryption-settings.png)
+
+1. Select a user identity that you'll use to manage access to this resource. Choose **Select a user identity**. In the panel on the right, select the subscription and the managed identity to use. Then choose **Select**.
- ![Data Box Disk order for storage account](media/data-box-disk-deploy-ordered/order-storage-account.png)
+ A user-assigned managed identity is a stand-alone Azure resource that can be used to manage multiple resources. For more information, see Managed identity types.
- If using Data Box Disk to create managed disks from the on-premises VHDs, you also need to provide the following information:
+ If you need to create a new managed identity, follow the guidance in Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal.
- |Setting |Value |
- |||
- |Resource group | Create a new resource group if you intend to create managed disks from on-premises VHDs. Use an existing resource group only if it was created for Data Box Disk order for managed disk by Data Box service. <br> Only one resource group is supported.|
+ ![Screenshot of user identity.](media/data-box-disk-deploy-ordered/data-box-disk-user-identity.png)
- ![Data Box Disk order for managed disk](media/data-box-disk-deploy-ordered/order-managed-disks.png)
+ The user identity is shown in the **Encryption type** settings.
- The storage account specified for managed disks is used as a staging storage account. The Data Box service uploads the VHDs to the staging storage account and then converts those into managed disks and moves to the resource groups. For more information, see [Verify data upload to Azure](data-box-disk-deploy-upload-verify.md#verify-data-upload-to-azure).
+ ![Screenshot of user identity 2.](media/data-box-disk-deploy-ordered/data-box-disk-user-identity-2.png)
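If you prefer the Azure CLI, a user-assigned managed identity can be created with a single command. A minimal sketch with hypothetical names:

```azurecli
# Create a user-assigned managed identity to manage access to the resource.
az identity create \
    --resource-group myResourceGroup \
    --name myDataBoxIdentity
```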
-6. Click **Next**.
- ![Supply order details](media/data-box-disk-deploy-ordered/data-box-order-details.png)
+8. In the **Contact details** tab, select **Add address** and enter the address details. Click **Validate address**. The service validates the shipping address for service availability. If the service is available for the specified shipping address, you receive a notification to that effect.
-7. In the **Shipping address** tab, provide your first and last name, name and postal address of the company and a valid phone number. Click **Validate address**. The service validates the shipping address for service availability. If the service is available for the specified shipping address, you receive a notification to that effect.
+ If you have chosen self-managed shipping, see [Use self-managed shipping](data-box-disk-portal-customer-managed-shipping.md).
- After the order is processed, you will receive an email notification. If you have chosen self-managed shipping, see [Use self-managed shipping](data-box-disk-portal-customer-managed-shipping.md).
+ ![Screenshot of Data Box Disk contact details.](media/data-box-disk-deploy-ordered/data-box-disk-contact-details.png)
- ![Provide shipping address](media/data-box-disk-deploy-ordered/data-box-shipping-address.png)
-8. In the **Notification details**, specify email addresses. The service sends email notifications regarding any updates to the order status to the specified email addresses.
+ Specify valid email addresses. The service sends email notifications regarding any updates to the order status to the addresses you specify.
We recommend that you use a group email so that you continue to receive notifications if an admin in the group leaves.
-9. Review the information **Summary** related to the order, contact, notification, and privacy terms. Check the box corresponding to the agreement to privacy terms.
+9. Review the information in the **Review + Order** tab related to the order, contact, notification, and privacy terms. Check the box corresponding to the agreement to privacy terms.
-10. Click **Order**. The order takes a few minutes to be created.
+10. Click **Order**. The order takes a few minutes to be created.
## Track the order After you have placed the order, you can track the status of the order from Azure portal. Go to your order and then go to **Overview** to view the status. The portal shows the job in **Ordered** state.
-![Data Box Disk status ordered](media/data-box-disk-deploy-ordered/data-box-portal-ordered.png)
+![Data Box Disk status ordered.](media/data-box-disk-deploy-ordered/data-box-portal-ordered.png)
If the disks are not available, you receive a notification. If the disks are available, Microsoft identifies the disks for shipment and prepares the disk package. During disk preparation, the following actions occur:
To cancel this order, in the Azure portal, go to **Overview** and click **Cancel
You can only cancel when the disks are ordered and the order is being processed for shipment. Once the order is processed, you can no longer cancel the order.
-![Cancel order](media/data-box-disk-deploy-ordered/cancel-order1.png)
+![Cancel order.](media/data-box-disk-deploy-ordered/cancel-order1.png)
To delete a canceled order, go to **Overview** and click **Delete** from the command bar.
databox Data Box How To Set Data Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-how-to-set-data-tier.md
Title: Send data to Hot, Cold, Archive blob tier via Azure Data Box/Azure Data Box Heavy
-description: Describes how to use Azure Data Box or Azure Data Box Heavy to send data to an appropriate block blob storage tier such as hot, cold, or archive
+ Title: Send data to Hot, Cool, Archive blob tier via Azure Data Box/Azure Data Box Heavy
+description: Describes how to use Azure Data Box or Azure Data Box Heavy to send data to an appropriate block blob storage tier such as Hot, Cool, or Archive
Azure Data Box moves large amounts of data to Azure by shipping you a proprietary storage device. You fill up the device with data and return it. The data from Data Box is uploaded to a default tier associated with the storage account. You can then move the data to another storage tier.
-This article describes how the data that is uploaded by Data Box can be moved to a Hot, Cold, or Archive blob tier. This article applies to both Azure Data Box and Azure Data Box Heavy.
+This article describes how the data that is uploaded by Data Box can be moved to a Hot, Cool, or Archive blob tier. This article applies to both Azure Data Box and Azure Data Box Heavy.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Choose the correct storage tier for your data
-Azure storage allows three different tiers to store data in the most cost-effective manner - Hot, Cold, or Archive. Hot storage tier is optimized for storing data that is accessed frequently. Hot storage has higher storage costs than Cool and Archive storage, but the lowest access costs.
+Azure storage allows three different tiers to store data in the most cost-effective manner - Hot, Cool, or Archive. Hot storage tier is optimized for storing data that is accessed frequently. Hot storage has higher storage costs than Cool and Archive storage, but the lowest access costs.
-Cool storage tier is for infrequently accessed data that needs to be stored for a minimum of 30 days. The storage cost for cold tier is lower than that of hot storage tier but the data access charges are high when compared to Hot tier.
+Cool storage tier is for infrequently accessed data that needs to be stored for a minimum of 30 days. The storage cost for the Cool tier is lower than that of the Hot tier, but the data access charges are higher when compared to the Hot tier.
The Azure Archive tier is offline and offers the lowest storage costs but also the highest access costs. This tier is meant for data that remains in archival storage for a minimum of 180 days. For details of each of these tiers and the pricing model, go to [Comparison of the storage tiers](../storage/blobs/access-tiers-overview.md).
-The data from the Data Box or Data Box Heavy is uploaded to a storage tier that is associated with the storage account. When you create a storage account, you can specify the access tier as Hot or Cold. Depending upon the access pattern of your workload and cost, you can move this data from the default tier to another storage tier.
+The data from the Data Box or Data Box Heavy is uploaded to a storage tier that is associated with the storage account. When you create a storage account, you can specify the access tier as Hot or Cool. Depending upon the access pattern of your workload and cost, you can move this data from the default tier to another storage tier.
You may only tier your object storage data in Blob storage or General Purpose v2 (GPv2) accounts. General Purpose v1 (GPv1) accounts do not support tiering. To choose the correct storage tier for your data, review the considerations detailed in [Azure Blob storage: Premium, Hot, Cool, and Archive storage tiers](../storage/blobs/access-tiers-overview.md).
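After the Data Box upload completes, individual block blobs can be moved between tiers with the Azure CLI. A minimal sketch, assuming hypothetical account, container, and blob names:

```azurecli
# Move a single block blob to the Archive tier.
az storage blob set-tier \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name myblob.vhd \
    --tier Archive \
    --auth-mode login
```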
Following steps describe how you can set the blob tier to Archive using an Azure
## Next steps -- Learn how to address the [common data tiering scenarios with lifecycle policy rules](../storage/blobs/lifecycle-management-overview.md#examples-of-lifecycle-policies)
+- Learn how to address the [common data tiering scenarios with lifecycle policy rules](../storage/blobs/lifecycle-management-overview.md#examples-of-lifecycle-policies)
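For bulk tiering, the lifecycle policy approach linked above can be applied from the Azure CLI. A minimal sketch, under the assumption that block blobs should move to Cool after 30 days and to Archive after 180 days (account and rule names are hypothetical):

```azurecli
# Define a lifecycle policy and apply it to the storage account.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "tier-aging-blobs",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 180 }
          }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
EOF

az storage account management-policy create \
    --account-name mystorageaccount \
    --resource-group myResourceGroup \
    --policy @policy.json
```

The 30-day and 180-day thresholds match the minimum retention periods for the Cool and Archive tiers described earlier.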
databox Data Box System Requirements Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements-rest.md
We recommend that you review the information carefully before you connect to the
||-|| | Azure Files | Cloud-based SMB and NFS file shares supported | Not supported | | Service encryption for data at Rest | 256-bit AES encryption | 256-bit AES encryption |
-| Storage account type | General-purpose and Azure blob storage accounts | General-purpose v1 only|
+| Storage account type | General-purpose and Azure Blob storage accounts | General-purpose v1 only|
| Blob name | 1,024 characters (2,048 bytes) | 880 characters (1,760 bytes)| | Block blob maximum size | 4.75 TiB (100 MB X 50,000 blocks) | 4.75 TiB (100 MB x 50,000 blocks) for Azure Data Box v 3.0 onwards.| | Page blob maximum size | 8 TiB | 1 TiB |
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
|**Detected suspicious use of the useradd command**<br>(VM_SuspectUserAddition)|Analysis of host data has detected suspicious use of the useradd command on %{Compromised Host}.|Persistence|Medium| |**Digital currency mining related behavior detected**|Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining.|-|High| |**Disabling of auditd logging [seen multiple times]**|The Linux Audit system provides a way to track security-relevant information on the system. It records as much information about the events that are happening on your system as possible. Disabling auditd logging could hamper discovering violations of security policies used on the system. This behavior was seen [x] times today on the following machines: [Machine names]|-|Low|
-|**Docker build operation detected on a Kubernetes node**<br>(VM_ImageBuildOnNode) | Machine logs indicate a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | Defense Evasion | Low |
|**Executable found running from a suspicious location**<br>(VM_SuspectExecutablePath)|Analysis of host data detected an executable file on %{Compromised Host} that is running from a location in common with known suspicious files. This executable could either be legitimate activity or an indication of a compromised host.| Execution |High| |**Exploitation of Xorg vulnerability [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the user of Xorg with suspicious arguments. Attackers may use this technique in privilege escalation attempts. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Exposed Docker daemon on TCP socket**<br>(VM_ExposedDocker)|Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, the Docker configuration does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon by anyone with access to the relevant port.|Execution, Exploitation|Medium|
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
|**Script extension mismatch detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Script extension mismatch detected**<br>(VM_MismatchedScriptFeatures)|Analysis of host data on %{Compromised Host} detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions.|Defense Evasion|Medium| |**Shellcode detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected shellcode being generated from the command line. This process could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**SSH server is running inside a container**<br>(VM_ContainerSSH)| Machine logs indicate that an SSH server is running inside a Docker container. While this behavior can be intentional, it frequently indicates that a container is misconfigured or breached.|Execution|Medium|
|**Successful SSH brute force attack**<br>(VM_SshBruteForceSuccess)|Analysis of host data has detected a successful brute force attack. The IP %{Attacker source IP} was seen making multiple login attempts. Successful logins were made from that IP with the following user(s): %{Accounts used to successfully sign in to host}. This means that the host may be compromised and controlled by a malicious actor.|Exploitation|High| |**Suspect Password File Access** <br> (VM_SuspectPasswordFileAccess) | Analysis of host data has detected suspicious access to encrypted user passwords. | Persistence | Informational | |**Suspicious Account Creation Detected**|Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name} : this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator.|-|Medium|
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
|**Suspicious password access [seen multiple times]**|Analysis of host data has detected suspicious access to encrypted user passwords on %{Compromised Host}. This behavior was seen [x] times today on the following machines: [Machine names]|-|Informational| |**Suspicious password access**|Analysis of host data has detected suspicious access to encrypted user passwords on %{Compromised Host}.|-|Informational| |**Suspicious PHP execution detected**<br>(VM_SuspectPhp)|Machine logs indicate that a suspicious PHP process is running. The action included an attempt to run OS commands or PHP code from the command line using the PHP process. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities such as attempts to infect websites with web shells.|Execution|Medium|
-|**Suspicious request to Kubernetes API**<br>(VM_KubernetesAPI)|Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container.|LateralMovement|Medium|
|**Suspicious request to the Kubernetes Dashboard**<br>(VM_KubernetesDashboard) | Machine logs indicate that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container. |LateralMovement| Medium | |**Threat Intel Command Line Suspect Domain** <br> (VM_ThreatIntelCommandLineSuspectDomain) | The process 'PROCESSNAME' on 'HOST' connected to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred.| Initial Access | Medium | |**Unusual config reset in your virtual machine**<br>(VM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium |
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
To learn about how to respond to these attack paths, see [Identify and remediate
|--|--|--|--| | Can authenticate as | Indicates that an Azure resource can authenticate to an identity and use its privileges | Azure VM, Azure VMSS, Azure Storage Account, Azure App Services, SQL Servers | AAD Managed identity | | Has permission to | Indicates that an identity has permissions to a resource or a group of resources | AAD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources|
-| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, Github owner, Azure DevOps project, Azure DevOps organization | All Azure & AWS resources, All Kubernetes entities, All DevOps entities |
+| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization | All Azure & AWS resources, All Kubernetes entities, All DevOps entities |
| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service | | Is running | Indicates that the source entity is running the target entity as a process | Azure VM, Kubernetes container | SQL, Kubernetes image, Kubernetes pod | | Member of | Indicates that the source identity is a member of the target identities group | AAD group, AAD user | AAD group |
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
The following table summarizes what's included in each plan.
| **Unified view** | The Defender for Cloud portal displays Defender for Endpoint alerts. You can then drill down into Defender for Endpoint portal, with additional information such as the alert process tree, the incident graph, and a detailed machine timeline showing historical data up to six months.| :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Automatic MDE provisioning** | Automatic provisioning of Defender for Endpoint on Azure, AWS, and GCP resources. | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Microsoft threat and vulnerability management** | Discover vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, without needing other agents or periodic scans. [Learn more](deploy-vulnerability-assessment-tvm.md). | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| **Fileless attack detection** | Fileless attack detection in Defender for Servers and Microsoft Defender for Endpoint (MDE) generate detailed security alerts that accelerate alert triage, correlation, and downstream response time. | :::image type="icon" source="./mediE & Defender for Servers) |
-| **Threat detection for OS and network** | Defender for Servers and Microsoft Defender for Endpoint (MDE) detect threats at the OS and network levels, including VM behavioral detections. | :::image type="icon" source="./mediE & Defender for Servers) |
-| **Threat detection for the control plane** | Defender for Servers detects threats directed at the control plane, including network-based detections. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| **Threat detection for OS-level (Agent-based)** | Defender for Servers and Microsoft Defender for Endpoint (MDE) detect threats at the OS level, including VM behavioral detections and **Fileless attack detection**, which generates detailed security alerts that accelerate alert triage, correlation, and downstream response time.<br>[Learn more](alerts-reference.md#alerts-windows) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| **Threat detection for network-level (Agentless)** | Defender for Servers detects threats directed at the control plane on the network, including network-based detections for Azure virtual machines. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Security Policy and Regulatory Compliance** | Customize a security policy for your subscription and also compare the configuration of your resources with requirements in industry standards, regulations, and benchmarks. | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Integrated vulnerability assessment powered by Qualys** | Use the Qualys scanner for real-time identification of vulnerabilities in Azure and hybrid VMs. Everything's handled by Defender for Cloud. You don't need a Qualys license or even a Qualys account. [Learn more](deploy-vulnerability-assessment-vm.md). | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Log Analytics 500 MB free data ingestion** | Defender for Cloud leverages Azure Monitor to collect data from Azure VMs and servers, using the Log Analytics agent. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
defender-for-iot Concept Agent Portfolio Overview Os Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-agent-portfolio-overview-os-support.md
For additional information on supported operating systems, or to request access
## Azure RTOS micro agent
-The Microsoft Defender for IoT micro agent provides a comprehensive and lightweight security solution for devices that use Azure RTOS. Microsoft Defender for IoT micro agent provides coverage for common threats, and potential malicious activities on real-time operating system (RTOS) devices. The micro agent comes built in as part of the Azure RTOS NetX Duo component, and monitors the device's network activity.
 The Microsoft Defender for IoT micro agent comes built in as part of the Azure RTOS NetX Duo component, and monitors the device's network activity. The micro agent is a comprehensive and lightweight security solution that provides coverage for common threats and potential malicious activities on real-time operating system (RTOS) devices. ## Next steps
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Enter the following parameters:
| Syslog text message output fields | Description | |--|--|
-| Date and time | Date and time that the syslog server machine received the information. |
| Priority | User.Alert |
-| Hostname | Sensor IP address |
| Message | CyberX platform name: The sensor name.<br /> Microsoft Defender for IoT Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Protocol (Optional): The detected source protocol.<br /> Address (Optional): Source protocol address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Protocol (Optional): The detected destination protocol.<br /> Address (Optional): The destination protocol address.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert. <br /> UUID (Optional): The UUID the alert. | | Syslog object output | Description |
Enter the following parameters:
| Date and time | Date and time that the syslog server machine received the information. | | Priority | User.Alert | | Hostname | Sensor IP address |
-| Message | CEF:0 <br />Microsoft Defender for IoT <br />Sensor name= The name of the sensor appliance. <br />Sensor version <br />Alert title= The title of the alert. <br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. <br />dst_ip= IP address of the destination device.<br />cat= The alert group associated with the alert. |
+| Message | CEF:0 <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity: 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= The UUID of the alert. <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device.<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |
| Syslog LEEF output format | Description | |--|--|
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
The following table shows Azure Database Migration Service support for **online*
| | Amazon RDS MySQL | Γ£ö | Preview | | **Azure DB for PostgreSQL - Single server** | PostgreSQL | Γ£ö | GA | | | Azure DB for PostgreSQL - Single server <sup>2</sup> | Γ£ö | GA |
-| | Amazon DS PostgreSQL | Γ£ö | GA |
+| | Amazon RDS PostgreSQL | Γ£ö | GA |
| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | Γ£ö | GA | | | Azure DB for PostgreSQL - Single server <sup>2</sup> | Γ£ö | GA | | | Amazon RDS PostgreSQL | Γ£ö | GA |
The following table shows Azure Database Migration Service support for **online*
## Next steps
-For an overview of Azure Database Migration Service and regional availability, see the article [What is the Azure Database Migration Service](dms-overview.md).
+For an overview of Azure Database Migration Service and regional availability, see the article [What is the Azure Database Migration Service](dms-overview.md).
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
Previously updated : 10/26/2022 Last updated : 10/27/2022
az aks create -g $RG -n $AKSNAME -l $LOC \
``` > [!NOTE]
-> To create and use your own VNet and route table with `kubelet` network plugin, you need to use [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. For system-assigned control plane identity, we cannot get the identity ID before creating cluster, which causes delay for role assignment to take effect.
+> To create and use your own VNet and route table with the `kubenet` network plugin, you need to use a [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. With a system-assigned control plane identity, the identity ID isn't available before the cluster is created, which delays the role assignment from taking effect.
+>
> To create and use your own VNet and route table with `azure` network plugin, both system-assigned and user-assigned managed identities are supported. ### Enable developer access to the API server
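To illustrate the note above, a minimal sketch that creates a user-assigned control plane identity before the cluster exists and passes it at creation time. `$SUBNETID` and the identity name are assumptions; `$RG`, `$AKSNAME`, and `$LOC` follow the variables used earlier:

```azurecli
# Create the control plane identity up front so the role assignment on
# your route table can take effect before the cluster is created.
az identity create -g $RG -n myAksControlPlaneIdentity
IDENTITY_ID=$(az identity show -g $RG -n myAksControlPlaneIdentity --query id -o tsv)

az aks create -g $RG -n $AKSNAME -l $LOC \
    --network-plugin kubenet \
    --vnet-subnet-id $SUBNETID \
    --enable-managed-identity \
    --assign-identity $IDENTITY_ID
```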
firewall Protect Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-virtual-desktop.md
Previously updated : 09/30/2022 Last updated : 10/27/2022
Based on the Azure Virtual Desktop (AVD) [reference article](../virtual-desktop/
| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | IP Address | 169.254.169.254, 168.63.129.16 | | Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | FQDN | ocsp.msocsp.com | ++ > [!NOTE] > Some deployments might not need DNS rules. For example, Azure Active Directory Domain controllers forward DNS queries to Azure DNS at 168.63.129.16.
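As an illustration, the first network rule in the table above could be created from the Azure CLI, which requires the `azure-firewall` extension. The firewall, collection, and source address values below are hypothetical:

```azurecli
# Allow AVD hosts to reach the Azure instance metadata service and the
# Azure platform virtual IP over TCP port 80.
az network firewall network-rule create \
    --resource-group myResourceGroup \
    --firewall-name myAzureFirewall \
    --collection-name avd-network-rules \
    --name avd-metadata \
    --priority 200 \
    --action Allow \
    --protocols TCP \
    --source-addresses 10.0.0.0/24 \
    --destination-addresses 169.254.169.254 168.63.129.16 \
    --destination-ports 80
```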
healthcare-apis Autoscale Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/autoscale-azure-api-fhir.md
Azure API for FHIR, as a managed service, allows customers to persist with Fast
## What is autoscale?
-By default, Azure API for FHIR is set to manual scale. This option works well when the transaction workloads are known and consistent. Customers can adjust the throughput `RU/s` through the portal up to 10,000 and submit a request to increase the limit.
+By default, Azure API for FHIR is set to manual scale. This option works well when the transaction workloads are known and consistent. Customers can adjust the throughput `RU/s` through the portal up to 100,000 and submit a request to increase the limit.
The autoscale feature is designed to scale computing resources including the database throughput `RU/s` up and down automatically according to the workloads, thus eliminating the manual steps of adjusting allocated computing resources.
You can also decrease the max `RU/s` or `Tmax` value. When you lower the max `RU
* **Example 2**: You have 20-GB data and the highest provisioned `RU/s` is 100,000. The minimum value is Max (4000, **100,000/10**, 20x400) = 10,000. The second number, **100,000/10 =10,000**, is used. * **Example 3**: You have 80-GB data and the highest provisioned RU/s is 300,000. The minimum value is Max (4000, 300,000/10, **80x400**) = 32,000. The third number, **80x400=32,000**, is used.
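The minimum-value rule in these examples can be computed directly. A sketch in shell arithmetic, using the inputs from Example 2:

```bash
# Minimum allowed Tmax = max(4000, highest_provisioned_RUs / 10, storage_GB * 400)
highest_rus=100000
storage_gb=20

min=4000
(( highest_rus / 10 > min )) && min=$(( highest_rus / 10 ))
(( storage_gb * 400 > min )) && min=$(( storage_gb * 400 ))
echo "Minimum Tmax: $min RU/s"   # prints: Minimum Tmax: 10000 RU/s
```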
-You can adjust the max `RU/s` or `Tmax` value through the portal if it's a valid number and no greater than 10,000 `RU/s`. You can create a support ticket to request `Tmax` value larger than 10,000.
+You can adjust the max `RU/s` or `Tmax` value through the portal if it's a valid number and no greater than 100,000 `RU/s`. You can create a support ticket to request a `Tmax` value larger than 100,000.
>[!Note] >As data storage grows, the system will automatically increase the max throughput to the next highest RU/s that can support that level of storage.
healthcare-apis How To Use Calculated Functions Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculated-functions-mappings.md
Previously updated : 02/16/2022 Last updated : 10/25/2022 # How to use CalculatedContentTemplate mappings
-> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of MedTech service.
-
-This article describes how to use CalculatedContentTemplate mappings with MedTech service Device mappings templates.
+This article describes how to use CalculatedContentTemplate mappings with MedTech service device mapping template.
## CalculatedContentTemplate
-MedTech service provides an expression-based content template to both match the wanted template and extract values. **Expressions** may be used by either JSONPath or JmesPath. Each expression within the template may choose its own expression language.
+MedTech service provides an expression-based content template to both match the wanted template and extract values. **Expressions** may be written in either JSONPath or JMESPath. Each expression within the template may choose its own expression language.
> [!NOTE] > If an expression language isn't defined, the default expression language configured for the template will be used. The default is JSONPath but can be overwritten if needed.
In the example below, *typeMatchExpression* is defined as:
... } ```+ > [!TIP]
-> The default expression language to use for a Device mapping template is JsonPath. If you want to use JsonPath, the expression alone may be supplied.
+> The default expression language to use for a MedTech service device mapping template is JSONPath. If you want to use JSONPath, the expression alone may be supplied.
```json "templateType": "CalculatedContent",
In the example below, *typeMatchExpression* is defined as:
} ```
-The default expression language to use for a template can be explicitly set using the `defaultExpressionLanguage` parameter:
+The default expression language to use for a MedTech service device template can be explicitly set using the `defaultExpressionLanguage` parameter:
```json "templateType": "CalculatedContent",
The default expression language to use for a template can be explicitly set usin
} ```
-The CalculatedContentTemplate allows matching on and extracting values from an Azure Event Hub message using **Expressions** as defined below:
+The CalculatedContentTemplate allows matching on and extracting values from an Azure Event Hubs message using **Expressions** as defined below:
|Property|Description|Example| |--|--|-|
The CalculatedContentTemplate allows matching on and extracting values from an A
|CorrelationIdExpression|*Optional*: The expression to extract the correlation identifier. This output can be used to group values into a single observation in the FHIR destination mappings.|`$.matchedToken.correlationId`| |Values[].ValueName|The name to associate with the value extracted by the next expression. Used to bind the wanted value/component in the FHIR destination mapping template.|`hr`| |Values[].ValueExpression|The expression to extract the wanted value.|`$.matchedToken.heartRate`|
-|Values[].Required|Will require the value to be present in the payload. If not found, a measurement won't be generated and an InvalidOperationException will be created.|`true`|
+|Values[].Required|Will require the value to be present in the payload. If not found, a measurement won't be generated, and an InvalidOperationException will be created.|`true`|
### Expression Languages
When specifying the language to use for the expression, the below values are val
| Expression Language | Value | ||--| | JSONPath | **JsonPath** |
-| JmesPath | **JmesPath** |
+| JMESPath | **JmesPath** |
>[!TIP]
->For more information on JSONPath, see [JSONPath](https://goessner.net/articles/JsonPath/). The [CalculatedContentTemplate](#calculatedcontenttemplate) uses the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
+> For more information on JSONPath, see [JSONPath](https://goessner.net/articles/JsonPath/). The [CalculatedContentTemplate](#calculatedcontenttemplate) uses the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
>
->For more information on JmesPath, see [JmesPath](https://jmespath.org/specification.html). The [CalculatedContentTemplate](#calculatedcontenttemplate) uses the [JmesPath .NET implementation](https://github.com/jdevillard/JmesPath.Net) for resolving JmesPath expressions.
+> For more information on JMESPath, see [JMESPath](https://jmespath.org/specification.html). The [CalculatedContentTemplate](#calculatedcontenttemplate) uses the [JMESPath .NET implementation](https://github.com/jdevillard/JmesPath.Net) for resolving JMESPath expressions.
-### Custom Functions
+### Custom functions
-A set of MedTech service Custom Functions is also available. These Custom Functions are outside of the functions provided as part of the JmesPath specification. For more information on Custom Functions, see [MedTech service Custom Functions](./how-to-use-custom-functions.md).
+A set of MedTech service custom functions is also available. The MedTech service custom functions are outside of the functions provided as part of the JMESPath specification. For more information on the MedTech service custom functions, see [How to use MedTech service custom functions](how-to-use-custom-functions.md).
### Matched Token
And
"systolic": "122", "diastolic": "82", "date": "2021-07-13T17:28:01.061122Z"
- }
} } ```
In the below example, height data arrives in either inches or meters. We want al
{ "required": "true", "valueExpression": {
- "value": "multiply(to_number(matchedToken.height), `0.0254`)", // Convert inches to meters. Notice we utilize JmesPath as that gives us access to transformation functions
+ "value": "multiply(to_number(matchedToken.height), `0.0254`)", // Convert inches to meters. Notice we utilize JMESPath as that gives us access to transformation functions
"language": "JmesPath" }, "valueName": "height"
In the below example, height data arrives in either inches or meters. We want al
``` > [!TIP]
-> See MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service article [Troubleshoot MedTech service device and FHIR destination mappings](iot-troubleshoot-mappings.md) for assistance fixing common errors and issues related to MedTech service mappings.
## Next steps In this article, you learned how to use Device mappings. To learn how to use FHIR destination mappings, see
->[!div class="nextstepaction"]
->[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
+> [!div class="nextstepaction"]
+> [How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Collection Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-collection-content-mappings.md
- Title: CollectionContentTemplate mappings in MedTech service Device mappings - Azure Health Data Services
-description: This article describes how to use CollectionContentTemplate mappings with MedTech service Device mappings.
---- Previously updated : 03/22/2022---
-# How to use CollectionContentTemplate mappings
-
-> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-
-This article describes how to use CollectionContentTemplate mappings with the MedTech service Device mappings templates.
-
-## CollectionContentTemplate
-
-The CollectionContentTemplate may be used to represent a list of templates that will be used during normalization.
-
-### Example
-
-```json
-{
- "templateType": "CollectionContent",
- "template": [
- {
- "templateType": "CalculatedContent",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@heartRate)]",
- "deviceIdExpression": "$.matchedToken.deviceId",
- "timestampExpression": "$.matchedToken.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.matchedToken.heartRate",
- "valueName": "hr"
- }
- ]
- }
- },
- {
- "templateType": "CalculatedContent",
- "template": {
- "typeName": "stepcount",
- "typeMatchExpression": "$..[?(@steps)]",
- "deviceIdExpression": "$.matchedToken.deviceId",
- "timestampExpression": "$.matchedToken.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.matchedToken.steps",
- "valueName": "steps"
- }
- ]
- }
- }
- ]
-}
-```
-> [!TIP]
-> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
-
-## Next steps
-
-In this article, you learned how to use Device mappings. To learn how to use FHIR destination mappings, see
-
->[!div class="nextstepaction"]
->[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
Title: How to use custom functions with the MedTech service device mapping - Azure Health Data Services
-description: This article describes how to use custom functions with MedTech service device mapping.
+ Title: How to use custom functions with the MedTech service device mappings - Azure Health Data Services
+description: This article describes how to use custom functions with MedTech service device mappings.
Previously updated : 08/30/2022 Last updated : 10/25/2022
-# How to use custom functions
+# How to use custom functions with device mappings
-Many functions are available when using **JmesPath** as the expression language. Besides the functions available as part of the JmesPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-use-device-mappings.md) during the device message [normalization](iot-data-flow.md#normalize) process.
+Many functions are available when using **JMESPath** as the expression language. Besides the functions available as part of the JMESPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-use-device-mappings.md) during the device message [normalization](iot-data-flow.md#normalize) process.
> [!TIP]
-> For more information on JmesPath functions, see the JmesPath [specification](https://jmespath.org/specification.html#built-in-functions).
+> For more information on JMESPath functions, see the [JMESPath specification](https://jmespath.org/specification.html#built-in-functions).
## Function signature
-Each function has a signature that follows the JmesPath specification. This signature can be represented as:
+Each function has a signature that follows the JMESPath specification. This signature can be represented as:
```jmespath return_type function_name(type $argname)
return_type function_name(type $argname)
The signature indicates the valid types for the arguments. If an invalid type is passed in for an argument, an error will occur. > [!NOTE]
-> When math-related functions are done, the end result **must** be able to fit within a C# [long](/dotnet/csharp/language-reference/builtin-types/integral-numeric-types#characteristics-of-the-integral-types) value. If the end result in unable to fit within a C# long value, then a mathematical error will occur.
+> When math-related functions are performed, the end result **must** be able to fit within a [C# long](/dotnet/csharp/language-reference/builtin-types/integral-numeric-types#characteristics-of-the-integral-types) value. If the end result is unable to fit within a C# long value, then a mathematical error will occur.
## Exception handling
Exceptions may occur at various points within the event processing lifecycle. He
|Action|When|Exceptions that may occur during template parsing|Outcome| ||-|-|-|
-|**Template parsing**|Each time a new batch of messages is received the Device mapping template is loaded and parsed.|Failure to parse the template.|System will attempt to reload and parse the latest Device mapping template until parsing succeeds. No new messages will be processed until parsing is successful.|
-|**Template parsing**|Each time a new batch of messages is received the Device mapping template is loaded and parsed.|Failure to parse any expressions.|System will attempt to reload and parse the latest Device mapping template until parsing succeeds. No new messages will be processed until parsing is successful.|
+|**Template parsing**|Each time a new batch of messages is received, the device mapping template is loaded and parsed.|Failure to parse the template.|System will attempt to reload and parse the latest device mapping template until parsing succeeds. No new messages will be processed until parsing is successful.|
+|**Template parsing**|Each time a new batch of messages is received, the device mapping template is loaded and parsed.|Failure to parse any expressions.|System will attempt to reload and parse the latest device mapping template until parsing succeeds. No new messages will be processed until parsing is successful.|
|**Function Execution**|Each time a function is executed against data within a message.|Input data doesn't match that of the function signature.|System stops processing that message. The message isn't retried.| |**Function execution**|Each time a function is executed against data within a message.|Any other exceptions listed in the description of the function.|System stops processing that message. The message isn't retried.|
Examples:
| {"unix": 0} | fromUnixTimestampMs(unix) | "1970-01-01T00:00:00+0" | > [!TIP]
-> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service article [Troubleshoot MedTech service device and FHIR destination mappings](iot-troubleshoot-mappings.md) for assistance fixing common errors and issues related to MedTech service mappings.
## Next steps In this article, you learned how to use the MedTech service custom functions. To learn how to use custom functions with the MedTech service device mapping, see
->[!div class="nextstepaction"]
->[How to use device mappings](how-to-use-device-mappings.md)
+> [!div class="nextstepaction"]
+> [How to use device mappings](how-to-use-device-mappings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-device-mappings.md
Title: Device mappings in MedTech service - Azure Health Data Services
-description: This article describes how to configure and use Device mapping templates with Azure Health Data Services MedTech service.
+ Title: How to configure device mappings in MedTech service - Azure Health Data Services
+description: This article provides an overview and describes how to configure the MedTech service device mappings within the Azure Health Data Services.
Previously updated : 09/27/2022 Last updated : 10/25/2022
-# How to use device mappings
+# Device mappings overview
-This article describes how to configure the MedTech service device mapping.
+This article provides an overview and describes how to configure the MedTech service device mappings.
-The MedTech service requires two types of JSON-based mappings. The first type, **device mapping**, is responsible for mapping the device payloads sent to the MedTech service device message event hub end point. The device mapping extracts types, device identifiers, measurement date time, and the measurement value(s).
+The MedTech service requires two types of JSON-based mappings. The first type, **device mappings**, is responsible for mapping the device payloads sent to the MedTech service device message event hub endpoint. The device mappings extract types, device identifiers, the measurement date and time, and the measurement value(s).
-The second type, **Fast Healthcare Interoperability Resources (FHIR&#174;) destination mapping**, controls the mapping for FHIR resource. The FHIR destination mapping allows configuration of the length of the observation period, FHIR data type used to store the values, and terminology code(s).
+The second type, **Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings**, controls the mapping to a FHIR resource. The FHIR destination mappings allow configuration of the length of the observation period, the FHIR data type used to store the values, and the terminology code(s).
> [!NOTE] > Device and FHIR destination mappings are stored in an underlying blob storage and loaded from blob per compute execution. Once updated they should take effect immediately. The two types of mappings are composed into a JSON document based on their type. These JSON documents are then added to your MedTech service through the Azure portal. The device mapping is added through the **Device mapping** page and the FHIR destination mapping through the **Destination** page.
+
+## How to configure device mappings
-> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service device and FHIR destination mappings; and export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-
-> [!IMPORTANT]
-> Links to OSS projects on the GitHub website are for informational purposes only and do not constitute an endorsement or guarantee of any kind. You should review the information and licensing terms on the OSS projects on GitHub before using it.
-
-## Device mappings overview
-
-Device mappings provide functionality to extract device message content into a common format for further evaluation. Each device message received is evaluated against all device mapping templates.
-
-A single inbound device message can be separated into multiple outbound messages that are later mapped to different observations in the FHIR service.
-
-The result is a normalized data object representing the value or values parsed by the templates.
+Device mappings provide functionality to extract device message content into a common format for further evaluation. Each device message received is evaluated against all device mapping templates. A single inbound device message can be separated into multiple outbound messages that are later mapped to different observations in the FHIR service. The result is a normalized data object representing the value or values parsed by the device mapping templates.
The normalized data model has a few required properties that must be found and extracted:
The normalized data model has a few required properties that must be found and e
> [!IMPORTANT] > The full normalized model is defined by the [IMeasurement](https://github.com/microsoft/iomt-fhir/blob/master/src/lib/Microsoft.Health.Fhir.Ingest.Schema/IMeasurement.cs) interface.
-Below are conceptual examples of what happens during normalization and transformation process within the MedTech service:
+Below is an example of what happens during the normalization and transformation process within the MedTech service. For the purposes of device mappings, we'll focus on the **Normalized data** step:
:::image type="content" source="media/iot-data-transformation/iot-data-normalization-high-level.png" alt-text="Diagram of IoT data normalization flow example zoomed out." lightbox="media/iot-data-transformation/iot-data-normalization-high-level.png"::: - The content payload itself is an Azure Event Hubs message, which is composed of three parts: Body, Properties, and SystemProperties. The `Body` is a byte array representing an UTF-8 encoded string. During template evaluation, the byte array is automatically converted into the string value. `Properties` is a key value collection for use by the message creator. `SystemProperties` is also a key value collection reserved by the Azure Event Hubs framework with entries automatically populated by it. ```json
The content payload itself is an Azure Event Hubs message, which is composed of
} } ```
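To make the three parts concrete, here's a short sketch (our own illustration, using the `azure-eventhub` Python package; connection values are placeholders) that receives an event and decodes its `Body` the same way template evaluation does:

```python
import json

from azure.eventhub import EventHubConsumerClient  # pip install azure-eventhub

def on_event(partition_context, event):
    # Body is a UTF-8 encoded byte array; body_as_str() performs the same
    # byte-array-to-string conversion applied during template evaluation.
    payload = json.loads(event.body_as_str(encoding="UTF-8"))
    print(payload, event.properties, event.system_properties)

client = EventHubConsumerClient.from_connection_string(
    "<event-hub-connection-string>",   # placeholder
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",  # placeholder
)
with client:
    client.receive(on_event=on_event, starting_position="-1")  # read from start
```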
+## CollectionContentTemplate
+
+The CollectionContentTemplate is the **root** template type used by the MedTech service device mappings and represents a list of all templates that will be used during the normalization process.
+
+### Example
+
+```json
+{
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@heartRate)]",
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+ },
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "stepcount",
+ "typeMatchExpression": "$..[?(@steps)]",
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.steps",
+ "valueName": "steps"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+ ## Mapping with JSONPath
-The five device content-mapping types supported today rely on JSONPath to both match the required mapping and extracted values. More information on JSONPath can be found [here](https://goessner.net/articles/JsonPath/). All five template types use the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
+The device mapping content types supported by the MedTech service rely on JSONPath to both match the required mapping and extract values. More information on JSONPath can be found [here](https://goessner.net/articles/JsonPath/). All template types use the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
+
+### Example
-You can define one or more templates within the MedTech service device mapping. Each event hub device message received is evaluated against all device mapping templates.
+**Heart rate**
-A single inbound device message can be separated into multiple outbound messages that are later mapped to different observations in the FHIR service.
+*A device message from the Azure Event Hubs event hub received by the MedTech service*
-Various template types exist and may be used when building the MedTech service device mapping.
+```json
+{
+ "Body": {
+ "heartRate": "78",
+ "endDate": "2021-02-01T22:46:01.8750000Z",
+ "deviceId": "device123"
+ },
+ "Properties": {},
+ "SystemProperties": {}
+}
+```
-|Name | Description |
-|-|-|
-|[JsonPathContentTemplate](./how-to-use-jsonpath-content-mappings.md) |A template that supports writing expressions using JsonPath
-|[CollectionContentTemplate](./how-to-use-collection-content-mappings.md) |A template used to represent a list of templates that will be used during the normalization. |
-|[CalculatedContentTemplate](./how-to-use-calculated-functions-mappings.md)|A template that supports writing expressions using one of several expression languages. Supports data transformation via the use of JmesPath functions.|
-|[IotJsonPathContentTemplate](./how-to-use-iot-jsonpath-content-mappings.md)|A template that supports messages sent from Azure Iot Hub or the Legacy Export Data feature of Azure Iot Central.|
-|[IotCentralJsonPathContentTemplate](./how-to-use-iot-central-json-content-mappings.md)|A template that supports messages sent via the Export Data feature of Azure Iot Central.|
+*A conforming MedTech service device mapping template that could be used during the normalization process with the example device message*
+```json
+{
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "JsonPathContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@heartRate)]",
+ "deviceIdExpression": "$.deviceId",
+ "timestampExpression": "$.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+JSONPath allows matching on and extracting values from a device message.
+
+|Property|Description|Example|
+|--|--|-|
+|TypeName|The type to associate with measurements that match the template|`heartrate`|
+|TypeMatchExpression|The JSONPath expression that is evaluated against the EventData payload. If a matching JToken is found, the template is considered a match. All later expressions are evaluated against the extracted JToken matched here.|`$..[?(@heartRate)]`|
+|DeviceIdExpression|The JSONPath expression to extract the device identifier.|`$.matchedToken.deviceId`|
+|TimestampExpression|The JSONPath expression to extract the timestamp value for the measurement's OccurrenceTimeUtc.|`$.matchedToken.endDate`|
+|PatientIdExpression|*Required* when IdentityResolution is in **Create** mode and *Optional* when IdentityResolution is in **Lookup** mode. The expression to extract the patient identifier.|`$.matchedToken.patientId`|
+|EncounterIdExpression|*Optional*: The expression to extract the encounter identifier.|`$.matchedToken.encounterId`|
+|CorrelationIdExpression|*Optional*: The expression to extract the correlation identifier. This output can be used to group values into a single observation in the FHIR destination mappings.|`$.matchedToken.correlationId`|
+|Values[].ValueName|The name to associate with the value extracted by the next expression. Used to bind the wanted value/component in the FHIR destination mapping template.|`hr`|
+|Values[].ValueExpression|The JSONPath expression to extract the wanted value.|`$.matchedToken.heartRate`|
+|Values[].Required|Will require the value to be present in the payload. If not found, a measurement won't be generated, and an InvalidOperationException will be thrown.|`true`|
+
+## Other supported template types
+
+You can define one or more templates within the MedTech service device mapping. Each device message received is evaluated against all device mapping templates.
+
+|Template Type|Description|
+|-|--|
+|[CalculatedContentTemplate](how-to-use-calculated-functions-mappings.md)|A template that supports writing expressions using one of several expression languages. Supports data transformation via the use of JMESPath functions.|
+|[IotJsonPathContentTemplate](how-to-use-iot-jsonpath-content-mappings.md)|A template that supports messages sent from Azure IoT Hub or the Legacy Export Data feature of Azure IoT Central.|
+
> [!TIP]
-> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service article [Troubleshoot MedTech service device and FHIR destination mappings](iot-troubleshoot-mappings.md) for assistance fixing common errors and issues related to MedTech service mappings.
## Next steps
-In this article, you learned how to use Device mappings. To learn how to use FHIR destination mappings, see
+In this article, you learned how to use device mappings. To learn how to use FHIR destination mappings, see
->[!div class="nextstepaction"]
->[How to use the FHIR destination mapping](how-to-use-fhir-mappings.md)
+> [!div class="nextstepaction"]
+> [How to use the FHIR destination mappings](how-to-use-fhir-mappings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Fhir Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-fhir-mappings.md
Previously updated : 07/07/2022 Last updated : 10/25/2022
This article describes how to configure the MedTech service using the Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings.
-> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
- Below is a conceptual example of what happens during the normalization and transformation process within the MedTech service: :::image type="content" source="media/iot-data-transformation/iot-data-normalization-high-level.png" alt-text="Diagram of IoT data normalization flow." lightbox="media/iot-data-transformation/iot-data-normalization-high-level.png":::
Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConc
``` > [!TIP]
-> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service article [Troubleshoot MedTech service device and FHIR destination mappings](iot-troubleshoot-mappings.md) for assistance fixing common errors and issues related to MedTech service mappings.
## Next steps
-In this article, you learned how to use FHIR destination mappings. To learn how to use Device mappings, see
+In this article, you learned how to use FHIR destination mappings. To learn how to use device mappings, see
->[!div class="nextstepaction"]
->[How to use Device mappings](how-to-use-device-mappings.md)
+> [!div class="nextstepaction"]
+> [How to use device mappings](how-to-use-device-mappings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Iot Central Json Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iot-central-json-content-mappings.md
- Title: IotCentralJsonPathContentTemplate mappings in MedTech service device mappings - Azure Health Data Services
-description: This article describes how IotCentralJsonPathContent mappings with MedTech service device mappings templates.
---- Previously updated : 09/16/2022---
-# How to use IotCentralJsonPathContentTemplate mappings
-
-> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service device and FHIR destination mappings. Export mappings for uploading to MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-
-This article describes how to use IoTCentralJsonPathContentTemplate mappings with the MedTech service device mappings.
-
-## IotCentralJsonPathContentTemplate
-
-The IotCentralJsonPathContentTemplate also doesn't require DeviceIdExpression and TimestampExpression. It gets used when the messages being evaluated are sent through the [Export Data](../../iot-central/core/howto-export-data.md) feature of [Azure IoT Central](../../iot-central/core/overview-iot-central.md).
-
-If you're using Azure IoT Central's Data Export feature and custom properties in the message body for the device identity or measurement timestamp, you can still use the JsonPathContentTemplate.
-
-> [!NOTE]
-> When using `IotCentralJsonPathContentTemplate`, `TypeMatchExpression` should resolve to the entire message as a JToken. For more information, see the following examples:
-
-### Examples
-
-**Heart rate**
-
-*Message*
-
-```json
-{
- "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
- "messageSource": "telemetry",
- "deviceId": "1vzb5ghlsg1",
- "schema": "default@v1",
- "templateId": "urn:qugj6vbw5:___qbj_27r",
- "enqueuedTime": "2020-08-05T22:26:55.455Z",
- "telemetry": {
- "Activity": "running",
- "BloodPressure": {
- "Diastolic": 7,
- "Systolic": 71
- },
- "BodyTemperature": 98.73447010562934,
- "HeartRate": 88,
- "HeartRateVariability": 17,
- "RespiratoryRate": 13
- },
- "enrichments": {
- "userSpecifiedKey": "sampleValue"
- },
- "messageProperties": {
- "messageProp": "value"
- }
-}
-```
-
-*Template*
-
-```json
-{
- "templateType": "IotCentralJsonPathContent",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@telemetry.HeartRate)]",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.telemetry.HeartRate",
- "valueName": "hr"
- }
- ]
- }
-}
-```
-
-**Blood pressure**
-
-*Message*
-
-```json
-{
- "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
- "messageSource": "telemetry",
- "deviceId": "1vzb5ghlsg1",
- "schema": "default@v1",
- "templateId": "urn:qugj6vbw5:___qbj_27r",
- "enqueuedTime": "2020-08-05T22:26:55.455Z",
- "telemetry": {
- "Activity": "running",
- "BloodPressure": {
- "Diastolic": 7,
- "Systolic": 71
- },
- "BodyTemperature": 98.73447010562934,
- "HeartRate": 88,
- "HeartRateVariability": 17,
- "RespiratoryRate": 13
- },
- "enrichments": {
- "userSpecifiedKey": "sampleValue"
- },
- "messageProperties": {
- "messageProp": "value"
- }
-}
-```
-
-*Template*
-
-```json
-{
- "templateType": "IotCentralJsonPathContent",
- "template": {
- "typeName": "bloodPressure",
- "typeMatchExpression": "$..[?(@telemetry.BloodPressure.Diastolic && @telemetry.BloodPressure.Systolic)]",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.telemetry.BloodPressure.Diastolic",
- "valueName": "bp_diastolic"
- },
- {
- "required": "true",
- "valueExpression": "$.telemetry.BloodPressure.Systolic",
- "valueName": "bp_systolic"
- }
- ]
- }
-}
-```
-
-> [!TIP]
-> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
-
-## Next steps
-
-In this article, you learned how to use IotCentralJsonPathContentTemplate with your MedTech service device mappings. To learn how to use FHIR destination mappings, see
-
->[!div class="nextstepaction"]
->[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Iot Jsonpath Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iot-jsonpath-content-mappings.md
Previously updated : 10/03/2022 Last updated : 10/25/2022
The assumption, when using this template, is the messages being evaluated were s
When you're using these SDKs, the device identity and the timestamp of the message are known.
->[!IMPORTANT]
->Make sure that you're using a device identifier from Azure Iot Hub or Azure IoT Central that is registered as an identifier for a device resource on the destination Fast Healthcare Interoperability Resource (FHIR&#174;) service.
+> [!IMPORTANT]
+> Make sure that you're using a device identifier from Azure IoT Hub or Azure IoT Central that is registered as an identifier for a device resource on the destination Fast Healthcare Interoperability Resources (FHIR&#174;) service.
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContentTemplate, assuming that you're using custom properties in the message body for the device identity or measurement timestamp.
With each of these examples, you're provided with:
``` > [!TIP]
-> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service article [Troubleshoot MedTech service device and FHIR destination mappings](iot-troubleshoot-mappings.md) for assistance fixing common errors and issues related to MedTech service mappings.
## Next steps In this article, you learned how to use IotJsonPathContentTemplate mappings with the MedTech service device mapping. To learn how to use MedTech service FHIR destination mapping, see
->[!div class="nextstepaction"]
->[How to use the FHIR destination mapping](how-to-use-fhir-mappings.md)
+> [!div class="nextstepaction"]
+> [How to use the FHIR destination mapping](how-to-use-fhir-mappings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Jsonpath Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-jsonpath-content-mappings.md
- Title: JsonPathContentTemplate mappings in MedTech service Device mappings - Azure Health Data Services
-description: This article describes how to use JsonPathContentTemplate mappings with the MedTech service Device mappings templates.
---- Previously updated : 02/16/2022---
-# How to use JsonPathContentTemplate mappings
-
-> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-
-This article describes how to use JsonPathContentTemplate mappings with the MedTech service Device mappings templates.
-
-## JsonPathContentTemplate
-
-The JsonPathContentTemplate allows matching on and extracting values from an Azure Event Hub message using JSONPath.
-
-|Property|Description|Example|
-|--|--|-|
-|TypeName|The type to associate with measurements that match the template|`heartrate`|
-|TypeMatchExpression|The JSONPath expression that is evaluated against the EventData payload. If a matching JToken is found, the template is considered a match. All later expressions are evaluated against the extracted JToken matched here.|`$..[?(@heartRate)]`|
-|TimestampExpression|The JSONPath expression to extract the timestamp value for the measurement's OccurrenceTimeUtc.|`$.matchedToken.endDate`|
-|DeviceIdExpression|The JSONPath expression to extract the device identifier.|`$.matchedToken.deviceId`|
-|PatientIdExpression|*Required* when IdentityResolution is in **Create** mode and *Optional* when IdentityResolution is in **Lookup** mode. The expression to extract the patient identifier.|`$.matchedToken.patientId`|
-|EncounterIdExpression|*Optional*: The expression to extract the encounter identifier.|`$.matchedToken.encounterId`|
-|CorrelationIdExpression|*Optional*: The expression to extract the correlation identifier. This output can be used to group values into a single observation in the FHIR destination mappings.|`$.matchedToken.correlationId`|
-|Values[].ValueName|The name to associate with the value extracted by the next expression. Used to bind the wanted value/component in the FHIR destination mapping template.|`hr`|
-|Values[].ValueExpression|The JSONPath expression to extract the wanted value.|`$.matchedToken.heartRate`|
-|Values[].Required|Will require the value to be present in the payload. If not found, a measurement won't be generated and an InvalidOperationException will be created.|`true`|
-
-### Examples
-
-**Heart rate**
-
-*Message*
-
-```json
-{
- "Body": {
- "heartRate": "78",
- "endDate": "2021-02-01T22:46:01.8750000Z",
- "deviceId": "device123"
- },
- "Properties": {},
- "SystemProperties": {}
-}
-```
-
-*Template*
-
-```json
-{
- "templateType": "JsonPathContent",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@heartRate)]",
- "deviceIdExpression": "$.deviceId",
- "timestampExpression": "$.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.heartRate",
- "valueName": "hr"
- }
- ]
- }
-}
-```
-**Blood pressure**
-
-*Message*
-
-```json
-{
- "Body": {
- "systolic": "123",
- "diastolic" : "87",
- "endDate": "2021-02-01T22:46:01.8750000Z",
- "deviceId": "device123"
- },
- "Properties": {},
- "SystemProperties": {}
-}
-```
-
-*Template*
-
-```json
-{
- "typeName": "bloodpressure",
- "typeMatchExpression": "$..[?(@systolic && @diastolic)]",
- "deviceIdExpression": "$.deviceId",
- "timestampExpression": "$.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.systolic",
- "valueName": "systolic"
- },
- {
- "required": "true",
- "valueExpression": "$.diastolic",
- "valueName": "diastolic"
- }
- ]
-}
-```
-**Project multiple measurements from single message**
-
-*Message*
-
-```json
-{
- "Body": {
- "heartRate": "78",
- "steps": "2",
- "endDate": "2021-02-01T22:46:01.8750000Z",
- "deviceId": "device123"
- },
- "Properties": {},
- "SystemProperties": {}
-}
-```
-
-*Template 1*
-
-```json
-{
- "templateType": "JsonPathContent",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@heartRate)]",
- "deviceIdExpression": "$.deviceId",
- "timestampExpression": "$.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.heartRate",
- "valueName": "hr"
- }
- ]
- }
-}
-```
-
-*Template 2*
-
-```json
-{
- "templateType": "JsonPathContent",
- "template": {
- "typeName": "stepcount",
- "typeMatchExpression": "$..[?(@steps)]",
- "deviceIdExpression": "$.deviceId",
- "timestampExpression": "$.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.steps",
- "valueName": "steps"
- }
- ]
- }
-}
-```
-
-**Project multiple measurements from array in message**
-
-*Message*
-
-```json
-{
- "Body": [
- {
- "heartRate": "78",
- "endDate": "2021-02-01T22:46:01.8750000Z",
- "deviceId": "device123"
- },
- {
- "heartRate": "81",
- "endDate": "2021-02-01T23:46:01.8750000Z",
- "deviceId": "device123"
- },
- {
- "heartRate": "72",
- "endDate": "2021-02-01T24:46:01.8750000Z",
- "deviceId": "device123"
- }
- ],
- "Properties": {},
- "SystemProperties": {}
-}
-```
-*Template*
-
-```json
-{
- "templateType": "JsonPathContent",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@heartRate)]",
- "deviceIdExpression": "$.deviceId",
- "timestampExpression": "$.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.heartRate",
- "valueName": "hr"
- }
- ]
- }
-}
-```
-
-> [!TIP]
-> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
-
-## Next steps
-
-In this article, you learned how to use Device mappings. To learn how to use FHIR destination mappings, see
-
->[!div class="nextstepaction"]
->[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Troubleshoot Error Messages And Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-troubleshoot-error-messages-and-conditions.md
Title: Troubleshoot MedTech service error messages, conditions, and fixes - Azure Health Data Services
-description: This article helps users troubleshoot MedTech service errors/conditions and provides fixes and solutions.
+ Title: Troubleshoot MedTech service error messages and conditions - Azure Health Data Services
+description: This article helps users troubleshoot MedTech service error messages and conditions.
Previously updated : 03/21/2022 Last updated : 10/25/2022
This article provides steps for troubleshooting and fixing MedTech service error messages and conditions.
-> [!IMPORTANT]
-> Having access to MedTech service metrics is essential for monitoring and troubleshooting. MedTech service assists you to do these actions through [Metrics](how-to-configure-metrics.md).
- > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
+> Access to metrics and logs is essential for troubleshooting and assessing the overall performance of your MedTech service. Check out these MedTech service articles to learn how to enable, configure, and use these monitoring features:
+>
+> [How to use the MedTech service monitoring tab](how-to-use-monitoring-tab.md)
+>
+> [How to configure the MedTech service metrics](how-to-configure-metrics.md)
+>
+> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
> [!NOTE]
-> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
+> When you open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your device and FHIR destination mappings](how-to-create-mappings-copies.md) to assist in the troubleshooting process.
## Error messages and conditions
This property provides the name for a specific error. Below is the list of all e
|`PatientDeviceMismatchException`|This error occurs when the device resource on the FHIR service has a reference to a patient resource that doesn't match the patient identifier present in the message.|`FHIRResourceError`|Error|`FHIRConversionError`|
|`PatientNotFoundException`|No Patient FHIR resource is referenced by the Device FHIR resource associated with the device identifier present in the device message. Note that this error only occurs when the MedTech service instance is configured with the *Lookup* resolution type.|`FHIRConversionError`|Error|`FHIRConversion`|
|`DeviceNotFoundException`|No device resource exists on the FHIR service associated with the device identifier present in the device message.|`DeviceMessageError`|Error|Normalization|
-|`PatientIdentityNotDefinedException`|This error occurs when expression to parse patient identifier from the device message isn't configured on the Device mapping or patient identifer isn't present in the device message. Note this error occurs only when MedTech service's resolution type is set to *Create*.|`DeviceTemplateError`|Critical|Normalization|
-|`DeviceIdentityNotDefinedException`|This error occurs when the expression to parse device identifier from the device message isn't configured on the Device mapping or device identifer isn't present in the device message.|`DeviceTemplateError`|Critical|Normalization|
+|`PatientIdentityNotDefinedException`|This error occurs when the expression to parse the patient identifier from the device message isn't configured on the device mapping, or the patient identifier isn't present in the device message. Note that this error occurs only when the MedTech service's resolution type is set to *Create*.|`DeviceTemplateError`|Critical|Normalization|
+|`DeviceIdentityNotDefinedException`|This error occurs when the expression to parse the device identifier from the device message isn't configured on the device mapping, or the device identifier isn't present in the device message.|`DeviceTemplateError`|Critical|Normalization|
|`NotSupportedException`|Error that occurs when a device message with an unsupported format is received.|`DeviceMessageError`|Error|Normalization|

### MedTech service resource
This property provides the name for a specific error. Below is the list of all e
|A Patient Resource hasn't been created in the FHIR service (Resolution Type: Look up only)*.|Create a valid Patient Resource in the FHIR service.|
|The `Device.patient` reference isn't set, or the reference is invalid (Resolution Type: Look up only)*.|Make sure the Device Resource contains a valid [Reference](https://www.hl7.org/fhir/device-definitions.html#Device.patient) to a Patient Resource.|
-*Reference [Quickstart: Deploy MedTech service using Azure portal](deploy-iot-connector-in-azure.md) for a functional description of the MedTech service resolution types (For example: Look up or Create).
+*Reference [Quickstart: Deploy MedTech service using Azure portal](deploy-05-new-config.md#destination-properties) for a functional description of the MedTech service resolution types (for example: Create or Lookup).
## Next steps In this article, you learned how to troubleshoot MedTech service error messages and conditions. To learn how to troubleshoot the MedTech service device and FHIR destination mappings, see
->[!div class="nextstepaction"]
->[Troubleshoot MedTech service Device and FHIR destination mappings](iot-troubleshoot-mappings.md)
+> [!div class="nextstepaction"]
+> [Troubleshoot MedTech service device and FHIR destination mappings](iot-troubleshoot-mappings.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Iot Troubleshoot Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-troubleshoot-mappings.md
Title: Troubleshoot MedTech service Device and FHIR destination mappings - Azure Health Data Services
-description: This article helps users troubleshoot the MedTech service Device and FHIR destination mappings.
+ Title: Troubleshoot MedTech service device and FHIR destination mappings - Azure Health Data Services
+description: This article helps users troubleshoot the MedTech service device and FHIR destination mappings.
Previously updated : 10/10/2022 Last updated : 10/25/2022
-# Troubleshoot MedTech service Device and FHIR destination mappings
+# Troubleshoot MedTech service device and FHIR destination mappings
-This article provides the validation steps MedTech service performs on the Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings and can be used for troubleshooting mappings error messages and conditions.
-
-> [!IMPORTANT]
-> Having access to MedTech service Metrics is essential for monitoring and troubleshooting. MedTech service assists you to do these actions through [Metrics](how-to-configure-metrics.md).
+This article provides the validation steps the MedTech service performs on the device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings and can be used for troubleshooting mappings error messages and conditions.
> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
+> Access to metrics and logs is essential for troubleshooting and assessing the overall performance of your MedTech service. Check out these MedTech service articles to learn how to enable, configure, and use these monitoring features:
+>
+> [How to use the MedTech service monitoring tab](how-to-use-monitoring-tab.md)
+>
+> [How to configure the MedTech service metrics](how-to-configure-metrics.md)
+>
+> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
> [!NOTE]
-> When you open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
+> When you open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your device and FHIR destination mappings](how-to-create-mappings-copies.md) to assist in the troubleshooting process.
## Device and FHIR destination mappings validations
-This section describes the validation process that the MedTech service performs. The validation process validates the Device and FHIR destination mappings before allowing them to be saved for use. These elements are required in the Device and FHIR destination mappings.
+The validation process validates the device and FHIR destination mappings before allowing them to be saved for use. These elements are required in the device and FHIR destination mapping templates.
**Device mappings**
This section describes the validation process that the MedTech service performs.
In this article, you learned the validation process that the MedTech service performs on the device and FHIR destination mappings. To learn how to troubleshoot MedTech service errors and conditions, see
->[!div class="nextstepaction"]
->[Troubleshoot MedTech service error messages and conditions](iot-troubleshoot-error-messages-and-conditions.md)
+> [!div class="nextstepaction"]
+> [Troubleshoot MedTech service error messages and conditions](iot-troubleshoot-error-messages-and-conditions.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
import-export Storage Import Export Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-service.md
Previously updated : 10/20/2022 Last updated : 10/27/2022 # What is Azure Import/Export service?
The Azure Import/Export service supports copying data to and from all Azure stor
|North Central US | Australia Southeast | Brazil South | UK South |
|South Central US | Japan West |Korea Central | Germany Central |
|West Central US | Japan East | US Gov Virginia | Germany Northeast |
-|South Africa West | South Africa North | UAE |
+|South Africa West | South Africa North | UAE Central | UAE North |
## Security considerations
iot-dps How To Manage Linked Iot Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-manage-linked-iot-hubs.md
DPS also supports deleting linked IoT Hubs from the DPS instance using the [Crea
## Update keys for linked IoT hubs
-It may become necessary to either rotate or update the symmetric keys for an IoT hub that's been linked to DPS. In this case, you'll also need to update the connection string setting in DPS for the linked IoT hub. Note that provisioning to an IoT hub will fail during the interim between updating a key on the IoT hub and updating your DPS instance with the new connections string based on that key.
+It may become necessary to either rotate or update the symmetric keys for an IoT hub that's been linked to DPS. In this case, you'll also need to update the connection string setting in DPS for the linked IoT hub. Note that provisioning to an IoT hub will fail during the interim between updating a key on the IoT hub and updating your DPS instance with the new connection string based on that key. For this reason, we recommend [using the Azure CLI to update your keys](#use-the-azure-cli-to-update-keys) because you can update the connection string on the linked hub directly. With the Azure portal, you have to delete the IoT hub from your DPS instance and then relink it in order to update the connection string.
### Use the Azure portal to update keys
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
You should already have IoT Edge installed on your device. If not, follow the st
```toml
[agent.config]
- image: "mcr.microsoft.com/azureiotedge-agent:1.4"
+ image = "mcr.microsoft.com/azureiotedge-agent:1.4"
```

01. The beginning of your parent configuration file should look similar to the following example.
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
For more information, see [Load balancer limits](../azure-resource-manager/manag
## Limitations - A standalone virtual machine resource, availability set resource, or virtual machine scale set resource can reference one SKU, never both. - [Move operations](../azure-resource-manager/management/move-resource-group-and-subscription.md):
- - Resource group move operations (within same subscription) **are supported** for Standard Load Balancer and Standard Public IP.
- - [Subscription group move operations](../azure-resource-manager/management/move-support-resources.md) are **not** supported for Standard Load Balancers.
+ - [Resource group move operations](../azure-resource-manager/management/move-support-resources.md#microsoftnetwork) (within same subscription) are **supported** for Standard Load Balancer and Standard Public IP.
+ - [Subscription move operations](../azure-resource-manager/management/move-support-resources.md#microsoftnetwork) are **not supported** for Standard Load Balancers.
## Next steps - See [Create a public Standard Load Balancer](quickstart-load-balancer-standard-public-portal.md) to get started with using a Load Balancer.
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-authenticate-batch-endpoint.md
job = ml_client.batch_endpoints.invoke(
) ```
-# [Azure ML studio](#tab/studio)
+# [studio](#tab/studio)
Jobs are always started using the identity of the user in the portal in studio.
job = ml_client.batch_endpoints.invoke(
) ```
-# [Azure ML studio](#tab/studio)
+# [studio](#tab/studio)
You can't run jobs using a service principal from studio.
job = ml_client.batch_endpoints.invoke(
) ```
-# [Azure ML studio](#tab/studio)
+# [studio](#tab/studio)
You can't run jobs using a managed identity from studio.
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-deploy-model-custom-output.md
Last updated 10/10/2022-+
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-image-processing-batch.md
Last updated 10/10/2022-+
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-nlp-processing-batch.md
Last updated 10/10/2022-+
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-secure-batch-endpoint.md
When deploying a machine learning model to a batch endpoint, you can secure thei
## Prerequisites * A secure Azure Machine Learning workspace. For more details about how to achieve it read [Create a secure workspace](../tutorial-create-secure-workspace.md).
+* For Azure Container Registry in private networks, please note that there are [some prerequisites about their configuration](../how-to-secure-workspace-vnet.md#prerequisites).
+
+ > [!WARNING]
+ > Azure Container Registries with the Quarantine feature enabled aren't supported at the moment.
+ * Ensure blob, file, queue, and table private endpoints are configured for the storage accounts as explained at [Secure Azure storage accounts](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). Batch deployments require all four to work properly. ## Securing batch endpoints
-All the batch endpoints created inside of secure workspace are deployed as private batch endpoints by default. Not further configuration is required.
+All the batch endpoints created inside of secure workspace are deployed as private batch endpoints by default. No further configuration is required.
> [!IMPORTANT] > When working on a private link-enabled workspaces, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Please use the Azure ML CLI v2 instead for job creation. For more details about how to use it see [Invoke the batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-scoring-job).
The following diagram shows how the networking looks like for batch endpoints wh
:::image type="content" source="./media/how-to-secure-batch-endpoint/batch-vnet-peering.png" alt-text="Diagram that shows the high level architecture of a secure Azure Machine Learning workspace deployment.":::
+To give the jump host VM (or self-hosted agent VMs if using [Azure Bastion](../../bastion/bastion-overview.md)) access to the resources in the Azure Machine Learning VNet, the previous architecture uses virtual network peering to seamlessly connect these two virtual networks, so that they appear as one for connectivity purposes. The traffic between VMs and Azure Machine Learning resources in peered virtual networks uses the Microsoft backbone infrastructure; like traffic within a single network, it's routed through Microsoft's private network only.
## Securing batch deployment jobs Azure Machine Learning batch deployments run on compute clusters. To secure batch deployment jobs, those compute clusters have to be deployed in a virtual network too. 1. Create an Azure Machine Learning [compute cluster in the virtual network](../how-to-secure-training-vnet.md#compute-cluster) (see the sketch after this list).
-1. If your compute instance uses a public IP address, you must [Allow inbound communication](../how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+2. Ensure all related services have private endpoints configured in the network. Private endpoints are used not only for the Azure Machine Learning workspace, but also for its associated resources such as Azure Storage, Azure Key Vault, or Azure Container Registry. Azure Container Registry is a required service. While securing the Azure Machine Learning workspace with virtual networks, please note that there are [some prerequisites about Azure Container Registry](../how-to-secure-workspace-vnet.md#prerequisites).
+3. If your compute instance uses a public IP address, you must [Allow inbound communication](../how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
> [!TIP] > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
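Here's a minimal sketch of step 1 using the Azure ML Python SDK v2. The subscription, resource group, workspace, VNet, and subnet names are placeholders, and the cluster sizing is only an example:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute, NetworkSettings
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Deploy the cluster nodes into the workspace virtual network.
compute = AmlCompute(
    name="batch-cluster-vnet",
    size="STANDARD_DS3_V2",
    min_instances=0,
    max_instances=4,
    network_settings=NetworkSettings(vnet_name="<vnet-name>", subnet="<subnet-name>"),
)
ml_client.begin_create_or_update(compute).result()
```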
The following diagram shows the high level design:
Have the following considerations when using such architecture:
-* Put the second set of private endpoints in a different resource group and hence in different private DNS zones. This prevents a name resolution conflict between the set of IPs used for the workload and the ones used by the client VNets.
+* Put the second set of private endpoints in a different resource group and hence in different private DNS zones. This prevents a name resolution conflict between the set of IPs used for the workspace and the ones used by the client VNets. Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today. Please note that the DNS resolution against a private DNS zone works only from virtual networks that are linked to it. For more details see [recommended zone names for Azure services](../../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
* For your storage accounts, add 4 private endpoints in each VNet for blob, file, queue, and table as explained at [Secure Azure storage accounts](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-batch-endpoint.md
Use batch endpoints when:
In this article, you will learn how to use batch endpoints to do batch scoring.
+> [!TIP]
+> We suggest you read the Scenarios sections (see the navigation bar at the left) to learn more about how to use batch endpoints in specific scenarios, including NLP and computer vision, or how to integrate them with other Azure services.
+ ## Prerequisites [!INCLUDE [basic cli prereqs](../../../includes/machine-learning-cli-prereqs.md)]
Use the `init()` method for any costly or common preparation. For example, use i
Use the `run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]` method to perform the scoring of each mini-batch generated by the batch deployment. This method is called once per `mini_batch` generated for your input data. Batch deployments read data in batches according to how the deployment is configured.
-> [!IMPORTANT]
+> [!NOTE]
+> __How is work distributed?__:
+>
> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file size distribution. The method receives a list of file paths as a parameter (`mini_batch`). You can use this list to either iterate over each file and process it one by one, or to read the entire batch and process it at once. The best option will depend on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments). The `run()` method should return a pandas DataFrame or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element will represent a single file processed. For a tabular dataset, each row/element will represent a row in a processed file.
-Use __arrays__ when you need to output a single prediction. Use __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data, you may want to append your predictions to the original record. Use a pandas DataFrame for this case. For file datasets, __we still recommend to output a pandas DataFrame__ as they provide a more robust approach to read the results.
+> [!IMPORTANT]
+> __How to write predictions?__:
+>
+> Use __arrays__ when you need to output a single prediction. Use __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data, you may want to append your predictions to the original record. Use a pandas DataFrame for this case. For file datasets, __we still recommend to output a pandas DataFrame__ as they provide a more robust approach to read the results.
+>
+> Although a pandas DataFrame may contain column names, they aren't included in the output file. If needed, please see [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
> [!WARNING] > Do not output complex data types (or lists of complex data types) in the `run` function. Those outputs will be transformed to strings and will be hard to read.
-The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (1 file can generate 1 or many rows/elements in the output). All elements in the result DataFrame or array will be written to the output file as-is (given that the `output_action` isn't `summary_only`).
-
-> [!TIP]
-> We suggest you to read the Scenarios sections (see the navigation bar at the left) to see different case by case scenarios and how the scoring script looks like.
+The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (one file can generate one or many rows/elements in the output). All elements in the result DataFrame or array will be written to the output file as-is (provided the `output_action` isn't `summary_only`).
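Putting the contract together, a scoring script might look like the following minimal sketch (our own illustration; the model file name `model.pkl` and the CSV input format are assumptions):

```python
import os
import pickle
from typing import List

import pandas as pd

model = None

def init():
    # Runs once per worker before any mini-batch is processed.
    # AZUREML_MODEL_DIR points at the registered model folder.
    global model
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pkl")
    with open(model_path, "rb") as f:
        model = pickle.load(f)

def run(mini_batch: List[str]) -> pd.DataFrame:
    # Runs once per mini-batch; mini_batch is a list of file paths.
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)  # assumed input format
        predictions = model.predict(data)
        # One row per processed file keeps the output cardinality easy to audit.
        results.append({
            "file": os.path.basename(file_path),
            "predictions": predictions.tolist(),
        })
    return pd.DataFrame(results)
```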
## Create a batch deployment
machine-learning How To Create Component Pipelines Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-ui.md
-# Create and run machine learning pipelines using components with the Azure Machine Learning studio (Preview)
+# Create and run machine learning pipelines using components with the Azure Machine Learning studio
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] In this article, you'll learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure Machine Learning studio and [Components](concept-component.md). You can create pipelines without using components, but components offer greater flexibility and reuse. Azure ML Pipelines may be defined in YAML and [run from the CLI](how-to-create-component-pipelines-cli.md), [authored in Python](how-to-create-component-pipeline-python.md), or composed in Azure ML Studio Designer with a drag-and-drop UI. This document focuses on the AzureML studio designer UI. - ## Prerequisites * If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Last updated 09/22/2022
In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create data assets from AzureML datastores, Azure Storage, public URLs, and local files. > [!IMPORTANT]
-> If you didn't creat/register the data source as a data asset, you can still [consume the data via specifying the data path in a job](how-to-read-write-data-v2.md#read-data-in-a-job) without below benefits.
+> If you didn't create/register the data source as a data asset, you can still [consume the data by specifying the data path in a job](how-to-read-write-data-v2.md#read-data-in-a-job), without the benefits below.
The benefits of creating data assets are:
To create a File data asset in the Azure Machine Learning studio, use the follow
## Create a `mltable` data asset `mltable` is a way to abstract the schema definition for tabular data to make it easier to share data assets (an overview can be found in [MLTable](concept-data.md#mltable)).
-`mltable` supports tabular data coming from belowing sources:
+`mltable` supports tabular data coming from the following sources:
- Delimited files (CSV, TSV, TXT) - Parquet files - JSON Lines
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
def get_access_token_msi(resource):
arm_access_token = get_access_token_msi("https://management.azure.com") ```
+> [!NOTE]
+> To use the Azure CLI with the managed identity for authentication, specify the identity client ID as the username when logging in: ```az login --identity --username $DEFAULT_IDENTITY_CLIENT_ID```.
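An equivalent sketch with the `azure-identity` Python package (our illustration; the client ID value is a placeholder) acquires a token through the same managed identity:

```python
from azure.identity import ManagedIdentityCredential

# Use the client ID of the compute instance's assigned managed identity.
credential = ManagedIdentityCredential(client_id="<DEFAULT_IDENTITY_CLIENT_ID>")
token = credential.get_token("https://management.azure.com/.default")
print(token.expires_on)
```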
+ ## Add custom applications such as RStudio (preview) You can set up other applications, such as RStudio, when creating a compute instance. Follow these steps in studio to set up a custom application on your compute instance
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
Once the updated image is built and your development container launches, use the
## Next steps -- [Deploy and score a machine learning model by using a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md)-- [Troubleshooting managed online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md)
+- [Deploy and score a machine learning model by using a managed online endpoint](how-to-deploy-managed-online-endpoints.md)
+- [Troubleshooting managed online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
The following data types are supported for batch inference.
To learn more, review these articles: -- [Deploy models with REST (preview)](how-to-deploy-with-rest.md)
+- [Deploy models with REST](how-to-deploy-with-rest.md)
- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md) - [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) - [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md)-- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)-- [Access Azure resources with an online endpoint and managed identity (preview)](how-to-access-resources-from-endpoints-managed-identities.md)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
- [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
You can create registries in AzureML studio using the following steps:
> [!TIP] > If you are in a workspace, navigate to the global UI by clicking your organization or tenant name in the navigation pane to find the __Registries__ entry. You can also go directly there by navigating to [https://ml.azure.com/registries](https://ml.azure.com/registries).
- :::image type="content" source="./media/how-to-manage-registries/studio-create-registry-button.png" alt-text="Screenshot of the create registry screen.":::
+ :::image type="content" source="./media/how-to-manage-registries/studio-create-registry-button.png" lightbox="./media/how-to-manage-registries/studio-create-registry-button.png" alt-text="Screenshot of the create registry screen.":::
1. Enter the registry name, select the subscription and resource group and then select __Next__.
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
The following example defines a pipeline containing three nodes and moves data b
## Next steps * [Train models](how-to-train-model.md)
-* [Tutorial: Create production ML pipelines with Python SDK v2 (preview)](tutorial-pipeline-python-sdk.md)
+* [Tutorial: Create production ML pipelines with Python SDK v2](tutorial-pipeline-python-sdk.md)
* Learn more about [Data in Azure Machine Learning](concept-data.md)
machine-learning How To Schedule Pipeline Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md
Title: Schedule Azure Machine Learning pipeline jobs (preview)
+ Title: Schedule Azure Machine Learning pipeline jobs
description: Learn how to schedule pipeline jobs that allow you to automate routine, time-consuming tasks such as data processing, training, and monitoring.
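For example, a daily recurrence schedule for a pipeline job can be sketched with the Python SDK v2 as follows; the YAML file, schedule name, and workspace details are placeholders:

```python
from azure.ai.ml import MLClient, load_job
from azure.ai.ml.entities import JobSchedule, RecurrencePattern, RecurrenceTrigger
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace>"
)

# Hypothetical pipeline job definition saved as YAML.
pipeline_job = load_job(source="./pipeline.yml")

# Run the pipeline every day at 02:00.
trigger = RecurrenceTrigger(
    frequency="day",
    interval=1,
    schedule=RecurrencePattern(hours=2, minutes=0),
)
schedule = JobSchedule(name="nightly-training", trigger=trigger, create_job=pipeline_job)
ml_client.schedules.begin_create_or_update(schedule).result()
```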
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](v1/how-to-train-keras.md)
-> * [v2 (preview)](how-to-train-keras.md)
+> * [v2 (current version)](how-to-train-keras.md)
In this article, learn how to run your Keras training scripts using the Azure Machine Learning (AzureML) Python SDK v2.
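The usual SDK v2 pattern is to submit the training script as a command job; here's a hedged sketch in which the source folder, script arguments, curated environment name, and compute target are all placeholders:

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace>"
)

# The ./src folder is assumed to contain the Keras training script.
job = command(
    code="./src",
    command="python train.py --epochs 10",
    environment="AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu@latest",
    compute="gpu-cluster",
    display_name="keras-train-sketch",
)
ml_client.jobs.create_or_update(job)
```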
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](v1/how-to-train-pytorch.md)
-> * [v2 (preview)](how-to-train-pytorch.md)
+> * [v2 (current version)](how-to-train-pytorch.md)
In this article, you'll learn to train, hyperparameter tune, and deploy a [PyTorch](https://pytorch.org/) model using the Azure Machine Learning (AzureML) Python SDK v2.
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](v1/how-to-train-scikit-learn.md)
-> * [v2 (preview)](how-to-train-scikit-learn.md)
+> * [v2 (current version)](how-to-train-scikit-learn.md)
In this article, learn how to run your scikit-learn training scripts with Azure Machine Learning Python SDK v2.
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](v1/how-to-train-tensorflow.md)
-> * [v2 (preview)](how-to-train-tensorflow.md)
+> * [v2 (current version)](how-to-train-tensorflow.md)
In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using Azure Machine Learning Python SDK v2.
machine-learning How To View Online Endpoints Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-view-online-endpoints-costs.md
Title: View costs for managed online endpoints (preview)
+ Title: View costs for managed online endpoints
description: 'Learn to how view costs for a managed online endpoint in Azure Machine Learning.'
-# View costs for an Azure Machine Learning managed online endpoint (preview)
+# View costs for an Azure Machine Learning managed online endpoint
-Learn how to view costs for a managed online endpoint (preview). Costs for your endpoints will accrue to the associated workspace. You can see costs for a specific endpoint using tags.
+Learn how to view costs for a managed online endpoint. Costs for your endpoints will accrue to the associated workspace. You can see costs for a specific endpoint using tags.
> [!IMPORTANT]
-> This article only applies to viewing costs for Azure Machine Learning managed online endpoints (preview). Managed online endpoints are different from other resources since they must use tags to track costs. For more information on viewing the costs of other Azure resources, see [Quickstart: Explore and analyze costs with cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md).
+> This article only applies to viewing costs for Azure Machine Learning managed online endpoints. Managed online endpoints are different from other resources since they must use tags to track costs. For more information on viewing the costs of other Azure resources, see [Quickstart: Explore and analyze costs with cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md).
## Prerequisites -- Deploy an Azure Machine Learning managed online endpoint (preview).
+- Deploy an Azure Machine Learning managed online endpoint.
- Have at least [Billing Reader](../role-based-access-control/role-assignments-portal.md) access on the subscription where the endpoint is deployed ## View costs
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2. In the foll
For more information, see the documentation here: * [steps in SDK v1](/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py&preserve-view=true)
-* [Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2 (Preview)](how-to-create-component-pipeline-python.md)
+* [Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2](how-to-create-component-pipeline-python.md)
* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/using-pipelines/image-classification.ipynb) * [OutputDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py&preserve-view=true) * [`mldesigner`](https://pypi.org/project/mldesigner/)
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
In this tutorial, you accomplish the following tasks:
> [!TIP] > If you're looking for a template (Microsoft Bicep or Hashicorp Terraform) that demonstrates how to create a secure workspace, see [Tutorial - Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
-After completing this tutorial, you will have the following architecture:
+After completing this tutorial, you'll have the following architecture:
* An Azure Virtual Network, which contains three subnets: * __Training__: Contains the Azure Machine Learning workspace, dependency services, and resources used for training models.
- * __Scoring__: Contains resources used to deploy models as endpoints.
+ * __Scoring__: For the steps in this tutorial, it isn't used. However if you continue using this workspace for other tutorials, we recommend using this subnet when deploying models to [endpoints](concept-endpoints.md).
* __AzureBastionSubnet__: Used by the Azure Bastion service to securely connect clients to Azure Virtual Machines.
+* An Azure Machine Learning workspace that uses a private endpoint to communicate using the VNet.
+* An Azure Storage Account that uses private endpoints to allow storage services such as blob and file to communicate using the VNet.
+* An Azure Container Registry that uses a private endpoint to communicate using the VNet.
+* Azure Bastion, which allows you to use your browser to securely communicate with the jump box VM inside the VNet.
+* An Azure Virtual Machine that you can remotely connect to and access resources secured inside the VNet.
+* An Azure Machine Learning compute instance and compute cluster.
+
+> [!TIP]
+> The Azure Batch Service shown in the diagram is a back-end service required by the compute clusters and compute instances.
+ ## Prerequisites
-* Familiarity with Azure Virtual Networks and IP networking. If you are not familiar, try the [Fundamentals of computer networking](/training/modules/network-fundamentals/) module.
+* Familiarity with Azure Virtual Networks and IP networking. If you aren't familiar, try the [Fundamentals of computer networking](/training/modules/network-fundamentals/) module.
* While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning v2. ## Create a virtual network
To create a virtual network, use the following steps:
> > If you plan on using a _private endpoint_ to add these services to the VNet, you do not need to select these entries. The steps in this article use a private endpoint for these services, so you do not need to select them when following these steps.
-1. Select __Security__. For __BastionHost__, select __Enable__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you will create inside the VNet in a later step. Use the following values for the remaining fields:
+1. Select __Security__. For __BastionHost__, select __Enable__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you'll create inside the VNet in a later step. Use the following values for the remaining fields:
* __Bastion name__: A unique name for this Bastion instance * __AzureBastionSubnetAddress space__: 172.16.2.0/27
To create a virtual network, use the following steps:
:::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-networking.png" alt-text="UI for storage account networking":::
-1. On the __Create a private endpoint__ form, use the same __subscription__, __resource group__, and __Region__ that you have used for previous resources. Enter a unique __Name__.
+1. On the __Create a private endpoint__ form, use the same __subscription__, __resource group__, and __Region__ that you've used for previous resources. Enter a unique __Name__.
:::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint.png" alt-text="UI to add the file private endpoint":::
To create a virtual network, use the following steps:
1. Select __Review + Create__. Verify that the information is correct, and then select __Create__. > [!TIP]
-> If you plan to use [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md) in your pipeline, it is also required to configure private endpoints target **queue** and **table** sub-resources. ParallelRunStep uses queue and table under the hood for task scheduling and dispatching.
+> If you plan to use a [batch endpoint](concept-endpoints.md) or an Azure Machine Learning pipeline that uses a [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md), you must also configure private endpoints that target the **queue** and **table** sub-resources. ParallelRunStep uses queue and table under the hood for task scheduling and dispatching.
## Create a key vault
Use the following steps to create an Azure Virtual Machine to use as a jump box.
1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide values for the following fields: * __Virtual machine name__: A unique name for the VM.
- * __Username__: The username you will use to log in to the VM.
+ * __Username__: The username you'll use to log in to the VM.
* __Password__: The password for the username. * __Security type__: Standard. * __Image__: Windows 11 Enterprise.
From studio, select __Compute__, __Compute clusters__, and then select the compu
:::image type="content" source="./media/tutorial-create-secure-workspace/compute-instance-stop.png" alt-text="Screenshot of stop button for compute instance"::: ### Stop the jump box
-Once it has been created, select the virtual machine in the Azure portal and then use the __Stop__ button. When you are ready to use it again, use the __Start__ button to start it.
+Once it has been created, select the virtual machine in the Azure portal and then use the __Stop__ button. When you're ready to use it again, use the __Start__ button to start it.
:::image type="content" source="./media/tutorial-create-secure-workspace/virtual-machine-stop.png" alt-text="Screenshot of stop button for the VM":::
To delete all resources created in this tutorial, use the following steps:
1. Enter the resource group name, then select __Delete__. ## Next steps
-Now that you have created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).
+Now that you've created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-attach-compute-targets.md)
-> * [v2 (preview)](../how-to-train-model.md)
+> * [v2 (current version)](../how-to-train-model.md)
Learn how to attach Azure compute resources to your Azure Machine Learning workspace with SDK v1. Then you can use these resources as training and inference [compute targets](../concept-compute-target.md) in your machine learning tasks.
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-keras.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-train-keras.md)
-> * [v2 (preview)](../how-to-train-keras.md)
+> * [v2 (current version)](../how-to-train-keras.md)
In this article, learn how to run your Keras training scripts with Azure Machine Learning.
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-train-pytorch.md)
-> * [v2 (preview)](../how-to-train-pytorch.md)
+> * [v2 (current version)](../how-to-train-pytorch.md)
In this article, learn how to run your [PyTorch](https://pytorch.org/) training scripts at enterprise scale using Azure Machine Learning.
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-scikit-learn.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-train-scikit-learn.md)
-> * [v2 (preview)](../how-to-train-scikit-learn.md)
+> * [v2 (current version)](../how-to-train-scikit-learn.md)
In this article, learn how to run your scikit-learn training scripts with Azure Machine Learning.
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-tensorflow.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-train-tensorflow.md)
-> * [v2 (preview)](../how-to-train-tensorflow.md)
+> * [v2 (current version)](../how-to-train-tensorflow.md)
In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using Azure Machine Learning.
network-function-manager Delete Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/delete-functions.md
# Tutorial: Delete network functions on Azure Stack Edge
-In this tutorial, you learn how to delete Azure Network Function Manager - Network Function and Azure Network Function Manager - Device using the Azure portal.
+In this tutorial, you learn how to delete a network function and a device in Azure Network Function Manager by using the Azure portal.
-## Delete network function
+## Delete a network function
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to the **Azure Network Manager - Devices** resource in which you have deployed a network function and select **Network Function**.
- ![Screenshot that shows how to select a network function.](media/delete-functions/select-network-function.png)
+1. Go to the **Azure Network Manager - Devices** resource in which you've deployed a network function. Under **Network Function**, select the function that you want to delete.
+
+ ![Screenshot that shows how to select a network function.](media/delete-functions/select-network-function.png)
-1. Select **Delete** Network Function.
- ![Screenshot that shows how to delete a network function.](media/delete-functions/delete-network-function.png)
+1. Select **Delete**.
+
+ ![Screenshot that shows how to delete a network function.](media/delete-functions/delete-network-function.png)
- > [!NOTE]
- > Incase you encounter following error while deleting the network function.
- > *Failed to delete resource. Error: The client 'user@mail.com' with object id 'xxxx-9999-xxxx-9999-xxxx' has permission to perform action 'Microsoft.HybridNetwork/networkFunctions/delete' on scope 'mrg-ResourceGroup/providers/Microsoft.HybridNetwork/networkFunctions/NetworkFunction01'; however, the access is denied because of the deny assignment with name 'System deny assignment created by managed application /subscriptions/xxxx-0000-xxxx-0000-xxxx/resourceGroups/ResourceGroup/providers/Microsoft.Solutions/applications/managedApplication01' and Id 'xxxxxxxxxxxxxxxxxxxxxx' at scope '/subscriptions/xxxx-0000-xxxx-0000-xxxx/resourceGroups/mrg-ResourceGroup and refer **Step 4**.*
- > ![Screenshot that shows an error for failed to delete.](media/delete-functions/failed-to-delete.png)
+1. You might encounter a "Failed to delete resource" error while you're deleting the network function.
+
+ ![Screenshot that shows an error for failure to delete a resource.](media/delete-functions/failed-to-delete.png)
-1. Navigate to search box within the **Azure portal** and search for the **Managed Application** which was seen as an exception in **Step 3**.
- ![Screenshot that shows a managed application.](media/delete-functions/managed-application.png)
+ If so, use the search box in the Azure portal to search for the managed application that the error mentioned. When the managed application appears under **Resources**, select it.
+
+ ![Screenshot that shows searching for a managed application.](media/delete-functions/managed-application.png)
-1. Select **Delete** Managed Application
- ![Screenshot that shows how to delete a managed application.](media/delete-functions/delete-managed-application.png)
+ In the details for the managed application, select **Delete**.
+
+ ![Screenshot that shows the button for deleting a managed application.](media/delete-functions/delete-managed-application.png)
-## Delete network function manager - device
+## Delete a device
- > [!IMPORTANT]
- > Ensure that all the Network Function deployed within the Azure Network Function Manager is deleted before proceeding to the next step.
- >
+> [!IMPORTANT]
+> Ensure that all the network functions deployed within Azure Network Function Manager are deleted before you delete a device.
-1. Navigate to the **Azure Network Manager - Devices** resource in which you have deleted a network function and select **Delete** Azure Network Function Manager - Device
- ![Screenshot that shows how to delete a network function manager.](media/delete-functions/delete-network-function-manager.png)
+Go to the **Azure Network Manager - Devices** resource in which you've deleted a network function, and then select **Delete**.
+
+![Screenshot that shows the button for deleting a device.](media/delete-functions/delete-network-function-manager.png)
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-create-cluster.md
az feature register --namespace Microsoft.RedHatOpenShift --name preview
> [!NOTE] > ARO pull secret does not change the cost of the RH OpenShift license for ARO.
-A Red Hat pull secret enables your cluster to access Red Hat container registries along with additional content. This step is optional but recommended.
+A Red Hat pull secret enables your cluster to access Red Hat container registries along with additional content. This step is optional but recommended. Note that the field `cloud.openshift.com` will be removed from your secret even if your pull secret contains that field. This field enables an extra monitoring feature that sends data to Red Hat and is therefore disabled by default. To enable this feature, see https://docs.openshift.com/container-platform/4.11/support/remote_health_monitoring/enabling-remote-health-reporting.html.
1. [Navigate to your Red Hat OpenShift cluster manager portal](https://console.redhat.com/openshift/install/azure/aro-provisioned) and log in.
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
Using AOGS, we capture the Aqua broadcast when the satellite is within line of s
In this tutorial, we will follow these steps to collect and process Aqua data: > [!div class="checklist"]
-> * [Schedule a contact and collect Aqua direct broadcast data using AOGS](#step-1-schedule-a-contact-and-collect-aqua-direct-broadcast-data-using-aogs).
-> * [Process Aqua direct broadcast data using RT-STPS](#step-2-process-aqua-direct-broadcast-data-using-rt-stps).
-> * [Create higher level products using IPOPP](#step-3-create-higher-level-products-using-ipopp).
+> * [Use AOGS to schedule a contact and collect Aqua data](#step-1-use-aogs-to-schedule-a-contact-and-collect-aqua-data).
+> * [Install NASA DRL tools](#step-2-install-nasa-drl-tools).
+> * [Create Level-0 product using RT-STPS](#step-3-create-level-0-product-using-rt-stps).
+> * [Create higher level products using IPOPP](#step-4-create-higher-level-products-using-ipopp).
-Optional setup steps for capturing the ground station telemetry are included in the [Appendix](#appendix).
+Optional setup steps for capturing the ground station telemetry are included in the guide on [receiving real-time telemetry from the ground stations](receive-real-time-telemetry.md).
-## Step 1: Schedule a contact and collect Aqua direct broadcast data using AOGS
+## Step 1: Use AOGS to schedule a contact and collect Aqua data
-Follow the steps listed in [Tutorial: Downlink data from NASA's AQUA public satellite](downlink-aqua.md) to schedule a contact with Aqua using AOGS and collect the direct broadcast data on an Azure VM for further processing.
+Execute the steps listed in [Tutorial: Downlink data from NASA's AQUA public satellite](downlink-aqua.md).
+
+The above tutorial provides a walkthrough for scheduling a contact with Aqua and collecting the direct broadcast data on an Azure VM.
> [!NOTE] > In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](downlink-aqua.md#prepare-your-virtual-machine-vm-and-network-to-receive-aqua-data), use the following values:
Follow the steps listed in [Tutorial: Downlink data from NASA's AQUA public sate
> - **Size:** Standard_D8s_v5 or higher > - **IP Address:** Ensure that the VM has at least one standard public IP address
-At the end of this step, you should have the raw direct broadcast saved as ```.bin``` files under the ```~/aquadata``` folder on the receiver-vm.
-
-## Step 2: Process Aqua direct broadcast data using RT-STPS
-
-The [Real-time Software Telemetry Processing System](https://directreadout.sci.gsfc.nasa.gov/?id=dspContent&cid=69)(RT-STPS) is a NASA-provided software for processing Aqua direct broadcast data. The steps below cover installation of RT-STPS Verson 6.0 on the receiver-vm, and production of Level-0 Production Data Set(PDS) files from the data collected in the previous step.
-
-Register with the [NASA DRL](https://directreadout.sci.gsfc.nasa.gov/) to download the RT-STPS installation package.
-
-Transfer the installation binaries to the receiver-vm:
-
-```console
-ssh azureuser@receiver-vm 'mkdir -p ~/software'
-scp RT-STPS_6.0*.tar.gz azureuser@receiver-vm:~/software/.
-```
+At the end of this step, you should have the raw direct broadcast data saved as ```.bin``` files under the ```~/aquadata``` folder on the ```receiver-vm```.
-Alternatively, you can upload your installation binaries to a container in Azure Storage and download them to the receiver-vm using [AzCopy](../storage/common/storage-use-azcopy-v10.md)
+## Step 2: Install NASA DRL tools
+> [!NOTE]
+> Due to potential resource contention, DRL recommends installing RT-STPS and IPOPP on separate machines. But for this tutorial, we install both tools on the ```receiver-vm``` because we don't run them at the same time. For production workloads, please follow sizing and isolation recommendations in the user guides available on the DRL website.
-### Install rt-stps
+### Increase OS disk size on the receiver-vm
-```console
-sudo yum install java-11-openjdk
-cd ~/software
-tar -xzvf RT-STPS_6.0.tar.gz
-cd ./rt-stps
-./install.sh
-```
+The default disk space allocated to the OS disk of an Azure VM is not sufficient for installing NASA DRL tools. Follow the steps below to increase the size of the OS disk on the ```receiver-vm``` to 1TB.
-### Install rt-stps patches
+### [Portal](#tab/portal2)
-```console
-cd ~/software
-tar -xzvf RT-STPS_6.0_PATCH_1.tar.gz
-tar -xzvf RT-STPS_6.0_PATCH_2.tar.gz
-tar -xzvf RT-STPS_6.0_PATCH_3.tar.gz
-cd ./rt-stps
-./install.sh
-```
-
-### Validate install
+1. Open the [portal](https://portal.azure.com).
+1. Navigate to your virtual machine.
+1. On the **Overview** page, select **Stop**.
+1. On the **Disks** page, select the OS disk.
+1. On the **Disk** pane, navigate to the **Size + performance** page.
+1. Select **Premium SSD (locally redundant storage)** from the **Disk SKU** dropdown.
+1. Select the **P30** Disk Tier (1024GB).
+1. Select **Save**.
+1. Navigate back to the **Virtual Machine** pane.
+1. On the **Overview** page, select **Start**.
+
+On the ```receiver-vm```, verify that the root partition now has ~1TB available:
-```console
-cd ~/software
-tar -xzvf RT-STPS_6.0_testdata.tar.gz
-cd ~/software/rt-stps
-rm ./data/*
-./bin/batch.sh config/npp.xml ./testdata/input/rt-stps_npp_testdata.dat
-# Verify that files exist
-ls -la ./data
+```bash
+lsblk -o NAME,HCTL,SIZE,MOUNTPOINT
```
-### Create Level-0 product
-
-Run rt-stps in batch mode to process the ```.bin``` file collected in Step 1
+This should show ~1TB allocated to the root ```/``` mountpoint.
```console
-cd ~/software/rt-stps
-./bin/batch.sh ./config/aqua.xml ~/aquadata/raw-2022-05-29T0957-0700.bin
+NAME HCTL SIZE MOUNTPOINT
+sda 0:0:0:0 1T
├─sda1 500M /boot
├─sda2 1023G /
├─sda14 4M
└─sda15 495M /boot/efi
```
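If you'd rather script the resize than click through the portal, the following is a rough sketch using the `azure-mgmt-compute` package; the resource group and VM name are placeholders, and the VM must be deallocated before its OS disk can be resized:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DiskSku

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, vm_name = "<resource-group>", "receiver-vm"

# The OS disk can only be resized while the VM is deallocated.
compute.virtual_machines.begin_deallocate(rg, vm_name).result()

vm = compute.virtual_machines.get(rg, vm_name)
disk_name = vm.storage_profile.os_disk.name
disk = compute.disks.get(rg, disk_name)
disk.sku = DiskSku(name="Premium_LRS")  # Premium SSD, locally redundant
disk.disk_size_gb = 1024                # P30 tier
compute.disks.begin_create_or_update(rg, disk_name, disk).result()

compute.virtual_machines.begin_start(rg, vm_name).result()
```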
-This command produces Level-0 Production Data Set (```.pds```) files under the ```~/software/rt-stps/data``` directory.
-
-## Step 3: Create higher level products using IPOPP
-
-[International Planetary Observation Processing Package (IPOPP)](https://directreadout.sci.gsfc.nasa.gov/?id=dspContent&cid=68) is another NASA-provided software to process Aqua Level-0 data into higher level products.
-In the steps below, you'll process the Level-0 data generated in the previous step using IPOPP.
-
-> [!NOTE]
-> Due to potential resource contention, DRL recommends installing RT-STPS and IPOPP on separate machines. But for this tutorial, we install both on the our receiver-vm because we don't run them at the same time. For production workloads, please follow sizing and isolation recommendations in the user guides available on the DRL website.
-
-### Attach a data disk to the receiver-vm
-
-IPOPP installation and subsequent generation of products requires more disk space and I/O throughput than what is available on the receiver-vm by default.
-To provide more disk space and throughput, attach a 1TB premium data disk to the receiver-vm by following steps in [Attach a data disk to a Linux VM](../virtual-machines/linux/attach-disk-portal.md)
-
-### Create a file system on the data disk
-
-```console
-lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
-sudo parted /dev/sdb --script mklabel gpt mkpart xfspart xfs 0% 100%
-sudo mkfs.xfs /dev/sdb1
-sudo partprobe /dev/sdb1
-sudo mkdir /datadrive
-sudo mount /dev/sdb1 /datadrive
-sudo chown azureuser:azureuser /datadrive
-```
-> [!NOTE]
-> To ensure that the datadrive is mounted automatically after every reboot, please refer to [Attach a data disk to a Linux VM](../virtual-machines/linux/attach-disk-portal.md#mount-the-disk) for instructions on how to add an entry to ```/etc/fstab```
-- ### Install Desktop and VNC Server-
-IPOPP installation requires using a browser to sign on to the DRL website to download the installation script. This script must be run from the same host that it was downloaded to. The subsequent IPOPP configuration also requires a GUI. Therefore, we install a full desktop and a vnc server to enable running GUI applications on the receiver-vm.
-
-```console
+Using NASA DRL tools requires support for running GUI applications. To enable this, install desktop tools and vncserver on the `receiver-vm`:
+```bash
sudo yum install tigervnc-server
sudo yum groups install "GNOME Desktop"
```
Start the VNC server:
-```console
+```bash
vncserver
```
-Enter a password when prompted.
-
-Port forward the vncserver port (5901) over ssh:
+Enter a password when prompted.
-```console
+### Remotely access the VM Desktop
+Port forward the vncserver port (5901) over SSH to your local machine:
+```bash
ssh -L 5901:localhost:5901 azureuser@receiver-vm
```
+1. On your local machine, download and install [TightVNC Viewer](https://www.tightvnc.com/download.php).
+1. Start the TightVNC Viewer and connect to ```localhost:5901```.
+1. Enter the vncserver password you entered in the previous step.
+1. The VNC viewer window should show the GNOME Desktop running on the VM.
-Download the [TightVNC](https://www.tightvnc.com/download.php) viewer and connect to ```localhost:5901``` and enter the vncserver password entered in the previous step. You should see the GNOME desktop running on the VM.
-
-Start a new terminal, and start the Firefox browser
-
-```console
-firefox
-```
+### Download RT-STPS and IPOPP installation files
+From the GNOME Desktop, go to **Applications** > **Internet** > **Firefox** to start a browser.
-[Log on the DRL website](https://directreadout.sci.gsfc.nasa.gov/loginDRL.cfm?cid=320&type=software) and download the downloader script.
Log on to the [NASA DRL](https://directreadout.sci.gsfc.nasa.gov/?id=dspContent&cid=325&type=software) website and download the **RT-STPS** installation files and the **IPOPP downloader script** under software downloads. The downloaded files land under ```~/Downloads```.
-Run the downloader script from the ```/datadrive/ipopp``` directory because
-the home directory isn't large enough to hold the downloaded content.
+Alternatively, you can download the installation files to your local machine first and then upload them to a container in Azure Storage. Then use [AzCopy](../storage/common/storage-use-azcopy-v10.md) to download them to your ```receiver-vm```.
-```console
-INSTALL_DIR=/datadrive/ipopp
-cp ~/Downloads/downloader_DRL-IPOPP_4.1.sh $INSTALL_DIR
-cd $INSTALL_DIR
-./downloader_DRL-IPOPP_4.1.sh
+### Install RT-STPS
+```bash
+tar -xvzf ~/Downloads/RT-STPS_7.0.tar.gz --directory ~/
+tar -xvzf ~/Downloads/RT-STPS_7.0_testdata.tar.gz --directory ~/
+cd ~/rt-stps
+./install.sh
```
-This script will download \~35G and will take 1 hour or more.
-
-Alternatively, you can upload your installation binaries to a container in Azure Storage and download them to the receiver-vm using [AzCopy](../storage/common/storage-use-azcopy-v10.md)
+Validate your RT-STPS install by processing the test data supplied with the installation:
+```bash
+cd ~/rt-stps
+./bin/batch.sh config/jpss1.xml ./testdata/input/rt-stps_jpss1_testdata.dat
+```
+Verify that output files exist in the data folder:
+```bash
+ls -la ~/data/
+```
+This completes the RT-STPS installation.
### Install IPOPP
-```console
-tar -xvzf DRL-IPOPP_4.1.tar.gz --directory $INSTALL_DIR
-chmod -R 755 $INSTALL_DIR/IPOPP
-$INSTALL_DIR/IPOPP/install_ipopp.sh -installdir $INSTALL_DIR/drl -datadir $INSTALL_DIR/data -ingestdir $INSTALL_DIR/data/ingest
+Run the IPOPP downloader script to download the IPOPP installation files.
+```bash
+cd ~/Downloads
+./downloader_DRL-IPOPP_4.1.sh
+tar -xvzf ~/Downloads/DRL-IPOPP_4.1.tar.gz --directory ~/
+cd ~/IPOPP
+./install_ipopp.sh
```
-### Install IPOPP patches
+### Configure and start IPOPP services
+IPOPP services are configured using its Dashboard GUI.
-```console
-$INSTALL_DIR/drl/tools/install_patch.sh $PATCH_FILE_NAME
-```
-### Start IPOPP services
-
-```console
-$INSTALL_DIR/drl/tools/services.sh start
-```
-### Verify service status
+[Go to the VM Desktop](#remotely-access-the-vm-desktop) and start a new terminal under **Applications** > **Utilities** > **Terminal**.
+Start the IPOPP dashboard from the terminal:
+```bash
+~/drl/tools/dashboard.sh
```
-$INSTALL_DIR/drl/tools/services.sh status
-$INSTALL_DIR/drl/tools/spa_services.sh status
-```
+IPOPP starts in process monitoring mode. Switch to **Configuration Mode** by using the menu option.
-### Configure IPOPP services using its dashboard
+Enable the following under the **EOS** tab:
+* gbad
+* MODISL1DB l0l1aqua
+* MODISL1DB l1atob
+* IMAPP
-Before we can create Level-1 and Level-2 products from the Level-0 PDS files generated by rt-stps, we need to configure IPOPP. IPOPP must be configured with its dashboard GUI. To start the dashboard, first port forward the vncserver port (5901) over ssh:
+Switch back to **Process Monitoring** mode using the menu option.
-```console
-ssh -L 5901:localhost:5901 azureuser@receiver-vm
+Start IPOPP:
+```bash
+~/drl/tools/services.sh start
+~/drl/tools/services.sh status
```
+This completes the IPOPP installation and configuration.
-Using the TightVNC client, connect to localhost:5901 and enter the vncserver password. On the virtual machine desktop, open a new terminal and start the dashboard:
+## Step 3: Create Level-0 product using RT-STPS
-```console
-cd /datadrive/ipopp
-./drl/tools/dashboard.sh & 
+Run rt-stps in batch mode to process the ```.bin``` file collected in Step 1:
+```bash
+cd ~/rt-stps
+./bin/batch.sh ./config/aqua.xml ~/aquadata/raw-2022-05-29T0957-0700.bin
```
+This command produces Level-0 Production Data Set (```.pds```) files under the ```~/rt-stps/data``` directory.
-1. IPOPP Dashboard starts in process monitoring mode. Switch to **Configuration Mode** by using the menu option. 
-
-2. Aqua related products can be configured from EOS tab in configuration mode. Disable all other tabs. We're interested in the MODIS Aerosol L2 (MOD04) product, which is produced by IMAPP SPA. Therefore, enable the following in the **EOS** tab: 
-
- - gbad 
-
- - MODISL1DB l0l1aqua 
-
- - MODISL1DB l1atob 
-
- - IMAPP 
-
-3. After updating the configuration, switch back to **Process Monitoring** mode using the menu. All tiles will be in OFF mode initially. 
-
-4. When prompted, save changes to the configuration.  
-
-5. Click **Start Services** in the action menu. Note that **Start Services** is only enabled in process monitoring mode.  
-
-6. Click **Check IPOPP Services** in action menu to validate.
+## Step 4: Create higher level products using IPOPP
## Ingest data for processing
-Copy the Level-0 PDS files generated by RT-STPS to the IPOPP ingest directory for further processing.
+Copy the PDS files generated by RT-STPS in the previous step to the IPOPP ingest directory for further processing.
-```console
-cp ~/software/rt-stps/data/* /datadrive/ipopp/drl/data/dsm/ingest/.
+```bash
+cp ~/rt-stps/data/* ~/drl/data/dsm/ingest/.
```
Run IPOPP ingest to create the products configured in the dashboard:
-```
-/datadrive/ipopp/drl/tools/ingest_ipopp.sh
+```bash
+~/drl/tools/ingest_ipopp.sh
```
You can watch the progress in the dashboard:
+```bash
+~/drl/tools/dashboard.sh
```
-/datadrive/ipopp/drl/tools/dashboard.sh
+IPOPP will produce output products in the following directory:
+```bash
+cd ~/drl/data/pub/gsfcdata/aqua/modis/
```
-IPOPP will produce output products in the following directories:
-
-```
-/datadrive/ipopp/drl/data/pub/gsfcdata/aqua/modis/level[0,1,2] 
-```
-
-## Appendix
-
-### Capture ground station telemetry
-
-Follow steps here to [receive real-time telemetry from the ground stations](receive-real-time-telemetry.md).
- ## Next steps For an end-to-end implementation that involves extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics, see:
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/troubleshoot.md
This document contains information about troubleshooting your solutions that use
## Unable to create an Elastic resource
-Elastic integration with Azure can only be set up by users who have *Owner* access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md).
+Elastic integration with Azure can only be set up by users who have *Owner* or *Contributor* access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md).
## Logs not being emitted to Elastic
In the Elastic site, open a support request.
## Next steps
-Learn about [managing your instance](manage.md) of Elastic.
+Learn about [managing your instance](manage.md) of Elastic.
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
The following table provides a list of high-level features and capabilities comp
| Support for PgLogical extension | No | Yes | | Support logical replication with HA | N/A | [Limited](concepts-high-availability.md#high-availabilitylimitations) | | **Disaster Recovery** | | |
-| Cross region DR | Using read replicas, geo-redundant backup | Geo-redundant backup (Preview) in select regions|
+| Cross region DR | Using read replicas, geo-redundant backup | Geo-redundant backup (in [selected regions](overview.md#azure-regions)) |
| DR using replica | Using async physical replication | N/A | | Automatic failover | No | N/A | | Can use the same r/w endpoint | No | N/A |
The following table provides a list of high-level features and capabilities comp
| PITR capability to any time within the retention period | Yes | Yes | Ability to restore on a different zone | N/A | Yes | | Ability to restore to a different VNET | No | Yes |
-| Ability to restore to a different region | Yes (Geo-redundant) | Yes (in Preview in [selected regions](overview.md#azure-regions)) |
+| Ability to restore to a different region | Yes (Geo-redundant) | Yes (in [selected regions](overview.md#azure-regions)) |
| Ability to restore a deleted server | Limited via API | Limited via support ticket | | **Read Replica** | | | | Support for read replicas | Yes | No |
private-5g-core Key Components Of A Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/key-components-of-a-private-mobile-network.md
The following diagram shows the key resources you'll use to manage your private
## Next steps
+- [Learn more about the design requirements for deploying a private mobile network](private-mobile-network-design-requirements.md)
- [Learn more about the prerequisites for deploying a private mobile network](complete-private-mobile-network-prerequisites.md)
private-5g-core Private 5G Core Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-5g-core-overview.md
Azure Private 5G Core is integrated with Log Analytics in Azure Monitor, as desc
## Next steps - [Learn more about the key components of a private mobile network](key-components-of-a-private-mobile-network.md)
+- [Learn more about the design requirements for deploying a private mobile network](private-mobile-network-design-requirements.md)
- [Learn more about the prerequisites for deploying a private mobile network](complete-private-mobile-network-prerequisites.md)
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
+
+ Title: Private mobile network design requirements
+
+description: Learn how to design a private mobile network for Azure Private 5G Core Preview.
+Last updated: 10/25/2022
+# Private mobile network design requirements
+
+This article will help you design and prepare for implementing a private 4G or 5G network based on Azure Private 5G Core technology. It aims to provide an understanding of how these networks are constructed and the decisions that you'll need to make as you plan your network. It's intended for system integrators and other advanced partners who have a good understanding of enterprise IP networking and a grounding in Azure fundamentals.
+
+## Azure Private MEC and Azure Private 5G Core
+
+[Azure private multi-access edge compute (MEC)](../private-multi-access-edge-compute-mec/overview.md) is a solution that combines Microsoft compute, networking, and application services onto a deployment at the enterprise premises (edge). These deployments are managed centrally from the cloud. Azure Private 5G Core is an Azure service within Azure private MEC that provides 4G and 5G core network functions at the enterprise edge. At the enterprise edge site, devices attach across a cellular radio access network (RAN) and are connected via the Azure Private 5G Core service to upstream networks, applications, and resources. Optionally, devices may leverage the local compute capability provided by Azure private MEC to process data streams at very low latency, all under the control of the enterprise.
++
+## Requirements for a private mobile network
+
+The following capabilities must be present to allow user equipment (UEs) to attach to a private cellular network:
+
+- The UE must be compatible with the protocol and the wireless spectrum band used by the radio access network (RAN).
+- The UE must contain a subscriber identity module (SIM). This is a cryptographic element that stores the identity of the device.
+- There must be a RAN, sending and receiving the cellular signal, to all parts of the enterprise site that contain UEs needing service.
+- A packet core instance connected to the RAN and to an upstream network is required. The packet core is responsible for authenticating the UE's SIMs as they connect across the RAN and request service from the network. It applies policy to the resulting data flows to and from the UEs, for example, to set a quality of service.
+- The RAN, packet core, and upstream network infrastructure must be connected via Ethernet so that they can pass IP traffic to one another.
+
+## Designing a private mobile network
+
+The following sections describe elements of the network you'll need to consider and the design decisions you'll need to make in preparation for deploying the network.
+
+### Subnets and IP addresses
+
+You may have existing IP networks at the enterprise site that the private cellular network will have to integrate with. This might mean, for example:
+
+- Selecting IP subnets and IP addresses for the Azure Private 5G Core that match existing subnets without clashing addresses.
+- Segregating the new network via IP routers or using the private RFC1918 address space for subnets.
+- Assigning a special pool of IP addresses specifically for use by UEs when they attach to the network.
+- Using network address and port translation (NAPT), either on the packet core itself, or on an upstream network device such as a border router.
+- Optimizing the network for performance by choosing a maximum transmission unit (MTU) that minimizes fragmentation.
+
+You'll need to document the IPv4 subnets that will be used for the deployment and agree on the IP addresses to use for each element in the solution, as well as on the IP addresses that will be allocated to UEs when they attach. You'll need to deploy (or configure existing) routers and firewalls at the enterprise site to permit traffic. You should also agree how and where in the network any NAPT or MTU changes are required and plan the associated router/firewall configuration. For more information, see [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
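As a small illustration of the subnet-clash check mentioned above, Python's standard `ipaddress` module can confirm that a candidate UE address pool doesn't overlap the enterprise's existing subnets; the ranges below are examples only:

```python
import ipaddress

# Hypothetical existing enterprise subnets and a candidate UE IP pool.
existing_subnets = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
]
candidate_ue_pool = ipaddress.ip_network("10.2.0.0/24")

clashes = [net for net in existing_subnets if candidate_ue_pool.overlaps(net)]
if clashes:
    print(f"Candidate pool overlaps existing subnets: {clashes}")
else:
    print(f"{candidate_ue_pool} doesn't clash with any existing subnet")
```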
+
+### Network access
+
+Your design must reflect the enterprise's rules on what networks and assets should be reachable by the RAN and UEs on the private 5G network. For example, they might be permitted to access local Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), the internet, or Azure, but not a factory operations local area network (LAN). You may need to arrange for remote access to the network so that you can troubleshoot issues without requiring a site visit. You also need to consider how the enterprise site will be connected to upstream networks such as Azure, for management and/or for access to other resources and applications outside of the enterprise site.
+
+You'll need to agree with the enterprise team which IP subnets and addresses will be allowed to communicate with each other. Then, create a routing plan and/or access control list (ACL) configuration that implements this agreement on the local IP infrastructure. You may also use virtual local area networks (VLANs) to partition elements at layer 2, configuring your switch fabric to assign connected ports to specific VLANs (for example, to put the Azure Stack Edge port used for RAN access into the same VLAN as the RAN units connected to the Ethernet switch). You should also agree with the enterprise to set up an access mechanism, such as a virtual private network (VPN), that allows your support personnel to remotely connect to the management interface of each element in the solution. You'll also need an IP link between Azure Private 5G Core and Azure for management and telemetry.
+
+### RAN compliance
+
+The RAN that you'll use to broadcast the signal across the enterprise site must comply with local regulations. For example, this could mean:
+
+- The RAN units have completed the process of homologation and received regulatory approval for their use on a certain frequency band in a country.
+- You have received permission for the RAN to broadcast using spectrum in a certain location, for example, by grant from a telecom operator, regulatory authority or via a technological solution such as a Spectrum Access System (SAS).
+- The RAN units in a site have access to high-precision timing sources, such as Precision Time Protocol (PTP) and GPS location services.
+
+You should ask your RAN partner for the countries and frequency bands for which the RAN is approved. You may find that you'll need to use multiple RAN partners to cover the countries in which you provide your solution. Although the RAN, UE and packet core all communicate using standard protocols, Microsoft recommends that you perform interoperability testing for the specific 4G Long-Term Evolution (LTE) or 5G standalone (SA) protocol between Azure Private 5G Core, UEs and the RAN prior to any deployment at an enterprise customer.
+
+Your RAN will transmit a Public Land Mobile Network Identity (PLMN ID) to all UEs on the frequency band it is configured to use. You should define the PLMN ID and confirm your access to spectrum. In some countries, spectrum must be obtained from the national regulator or incumbent telecommunications operator. For example, if you're using the band 48 Citizens Broadband Radio Service (CBRS) spectrum, you may need to work with your RAN partner to deploy a Spectrum Access System (SAS) domain proxy on the enterprise site so that the RAN can continuously check that it is authorized to broadcast.
+
+### Signal coverage
+
+The UEs must be able to communicate with the RAN from any location at the site. This means that the signals must propagate effectively in the environment, including accounting for obstructions and equipment, to support UEs moving around the site (for example, between indoor and outdoor areas).
+
+You should perform a site survey with your RAN partner and the enterprise to make sure that the coverage is adequate. Make sure that you understand the RAN units' capabilities in different environments and any limits (for example, on the number of attached UEs that a single unit can support). If your UEs are going to move around the site, you should also confirm that the RAN supports X2 (4G) or Xn (5G) handover, which allows for the UE to transition seamlessly between the coverage provided by two RAN units. Note that UEs cannot use these handover techniques to roam between a private enterprise network and the public cellular network offered by a telecommunications operator.
+
+### SIMs
+
+Every UE must present an identity to the network, encoded in a subscriber identity module (SIM). SIMs are available in different physical form factors as well as in software-only format (eSIM). The data encoded on the SIM must match the configuration of the RAN and of the provisioned identity data in the Azure Private 5G Core.
+
+Obtain SIMs in factors compatible with the UEs and programmed with the PLMN ID and keys that you want to use for the deployment. Physical SIMs are widely available on the open market at relatively low cost. If you prefer to use eSIMs, you'll need to deploy the necessary eSIM configuration and provisioning infrastructure so that UEs can configure themselves before they attach to the cellular network. You can use the provisioning data you receive from your SIM partner to provision matching entries in Azure Private 5G Core. Because SIM data must be kept secure, the cryptographic keys used to provision SIMs are not readable in Azure Private 5G Core once set, so you must consider how you'll store them in case you ever need to reprovision the data in Azure Private 5G Core.
+
+### Automation and integration
+
+Being able to build enterprise networks using automation and other programmatic techniques saves time, reduces errors, and produces better customer outcomes. Such techniques also provide a recovery path in the event of a site failure that requires rebuilding the network.
+
+You should adopt a programmatic, *infrastructure as code* approach to your deployments. You can use templates or the Azure REST API to build your deployment using parameters as inputs with values that you have collected during the design phase of the project. You should save provisioning information such as SIM data, switch/router configuration, and network policies in machine-readable format so that, in the event of a failure, you can reapply the configuration in the same way as you originally did. Another best practice to recover from failure is to deploy a spare Azure Stack Edge server to minimize recovery time if the first unit fails; you can then use your saved templates and inputs to quickly recreate the deployment. For more information on deploying a network using templates, refer to [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md).
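For instance, a saved ARM template and its recorded parameters could be reapplied programmatically. This is a sketch only, assuming the `azure-mgmt-resource` package and hypothetical file and resource names; the parameters file is assumed to use the standard `{"name": {"value": ...}}` ARM format:

```python
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import (
    Deployment,
    DeploymentMode,
    DeploymentProperties,
)

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("mobile-network.json") as f:   # saved template
    template = json.load(f)
with open("site-parameters.json") as f:  # recorded deployment inputs
    parameters = json.load(f)

poller = client.deployments.begin_create_or_update(
    "rg-private-5g",
    "site-redeploy",
    Deployment(
        properties=DeploymentProperties(
            template=template, parameters=parameters, mode=DeploymentMode.INCREMENTAL
        )
    ),
)
print(poller.result().properties.provisioning_state)
```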
+
+You must also consider how you'll integrate other Azure products and services with the private enterprise network. These products include [Azure Active Directory](/azure/active-directory/fundamentals/active-directory-whatis) and [role-based access control (RBAC)](/azure/role-based-access-control/overview), where you must consider how tenants, subscriptions and resource permissions will align with the business model that exists between you and the enterprise, as well as your own approach to customer system management. For example, you might use [Azure Blueprints](/azure/governance/blueprints/overview) to set up the subscriptions and resource group model that works best for your organization.
+
+## Next steps
+
+- [Learn more about the key components of a private mobile network](key-components-of-a-private-mobile-network.md)
+- [Learn more about the prerequisites for deploying a private mobile network](complete-private-mobile-network-prerequisites.md)
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
Previously updated : 3/15/2021 Last updated : 10/28/2022
The following tables list the Private Link services and the regions where they'r
|Azure Machine Learning | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Machine Learning.](../machine-learning/how-to-configure-private-link.md) | |Azure Bot Service | All public regions | Supported only on Direct Line App Service extension | GA </br> [Learn how to create a private endpoint for Azure Bot Service](/azure/bot-service/dl-network-isolation-concept) | | Azure Cognitive Services | All public regions<br/>All Government regions | | GA <br/> [Use private endpoints.](../cognitive-services/cognitive-services-virtual-networks.md#use-private-endpoints) |
+| Azure Cognitive Search | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Cognitive Search](/azure/search/service-create-private-endpoint) |
### Analytics |Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--| |Azure Synapse Analytics| All public regions <br/> All Government regions | Supported for Proxy [connection policy](/azure/azure-sql/database/connectivity-architecture#connection-policy) |GA <br/> [Learn how to create a private endpoint for Azure Synapse Analytics.](/azure/azure-sql/database/private-endpoint-overview)|
-|Azure Event Hub | All public regions<br/>All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Event Hub.](../event-hubs/private-link-service.md) |
+|Azure Event Hubs | All public regions<br/>All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Event Hubs.](../event-hubs/private-link-service.md) |
| Azure Monitor <br/>(Log Analytics & Application Insights) | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Monitor.](../azure-monitor/logs/private-link-security.md) | |Azure Data Factory | All public regions<br/> All Government regions<br/>All China regions | Credentials need to be stored in an Azure key vault| GA <br/> [Learn how to create a private endpoint for Azure Data Factory.](../data-factory/data-factory-private-link.md) | |Azure HDInsight | All public regions<br/>All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure HDInsight.](../hdinsight/hdinsight-private-link.md) |
+| Azure Data Explorer | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Data Explorer.](/azure/data-explorer/security-network-private-endpoint) |
+| Azure Stream Analytics | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Stream Analytics.](/azure/stream-analytics/private-endpoints) |
### Compute |Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--|
-|Azure App Configuration | All public regions | | GA </br> [Learn how to create a private endpoint for Azure App Configuration](../azure-app-configuration/concept-private-endpoint.md) |
|Azure-managed Disks | All public regions<br/> All Government regions<br/>All China regions | [Select for known limitations](../virtual-machines/disks-enable-private-links-for-import-export-portal.md#limitations) | GA <br/> [Learn how to create a private endpoint for Azure Managed Disks.](../virtual-machines/disks-enable-private-links-for-import-export-portal.md) | | Azure Batch (batchAccount) | All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) | | Azure Batch (nodeManagement) | [Selected regions](../batch/simplified-compute-node-communication.md#supported-regions) | Supported for [simplified compute node communication](../batch/simplified-compute-node-communication.md) | Preview <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) |
+| Azure Functions | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Functions.](/azure/azure-functions/functions-create-vnet) |
### Containers
The following tables list the Private Link services and the regions where they'r
|Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--| | Azure Key Vault | All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Key Vault.](../key-vault/general/private-link-service.md) |
+|Azure App Configuration | All public regions | | GA </br> [Learn how to create a private endpoint for Azure App Configuration](../azure-app-configuration/concept-private-endpoint.md) |
### Storage |Supported services |Available regions | Other considerations | Status |
The following tables list the Private Link services and the regions where they'r
|Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--| | Azure SignalR | All Public Regions<br/> All China regions<br/> All Government Regions | Supported on Standard Tier or above | GA <br/> [Learn how to create a private endpoint for Azure SignalR.](../azure-signalr/howto-private-endpoints.md) |
-|Azure Web Apps | All public regions<br/> China North 2 & East 2 | Supported with PremiumV2, PremiumV3, or Function Premium plan | GA <br/> [Learn how to create a private endpoint for Azure Web Apps.](./tutorial-private-endpoint-webapp-portal.md) |
+|Azure App Service | All public regions<br/> China North 2 & East 2 | Supported with PremiumV2, PremiumV3, or Function Premium plan | GA <br/> [Learn how to create a private endpoint for Azure App Service.](/azure/app-service/networking/private-endpoint) |
|Azure Search | All public regions <br/> All Government regions | Supported with service in Private Mode | GA <br/> [Learn how to create a private endpoint for Azure Search.](../search/service-create-private-endpoint.md) | |Azure Relay | All public regions | | Preview <br/> [Learn how to create a private endpoint for Azure Relay.](../azure-relay/private-link-service.md) | |Azure Static Web Apps | All public regions | | Preview <br/> [Configure private endpoint in Azure Static Web Apps](../static-web-apps/private-endpoint.md) |
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
Attribute-based access control (ABAC) is an authorization system that defines ac
## What are role assignment conditions?
-[Azure role-based access control (Azure RBAC)](overview.md) is an authorization system that helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. In most cases, Azure RBAC will provide the access management you need by using role definitions and role assignments. However, in some cases you might want to provide more fined-grained access management or simplify the management of hundreds of role assignments.
+[Azure role-based access control (Azure RBAC)](overview.md) is an authorization system that helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. In most cases, Azure RBAC will provide the access management you need by using role definitions and role assignments. However, in some cases you might want to provide more fine-grained access management or simplify the management of hundreds of role assignments.
Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions. A *role assignment condition* is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. A condition filters down permissions granted as a part of the role definition and role assignment. For example, you can add a condition that requires an object to have a specific tag to read the object. You cannot explicitly deny access to specific resources using conditions.
search Query Simple Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-simple-syntax.md
Previously updated : 03/16/2022 Last updated : 10/27/2022 # Simple query syntax in Azure Cognitive Search
Although the simple parser is based on the [Apache Lucene Simple Query Parser](h
## Example (simple syntax)
-This example shows a simple query, distinguished by `"queryType": "simple"` and valid syntax. Although query type is set below, it's the default and can be omitted unless you are reverting from an alternative type. The following example is a search over independent terms, with a requirement that all matching documents include "pool".
+This example shows a simple query, distinguished by `"queryType": "simple"` and valid syntax. Although query type is set below, it's the default and can be omitted unless you're reverting from an alternative type. The following example is a search over independent terms, with a requirement that all matching documents include "pool".
```http POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs/search?api-version=2020-06-30
POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs
The "searchMode" parameter is relevant in this example. Whenever boolean operators are on the query, you should generally set `"searchMode=all"` to ensure that *all* of the criteria is matched. Otherwise, you can use the default `"searchMode=any"` that favors recall over precision.
-For additional examples, see [Simple query syntax examples](search-query-simple-examples.md). For details about the query request and parameters, see [Search Documents (REST API)](/rest/api/searchservice/Search-Documents).
+For more examples, see [Simple query syntax examples](search-query-simple-examples.md). For details about the query request and parameters, see [Search Documents (REST API)](/rest/api/searchservice/Search-Documents).
## Keyword search on terms and phrases
Strings passed to the "search" parameter can include terms or phrases in any sup
Depending on your search client, you might need to escape the quotation marks in a phrase search. For example, in Postman in a POST request, a phrase search on `"Roach Motel"` in the request body would be specified as `"\"Roach Motel\""`.
-By default, all strings passed in the "search" parameter undergo lexical analysis. Make sure you understand the tokenization behavior of the analyzer you are using. Often, when query results are unexpected, the reason can be traced to how terms are tokenized at query time. You can [test tokenization on specific strings](/rest/api/searchservice/test-analyzer) to confirm the output.
+By default, all strings passed in the "search" parameter undergo lexical analysis. Make sure you understand the tokenization behavior of the analyzer you're using. Often, when query results are unexpected, the reason can be traced to how terms are tokenized at query time. You can [test tokenization on specific strings](/rest/api/searchservice/test-analyzer) to confirm the output.
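For example, a Test Analyzer request like the following sketch returns the tokens an analyzer emits for a given string. The index name reuses the hotel-rooms-sample index from the example above, and the analyzer choice is an assumption for illustration.

```http
POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/analyze?api-version=2020-06-30
Content-Type: application/json
api-key: {{admin-api-key}}

{
  "text": "Luxury+Hotel pool-side suites",
  "analyzer": "standard.lucene"
}
```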
Any text input with one or more terms is considered a valid starting point for query execution. Azure Cognitive Search will match documents containing any or all of the terms, including any variations found during analysis of the text.
-As straightforward as this sounds, there is one aspect of query execution in Azure Cognitive Search that *might* produce unexpected results, increasing rather than decreasing search results as more terms and operators are added to the input string. Whether this expansion actually occurs depends on the inclusion of a NOT operator, combined with a "searchMode" parameter setting that determines how NOT is interpreted in terms of AND or OR behaviors. For more information, see the NOT operator under [Boolean operators](#boolean-operators).
+As straightforward as this sounds, there's one aspect of query execution in Azure Cognitive Search that *might* produce unexpected results, increasing rather than decreasing search results as more terms and operators are added to the input string. Whether this expansion actually occurs depends on the inclusion of a NOT operator, combined with a "searchMode" parameter setting that determines how NOT is interpreted in terms of AND or OR behaviors. For more information, see the NOT operator under [Boolean operators](#boolean-operators).
## Boolean operators
-You can embed Boolean operators in a query string to improve the precision of a match. In the simple syntax, boolean operators are character-based. Text operators, such as the word AND, are not supported.
+You can embed Boolean operators in a query string to improve the precision of a match. In the simple syntax, boolean operators are character-based. Text operators, such as the word AND, aren't supported.
| Character | Example | Usage | |-- |--|-| | `+` | `pool + ocean` | An AND operation. For example, `pool + ocean` stipulates that a document must contain both terms.| | `|` | `pool | ocean` | An OR operation finds a match when either term is found. In the example, the query engine will return match on documents containing either `pool` or `ocean` or both. Because OR is the default conjunction operator, you could also leave it out, such that `pool ocean` is the equivalent of `pool | ocean`.|
-| `-` | `pool - ocean` | A NOT operation returns matches on documents that exclude the term. </p>To get the expected behavior on a NOT expression, set `"searchMode=all"` on the request. Otherwise, under the default of `"searchMode=any"`, you will get matches on `pool`, plus matches on all documents that do not contain `ocean`, which could be a lot of documents. The "searchMode" parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there is no `+` or `|` operator on the other terms). Using `"searchMode=all"` increases the precision of queries by including fewer results, and by default - will be interpreted as "AND NOT". </p>When deciding on a "searchMode" setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
+| `-` | `pool - ocean` | A NOT operation returns matches on documents that exclude the term. </p>To get the expected behavior on a NOT expression, set `"searchMode=all"` on the request. Otherwise, under the default of `"searchMode=any"`, you'll get matches on `pool`, plus matches on all documents that don't contain `ocean`, which could be a lot of documents. The "searchMode" parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there's no `+` or `|` operator on the other terms). Using `"searchMode=all"` increases the precision of queries by including fewer results, and by default - will be interpreted as "AND NOT". </p>When deciding on a "searchMode" setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
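As a minimal sketch of how these operators combine, the following request (using the hotel-rooms-sample index from the earlier example) matches documents that contain both `pool` and `ocean` but exclude `motel`. Setting `"searchMode": "all"` ensures the NOT clause is interpreted as AND NOT.

```http
POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs/search?api-version=2020-06-30
Content-Type: application/json
api-key: {{api-key}}

{
  "search": "pool + ocean - motel",
  "queryType": "simple",
  "searchMode": "all"
}
```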
<a name="prefix-search"></a>
For "starts with" queries, add a suffix operator (`*`) as the placeholder for th
|-- |--|-| | `*` | `lingui*` will match on "linguistic" or "linguini" | The asterisk (`*`) represents one or more characters of arbitrary length, ignoring case. |
-Similar to filters, a prefix query looks for an exact match. As such, there is no relevance scoring (all results receive a search score of 1.0). Be aware that prefix queries can be slow, especially if the index is large and the prefix consists of a small number of characters. An alternative methodology, such as edge n-gram tokenization, might perform faster. Terms using prefix search can't be longer than 1000 characters.
+Similar to filters, a prefix query looks for an exact match. As such, there's no relevance scoring (all results receive a search score of 1.0). Be aware that prefix queries can be slow, especially if the index is large and the prefix consists of a small number of characters. An alternative methodology, such as edge n-gram tokenization, might perform faster. Terms using prefix search can't be longer than 1000 characters.
Simple syntax supports prefix matching only. For suffix or infix matching against the end or middle of a term, use the [full Lucene syntax for wildcard search](query-lucene-syntax.md#bkmk_wildcard).
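A prefix query uses the same request shape as any other simple query. The following minimal sketch, again assuming the hotel-rooms-sample index, matches documents containing terms that start with `lingui`:

```http
POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs/search?api-version=2020-06-30
Content-Type: application/json
api-key: {{api-key}}

{
  "search": "lingui*",
  "queryType": "simple"
}
```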
In the simple syntax, search operators include these characters: `+ | " ( ) ' \`
If any of these characters are part of a token in the index, escape it by prefixing it with a single backslash (`\`) in the query. For example, suppose you used a custom analyzer for whole term tokenization, and your index contains the string "Luxury+Hotel". To get an exact match on this token, insert an escape character: `search=luxury\+hotel`.
-To make things simple for the more typical cases, there are two exceptions to this rule where escaping is not needed:
+To make things simple for the more typical cases, there are two exceptions to this rule where escaping isn't needed:
+ The NOT operator `-` only needs to be escaped if it's the first character after a whitespace. If the `-` appears in the middle (for example, in `3352CDD0-EF30-4A2E-A512-3B30AF40F3FD`), you can skip escaping.
To make things simple for the more typical cases, there are two exceptions to th
## Encoding unsafe and reserved characters in URLs
-Ensure all unsafe and reserved characters are encoded in a URL. For example, '#' is an unsafe character because it is a fragment/anchor identifier in a URL. The character must be encoded to `%23` if used in a URL. '&' and '=' are examples of reserved characters as they delimit parameters and specify values in Azure Cognitive Search. For more information, see [RFC1738: Uniform Resource Locators (URL)](https://www.ietf.org/rfc/rfc1738.txt).
+Ensure all unsafe and reserved characters are encoded in a URL. For example, '#' is an unsafe character because it's a fragment/anchor identifier in a URL. The character must be encoded to `%23` if used in a URL. '&' and '=' are examples of reserved characters as they delimit parameters and specify values in Azure Cognitive Search. For more information, see [RFC1738: Uniform Resource Locators (URL)](https://www.ietf.org/rfc/rfc1738.txt).
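For example, a hypothetical GET request that searches for `pool & spa #relax` would percent-encode the spaces, ampersand, and hash sign so they aren't interpreted as URL syntax:

```http
GET https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs?api-version=2020-06-30&search=pool%20%26%20spa%20%23relax
api-key: {{api-key}}
```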
Unsafe characters are ``" ` < > # % { } | \ ^ ~ [ ]``. Reserved characters are `; / ? : @ = + &`. ## Special characters
-In some circumstances, you may want to search for a special character, like an '❤' emoji or the '€' sign. In such cases, make sure that the analyzer you use does not filter those characters out. The standard analyzer bypasses many special characters, excluding them from your index.
+Special characters can range from currency symbols like '$' or '€' to emojis. Many analyzers, including the default standard analyzer, will exclude special characters during indexing, which means they won't be represented in your index.
-Analyzers that will tokenize special characters include the "whitespace" analyzer, which takes into consideration any character sequences separated by whitespaces as tokens (so the "❤" string would be considered a token). Also, a language analyzer like the Microsoft English analyzer ("en.microsoft"), would take the "€" string as a token. You can [test an analyzer](/rest/api/searchservice/test-analyzer) to see what tokens it generates for a given query.
+If you need special character representation, you can assign an analyzer that preserves them:
-When using Unicode characters, make sure symbols are properly escaped in the query url (for instance for "❤" would use the escape sequence `%E2%9D%A4+`). Postman does this translation automatically.
++ The "whitespace" analyzer considers any character sequence separated by white spaces as tokens (so the '❤' emoji would be considered a token). +++ A language analyzer, such as the Microsoft English analyzer ("en.microsoft"), would take the '$' or '€' string as a token. +
+For confirmation, you can [test an analyzer](/rest/api/searchservice/test-analyzer) to see what tokens are generated for a given string. As you might expect, a single analyzer might not provide full tokenization for every scenario. A workaround is to create multiple fields that contain the same content, but with different analyzer assignments (for example, "description_en", "description_fr", and so forth for language analyzers).
+
+When using Unicode characters, make sure symbols are properly escaped in the query URL (for instance, '❤' would use the escape sequence `%E2%9D%A4+`). Postman does this translation automatically.
## Precedence (grouping)
You can use parentheses to create subqueries, including operators within the par
## Query size limits
-If your application generates search queries programmatically, we recommend designing it in such a way that it does not generate queries of unbounded size.
+If your application generates search queries programmatically, we recommend designing it in such a way that it doesn't generate queries of unbounded size.
-+ For GET, the length of the URL cannot exceed 8 KB.
++ For GET, the length of the URL can't exceed 8 KB.
+ For POST (and any other request), where the body of the request includes `search` and other parameters such as `filter` and `orderby`, the maximum size is 16 MB.

Additional limits include:

+ The maximum length of the search clause is 100,000 characters.
+ The maximum number of clauses in `search` (expressions separated by AND or OR) is 1024.
+ The maximum search term size is 1000 characters for [prefix search](#prefix-queries).
- + There is also a limit of approximately 32 KB on the size of any individual term in a query.
+ + There's also a limit of approximately 32 KB on the size of any individual term in a query.
For more information on query limits, see [API request limits](search-limits-quotas-capacity.md#api-request-limits). ## Next steps
-If you will be constructing queries programmatically, review [Full text search in Azure Cognitive Search](search-lucene-query-architecture.md) to understand the stages of query processing and the implications of text analysis.
+If you'll be constructing queries programmatically, review [Full text search in Azure Cognitive Search](search-lucene-query-architecture.md) to understand the stages of query processing and the implications of text analysis.
You can also review the following articles to learn more about query construction:
search Search File Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-file-storage-integration.md
Last updated 09/07/2022
> [!IMPORTANT] > Azure Files indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to create the indexer data source.
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure File Storage and makes it searchable in Azure Cognitive Search. Inputs to the indexer are your files in a single share. Output is a search index with searchable content and metadata stored in individual fields.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Files and makes it searchable in Azure Cognitive Search. Inputs to the indexer are your files in a single share. Output is a search index with searchable content and metadata stored in individual fields.
This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to indexing files in Azure Storage. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
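As a sketch of the first step, the following request creates an Azure Files data source using the preview API. The data source name, share name, and folder query are hypothetical placeholders; the connection string must point at the storage account that hosts your file share.

```http
POST https://{{service-name}}.search.windows.net/datasources?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: {{admin-api-key}}

{
  "name": "my-file-datasource",
  "type": "azurefile",
  "credentials": { "connectionString": "DefaultEndpointsProtocol=https;AccountName={storage-account};AccountKey={account-key};" },
  "container": { "name": "my-file-share", "query": "optional/subfolder" }
}
```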
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
Instead of dropping and rebuilding your index, you can use index aliases. A typi
## Create an index alias
-You can create an alias using the preview REST API, the preview SDKs, or through [Visual Studio Code](search-get-started-vs-code.md). An alias consists of the `name` of the alias and the name of the search index that the alias is mapped to. Only one index name can be specified in the `indexes` array.
+You can create an alias using the preview REST API, the preview SDKs, or through the [Azure portal](https://portal.azure.com). An alias consists of the `name` of the alias and the name of the search index that the alias is mapped to. Only one index name can be specified in the `indexes` array.
### [**REST API**](#tab/rest)
POST /aliases?api-version=2021-04-30-preview
} ```
-### [**Visual Studio Code**](#tab/vscode)
+### [**Azure portal**](#tab/portal)
-To create an alias in Visual Studio Code:
-1. Follow the steps in the [Visual Studio Code Quickstart](search-get-started-vs-code.md) to install the [Azure Cognitive Search extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch) and connect to your Azure Subscription.
-1. Navigate to your search service.
-1. Under your search service, right-click on **Aliases** and select **Create new alias**.
-1. Provide the name of your alias and the name of the search index you'd like to map it to and then save the file to create the alias.
+Follow the steps below to create an index alias in the Azure portal.
- ![Create an alias in VS Code](media/search-howto-alias/create-alias-vscode.png "Create an alias in VS Code")
+1. Navigate to your search service in the [Azure portal](https://portal.azure.com).
+1. Find and select **Aliases**.
+1. Select **+ Add Alias**.
+1. Give your index alias a name and select the search index you want to map the alias to. Then, select **Save**.
++
+### [**.NET SDK**](#tab/sdk)
++
+In the preview [.NET SDK](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.1) for Azure Cognitive Search, you can use the following syntax to create an index alias.
+
+```csharp
+// Create a SearchIndexClient
+SearchIndexClient adminClient = new SearchIndexClient(serviceEndpoint, credential);
+
+// Create an index alias
+SearchAlias myAlias = new SearchAlias("my-alias", "hotel-quickstart-index");
+adminClient.CreateAlias(myAlias);
+```
+
+Index aliases are also supported in the latest preview SDKs for [Java](https://search.maven.org/artifact/com.azure/azure-search-documents/11.6.0-beta.1/jar), [Python](https://pypi.org/project/azure-search-documents/11.4.0b1/), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/11.3.0-beta.8).
POST /indexes/my-alias/docs/search?api-version=2021-04-30-preview
If you expect that you may need to make updates to your index definition for your production indexes, you should use an alias rather than the index name for requests in your client-side application. Scenarios that require you to create a new index are outlined under these [rebuild conditions](search-howto-reindex.md#rebuild-conditions). > [!NOTE]
-> You can only use an alias with [document operations](/rest/api/searchservice/document-operations). Aliases can't be used to get or update an index definition, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
+> You can only use an alias with [document operations](/rest/api/searchservice/document-operations) or to get and update an index definition. Aliases can't be used to delete an index, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
## Swap indexes
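To swap indexes, update the alias mapping so that it points at the rebuilt index. As a minimal sketch using the preview REST API, where `hotel-quickstart-index-v2` stands in for a hypothetical rebuilt index:

```http
PUT https://{{service-name}}.search.windows.net/aliases/my-alias?api-version=2021-04-30-preview
Content-Type: application/json
api-key: {{admin-api-key}}

{
  "name": "my-alias",
  "indexes": ["hotel-quickstart-index-v2"]
}
```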
After you make the update to the alias, requests will automatically start to be
## See also
-+ [Drop and rebuild an index in Azure Cognitive Search](search-howto-reindex.md)
++ [Drop and rebuild an index in Azure Cognitive Search](search-howto-reindex.md)
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Maximum number of synonym maps varies by tier. Each rule can have up to 20 expan
## Index alias limits
-Maximum number of [index aliases](search-how-to-alias.md) varies by tier. In all tiers, the maximum number of aliases is the same as the maximum number of indexes.
+Maximum number of [index aliases](search-how-to-alias.md) varies by tier. In all tiers, the maximum number of aliases is double the maximum number of indexes allowed.
| Resource | Free | Basic | S1 | S2 | S3 | S3-HD |L1 | L2 | | -- | --| |-|-|-|-||-|
-| Maximum aliases |3 |5 or 15 |50 |200 |200 |1000 per partition or 3000 per service |10 |10 |
+| Maximum aliases |6 |10 or 30 |100 |400 |400 |2000 per partition or 6000 per service |20 |20 |
## Data limits (AI enrichment)
sentinel Sentinel Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-service-limits.md
This article lists the most common service limits you might encounter as you use
[!INCLUDE [sentinel-service-limits](../../includes/sentinel-limits-machine-learning.md)]
+## Multi workspace limits
++
## Notebook limits

[!INCLUDE [sentinel-service-limits](../../includes/sentinel-limits-notebooks.md)]
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## October 2022
+- [Account enrichment fields removed from Azure AD Identity Protection connector](#account-enrichment-fields-removed-from-azure-ad-identity-protection-connector)
- [Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip) - [Out of the box anomaly detection on the SAP audit log (Preview)](#out-of-the-box-anomaly-detection-on-the-sap-audit-log-preview) - [IoT device entity page (Preview)](#iot-device-entity-page-preview)
+### Account enrichment fields removed from Azure AD Identity Protection connector
+
+As of **September 30, 2022**, alerts coming from the **Azure Active Directory Identity Protection connector** no longer contain the following fields:
+
+- CompromisedEntity
+- ExtendedProperties["User Account"]
+- ExtendedProperties["User NameΓÇ¥]
+
+We are working to adapt Microsoft Sentinel's built-in queries and other operations affected by this change to look up these values in other ways (using the *IdentityInfo* table).
+
+In the meantime, or if you've built any custom queries or rules directly referencing these fields, you'll need another way to get this information. Use the following two-step process to have your queries look up these values in the *IdentityInfo* table:
+
+1. If you haven't already, **enable the UEBA solution** to sync the *IdentityInfo* table with your Azure AD logs. Follow the instructions in [this document](enable-entity-behavior-analytics.md).
+(If you don't intend to use UEBA in general, you can ignore the last instruction about selecting data sources on which to enable entity behavior analytics.)
+
+1. Incorporate the query below in your existing queries or rules to look up this data by joining the *SecurityAlert* table with the *IdentityInfo* table.
+
+ ```kusto
+ SecurityAlert
+ | where TimeGenerated > ago(7d)
+ | where ProductName == "Azure Active Directory Identity Protection"
+ | mv-expand Entity = todynamic(Entities)
+ | where Entity.Type == "account"
+ | extend AadTenantId = tostring(Entity.AadTenantId)
+ | extend AadUserId = tostring(Entity.AadUserId)
+ | join kind=inner (
+ IdentityInfo
+ | where TimeGenerated > ago(14d)
+ | distinct AccountTenantId, AccountObjectId, AccountUPN, AccountDisplayName
+ | extend UserAccount = AccountUPN
+ | extend UserName = AccountDisplayName
+ | where isnotempty(AccountDisplayName) and isnotempty(UserAccount)
+ | project AccountTenantId, AccountObjectId, UserAccount, UserName
+ )
+ on
+ $left.AadTenantId == $right.AccountTenantId,
+ $left.AadUserId == $right.AccountObjectId
+ | extend CompromisedEntity = iff(CompromisedEntity == "N/A" or isempty(CompromisedEntity), UserAccount, CompromisedEntity)
+ | project-away AadTenantId, AadUserId, AccountTenantId, AccountObjectId
+ ```
+
+For a sample query that looks up data to replace the enrichment fields removed from the UEBA UserPeerAnalytics table, see [Heads up: Name fields being removed from UEBA UserPeerAnalytics table](#heads-up-name-fields-being-removed-from-ueba-userpeeranalytics-table).
+ ### Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP) As of **October 24, 2022**, [Microsoft 365 Defender](/microsoft-365/security/defender/) will be integrating [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents. Customers can choose between three levels of integration:
service-bus-messaging Service Bus Dotnet How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md
Title: Get started with Azure Service Bus topics (.NET)
description: This tutorial shows you how to send messages to Azure Service Bus topics and receive messages from topics' subscriptions using the .NET programming language. dotnet Previously updated : 10/11/2021 Last updated : 10/27/2022 ms.devlang: csharp
service-bus-messaging Service Bus Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-dr.md
Title: Azure Service Bus Geo-disaster recovery | Microsoft Docs description: How to use geographical regions to fail over and disaster recovery in Azure Service Bus Previously updated : 04/01/2022 Last updated : 10/27/2022 # Azure Service Bus Geo-disaster recovery Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in some cases even required by industry regulations.
-Azure Service Bus already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter and it implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions when such failures occur. If a Service Bus namespace has been created with the enabled option for [availability zones](../availability-zones/az-overview.md), the outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of the entire facility.
+Azure Service Bus already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter and it implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions when such failures occur. A premium namespace can have two or more messaging units and these messaging units will be spread across multiple failure domains within a datacenter, supporting an all-active Service Bus cluster model.
-The all-active Azure Service Bus cluster model with availability zone support is superior to any on-premises message broker product in terms of resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures can't sufficiently defend against.
+For a premium tier namespace, the outage risk is further spread across three physically separated facilities ([availability zones](#availability-zones)), and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of a datacenter. The all-active Azure Service Bus cluster model within a failure domain along with the availability zone support is superior to any on-premises message broker product in terms of resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures can't sufficiently defend against.
The Service Bus Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this magnitude and abandon a failed Azure region for good and without having to change your application configurations. Abandoning an Azure region will typically involve several services and this feature primarily aims at helping to preserve the integrity of the composite application configuration. The feature is globally available for the Service Bus Premium SKU.
-The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Queues, Topics, Subscriptions, Filters) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move will repoint the chosen alias name for the namespace to the secondary namespace and then break the pairing. The failover is nearly instantaneous once initiated.
+The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (queues, topics, subscriptions, filters) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move will repoint the chosen alias name for the namespace to the secondary namespace and then break the pairing. The failover is nearly instantaneous once initiated.
## Important points to consider -- The feature enables instant continuity of operations with the same configuration, but **doesn't replicate the messages held in queues or topic subscriptions or dead-letter queues**. To preserve queue semantics, such a replication will require not only the replication of message data, but of every state change in the broker. For most Service Bus namespaces, the required replication traffic would far exceed the application traffic and with high-throughput queues, most messages would still replicate to the secondary while they are already being deleted from the primary, causing excessively wasteful traffic. For high-latency replication routes, which applies to many pairings you would choose for Geo-disaster recovery, it might also be impossible for the replication traffic to sustainably keep up with the application traffic due to latency-induced throttling effects.
+- The feature enables instant continuity of operations with the same configuration, but **doesn't replicate the messages held in queues or topic subscriptions or dead-letter queues**. To preserve queue semantics, such a replication will require not only the replication of message data, but of every state change in the broker. For most Service Bus namespaces, the required replication traffic would far exceed the application traffic and with high-throughput queues, most messages would still replicate to the secondary while they're already being deleted from the primary, causing excessively wasteful traffic. For high-latency replication routes, which applies to many pairings you would choose for Geo-disaster recovery, it might also be impossible for the replication traffic to sustainably keep up with the application traffic due to latency-induced throttling effects.
- Azure Active Directory (Azure AD) role-based access control (RBAC) assignments to Service Bus entities in the primary namespace aren't replicated to the secondary namespace. Create role assignments manually in the secondary namespace to secure access to them. -- The following configurations are not replicated.
+- The following configurations aren't replicated.
- Virtual network configurations - Private endpoint connections - All networks access enabled
The following terms are used in this article:
The following section is an overview to set up pairing between the namespaces.
You first create or use an existing primary namespace, and a new secondary namespace, then pair the two. This pairing gives you an alias that you can use to connect. Because you use an alias, you don't have to change connection strings. Only new namespaces can be added to your failover pairing.
-1. Create the primary namespace.
-1. Create the secondary namespace in a different region. This step is optional. You can create the secondary namespace while creating the pairing in the next step.
+1. Create the primary premium-tier namespace.
+1. Create the secondary premium-tier namespace in a different region. This step is optional. You can create the secondary namespace while creating the pairing in the next step.
1. In the Azure portal, navigate to your primary namespace. 1. Select **Geo-recovery** on the left menu, and select **Initiate pairing** on the toolbar.
- :::image type="content" source="./media/service-bus-geo-dr/primary-namspace-initiate-pairing-button.png" alt-text="Initiate pairing from the primary namespace":::
+ :::image type="content" source="./media/service-bus-geo-dr/primary-namspace-initiate-pairing-button.png" alt-text="Screenshot showing the Geo-recovery page with Initiate pairing link selected.":::
1. On the **Initiate pairing** page, follow these steps: 1. Select an existing secondary namespace or create one in a different region. In this example, an existing namespace is used as the secondary namespace. 1. For **Alias**, enter an alias for the geo-dr pairing. 1. Then, select **Create**.
- :::image type="content" source="./media/service-bus-geo-dr/initiate-pairing-page.png" alt-text="Select the secondary namespace":::
+ :::image type="content" source="./media/service-bus-geo-dr/initiate-pairing-page.png" alt-text="Screenshot showing the Initiate Pairing page in the Azure portal.":::
1. You should see the **Service Bus Geo-DR Alias** page as shown in the following image. You can also navigate to the **Geo-DR Alias** page from the primary namespace page by selecting the **Geo-recovery** on the left menu.
- :::image type="content" source="./media/service-bus-geo-dr/service-bus-geo-dr-alias-page.png" alt-text="Service Bus Geo-DR Alias page":::
+ :::image type="content" source="./media/service-bus-geo-dr/service-bus-geo-dr-alias-page.png" alt-text="Screenshot showing the Service Bus Geo-DR Alias page with primary and secondary namespaces.":::
1. On the **Geo-DR Alias** page, select **Shared access policies** on the left menu to access the primary connection string for the alias. Use this connection string instead of using the connection string to the primary/secondary namespace directly. Initially, the alias points to the primary namespace. 1. Switch to the **Overview** page. You can do the following actions: 1. Break the pairing between primary and secondary namespaces. Select **Break pairing** on the toolbar.
You first create or use an existing primary namespace, and a new secondary names
1. Turn ON the **Safe Failover** option to safely fail over to the secondary namespace. This feature makes sure that pending Geo-DR replications are completed before switching over to the secondary. 1. Then, select **Failover**.
- :::image type="content" source="./media/service-bus-geo-dr/failover-page.png" alt-text="{alt-text}":::
+ :::image type="content" source="./media/service-bus-geo-dr/failover-page.png" alt-text="Screenshot showing the Failover page.":::
> [!IMPORTANT] > Failing over will activate the secondary namespace and remove the primary namespace from the Geo-Disaster Recovery pairing. Create another namespace to have a new geo-disaster recovery pair.
You first create or use an existing primary namespace, and a new secondary names
1. Finally, you should add some monitoring to detect if a failover is necessary. In most cases, the service is one part of a large ecosystem, thus automatic failovers are rarely possible, as often failovers must be performed in sync with the remaining subsystem or infrastructure. ### Service Bus standard to premium
-If you have [migrated your Azure Service Bus Standard namespace to Azure Service Bus Premium](service-bus-migrate-standard-premium.md), then you must use the pre-existing alias (that is, your Service Bus Standard namespace connection string) to create the disaster recovery configuration through the **PS/CLI** or **REST API**.
+If you've [migrated your Azure Service Bus Standard namespace to Azure Service Bus Premium](service-bus-migrate-standard-premium.md), then you must use the pre-existing alias (that is, your Service Bus Standard namespace connection string) to create the disaster recovery configuration through the **PS/CLI** or **REST API**.
-It's because, during migration, your Azure Service Bus Standard namespace connection string/DNS name itself becomes an alias to your Azure Service Bus Premium namespace.
+It's because, during migration, your Azure Service Bus standard namespace connection string/DNS name itself becomes an alias to your Azure Service Bus premium namespace.
-Your client applications must utilize this alias (that is, the Azure Service Bus Standard namespace connection string) to connect to the Premium namespace where the disaster recovery pairing has been set up.
+Your client applications must utilize this alias (that is, the Azure Service Bus standard namespace connection string) to connect to the premium namespace where the disaster recovery pairing has been set up.
-If you use the Portal to set up the Disaster recovery configuration, then the portal will abstract this caveat from you.
+If you use the Azure portal to set up the disaster recovery configuration, the portal will abstract this caveat from you.
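As a sketch of the REST API approach mentioned above, the following management-plane call creates the disaster recovery configuration (the alias) on the primary premium namespace and pairs it with the secondary. The subscription, resource group, and namespace names are placeholders, and the API version shown is a recent stable Service Bus management release.

```http
PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.ServiceBus/namespaces/{primary-namespace}/disasterRecoveryConfigs/{alias}?api-version=2021-11-01
Content-Type: application/json
Authorization: Bearer {token}

{
  "properties": {
    "partnerNamespace": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.ServiceBus/namespaces/{secondary-namespace}"
  }
}
```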
## Failover flow A failover is triggered manually by the customer (either explicitly through a command, or through client owned business logic that triggers the command) and never by Azure. It gives the customer full ownership and visibility for outage resolution on Azure's backbone.
After the failover is triggered -
Once the failover is initiated -
You can automate failover either with monitoring systems, or with custom-built monitoring solutions. However, such automation takes extra planning and work, which is out of the scope of this article.
## Management
-If you made a mistake; for example, you paired the wrong regions during the initial setup, you can break the pairing of the two namespaces at any time. If you want to use the paired namespaces as regular namespaces, delete the alias.
+If you made a mistake, for example, you paired the wrong regions during the initial setup, you can break the pairing of the two namespaces at any time. If you want to use the paired namespaces as regular namespaces, delete the alias.
## Use existing namespace as alias
The [samples on GitHub](https://github.com/Azure/azure-service-bus/tree/master/s
Keep the following considerations in mind with this release:
-1. In your failover planning, you should also consider the time factor. For example, if you lose connectivity for longer than 15 to 20 minutes, you might decide to initiate the failover.
+- In your failover planning, you should also consider the time factor. For example, if you lose connectivity for longer than 15 to 20 minutes, you might decide to initiate the failover.
-2. The fact that no data is replicated means that currently active sessions aren't replicated. Additionally, duplicate detection and scheduled messages may not work. New sessions, new scheduled messages, and new duplicates will work.
+- The fact that no data is replicated means that currently active sessions aren't replicated. Additionally, duplicate detection and scheduled messages may not work. New sessions, new scheduled messages, and new duplicates will work.
-3. Failing over a complex distributed infrastructure should be [rehearsed](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan) at least once.
+- Failing over a complex distributed infrastructure should be [rehearsed](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan) at least once.
-4. Synchronizing entities can take some time, approximately 50-100 entities per minute. Subscriptions and rules also count as entities.
+- Synchronizing entities can take some time, approximately 50-100 entities per minute. Subscriptions and rules also count as entities.
## Availability Zones
-The Service Bus Premium SKU supports [Availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of messaging store (1 primary and 2 secondary). Service Bus keeps all the three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If the applications see transient disconnects from Service Bus, the retry logic in the SDK will automatically reconnect to Service Bus.
+The Service Bus Premium SKU supports [availability zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of the messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If the applications see transient disconnects from Service Bus, the retry logic in the SDK will automatically reconnect to Service Bus.
When you use availability zones, both metadata and data (messages) are replicated across data centers in the availability zone. > [!NOTE] > The Availability Zones support for Azure Service Bus Premium is only available in [Azure regions](../availability-zones/az-region.md) where availability zones are present.
-You can enable Availability Zones on new namespaces only, using the Azure portal. Service Bus does not support migration of existing namespaces. You cannot disable zone redundancy after enabling it on your namespace.
+When you create a premium tier namespace, the support for availability zones (if available in the selected region) is automatically enabled for the namespace. There's no additional cost for using this feature and you can't disable or enable this feature.
## Private endpoints This section provides more considerations when using Geo-disaster recovery with namespaces that use private endpoints. To learn about using private endpoints with Service Bus in general, see [Integrate Azure Service Bus with Azure Private Link](private-link-service.md).
service-health Resource Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-overview.md
You can also access Resource Health by selecting **All services** and typing **r
Check out these references to learn more about Resource Health: - [Resource types and health checks in Azure Resource Health](resource-health-checks-resource-types.md)
+- [Resource Health virtual machine Health Annotations](resource-health-vm-annotation.md)
- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml)
service-health Resource Health Vm Annotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-vm-annotation.md
Last updated 9/29/2022
# Resource Health virtual machine Health Annotations
-When the health of your virtual machine is impacted by availability impacting disruptions (see Resource types and health checks), the platform emits context as to why the disruption has occurred to assist you in responding appropriately.
+When the health of your virtual machine is impacted by availability impacting disruptions (see [Resource types and health checks](resource-health-checks-resource-types.md)), the platform emits context as to why the disruption has occurred to assist you in responding appropriately.
The following table summarizes all the annotations that the platform emits today:
-|Annotation| Description |
-|-|-|
-|VirtualMachineRestarted |The Virtual Machine is undergoing a reboot as requested by a restart action triggered by an authorized user or process from within the Virtual machine. No other action is required at this time. For more information, [see understanding Virtual Machine reboots in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot) .
-|VirtualMachineCrashed| The Virtual Machine is undergoing a reboot due to a guest OS crash. The local data remains unaffected during this process. No other action is required at this time. For more information, see [understanding Virtual Machine crashes in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot#vm-crashes) .
-|VirtualMachineStorageOffline| The Virtual Machine is either currently undergoing a reboot or experiencing an application freeze due to a temporary loss of access to disk. No other action is required at this time.
-|VirtualMachineFailedToSecureBoot |Applicable to Azure Confidential Compute Virtual Machines when guest activity such as unsigned booting components leads to a guest OS issue preventing the Virtual Machine from booting securely. You can attempt to retry deployment after ensuring OS boot components are signed by trusted publishers. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot).
-|LiveMigrationSucceeded |The Virtual Machine was briefly paused as a Live Migration operation was successfully performed on your Virtual Machine. This operation was carried out either as a repair action, for allocation optimization or as part of routine maintenance workflows. No other action is required at this time. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration) .
-|LiveMigrationFailure |A Live Migration operation was attempted on your Virtual Machine as either a repair action, for allocation optimization or as part of routine maintenance workflows. This operation, however, could not be successfully completed and may have resulted in a brief pause of your Virtual Machine. No other action is required at this time. <br/> Also note that [M Series](../virtual-machines/m-series.md), [L Series](../virtual-machines/lasv3-series.md) VM SKUs are not applicable for Live Migration. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). |
-|VirtualMachineAllocated | The Virtual Machine is in the process of being set up as requested by an authorized user or process. No other action is required at this time.
-|VirtualMachineDeallocationInitiated | The Virtual Machine is in the process of being stopped and deallocated as requested by an authorized user or process. No other action is required at this time.
-|VirtualMachineHostCrashed |The Virtual Machine has unexpectedly crashed due to the underlying host server experiencing a software failure or due to a failed hardware component. While the Virtual Machine is rebooting, the local data remains unaffected. You may attempt to redeploy the Virtual Machine to a different host server if you continue to experience issues.
-|VirtualMachineMigrationInitiatedForPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Planned Maintenance](../virtual-machines/maintenance-and-updates.md)
-|VirtualMachineRebootInitiatedForPlannedMaintenance| The Virtual Machine is undergoing a reboot as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md).
-|VirtualMachineHostRebootedForRepair | The Virtual Machine is undergoing a reboot due to the underlying host server experiencing unexpected failures. While the Virtual Machine is rebooting, the local data remains unaffected. For more information, see [understanding Virtual Machine reboots in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot) .
-|VirtualMachineMigrationInitiatedForRepair| The Virtual Machine is being migrated to a different host server due to the underlying host server experiencing unexpected failures. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Service Healing](https://azure.microsoft.com/blog/service-healing-auto-recovery-of-virtual-machines/) .
-|VirtualMachineRedeployInitiatedByControlPlaneDueToPlannedMaintenance| The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows triggered by an authorized user or process. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md)
-|VirtualMachineMigrationScheduledForDegradedHardware| The Virtual Machine is experiencing degraded availability as it is running on a host server with a degraded hardware component which is predicted to fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the degradation of the underlying hardware. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/)
-|VirtualMachinePossiblyDegradedDueToHardwareFailure | The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server with a degraded hardware component that will fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). |
-|VirtualMachineScheduledForServiceHealing| The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server that is experiencing fatal errors. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the failure signature encountered by the host server. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/).
-|VirtualMachinePreempted | If you are running a Spot/Low Priority Virtual Machine, it has been preempted either due to capacity recall by the platform or due to billing-based eviction where cost exceeded user defined thresholds. No other action is required at this time. For more information, see [Spot Virtual Machines](../virtual-machines/spot-vms.md).
-|VirtualMachineRebootInitiatedByControlPlane | The Virtual Machine is undergoing a reboot as requested by an authorized user or process from within the Virtual machine. No other action is required at this time.
-|VirtualMachineRedeployInitiatedByControlPlane | The Virtual Machine is being migrated to a different host server as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. Since the Virtual Machine is being migrated to a new host server, the local data will not persist.
-|VirtualMachineSizeChanged | The Virtual Machine is being resized as requested by an authorized user or process. No other action is required at this time.
-|VirtualMachineConfigurationUpdated | The Virtual Machine configuration is being updated as requested by an authorized user or process. No other action is required at this time.
-|VirtualMachineStartInitiatedByControlPlane |The Virtual Machine is starting as requested by an authorized user or process. No other action is required at this time.
-|VirtualMachineStopInitiatedByControlPlane | The Virtual Machine is stopping as requested by an authorized user or process. No other action is required at this time.
-|VirtualMachineStoppedInternally | The Virtual Machine is stopping as requested by an authorized user or process, or due to a guest activity from within the Virtual Machine. No other action is required at this time.
-|VirtualMachineProvisioningTimedOut | The Virtual Machine provisioning has failed due to Guest OS issues or incorrect user run scripts. You can attempt to either re-create this Virtual Machine. If this Virtual Machine is part of a Virtual Machine scale set, you can try reimaging it.
-|AccelnetUnhealthy | Applicable if Accelerated Networking is enabled for your Virtual Machine – We have detected that the Accelerated Networking feature is not functioning as expected. You can attempt to redeploy your Virtual Machine to potentially mitigate the issue.
+| Annotation | Description |
+|---|---|
+| VirtualMachineRestarted | The Virtual Machine is undergoing a reboot as requested by a restart action triggered by an authorized user or process from within the Virtual Machine. No other action is required at this time. For more information, see [understanding Virtual Machine reboots in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot). |
+| VirtualMachineCrashed | The Virtual Machine is undergoing a reboot due to a guest OS crash. The local data remains unaffected during this process. No other action is required at this time. For more information, see [understanding Virtual Machine crashes in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot#vm-crashes). |
+| VirtualMachineStorageOffline | The Virtual Machine is either currently undergoing a reboot or experiencing an application freeze due to a temporary loss of access to disk. |
+| VirtualMachineFailedToSecureBoot | Applicable to Azure Confidential Compute Virtual Machines when guest activity such as unsigned booting components leads to a guest OS issue preventing the Virtual Machine from booting securely. You can attempt to retry deployment after ensuring OS boot components are signed by trusted publishers. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot). |
+| LiveMigrationSucceeded | The Virtual Machine was briefly paused as a Live Migration operation was successfully performed on your Virtual Machine. This operation was carried out either as a repair action, for allocation optimization or as part of routine maintenance workflows. No other action is required at this time. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). |
+| LiveMigrationFailure | A Live Migration operation was attempted on your Virtual Machine as either a repair action, for allocation optimization or as part of routine maintenance workflows. This operation, however, could not be successfully completed and may have resulted in a brief pause of your Virtual Machine. No other action is required at this time. <br/> Also note that [M Series](../virtual-machines/m-series.md), [L Series](../virtual-machines/lasv3-series.md) VM SKUs are not applicable for Live Migration. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). |
+| VirtualMachineAllocated | The Virtual Machine is in the process of being set up as requested by an authorized user or process. No other action is required at this time. |
+| VirtualMachineDeallocationInitiated | The Virtual Machine is in the process of being stopped and deallocated as requested by an authorized user or process. No other action is required at this time. |
+| VirtualMachineHostCrashed | The Virtual Machine has unexpectedly crashed due to the underlying host server experiencing a software failure or due to a failed hardware component. While the Virtual Machine is rebooting, the local data remains unaffected. You may attempt to redeploy the Virtual Machine to a different host server if you continue to experience issues. |
+| VirtualMachineMigrationInitiatedForPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Planned Maintenance](../virtual-machines/maintenance-and-updates.md). |
+| VirtualMachineRebootInitiatedForPlannedMaintenance | The Virtual Machine is undergoing a reboot as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md). |
+| VirtualMachineHostRebootedForRepair | The Virtual Machine is undergoing a reboot due to the underlying host server experiencing unexpected failures. While the Virtual Machine is rebooting, the local data remains unaffected. For more information, see [understanding Virtual Machine reboots in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot). |
+| VirtualMachineMigrationInitiatedForRepair | The Virtual Machine is being migrated to a different host server due to the underlying host server experiencing unexpected failures. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Service Healing](https://azure.microsoft.com/blog/service-healing-auto-recovery-of-virtual-machines/). |
+| VirtualMachineRedeployInitiatedByControlPlaneDueToPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows triggered by an authorized user or process. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md). |
+| VirtualMachineMigrationScheduledForDegradedHardware | The Virtual Machine is experiencing degraded availability as it is running on a host server with a degraded hardware component which is predicted to fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the degradation of the underlying hardware. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). |
+| VirtualMachinePossiblyDegradedDueToHardwareFailure | The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server with a degraded hardware component that will fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). |
+| VirtualMachineScheduledForServiceHealing | The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server that is experiencing fatal errors. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the failure signature encountered by the host server. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). |
+| VirtualMachinePreempted | If you are running a Spot/Low Priority Virtual Machine, it has been preempted either due to capacity recall by the platform or due to billing-based eviction where cost exceeded user defined thresholds. No other action is required at this time. For more information, see [Spot Virtual Machines](../virtual-machines/spot-vms.md). |
+| VirtualMachineRebootInitiatedByControlPlane | The Virtual Machine is undergoing a reboot as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. |
+| VirtualMachineRedeployInitiatedByControlPlane | The Virtual Machine is being migrated to a different host server as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. |
+| VirtualMachineSizeChanged | The Virtual Machine is being resized as requested by an authorized user or process. No other action is required at this time. |
+| VirtualMachineConfigurationUpdated | The Virtual Machine configuration is being updated as requested by an authorized user or process. No other action is required at this time. |
+| VirtualMachineStartInitiatedByControlPlane | The Virtual Machine is starting as requested by an authorized user or process. No other action is required at this time. |
+| VirtualMachineStopInitiatedByControlPlane | The Virtual Machine is stopping as requested by an authorized user or process. No other action is required at this time. |
+| VirtualMachineStoppedInternally | The Virtual Machine is stopping as requested by an authorized user or process, or due to a guest activity from within the Virtual Machine. No other action is required at this time. |
+| VirtualMachineProvisioningTimedOut | The Virtual Machine provisioning has failed due to Guest OS issues or incorrect user-run scripts. You can attempt to re-create this Virtual Machine. If this Virtual Machine is part of a virtual machine scale set, you can try reimaging it. |
+| AccelnetUnhealthy | Applicable if Accelerated Networking is enabled for your Virtual Machine – We have detected that the Accelerated Networking feature is not functioning as expected. You can attempt to redeploy your Virtual Machine to potentially mitigate the issue. |
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server/Replication appliance** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent**
--- | --- | --- | --- | --- | ---
-[Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9249.0
+[Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9257.0
[Rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 9.50.6419.1 | 5.1.7626.0 | 9.50.6419.1 | 5.1.7626.0 | 2.0.9249.0
[Rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 9.49.6395.1 | 5.1.7418.0 | 9.49.6395.1 | 5.1.7418.0 | 2.0.9248.0
[Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6349.1 | 5.1.7387.0 | 9.48.6349.1 | 5.1.7387.0 | 2.0.9245.0
spring-apps How To Service Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-service-registration.md
Last updated 05/09/2022-+ zone_pivot_groups: programming-languages-spring-apps
Service registration and discovery are key requirements for maintaining a list o
* Use Kubernetes Service Discovery approach to invoke calls among your apps.
- Azure Spring Apps creates a corresponding kubernetes service for every app running in it using app name as the kubernetes service name. So you can invoke calls in one app to another app by using app name in a http/https request like http(s)://{app name}/path. And this approach is also suitable for Enterprise tier.
+ Azure Spring Apps creates a corresponding Kubernetes service for every app running in it using the app name as the Kubernetes service name. You can invoke calls from one app to another app by using the app name in an HTTP/HTTPS request such as `http(s)://{app name}/path`. This approach is also suitable for Enterprise tier. For more information, see the [Kubernetes registry code sample](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/k8s-service-registry).
* Use Managed Spring Cloud Service Registry (OSS) in Azure Spring Apps.
For information about how to set up service registration for a Steeltoe app, see
## Register your application using Spring Cloud Service Registry
-Before your application can manage service registration and discovery using Spring Cloud Service Registry, you must include the following dependency for *spring-cloud-starter-netflix-eureka-client* to your *pom.xml*:
+Before your application can manage service registration and discovery using Spring Cloud Service Registry, you must include the following dependency for `spring-cloud-starter-netflix-eureka-client` in your *pom.xml* file:
```xml <dependency>
Before your application can manage service registration and discovery using Spri
## Update the top level class
-Finally, add an annotation to the top level class of your application as shown in the following example:
+Finally, add an annotation to the top level class of your application, as shown in the following example:
```java package foo.bar;
The Spring Cloud Service Registry server endpoint will be injected as an environ
> [!NOTE] > It can take a few minutes for the changes to propagate from the server to all applications. ::: zone-end+
+## Next steps
+
+In this article, you learned how to register your application using Spring Cloud Service Registry. To learn how to access the Spring Cloud Service Registry using Azure Active Directory (Azure AD) role-based access control (RBAC), see [Access Config Server and Service Registry](how-to-access-data-plane-azure-ad-rbac.md).
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-manage.md
Title: Create and manage encryption scopes description: Learn how to create an encryption scope to isolate blob data at the container or blob level. -+ Previously updated : 07/13/2022 Last updated : 10/27/2022 -+ -+ ms.devlang: azurecli
An encryption scope is automatically enabled when you create it. After you creat
To create an encryption scope in the Azure portal, follow these steps: 1. Navigate to your storage account in the Azure portal.
-1. Select the **Encryption** setting.
+1. Under **Security + networking**, select **Encryption**.
1. Select the **Encryption Scopes** tab. 1. Click the **Add** button to add a new encryption scope. 1. In the **Create Encryption Scope** pane, enter a name for the new scope.
To create an encryption scope in the Azure portal, follow these steps:
- If you selected **Customer-managed keys**, then select a subscription and specify a key vault or a managed HSM and a key to use for this encryption scope. 1. If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope.
- :::image type="content" source="media/encryption-scope-manage/create-encryption-scope-customer-managed-key-portal.png" alt-text="Screenshot showing how to create encryption scope in Azure portal":::
+ :::image type="content" source="media/encryption-scope-manage/create-encryption-scope-customer-managed-key-portal.png" alt-text="Screenshot showing how to create encryption scope in Azure portal" lightbox="media/encryption-scope-manage/create-encryption-scope-customer-managed-key-portal.png":::
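If you'd rather script this step, the following is a minimal PowerShell sketch of the same operation using the `New-AzStorageEncryptionScope` cmdlet from the Az.Storage module; the resource group, storage account, and scope names are hypothetical placeholders.

```powershell
# A minimal sketch: create an encryption scope protected by Microsoft-managed keys.
# The resource group, storage account, and scope names are placeholders.
New-AzStorageEncryptionScope -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -EncryptionScopeName "myscope" `
    -StorageEncryption
```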
# [PowerShell](#tab/powershell)
To learn more about infrastructure encryption, see [Enable infrastructure encryp
To view the encryption scopes for a storage account in the Azure portal, navigate to the **Encryption Scopes** setting for the storage account. From this pane, you can enable or disable an encryption scope or change the key for an encryption scope. To view details for a customer-managed key, including the key URI and version and whether the key version is automatically updated, follow the link in the **Key** column. # [PowerShell](#tab/powershell)
To create a container with a default encryption scope in the Azure portal, first
1. In the **Encryption scope** drop-down, select the default encryption scope for the container. 1. To require that all blobs in the container use the default encryption scope, select the checkbox to **Use this encryption scope for all blobs in the container**. If this checkbox is selected, then an individual blob in the container cannot override the default encryption scope.
- :::image type="content" source="media/encryption-scope-manage/create-container-default-encryption-scope.png" alt-text="Screenshot showing container with default encryption scope":::
+ :::image type="content" source="media/encryption-scope-manage/create-container-default-encryption-scope.png" alt-text="Screenshot showing container with default encryption scope" lightbox="media/encryption-scope-manage/create-container-default-encryption-scope.png":::
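As a scripted alternative, here's a minimal PowerShell sketch that creates a container with a default encryption scope and blocks per-blob overrides; all resource names are hypothetical placeholders.

```powershell
# A minimal sketch: create a container whose blobs use a default encryption scope.
# -PreventEncryptionScopeOverride $true blocks individual blobs from overriding it.
New-AzRmStorageContainer -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -Name "mycontainer" `
    -DefaultEncryptionScope "myscope" `
    -PreventEncryptionScopeOverride $true
```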
# [PowerShell](#tab/powershell)
If a client attempts to specify a scope when uploading a blob to a container tha
When you upload a blob, you can specify an encryption scope for that blob, or use the default encryption scope for the container, if one has been specified.
-When you upload a new blob with an encryption scope, you cannot change the default access tier for that blob.
+> [!NOTE]
+> When you upload a new blob with an encryption scope, you cannot change the default access tier for that blob. You also cannot change the access tier for an existing blob that uses an encryption scope. For more information about access tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
# [Portal](#tab/portal)
To upload a blob with an encryption scope via the Azure portal, first create the
1. Locate the **Encryption scope** drop-down section. By default, the blob is created with the default encryption scope for the container, if one has been specified. If the container requires that blobs use the default encryption scope, this section is disabled. 1. To specify a different scope for the blob that you are uploading, select **Choose an existing scope**, then select the desired scope from the drop-down.
- :::image type="content" source="media/encryption-scope-manage/upload-blob-encryption-scope.png" alt-text="Screenshot showing how to upload a blob with an encryption scope":::
+ :::image type="content" source="media/encryption-scope-manage/upload-blob-encryption-scope.png" alt-text="Screenshot showing how to upload a blob with an encryption scope" lightbox="media/encryption-scope-manage/upload-blob-encryption-scope.png":::
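For a scripted upload, the following is a minimal PowerShell sketch, assuming a version of the Az.Storage module that supports the `-EncryptionScope` parameter on `Set-AzStorageBlobContent`; the account, container, file, and scope names are hypothetical placeholders.

```powershell
# A minimal sketch: upload a blob under a specific encryption scope.
# Assumes Az.Storage supports -EncryptionScope on Set-AzStorageBlobContent.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
Set-AzStorageBlobContent -Context $ctx `
    -Container "mycontainer" `
    -File "textfile.txt" `
    -Blob "textfile.txt" `
    -EncryptionScope "myscope"
```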
# [PowerShell](#tab/powershell)
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-overview.md
Previously updated : 09/20/2022 Last updated : 10/27/2022 + # Encryption scopes for Blob storage
A default encryption scope must be specified for a container at the time that th
If no default encryption scope is specified for the container, then you can upload a blob using any encryption scope that you've defined for the storage account. The encryption scope must be specified at the time that the blob is uploaded.
-When you upload a new blob with an encryption scope, you cannot change the default access tier for that blob. You also cannot change the access tier for an existing blob that uses an encryption scope. For more information about access tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+> [!NOTE]
+> When you upload a new blob with an encryption scope, you cannot change the default access tier for that blob. You also cannot change the access tier for an existing blob that uses an encryption scope. For more information about access tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
## Disabling an encryption scope
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Data sets have unique lifecycles. Early in the lifecycle, people access some data often. But the need for access often drops drastically as the data ages. Some data remains idle in the cloud and is rarely accessed once stored. Some data sets expire days or months after creation, while other data sets are actively read and modified throughout their lifetimes. Azure Storage lifecycle management offers a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle.
+> [!NOTE]
+> Each last access time update is charged as an "other transaction" at most once every 24 hours per object, even if the object is accessed thousands of times in a day. This charge is separate from read transaction charges.
+ With the lifecycle management policy, you can: - Transition blobs from cool to hot immediately when they're accessed, to optimize for performance.
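To make the last-access billing note above concrete, here's a minimal PowerShell sketch of a lifecycle rule keyed to last access time. It assumes last access time tracking is already enabled on the account and that your Az.Storage version supports the `-DaysAfterLastAccessTimeGreaterThan` and `-EnableAutoTierToHotFromCool` parameters; all resource names are hypothetical placeholders.

```powershell
# A minimal sketch: tier block blobs to cool 30 days after last access and
# let them auto-tier back to hot when they're read again.
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool `
    -DaysAfterLastAccessTimeGreaterThan 30 -EnableAutoTierToHotFromCool
$filter = New-AzStorageAccountManagementPolicyFilter -BlobType blockBlob
$rule = New-AzStorageAccountManagementPolicyRule -Name "cool-rarely-accessed" -Action $action -Filter $filter
Set-AzStorageAccountManagementPolicy -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" -Rule $rule
```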
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
Previously updated : 05/26/2022 Last updated : 10/26/2022
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
description: Learn how to connect to an Azure Elastic SAN (preview) volume from
Previously updated : 10/25/2022 Last updated : 10/27/2022
Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -El
# [Azure CLI](#tab/azure-cli) ```azurecli
-az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
+virtualNetworkListLength=$(az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)')
+
+az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength]="{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default,action:Allow}"
```
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
description: Learn how to connect to an Azure Elastic SAN (preview) volume from
Previously updated : 10/25/2022 Last updated : 10/27/2022
Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -El
# [Azure CLI](#tab/azure-cli) ```azurecli
-az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
+virtualNetworkListLength=$(az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)')
+
+az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength]="{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default,action:Allow}"
```
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
description: An overview of Azure Elastic SAN (preview), a service that enables
Previously updated : 10/25/2022 Last updated : 10/27/2022
You can manage virtual network rules for volume groups through the Azure portal,
> You can use the **subscription** parameter to retrieve the subnet ID for a virtual network belonging to another Azure AD tenant. ```azurecli
- az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtual-network-rules:[{id:/'subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default',action:Allow}]}"
+ # First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
+ virtualNetworkListLength=$(az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)')
+
+ az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength]="{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default,action:Allow}"
```
- Remove a network rule. The following command removes the first network rule; modify it to remove the network rule you'd like.
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
The AD DS account created by the cmdlet represents the storage account. If the A
You must run the script below in PowerShell 5.1 on a device that's domain joined to your on-premises AD DS, using an on-premises AD DS credential that's synced to your Azure AD. To follow the [Least privilege principle](../../role-based-access-control/best-practices.md), the on-premises AD DS credential must have the following Azure roles:
- **Reader** on the resource group where the target storage account is located.
-- **Contributor** on the storage account to be joined to AD DS (**Owner** will also work).
+- **Contributor** on the storage account to be joined to AD DS.
+
+> [!NOTE]
+> If the account used to join the storage account in AD DS is an **Owner** or **Contributor** in the Azure subscription where the target resources are located, then that account is already enabled to perform the join and no further assignments are required.
The AD DS credential must also have permissions to create a service logon account or computer account in the target AD. Replace the placeholder values with your own before executing the script.
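As a sketch of the role assignments described above, the following PowerShell grants the two required roles to the AD DS credential; the sign-in name, resource group, and storage account path are hypothetical placeholders.

```powershell
# A minimal sketch: assign Reader on the resource group and Contributor on the
# storage account to the credential that will run the domain-join script.
New-AzRoleAssignment -SignInName "admin@contoso.com" `
    -RoleDefinitionName "Reader" `
    -ResourceGroupName "myResourceGroup"
New-AzRoleAssignment -SignInName "admin@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```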
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
Get result:
getToken(audience, name): returns AAD token for a given audience, name (optional)
isValidToken(token): returns true if token hasn't expired
getConnectionStringOrCreds(linkedService): returns connection string or credentials for linked service
-getSecret(akvName, secret, linkedService): returns AKV secret for a given AKV linked service, akvName, secret key
+getFullConnectionString(linkedService): returns full connection string with credentials
+getPropertiesAll(linkedService): returns all the properties of a linked service
+getSecret(akvName, secret, linkedService): returns AKV secret for a given AKV linked service, akvName, secret key
getSecret(akvName, secret): returns AKV secret for a given akvName, secret key
+getSecretWithLS(linkedService, secret): returns AKV secret for a given linked service, secret key
putSecret(akvName, secretName, secretValue, linkedService): puts AKV secret for a given akvName, secretName
putSecret(akvName, secretName, secretValue): puts AKV secret for a given akvName, secretName
+putSecretWithLS(linkedService, secretName, secretValue): puts AKV secret for a given linked service, secretName
```
### Get token
synapse-analytics Connect Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database.md
This article is a step-by-step guide for getting started with Azure Synapse Link
1. Provide a name for your Azure Synapse Link connection, and select the number of cores for the [link connection compute](sql-database-synapse-link.md#link-connection). These cores will be used for the movement of data from the source to the target. > [!NOTE]
- > We recommend starting low and increasing the number of cores as needed.
+ > * The cores you select here are allocated to the ingestion service for processing data loading and changes. They don't affect the source Azure SQL Database configuration or the target dedicated SQL pool configuration.
+ > * We recommend starting low and increasing the number of cores as needed.
1. Select **OK**.
If you're using a database other than an Azure SQL database, see:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context) * [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context) * [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
-* [Get or set a managed identity for an Azure SQL Database logical server or managed instance](/sql/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity.md#get-or-set-a-managed-identity-for-a-logical-server-or-managed-instance)
+* [Get or set a managed identity for an Azure SQL Database logical server or managed instance](/azure/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity#get-or-set-a-managed-identity-for-a-logical-server-or-managed-instance)
synapse-analytics Connect Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md
This article is a step-by-step guide for getting started with Azure Synapse Link
:::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-compute-settings.png" alt-text="Screenshot that shows where to enter the link connection settings.":::
+ > [!NOTE]
+ > The cores you select here are allocated to the ingestion service for processing data loading and changes. They don't affect the target dedicated SQL pool configuration.
+ 1. With the new Azure Synapse Link connection open, you can now update the target table name, distribution type, and structure type. > [!NOTE]
virtual-desktop Configure Rdp Shortpath Limit Ports Public Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-rdp-shortpath-limit-ports-public-networks.md
By default, RDP Shortpath for public networks uses an ephemeral port range of 49152 to 65535 to establish a direct path between server and client. However, you may want to configure your session hosts to use a smaller, predictable port range.
-You can specify a smaller default range of ports 38300 to 39299 by configuring the `ICEEnableClientPortRange` registry value your session hosts, but in addition you can also specify the ports you want to use. When enabled on your session hosts, the Remote Desktop client will randomly select the port from the range you specify for every connection. If this range is exhausted, clients will fall back to using the default port range (49154-65535).
+You can set a smaller default range of ports 38300 to 39299, or you can specify your own port range to use. When enabled on your session hosts, the Remote Desktop client will randomly select the port from the range you specify for every connection. If this range is exhausted, clients will fall back to using the default port range (49152-65535).
When choosing the base and pool size, consider the number of ports you choose. The range must be between 1024 and 49151, after which the ephemeral port range begins.
When choosing the base and pool size, consider the number of ports you choose. T
## Enable a limited port range
-1. To enable a limited port range when using RDP Shortpath for public networks, open an elevated PowerShell prompt on your session hosts and run the following command to add the required registry value:
+1. To enable a limited port range when using RDP Shortpath for public networks, open PowerShell as an administrator on your session hosts and run the following command to add the required registry value. This changes the port range from the larger default to the smaller predefined range of 38300 to 39299.
```powershell New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server" -Name ICEEnableClientPortRange -PropertyType DWORD -Value 1 ```
-2. To further specify the port range to use, open an elevated PowerShell prompt on your session hosts and run the following commands, where the value for `ICEClientPortBase` is the start of the range, and `ICEClientPortRange` is the number of ports to use from the start of the range. For example, if you select 25000 as a port base and 1000 as pool size, the upper bound will be 25999.
+2. Once you have enabled a limited port range to be set, you can further specify the port range to use. Open PowerShell as an administrator on your session hosts and run the following commands, where the value for `ICEClientPortBase` is the start of the range, and `ICEClientPortRange` is the number of ports to use from the start of the range. For example, if you select 25000 as a port base and 1000 as pool size, the upper bound will be 25999.
```powershell New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" -Name ICEClientPortBase -PropertyType DWORD -Value 25000
virtual-desktop Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-shortpath.md
If your users are in a scenario where RDP Shortpath for both managed network and
#### Session host virtual network
-| Name | Source | Source Port | Destination | Destination Port | Protocol | Action |
-|-|--|-|--||-|--|
-| RDP Shortpath Server Endpoint | VM subnet | Any | Any | 1024-65535 | UDP | Allow |
-| STUN Access | VM subnet | Any | - 13.107.17.41/32<br />- 13.107.64.0/18<br />- 20.202.0.0/16<br />- 52.112.0.0/14<br />- 52.120.0.0/14 | 3478 | UDP | Allow |
+| Name | Source | Source Port | Destination | Destination Port | Protocol | Action |
+|---|---|:---:|---|:---:|:---:|:---:|
+| RDP Shortpath Server Endpoint | VM subnet | Any | Any | 1024-65535<br />(*default 49152-65535*) | UDP | Allow |
+| STUN Access | VM subnet | Any | - 13.107.17.41/32<br />- 13.107.64.0/18<br />- 20.202.0.0/16<br />- 52.112.0.0/14<br />- 52.120.0.0/14 | 3478 | UDP | Allow |
#### Client network
-| Name | Source | Source Port | Destination | Destination Port | Protocol | Action |
-|-|-|-|--||-|--|
-| RDP Shortpath Server Endpoint | Client network | Any | Public IP addresses assigned to NAT Gateway or Azure Firewall (provided by the STUN endpoint) | 1024-65535 | UDP | Allow |
-| STUN Access | Client network | Any | - 13.107.17.41/32<br />- 13.107.64.0/18<br />- 20.202.0.0/16<br />- 52.112.0.0/14<br />- 52.120.0.0/14 | 3478 | UDP | Allow |
+| Name | Source | Source Port | Destination | Destination Port | Protocol | Action |
+|---|---|:---:|---|:---:|:---:|:---:|
+| RDP Shortpath Server Endpoint | Client network | Any | Public IP addresses assigned to NAT Gateway or Azure Firewall (provided by the STUN endpoint) | 1024-65535<br />(*default 49152-65535*) | UDP | Allow |
+| STUN Access | Client network | Any | - 13.107.17.41/32<br />- 13.107.64.0/18<br />- 20.202.0.0/16<br />- 52.112.0.0/14<br />- 52.120.0.0/14 | 3478 | UDP | Allow |
### Teredo support
virtual-desktop Remote Desktop Clients Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/remote-desktop-clients-overview.md
There are many features you can use to enhance your remote experience, such as:
- Device redirection, such as webcams, storage devices, and printers. - Microsoft Teams optimizations.
-Some features are only available with certain clients, so it's important to check [Compare the features of the Remote Desktop clients](../compare-remote-desktop-clients.md) to understand the differences when connecting to Azure Virtual Desktop.
+Some features are only available with certain clients, so it's important to check [Compare the features of the Remote Desktop clients](../compare-remote-desktop-clients.md?toc=%2Fazure%2Fvirtual-desktop%2Fusers%2Ftoc.json) to understand the differences when connecting to Azure Virtual Desktop.
If you want information on Remote Desktop Services instead, see [Remote Desktop clients for Remote Desktop Services](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
virtual-machines Dbms_Guide_Sapase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_sapase.md
tags: azure-resource-manager
Previously updated : 08/23/2022 Last updated : 10/27/2022
An example of a configuration for a small SAP ASE DB Server with a database size
| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
-| VM Type | E4s_v3 (4 vCPU/32 GB RAM) | E4s_v3 (4 vCPU/32 GB RAM) | |
+| VM Type | E4s_v3/v4/v5 (4 vCPU/32 GB RAM) | E4s_v3/v4/v5 (4 vCPU/32 GB RAM) | |
| Accelerated Networking | Enable | Enable | |
| SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | |
| # of data devices | 4 | 4 | |
| # of log devices | 1 | 1 | |
| # of temp devices | 1 | 1 | More for SAP BW workload |
-| Operating system | Windows Server 2019 | SLES 12 SP4/ 15 SP1 or RHEL 7.6/8.1 | |
+| Operating system | Windows Server 2019 | SLES 12 SP5, 15 SP1 or later, or RHEL 7.9, 8.1/8.2/8.4 | |
| Disk aggregation | Storage Spaces | LVM2 | |
| File system | NTFS | XFS | |
| Format block size | Needs workload testing | Needs workload testing | |
An example of a configuration for a medium SAP ASE DB Server with a database siz
| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
-| VM Type | E16s_v3 (16 vCPU/128 GB RAM) | E16s_v3 (16 vCPU/128 GB RAM) | |
+| VM Type | E16s_v3/v4/v5 (16 vCPU/128 GB RAM) | E16s_v3/v4/v5 (16 vCPU/128 GB RAM) | |
| Accelerated Networking | Enable | Enable | |
| SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | |
| # of data devices | 8 | 8 | |
| # of log devices | 1 | 1 | |
| # of temp devices | 1 | 1 | More for SAP BW workload |
-| Operating system | Windows Server 2019 | SLES 12 SP4/ 15 SP1 or RHEL 7.6/8.1 | |
+| Operating system | Windows Server 2019 | SLES 12 SP5, 15 SP1 or later, or RHEL 7.9, 8.1/8.2/8.4 | |
| Disk aggregation | Storage Spaces | LVM2 | |
| File system | NTFS | XFS | |
| Format block size | Needs workload testing | Needs workload testing | |
An example of a configuration for a small SAP ASE DB Server with a database size
| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
-| VM Type | E64s_v3 (64 vCPU/432 GB RAM) | E64s_v3 (64 vCPU/432 GB RAM) | |
+| VM Type | E64s_v3/v4/v5 (64 vCPU/432 GB RAM) | E64s_v3/v4/v5 (64 vCPU/432 GB RAM) | |
| Accelerated Networking | Enable | Enable | |
| SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | |
| # of data devices | 16 | 16 | |
| # of log devices | 1 | 1 | |
| # of temp devices | 1 | 1 | More for SAP BW workload |
-| Operating system | Windows Server 2019 | SLES 12 SP4/ 15 SP1 or RHEL 7.6/8.1 | |
+| Operating system | Windows Server 2019 | SLES 12 SP5, 15 SP1 or later, or RHEL 7.9, 8.1/8.2/8.4 | |
| Disk aggregation | Storage Spaces | LVM2 | |
| File system | NTFS | XFS | |
| Format block size | Needs workload testing | Needs workload testing | |
An example of a configuration for a small SAP ASE DB Server with a database size
| # of data devices | 32 | 32 | |
| # of log devices | 1 | 1 | |
| # of temp devices | 1 | 1 | More for SAP BW workload |
-| Operating system | Windows Server 2019 | SLES 12 SP4/ 15 SP1 or RHEL 7.6/8.1 | |
+| Operating system | Windows Server 2019 | SLES 12 SP5, 15 SP1 or later, or RHEL 7.9, 8.1/8.2/8.4 | |
| Disk aggregation | Storage Spaces | LVM2 | |
| File system | NTFS | XFS | |
| Format block size | Needs workload testing | Needs workload testing | |
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 10/20/2022 Last updated : 10/27/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- October 27, 2022: Adding Ev4 and Ev5 VM families and updated OS releases to table in [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_sapase.md)
- October 20, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) and [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md) to indicate that we are de-emphasizing SAP reference architectures, utilizing NFS clusters - October 18, 2022: Clarify some considerations around using Azure Availability Zones in [SAP workload configurations with Azure Availability Zones](./sap-ha-availability-zones.md) - October 17, 2022: Change in [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) and [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to add guidance for setting up parameter `AUTOMATED_REGISTER`
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
vm-windows Previously updated : 09/22/2022 Last updated : 10/26/2022
Make sure to assign the custom role to the service principal at all VM (cluster
vm.dirty_background_bytes = 314572800 </code></pre>
- c. Make sure vm.swappiness is set to 10 to avoid [hang issues with backups/compression on NetAPP filesystem] (https://me.sap.com/notes/2080199) as well as to reduce swap usage and favor memory.
+ c. Make sure vm.swappiness is set to 10 to reduce swap usage and favor memory.
<pre><code>sudo vi /etc/sysctl.conf # Change/set the following setting
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
Title: How Accelerated Networking works in Linux and FreeBSD VMs
description: How Accelerated Networking Works in Linux and FreeBSD VMs documentationcenter: ''-+ editor: ''
vm-linux Last updated 02/15/2022-+ # How Accelerated Networking works in Linux and FreeBSD VMs
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
Title: Accelerated Networking overview
description: Accelerated Networking to improves networking performance of Azure VMs. documentationcenter: ''-+ editor: ''
vm-windows Last updated 02/15/2022-+ # What is Accelerated Networking?
virtual-network Application Security Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/application-security-groups.md
description: Learn about the use of application security groups. documentationcenter: na-+ na Last updated 02/27/2020-+
virtual-network Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/cli-samples.md
Title: Azure CLI samples for virtual network
description: Learn about various sample scripts you can use for completing tasks in the Azure CLI, including creating a virtual network for multi-tier applications. documentationcenter: virtual-network-+ editor: '' tags:
Last updated 07/15/2019-+
virtual-network Concepts And Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/concepts-and-best-practices.md
Title: Azure Virtual Network - Concepts and best practices
description: Learn about Azure Virtual Network concepts and best practices. documentationcenter: na-+ na Last updated 12/03/2020-+ # Azure Virtual Network concepts and best practices
virtual-network Container Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/container-networking-overview.md
Title: Container networking with Azure Virtual Network | Microsoft Docs
description: Learn about the Azure Virtual Network container network interface (CNI) plug-in and how to enable containers to use an Azure Virtual Network. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 9/18/2018-+
virtual-network Create Peering Different Deployment Models Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-deployment-models-subscriptions.md
description: Learn how to create a virtual network peering between virtual networks created through different Azure deployment models that exist in different Azure subscriptions. documentationcenter: ''-+ na Last updated 06/25/2020-+
virtual-network Create Peering Different Deployment Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-deployment-models.md
Title: Create an Azure virtual network peering - different deployment models - s
description: Learn how to create a virtual network peering between virtual networks created through different Azure deployment models that exist in the same Azure subscription. documentationcenter: ''-+ editor: '' tags: azure-resource-manager
na Last updated 11/15/2018-+ # Create a virtual network peering - different deployment models, same subscription
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
description: Learn how to create a virtual network peering between virtual networks created through Resource Manager that exist in different Azure subscriptions in the same or different Azure Active Directory tenant. documentationcenter: ''-+ na Last updated 04/09/2019-+ # Create a virtual network peering - Resource Manager, different subscriptions and Azure Active Directory tenants
virtual-network Create Ptr For Smtp Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-ptr-for-smtp-service.md
description: Describes how to configure reverse lookup zones for an SMTP banner check in Azure documentationcenter: virtual-network-+ virtual-network Last updated 10/31/2018-+ # Configure reverse lookup zones for an SMTP banner check
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
Title: Create an Azure VM with Accelerated Networking using Azure CLI
description: Learn how to create a Linux virtual machine with Accelerated Networking enabled. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 03/24/2022-+
virtual-network Create Vm Accelerated Networking Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-powershell.md
Title: Create Windows VM with accelerated networking - Azure PowerShell
description: Create a Windows virtual machine (VM) with Accelerated Networking for improved network performance documentationcenter: ''-+ editor: ''
vm-windows Last updated 03/22/2022-+ # Create a Windows VM with accelerated networking using Azure PowerShell
virtual-network Deploy Container Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking.md
Title: Deploy Azure virtual network container networking | Microsoft Docs
description: Learn how to deploy the Azure Virtual Network container network interface (CNI) plug-in for Kubernetes clusters. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 9/18/2018-+
virtual-network Diagnose Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-routing-problem.md
Title: Diagnose an Azure virtual machine routing problem | Microsoft Docs
description: Learn how to diagnose a virtual machine routing problem by viewing the effective routes for a virtual machine. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 05/30/2018-+ ms.devlang: azurecli
virtual-network Diagnose Network Traffic Filter Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-traffic-filter-problem.md
Title: Diagnose a virtual machine network traffic filter problem | Microsoft Doc
description: Learn how to diagnose a virtual machine network traffic filter problem by viewing the effective security rules for a virtual machine. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 05/29/2018-+ ms.devlang: azurecli
virtual-network Associate Public Ip Address Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/associate-public-ip-address-vm.md
Title: Associate a public IP address to a virtual machine
-description: Learn to associate a public IP address to a virtual machine (VM) by using the Azure portal or the Azure CLI.
+description: Learn how to associate a public IP address to a virtual machine (VM) by using the Azure portal, Azure CLI, or Azure PowerShell.
Previously updated : 02/21/2019 Last updated : 10/26/2022 # Associate a public IP address to a virtual machine
-In this article, you learn how to associate a public IP address to an existing virtual machine (VM). If you want to connect to a VM from the internet, the VM must have a public IP address associated to it. If you want to create a new VM with a public IP address, you can do so using the [Azure portal](virtual-network-deploy-static-pip-arm-portal.md), the [Azure CLI](virtual-network-deploy-static-pip-arm-cli.md), or [Azure PowerShell](virtual-network-deploy-static-pip-arm-ps.md). Public IP addresses have a nominal fee. For details, see [pricing](https://azure.microsoft.com/pricing/details/ip-addresses/). There is a limit to the number of public IP addresses that you can use per subscription. For details, see [limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#publicip-address).
+In this article, you learn how to associate a public IP address to an existing virtual machine (VM). If you want to create a new VM with a public IP address, you can do so using the [Azure portal](virtual-network-deploy-static-pip-arm-portal.md), the [Azure CLI](virtual-network-deploy-static-pip-arm-cli.md), or [Azure PowerShell](virtual-network-deploy-static-pip-arm-ps.md). Public IP addresses have a nominal fee. For details, see [pricing](https://azure.microsoft.com/pricing/details/ip-addresses/). There's a limit to the number of public IP addresses that you can use per subscription. For details, see [limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#publicip-address).
You can use the [Azure portal](#azure-portal), the [Azure CLI](#azure-cli), or [Azure PowerShell](#powershell) to associate a public IP address to a VM.
You can use the [Azure portal](#azure-portal), the [Azure CLI](#azure-cli), or [
2. Browse to, or search for the virtual machine that you want to add the public IP address to and then select it. 3. Under **Settings**, select **Networking**, and then select the network interface you want to add the public IP address to, as shown in the following picture:
- ![Select network interface](./media/associate-public-ip-address-vm/select-nic.png)
+ :::image type="content" source="./media/associate-public-ip-address-vm/select-nic.png" alt-text="Screenshot showing how to select the network interface of a virtual machine.":::
> [!NOTE]
> Public IP addresses are associated to network interfaces attached to a VM. In the previous picture, the VM only has one network interface. If the VM had multiple network interfaces, they would all appear, and you'd select the network interface you want to associate the public IP address to.

4. Select **IP configurations** and then select an IP configuration, as shown in the following picture:
- ![Select IP configuration](./media/associate-public-ip-address-vm/select-ip-configuration.png)
+ :::image type="content" source="./media/associate-public-ip-address-vm/select-ip-configuration.png" alt-text="Screenshot showing how to select the I P configuration of a network interface.":::
> [!NOTE]
> Public IP addresses are associated to IP configurations for a network interface. In the previous picture, the network interface has one IP configuration. If the network interface had multiple IP configurations, they would all appear in the list, and you'd select the IP configuration that you want to associate the public IP address to.
-5. Select **Enabled**, then select **IP address (*Configure required settings*)**. Choose an existing public IP address, which automatically closes the **Choose public IP address** box. If you don't have any available public IP addresses listed, you need to create one. To learn how, see [Create a public IP address](virtual-network-public-ip-address.md#create-a-public-ip-address). Select **Save**, as shown in the picture that follows, and then close the box for the IP configuration.
+5. Select **Associate**, then select **Choose public IP address** to choose an existing public IP address. If you don't have any available public IP addresses listed, you need to create one. To learn how, see [Create a public IP address](virtual-network-public-ip-address.md#create-a-public-ip-address).
- ![Enable public IP address](./media/associate-public-ip-address-vm/enable-public-ip-address.png)
+ :::image type="content" source="./media/associate-public-ip-address-vm/choose-public-ip-address.png" alt-text="Screenshot showing how to select and associate an existing public I P.":::
+
+6. Select **Save**, as shown in the picture that follows, and then close the box for the IP configuration.
+
+ :::image type="content" source="./media/associate-public-ip-address-vm/enable-public-ip-address.png" alt-text="Screenshot showing the selected public I P.":::
> [!NOTE]
> The public IP addresses that appear are those that exist in the same region as the VM. If you have multiple public IP addresses created in the region, all will appear here. If any are grayed out, it's because the address is already associated to a different resource.
-6. View the public IP address assigned to the IP configuration, as shown in the picture that follows. It may take a few seconds for an IP address to appear.
+7. View the public IP address assigned to the IP configuration, as shown in the picture that follows. It may take a few seconds for an IP address to appear.
- ![View assigned public IP address](./media/associate-public-ip-address-vm/view-assigned-public-ip-address.png)
+ :::image type="content" source="./media/associate-public-ip-address-vm/view-assigned-public-ip-address.png" alt-text="Screenshot showing the newly assigned public I P.":::
> [!NOTE]
- > The address is assigned from a pool of addresses used in each Azure region. To see a list of address pools used in each region, see [Microsoft Azure Datacenter IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653). The address assigned can be any address in the pools used for the region. If you need the address to be assigned from a specific pool in the region, use a [Public IP address prefix](public-ip-address-prefix.md).
+ > The address is assigned from a pool of addresses used in each Azure region. To see a list of address pools used in each region, see [Azure IP Ranges and Service Tags](https://www.microsoft.com/en-us/download/details.aspx?id=56519). The address assigned can be any address in the pools used for the region. If you need the address to be assigned from a specific pool in the region, use a [Public IP address prefix](public-ip-address-prefix.md).
-7. [Allow network traffic to the VM](#allow-network-traffic-to-the-vm) with security rules in a network security group.
+8. [Allow network traffic to the VM](#allow-network-traffic-to-the-vm) with security rules in a network security group.
## Azure CLI

Install the [Azure CLI](/cli/azure/install-azure-cli?toc=%2fazure%2fvirtual-network%2ftoc.json), or use the Azure Cloud Shell. The Azure Cloud Shell is a free Bash shell that you can run directly within the Azure portal. It has the Azure CLI preinstalled and configured to use with your account. Select the **Try it** button in the CLI commands that follow. Selecting **Try it** invokes a Cloud Shell from which you can sign in to your Azure account.

1. If using the CLI locally in Bash, sign in to Azure with `az login`.
-2. A public IP address is associated to an IP configuration of a network interface attached to a VM. Use the [az network nic-ip-config update](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-update) command to associate a public IP address to an IP configuration. The following example associates an existing public IP address named *myVMPublicIP* to the IP configuration named *ipconfigmyVM* of an existing network interface named *myVMVMNic* that exists in a resource group named *myResourceGroup*.
+2. A public IP address is associated to an IP configuration of a network interface attached to a VM. Use the [az network nic ip-config update](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-update) command to associate a public IP address to an IP configuration. The following example associates an existing public IP address named *myPublicIP* to the IP configuration named *ipconfig1* of an existing network interface named *myVMNic* that exists in a resource group named *myResourceGroup*.
```azurecli-interactive
az network nic ip-config update \
- --name ipconfigmyVM \
- --nic-name myVMVMNic \
+ --name ipconfig1 \
+ --nic-name myVMNic \
--resource-group myResourceGroup \
- --public-ip-address myVMPublicIP
+ --public-ip-address myPublicIP
```
- - If you don't have an existing public IP address, use the [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) command to create one. For example, the following command creates a public IP address named *myVMPublicIP* in a resource group named *myResourceGroup*.
+ - If you don't have an existing public IP address, use the [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) command to create one. For example, the following command creates a public IP address named *myPublicIP* in a resource group named *myResourceGroup*.
```azurecli-interactive
- az network public-ip create --name myVMPublicIP --resource-group myResourceGroup
+ az network public-ip create --name myPublicIP --resource-group myResourceGroup
```

> [!NOTE]
- > The previous command creates a public IP address with default values for several settings that you may want to customize. To learn more about all public IP address settings, see [Create a public IP address](virtual-network-public-ip-address.md#create-a-public-ip-address). The address is assigned from a pool of public IP addresses used for each Azure region. To see a list of address pools used in each region, see [Microsoft Azure Datacenter IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653).
+ > The previous command creates a public IP address with default values for several settings that you may want to customize. To learn more about all public IP address settings, see [Create a public IP address](virtual-network-public-ip-address.md#create-a-public-ip-address). The address is assigned from a pool of public IP addresses used for each Azure region. To see a list of address pools used in each region, see [Azure IP Ranges and Service Tags](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
- If you don't know the name of a network interface attached to your VM, use the [az vm nic list](/cli/azure/vm/nic#az-vm-nic-list) command to view them. For example, the following command lists the names of the network interfaces attached to a VM named *myVM* in a resource group named *myResourceGroup*:
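     A minimal sketch of that command, using the names above:

     ```azurecli-interactive
     # List the network interfaces attached to myVM; the output includes each NIC ID.
     az vm nic list \
       --vm-name myVM \
       --resource-group myResourceGroup
     ```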
Install the [Azure CLI](/cli/azure/install-azure-cli?toc=%2fazure%2fvirtual-netw
The output includes one or more lines that are similar to the following example:

```
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myVMVMNic",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myVMNic",
```
- In the previous example, *myVMVMNic* is the name of the network interface.
+ In the previous example, *myVMNic* is the name of the network interface.
- - If you don't know the name of an IP configuration for a network interface, use the [az network nic ip-config list](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-list) command to retrieve them. For example, the following command lists the names of the IP configurations for a network interface named *myVMVMNic* in a resource group named *myResourceGroup*:
+ - If you don't know the name of an IP configuration for a network interface, use the [az network nic ip-config list](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-list) command to retrieve them. For example, the following command lists the names of the IP configurations for a network interface named *myVMNic* in a resource group named *myResourceGroup*:
```azurecli-interactive
- az network nic ip-config list --nic-name myVMVMNic --resource-group myResourceGroup --out table
+ az network nic ip-config list --nic-name myVMNic --resource-group myResourceGroup --out table
```

3. View the public IP address assigned to the IP configuration with the [az vm list-ip-addresses](/cli/azure/vm#az-vm-list-ip-addresses) command. The following example shows the IP addresses assigned to an existing VM named *myVM* in a resource group named *myResourceGroup*.
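   A minimal sketch of that command, using the names above:

   ```azurecli-interactive
   # Show the public and private IP addresses assigned to myVM in table form.
   az vm list-ip-addresses \
     --name myVM \
     --resource-group myResourceGroup \
     --out table
   ```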
Install the [Azure CLI](/cli/azure/install-azure-cli?toc=%2fazure%2fvirtual-netw
> [!NOTE]
- > The address is assigned from a pool of addresses used in each Azure region. To see a list of address pools used in each region, see [Microsoft Azure Datacenter IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653). The address assigned can be any address in the pools used for the region. If you need the address to be assigned from a specific pool in the region, use a [Public IP address prefix](public-ip-address-prefix.md).
+ > The address is assigned from a pool of addresses used in each Azure region. To see a list of address pools used in each region, see [Azure IP Ranges and Service Tags](https://www.microsoft.com/en-us/download/details.aspx?id=56519). The address assigned can be any address in the pools used for the region. If you need the address to be assigned from a specific pool in the region, use a [Public IP address prefix](public-ip-address-prefix.md).
4. [Allow network traffic to the VM](#allow-network-traffic-to-the-vm) with security rules in a network security group.
Install [PowerShell](/powershell/azure/install-az-ps), or use the Azure Cloud Sh
1. If using PowerShell locally, sign in to Azure with `Connect-AzAccount`.

2. A public IP address is associated to an IP configuration of a network interface attached to a VM. Use the [Get-AzVirtualNetwork](/powershell/module/Az.Network/Get-AzVirtualNetwork) and [Get-AzVirtualNetworkSubnetConfig](/powershell/module/Az.Network/Get-AzVirtualNetworkSubnetConfig) commands to get the virtual network and subnet that the network interface is in. Next, use the [Get-AzNetworkInterface](/powershell/module/Az.Network/Get-AzNetworkInterface) command to get a network interface and the [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) command to get an existing public IP address. Then use the [Set-AzNetworkInterfaceIpConfig](/powershell/module/Az.Network/Set-AzNetworkInterfaceIpConfig) command to associate the public IP address to the IP configuration and the [Set-AzNetworkInterface](/powershell/module/Az.Network/Set-AzNetworkInterface) command to write the new IP configuration to the network interface.
- The following example associates an existing public IP address named *myVMPublicIP* to the IP configuration named *ipconfigmyVM* of an existing network interface named *myVMVMNic* that exists in a subnet named *myVMSubnet* in a virtual network named *myVMVNet*. All resources are in a resource group named *myResourceGroup*.
+ The following example associates an existing public IP address named *myPublicIP* to the IP configuration named *ipconfig1* of an existing network interface named *myVMNic* that exists in a subnet named *mySubnet* in a virtual network named *myVNet*. All resources are in a resource group named *myResourceGroup*.
```azurepowershell-interactive
- $vnet = Get-AzVirtualNetwork -Name myVMVNet -ResourceGroupName myResourceGroup
- $subnet = Get-AzVirtualNetworkSubnetConfig -Name myVMSubnet -VirtualNetwork $vnet
- $nic = Get-AzNetworkInterface -Name myVMVMNic -ResourceGroupName myResourceGroup
- $pip = Get-AzPublicIpAddress -Name myVMPublicIP -ResourceGroupName myResourceGroup
- $nic | Set-AzNetworkInterfaceIpConfig -Name ipconfigmyVM -PublicIPAddress $pip -Subnet $subnet
+ $vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+ $subnet = Get-AzVirtualNetworkSubnetConfig -Name mySubnet -VirtualNetwork $vnet
+ $nic = Get-AzNetworkInterface -Name myVMNic -ResourceGroupName myResourceGroup
+ $pip = Get-AzPublicIpAddress -Name myPublicIP -ResourceGroupName myResourceGroup
+ $nic | Set-AzNetworkInterfaceIpConfig -Name ipconfig1 -PublicIPAddress $pip -Subnet $subnet
   $nic | Set-AzNetworkInterface
   ```
- - If you don't have an existing public IP address, use the [New-AzPublicIpAddress](/powershell/module/Az.Network/New-AzPublicIpAddress) command to create one. For example, the following command creates a *dynamic* public IP address named *myVMPublicIP* in a resource group named *myResourceGroup* in the *eastus* region.
+ - If you don't have an existing public IP address, use the [New-AzPublicIpAddress](/powershell/module/Az.Network/New-AzPublicIpAddress) command to create one. For example, the following command creates a *dynamic* public IP address named *myPublicIP* in a resource group named *myResourceGroup* in the *eastus* region.
```azurepowershell-interactive
- New-AzPublicIpAddress -Name myVMPublicIP -ResourceGroupName myResourceGroup -AllocationMethod Dynamic -Location eastus
+ New-AzPublicIpAddress -Name myPublicIP -ResourceGroupName myResourceGroup -AllocationMethod Dynamic -Location eastus
```

> [!NOTE]
- > The previous command creates a public IP address with default values for several settings that you may want to customize. To learn more about all public IP address settings, see [Create a public IP address](virtual-network-public-ip-address.md#create-a-public-ip-address). The address is assigned from a pool of public IP addresses used for each Azure region. To see a list of address pools used in each region, see [Microsoft Azure Datacenter IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653).
+ > The previous command creates a public IP address with default values for several settings that you may want to customize. To learn more about all public IP address settings, see [Create a public IP address](virtual-network-public-ip-address.md#create-a-public-ip-address). The address is assigned from a pool of public IP addresses used for each Azure region. To see a list of address pools used in each region, see [Azure IP Ranges and Service Tags](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
- If you don't know the name of a network interface attached to your VM, use the [Get-AzVM](/powershell/module/Az.Compute/Get-AzVM) command to view them. For example, the following command lists the names of the network interfaces attached to a VM named *myVM* in a resource group named *myResourceGroup*:
Install [PowerShell](/powershell/azure/install-az-ps), or use the Azure Cloud Sh
   ```azurepowershell-interactive
   $vm = Get-AzVM -Name myVM -ResourceGroupName myResourceGroup
   $vm.NetworkProfile
   ```
- The output includes one or more lines that are similar to the example that follows. In the example output, *myVMVMNic* is the name of the network interface.
+ The output includes one or more lines that are similar to the example that follows. In the example output, *myVMNic* is the name of the network interface.
```
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myVMVMNic",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myVMNic",
```
- - If you don't know the name of the virtual network or subnet that the network interface is in, use the `Get-AzNetworkInterface` command to view the information. For example, the following command gets the virtual network and subnet information for a network interface named *myVMVMNic* in a resource group named *myResourceGroup*:
+ - If you don't know the name of the virtual network or subnet that the network interface is in, use the `Get-AzNetworkInterface` command to view the information. For example, the following command gets the virtual network and subnet information for a network interface named *myVMNic* in a resource group named *myResourceGroup*:
```azurepowershell-interactive
- $nic = Get-AzNetworkInterface -Name myVMVMNic -ResourceGroupName myResourceGroup
+ $nic = Get-AzNetworkInterface -Name myVMNic -ResourceGroupName myResourceGroup
   $ipConfigs = $nic.IpConfigurations
   $ipConfigs.Subnet | Select Id
   ```
- The output includes one or more lines that are similar to the example that follows. In the example output, *myVMVNET* is the name of the virtual network and *myVMSubnet* is the name of the subnet.
+ The output includes one or more lines that are similar to the example that follows. In the example output, *myVNet* is the name of the virtual network and *mySubnet* is the name of the subnet.
```
- "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVMVNET/subnets/myVMSubnet",
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/mySubnet",
```
- - If you don't know the name of an IP configuration for a network interface, use the [Get-AzNetworkInterface](/powershell/module/Az.Network/Get-AzNetworkInterface) command to retrieve them. For example, the following command lists the names of the IP configurations for a network interface named *myVMVMNic* in a resource group named *myResourceGroup*:
+ - If you don't know the name of an IP configuration for a network interface, use the [Get-AzNetworkInterface](/powershell/module/Az.Network/Get-AzNetworkInterface) command to retrieve them. For example, the following command lists the names of the IP configurations for a network interface named *myVMNic* in a resource group named *myResourceGroup*:
```azurepowershell-interactive
- $nic = Get-AzNetworkInterface -Name myVMVMNic -ResourceGroupName myResourceGroup
+ $nic = Get-AzNetworkInterface -Name myVMNic -ResourceGroupName myResourceGroup
   $nic.IPConfigurations
   ```
- The output includes one or more lines that are similar to the example that follows. In the example output, *ipconfigmyVM* is the name of an IP configuration.
+ The output includes one or more lines that are similar to the example that follows. In the example output, *ipconfig1* is the name of an IP configuration.
```
- Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myVMVMNic/ipConfigurations/ipconfigmyVM
+ Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myVMNic/ipConfigurations/ipconfig1
```
-3. View the public IP address assigned to the IP configuration with the [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) command. The following example shows the address assigned to a public IP address named *myVMPublicIP* in a resource group named *myResourceGroup*.
+3. View the public IP address assigned to the IP configuration with the [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) command. The following example shows the address assigned to a public IP address named *myPublicIP* in a resource group named *myResourceGroup*.
```azurepowershell-interactive
- Get-AzPublicIpAddress -Name myVMPublicIP -ResourceGroupName myResourceGroup | Select IpAddress
+ Get-AzPublicIpAddress -Name myPublicIP -ResourceGroupName myResourceGroup | Select IpAddress
   ```

   If you don't know the name of the public IP address assigned to an IP configuration, run the following commands to get it:

   ```azurepowershell-interactive
- $nic = Get-AzNetworkInterface -Name myVMVMNic -ResourceGroupName myResourceGroup
+ $nic = Get-AzNetworkInterface -Name myVMNic -ResourceGroupName myResourceGroup
   $nic.IPConfigurations
   $address = $nic.IPConfigurations.PublicIpAddress
   $address | Select Id
   ```
- The output includes one or more lines that are similar to the example that follows. In the example output, *myVMPublicIP* is the name of the public IP address assigned to the IP configuration.
+ The output includes one or more lines that are similar to the example that follows. In the example output, *myPublicIP* is the name of the public IP address assigned to the IP configuration.
```
- "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myVMPublicIP"
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP"
```

> [!NOTE]
- > The address is assigned from a pool of addresses used in each Azure region. To see a list of address pools used in each region, see [Microsoft Azure Datacenter IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653). The address assigned can be any address in the pools used for the region. If you need the address to be assigned from a specific pool in the region, use a [Public IP address prefix](public-ip-address-prefix.md).
+ > The address is assigned from a pool of addresses used in each Azure region. To see a list of address pools used in each region, see [Azure IP Ranges and Service Tags](https://www.microsoft.com/en-us/download/details.aspx?id=56519). The address assigned can be any address in the pools used for the region. If you need the address to be assigned from a specific pool in the region, use a [Public IP address prefix](public-ip-address-prefix.md).
4. [Allow network traffic to the VM](#allow-network-traffic-to-the-vm) with security rules in a network security group.

## Allow network traffic to the VM
-Before you can connect to the public IP address from the internet, ensure that you have the necessary ports open in any network security group that you might have associated to the network interface, the subnet the network interface is in, or both. Though security groups filter traffic to the private IP address of the network interface, once inbound internet traffic arrives at the public IP address, Azure translates the public address to the private IP address, so if a network security group prevents the traffic flow, the communication with the public IP address fails. You can view the effective security rules for a network interface and its subnet using the [Portal](../../virtual-network/diagnose-network-traffic-filter-problem.md#diagnose-using-azure-portal), [CLI](../../virtual-network/diagnose-network-traffic-filter-problem.md#diagnose-using-azure-cli), or [PowerShell](../../virtual-network/diagnose-network-traffic-filter-problem.md#diagnose-using-powershell).
+Before you can connect to the public IP address from the internet, ensure that the necessary ports are open in any network security group that you might have associated to the network interface, the subnet of the network interface, or both. Although security groups filter traffic to the private IP address of the network interface, once inbound internet traffic arrives at the public IP address, Azure translates the public address to the private IP address. If a network security group blocks the traffic flow, communication with the public IP address fails. You can view the effective security rules for a network interface and its subnet using the [Portal](../../virtual-network/diagnose-network-traffic-filter-problem.md#diagnose-using-azure-portal), [CLI](../../virtual-network/diagnose-network-traffic-filter-problem.md#diagnose-using-azure-cli), or [PowerShell](../../virtual-network/diagnose-network-traffic-filter-problem.md#diagnose-using-powershell).
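As an illustration, here's a minimal sketch that opens inbound RDP (TCP port 3389) through an existing network security group; the group name *myNSG*, the rule name, and the priority are assumptions for illustration only:

```azurecli-interactive
# Allow inbound TCP 3389 (RDP) through an existing network security group.
# myNSG is a placeholder; substitute the name of your own security group.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name Allow-RDP \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 3389
```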
## Next steps
-Allow inbound internet traffic to your VM with a network security group. To learn how to create a network security group, see [Work with network security groups](../../virtual-network/manage-network-security-group.md#work-with-network-security-groups). To learn more about network security groups, see [Security groups](../../virtual-network/network-security-groups-overview.md).
+In this article, you learned how to associate a public IP address to a VM using the Azure portal, the Azure CLI, or Azure PowerShell.
+
+Use a [network security group](../../virtual-network/network-security-groups-overview.md) to allow inbound internet traffic to your VM. To learn how to create a network security group, see [Work with network security groups](../../virtual-network/manage-network-security-group.md#work-with-network-security-groups).
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
Title: Upgrading a basic public IP address to standard SKU - Guidance description: Overview of upgrade options and guidance for migrating basic public IP to standard public IP for future basic public IP address retirement-+ -+ Last updated 09/19/2022 #customer-intent: As a cloud engineer with Basic public IP services, I need guidance and direction on migrating my workloads from Basic to Standard SKUs
virtual-network Virtual Networks Static Private Ip Arm Pportal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md
Title: Create a VM with a static private IP address - Azure portal
+ Title: 'Create a VM with a static private IP address - Azure portal'
description: Learn how to create a virtual machine with a static private IP address using the Azure portal. Previously updated : 10/01/2021 Last updated : 10/27/2022
Use the following steps to create a virtual machine, virtual network, and subnet
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+2. In the search box at the top of the portal, enter *Virtual machine*. Select **Virtual machines** in the search results.
-3. Select **+ Create**, then **+ Virtual machine** in **Virtual machines**.
+3. Select **+ Create**, then **Azure Virtual machine**.
4. In **Create a virtual machine**, enter or select the following information:
Use the following steps to create a virtual machine, virtual network, and subnet
| - | -- |
| **Project details** |  |
| Subscription | Select your subscription. |
- | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
+ | Resource group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. |
| **Instance details** | |
- | Virtual machine name | Enter **myVM**. |
+ | Virtual machine name | Enter *myVM*. |
| Region | Select **(US) East US 2**. |
| Availability options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
| Image | Select **Windows Server 2019 Datacenter - Gen2**. |
| Azure Spot instance | Leave unchecked. |
| Size | Select a size. |
Use the following steps to create a virtual machine, virtual network, and subnet
:::image type="content" source="./media/virtual-networks-static-private-ip-arm-pportal/create-vm.png" alt-text="Screenshot of create virtual machine."::: > [!WARNING]
- > Portal 3389 is selected, to enable remote access to the Windows Server virtual machine from the internet. Opening port 3389 to the internet is not recommended to manage production workloads. </br> For secure access to Azure virtual machines, see **[What is Azure Bastion?](../../bastion/bastion-overview.md)**
+ > Port 3389 is selected to enable remote access to the Windows Server virtual machine from the internet. Opening port 3389 to the internet is not recommended to manage production workloads. </br> For secure access to Azure virtual machines, see **[What is Azure Bastion?](../../bastion/bastion-overview.md)**
3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
In this section, you'll change the private IP address from **dynamic** to **stat
:::image type="content" source="./media/virtual-networks-static-private-ip-arm-pportal/select-ip-configuration.png" alt-text="Screenshot of select ip configuration.":::
-7. Select **Static** in **Assignment**. Select **Save**.
+7. Select **Static** in **Assignment**. Change the private IP address if you want a different one, and then select **Save**.
+ > [!WARNING]
+ > If you change the private IP address, the VM associated with the network interface will be restarted to utilize the new IP address.
+
:::image type="content" source="./media/virtual-networks-static-private-ip-arm-pportal/select-static-assignment.png" alt-text="Screenshot of select static assignment.":::
- > [!NOTE]
- > If you notice after selecting **Save** that the assignment is still set to **Dynamic**, the IP address you typed is already in use. Try another IP address.
- To change the IP address back to dynamic, set the assignment for your private IP address to **Dynamic**, and then select **Save**.

> [!WARNING]
To change the IP address back to dynamic set the assignment for your private IP
When no longer needed, delete the resource group and all of the resources it contains:
-1. Enter **myResourceGroup** in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
+1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
2. Select **Delete resource group**.
-3. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
## Next steps
virtual-network Virtual Networks Static Private Ip Arm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-ps.md
Title: Create a VM with a static private IP address - Azure PowerShell
+ Title: 'Create a VM with a static private IP address - Azure PowerShell'
description: Learn how to create a virtual machine with a static private IP address using Azure PowerShell. Previously updated : 10/01/2021 Last updated : 10/27/2022
An Azure resource group is a logical container into which Azure resources are de
Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) named **myResourceGroup** in the **eastus2** location.

```azurepowershell-interactive
+## Create resource group. ##
$rg = @{
    Name = 'myResourceGroup'
    Location = 'eastus2'
}
New-AzResourceGroup @rg
```
$nic | Set-AzNetworkInterface
> [!WARNING]
-> Though you can add private IP address settings to the operating system, we recommend not doing so until after reading [Add a private IP address to an operating system](virtual-network-network-interface-addresses.md#private).
+> From within the operating system of a VM, you shouldn't statically assign the *private* IP that's assigned to the Azure VM. Only do static assignment of a private IP when it's necessary, such as when [assigning many IP addresses to VMs](virtual-network-multiple-ip-addresses-portal.md).
+>
+>If you manually set the private IP address within the operating system, make sure it matches the private IP address assigned to the Azure [network interface](virtual-network-network-interface-addresses.md#change-ip-address-settings). Otherwise, you can lose connectivity to the VM. Learn more about [private IP address](virtual-network-network-interface-addresses.md#private) settings.
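To confirm that the addresses match, you can read the private IP address Azure assigned to the network interface. A minimal sketch, assuming a network interface named *myNIC* in *myResourceGroup* (both placeholder names):

```azurepowershell-interactive
# Show the private IP address and allocation method assigned to the NIC.
# myNIC and myResourceGroup are placeholder names.
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
$nic.IpConfigurations | Select-Object Name, PrivateIpAddress, PrivateIpAllocationMethod
```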
## Clean up resources
Remove-AzResourceGroup -Name myResourceGroup -Force
- Learn more about [public IP addresses](public-ip-addresses.md#public-ip-addresses) in Azure.
- Learn more about all [public IP address settings](virtual-network-public-ip-address.md#create-a-public-ip-address).
- Learn more about [private IP addresses](private-ip-addresses.md) and assigning a [static private IP address](virtual-network-network-interface-addresses.md#add-ip-addresses) to an Azure virtual machine.
-- Learn more about creating [Linux](../../virtual-machines/windows/tutorial-manage-vm.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Windows](../../virtual-machines/windows/tutorial-manage-vm.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machines.
+- Learn more about creating [Linux](../../virtual-machines/linux/tutorial-manage-vm.md) and [Windows](../../virtual-machines/windows/tutorial-manage-vm.md) virtual machines.
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
Title: Azure Kubernetes network policies | Microsoft Docs
description: Learn about Kubernetes network policies to secure your Kubernetes cluster. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 9/25/2018-+
virtual-network Manage Network Security Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-network-security-group.md
description: Learn where to find information about security rules and how to create, change, or delete a network security group. documentationcenter: na-+ na Last updated 03/13/2020-+ # Create, change, or delete a network security group
virtual-network Manage Route Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-route-table.md
description: Learn where to find information about virtual network traffic routing, and how to create, change, or delete a route table. documentationcenter: na-+ na Last updated 03/19/2020-+ # Create, change, or delete a route table
virtual-network Manage Subnet Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-subnet-delegation.md
description: Learn how to add or remove a delegated subnet for a service in Azure. documentationcenter: na-+ na Last updated 11/06/2019-+ ms.devlang: azurecli
virtual-network Manage Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-virtual-network.md
description: Create and delete a virtual network and change settings, like DNS servers and IP address spaces, for an existing virtual network. documentationcenter: na-+ na Last updated 01/10/2019-+ # Create, change, or delete a virtual network
virtual-network Monitor Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/monitor-virtual-network-reference.md
Title: Monitoring Azure virtual network data reference description: Important reference material needed when you monitor Azure virtual network -+ -+ Last updated 06/29/2021
virtual-network Monitor Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/monitor-virtual-network.md
Title: Monitoring Azure virtual networks description: Start here to learn how to monitor Azure virtual networks --++
virtual-network Network Security Group How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-group-how-it-works.md
description: Learn how network security groups help you filter network traffic between Azure resources. documentationcenter: na-+ na Last updated 08/24/2020-+
virtual-network Network Security Groups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-groups-overview.md
description: Learn about network security groups. Network security groups help you filter network traffic between Azure resources. documentationcenter: na-+ na Last updated 09/08/2020-+
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network
description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 09/12/2022 --++
virtual-network Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/powershell-samples.md
Title: Azure PowerShell samples for virtual network
description: Learn about Azure PowerShell samples for managing virtual networks, including a sample for creating a virtual network for multi-tier applications. documentationcenter: virtual-network-+ editor: '' tags:
Last updated 07/15/2019-+ # Azure PowerShell samples for virtual network
virtual-network Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-bicep.md
Title: 'Quickstart: Create a virtual network using Bicep'
description: Learn how to use Bicep to create an Azure virtual network. -+ Last updated 06/24/2022-+
virtual-network Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-cli.md
Title: Create a virtual network - quickstart - Azure CLI description: In this quickstart, learn to create a virtual network using the Azure CLI. A virtual network lets Azure resources communicate with each other and with the internet.-+ Last updated 04/13/2022-+ #Customer intent: I want to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
virtual-network Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-portal.md
Title: 'Quickstart: Create a virtual network - Azure portal' description: In this quickstart, learn how to create a virtual network using the Azure portal.--++ Last updated 06/20/2022
virtual-network Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-powershell.md
Title: Create a virtual network - quickstart - Azure PowerShell description: In this quickstart, you create a virtual network using Azure PowerShell. A virtual network lets Azure resources communicate with each other and with the internet.-+ Last updated 04/13/2022-+ #Customer intent: I want to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
virtual-network Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-template.md
Title: 'Quickstart: Create a virtual network using a Resource Manager template'
description: Learn how to use a Resource Manager template to create an Azure virtual network. -+ Last updated 06/09/2021-+
virtual-network Virtual Network Cli Sample Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-filter-network-traffic.md
Title: Filter VM network traffic - Azure CLI script sample
description: Filter inbound and outbound virtual machine (VM) network traffic using an Azure CLI script sample. documentationcenter: virtual-network-+ ms.devlang: azurecli
Last updated 02/03/2022-+
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack Standard Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack-standard-load-balancer.md
description: Learn how to configure IPv6 endpoints in a virtual network script sample using Standard Load Balancer. documentationcenter: na-+ Last updated 02/03/2022-+
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack.md
description: Use an Azure CLI script sample to configure IPv6 endpoints and deploy a dual stack (IPv4 + IPv6) application in Azure. documentationcenter: na-+ Last updated 02/03/2022-+
virtual-network Virtual Network Cli Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-multi-tier-application.md
Title: Create a VNet for multi-tier applications - Azure CLI script sample
description: Create a virtual network for multi-tier applications - Azure CLI script sample. documentationcenter: virtual-network-+ ms.devlang: azurecli
Last updated 02/03/2022-+
virtual-network Virtual Network Cli Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-peer-two-virtual-networks.md
Title: Peer two virtual networks - Azure CLI script sample
description: Create and connect two virtual networks in the same region through the Azure network by using an Azure CLI script sample. documentationcenter: virtual-network-+ ms.devlang: azurecli
Last updated 02/03/2022-+
virtual-network Virtual Network Cli Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-route-traffic-through-nva.md
Title: Route traffic via network virtual appliance - Azure CLI script sample
description: Route traffic through a firewall network virtual appliance - Azure CLI script sample. documentationcenter: virtual-network-+ editor: '' tags:
Last updated 02/03/2022-+
virtual-network Virtual Network Powershell Sample Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-filter-network-traffic.md
Title: Filter VM network traffic - Azure PowerShell script sample
description: Filter inbound and outbound VM network traffic - Azure PowerShell script sample. documentationcenter: virtual-network-+ editor: '' tags:
Last updated 03/20/2018-+
virtual-network Virtual Network Powershell Sample Ipv6 Dual Stack Standard Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-ipv6-dual-stack-standard-load-balancer.md
description: Learn about configuring an IPv6 frontend in a virtual network script sample with Standard Load Balancer. documentationcenter: na-+ Last updated 07/15/2019-+
virtual-network Virtual Network Powershell Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-ipv6-dual-stack.md
description: Configure IPv6 endpoints in virtual network with an Azure PowerShell script and find links to command-specific documentation to help with the PowerShell sample. documentationcenter: na-+ Last updated 07/15/2019-+
virtual-network Virtual Network Powershell Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-multi-tier-application.md
Title: Create a VNet for multi-tier applications - Azure PowerShell script sampl
description: Create a virtual network for multi-tier applications - Azure PowerShell script sample. documentationcenter: virtual-network-+ editor: '' tags:
Last updated 12/13/2018-+
virtual-network Virtual Network Powershell Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-peer-two-virtual-networks.md
Title: Peer two virtual networks - Azure PowerShell script sample
description: Create and connect two virtual networks in the same region. Use the Azure script for two peer virtual networks to connect the networks through the Azure network. documentationcenter: virtual-network-+ ms.devlang: powershell
Last updated 03/20/2018-+
virtual-network Virtual Network Powershell Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-route-traffic-through-nva.md
Title: Route traffic via NVA - Azure PowerShell script sample
description: Azure PowerShell script sample - Route traffic through a firewall NVA. documentationcenter: virtual-network-+ ms.devlang: powershell
Last updated 03/20/2018-+
virtual-network Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network
description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 10/12/2022 --++
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
description: Learn about service tags. Service tags help minimize the complexity of security rule creation. documentationcenter: na-+ na Last updated 10/11/2021-+
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
Title: DPDK in an Azure Linux VM | Microsoft Docs
description: Learn the benefits of the Data Plane Development Kit (DPDK) and how to set up the DPDK on a Linux virtual machine. documentationcenter: na-+ editor: ''
na Last updated 05/12/2020-+ # Set up DPDK in a Linux virtual machine
virtual-network Subnet Delegation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/subnet-delegation-overview.md
Title: What is subnet delegation in Azure virtual network?
description: Learn about subnet delegation in Azure virtual network documentationcenter: na-+ na Last updated 12/15/2020-+ # What is subnet delegation?
virtual-network Subnet Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/subnet-extension.md
Title: Subnet extension in Azure | Microsoft Docs
description: Learn about subnet extension in Azure. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 10/31/2019-+ # Subnet extension
virtual-network Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/template-samples.md
Title: Azure Resource Manager template samples for virtual network | Microsoft D
description: Learn about different Azure Resource Manager templates available for you to deploy Azure virtual networks with. documentationcenter: virtual-network-+ editor: '' tags:
Last updated 04/22/2019-+ # Azure Resource Manager template samples for virtual network
virtual-network Troubleshoot Outbound Smtp Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/troubleshoot-outbound-smtp-connectivity.md
Title: Troubleshoot outbound SMTP connectivity in Azure | Microsoft Docs description: Learn the recommended method for sending email and how to troubleshoot problems with outbound SMTP connectivity in Azure. -+ editor: ''
na Last updated 04/28/2021-+
virtual-network Troubleshoot Vm Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/troubleshoot-vm-connectivity.md
Title: Troubleshoot Azure VM connectivity problems description: Learn how to diagnose and resolve connectivity problems that affect Azure virtual machines (VMs).--++ audience: ITPro
virtual-network Tutorial Connect Virtual Networks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-cli.md
Title: Connect virtual networks with VNet peering - Azure CLI
description: In this article, you learn how to connect virtual networks with virtual network peering, using the Azure CLI. documentationcenter: virtual-network-+ tags: azure-resource-manager # Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network.
virtual-network Last updated 03/13/2018-+
virtual-network Tutorial Connect Virtual Networks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-portal.md
Title: 'Tutorial: Connect virtual networks with VNet peering - Azure portal'
description: In this tutorial, you learn how to connect virtual networks with virtual network peering using the Azure portal. documentationcenter: virtual-network-+ virtual-network Last updated 06/24/2022-+ # Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network.
virtual-network Tutorial Connect Virtual Networks Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-powershell.md
Title: Connect virtual networks with VNet peering - Azure PowerShell
description: In this article, you learn how to connect virtual networks with virtual network peering, using Azure PowerShell. documentationcenter: virtual-network-+ tags: azure-resource-manager # Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network.
virtual-network Last updated 03/13/2018-+
virtual-network Tutorial Create Route Table Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-cli.md
Title: Route network traffic - Azure CLI | Microsoft Docs
description: In this article, learn how to route network traffic with a route table using the Azure CLI. documentationcenter: virtual-network-+ editor: '' tags: azure-resource-manager
virtual-network Last updated 04/20/2022-+
virtual-network Tutorial Create Route Table Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-portal.md
description: In this tutorial, learn how to route network traffic with a route table using the Azure portal. documentationcenter: virtual-network-+ virtual-network Last updated 06/27/2022-+ # Customer intent: I want to route traffic from one subnet, to a different subnet, through a network virtual appliance.
virtual-network Tutorial Create Route Table Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-powershell.md
Title: Route network traffic Azure PowerShell | Microsoft Docs
description: In this article, learn how to route network traffic with a route table using PowerShell. documentationcenter: virtual-network-+ editor: '' tags: azure-resource-manager
virtual-network Last updated 03/13/2018-+
virtual-network Tutorial Filter Network Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-cli.md
Title: Filter network traffic - Azure CLI | Microsoft Docs
description: In this article, you learn how to filter network traffic to a subnet, with a network security group, using the Azure CLI. documentationcenter: virtual-network-+ editor: '' tags: azure-resource-manager
virtual-network Last updated 03/30/2018-+
virtual-network Tutorial Filter Network Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-powershell.md
Title: Filter network traffic - Azure PowerShell | Microsoft Docs
description: In this article, you learn how to filter network traffic to a subnet, with a network security group, using PowerShell. documentationcenter: virtual-network-+ editor: '' tags: azure-resource-manager
virtual-network Last updated 03/30/2018-+
virtual-network Tutorial Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic.md
Title: 'Tutorial: Filter network traffic with a network security group (NSG) - A
description: In this tutorial, you learn how to filter network traffic to a subnet, with a network security group (NSG), using the Azure portal. -+ Last updated 06/28/2022-+ # Customer intent: I want to filter network traffic to virtual machines that perform similar functions, such as web servers.
virtual-network Tutorial Restrict Network Access To Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-cli.md
Title: Restrict network access to PaaS resources - Azure CLI
description: In this article, you learn how to limit and restrict network access to Azure resources, such as Azure Storage and Azure SQL Database, with virtual network service endpoints using the Azure CLI. documentationcenter: virtual-network-+ editor: '' tags: azure-resource-manager
virtual-network Last updated 03/14/2018-+
virtual-network Tutorial Restrict Network Access To Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-powershell.md
Title: Restrict network access to PaaS resources - Azure PowerShell
description: In this article, you learn how to limit and restrict network access to Azure resources, such as Azure Storage and Azure SQL Database, with virtual network service endpoints using Azure PowerShell. documentationcenter: virtual-network-+ editor: '' tags: azure-resource-manager
na Last updated 03/14/2018-+
virtual-network Tutorial Restrict Network Access To Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources.md
Title: 'Tutorial: Restrict access to PaaS resources with service endpoints - Azure portal' description: In this tutorial, you learn how to limit and restrict network access to Azure resources, such as an Azure Storage, with virtual network service endpoints using the Azure portal. documentationcenter: virtual-network--++ tags: azure-resource-manager
virtual-network Tutorial Tap Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-tap-virtual-network-cli.md
Title: Create, change, or delete a VNet TAP - Azure CLI
description: Learn how to create, change, or delete a virtual network TAP using the Azure CLI. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 03/18/2018-+
virtual-network Update Virtual Network Peering Address Space https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/update-virtual-network-peering-address-space.md
Title: Updating the address space for a peered virtual network description: Learn about adding or deleting the address space for a peered virtual network without downtime.--++ Last updated 07/10/2022
virtual-network Virtual Machine Network Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-machine-network-throughput.md
Title: Azure virtual machine network throughput | Microsoft Docs
description: Learn about Azure virtual machine network throughput, including how bandwidth is allocated to a virtual machine. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 4/26/2019-+
virtual-network Virtual Network Bandwidth Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-bandwidth-testing.md
description: Use NTTTCP to target the network for testing and minimize the use of other resources that could impact performance. documentationcenter: na-+ na Last updated 10/06/2020-+
virtual-network Virtual Network Configure Vnet Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-configure-vnet-connections.md
Title: Configure and validate virtual network or VPN connections
description: Step-by-step guidance to configure and validate various Azure VPN and virtual network deployments documentationcenter: na-+ editor: ''
na Last updated 08/28/2019-+
virtual-network Virtual Network Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-disaster-recovery-guidance.md
Title: Virtual network business continuity | Microsoft Docs
description: Learn what to do in the event of an Azure service disruption impacting Azure Virtual Networks. documentationcenter: ''-+ editor: ''
na Last updated 05/16/2016-+
virtual-network Virtual Network For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-for-azure-services.md
description: Learn how to deploy dedicated Azure services into a virtual network and about the capabilities those deployments provide. documentationcenter: na-+ na Last updated 04/06/2020-+
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
Title: Create, change, or delete an Azure virtual network peering | Microsoft Do
description: Create, change, or delete a virtual network peering. With virtual network peering, you connect virtual networks in the same region and across regions. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 09/01/2021-+ # Create, change, or delete a virtual network peering
virtual-network Virtual Network Manage Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-subnet.md
description: Learn where to find information about virtual networks and how to add, change, or delete a virtual network subnet in Azure. documentationcenter: na-+ na Last updated 06/27/2022-+ # Add, change, or delete a virtual network subnet
virtual-network Virtual Network Network Interface Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface-vm.md
Title: Add network interfaces to or remove from Azure VMs
description: Learn how to add network interfaces to or remove network interfaces from virtual machines. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 03/13/2020-+ # Add network interfaces to or remove network interfaces from virtual machines
virtual-network Virtual Network Nsg Manage Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-nsg-manage-log.md
Title: Diagnostic resource logging for a network security group
description: Learn how to enable event and rule counter diagnostic resource logs for an Azure network security group. -+ Last updated 06/04/2018-+ ms.devlang: azurecli
virtual-network Virtual Network Optimize Network Bandwidth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-optimize-network-bandwidth.md
Title: Optimize VM network throughput | Microsoft Docs
description: Optimize network throughput for Microsoft Azure Windows and Linux VMs, including major distributions such as Ubuntu, CentOS, and Red Hat. documentationcenter: na-+ editor: ''
na Last updated 10/06/2020-+
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
description: Learn about virtual network peering in Azure, including how it enables you to connect networks in Azure Virtual Network. documentationcenter: na-+ Last updated 07/10/2022-+ #customer intent: As a cloud architect, I need to know how to use virtual network peering for connecting virtual networks. This will allow me to design connectivity correctly, understand future scalability options, and limitations. # Virtual network peering
virtual-network Virtual Network Scenario Udr Gw Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-scenario-udr-gw-nva.md
Title: Hybrid connection with 2-tier application | Microsoft Docs
description: Learn how to deploy virtual appliances and UDR to create a multi-tier application environment in Azure documentationcenter: na-+
na Last updated 05/05/2016-+ # Virtual appliance scenario
virtual-network Virtual Network Service Endpoint Policies Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-cli.md
Title: Restrict data exfiltration to Azure Storage - Azure CLI
description: In this article, you learn how to limit and restrict virtual network data exfiltration to Azure Storage resources with virtual network service endpoint policies using the Azure CLI. documentationcenter: virtual-network-+ editor: '' tags: azure-resource-manager
virtual-network Last updated 02/03/2020-+
virtual-network Virtual Network Service Endpoint Policies Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-overview.md
Title: Azure virtual network service endpoint policies | Microsoft Docs
description: Learn how to filter Virtual Network traffic to Azure service resources using Service Endpoint Policies documentationcenter: na-+ na Last updated 02/21/2020-+ # Virtual network service endpoint policies for Azure Storage
virtual-network Virtual Network Service Endpoint Policies Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-portal.md
description: In this article, learn how to set up and associate service endpoint policies using the Azure portal. documentationcenter: virtual-network-+ virtual-network Last updated 02/21/2020-+ # Create, change, or delete service endpoint policy using the Azure portal
virtual-network Virtual Network Service Endpoint Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-powershell.md
Title: Restrict data exfiltration to Azure Storage - Azure PowerShell
description: In this article, you learn how to limit and restrict virtual network data exfiltration to Azure Storage resources with virtual network service endpoint policies using Azure PowerShell. documentationcenter: virtual-network-+ editor: '' tags: azure-resource-manager
na Last updated 02/03/2020-+
virtual-network Virtual Network Service Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoints-overview.md
description: Learn how to enable direct access to Azure resources from a virtual network using service endpoints. documentationcenter: na-+ na Last updated 10/20/2022-+
virtual-network Virtual Network Tap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tap-overview.md
Title: Azure virtual network TAP overview | Microsoft Docs
description: Learn about virtual network TAP. Virtual network TAP provides you with a deep copy of virtual machine network traffic that can be streamed to a packet collector. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 04/14/2019-+
virtual-network Virtual Network Tcpip Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tcpip-performance-tuning.md
Title: TCP/IP performance tuning for Azure VMs | Microsoft Docs
description: Learn various common TCP/IP performance tuning techniques and their relationship to Azure VMs. documentationcenter: na-+ editor: ''
na Last updated 04/02/2019-+
virtual-network Virtual Network Test Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-test-latency.md
Title: Test Azure virtual machine network latency in an Azure virtual network |
description: Learn how to test network latency between Azure virtual machines on a virtual network documentationcenter: na-+ editor: ''
na Last updated 10/29/2019-+
virtual-network Virtual Network Troubleshoot Cannot Delete Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-cannot-delete-vnet.md
Title: Cannot delete a virtual network in Azure | Microsoft Docs
description: Learn how to troubleshoot the issue in which you cannot delete a virtual network in Azure. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 10/31/2018-+
virtual-network Virtual Network Troubleshoot Connectivity Problem Between Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-connectivity-problem-between-vms.md
Title: Troubleshooting connectivity problems between Azure VMs | Microsoft Docs
description: Learn how to troubleshoot and resolve the connectivity problems that you might experience between Azure VMs. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 10/30/2018-+ # Troubleshooting connectivity problems between Azure VMs
virtual-network Virtual Network Troubleshoot Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-nva.md
Title: Troubleshooting network virtual appliance issues in Azure | Microsoft Doc
description: Troubleshoot Network Virtual Appliance (NVA) issues in Azure and validate basic Azure Platform requirements for NVA configurations. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 10/26/2018-+ # Network virtual appliance issues in Azure
virtual-network Virtual Network Troubleshoot Peering Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-peering-issues.md
Title: Troubleshoot virtual network peering issues
description: Steps to help resolve most virtual network peering issues. documentationcenter: na-+ editor: '' tags: virtual-network
na Last updated 08/28/2019-+
virtual-network Virtual Network Vnet Plan Design Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-vnet-plan-design-arm.md
Title: Plan Azure virtual networks | Microsoft Docs
description: Learn how to plan for virtual networks based on your isolation, connectivity, and location requirements. documentationcenter: na-+ na Last updated 04/08/2020-+ # Plan virtual networks
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
Title: Azure Virtual Network FAQ description: Answers to the most frequently asked questions about Microsoft Azure virtual networks.-+ Last updated 06/26/2020-+ # Azure Virtual Network frequently asked questions (FAQ)
virtual-network Virtual Networks Name Resolution Ddns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-ddns.md
Title: Using dynamic DNS to register hostnames in Azure | Microsoft Docs
description: Learn how to set up dynamic DNS to register hostnames in your own DNS servers. documentationcenter: na-+ editor: ''
na Last updated 02/23/2017-+ # Use dynamic DNS to register hostnames in your own DNS server
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
na Last updated 09/22/2022-+
virtual-network Virtual Networks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-overview.md
Title: Azure Virtual Network
description: Learn about Azure Virtual Network concepts and features, including address space, subnets, regions, and subscriptions. documentationcenter: na-+ # Customer intent: As someone with a basic network background who is new to Azure, I want to understand the capabilities of Azure Virtual Network, so that my Azure resources, such as VMs, can securely communicate with each other, the internet, and my on-premises resources. na Last updated 12/03/2020-+ # What is Azure Virtual Network?
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
description: Learn how Azure routes virtual network traffic, and how you can customize Azure's routing. documentationcenter: na-+ na Last updated 05/03/2022-+
virtual-network Virtual Networks Viewing And Modifying Hostnames https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-viewing-and-modifying-hostnames.md
Title: Viewing and Modifying Hostnames | Microsoft Docs
description: How to view and change hostnames for Azure virtual machines, web and worker roles for name resolution documentationcenter: na-+
na Last updated 05/14/2021-+ # Viewing and modifying hostnames
virtual-network Vnet Integration For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/vnet-integration-for-azure-services.md
description: This article describes different methods of integrating an Azure service with a virtual network, which enables you to securely access the Azure service. documentationcenter: na-+ Last updated 12/01/2020-+
virtual-network What Is Ip Address 168 63 129 16 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/what-is-ip-address-168-63-129-16.md
Title: What is IP address 168.63.129.16? | Microsoft Docs
description: Learn about IP address 168.63.129.16, specifically that it's used to facilitate a communication channel to Azure platform resources. documentationcenter: na-+ editor: v-jesits tags: azure-resource-manager
na Last updated 05/15/2019-+
virtual-wan How To Virtual Hub Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing-powershell.md
+
+ Title: 'How to configure virtual hub routing: Azure PowerShell'
+
+description: Learn how to configure Virtual WAN virtual hub routing using Azure PowerShell.
+++++ Last updated : 10/26/2022+++
+# How to configure virtual hub routing - Azure PowerShell
+
+A virtual hub can contain multiple gateways such as a site-to-site VPN gateway, ExpressRoute gateway, point-to-site gateway, and Azure Firewall. The routing capabilities in the virtual hub are provided by a router that manages all routing, including transit routing, between the gateways using Border Gateway Protocol (BGP). The virtual hub router also provides transit connectivity between virtual networks that connect to a virtual hub and can support up to an aggregate throughput of 50 Gbps. These routing capabilities apply to customers using **Standard** Virtual WANs. For more information, see [About virtual hub routing](about-virtual-hub-routing.md).
+
+This article helps you configure virtual hub routing using Azure PowerShell. You can also configure virtual hub routing using the [Azure portal steps](how-to-virtual-hub-routing.md).
+
+## Create a route table
+
+1. Get the virtual hub details to create a route table.
+
+ ```azurepowershell-interactive
+ $virtualhub = Get-AzVirtualHub -ResourceGroupName "[resource group name]" -Name "[virtualhub name]"
+ ```
+
+1. Get the virtual network connection details to be used as the next hop.
+
+ ```azurepowershell-interactive
+ $hubVnetConnection = Get-AzVirtualHubVnetConnection -Name "[HubconnectionName]" -ParentResourceName "[Hub Name]" -ResourceGroupName "[resource group name]"
+ ```
+
+1. Create a route to associate with the virtual hub $virtualhub. The **-NextHop** is the virtual network connection $hubVnetConnection. The next hop can be a list of virtual network connections or an Azure Firewall.
+
+ ```azurepowershell-interactive
+    $route = New-AzVHubRoute -Name "[Route Name]" -Destination @("[Destination prefix]") -DestinationType "CIDR" -NextHop $hubVnetConnection.Id -NextHopType "ResourceId"
+ ```
+
+1. Create the route table using the route object created in the previous step, $route, and associate it with the virtual hub $virtualhub.
+
+ ```azurepowershell-interactive
+ New-AzVHubRouteTable -Name "testRouteTable" -ParentObject $virtualhub -Route @($route) -Label @("testLabel")
+ ```
+
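+To confirm that the route table was created, you can read it back from the hub. A quick check, using the names from the steps above:
+
+```azurepowershell-interactive
+# Read back the route table created above; "testRouteTable" and $virtualhub come from the previous steps.
+Get-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName $virtualhub.Name -Name "testRouteTable"
+```
+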
+## Delete a route table
+
+```azurepowershell-interactive
+Remove-AzVirtualHubRouteTable -ResourceGroupName "[resource group name]" -HubName "[virtual hub name]" -Name "[route table name]"
+```
+
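+To verify the deletion, you can list the route tables that remain in the hub. A minimal sketch, which assumes that omitting **-Name** returns all route tables in the hub:
+
+```azurepowershell-interactive
+# List the remaining route tables in the hub (omitting -Name to list all is an assumption).
+Get-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName "[virtual hub name]"
+```
+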
+## Update a route table
+
+The steps in this section help you update a route table. For example, you can update an existing route's next hop to point to an existing Azure Firewall.
+
+```azurepowershell-interactive
+$firewall = Get-AzFirewall -Name "[firewall name]" -ResourceGroupName "[resource group name]"
+$newroute = New-AzVHubRoute -Name "[Route Name]" -Destination @("0.0.0.0/0") -DestinationType "CIDR" -NextHop $firewall.Id -NextHopType "ResourceId"
+Update-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName "[virtual hub name]" -Name "[route table name]" -Route @($newroute)
+```
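+
+Note that **-Route** sets the complete route collection for the table, so the command above leaves the table with only the new default route. If you instead want to add a route while keeping the existing ones, a sketch like the following can combine them (it assumes the returned object exposes the current routes through a **Routes** property):
+
+```azurepowershell-interactive
+# Read the current routes, then write back the combined list (the Routes property name is an assumption).
+$existing = Get-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName "[virtual hub name]" -Name "[route table name]"
+Update-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName "[virtual hub name]" -Name "[route table name]" -Route (@($existing.Routes) + $newroute)
+```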
+
+## Configure routing for a virtual network connection
+
+The steps in this section help you set up the routing configuration for a virtual network connection. For example, you can add static routes that point to a network virtual appliance (NVA).
+
+* For this configuration, the route name should be the same as the one you used when you added a route earlier. Otherwise, you'll create two routes in the routing table: one without an IP address and one with an IP address.
+* The destination prefix can be one CIDR or multiple ones. For a single CIDR, use this format: `@("10.19.2.0/24")`. For multiple CIDRs, use this format: `@("10.19.2.0/24", "10.40.0.0/16")`.
+
+1. Define a static route to an NVA IP address.
+
+ ```azurepowershell-interactive
+    $staticRoute = New-AzStaticRoute -Name "[Route Name]" -AddressPrefix @("[Destination prefix]") -NextHopIpAddress "[Destination NVA IP address]"
+ ```
+
+1. Define routing configuration.
+
+ ```azurepowershell-interactive
+ $associatedTable = Get-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName $virtualhub.Name -Name "defaultRouteTable"
+ $propagatedTable = Get-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName $virtualhub.Name -Name "noneRouteTable"
+    $updatedRoutingConfiguration = New-AzRoutingConfiguration -AssociatedRouteTable $associatedTable.Id -Label @("testLabel") -Id @($propagatedTable.Id) -StaticRoute @($staticRoute)
+ ```
+
+1. Update the existing virtual network connection.
+
+ ```azurepowershell-interactive
+ Update-AzVirtualHubVnetConnection -ResourceGroupName "[resource group name]" -VirtualHubName $virtualhub.Name -Name "[Virtual hub connection name]" -RoutingConfiguration $updatedRoutingConfiguration
+ ```
+
+1. Verify static route on the virtual network connection.
+
+ ```azurepowershell-interactive
+ Get-AzVirtualHubVnetConnection -ResourceGroupName "[Resource group]" -VirtualHubName "[virtual hub name]" -Name "[Virtual hub connection name]"
+ ```
+
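+The static routes appear in the connection's routing configuration. As a rough sketch for drilling into the output (the **RoutingConfiguration.VnetRoutes.StaticRoutes** property path is an assumption):
+
+```azurepowershell-interactive
+# Inspect the static routes on the connection (property path assumed).
+$connection = Get-AzVirtualHubVnetConnection -ResourceGroupName "[Resource group]" -VirtualHubName "[virtual hub name]" -Name "[Virtual hub connection name]"
+$connection.RoutingConfiguration.VnetRoutes.StaticRoutes
+```
+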
+## Next steps
+
+* For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
+* For more information about Virtual WAN, see the [Virtual WAN FAQ](virtual-wan-faq.md).
virtual-wan How To Virtual Hub Routing Preference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing-preference-powershell.md
+
+ Title: 'Configure virtual hub routing preference: Azure PowerShell'
+
+description: Learn how to configure Virtual WAN virtual hub routing preference using Azure PowerShell.
+++ Last updated : 10/26/2022++
+# Configure virtual hub routing preference - Azure PowerShell
+
+The following steps help you configure virtual hub routing preference settings using Azure PowerShell. You can also configure these settings using the [Azure portal](howto-virtual-hub-routing-preference.md). For information about this feature, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md).
+
+## Prerequisite
+
+If you're using Azure PowerShell locally from your computer, verify that your Az.Network module version is 4.19.0 or later.
+
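+You can check the installed version with Get-InstalledModule, assuming the module was installed from the PowerShell Gallery:
+
+```azurepowershell-interactive
+# Show the installed Az.Network module version.
+Get-InstalledModule -Name Az.Network | Select-Object -Property Name, Version
+```
+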
+## Configure
+
+To configure virtual hub routing preference for an existing virtual hub, use the following steps.
+
+1. (Optional) Check the current HubRoutingPreference for an existing virtual hub.
+
+ ```azurepowershell-interactive
+ Get-AzVirtualHub -ResourceGroupName "[resource group name]" -Name "[virtual hub name]" | select-object HubRoutingPreference
+ ```
+
+1. Update the current HubRoutingPreference for an existing virtual hub. The preference can be VpnGateway, ExpressRoute, or ASPath. The following example sets the hub routing preference to VpnGateway.
+
+ ```azurepowershell-interactive
+ Update-AzVirtualHub -ResourceGroupName "[resource group name]" -Name "[virtual hub name]" -HubRoutingPreference "VpnGateway"
+ ```
+
+1. After the settings are saved, you can verify the configuration by running the following PowerShell command for the virtual hub.
+
+ ```azurepowershell-interactive
+ Get-AzVirtualHub -ResourceGroupName "[resource group name]" -Name "[virtual hub name]" | select-object HubRoutingPreference
+ ```
+
+## Next steps
+
+To learn more about virtual hub routing preference, see [About virtual hub routing preference](about-virtual-hub-routing-preference.md).
virtual-wan How To Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing.md
Title: 'How to configure virtual hub routing'
+ Title: 'How to configure virtual hub routing: Azure portal'
-description: Learn how to configure Virtual WAN virtual hub routing.
+description: Learn how to configure Virtual WAN virtual hub routing using the Azure portal.
Previously updated : 10/18/2022 Last updated : 10/26/2022
-# How to configure virtual hub routing
+# How to configure virtual hub routing - Azure portal
-A virtual hub can contain multiple gateways such as a site-to-site VPN gateway, ExpressRoute gateway, point-to-site gateway, and Azure Firewall. The routing capabilities in the virtual hub are provided by a router that manages all routing, including transit routing, between the gateways using Border Gateway Protocol (BGP). This router also provides transit connectivity between virtual networks that connect to a virtual hub and can support up to an aggregate throughput of 50 Gbps. These routing capabilities apply to customers using **Standard** Virtual WANs. For more information, see [About virtual hub routing](about-virtual-hub-routing.md).
+A virtual hub can contain multiple gateways such as a site-to-site VPN gateway, ExpressRoute gateway, point-to-site gateway, and Azure Firewall. The routing capabilities in the virtual hub are provided by a router that manages all routing, including transit routing, between the gateways using Border Gateway Protocol (BGP). The virtual hub router also provides transit connectivity between virtual networks that connect to a virtual hub and can support up to an aggregate throughput of 50 Gbps. These routing capabilities apply to customers using **Standard** Virtual WANs. For more information, see [About virtual hub routing](about-virtual-hub-routing.md).
+
+This article helps you configure virtual hub routing using the Azure portal. You can also configure virtual hub routing using the [Azure PowerShell steps](how-to-virtual-hub-routing-powershell.md).
## Create a route table
In the Azure portal, go to your **Virtual HUB -> Route Tables** page. To open th
## Delete a route table
-In the Azure portal, go to your **Virtual HUB -> Route Tables** page. Select the checkbox for route table that you want to delete. Click **"…"**, and then select **Delete**. You can't delete a Default or None route table. However, you can delete all custom route tables.
+In the Azure portal, go to your **Virtual HUB -> Route Tables** page. Select the checkbox for route table that you want to delete. Click **"…"**, and then select **Delete**. You can't delete a Default or None route table. However, you can delete all custom route tables.
## View effective routes
In the Azure portal, go to your **Virtual HUB -> Route Tables** page. Select the
:::image type="content" source="./media/how-to-virtual-hub-routing/effective-routes.png" alt-text="Screenshot of Effective Routes page." lightbox="./media/how-to-virtual-hub-routing/effective-routes.png":::
-## <a name="routing-configuration"></a>Set up routing configuration for a virtual network connection
+## <a name="routing-configuration"></a>Configure routing for a virtual network connection
[!INCLUDE [Connect](../../includes/virtual-wan-connect-vnet-hub-include.md)] ## Next steps
-For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
-For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
+* For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
+* For more information about Virtual WAN, see the [Virtual WAN FAQ](virtual-wan-faq.md).
virtual-wan Howto Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-virtual-hub-routing-preference.md
Title: 'Configure virtual hub routing preference'
+ Title: 'Configure virtual hub routing preference: Azure portal'
-description: Learn how to configure Virtual WAN virtual hub routing preference.
+description: Learn how to configure Virtual WAN virtual hub routing preference using the Azure portal.
Previously updated : 05/30/2022 Last updated : 10/26/2022
-# Configure virtual hub routing preference
+# Configure virtual hub routing preference - Azure portal
-The following steps help you configure virtual hub routing preference settings. For information about this feature, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md).
+The following steps help you configure virtual hub routing preference settings. For information about this feature, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md). You can also configure these settings using [Azure PowerShell](how-to-virtual-hub-routing-preference-powershell.md).
+## New virtual hub
-## Configure
+You can configure a new virtual hub to include the virtual hub routing preference setting by using the [Azure portal]( https://portal.azure.com/). Follow the steps in the [Tutorial: Create a site-to-site connection](virtual-wan-site-to-site-portal.md) article.
-You can configure a new virtual hub to include the virtual hub routing preference setting by using the [Azure Preview portal]( https://portal.azure.com/?feature.virtualWanRoutingPreference=true#home). Follow the steps in the [Tutorial: Create a site-to-site connection](virtual-wan-site-to-site-portal.md) article.
+## Existing virtual hub
To configure virtual hub routing preference for an existing virtual hub, use the following steps.
-1. Open the [Azure Preview portal]( https://portal.azure.com/?feature.virtualWanRoutingPreference=true#home). You can't use the regular Azure portal yet for this feature.
+1. Open the [Azure portal]( https://portal.azure.com/).
-1. Go to your virtual WAN. In the left pane, under the **Connectivity** section, click **Hubs** to view the list of hubs. Select **… > Edit virtual hub** to open the **Edit virtual hub** dialog box.
+1. Go to your virtual WAN. In the left pane, click **Hubs** to view the list of hubs.
- :::image type="content" source="./media/howto-virtual-hub-routing-preference/edit-virtual-hub.png" alt-text="Screenshot shows select Edit virtual hub." lightbox="./media/howto-virtual-hub-routing-preference/edit-virtual-hub-expand.png":::
-
- You can also click on the hub to open the virtual hub, and then under virtual hub resource, click the **Edit virtual hub** button.
+1. Click the hub that you want to configure. On the **Virtual HUB** page, click **Edit virtual hub**.
:::image type="content" source="./media/howto-virtual-hub-routing-preference/hub-edit.png" alt-text="Screenshot shows Edit virtual hub." lightbox="./media/howto-virtual-hub-routing-preference/hub-edit.png":::
-1. On the **Edit virtual hub** page, select from the dropdown to configure the field **Hub routing preference**. To determine the setting to use, see [About virtual hub routing preference](about-virtual-hub-routing-preference.md).
+1. On the **Edit virtual hub** page, select from the dropdown to configure **Hub routing preference**. To determine the setting to use, see [About virtual hub routing preference](about-virtual-hub-routing-preference.md).
Click **Confirm** to save the settings.
- :::image type="content" source="./media/howto-virtual-hub-routing-preference/select-preference.png" alt-text="Screenshot shows the dropdown showing ExpressRoute, VPN, and AS PATH." lightbox="./media/howto-virtual-hub-routing-preference/select-preference.png":::
+ :::image type="content" source="./media/howto-virtual-hub-routing-preference/select.png" alt-text="Screenshot shows the dropdown showing ExpressRoute, VPN, and AS PATH options." lightbox="./media/howto-virtual-hub-routing-preference/select.png":::
1. After the settings have saved, you can verify the configuration on the **Overview** page for the virtual hub.