Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-domain-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md | Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
active-directory | On Premises Ldap Connector Prepare Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ldap-connector-prepare-directory.md | In order to enable SSL to work, you need to grant the NETWORK SERVICE read permissions. Now that we have configured the certificate and granted the network service account permissions, test the connectivity to verify that it is working. 1. Open Server Manager and select AD LDS on the left 2. Right-click your instance of AD LDS and select ldp.exe from the pop-up.- [](../../../includes/media/active-directory-app-provisioning-ldap/ldp-1.png#lightbox)</br> + [](../../../includes/media/app-provisioning-ldap/ldp-1.png#lightbox)</br> 3. At the top of ldp.exe, select **Connection** and **Connect**. 4. Enter the following information and click **OK**. - Server: APP3 - Port: 636 - Place a check in the SSL box- [](../../../includes/media/active-directory-app-provisioning-ldap/ldp-2.png#lightbox)</br> + [](../../../includes/media/app-provisioning-ldap/ldp-2.png#lightbox)</br> 5. You should see a response similar to the screenshot below.- [](../../../includes/media/active-directory-app-provisioning-ldap/ldp-3.png#lightbox)</br> + [](../../../includes/media/app-provisioning-ldap/ldp-3.png#lightbox)</br> 6. At the top, under **Connection** select **Bind**. 7. Leave the defaults and click **OK**.- [](../../../includes/media/active-directory-app-provisioning-ldap/ldp-4.png#lightbox)</br> + [](../../../includes/media/app-provisioning-ldap/ldp-4.png#lightbox)</br> 8. You should now successfully bind to the instance.- [](../../../includes/media/active-directory-app-provisioning-ldap/ldp-5.png#lightbox)</br> + [](../../../includes/media/app-provisioning-ldap/ldp-5.png#lightbox)</br> ### Disable the local password policy Currently, the LDAP connector provisions users with a blank password. This provisioning will not satisfy the local password policy on our server, so we are going to disable it for testing purposes. To disable password complexity on a non-domain-joined server, use the following steps. 1. On the server, click **Start**, **Run**, and then **gpedit.msc** 2. On the **Local Group Policy editor**, navigate to Computer Configuration > Windows Settings > Security Settings > Account Policies > Password Policy 3. On the right, double-click **Password must meet complexity requirements** and select **Disabled**.- [](../../../includes/media/active-directory-app-provisioning-ldap/local-1.png#lightbox)</br> + [](../../../includes/media/app-provisioning-ldap/local-1.png#lightbox)</br> 4. Click **Apply** and **OK** 5. Close the Local Group Policy editor |
active-directory | Concept Sspr Howitworks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-howitworks.md | Consider the following example scenario: > [!NOTE] > Email notifications from the SSPR service will be sent from the following addresses based on the Azure cloud you are working with: -> - Public: msonlineservicesteam@microsoft.com -> - China: msonlineservicesteam@oe.21vianet.com -> - Government: msonlineservicesteam@azureadnotifications.us +> - Public: msonlineservicesteam@microsoft.com, msonlineservicesteam@microsoftonline.com +> - China: msonlineservicesteam@oe.21vianet.com, 21Vianetonlineservicesteam@21vianet.com +> - Government: msonlineservicesteam@azureadnotifications.us, msonlineservicesteam@microsoftonline.us > If you observe issues in receiving notifications, please check your spam settings. ## On-premises integration |
active-directory | V1 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-overview.md | The following articles provide detailed information about APIs, protocol message See [Azure Active Directory developer platform videos](videos.md) for help migrating to the new Microsoft identity platform. |
active-directory | Console Quickstart Portal Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-quickstart-portal-nodejs.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Daemon Quickstart Portal Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-java.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Daemon Quickstart Portal Netcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-netcore.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > > [!div class="sxs-lookup"] > > > [!NOTE]-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Daemon Quickstart Portal Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-python.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Desktop Quickstart Portal Uwp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-uwp.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > > #### Step 3: Your app is configured and ready to run-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Desktop Quickstart Portal Wpf | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-wpf.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > #### Step 3: Your app is configured and ready to run > We have configured your project with values of your app's properties and it's ready to run.-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Mobile App Quickstart Portal Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Mobile App Quickstart Portal Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Msal Error Handling Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-dotnet.md | catch (MsalUiRequiredException ex) when (ex.ErrorCode == MsalError.InvalidGrantError) } } ``` When calling an API requiring Conditional Access from MSAL.NET, your application will need to handle claim challenge exceptions. This will appear as an [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) where the [Claims](/dotnet/api/microsoft.identity.client.msalserviceexception.claims) property won't be empty. To handle the claim challenge, you'll need to use the `.WithClaim()` method of the [`PublicClientApplicationBuilder`](/dotnet/api/microsoft.identity.client.publicclientapplicationbuilder) class. ### HTTP error codes 500-600 |
active-directory | Msal Error Handling Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-ios.md | The following Objective-C sample code demonstrates best practices for handling s MSAL for iOS and macOS allows you to request specific claims in both interactive and silent token acquisition scenarios. To request custom claims, specify `claimsRequest` in `MSALSilentTokenParameters` See [Request custom claims using MSAL for iOS and macOS](request-custom-claims.md) for more info. ## Next steps |
active-directory | Msal Error Handling Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-java.md | MSAL exposes a `reason` field, which you can use to provide a better user experi } ``` ## Next steps |
active-directory | Msal Error Handling Js | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-js.md | myMSALObj.acquireTokenSilent(request).then(function (response) { }); ``` When getting tokens silently with `acquireTokenSilent` in MSAL.js, your application may receive errors when an API you're trying to access requires a [Conditional Access claims challenge](v2-conditional-access-dev-guide.md), such as an MFA policy. myMSALObj.acquireTokenSilent(accessTokenRequest).then(function(accessTokenResponse) Interactively acquiring the token prompts the user and gives them the opportunity to satisfy the required Conditional Access policy. -When calling an API requiring Conditional Access, you can receive a claims challenge in the error from the API. In this case, you can pass the claims returned in the error to the `claims` parameter in the [access token request object](https://learn.microsoft.com/azure/active-directory/develop/msal-js-pass-custom-state-authentication-request) to satisfy the appropriate policy. +When calling an API requiring Conditional Access, you can receive a claims challenge in the error from the API. In this case, you can pass the claims returned in the error to the `claims` parameter in the [access token request object](msal-js-pass-custom-state-authentication-request.md) to satisfy the appropriate policy. See [How to use Continuous Access Evaluation enabled APIs in your applications](./app-resilience-continuous-access-evaluation.md) for more detail. ## Next steps |
active-directory | Msal Error Handling Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-python.md | When an error is returned, the `"error_description"` key also contains a human-r In MSAL for Python, exceptions are rare because most errors are handled by returning an error value. The `ValueError` exception is only thrown when there's an issue with how you're attempting to use the library, such as when API parameter(s) are malformed. ## Retrying after errors and exceptions |
active-directory | Msal Logging Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-android.md | |
active-directory | Msal Logging Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-dotnet.md | |
active-directory | Msal Logging Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-ios.md | |
active-directory | Msal Logging Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-java.md | |
active-directory | Msal Logging Js | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-js.md | |
active-directory | Msal Logging Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-python.md | |
active-directory | Quickstart V2 Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-android.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Aspnet Core Web Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md | This quickstart will be deprecated in the near future and will be updated to use > } > ``` > -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Aspnet Core Webapp Calls Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > > ## Step 3: Your app is configured and ready to run-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Aspnet Core Webapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > > #### Step 3: Your app is configured and ready to run-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Aspnet Webapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > > #### Step 3: Your app is configured and ready to run-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Dotnet Native Aspnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > ## Register the web API (TodoListService) > -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-ios.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Java Daemon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-daemon.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Java Webapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-webapp.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Netcore Daemon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > > [!div class="sxs-lookup"] > > > [!NOTE]-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] -> +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] +> > ## Next steps > > To learn more about daemon applications, see the scenario overview: |
active-directory | Quickstart V2 Nodejs Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-console.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Python Daemon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-daemon.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Python Webapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-webapp.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Uwp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-uwp.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > > #### Step 3: Your app is configured and ready to run-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Quickstart V2 Windows Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-windows-desktop.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > #### Step 3: Your app is configured and ready to run > We have configured your project with values of your app's properties and it's ready to run.-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Scenario Daemon Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-call-api.md | data = requests.get(endpoint, headers=http_headers, stream=False).json() # [.NET low level](#tab/dotnet) |
active-directory | Scenario Desktop Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-call-api.md | Now that you have a token, you can call a protected web API. # [.NET](#tab/dotnet) # [Java](#tab/java) |
active-directory | Scenario Mobile Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-call-api.md | task.resume() ### Xamarin ## Make several API requests |
active-directory | Tutorial V2 Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-android.md | In this tutorial: ## How this tutorial works The app in this tutorial signs in users and gets data on their behalf. This data is accessed through a protected API (Microsoft Graph API) that requires authorization and is protected by the Microsoft identity platform. The first time any user signs into your app, they'll be prompted by Microsoft id When no longer needed, delete the app object that you created in the [Register your application](#register-your-application-with-azure-ad) step. ## Next steps |
active-directory | Tutorial V2 Angular Auth Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-angular-auth-code.md | As you add scopes, your users might be prompted to provide extra consent for the > [!NOTE] > The user might be prompted for additional consents as you increase the number of scopes. ## Next steps |
active-directory | Tutorial V2 Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-ios.md | In this tutorial: ## How the tutorial app works The app in this tutorial can sign in users and get data from Microsoft Graph on their behalf. This data is accessed via a protected API (Microsoft Graph API in this case) that requires authorization and is protected by the Microsoft identity platform. The only value you modify is the value assigned to `kClientID` to be your [Appli Add a new keychain group to your project **Signing & Capabilities**. The keychain group should be `com.microsoft.adalcache` on iOS and `com.microsoft.identity.universalstorage` on macOS. ## For iOS only, configure URL schemes |
active-directory | Tutorial V2 Javascript Auth Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-auth-code.md | As you add scopes, your users might be prompted to provide additional consent fo If a back-end API doesn't require a scope, which isn't recommended, you can use `clientId` as the scope in the calls to acquire tokens. ## Next steps |
active-directory | Tutorial V2 Javascript Spa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md | The Microsoft Graph API requires the `User.Read` scope to read a user's profile. > [!NOTE] > The user might be prompted for additional consents as you increase the number of scopes. ## Next steps |
active-directory | Tutorial V2 Nodejs Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md | The Microsoft Graph API requires the *user.read* scope to read a user's profile. As you add scopes, your users might be prompted to provide another consent for the added scopes. ## Next steps |
active-directory | Tutorial V2 Windows Uwp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-uwp.md | You enable [integrated authentication on federated domains](#enable-integrated-a **Workaround:** Select **Sign in with other options**. Then select **Sign in with a username and password**. Select **Provide your password**. Then go through the phone authentication process. ## Next steps |
active-directory | Web Api Quickstart Portal Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-aspnet-core.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > ## Step 3: Configure the ASP.NET Core project > -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Web Api Quickstart Portal Dotnet Native Aspnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-dotnet-native-aspnet.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > ## Register the web API (TodoListService) > -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Web App Quickstart Portal Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet-core.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] > > > #### Step 3: Your app is configured and ready to run-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Web App Quickstart Portal Aspnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet.md | -> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] -> -> +> [!INCLUDE [active-directory-develop-path-length-tip](./includes/error-handling-and-tips/path-length-tip.md)] +> > #### Step 3: Your app is configured and ready to run > We've configured your project with values of your app's properties. > -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Web App Quickstart Portal Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-java.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Web App Quickstart Portal Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-python.md | -> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] +> [!INCLUDE [Help and support](./includes/error-handling-and-tips/help-support-include.md)] > > ## Next steps > |
active-directory | Licensing Service Plan Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic - **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]->This information last updated on June 22nd, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). +>This information last updated on July 6th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). ><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Dynamics 365 Business Central for IWs | PROJECT_MADEIRA_PREVIEW_IW_SKU | 6a4a1628-9b9a-424d-bed5-4118f0ede3fd | PROJECT_MADEIRA_PREVIEW_IW (3f2afeed-6fb5-4bf9-998f-f2912133aead)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Business Central for IWs (3f2afeed-6fb5-4bf9-998f-f2912133aead)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Business Central Premium | DYN365_BUSCENTRAL_PREMIUM | f991cecc-3f91-4cd0-a9a8-bf1c8167e029 | DYN365_BUSCENTRAL_PREMIUM (8e9002c0-a1d8-4465-b952-817d2948e6e2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | Dynamics 365 Business Central Premium (8e9002c0-a1d8-4465-b952-817d2948e6e2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>PowerApps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | Dynamics 365 Business Central Team Members | DYN365_BUSCENTRAL_TEAM_MEMBER | 2e3c4023-80f6-4711-aa5d-29e0ecb46835 | DYN365_FINANCIALS_TEAM_MEMBERS (d9a6391b-8970-4976-bd94-5f205007c8d8)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449) | Dynamics 365 for Team Members (d9a6391b-8970-4976-bd94-5f205007c8d8)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps for Dynamics 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>Power Automate for Dynamics 365 (1ec58c70-f69c-486a-8109-4b87ce86e449) |+| Dynamics 365 Commerce Trial | DYN365_RETAIL_TRIAL | 1508ad2d-5802-44e6-bfe8-6fb65de63d28 | DYN365_RETAIL_TRIAL (874d6da5-2a67-45c1-8635-96e8b3e300ea)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Retail Trial (874d6da5-2a67-45c1-8635-96e8b3e300ea)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Customer Engagement Plan | DYN365_ENTERPRISE_PLAN1 | ea126fc5-a19e-42e2-a731-da9d437bffcf | D365_CSI_EMBED_CE (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>D365_ProjectOperations (69f07c66-bee4-4222-b051-195095efee5b)<br/>D365_ProjectOperationsCDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Forms_Pro_CE (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>PROJECT_FOR_PROJECT_OPERATIONS (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 Customer Service Insights for CE Plan (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>Dynamics 365 P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>Dynamics 365 Project Operations (69f07c66-bee4-4222-b051-195095efee5b)<br/>Dynamics 365 Project Operations CDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Microsoft Dynamics 365 Customer Voice for Customer Engagement Plan (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>Microsoft Social Engagement Enterprise (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Dynamics 365 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>Project for Project Operations (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |+| Dynamics 365 Customer Insights Attach | DYN365_CUSTOMER_INSIGHTS_ATTACH | a3d0cd86-8068-4071-ad40-4dc5b5908c4b | CDS_CUSTOMER_INSIGHTS_BASE (d04ca659-b119-4a92-b8fc-3ede584a9d65)<br/>CDS_CUSTOMER_INSIGHTS (ca00cff5-2568-4d03-bb6c-a653a8f360ca)<br/>DYN365_CUSTOMER_INSIGHTS_BASE (ee85d528-c4b4-4a99-9b07-fb9a1365dc93)<br/>Customer_Voice_Customer_Insights (46c5ea0a-2343-49d9-ae4f-1c268b232d53)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dataverse for Customer Insights BASE (d04ca659-b119-4a92-b8fc-3ede584a9d65)<br/>Common Data Service for Customer Insights (ca00cff5-2568-4d03-bb6c-a653a8f360ca)<br/>Dynamics 365 Customer Insights (ee85d528-c4b4-4a99-9b07-fb9a1365dc93)<br/>Microsoft Dynamics 365 Customer Voice for Customer Insights App (46c5ea0a-2343-49d9-ae4f-1c268b232d53)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Customer Insights vTrial | DYN365_CUSTOMER_INSIGHTS_VIRAL | 036c2481-aa8a-47cd-ab43-324f0c157c2d | CDS_CUSTOMER_INSIGHTS_TRIAL (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>DYN365_CUSTOMER_INSIGHTS_ENGAGEMENT_INSIGHTS_BASE_TRIAL (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>DYN365_CUSTOMER_INSIGHTS_VIRAL (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Forms_Pro_Customer_Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) | Common Data Service for Customer Insights Trial (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>Dynamics 365 Customer Insights Engagement Insights Viral (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>Dynamics 365 Customer Insights Viral Plan (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Microsoft Dynamics 365 Customer Voice for Customer Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) | | Dynamics 365 Customer Service Enterprise Viral Trial | Dynamics_365_Customer_Service_Enterprise_viral_trial | 1e615a51-59db-4807-9957-aa83c3657351 | CUSTOMER_VOICE_DYN365_VIRAL_TRIAL (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>DYN365_CS_MESSAGING_VIRAL_TRIAL (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>DYN365_CS_ENTERPRISE_VIRAL_TRIAL (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>DYNB365_CSI_VIRAL_TRIAL (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>DYN365_CS_VOICE_VIRAL_TRIAL (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_APPS_DYN365_VIRAL_TRIAL (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>POWER_AUTOMATE_DYN365_VIRAL_TRIAL (81d4ecb8-0481-42fb-8868-51536c5aceeb) | Customer Voice for Dynamics 365 vTrial (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>Dynamics 365 AI for Customer
Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Dynamics 365 Customer Service Digital Messaging vTrial (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>Dynamics 365 Customer Service Enterprise vTrial (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>Dynamics 365 Customer Service Insights vTrial (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>Dynamics 365 Customer Service Voice vTrial (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps for Dynamics 365 vTrial (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>Power Automate for Dynamics 365 vTrial (81d4ecb8-0481-42fb-8868-51536c5aceeb) | | Dynamics 365 for Customer Service Enterprise Attach to Qualifying Dynamics 365 Base Offer A | D365_CUSTOMER_SERVICE_ENT_ATTACH | eb18b715-ea9d-4290-9994-2ebf4b5042d2 | D365_CUSTOMER_SERVICE_ENT_ATTACH (61a2665f-1873-488c-9199-c3d0bc213fdf)<br/>Power_Pages_Internal_User (60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Customer Service Enterprise Attach (61a2665f-1873-488c-9199-c3d0bc213fdf)<br/>Power Pages Internal User (60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Dynamics 365 for Financials Business Edition | DYN365_FINANCIALS_BUSINESS_SKU | cc13a803-544e-4464-b4e4-6d6169a138fa | DYN365_FINANCIALS_BUSINESS (920656a2-7dd8-4c83-97b6-a356414dbd36)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR FINANCIALS (920656a2-7dd8-4c83-97b6-a356414dbd36) | | Dynamics 365 Hybrid Connector | CRM_HYBRIDCONNECTOR | de176c31-616d-4eae-829a-718918d7ec23 | CRM_HYBRIDCONNECTOR 
(0210d5c8-49d2-4dd1-a01b-a91c7c14e0bf)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | CRM Hybrid Connector (0210d5c8-49d2-4dd1-a01b-a91c7c14e0bf)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 for Marketing Additional Application | DYN365_MARKETING_APPLICATION_ADDON | 99c5688b-6c75-4496-876f-07f0fbd69add | DYN365_MARKETING_APPLICATION_ADDON (51cf0638-4861-40c0-8b20-1161ab2f80be)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Marketing Additional Application (51cf0638-4861-40c0-8b20-1161ab2f80be)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |+| Dynamics 365 for Marketing Additional Contacts Tier 3 | DYN365_MARKETING_CONTACT_ADDON_T3 | 23053933-0fda-431f-9a5b-a00fd78444c1 | DYN365_MARKETING_50K_CONTACT_ADDON (e626a4ec-1ba2-409e-bf75-9bc0bc30cca7)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Marketing 50K Additional Contacts (e626a4ec-1ba2-409e-bf75-9bc0bc30cca7)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 for Marketing Additional Non-Prod Application | DYN365_MARKETING_SANDBOX_APPLICATION_ADDON | c393e9bd-2335-4b46-8b88-9e2a86a85ec1 | DYN365_MARKETING_SANDBOX_APPLICATION_ADDON (1599de10-5250-4c95-acf2-491f74edce48) | Dynamics 365 Marketing Sandbox Application AddOn (1599de10-5250-4c95-acf2-491f74edce48) | | Dynamics 365 for Marketing Addnl Contacts Tier 5 | DYN365_MARKETING_CONTACT_ADDON_T5 | d8eec316-778c-4f14-a7d1-a0aca433b4e7 | DYN365_MARKETING_50K_CONTACT_ADDON (e626a4ec-1ba2-409e-bf75-9bc0bc30cca7)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Marketing 50K Addnl Contacts (e626a4ec-1ba2-409e-bf75-9bc0bc30cca7)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 for Marketing Attach | DYN365_MARKETING_APP_ATTACH | 85430fb9-02e8-48be-9d7e-328beb41fa29 | DYN365_MARKETING_APP 
(a3a4fa10-5092-401a-af30-0462a95a7ac8)<br/>Forms_Pro_Marketing_App (22b657cf-0a9e-467b-8a91-5e31f21bc570)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Marketing (a3a4fa10-5092-401a-af30-0462a95a7ac8)<br/>Microsoft Dynamics 365 Customer Voice for Marketing Application (22b657cf-0a9e-467b-8a91-5e31f21bc570)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Microsoft 365 F3 GCC | M365_F1_GOV | 2a914830-d700-444a-b73c-e3f31980d833 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_F1_GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>CDS_O365_F1_GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>EXCHANGE_S_DESKLESS_GOV (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>FORMS_GOV_F1 (bfd4133a-bbf3-4212-972b-60412137c428)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K_GOV (d65648f1-9504-46e4-8611-2658763f28b8)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_S1_GOV (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>FLOW_O365_S1_GOV (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SHAREPOINTDESKLESS_GOV (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>MCOIMP_GOV (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure
Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service - O365 F1 GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>Common Data Service for Teams_F1 GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>Exchange Online (Kiosk) for Government (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>Forms for Government (Plan F1) (bfd4133a-bbf3-4212-972b-60412137c428)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (F1) (d65648f1-9504-46e4-8611-2658763f28b8)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 F3 for Government (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>Power Automate for Office 365 F3 for Government (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SharePoint KioskG (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>Skype for Business Online (Plan 1) for Government (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3) | | MICROSOFT 365 G3 GCC | M365_G3_GOV | e823ca47-49c4-46b3-b38d-ca11d5abe3d2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC 
(a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>CONTENTEXPLORER_STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>AZURE RIGHTS MANAGEMENT PREMIUM FOR GOVERNMENT (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR
GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) | | Microsoft 365 GCC G5 | M365_G5_GCC | e2be619b-b125-455f-8660-fb503e431a5d | CDS_O365_P3_GCC (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>MICROSOFTBOOKINGS 
(199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>FORMS_GOV_E5 (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS_GOV (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>STREAM_O365_E5_GOV (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_PREMIUM2_GOV (5400a66d-eaa5-427d-80f2-0f26d59d8fce)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P3_GCC (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE 
(2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P3_GOV (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>FLOW_O365_P3_GOV (8055d84a-c172-42eb-b997-6c2ae4628246) | Common Data Service for Teams (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>Customer Lockbox for Government (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>Exchange Online (Plan 2) for Government (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>Microsoft 365 Audio Conferencing for Government (f544b08d-1645-4287-82de-8d91f37c02a1)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System for Government (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>Microsoft Defender for Office 365 (Plan 2) for Government (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>Microsoft Forms for Government (Plan E5) (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft
Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics for Government (Full) (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery for Government (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>Skype for Business Online (Plan 2) for Government (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>Stream for Office 365 for Government (E5) (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Information Protection Premium P2 for GCC (5400a66d-eaa5-427d-80f2-0f26d59d8fce)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps 
(2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 for Government (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>Power Automate for Office 365 for Government (8055d84a-c172-42eb-b997-6c2ae4628246) |+| Microsoft 365 Lighthouse | Microsoft365_Lighthouse | 9c0587f3-8665-4252-a8ad-b7a5ade57312 | M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5) | Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5) | | Microsoft 365 Security and Compliance for Firstline Workers | M365_SECURITY_COMPLIANCE_FOR_FLW | 2347355b-4e81-41a4-9c22-55057a399791 | AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE 
(f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Online Archiving for Exchange Online (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Endpoint DLP
(64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft ML-based classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) | | Microsoft Business Center | MICROSOFT_BUSINESS_CENTER | 726a0894-2c77-4d65-99da-9775ef05aad1 | MICROSOFT_BUSINESS_CENTER (cca845f9-fd51-4df6-b563-976a37c56ce0) | MICROSOFT BUSINESS CENTER (cca845f9-fd51-4df6-b563-976a37c56ce0) |+| Microsoft Cloud for Sustainability vTrial | Microsoft_Cloud_for_Sustainability_vTrial | 556640c0-53ea-4773-907d-29c55332983f | MCS_BizApps_Cloud_for_Sustainability_vTrial (c1c902e3-a956-4273-abdb-c92afcd027ef)<br/>POWER_APPS_DYN365_VIRAL_TRIAL (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>POWER_AUTOMATE_DYN365_VIRAL_TRIAL (81d4ecb8-0481-42fb-8868-51536c5aceeb)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0) | MCS - BizApps_Cloud for Sustainability_vTrial (c1c902e3-a956-4273-abdb-c92afcd027ef)<br/>Power Apps for Dynamics 365 vTrial (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>Power Automate for Dynamics 365 vTrial (81d4ecb8-0481-42fb-8868-51536c5aceeb)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Common Data Service (17ab22cd-a0b3-4536-910a-cb6eb12696c0) | | Microsoft Cloud App Security | ADALLOM_STANDALONE | df845ce7-05f9-4894-b5f2-11bbfbcfd2b6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | | Microsoft Defender for Endpoint | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) 
| Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Endpoint P1 | DEFENDER_ENDPOINT_P1 | 16a55f2f-ff35-4cd5-9146-fb784e3761a5 | Intune_Defender (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4) | MDE_SecurityManagement (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Microsoft Defender for Identity | ATA | 98defdf7-f6c1-44f5-a1f6-943b6764e7a5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_FOR_AATP (61d18b02-6889-479f-8f36-56e6e0fe5792) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>SecOps Investigation for MDI (61d18b02-6889-479f-8f36-56e6e0fe5792) | | Microsoft Defender for Office 365 (Plan 1) GCC | ATP_ENTERPRISE_GOV | d0d1ca43-b81a-4f51-81e5-a5b1ad7bb005 | ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516) | Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516) | | Microsoft Defender for Office 365 (Plan 2) GCC | THREAT_INTELLIGENCE_GOV | 56a59ffb-9df1-421b-9e61-8b568583474d | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>Microsoft Defender for Office 365 (Plan 2) for Government (900018f1-0cdb-4ecb-94d4-90281760fdc6) |+| Microsoft Defender Vulnerability Management | TVM_Premium_Standalone | 1925967e-8013-495f-9644-c99f8b463748 | TVM_PREMIUM_1 
(36810a13-b903-490a-aa45-afbeb7540832) | Microsoft Defender Vulnerability Management (36810a13-b903-490a-aa45-afbeb7540832) | | Microsoft Defender Vulnerability Management Add-on | TVM_Premium_Add_on | ad7a56e0-6903-4d13-94f3-5ad491e78960 | TVM_PREMIUM_1 (36810a13-b903-490a-aa45-afbeb7540832) | Microsoft Defender Vulnerability Management (36810a13-b903-490a-aa45-afbeb7540832) | | Microsoft Dynamics CRM Online | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL(f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | Microsoft Imagine Academy | IT_ACADEMY_AD | ba9a34de-4489-469d-879c-0f0f145321cd | IT_ACADEMY_AD (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | MS IMAGINE ACADEMY (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Microsoft Teams Trial | MS_TEAMS_IW | 74fbf1bb-47c6-4796-9623-77dc7371723b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 
(0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | | Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) |+| Microsoft Viva Goals | Microsoft_Viva_Goals | ba929637-f158-4dee-927c-eb7cdefcd955 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Viva_Goals_Premium (b44c6eaf-5c9f-478c-8f16-8cea26353bfb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Viva Goals (b44c6eaf-5c9f-478c-8f16-8cea26353bfb) | | Microsoft Viva Suite | VIVA | 
61902246-d7cb-453e-85cd-53ee28eec138 | GRAPH_CONNECTORS_SEARCH_INDEX_TOPICEXP (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>CORTEX (c815c93d-0759-4bb8-b857-bc921a71be83)<br/>VIVAENGAGE_COMMUNITIES_AND_COMMUNICATIONS (43304c6a-1d4e-4e0b-9b06-5b2a2ff58a90)<br/>VIVAENGAGE_KNOWLEDGE (c244cc9e-622f-4576-92ea-82e233e44e36)<br/>Viva_Goals_Premium (b44c6eaf-5c9f-478c-8f16-8cea26353bfb)<br/>VIVA_LEARNING_PREMIUM (7162bd38-edae-4022-83a7-c5837f951759) | Graph Connectors Search with Index (Microsoft Viva Topics) (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>Microsoft Viva Insights (b622badb-1b45-48d5-920f-4b27a2c0996c)<br/>Microsoft Viva Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Viva Topics (c815c93d-0759-4bb8-b857-bc921a71be83)<br/>Viva Engage Communities and Communications (43304c6a-1d4e-4e0b-9b06-5b2a2ff58a90)<br/>Viva Engage Knowledge (c244cc9e-622f-4576-92ea-82e233e44e36)<br/>Viva Goals (b44c6eaf-5c9f-478c-8f16-8cea26353bfb)<br/>Viva Learning (7162bd38-edae-4022-83a7-c5837f951759) | | Minecraft Education Faculty | MEE_FACULTY | 984df360-9a74-4647-8cf8-696749f6247a | MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Minecraft Education (4c246bbc-f513-4311-beff-eba54c353256)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Minecraft Education Student | MEE_STUDENT | 533b8f26-f74b-4e9c-9c59-50fc4b393b63 | MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Minecraft Education (4c246bbc-f513-4311-beff-eba54c353256)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | |
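The tables above map product and service plan GUIDs to friendly names. To check which of these SKUs and service plans are actually present in a tenant, you can query Microsoft Graph; a minimal sketch (assumes a token with the `Organization.Read.All` permission):

```http
GET https://graph.microsoft.com/v1.0/subscribedSkus
```

The response lists each subscribed SKU's `skuId`, `skuPartNumber`, and its `servicePlans` array (`servicePlanId`, `servicePlanName`), which correspond to the GUID and string ID columns in the tables above.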
active-directory | Tutorial Desktop App Maui Sign In Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-desktop-app-maui-sign-in-prepare-app.md | Wait for the project to be created and its dependencies to be restored. MSAL client enables developers to acquire security tokens from Azure Active Directory (Azure AD) for customers tenant to authenticate and access secured web APIs. In this section, you download the files that make up MSALClient. -Download the following files: +Download the following files into a folder on your computer: - [AzureAdConfig.cs](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/blob/main/1-Authentication/2-sign-in-maui/MSALClient/AzureAdConfig.cs) - This file gets and sets the Azure AD app unique identifiers from your app configuration file. - [DownStreamApiConfig.cs](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/blob/main/1-Authentication/2-sign-in-maui/MSALClient/DownStreamApiConfig.cs) - This file gets and sets the scopes for the Microsoft Graph call. Download the following files: 1. In the **Solution Explorer** pane, right-click on the **SignInMaui** project and select **Add** > **New Folder**. Name the folder _MSALClient_. 1. Right-click on **MSALClient** folder, select **Add** > **Existing Item...**.-1. Navigate to the folder that contains the downloaded MSALClient files. -1. Select all of the MSALClient files you downloaded, then select **Add** +1. Navigate to the folder that contains the MSALClient files that you downloaded earlier. +1. Select all of the MSALClient files, then select **Add** ## Install required packages |
active-directory | Tutorial Mobile App Maui Sign In Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-mobile-app-maui-sign-in-prepare-app.md | Wait for the project to be created and its dependencies to be restored. MSAL client enables developers to acquire security tokens from Azure Active Directory (Azure AD) for customers tenant to authenticate and access secured web APIs. In this section, you download the files that make up MSALClient. -Download the following files: +Download the following files into a folder on your computer: - [AzureAdConfig.cs](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/blob/main/1-Authentication/2-sign-in-maui/MSALClient/AzureAdConfig.cs) - This file gets and sets the Azure AD app unique identifiers from your app configuration file. - [DownStreamApiConfig.cs](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/blob/main/1-Authentication/2-sign-in-maui/MSALClient/DownStreamApiConfig.cs) - This file gets and sets the scopes for the Microsoft Graph call. Download the following files: 1. In the **Solution Explorer** pane, right-click on the **SignInMaui** project and select **Add** > **New Folder**. Name the folder _MSALClient_. 1. Right-click on **MSALClient** folder, select **Add** > **Existing Item...**.-1. Navigate to the folder that contains the downloaded MSALClient files. +1. Navigate to the folder that contains the MSALClient files that you downloaded earlier. 1. Select all of the MSALClient files you downloaded, then select **Add** ## Install required packages |
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md | Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 06/01/2023 Last updated : 07/06/2023 Welcome to what's new in Azure Active Directory External Identities documentatio ### Updated articles -- [Set up tenant restrictions V2 (Preview)](tenant-restrictions-v2.md) Microsoft Teams updates.-- [Invite guest users to an app](add-users-information-worker.md) Link and structure updates.+- [Set up tenant restrictions V2 (Preview)](tenant-restrictions-v2.md) - Microsoft Teams updates. +- [Invite guest users to an app](add-users-information-worker.md) - Link and structure updates. ## May 2023 Welcome to what's new in Azure Active Directory External Identities documentatio ### Updated articles -- [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md) Graph API links were updated.-- [Reset redemption status for a guest user](reset-redemption-status.md) Screenshots were updated.+- [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md) - Graph API links were updated. +- [Reset redemption status for a guest user](reset-redemption-status.md) - Screenshots were updated. ## April 2023 ### Updated articles -- [Allow or block domains](allow-deny-list.md) Screenshots were updated. -- [Authentication and Conditional Access](authentication-conditional-access.md) Links to other articles were updated.-- [Code and Azure PowerShell samples](code-samples.md) Minor text updates.-- [Azure Active Directory](default-account.md) Minor text updates.+- [Allow or block domains](allow-deny-list.md) - Screenshots were updated. 
+- [Authentication and Conditional Access](authentication-conditional-access.md) - Links to other articles were updated. +- [Code and Azure PowerShell samples](code-samples.md) - Minor text updates. +- [Azure Active Directory](default-account.md) - Minor text updates. |
active-directory | Entitlement Management Access Package Auto Assignment Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md | -# Configure an automatic assignment policy for an access package in entitlement management (Preview) +# Configure an automatic assignment policy for an access package in entitlement management You can use rules to determine access package assignment based on user properties in Azure Active Directory (Azure AD), part of Microsoft Entra. In Entitlement Management, an access package can have multiple policies, and each policy establishes how users get an assignment to the access package, and for how long. As an administrator, you can establish a policy for automatic assignments by supplying a membership rule that Entitlement Management will follow to create and remove assignments automatically. Similar to a [dynamic group](../enterprise-users/groups-create-rule.md), when an automatic assignment policy is created, user attributes are evaluated for matches with the policy's membership rule. When an attribute changes for a user, these automatic assignment policy rules in the access packages are processed for membership changes. Assignments to users are then added or removed depending on whether they meet the rule criteria. You'll need to have attributes populated on the users who will be in scope for b [!INCLUDE [active-directory-entra-governance-license.md](../../../includes/active-directory-entra-governance-license.md)] -## Create an automatic assignment policy (Preview) +## Create an automatic assignment policy To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new policy for an access package. To create a policy for an access package, you need to start from the access pack 1. Click **Create** to save the policy.
> [!NOTE]- > In this preview, Entitlement management will automatically create a dynamic security group corresponding to each policy, in order to evaluate the users in scope. This group should not be modified except by Entitlement Management itself. This group may also be modified or deleted automatically by Entitlement Management, so don't use this group for other applications or scenarios. + > At this time, Entitlement management will automatically create a dynamic security group corresponding to each policy, in order to evaluate the users in scope. This group should not be modified except by Entitlement Management itself. This group may also be modified or deleted automatically by Entitlement Management, so don't use this group for other applications or scenarios. 1. Azure AD will evaluate the users in the organization that are in scope of this rule, and create assignments for those users who don't already have assignments to the access package. A policy can include at most 5000 users in its rule. It may take several minutes for the evaluation to occur, or for subsequent updates to user's attributes to be reflected in the access package assignments. -## Create an automatic assignment policy programmatically (Preview) +## Create an automatic assignment policy programmatically There are two ways to create an access package assignment policy for automatic assignment programmatically, through Microsoft Graph and through the PowerShell cmdlets for Microsoft Graph. |
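The membership rule in an automatic assignment policy uses the same syntax as dynamic group membership rules, per the dynamic group link in the article above. A hypothetical example that would assign the access package to all US-based marketing users (the attribute values are placeholders):

```
(user.department -eq "Marketing") and (user.country -eq "US")
```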
active-directory | View Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/view-assignments.md | This section describes how to list role assignments with organization-wide scope Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get the role assignments for a specific role definition. The following example shows how to list the role assignments for a specific role definition with the ID `3671d40a-1aac-426c-a0c1-a3821ebd8218`. ```http-GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments&$filter=roleDefinitionId eq '<template-id-of-role-definition>' +GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=roleDefinitionId eq '<template-id-of-role-definition>' ``` Response |
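The corrected request introduces the query string with `?`, and the `$filter` value must use plain ASCII single quotes rather than curly quotes. As an illustration (not part of the original article), a short Python sketch that builds a correctly percent-encoded version of that URL:

```python
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def role_assignments_url(role_definition_id: str) -> str:
    """Build the List unifiedRoleAssignments URL filtered to one role definition.

    The OData filter must use plain ASCII single quotes ('), not the
    curly quotes that sometimes sneak in from word processors.
    """
    filter_expr = f"roleDefinitionId eq '{role_definition_id}'"
    # Percent-encode the spaces and quotes so the URL can be sent as-is.
    return (f"{GRAPH_BASE}/roleManagement/directory/roleAssignments"
            f"?$filter={quote(filter_expr)}")
```

Sending the resulting URL with an `Authorization: Bearer` header returns the role assignments for that role definition.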
active-directory | Albert Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/albert-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Albert for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Albert. +++writer: twimmers ++ms.assetid: b5672366-08ad-40ba-9cdf-7a24feff6c66 ++++ Last updated : 07/05/2023++++# Tutorial: Configure Albert for automatic user provisioning ++This tutorial describes the steps you need to perform in both Albert and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Albert](https://www.albertinvent.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Albert. +> * Remove users in Albert when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Albert. +> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Albert (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Albert with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. 
Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Albert](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Albert to support provisioning with Azure AD +Contact Albert support to configure Albert to support provisioning with Azure AD. ++## Step 3. Add Albert from the Azure AD application gallery ++Add Albert from the Azure AD application gallery to start managing provisioning to Albert. If you have previously set up Albert for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Albert ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD. ++### To configure automatic user provisioning for Albert in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Albert**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Albert Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Albert. If the connection fails, ensure your Albert account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Albert**. ++1. Review the user attributes that are synchronized from Azure AD to Albert in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Albert for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Albert API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Albert| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |externalId|String||✓ ++1. 
To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Albert, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users that you would like to provision to Albert by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
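The attribute-mapping table in the Albert tutorial above lists `userName`, `active`, and `externalId` as the required attributes. In SCIM 2.0 terms, a Create request from the provisioning service would carry a payload along these lines (a hypothetical sketch; the values are placeholders and the exact shape depends on Albert's SCIM implementation):

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "alice@contoso.com",
  "externalId": "alice@contoso.com",
  "active": true
}
```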
active-directory | G Suite Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md | Title: 'Tutorial: Configure G Suite for automatic user provisioning with Azure Active Directory' -description: Learn how to automatically provision and de-provision user accounts from Azure AD to G Suite. +description: Learn how to automatically provision and deprovision user accounts from Azure AD to G Suite. writer: twimmers-This tutorial describes the steps you need to perform in both G Suite and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [G Suite](https://gsuite.google.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +This tutorial describes the steps you need to perform in both G Suite and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [G Suite](https://gsuite.google.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). > [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). 
This tutorial describes the steps you need to perform in both G Suite and Azure The scenario outlined in this tutorial assumes that you already have the following prerequisites: * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) -* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). * [A G Suite tenant](https://gsuite.google.com/pricing.html) * A user account on a G Suite with Admin permissions. The scenario outlined in this tutorial assumes that you already have the followi ## Step 2. Configure G Suite to support provisioning with Azure AD -Before configuring G Suite for automatic user provisioning with Azure AD, you will need to enable SCIM provisioning on G Suite. +Before configuring G Suite for automatic user provisioning with Azure AD, you need to enable SCIM provisioning on G Suite. 1. Sign in to the [G Suite Admin console](https://admin.google.com/) with your administrator account, then click on **Main menu** and then select **Security**. If you don't see it, it might be hidden under the **Show More** menu. Before configuring G Suite for automatic user provisioning with Azure AD, you wi 1. Select **ADD DOMAIN & START VERIFICATION**. Then follow the steps to verify that you own the domain name. For comprehensive instructions on how to verify your domain with Google, see [Verify your site ownership](https://support.google.com/webmasters/answer/35179). - 1. Repeat the preceding steps for any additional domains that you intend to add to G Suite. + 1. Repeat the preceding steps for any more domains that you intend to add to G Suite. 1. 
Next, determine which admin account you want to use to manage user provisioning in G Suite. Navigate to **Account->Admin roles**. Before configuring G Suite for automatic user provisioning with Azure AD, you wi ## Step 3. Add G Suite from the Azure AD application gallery -Add G Suite from the Azure AD application gallery to start managing provisioning to G Suite. If you have previously setup G Suite for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). +Add G Suite from the Azure AD application gallery to start managing provisioning to G Suite. If you have previously set up G Suite for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). -## Step 4. Define who will be in scope for provisioning +## Step 4. Define who is in scope for provisioning -The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. +The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user / group.
If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who is provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). -* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. ## Step 5. Configure automatic user provisioning to G Suite This section guides you through the steps to configure the Azure AD provisioning ### To configure automatic user provisioning for G Suite in Azure AD: -1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. Users will need to log in to `portal.azure.com` and will not be able to use `aad.portal.azure.com`. +1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. Users will need to log in to `portal.azure.com` and won't be able to use `aad.portal.azure.com`.  This section guides you through the steps to configure the Azure AD provisioning  -5. Under the **Admin Credentials** section, click on **Authorize**. 
You will be redirected to a Google authorization dialog box in a new browser window. +5. Under the **Admin Credentials** section, click on **Authorize**. You'll be redirected to a Google authorization dialog box in a new browser window.  This section guides you through the steps to configure the Azure AD provisioning  -15. When you are ready to provision, click **Save**. +15. When you're ready to provision, click **Save**.  This operation starts the initial synchronization cycle of all users and groups Once you've configured provisioning, use the following resources to monitor your deployment: 1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion -3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). +2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +3. If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). 
## Troubleshooting Tips-* Removing a user from the sync scope will disable them in GSuite but will not result in deletion of the user in G Suite +* Removing a user from the sync scope disables them in GSuite but won't result in deletion of the user in G Suite ++## Just-in-time (JIT) application access with PIM for groups (preview) +With PIM for Groups, you can provide just-in-time access to groups in Google Cloud / Google Workspace and reduce the number of users that have permanent access to privileged groups in Google Cloud / Google Workspace. ++**Configure your enterprise application for SSO and provisioning** +1. Add Google Cloud / Google Workspace to your tenant, configure it for provisioning as described in the tutorial above, and start provisioning. +1. Configure [single sign-on](google-apps-tutorial.md) for Google Cloud / Google Workspace. +1. Create a [group](https://learn.microsoft.com/azure/active-directory/fundamentals/how-to-manage-groups) that provides all users access to the application. +1. Assign the group to the Google Cloud / Google Workspace application. +1. Assign your test user as a direct member of the group created in the previous step, or provide them access to the group through an access package. This group can be used for persistent, nonadmin access in Google Cloud / Google Workspace. ++**Enable PIM for groups** +1. Create a second group in Azure AD. This group provides access to admin permissions in Google Cloud / Google Workspace. +1. Bring the group under [management in Azure AD PIM](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/groups-discover-groups). +1. Assign your test user as [eligible for the group in PIM](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/groups-assign-member-owner) with the role set to member. +1. Assign the second group to the Google Cloud / Google Workspace application. +1. 
Use on-demand provisioning to create the group in Google Cloud / Google Workspace. +1. Sign-in to Google Cloud / Google Workspace and assign the second group the necessary permissions to perform admin tasks. ++Now any end user that was made eligible for the group in PIM can get JIT access to the group in Google Cloud / Google Workspace by [activating their group membership](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/groups-activate-roles#activate-a-role). ++> [!IMPORTANT] +> The group membership is provisioned roughly a minute after the activation is complete. Please wait before attempting to sign-in to Google Cloud / Google Workspace. If the user is unable to access the necessary group in Google Cloud / Google Workspace, please review the provisioning logs to ensure that the user was successfully provisioned. ## Change log -* 10/17/2020 - Added support for additional G Suite user and group attributes. +* 10/17/2020 - Added support for more G Suite user and group attributes. * 10/17/2020 - Updated G Suite target attribute names to match what is defined [here](https://developers.google.com/admin-sdk/directory). * 10/17/2020 - Updated default attribute mappings.-* 03/18/2021 - Manager email is now synchronized instead of ID for all new users. For any existing users that were provisioned with a manager as an ID, you can do a restart through [Microsoft Graph](/graph/api/synchronization-synchronizationjob-restart?preserve-view=true&tabs=http&view=graph-rest-beta) with scope "full" to ensure that the email is provisioned. This change only impacts the GSuite provisioning job and not the older provisioning job beginning with Goov2OutDelta. Note, the manager email is provisioned when the user is first created or when the manager changes. The manager email is not provisioned if the manager changes their email address. +* 03/18/2021 - Manager email is now synchronized instead of ID for all new users. 
For any existing users that were provisioned with a manager as an ID, you can do a restart through [Microsoft Graph](/graph/api/synchronization-synchronizationjob-restart?preserve-view=true&tabs=http&view=graph-rest-beta) with scope "full" to ensure that the email is provisioned. This change only impacts the GSuite provisioning job and not the older provisioning job beginning with Goov2OutDelta. Note that the manager email is provisioned when the user is first created or when the manager changes. The manager email isn't provisioned if the manager changes their email address. -## Additional resources +## More resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) |
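The 03/18/2021 change-log entry above mentions restarting the provisioning job through Microsoft Graph with scope "full". As a hedged sketch of what that call looks like (the service principal and job IDs below are invented placeholders; substitute your own along with a valid bearer token):

```shell
# Build the Graph beta restart URL for a synchronization job.
# Both IDs are placeholders, not real values.
SP_ID="00000000-0000-0000-0000-000000000000"
JOB_ID="gsuite.00000000000000000000000000000000"
RESTART_URL="https://graph.microsoft.com/beta/servicePrincipals/$SP_ID/synchronization/jobs/$JOB_ID/restart"
echo "$RESTART_URL"

# With a valid token in $TOKEN, the restart with full scope would look like:
# curl -X POST "$RESTART_URL" \
#   -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
#   -d '{"criteria": {"resetScope": "Full"}}'
```

Here the `resetScope` value of `Full` corresponds to what the change log calls scope "full".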
active-directory | Rhombus Systems Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rhombus-systems-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Rhombus Systems for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Rhombus Systems. +++writer: twimmers ++ms.assetid: e5e53362-065c-4546-85f3-9454b8c0d4b1 ++++ Last updated : 07/05/2023++++# Tutorial: Configure Rhombus Systems for automatic user provisioning ++This tutorial describes the steps you need to perform in both Rhombus Systems and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Rhombus Systems](https://www.rhombussystems.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Rhombus Systems. +> * Remove users in Rhombus Systems when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Rhombus Systems. +> * [Single sign-on](rhombus-systems-tutorial.md) to Rhombus Systems (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Rhombus Systems with Admin permissions. ++## Step 1. 
Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Rhombus Systems](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Rhombus Systems to support provisioning with Azure AD +Contact Rhombus Systems support to configure Rhombus Systems to support provisioning with Azure AD. ++## Step 3. Add Rhombus Systems from the Azure AD application gallery ++Add Rhombus Systems from the Azure AD application gallery to start managing provisioning to Rhombus Systems. If you have previously set up Rhombus Systems for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. 
When scope is set to all users, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Rhombus Systems ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Rhombus Systems based on user assignments in Azure AD. ++### To configure automatic user provisioning for Rhombus Systems in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Rhombus Systems**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Rhombus Systems Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Rhombus Systems. If the connection fails, ensure your Rhombus Systems account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Rhombus Systems**. ++1. Review the user attributes that are synchronized from Azure AD to Rhombus Systems in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Rhombus Systems for update operations. 
If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Rhombus Systems API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Rhombus Systems| + ||||| + |userName|String|✓|✓| + |active|Boolean||✓| + |name.givenName|String||✓| + |name.familyName|String||✓| + |roles[primary eq "True"].value|String||✓| ++1. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Rhombus Systems, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users that you would like to provision to Rhombus Systems by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. 
Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
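To make the attribute mappings in the table above concrete, here's a sketch of the kind of SCIM 2.0 user payload the provisioning service sends for a create operation. The attribute names come from the table; every value is invented for illustration:

```shell
# Write a sample SCIM user covering the mapped attributes (all values are placeholders).
cat > /tmp/scim-user.json <<'EOF'
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "alice@contoso.com",
  "active": true,
  "name": { "givenName": "Alice", "familyName": "Smith" },
  "roles": [ { "primary": "True", "value": "User" } ]
}
EOF
# The matching attribute (userName) appears exactly once in the payload.
grep -c '"userName"' /tmp/scim-user.json   # prints 1
```

Because `userName` is the matching attribute, it's the value the service uses to decide between a create and an update on the Rhombus Systems side.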
active-directory | Vault Platform Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vault-platform-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi ## Step 2. Configure Vault Platform to support provisioning with Azure AD Contact Vault Platform support to configure Vault Platform to support provisioning with Azure AD. +### 1. Authentication ++Go to the Vault Platform, log in with your email and password (initial login method), then head to the **Administration > Authentication page**. ++There, first change the login method dropdown to **Identity Provider - Azure - SAML**. ++Using the details in the SAML setup instructions page, enter the information: ++ ++1. Issuer URI must be set to `vaultplatform` +2. SSO URL must be set to the value of Login URL +  +3. Download the **Certificate (Base64)** file, open it in a text editor and copy its contents (including the `--BEGIN/END CERTIFICATE--` markers) into the **Certificate** text field +  ++### 2. Data Integration ++Next, go to **Administration > Data Integration** inside Vault Platform. ++ ++1. For **Data Integration** select `Azure`. +1. For **Method of providing SCIM secret location** set `bearer`. +1. For **Secret** set a complex string, similar to a strong password. Keep this string secure; it's used later in **Step 5**. +1. Toggle **Set as active SCIM Provider** to be active. + ## Step 3. Add Vault Platform from the Azure AD application gallery Add Vault Platform from the Azure AD application gallery to start managing provisioning to Vault Platform. If you have previously set up Vault Platform for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). 
This section guides you through the steps to configure the Azure AD provisioning  -1. Under the **Admin Credentials** section, input your Vault Platform Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Vault Platform. If the connection fails, ensure your Vault Platform account has Admin permissions and try again. +1. Under the **Admin Credentials** section, input your Vault Platform Tenant URL (URL with structure `https://app.vaultplatform.com/api/scim/${organization-slug}`) and Secret Token (from Step 2.2). Click **Test Connection** to ensure Azure AD can connect to Vault Platform. If the connection fails, ensure your Vault Platform account has Admin permissions and try again.  |
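Given the Tenant URL structure above, you can sanity-check the credentials outside the portal with a raw SCIM request. The organization slug below is a placeholder, and the probe assumes the endpoint implements the standard SCIM `/ServiceProviderConfig` resource:

```shell
# Assemble the tenant URL from the documented structure; 'contoso' is a placeholder slug.
ORG_SLUG="contoso"
TENANT_URL="https://app.vaultplatform.com/api/scim/${ORG_SLUG}"
echo "$TENANT_URL"

# With the secret from Step 2.2 in $SCIM_SECRET, a quick probe would be:
# curl -H "Authorization: Bearer $SCIM_SECRET" "$TENANT_URL/ServiceProviderConfig"
```

A `200` response with a JSON body indicates the URL and bearer secret are usable; a `401` usually means the secret doesn't match what you set in the Data Integration page.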
aks | Cis Azure Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-azure-linux.md | + + Title: Center for Internet Security (CIS) Azure Linux benchmark +description: Learn how AKS applies the CIS benchmark with an Azure Linux image ++++ Last updated : 07/06/2023+++# Center for Internet Security (CIS) Azure Linux benchmark ++This article describes how Azure Kubernetes Service (AKS) and the Microsoft Azure Linux image align with the Center for Internet Security (CIS) benchmark. ++The security OS configuration applied to the Azure Linux Container Host for AKS image is based on the Azure Linux security baseline, which aligns with the CIS benchmark. As a secure service, AKS complies with SOC, ISO, PCI DSS, and HIPAA standards. For more information about the Azure Linux Container Host security, see [Security concepts for clusters in AKS][security-concepts-aks]. To learn more about the CIS benchmark, see [Center for Internet Security (CIS) Benchmarks][cis-benchmarks]. For more information on the Azure security baselines for Linux, see [Linux security baseline][linux-security-baseline]. ++## Azure Linux 2.0 ++This Azure Linux Container Host operating system is based on the **Azure Linux 2.0** image with built-in security configurations applied. ++As part of the security-optimized operating system: ++* AKS and Azure Linux provide a security-optimized host OS by default with no option to select an alternate operating system. +* The security-optimized host OS is built and maintained specifically for AKS and is **not** supported outside of the AKS platform. +* Unnecessary kernel module drivers have been disabled in the OS to reduce the attack surface. ++## Recommendations ++The following table has these sections: ++* **CIS ID:** The rule ID associated with each of the baseline rules. +* **Recommendation description:** A description of the recommendation issued by the CIS benchmark. 
+* **Level:** L1, or Level 1, recommends essential basic security requirements that can be configured on any system and should cause little or no interruption of service or reduced functionality. +* **Status:** + * *Pass* - The recommendation has been applied. + * *Fail* - The recommendation hasn't been applied. + * *N/A* - The recommendation relates to manifest file permission requirements that are not relevant to AKS. + * *Depends on Environment* - The recommendation is applied in the user's specific environment and isn't controlled by AKS. + * *Equivalent Control* - The recommendation has been implemented in a different equivalent manner. +* **Reason:** + * *Potential Operation Impact* - The recommendation wasn't applied because it would have a negative effect on the service. + * *Covered Elsewhere* - The recommendation is covered by another control in Azure cloud compute. ++The following are the results from the [CIS Azure Linux 2.0 Benchmark v1.0][cis-benchmark-azure-linux] recommendations based on the CIS rules: ++| CIS ID | Recommendation description | Status | Reason | +||||| +|1.1.4| Disable Automounting|Pass|| +|1.1.1.1|Ensure mounting of cramfs filesystems is disabled|Pass|| +|1.1.2.1|Ensure /tmp is a separate partition|Pass|| +|1.1.2.2|Ensure nodev option set on /tmp partition|Pass|| +|1.1.2.3|Ensure nosuid option set on /tmp partition|Pass|| +|1.1.8.1|Ensure nodev option set on /dev/shm partition|Pass|| +|1.1.8.2|Ensure nosuid option set on /dev/shm partition|Pass|| +|1.2.1|Ensure DNF gpgcheck is globally activated|Pass|| +|1.2.2|Ensure TDNF gpgcheck is globally activated|Pass|| +|1.5.1|Ensure core dump storage is disabled|Pass|| +|1.5.2|Ensure core dump backtraces are disabled|Pass|| +|1.5.3|Ensure address space layout randomization (ASLR) is enabled|Pass|| +|1.7.1|Ensure local login warning banner is configured properly|Pass|| +|1.7.2|Ensure remote login warning banner is configured properly|Pass|| +|1.7.3|Ensure permissions on /etc/motd are 
configured|Pass|| +|1.7.4|Ensure permissions on /etc/issue are configured|Pass|| +|1.7.5|Ensure permissions on /etc/issue.net are configured|Pass|| +|2.1.1|Ensure time synchronization is in use|Pass|| +|2.1.2|Ensure chrony is configured|Pass|| +|2.2.1|Ensure xinetd is not installed|Pass|| +|2.2.2|Ensure xorg-x11-server-common is not installed|Pass|| +|2.2.3|Ensure avahi is not installed|Pass|| +|2.2.4|Ensure a print server is not installed|Pass|| +|2.2.5|Ensure a dhcp server is not installed|Pass|| +|2.2.6|Ensure a dns server is not installed|Pass|| +|2.2.7|Ensure FTP client is not installed|Pass|| +|2.2.8|Ensure an ftp server is not installed|Pass|| +|2.2.9|Ensure a tftp server is not installed|Pass|| +|2.2.10|Ensure a web server is not installed|Pass|| +|2.2.11|Ensure IMAP and POP3 server is not installed|Pass|| +|2.2.12|Ensure Samba is not installed|Pass|| +|2.2.13|Ensure HTTP Proxy Server is not installed|Pass|| +|2.2.14|Ensure net-snmp is not installed or the snmpd service is not enabled|Pass|| +|2.2.15|Ensure NIS server is not installed|Pass|| +|2.2.16|Ensure telnet-server is not installed|Pass|| +|2.2.17|Ensure mail transfer agent is configured for local-only mode|Pass|| +|2.2.18|Ensure nfs-utils is not installed or the nfs-server service is masked|Pass|| +|2.2.19|Ensure rsync-daemon is not installed or the rsyncd service is masked|Pass|| +|2.3.1|Ensure NIS Client is not installed|Pass|| +|2.3.2|Ensure rsh client is not installed|Pass|| +|2.3.3|Ensure talk client is not installed|Pass|| +|2.3.4|Ensure telnet client is not installed|Pass|| +|2.3.5|Ensure LDAP client is not installed|Pass|| +|2.3.6|Ensure TFTP client is not installed|Pass|| +|3.1.1|Ensure IPv6 is enabled|Pass|| +|3.2.1|Ensure packet redirect sending is disabled|Pass|| +|3.3.1|Ensure source routed packets are not accepted|Pass|| +|3.3.2|Ensure ICMP redirects are not accepted|Pass|| +|3.3.3|Ensure secure ICMP redirects are not accepted|Pass|| +|3.3.4|Ensure suspicious packets are logged|Pass|| 
+|3.3.5|Ensure broadcast ICMP requests are ignored|Pass|| +|3.3.6|Ensure bogus ICMP responses are ignored|Pass|| +|3.3.7|Ensure Reverse Path Filtering is enabled|Pass|| +|3.3.8|Ensure TCP SYN Cookies is enabled|Pass|| +|3.3.9|Ensure IPv6 router advertisements are not accepted|Pass|| +|3.4.3.1.1|Ensure iptables package is installed|Pass|| +|3.4.3.1.2|Ensure nftables is not installed with iptables|Pass|| +|3.4.3.1.3|Ensure firewalld is either not installed or masked with iptables|Pass|| +|4.2|Ensure logrotate is configured|Pass|| +|4.2.2|Ensure all logfiles have appropriate access configured|Pass|| +|4.2.1.1|Ensure rsyslog is installed|Pass|| +|4.2.1.2|Ensure rsyslog service is enabled|Pass|| +|4.2.1.3|Ensure rsyslog default file permissions are configured|Pass|| +|4.2.1.4|Ensure logging is configured|Pass|| +|4.2.1.5|Ensure rsyslog is not configured to receive logs from a remote client|Pass|| +|5.1.1|Ensure cron daemon is enabled|Pass|| +|5.1.2|Ensure permissions on /etc/crontab are configured|Pass|| +|5.1.3|Ensure permissions on /etc/cron.hourly are configured|Pass|| +|5.1.4|Ensure permissions on /etc/cron.daily are configured|Pass|| +|5.1.5|Ensure permissions on /etc/cron.weekly are configured|Pass|| +|5.1.6|Ensure permissions on /etc/cron.monthly are configured|Pass|| +|5.1.7|Ensure permissions on /etc/cron.d are configured|Pass|| +|5.1.8|Ensure cron is restricted to authorized users|Pass|| +|5.1.9|Ensure at is restricted to authorized users|Pass|| +|5.2.1|Ensure permissions on /etc/ssh/sshd_config are configured|Pass|| +|5.2.2|Ensure permissions on SSH private host key files are configured|Pass|| +|5.2.3|Ensure permissions on SSH public host key files are configured|Pass|| +|5.2.4|Ensure SSH access is limited|Pass|| +|5.2.5|Ensure SSH LogLevel is appropriate|Pass|| +|5.2.6|Ensure SSH PAM is enabled|Pass|| +|5.2.7|Ensure SSH root login is disabled|Pass|| +|5.2.8|Ensure SSH HostbasedAuthentication is disabled|Pass|| +|5.2.9|Ensure SSH PermitEmptyPasswords is 
disabled|Pass|| +|5.2.10|Ensure SSH PermitUserEnvironment is disabled|Pass|| +|5.2.11|Ensure SSH IgnoreRhosts is enabled|Pass|| +|5.2.12|Ensure only strong Ciphers are used|Pass|| +|5.2.13|Ensure only strong MAC algorithms are used|Pass|| +|5.2.14|Ensure only strong Key Exchange algorithms are used|Pass|| +|5.2.15|Ensure SSH warning banner is configured|Pass|| +|5.2.16|Ensure SSH MaxAuthTries is set to 4 or less|Pass|| +|5.2.17|Ensure SSH MaxStartups is configured|Pass|| +|5.2.18|Ensure SSH LoginGraceTime is set to one minute or less|Pass|| +|5.2.19|Ensure SSH MaxSessions is set to 10 or less|Pass|| +|5.2.20|Ensure SSH Idle Timeout Interval is configured|Pass|| +|5.3.1|Ensure sudo is installed|Pass|| +|5.3.2|Ensure re-authentication for privilege escalation is not disabled globally|Pass|| +|5.3.3|Ensure sudo authentication timeout is configured correctly|Pass|| +|5.4.1|Ensure password creation requirements are configured|Pass|| +|5.4.2|Ensure lockout for failed password attempts is configured|Pass|| +|5.4.3|Ensure password hashing algorithm is SHA-512|Pass|| +|5.4.4|Ensure password reuse is limited|Pass|| +|5.5.2|Ensure system accounts are secured|Pass|| +|5.5.3|Ensure default group for the root account is GID 0|Pass|| +|5.5.4|Ensure default user umask is 027 or more restrictive|Pass|| +|5.5.1.1|Ensure password expiration is 365 days or less|Pass|| +|5.5.1.2|Ensure minimum days between password changes is configured|Pass|| +|5.5.1.3|Ensure password expiration warning days is 7 or more|Pass|| +|5.5.1.4|Ensure inactive password lock is 30 days or less|Pass|| +|5.5.1.5|Ensure all users last password change date is in the past|Pass|| +|6.1.1|Ensure permissions on /etc/passwd are configured|Pass|| +|6.1.2|Ensure permissions on /etc/passwd- are configured|Pass|| +|6.1.3|Ensure permissions on /etc/group are configured|Pass|| +|6.1.4|Ensure permissions on /etc/group- are configured|Pass|| +|6.1.5|Ensure permissions on /etc/shadow are configured|Pass|| +|6.1.6|Ensure 
permissions on /etc/shadow- are configured|Pass|| +|6.1.7|Ensure permissions on /etc/gshadow are configured|Pass|| +|6.1.8|Ensure permissions on /etc/gshadow- are configured|Pass|| +|6.1.9|Ensure no unowned or ungrouped files or directories exist|Pass|| +|6.1.10|Ensure world writable files and directories are secured|Pass|| +|6.2.1|Ensure password fields are not empty|Pass|| +|6.2.2|Ensure all groups in /etc/passwd exist in /etc/group|Pass|| +|6.2.3|Ensure no duplicate UIDs exist|Pass|| +|6.2.4|Ensure no duplicate GIDs exist|Pass|| +|6.2.5|Ensure no duplicate user names exist|Pass|| +|6.2.6|Ensure no duplicate group names exist|Pass|| +|6.2.7|Ensure root PATH Integrity|Pass|| +|6.2.8|Ensure root is the only UID 0 account|Pass|| +|6.2.9|Ensure all users' home directories exist|Pass|| +|6.2.10|Ensure users own their home directories|Pass|| +|6.2.11|Ensure users' home directories permissions are 750 or more restrictive|Pass|| +|6.2.12|Ensure users' dot files are not group or world writable|Pass|| +|6.2.13|Ensure users' .netrc Files are not group or world accessible|Pass|| +|6.2.14|Ensure no users have .forward files|Pass|| +|6.2.15|Ensure no users have .netrc files|Pass|| +|6.2.16|Ensure no users have .rhosts files|Pass|| ++## Next steps ++For more information about Azure Linux Container Host security, see the following articles: ++* [Azure Linux Container Host for AKS][linux-container-host-aks] +* [Security concepts for clusters in AKS][security-concepts-aks] ++<!-- LINKS - external --> ++<!-- LINKS - internal --> +[security-concepts-aks]: concepts-security.md +[cis-benchmarks]: /compliance/regulatory/offering-CIS-Benchmark +[cis-benchmark-azure-linux]: https://www.cisecurity.org/benchmark/azure_linux +[linux-security-baseline]: ../governance/policy/samples/guest-configuration-baseline-linux.md +[linux-container-host-aks]: ../azure-linux/intro-azure-linux.md |
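On AKS you don't need to verify these controls yourself, but as an illustration of what a recommendation means in practice, a rule like 5.2.16 (SSH MaxAuthTries set to 4 or less) can be spot-checked with a few lines of shell. The sample config below is fabricated; on a real node you'd point the check at `/etc/ssh/sshd_config`:

```shell
# Spot-check CIS 5.2.16 against an sshd_config file: MaxAuthTries must be 4 or less.
# Returns success only if the directive is present and compliant.
check_max_auth_tries() {
  awk 'tolower($1) == "maxauthtries" { found = 1; exit ($2 <= 4 ? 0 : 1) }
       END { if (!found) exit 1 }' "$1"
}

# Fabricated sample config for demonstration purposes.
printf 'PermitRootLogin no\nMaxAuthTries 4\n' > /tmp/sshd_config_sample
check_max_auth_tries /tmp/sshd_config_sample && echo "5.2.16 PASS"
```

The same pattern (grep/awk against a config file or `sysctl` output) underlies most automated CIS audit tooling.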
aks | Devops Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/devops-pipeline.md | description: Build and push images to Azure Container Registry; Deploy to Azure Previously updated : 03/15/2022 Last updated : 07/05/2023 zone_pivot_groups: pipelines-version Sign in to the [Azure portal](https://portal.azure.com/), and then select the [C az group create --name myapp-rg --location eastus # Create a container registry-az acr create --resource-group myapp-rg --name myContainerRegistry --sku Basic +az acr create --resource-group myapp-rg --name mycontainerregistry --sku Basic # Create a Kubernetes cluster az aks create \ Within your selected organization, create a _project_. If you don't have any pro 1. Set the service port to 8080. -1. Set the **Enable Review App for Pull Requests** checkbox for [review app](/azure/devops/pipelines/process/environments-kubernetes) related configuration to be included in the pipeline YAML auto-generated in subsequent steps. +1. Set the **Enable Review App for Pull Requests** checkbox for [review app](/azure/devops/pipelines/process/environments-kubernetes) related configuration to be included in the pipeline YAML autogenerated in subsequent steps. 1. Select **Validate and configure**. After the pipeline run is finished, explore what happened and then go see your a 1. Select **View environment**. -1. Select the instance of your app for the namespace you deployed to. If you stuck to the defaults we mentioned above, then it will be the **myapp** app in the **default** namespace. +1. Select the instance of your app for the namespace you deployed to. If you used the defaults, then it is the **myapp** app in the **default** namespace. 1. Select the **Services** tab. 
The build stage uses the [Docker task](/azure/devops/pipelines/tasks/build/docke path: 'manifests' ``` -The deployment job uses the _Kubernetes manifest task_ to create the `imagePullSecret` required by Kubernetes cluster nodes to pull from the Azure Container Registry resource. Manifest files are then used by the Kubernetes manifest task to deploy to the Kubernetes cluster. +The deployment job uses the _Kubernetes manifest task_ to create the `imagePullSecret` required by Kubernetes cluster nodes to pull from the Azure Container Registry resource. Manifest files are then used by the Kubernetes manifest task to deploy to the Kubernetes cluster. The manifest files, `service.yml` and `deployment.yml`, were generated when you used the **Deploy to Azure Kubernetes Service** template. ```YAML - stage: Deploy It also packaged and published a Helm chart as an artifact. In the release pipel 1. In the build summary, choose the **Release** icon to start a new release pipeline. - If you've previously created a release pipeline that uses these build artifacts, you'll - be prompted to create a new release instead. In that case, go to the **Releases** page and + If you've previously created a release pipeline that uses these build artifacts, you are prompted to create a new release instead. In that case, go to the **Releases** page and start a new release pipeline from there by choosing the **+** icon. 1. Select the **Empty job** template. It also packaged and published a Helm chart as an artifact. In the release pipel - **Kubernetes cluster**: Enter or select the AKS cluster you created. - - **Command**: Select **init** as the Helm command. This will install Tiller to your running Kubernetes cluster. + - **Command**: Select **init** as the Helm command. This installs Tiller to your running Kubernetes cluster. It will also set up any necessary local configuration.- Tick **Use canary image version** to install the latest pre-release version of Tiller. 
- You could also choose to upgrade Tiller if it's pre-installed by ticking **Upgrade Tiller**. - If these options are enabled, the task will run `helm init --canary-image --upgrade` + Tick **Use canary image version** to install the latest prerelease version of Tiller. + You could also choose to upgrade Tiller if it's preinstalled by ticking **Upgrade Tiller**. + If these options are enabled, the task runs `helm init --canary-image --upgrade` 1. Choose **+** in the **Agent job** and add another **Package and deploy Helm charts** task. Configure the settings for this task as follows: It also packaged and published a Helm chart as an artifact. In the release pipel When you select the **upgrade**, the task shows some more fields: * **Chart Type**: Select **File Path**. Alternatively, you can specify **Chart Name** if you want to- specify a URL or a chart name. For example, if the chart name is `stable/mysql`, the task will execute + specify a URL or a chart name. For example, if the chart name is `stable/mysql`, the task executes `helm upgrade stable/mysql` * **Chart Path**: This can be a path to a packaged chart or a path to an unpacked chart directory. It also packaged and published a Helm chart as an artifact. In the release pipel * **Release Name**: Enter a name for your release; for example, `azuredevops` - * **Recreate Pods**: Tick this checkbox if there is a configuration change during the release and you want to replace a running pod with the new configuration. + * **Recreate Pods**: Tick this checkbox if there's a configuration change during the release and you want to replace a running pod with the new configuration. * **Reset Values**: Tick this checkbox if you want the values built into the chart to override all values provided by the task. 
Another alternative is to set the **Set Values** option of the task to specify t ## Create a release to deploy your app -You're now ready to create a release, which means to start the process of running the release pipeline with the artifacts produced by a specific build. This will result in deploying the build: +You're now ready to create a release, which means to start the process of running the release pipeline with the artifacts produced by a specific build. This results in deploying the build: 1. Choose **+ Release** and select **Create a release**. |
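As a rough sketch (not the task's literal implementation), the upgrade options described above map onto Helm 2 flags along these lines; the release name and chart path are placeholders:

```shell
# Compose the Helm 2 command implied by the task settings above:
# --install creates the release if it doesn't exist yet, while --recreate-pods
# and --reset-values correspond to the 'Recreate Pods' and 'Reset Values' checkboxes.
RELEASE_NAME="azuredevops"
CHART_PATH="./myapp-chart"
HELM_CMD="helm upgrade --install --recreate-pods --reset-values $RELEASE_NAME $CHART_PATH"
echo "$HELM_CMD"
```

Seeing the flags spelled out this way can help when debugging a release: the task's log output includes the actual command it ran, which you can compare against this shape.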
aks | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md | Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
api-center | Set Up Api Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center.md | After you've been added to the API Center preview, you need to register the **Mi ## Create an API center -1. [Sign in](https://portal.azure.com) to the portal. +1. [Sign in to the Azure portal using this link](https://aka.ms/apicenter/azureportal). 1. In the search bar, enter *API Centers*. In this tutorial, you learned how to use the portal to: > * Add information about API environments and deployments > [!div class="nextstepaction"]-> [Learn more about API Center](key-concepts.md) +> [Learn more about API Center](key-concepts.md) |
api-management | Api Management Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md | Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct | [External cache](./api-management-howto-cache-external.md) | Yes | Yes | Yes | Yes | Yes | | [Client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) | Yes | Yes | Yes | Yes | Yes | | [Policies](api-management-howto-policies.md)<sup>4</sup> | Yes | Yes | Yes | Yes | Yes |+| [API authorizations](authorizations-overview.md) | Yes | Yes | Yes | Yes | Yes | | [Backup and restore](api-management-howto-disaster-recovery-backup-restore.md) | No | Yes | Yes | Yes | Yes | | [Management over Git](api-management-configuration-repository-git.md) | No | Yes | Yes | Yes | Yes | | Direct management API | No | Yes | Yes | Yes | Yes | |
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
api-management | Self Hosted Gateway Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md | To operate properly, each self-hosted gateway needs outbound connectivity on por | Public IP addresses of Azure Storage [service tag](../virtual-network/service-tags-overview.md) | ✔️ | Optional<sup>2</sup> | IP addresses must correspond to primary location of API Management instance. | | Hostname of Azure Blob Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<blob-storage-account-name>.blob.core.windows.net`) | | Hostname of Azure Table Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<table-storage-account-name>.table.core.windows.net`) |-| Endpoints for Azure Active Directory integration | ✔️ | Optional<sup>3</sup> | Required endpoints are `<region>.login.microsoft.com` and `login.microsoftonline.com`. | -| Endpoints for [Azure Application Insights integration](api-management-howto-app-insights.md) | Optional<sup>4</sup> | Optional<sup>4</sup> | Minimal required endpoints are:<ul><li>`rt.services.visualstudio.com:443`</li><li>`dc.services.visualstudio.com:443`</li><li>`{region}.livediagnostics.monitor.azure.com:443`</li></ul>Learn more in [Azure Monitor docs](../azure-monitor/app/ip-addresses.md#outgoing-ports) | -| Endpoints for [Event Hubs integration](api-management-howto-log-event-hubs.md) | Optional<sup>4</sup> | Optional<sup>4</sup> | Learn more in [Azure Event Hubs docs](../event-hubs/network-security.md) | -| Endpoints for [external cache integration](api-management-howto-cache-external.md) | Optional<sup>4</sup> | Optional<sup>4</sup> | This requirement depends on the external cache that is being used | +| Endpoints for Azure Resource Manager | ✔️ | Optional<sup>3</sup> | The required endpoint is `management.azure.com`. 
| +| Endpoints for Azure Active Directory integration | ✔️ | Optional<sup>4</sup> | Required endpoints are `<region>.login.microsoft.com` and `login.microsoftonline.com`. | +| Endpoints for [Azure Application Insights integration](api-management-howto-app-insights.md) | Optional<sup>5</sup> | Optional<sup>5</sup> | Minimal required endpoints are:<ul><li>`rt.services.visualstudio.com:443`</li><li>`dc.services.visualstudio.com:443`</li><li>`{region}.livediagnostics.monitor.azure.com:443`</li></ul>Learn more in [Azure Monitor docs](../azure-monitor/app/ip-addresses.md#outgoing-ports) | +| Endpoints for [Event Hubs integration](api-management-howto-log-event-hubs.md) | Optional<sup>5</sup> | Optional<sup>5</sup> | Learn more in [Azure Event Hubs docs](../event-hubs/network-security.md) | +| Endpoints for [external cache integration](api-management-howto-cache-external.md) | Optional<sup>5</sup> | Optional<sup>5</sup> | This requirement depends on the external cache that is being used | <sup>1</sup>For an API Management instance in an internal virtual network, enable private connectivity to the v2 configuration endpoint from the location of the self-hosted gateway, for example, using a private DNS in a peered network.<br/> <sup>2</sup>Only required in v2 when API inspector or quotas are used in policies.<br/>-<sup>3</sup>Only required when using Azure AD authentication or Azure AD-related policies.<br/> -<sup>4</sup>Only required when feature is used and requires public IP address, port, and hostname information.<br/> +<sup>3</sup>Only required when using Azure AD authentication to verify RBAC permissions.<br/> +<sup>4</sup>Only required when using Azure AD authentication or Azure AD-related policies.<br/> +<sup>5</sup>Only required when feature is used and requires public IP address, port, and hostname information.<br/> > [!IMPORTANT] > * DNS hostnames must be resolvable to IP addresses and the corresponding IP addresses must be reachable. |
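The endpoint table above reduces to a mapping from enabled features to required outbound hostnames (footnotes 3–5). A minimal illustrative sketch — the helper name and structure are my own, not part of the gateway; the hostnames come from the table:

```python
# Hypothetical helper modeling the self-hosted gateway's optional
# outbound endpoint requirements from the table above.
def required_endpoints(use_arm=False, use_aad=False, use_app_insights=False):
    """Return the outbound hostnames a gateway deployment must allow
    for the optional features that are enabled."""
    endpoints = []
    if use_arm:                                        # footnote 3: RBAC checks
        endpoints.append("management.azure.com")
    if use_aad:                                        # footnote 4: Azure AD auth
        endpoints += ["<region>.login.microsoft.com",
                      "login.microsoftonline.com"]
    if use_app_insights:                               # footnote 5: telemetry
        endpoints += ["rt.services.visualstudio.com:443",
                      "dc.services.visualstudio.com:443",
                      "{region}.livediagnostics.monitor.azure.com:443"]
    return endpoints

print(required_endpoints(use_arm=True, use_aad=True))
```

Remember that per the note in the table, each hostname must also be DNS-resolvable and reachable from the gateway's network.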
app-service | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md | Title: Migrate to App Service Environment v3 by using the migration feature description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 06/19/2023 Last updated : 07/06/2023 If your App Service Environment doesn't pass the validation checks or you try to |`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location. |This error appears if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. | |Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade is initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. | |App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You can migrate once these operations are complete. 
|-|Migrate is not available for this subscription|Support needs to be engaged for migrating this App Service Environment.|Open a support case to engage support to resolve your issue.| +|Migrate is not available for this subscription.|Support needs to be engaged for migrating this App Service Environment.|Open a support case to engage support to resolve your issue.| +|Your InternalLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. | ## Overview of the migration process using the migration feature |
app-service | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md | Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
application-gateway | Http Response Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md | HTTP 307 responses are presented when a redirection rule is specified with the * 400-499 response codes indicate an issue that is initiated from the client. These issues can range from the client initiating requests to an unmatched hostname, request timeout, unauthenticated request, malicious request, and more. +Application Gateway collects metrics that capture the distribution of 4xx/5xx status codes, and has a logging mechanism that captures information such as the URI and client IP address along with the response code. Metrics and logging enable further troubleshooting. Clients can also receive a 4xx response from other proxies between the client device and Application Gateway, such as a CDN or an authentication provider. See the following articles for more information. ++[Metrics supported by Application Gateway V2 SKU](application-gateway-metrics.md#metrics-supported-by-application-gateway-v2-sku) +[Diagnostic logs](application-gateway-diagnostics.md#diagnostic-logging) + #### 400 – Bad Request HTTP 400 response codes are commonly observed when: - Non-HTTP / HTTPS traffic is initiated to an application gateway with an HTTP or HTTPS listener. - HTTP traffic is initiated to a listener with HTTPS, with no redirection configured. - Mutual authentication is configured and unable to properly negotiate.-- The request is not compliant to RFC. +- The request isn't compliant with the RFC.
Some common reasons for the request to be non-compliant to RFC are: | - | - | | Invalid Host in request line | Host containing two colons (example.com:**8090:8080**) | | Missing Host Header | Request doesn't have Host Header |-| Presence of malformed or illegal character | Reserved characters are **&,!.** Workaround is to percent code it like %& | +| Presence of malformed or illegal character | Reserved characters are **&,!.** The workaround is to percent-encode the character. For example: **%26** for **&** | | Invalid HTTP version | Get /content.css HTTP/**0.3** |-| Header field name and URI contains non-ASCII Character | GET /**«úü¡»¿**.doc HTTP/1.1 | +| Header field name and URI contain non-ASCII Character | GET /**«úü¡»¿**.doc HTTP/1.1 | | Missing Content Length header for POST request | Self Explanatory | | Invalid HTTP Method | **GET123** /https://docsupdatetracker.net/index.html HTTP/1.1 |-| Duplicate Headers | Authorization:\<base64 encoded content\>,Authorization: \<base64 encoded content\> | +| Duplicate Headers | Authorization:\<base64 encoded content\>, Authorization: \<base64 encoded content\> | | Invalid value in Content-Length | Content-Length: **abc**,Content-Length: **-10**| For cases when mutual authentication is configured, several scenarios can lead to an HTTP 400 response being returned to the client, such as: For more information about troubleshooting mutual authentication, see [Error cod #### 401 – Unauthorized -An HTTP 401 unauthorized response can be returned when the backend pool is configured with [NTLM](/windows/win32/secauthn/microsoft-ntlm?redirectedfrom=MSDN) authentication. -There are several ways to resolve this: +An HTTP 401 unauthorized response is returned to the client if the client isn't authorized to access the resource. There are several reasons a 401 can be returned; the following are a few, with potential fixes.
+ - If the client has access, it might have an outdated browser cache. Clear the browser cache and try accessing the application again. ++An HTTP 401 unauthorized response can be returned to an Application Gateway probe request if the backend pool is configured with [NTLM](/windows/win32/secauthn/microsoft-ntlm?redirectedfrom=MSDN) authentication. In this scenario, the backend is marked as unhealthy. There are several ways to resolve this issue: - Allow anonymous access on backend pool. - Configure the probe to send the request to another "fake" site that doesn't require NTLM.-- Not recommended, as this will not tell us if the actual site behind the application gateway is active or not.+- Not recommended, as this won't tell us if the actual site behind the application gateway is active or not. - Configure application gateway to allow 401 responses as valid for the probes: [Probe matching conditions](/azure/application-gateway/application-gateway-probe-overview). #### 403 – Forbidden HTTP 403 Forbidden is presented when customers are utilizing WAF SKUs and have WAF configured in Prevention mode. If enabled WAF rulesets or custom deny WAF rules match the characteristics of an inbound request, the client is presented with a 403 Forbidden response. +Other reasons for clients receiving 403 responses include: +- You're using App Service as the backend and it's configured to allow access only from Application Gateway. App Service can then return a 403 error. This typically happens due to redirects/href links that point directly to App Service instead of pointing at the Application Gateway's IP address. +- If you're accessing a storage blob and the Application Gateway and the storage endpoint are in different regions, a 403 error is returned if the Application Gateway's public IP address isn't allow-listed. See [Grant access from an internet IP range](/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-from-an-internet-ip-range).
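The malformed-character row in the 400 table above calls for percent-encoding reserved characters before they appear in a request line. Python's standard library shows the idea; this is a generic sketch, not Application Gateway code:

```python
from urllib.parse import quote

# Reserved characters such as "&" and "!" must be percent-encoded,
# or a gateway may reject the request with 400 Bad Request.
print(quote("&"))               # %26
print(quote("a&b!c", safe=""))  # a%26b%21c
```

With `safe=""`, every reserved character is encoded; by default `quote` leaves `/` untouched so full paths can be passed through.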
+ #### 404 – Page not found An HTTP 404 response can be returned if a request is sent to an application gateway that is: An HTTP 408 response can be observed when client requests to the frontend listen #### 499 – Client closed the connection -An HTTP 499 response is presented if a client request that is sent to application gateways using v2 sku is closed before the server finished responding. This error can be observed in 2 scenarios. First scenario is when a large response is returned to the client and the client may have closed or refreshed their application before the server finished sending the large response. Second scenario is the timeout on the client side is low and does not wait long enough to receive the response from server. In this case it is better to increase the timeout on the client. In application gateways using v1 sku, an HTTP 0 response code may be raised for the client closing the connection before the server has finished responding as well. +An HTTP 499 response is presented if a client request that is sent to application gateways using v2 sku is closed before the server finished responding. This error can be observed in 2 scenarios. The first scenario is when a large response is returned to the client and the client might have closed or refreshed the application before the server finished sending a large response. The second scenario is when the timeout on the client side is low and doesn't wait long enough to receive the response from server. In this case it's better to increase the timeout on the client. In application gateways using v1 sku, an HTTP 0 response code may be raised for the client closing the connection before the server has finished responding as well. 
## 5XX response codes (server error) For information about scenarios where 502 errors occur, and how to troubleshoot #### 504 – Gateway timeout -Azure application Gateway V2 SKU sent HTTP 504 errors if the backend response time exceeds the time-out value which is configured in the Backend Setting. +Azure Application Gateway v2 SKU sends HTTP 504 errors if the backend response time exceeds the timeout value configured in the backend setting. IIS |
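The Application Gateway changes above mention metrics that capture the distribution of 4xx/5xx status codes alongside logged URI and client IP. The underlying aggregation is a simple bucketing by status class; the log-entry shape here is invented for illustration:

```python
from collections import Counter

# Each tuple mimics a diagnostic log record: (client IP, URI, status code).
entries = [
    ("203.0.113.5", "/api/orders", 502),
    ("203.0.113.5", "/login", 401),
    ("198.51.100.7", "/api/orders", 504),
    ("198.51.100.7", "/", 200),
]

# Bucket responses by status class (2xx, 4xx, 5xx, ...), as the
# gateway's metrics do for the status-code distribution.
distribution = Counter(f"{status // 100}xx" for _, _, status in entries)
print(distribution)  # 5xx: 2, 4xx: 1, 2xx: 1
```

A spike in one bucket then points you at the matching section above (for example, a growing 5xx bucket suggests backend timeouts or 502 scenarios).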
applied-ai-services | Choose Model Feature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/choose-model-feature.md | Azure Form Recognizer supports a wide variety of models that enable you to add i The following decision charts highlight the features of each **Form Recognizer v3.0** supported model and help you choose the best model to meet the needs and requirements of your application. > [!IMPORTANT]-> Be sure to heck the [**language support**](language-support.md) page for supported language text and field extraction by feature. +> Be sure to check the [**language support**](language-support.md) page for supported language text and field extraction by feature. ## Pretrained document-analysis models |
attestation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md | Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
automation | Automation Runbook Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md | Title: Azure Automation runbook types description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 06/18/2023 Last updated : 07/04/2023 The following are the current limitations and known issues with PowerShell runbo - Runbooks can't use [checkpoints](automation-powershell-workflow.md#use-checkpoints-in-a-workflow) to resume runbook if there's an error. - You can include only PowerShell, PowerShell Workflow runbooks, and graphical runbooks as child runbooks by using the [Start-AzAutomationRunbook](/powershell/module/az.automation/start-azautomationrunbook) cmdlet, which creates a new job. - Runbooks can't use the PowerShell [#Requires](/powershell/module/microsoft.powershell.core/about/about_requires) statement, it isn't supported in Azure sandbox or on Hybrid Runbook Workers and might cause the job to fail.+- Azure runbook doesn't support `Start-Job` with `-credential`. +- Azure doesn't support all PowerShell input parameters. [Learn more](runbook-input-parameters.md). **Known issues** The following are the current limitations and known issues with PowerShell runbo - Source control integration doesn't support PowerShell 7.1 (preview) Also, PowerShell 7.1 (preview) runbooks in source control gets created in Automation account as Runtime 5.1. - PowerShell 7.1 module management isn't supported through `Get-AzAutomationModule` cmdlets. - Runbook fails with no log trace if the input value contains the character '.+- Azure runbook doesn't support `Start-Job` with `-credential`. +- Azure doesn't support all PowerShell input parameters. [Learn more](runbook-input-parameters.md). 
**Known issues** The following are the current limitations and known issues with PowerShell runbo - Currently, PowerShell 7.2 (preview) runbooks are only supported from Azure portal. Rest API and PowerShell aren't supported. - Az module 8.3.0 is installed by default and can't be managed at the automation account level. Use custom modules to override the Az module to the desired version. - The imported PowerShell 7.2 (preview) module would be validated during job execution. Ensure that all dependencies for the selected module are also imported for successful job execution.-- PowerShell 7.2 module management is not supported through `Get-AzAutomationModule` cmdlets. +- PowerShell 7.2 module management is not supported through `Get-AzAutomationModule` cmdlets. +- Azure runbook doesn't support `Start-Job` with `-credential`. +- Azure doesn't support all PowerShell input parameters. [Learn more](runbook-input-parameters.md). **Known issues** |
automation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md | Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 # |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-cache-for-redis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md | Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-functions | Functions Bindings Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md | This section describes the function app configuration settings available for fun |Property |Default | Description | ||||-|maxDegreeOfParallelism|8 * (the number of available cores)|The integer number of concurrent invocations allowed for each blob-triggered function. The minimum allowed value is 1.| +|maxDegreeOfParallelism|8 * (the number of available cores)|The integer number of concurrent invocations allowed for all blob-triggered functions in a given function app. The minimum allowed value is 1.| ## Next steps |
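If `maxDegreeOfParallelism` needs adjusting, it is a *host.json* setting; a sketch assuming the blob extension's configuration section (the value shown is an example, not the default):

```json
{
  "version": "2.0",
  "extensions": {
    "blobs": {
      "maxDegreeOfParallelism": 4
    }
  }
}
```

Per the corrected description above, this cap now applies across all blob-triggered functions in the function app, not per function.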
azure-functions | Functions Reference Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md | Triggers and bindings can be declared and used in a function in a decorator base @app.function_name(name="HttpTrigger1") @app.route(route="req") def main(req):- user = req.params.get('user') - return f'Hello, {user}!' + user = req.params.get("user") + return f"Hello, {user}!" ``` You can also explicitly declare the attribute types and return type in the function by using Python type annotations. Doing so helps you use the IntelliSense and autocomplete features that are provided by many Python code editors. ```python-import azure.functions +import azure.functions as func app = func.FunctionApp() @app.function_name(name="HttpTrigger1") @app.route(route="req")-def main(req: azure.functions.HttpRequest) -> str: - user = req.params.get('user') - return f'Hello, {user}!' +def main(req: func.HttpRequest) -> str: + user = req.params.get("user") + return f"Hello, {user}!" ``` To learn about known limitations with the v2 model and their workarounds, see [Troubleshoot Python errors in Azure Functions](./recover-python-functions.md?pivots=python-mode-decorators). 
Update the Python code file *init.py*, depending on the interface that's used by # [ASGI](#tab/asgi) ```python-app=fastapi.FastAPI() +app = fastapi.FastAPI() @app.get("hello/{name}")-async def get_name( - name: str,): - return { - "name": name,} +async def get_name(name: str): + return {"name": name} def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse: return func.AsgiMiddleware(app).handle(req, context) For a full example, see [Using FastAPI Framework with Azure Functions](/samples/ # [WSGI](#tab/wsgi) ```python-app=Flask("Test") +app = Flask("Test") -@app.route("hello/<name>", methods=['GET']) +@app.route("hello/<name>", methods=["GET"]) def hello(name: str): return f"hello {name}" def main(req: func.HttpRequest, context) -> func.HttpResponse:- logging.info('Python HTTP trigger function processed a request.') + logging.info("Python HTTP trigger function processed a request.") return func.WsgiMiddleware(app).handle(req, context) ``` For a full example, see [Using Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/). fast_app = FastAPI() @fast_app.get("/return_http_no_body") async def return_http_no_body(): - return Response(content='', media_type="text/plain") + return Response(content="", media_type="text/plain") app = func.AsgiFunctionApp(app=fast_app, http_auth_level=func.AuthLevel.ANONYMOUS) app = func.AsgiFunctionApp(app=fast_app, # function_app.py import azure.functions as func -from flask import Flask, request, Response, redirect, url_for +from flask import Flask, Response flask_app = Flask(__name__) -logger = logging.getLogger("my-function") @flask_app.get("/return_http") def return_http(): - return Response('<h1>Hello WorldΓäó</h1>', mimetype='text/html') + return Response("<h1>Hello WorldΓäó</h1>", mimetype="text/html") app = func.WsgiFunctionApp(app=flask_app.wsgi_app, http_auth_level=func.AuthLevel.ANONYMOUS) |
azure-functions | Legacy Proxies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/legacy-proxies.md | Re-enabling proxies requires you to set a flag in the `AzureWebJobsFeatureFlags` + If this setting already exists, add `,EnableProxies` to the end of the existing value. -[`AzureWebJobsFeatureFlags`](functions-app-settings.md#azurewebjobsfeatureflags) is a comma-delimited array of flags used to enable preview and other temporary features. +[`AzureWebJobsFeatureFlags`](functions-app-settings.md#azurewebjobsfeatureflags) is a comma-delimited array of flags used to enable preview and other temporary features. To learn more about how to create and modify application settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings). -To learn more about how to create and modify application settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings). +>[!NOTE] +>Even when re-enabled using the `EnableProxies` flag, you can't work with proxies in the Azure portal. Instead, you must work directly with the *proxies.json* file for your function app. For more information, see [Advanced configuration](#advanced-configuration). ## <a name="create"></a>Create a proxy |
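The proxies guidance above says to append `,EnableProxies` when `AzureWebJobsFeatureFlags` already has a value. The comma-delimited handling can be sketched like this (the helper name is mine, purely illustrative):

```python
def add_feature_flag(current: str, flag: str) -> str:
    """Append a flag to a comma-delimited AzureWebJobsFeatureFlags
    value, handling an empty existing value and avoiding duplicates."""
    flags = [f for f in current.split(",") if f]
    if flag not in flags:
        flags.append(flag)
    return ",".join(flags)

print(add_feature_flag("", "EnableProxies"))          # EnableProxies
print(add_feature_flag("SomeFlag", "EnableProxies"))  # SomeFlag,EnableProxies
```

The resulting string is what you'd set as the application setting's value in the portal or via your deployment tooling.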
azure-monitor | Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md | description: This article describes the different management tasks that you'll t Previously updated : 04/06/2022 Last updated : 07/06/2023 |
azure-monitor | Azure Monitor Agent Windows Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md | Title: Set up the Azure Monitor agent on Windows client devices description: This article describes the instructions to install the agent on Windows 10, 11 client OS devices, configure data collection, manage and troubleshoot the agent. Previously updated : 4/2/2023 Last updated : 7/6/2023 Here is a comparison between client installer and VM extension for Azure Monitor ## Create and associate a 'Monitored Object' You need to create a 'Monitored Object' (MO) that creates a representation for the Azure AD tenant within Azure Resource Manager (ARM). This ARM entity is what Data Collection Rules are then associated with. **This Monitored Object needs to be created only once for any number of machines in a single AAD tenant**.-Currently this association is only **limited** to the Azure AD tenant scope, which means configuration applied to the tenant will be applied to all devices that are part of the tenant and running the agent. +Currently this association is only **limited** to the Azure AD tenant scope, which means configuration applied to the AAD tenant will be applied to all devices that are part of the tenant and running the agent installed via the client installer. Agents installed as a virtual machine extension aren't affected by this. The image below demonstrates how this works:  |
azure-monitor | Data Sources Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md | Title: Collect Syslog data sources with the Log Analytics agent in Azure Monitor description: Syslog is an event logging protocol that's common to Linux. This article describes how to configure collection of Syslog messages in Log Analytics and details the records they create. Previously updated : 04/06/2022 Last updated : 07/06/2023 The following table provides different examples of log queries that retrieve Sys * Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions. * Use [custom fields](../logs/custom-fields.md) to parse data from Syslog records into individual fields.-* [Configure Linux agents](../vm/monitor-virtual-machine.md) to collect other types of data. +* [Configure Linux agents](../vm/monitor-virtual-machine.md) to collect other types of data. |
azure-monitor | Data Sources Windows Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md | Title: Collect Windows event log data sources with Log Analytics agent in Azure Monitor description: The article describes how to configure the collection of Windows event logs by Azure Monitor and details of the records they create. Previously updated : 04/06/2022 Last updated : 07/06/2023 |
azure-monitor | Diagnostics Extension Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md | Title: Azure Diagnostics extension overview description: Use Azure Diagnostics for debugging, measuring performance, monitoring, and performing traffic analysis in cloud services, virtual machines, and service fabric. Previously updated : 04/06/2022 Last updated : 07/06/2023 |
azure-monitor | Log Analytics Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md | Title: Log Analytics agent overview description: This article helps you understand how to collect data and monitor computers hosted in Azure, on-premises, or other cloud environments with Log Analytics. -- Previously updated : 12/16/2021++ Last updated : 07/06/2023 |
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | The accuracy of the approximation largely depends on the configured sampling per There are three different sampling methods: -* **Adaptive sampling** automatically adjusts the volume of telemetry sent from the SDK in your ASP.NET/ASP.NET Core app, and from Azure Functions. This is the default sampling when you use the ASP.NET or ASP.NET Core SDK. Adaptive sampling is currently only available for ASP.NET server-side telemetry, and for Azure Functions. +* **Adaptive sampling** automatically adjusts the volume of telemetry sent from the SDK in your ASP.NET/ASP.NET Core app, and from Azure Functions. This is the default sampling when you use the ASP.NET or ASP.NET Core SDK. Adaptive sampling is currently only available for ASP.NET/ASP.NET Core server-side telemetry, and for Azure Functions. * **Fixed-rate sampling** reduces the volume of telemetry sent from both your ASP.NET or ASP.NET Core or Java server and from your users' browsers. You set the rate. The client and server will synchronize their sampling so that, in Search, you can navigate between related page views and requests. |
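Fixed-rate sampling as described above keeps a deterministic fraction of telemetry so client and server make the same keep/drop decision for related items. A toy sketch of the idea — not the Application Insights SDK's actual hashing algorithm:

```python
import zlib

def keep(operation_id: str, rate_percent: float) -> bool:
    """Deterministically decide whether to keep an item: the same
    operation id always yields the same decision, so related page
    views and requests are sampled in or out together."""
    bucket = zlib.crc32(operation_id.encode()) % 10000  # 0..9999
    return bucket < rate_percent * 100

# At 100% everything is kept; at 0% nothing is.
print(keep("op-123", 100))  # True
print(keep("op-123", 0))    # False
```

Because the decision depends only on the operation id, a request sampled in on the server matches the page view sampled in on the browser, which is what keeps Search navigation between related items intact.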
azure-monitor | Prometheus Metrics Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md | Last updated 05/10/2023 Azure Monitor managed service for Prometheus is a component of [Azure Monitor Metrics](data-platform-metrics.md), providing more flexibility in the types of metric data that you can collect and analyze with Azure Monitor. Prometheus metrics share some features with platform and custom metrics, but use some different features to better support open source tools such as [PromQL](https://aka.ms/azureprometheus-promio-promql) and [Grafana](../../managed-grafan). -Azure Monitor managed service for Prometheus allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Compute Foundation. This fully managed service allows you to use the [Prometheus query language (PromQL)](https://aka.ms/azureprometheus-promio-promql) to analyze and alert on the performance of monitored infrastructure and workloads without having to operate the underlying infrastructure. +Azure Monitor managed service for Prometheus allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Computing Foundation. This fully managed service allows you to use the [Prometheus query language (PromQL)](https://aka.ms/azureprometheus-promio-promql) to analyze and alert on the performance of monitored infrastructure and workloads without having to operate the underlying infrastructure. > [!IMPORTANT] > Azure Monitor managed service for Prometheus is intended for storing information about service health of customer machines and applications. 
It is not intended for storing any data classified as Personally Identifiable Information (PII) or End User Identifiable Information (EUII). We strongly recommend that you do not send any sensitive information (usernames, credit card numbers, etc.) into Azure Monitor managed service for Prometheus fields like metric names, label names, or label values. |
azure-monitor | Basic Logs Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md | For more information, see [Set a table's log data plan](basic-logs-configure.md) > [!NOTE] > Billing of queries on Basic Logs is not yet enabled. You can query Basic Logs for free until early 2023. + ## Limitations Queries with Basic Logs are subject to the following limitations: ### KQL language limits |
azure-monitor | Ingest Logs Event Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingest-logs-event-hub.md | To send events from Azure Event Hubs to Azure Monitor Logs, you need these resou Azure Monitor currently supports ingestion from Event Hubs in these regions: -| Americas | Europe | Middle East | Africa | Asia Pacific | -| - | - | - | - | - | -| Brazil South | France Central | Qatar Central | South Africa North | Australia Central | -| Brazil Southeast | France South | UAE Central | South Africa West | Australia Central 2 | -| Canada Central | Germany North | UAE North | | Australia East | -| Canada East | Germany West Central | | | Central India | -| Central US | North Europe | | | East Asia | -| East US | Norway East | | | Japan East | -| East US 2 | Norway West | | | Japan West | -| North Central US | Poland Central | | | Jio India Central | -| South Central US | Sweden Central | | | Jio India West | -| West Central US | Sweden South | | | South India | -| West US | Switzerland North | | | | -| West US 2 | Switzerland West | | | | -| West US 3 | UK South | | | | -| | UK West | | | | -| | West Europe | | | | -+| Americas | Europe | Middle East | Africa | Asia Pacific | +| - | - | - | - | - | +| Brazil South | France Central | UAE North | South Africa North | Australia Central | +| Brazil Southeast | North Europe | | | Australia East | +| Canada Central | Norway East | | | Australia Southeast | +| Canada East | Switzerland North | | | Central India | +| East US | Switzerland West | | | East Asia | +| East US 2 | UK South | | | Japan East | +| South Central US | UK West | | | Jio India West | +| West US | West Europe | | | Korea Central | +| West US 3 | | | | Southeast Asia | ## Collect required information |
azure-monitor | Logs Dedicated Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md | Title: Azure Monitor Logs Dedicated Clusters description: Customers meeting the minimum commitment tier could use dedicated clusters Previously updated : 01/01/2023 Last updated : 07/01/2023 # Create and manage a dedicated cluster in Azure Monitor Logs -Linking a Log Analytics workspace to a dedicated cluster in Azure Monitor provides advanced capabilities and higher query utilization. Clusters require a minimum ingestion commitment of 500 GB per day. You can link and unlink workspaces from a dedicated cluster without any data loss or service interruption. +Linking a Log Analytics workspace to a dedicated cluster in Azure Monitor provides advanced capabilities and higher query utilization. Clusters require a minimum ingestion commitment of 100 GB per day. You can link and unlink workspaces from a dedicated cluster without any data loss or service interruption. ## Advanced capabilities Capabilities that require dedicated clusters: - **[Cross-query optimization](../logs/cross-workspace-query.md)** - Cross-workspace queries run faster when workspaces are on the same cluster. - **Cost optimization** - Link your workspaces in the same region to a cluster to get the commitment tier discount for all workspaces, even ones with low ingestion that aren't eligible for a commitment tier discount.-- **[Availability zones](../../availability-zones/az-overview.md)** - Protect your data from datacenter failures by relying on datacenters in different physical locations, equipped with independent power, cooling, and networking. The physical separation in zones and independent infrastructure makes an incident far less likely since the workspace can rely on the resources from any of the zones. 
[Azure Monitor availability zones](./availability-zones.md) covers broader parts of the service and when available in your region, extends your Azure Monitor resilience automatically. Azure Monitor creates dedicated clusters as availability-zone-enabled (`isAvailabilityZonesEnabled`: 'true') by default in supported regions. You can't alter this setting after creating the cluster. -- Availability zones aren't currently supported in all regions. New clusters you create in supported regions have availability zones enabled by default. +- **[Availability zones](../../availability-zones/az-overview.md)** - Protect your data from datacenter failures by relying on datacenters in different physical locations, equipped with independent power, cooling, and networking. The physical separation in zones and independent infrastructure makes an incident far less likely since the workspace can rely on the resources from any of the zones. [Azure Monitor availability zones](./availability-zones.md#service-resiliencesupported-regions) covers broader parts of the service and when available in your region, extends your Azure Monitor resilience automatically. Azure Monitor creates dedicated clusters as availability-zone-enabled (`isAvailabilityZonesEnabled`: 'true') by default in supported regions. [Availability zones for dedicated clusters](./availability-zones.md#data-resiliencesupported-regions) aren't currently supported in all regions. +- **[Ingest from Azure Event Hubs](../logs/ingest-logs-event-hub.md)** - Lets you ingest data directly from an event hub into a Log Analytics workspace. A dedicated cluster lets you use this capability when the combined ingestion from all linked workspaces meets the commitment tier. ## Cluster pricing model-Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. 
See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. +Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 100 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. ## Required permissions Provide the following properties when creating a new dedicated cluster: - **ClusterName**: Must be unique for the resource group. - **ResourceGroupName**: Use a central IT resource group because many teams in the organization usually share clusters. For more design considerations, review [Design a Log Analytics workspace configuration](../logs/workspace-design.md). - **Location**-- **SkuCapacity**: You can set the commitment tier (formerly called capacity reservations) to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicated clusters](./cost-logs.md#dedicated-clusters). +- **SkuCapacity**: You can set the commitment tier (formerly called capacity reservations) to 100, 200, 300, 400, 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicated clusters](./cost-logs.md#dedicated-clusters). - **Managed identity**: Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): - System-assigned managed identity - Generated automatically with the cluster creation when identity `type` is set to "*SystemAssigned*". This identity can be used later to grant storage access to your Key Vault for wrap and unwrap operations. 
Content-type: application/json }, "sku": { "name": "capacityReservation",- "Capacity": 500 + "Capacity": 100 }, "properties": { "billingType": "Cluster", Send a GET request on the cluster resource and look at the *provisioningState* v }, "sku": { "name": "capacityreservation",- "capacity": 500 + "capacity": 100 }, "properties": { "provisioningState": "ProvisioningAccount", Send a GET request on the cluster resource and look at the *provisioningState* v "isAvailabilityZonesEnabled": false, "capacityReservationProperties": { "lastSkuUpdate": "last-sku-modified-date",- "minCapacity": 500 + "minCapacity": 100 } }, "id": "/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.OperationalInsights/clusters/cluster-name", Authorization: Bearer <token> }, "sku": { "name": "capacityreservation",- "capacity": 500 + "capacity": 100 }, "properties": { "provisioningState": "Succeeded", Authorization: Bearer <token> "isAvailabilityZonesEnabled": false, "capacityReservationProperties": { "lastSkuUpdate": "last-sku-modified-date",- "minCapacity": 500 + "minCapacity": 100 } }, "id": "/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.OperationalInsights/clusters/cluster-name", The same as for 'clusters in a resource group', but in subscription scope. ## Update commitment tier in cluster -When the data volume to linked workspaces changes over time, you can update the Commitment Tier level appropriately to optimize cost. The tier is specified in units of Gigabytes (GB) and can have values of 500, 1000, 2000 or 5000 GB per day. You don't have to provide the full REST request body, but you must include the sku. +When the data volume to linked workspaces changes over time, you can update the Commitment Tier level appropriately to optimize cost. The tier is specified in units of Gigabytes (GB) and can have values of 100, 200, 300, 400, 500, 1000, 2000 or 5000 GB per day. 
You don't have to provide the full REST request body, but you must include the sku. During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period. During the commitment period, you can change to a higher commitment tier, which ```azurecli az account set --subscription "cluster-subscription-id" -az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --sku-capacity 500 +az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --sku-capacity 100 ``` #### [PowerShell](#tab/powershell) az monitor log-analytics cluster update --resource-group "resource-group-name" - ```powershell Select-AzSubscription "cluster-subscription-id" -Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity 500 +Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity 100 ``` #### [REST API](#tab/restapi) Authorization: Bearer <token> - 400--The body of the request is null or in bad format. - 400--SKU name is invalid. Set SKU name to capacityReservation. - 400--Capacity was provided but SKU isn't capacityReservation. Set SKU name to capacityReservation.-- 400--Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.+- 400--Missing Capacity in SKU. Set Capacity value to 100, 200, 300, 400, 500, 1000, 2000 or 5000 GB/day. - 400--Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.-- 400--No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.+- 400--No SKU was set. Set the SKU name to capacityReservation and Capacity value to 100, 200, 300, 400, 500, 1000, 2000 or 5000 GB/day. - 400--Identity is null or empty. 
Set Identity with systemAssigned type. - 400--KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation. - 400--Operation can't be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed. |
azure-monitor | Monitor Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/monitor-workspace.md | Last updated 07/02/2023 To maintain the performance and availability of your Log Analytics workspace in Azure Monitor, you need to be able to proactively detect any issues that arise. This article describes how to monitor the health of your Log Analytics workspace by using data in the [Operation](/azure/azure-monitor/reference/tables/operation) table. This table is included in every Log Analytics workspace. It contains error messages and warnings that occur in your workspace. We recommend that you create alerts for issues with the level of Warning and Error. + ## _LogOperation function Azure Monitor Logs sends information on any issues to the [Operation](/azure/azure-monitor/reference/tables/operation) table in the workspace where the issue occurred. The `_LogOperation` system function is based on the **Operation** table and provides a simplified set of information for analysis and alerting. The following section provides information on data collection. #### Operation: Azure Activity Log collection -"Access to the subscription was lost. Ensure that the \<**subscription id**\> subscription is in the \<**tenant id**\> Azure Active Directory tenant. If the subscription is transferred to another tenant, there is no impact to the services, but information for the tenant could take up to an hour to propagate." +"Access to the subscription was lost. Ensure that the \<**subscription id**\> subscription is in the \<**tenant id**\> Azure Active Directory tenant. If the subscription is transferred to another tenant, there's no impact to the services, but information for the tenant could take up to an hour to propagate." In some situations, like moving a subscription to a different tenant, the Azure activity logs might stop flowing into the workspace. 
In those situations, you need to reconnect the subscription following the process described in this article. Check the `_LogOperation` table for the agent event:</br> `_LogOperation | where TimeGenerated >= ago(6h) | where Category == "Agent" | where Operation == "Linux Agent" | distinct _ResourceId` -The list will show the resource IDs where the agent has the wrong configuration. To mitigate the issue, reinstall the agents listed. +The list shows the resource IDs where the agent has the wrong configuration. To mitigate the issue, reinstall the agents listed. ## Alert rules Use the process in [Create, view, and manage log alerts by using Azure Monitor]( | `_LogOperation | where Level == "Error"` | 0 | 5 | 5 | | `_LogOperation | where Level == "Warning"` | 0 | 1,440 | 1,440 | -These alert rules will respond the same to all operations with Error or Warning. As you become more familiar with the operations that are generating alerts, you might want to respond differently for particular operations. For example, you might want to send notifications to different people for particular operations. +These alert rules respond the same to all operations with Error or Warning. As you become more familiar with the operations that are generating alerts, you might want to respond differently for particular operations. For example, you might want to send notifications to different people for particular operations. To create an alert rule for a specific operation, use a query that includes the **Category** and **Operation** columns. |
azure-monitor | Move Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace.md | description: Learn how to move your Log Analytics workspace to another subscript Previously updated : 09/01/2022 Last updated : 07/06/2023 Consider these points before you move a Log Analytics workspace: - Start/Stop VMs during off-hours - Microsoft Defender for Cloud - Workspace keys (both primary and secondary) are regenerated with a workspace move operation. If you keep a copy of your workspace keys in Azure Key Vault, update them with the new keys generated after the workspace is moved.-- Connected [Log Analytics agents](../agents/log-analytics-agent.md) remain connected and keep sending data to the workspace after the move. [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) will be disconnected via data collection rules during the move and should be reconfigured after the move.+- Connected [Log Analytics agents](../agents/log-analytics-agent.md) and [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) remain connected to the workspace after the move with no interruption to ingestion. >[!IMPORTANT] > **Microsoft Sentinel customers** |
azure-monitor | Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/scope.md | Last updated 10/20/2021 # Log query scope and time range in Azure Monitor Log Analytics When you run a [log query](../logs/log-query-overview.md) in [Log Analytics in the Azure portal](../logs/log-analytics-tutorial.md), the set of data evaluated by the query depends on the scope and the time range that you select. This article describes the scope and time range and how you can set each depending on your requirements. It also describes the behavior of different types of scopes. ## Query scope The query scope defines the records that are evaluated by the query. This will usually include all records in a single Log Analytics workspace or Application Insights application. Log Analytics also allows you to set a scope for a particular monitored Azure resource. This allows a resource owner to focus only on their data, even if that resource writes to multiple workspaces. |
azure-monitor | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md | Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-netapp-files | Azure Netapp Files Resource Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md | For volumes 100 TiB or under, you can increase the `maxfiles` limit up to 531,27 | Volume size (quota) | Automatic readjustment of the `maxfiles` limit | | - | - |-| > 100 TiB | 2,550,135,120 | +| > 100 TiB | 2,550,135,120 | +| 50 - 100 TiB | 1,530,081,072 to 2,550,135,120 | You can increase the `maxfiles` limit beyond 2,550,135,120 using a support request. For every 2,550,135,120 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 120 TiB. For example, if you increase `maxfiles` limit from 2,550,135,120 to 5,100,270,240 files (or any number in between), you need to increase the volume quota to at least 240 TiB. You can create an Azure support request to increase the adjustable limits from t - [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md) - [Regional capacity quota for Azure NetApp Files](regional-capacity-quota.md) - [Request region access for Azure NetApp Files](request-region-access.md)-- [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)+- [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md) |
azure-netapp-files | Backup Restore New Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md | Restoring a backup creates a new volume with the same protocol type. This articl * Restoring a backup to a new volume is not dependent on the networking type used by the source volume. You can restore the backup of a volume configured with Basic networking to a volume configured with Standard networking and vice versa. -See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for additional considerations about using Azure NetApp Files backup. +> [!CAUTION] +> Running multiple concurrent volume restores using Azure NetApp Files backup may increase the time it takes for each individual, in-progress restore to complete. If time is a factor, prioritize and sequence the most important volume restores, and wait until they complete before starting other, lower-priority volume restores. ++See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup. ## Steps See [Requirements and considerations for Azure NetApp Files backup](backup-requi 3. In the Create a Volume page that appears, provide information for the fields in the page as applicable, and select **Review + Create** to begin restoring the backup to a new volume. * The **Protocol** field is pre-populated from the original volume and cannot be changed. - However, if you restore a volume from the backup list at the NetApp account level, you need to specify the Protocol field. 
The Protocol field must match the protocol of the original volume. Otherwise, the restore operation fails with the following error: `Protocol Type value mismatch between input and source volume of backupId <backup-id of the selected backup>. Supported protocol type : <Protocol Type of the source volume>` * The **Quota** value must be greater than or equal to the size of the backup from which the restore is triggered (minimum 100 GiB). - * The **Capacity pool** that the backup is restored into must have sufficient unused capacity to host the new restored volume. Otherwise, the restore operation will fail. + * The **Capacity pool** that the backup is restored into must have sufficient unused capacity to host the new restored volume. Otherwise, the restore operation fails.  |
azure-netapp-files | Network Attached Storage Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-concept.md | Datasets in a NAS environment can be structured (data in a well-defined format, NAS is a common protocol across many industries, including oil & gas, high performance computing, media and entertainment, EDA, financial services, healthcare, genomics, manufacturing, higher education, and many others. Workloads can vary from simple file shares and home directories to applications with thousands of cores pushing operations to a single share, as well as more modernized application stacks, such as Kubernetes and container deployments. +To learn more about use cases and workloads, see [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md). ## Next steps * [Understand NAS protocols](network-attached-storage-protocols.md) |
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md | Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md | Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md | Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-resource-manager | Quickstart Troubleshoot Bicep Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-bicep-deployment.md | The Bicep file attempts to reference a virtual network that doesn't exist in you 'Standard_ZRS' 'Premium_LRS' ])-parameter storageAccountType string = 'Standard_LRS' +param storageAccountType string = 'Standard_LRS' @description('Prefix for storage name.') param prefixName string |
azure-signalr | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md | Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
azure-vmware | Attach Azure Netapp Files To Azure Vmware Solution Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md | There are some important best practices to follow for optimal performance of NFS - Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 64, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). - Ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones) using the [availability zone volume placement](../azure-netapp-files/manage-availability-zone-volume-placement.md) in the same subscription. You can view your AVS private cloud's availability zone in the overview pane within the AVS private cloud. -For performance benchmarks that Azure NetApp Files datastores deliver for virtual machines on Azure VMware Solution, see [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](../azure-netapp-files/performance-benchmarks-azure-vmware-solution.md). 
+For performance benchmarks that Azure NetApp Files datastores deliver for VMs on Azure VMware Solution, see [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](../azure-netapp-files/performance-benchmarks-azure-vmware-solution.md). ## Attach an Azure NetApp Files volume to your private cloud To attach an Azure NetApp Files volume to your private cloud using Azure CLI, fo +## Protect Azure NetApp Files datastores and VMs ++Cloud Backup for Virtual Machines is a plug-in for Azure VMware Solution that provides backup and restore capabilities for datastores and VMs residing on Azure NetApp Files datastores. With Cloud Backup for Virtual Machines, you can take VM-consistent snapshots for quick recovery points and easily restore VMs and VMDKs residing on Azure NetApp Files datastores. For more information, see [Install Cloud Backup for Virtual Machines](install-cloud-backup-virtual-machines.md). + ## Service level change for Azure NetApp Files datastore Based on the performance requirements of the datastore, you can change the service level of the Azure NetApp Files volume used for the datastore by following the instructions to [dynamically change the service level of a volume for Azure NetApp Files](../azure-netapp-files/dynamic-change-volume-service-level.md). |
azure-vmware | Backup Azure Netapp Files Datastores Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/backup-azure-netapp-files-datastores-vms.md | You must create backup policies before you can use Cloud Backup for Virtual Mach | - | - | | VM consistency | Check this box to pause the VMs and create a VMware snapshot each time the backup job runs. <br> When you check the VM consistency box, backup operations might take longer and require more storage space. In this scenario, the VMs are first paused, then VMware performs a VM consistent snapshot. Cloud Backup for Virtual Machines then performs its backup operation, and then VM operations are resumed. <br> VM guest memory is not included in VM consistency snapshots. | | Include datastores with independent disks | Check this box to include any datastores with independent disks that contain temporary data in your backup. | - | Scripts | Enter the fully qualified path of the prescript or postscript that you want the Cloud Backup for Virtual Machines to run before or after backup operations. For example, you can run a script to update Simple Network Management Protocol (SNMP) traps, automate alerts, and send logs. The script path is validated at the time the script is executed. <br> **NOTE**: Prescripts and postscripts must be located on the virtual appliance VM. To enter multiple scripts, press **Enter** after each script path to list each script on a separate line. The semicolon (;) character is not allowed. | 7. Select **Add** to save your policy. You can verify that the policy has been created successfully and review the policy configuration by selecting the policy in the **Policies** page. |
azure-vmware | Deploy Disaster Recovery Using Jetstream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md | Title: Deploy disaster recovery using JetStream DR description: Learn how to implement JetStream DR for your Azure VMware Solution private cloud and on-premises VMware workloads. Previously updated : 07/15/2022 Last updated : 7/6/2022 -In this article, you'll implement JetStream DR for your Azure VMware Solution private cloud and on-premises VMware workloads. +In this article, you'll implement JetStream DR for your Azure VMware Solution private cloud and on-premises VMware vSphere workloads. To learn more about JetStream DR, see: To learn more about JetStream DR, see: | Items | Description | | | |-| **JetStream Management Server Virtual Appliance (MSA)** | MSA enables both Day 0 and Day 2 configuration, such as primary sites, protection domains, and recovering VMs. The MSA is deployed from an OVA on a vSphere node by the cloud admin. The MSA collects and maintains statistics relevant to VM protection and implements a vCenter plugin that allows you to manage JetStream DR natively with the vSphere Client. The MSA doesn't handle replication data of protected VMs. | -| **JetStream DR Virtual Appliance (DRVA)** | Linux-based Virtual Machine appliance receives protected VMs replication data from the source ESXi host. It maintains the replication log and manages the transfer of the VMs and their data to the object store such as Azure Blob Storage. Depending on the number of protected VMs and the amount of VM data to replicate, the private cloud admin can create one or more DRVA instances. | -| **JetStream ESXi host components (IO Filter packages)** | JetStream software installed on each ESXi host configured for JetStream DR. The host driver intercepts the vSphere VMs IO and sends the replication data to the DRVA. 
The IO filters also monitor relevant events, such as vMotion, Storage vMotion, snapshots, etc. | +| **JetStream Management Server Virtual Appliance (MSA)** | MSA enables both Day 0 and Day 2 configuration, such as primary sites, protection domains, and recovering VMs. The MSA is deployed from an OVA file on a vSphere node by the cloud admin. The MSA collects and maintains statistics relevant to VM protection and implements a vCenter Server plugin that allows you to manage JetStream DR natively with the vSphere Client. The MSA doesn't handle replication data of protected VMs. | +| **JetStream DR Virtual Appliance (DRVA)** | Linux-based Virtual Machine appliance receives protected VMs replication data from the source ESXi host. It maintains the replication log and manages the transfer of the VMs and their data to the object store such as Azure Blob Storage. Depending upon the number of protected VMs and the amount of VM data to replicate, the private cloud admin can create one or more DRVA instances. | +| **JetStream ESXi host components (IO Filter packages)** | JetStream software installed on each ESXi host configured for JetStream DR. The host driver intercepts the vSphere VMs I/O and sends the replication data to the DRVA. The IO filters also monitor relevant events, such as vMotion, Storage vMotion, snapshots, etc. | | **JetStream Protected Domain** | Logical group of VMs that will be protected together using the same policies and runbook. The data for all VMs in a protection domain is stored in the same Azure Blob container instance. A single DRVA instance handles replication to remote DR storage for all VMs in a Protected Domain. | | **Azure Blob Storage containers** | The protected VMs replicated data is stored in Azure Blobs. JetStream software creates one Azure Blob container instance for each JetStream Protected Domain. | |
azure-vmware | Disaster Recovery Using Vmware Site Recovery Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md | Title: Deploy disaster recovery with VMware Site Recovery Manager description: Deploy disaster recovery with VMware Site Recovery Manager (SRM) in your Azure VMware Solution private cloud. Previously updated : 7/5/2023 Last updated : 7/6/2023 # Deploy disaster recovery with VMware Site Recovery Manager (SRM) In this article, you'll implement disaster recovery for on-premises VMware vSphe VMware SRM helps you plan, test, and run the recovery of VMs between a protected VMware vCenter Server site and a recovery VMware vCenter Server site. You can use VMware SRM with Azure VMware Solution with the following two DR scenarios: -- On-premises VMware to Azure VMware Solution private cloud disaster recovery +- On-premises VMware vSphere to Azure VMware Solution private cloud disaster recovery - Primary Azure VMware Solution to Secondary Azure VMware Solution private cloud disaster recovery +The diagram shows the deployment of the on-premises VMware vSphere to Azure VMware Solution private cloud disaster recovery scenario. ++ The diagram shows the deployment of the primary Azure VMware Solution to secondary Azure VMware Solution scenario. :::image type="content" source="media/vmware-srm-vsphere-replication/vmware-site-recovery-manager-diagram.png" alt-text="Diagram showing the VMware Site Recovery Manager (SRM) disaster recovery solution in Azure VMware Solution." border="false" lightbox="media/vmware-srm-vsphere-replication/vmware-site-recovery-manager-diagram.png"::: You can use VMware SRM to implement different types of recovery, such as: - **Bidirectional Protection** uses a single set of paired VMware SRM sites to protect VMs in both directions. Each site can simultaneously be a protected site and a recovery site, but for a different set of VMs. 
>[!IMPORTANT]->Azure VMware Solution doesn't support: +>Azure VMware Solution doesn't support: >->- Array-based replication and storage policy protection groups ->- vVOLs Protection Groups +>- Array-based replication and storage policy protection groups +>- VMware vVOLs Protection Groups >- VMware SRM IP customization using SRM command-line tools->- One-to-Many and Many-to-One topology +>- One-to-Many and Many-to-One topologies >- Custom VMware SRM plug-in identifier or extension ID |
azure-vmware | Install Cloud Backup Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-cloud-backup-virtual-machines.md | Last updated 05/10/2023 Cloud Backup for Virtual Machines is a plug-in installed in Azure VMware Solution that enables you to back up and restore Azure NetApp Files datastores and virtual machines (VMs). -Use Cloud Backup for VMs to: -* Build and securely connect both legacy and cloud-native workloads across environments and unify operations -* Provision and resize datastore volumes right from the Azure portal -* Take VM consistent snapshots for quick checkpoints -* Quickly recover VMs +Cloud Backup for Virtual Machines features: ++* Simple deployment via AVS `run command` from the Azure portal +* Integration into the vSphere client for easy operations +* VM-consistent snapshots for quick recovery points +* Quick restoration of VMs and VMDKs on Azure NetApp Files datastores ## Install Cloud Backup for Virtual Machines |
azure-vmware | Restore Azure Netapp Files Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/restore-azure-netapp-files-vms.md | This article covers how to: ## Restore VMs from backups -When you restore a VM, you can overwrite the existing content with the backup copy that you select or you can restore to a new VM. +When you restore a VM, you can overwrite the existing content with the backup copy that you select or you can restore a deleted VM from a backup copy. You can restore VMs to the original datastore mounted on the original ESXi host (this overwrites the original VM). You can restore VMs to the original datastore mounted on the original ESXi host 1. On the **Select Scope** page, select **Entire Virtual Machine** in the **Restore scope** field, then select **Restore location**, and then enter the destination ESXi information where the backup should be mounted. 1. When restoring partial backups, the restore operation skips the Select Scope page. 1. Enable the **Restart VM** checkbox if you want the VM to be powered on after the restore operation.-1. On the **Select Location** page, select the location for the primary or secondary location. +1. On the **Select Location** page, select the primary location. 1. Review the **Summary** page and then select **Finish**. 1. **Optional:** Monitor the operation progress by selecting Recent Tasks at the bottom of the screen. Although the VMs are restored, they're not automatically added to their former r ## Restore deleted VMs from backups -You can restore a deleted VM from a datastore primary or secondary backup to an ESXi host that you select. You can also restore VMs to the original datastore mounted on the original ESXi host, which creates a clone of the VM. +You can restore a deleted VM from a datastore primary backup to an ESXi host that you select. You can restore VMs to the original datastore mounted on the original ESXi host, which creates a clone of the VM.
## Prerequisites to restore deleted VMs You can restore existing VMDKs or deleted or detached VMDKs from either a primar * Filter the backup list by selecting the filter icon and a date and time range. Select whether you want backups that contain VMware snapshots, whether you want mounted backups, and the primary location. Select **OK** to return to the wizard. 1. On the **Select Scope** page, select **Particular virtual disk** in the Restore scope field, then select the virtual disk and destination datastore.-1. On the **Select Location** page, select the snapshot copy that you want to restore. +1. On the **Select Location** page, select the location that you want to restore to. 1. Review the **Summary** page and then select **Finish**. 1. **Optional:** Monitor the operation progress by selecting Recent Tasks at the bottom of the screen. |
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
backup | Sap Hana Database With Hana System Replication Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md | Title: Back up SAP HANA System Replication databases on Azure VMs description: In this article, discover how to back up SAP HANA databases with HANA System Replication enabled. Previously updated : 03/08/2023 Last updated : 07/06/2023 +You can also switch the protection of SAP HANA database on Azure VM (standalone) on Azure Backup to HSR. [Learn more](#switch-database-protection-from-standalone-to-hsr-on-azure-backup). + >[!Note]-> For more information about the supported configurations and scenarios, see [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md). +>- The HSR + Database scenario isn't currently supported because of the restriction that the VM and the vault must be in the same region. +>- For more information about the supported configurations and scenarios, see [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md). ## Prerequisites Backups run in accordance with the policy schedule. Learn how to [run an on-dema You can run an on-demand backup using SAP HANA native clients to local file-system instead of Backint. Learn how to [manage operations using SAP native clients](sap-hana-database-manage.md#manage-operations-using-sap-hana-native-clients). +## Switch database protection from standalone to HSR on Azure Backup ++You can now switch the protection of SAP HANA database on Azure VM (standalone) on Azure Backup to HSR. If you've already configured HSR and are protecting only the primary node using Azure Backup, you can modify the configuration to protect both primary and secondary nodes. ++Follow these steps: ++1.
On standalone VM, Primary node, or Secondary node (once protected using Azure Backup), go to *vault* > **Backup Items** > **SAP HANA in Azure VM** > **View Details** > **Stop backup**, and then select **Retain backup data** > **Stop backup** to stop backup and retain data. ++2. (Mandatory) [Run the latest preregistration script](sap-hana-database-with-hana-system-replication-backup.md#run-the-preregistration-script) on both primary and secondary VM nodes. ++ The preregistration script contains the HSR attributes. ++3. [Configure HSR manually](sap-hana-database-with-hana-system-replication-backup.md#configure-backup). +You can also configure the backup with clustering tools, such as **Pacemaker**. ++ Skip this step if HSR configuration is complete. ++4. Add the primary and secondary nodes to Azure Backup, [rediscover the databases](sap-hana-database-with-hana-system-replication-backup.md#discover-the-databases), and [resume protection](sap-hana-database-manage.md#resume-protection-for-an-sap-hana-database-or-hana-instance). ++ >[!Note] + >For HSR deployments, Protected Instance cost is charged to the HSR container. Two nodes (primary and secondary) form a single HSR logical container, and storage cost is charged as applicable. ++5. Before a planned failover, [ensure that both VMs/nodes are registered to the vault (physical and logical registration)](sap-hana-database-manage.md#verify-the-registration-status-of-vms-or-nodes-to-the-vault). ## Next steps |
batch | Disk Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/disk-encryption.md | Title: Create a pool with disk encryption enabled description: Learn how to use disk encryption configuration to encrypt nodes with a platform-managed key. Previously updated : 04/16/2021 Last updated : 06/29/2023 ms.devlang: csharp Request body: "imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",- "sku": "18.04-LTS" + "sku": "22.04-LTS" }, "diskEncryptionConfiguration": { "targets": [ Request body: "TemporaryDisk" ] }- "nodeAgentSKUId": "batch.node.ubuntu 18.04" + "nodeAgentSKUId": "batch.node.ubuntu 22.04" }, "resizeTimeout": "PT15M", "targetDedicatedNodes": 5, az batch pool create \ --id diskencryptionPool \ --vm-size Standard_DS1_V2 \ --target-dedicated-nodes 2 \- --image canonical:ubuntuserver:18.04-LTS \ - --node-agent-sku-id "batch.node.ubuntu 18.04" \ + --image canonical:ubuntuserver:22.04-LTS \ + --node-agent-sku-id "batch.node.ubuntu 22.04" \ --disk-encryption-targets OsDisk TemporaryDisk ``` |
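The REST request body and the equivalent CLI call shown above can also be assembled programmatically. A minimal sketch follows; the pool ID, VM size, and node count mirror the article's example, and the dictionary can then be serialized as the JSON body of a pool-creation request:

```python
import json

# Sketch: build the Batch "create pool" request body shown above.
# The values mirror the article's example request.
pool_spec = {
    "id": "diskencryptionPool",
    "vmSize": "Standard_DS1_V2",
    "virtualMachineConfiguration": {
        "imageReference": {
            "publisher": "Canonical",
            "offer": "UbuntuServer",
            "sku": "22.04-LTS",
        },
        "diskEncryptionConfiguration": {
            # Encrypt both the OS disk and the temporary disk.
            "targets": ["OsDisk", "TemporaryDisk"],
        },
        "nodeAgentSKUId": "batch.node.ubuntu 22.04",
    },
    "resizeTimeout": "PT15M",
    "targetDedicatedNodes": 5,
}

body = json.dumps(pool_spec)
```

The serialized `body` is what a Batch pool-creation REST call or SDK helper would ultimately send.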
batch | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md | Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
cognitive-services | Embedded Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md | Follow these steps to install the Speech SDK for Java using Apache Maven: <dependency> <groupId>com.microsoft.cognitiveservices.speech</groupId> <artifactId>client-sdk-embedded</artifactId>- <version>1.29.0</version> + <version>1.30.0</version> </dependency> </dependencies> </project> Be sure to use the `@aar` suffix when the dependency is specified in `build.grad ``` dependencies {- implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.29.0@aar' + implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.30.0@aar' } ``` ::: zone-end |
cognitive-services | How To Speech Synthesis Viseme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md | zone_pivot_groups: programming-languages-speech-services-nomore-variant # Get facial position with viseme > [!NOTE]-> Viseme ID supports neural voices in [all viseme-supported locales](language-support.md?tabs=tts). Scalable Vector Graphics (SVG) only supports neural voices in `en-US` locale, and blend shapes supports neural voices in `en-US` and `zh-CN` locales. +> To explore the locales supported for Viseme ID and blend shapes, refer to [the list of all supported locales](language-support.md?tabs=tts#viseme). Scalable Vector Graphics (SVG) is only supported for the `en-US` locale. A *viseme* is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. Each viseme depicts the key facial poses for a specific set of phonemes. |
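Viseme events arrive as (audio offset, viseme ID) pairs. A minimal sketch of turning such a stream into animation keyframes, assuming offsets are reported in 100-nanosecond ticks (the sample events below are made up for illustration):

```python
# Sketch: convert viseme events into timeline keyframes for a
# talking-avatar animation. Audio offsets are assumed to be in
# 100-nanosecond ticks; the event values below are made-up samples.
TICKS_PER_SECOND = 10_000_000

def viseme_keyframes(events):
    """Convert (offset_ticks, viseme_id) pairs into (seconds, viseme_id)."""
    return [(offset / TICKS_PER_SECOND, viseme_id) for offset, viseme_id in events]

frames = viseme_keyframes([(0, 0), (500_000, 19), (1_000_000, 6)])
```

Each keyframe can then drive whichever facial pose your renderer associates with that viseme ID.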
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md | Use the following table to determine supported styles and roles for each neural [!INCLUDE [Language support include](includes/language-support/voice-styles-and-roles.md)] +### Viseme ++This table lists all the locales supported for [Viseme](speech-synthesis-markup-structure.md#viseme-element). For more information about Viseme, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md) and [Viseme element](speech-synthesis-markup-structure.md#viseme-element). ++ ### Prebuilt neural voices Each prebuilt neural voice supports a specific language and dialect, identified by locale. You can try the demo and hear the voices in the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery). |
cognitive-services | Releasenotes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md | Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to ## Recent highlights -* Speech SDK 1.29.0 was released in June 2023. +* Speech SDK 1.30.0 was released in July 2023. * Speech to text and text to speech container versions were updated in March 2023. * Some Speech Studio [scenarios](speech-studio-overview.md#speech-studio-scenarios) are available to try without an Azure subscription. * Custom Speech to text container disconnected mode was released in January 2023. |
cognitive-services | Enable Vnet Service Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/enable-vnet-service-endpoint.md | + + Title: Enable Virtual Network service endpoints with Custom Translator service ++description: This article describes how to use Custom Translator service with an Azure Virtual Network service endpoint. +++++ Last updated : 07/05/2023+++++# Enable Custom Translator through Azure Virtual Network ++In this article, we show you how to set up and use VNet service endpoints with Custom Translator. ++Azure Virtual Network (VNet) [service endpoints](../../../../virtual-network/virtual-network-service-endpoints-overview.md) securely connect your Azure service resources to your virtual networks over an optimized route via the Azure global network. Service endpoints enable private IP addresses within your virtual network to reach the endpoint of an Azure service without the need for a public IP address on the virtual network. ++For more information, see [Azure Virtual Network overview](../../../../virtual-network/virtual-networks-overview.md). ++> [!NOTE] +> Before you start, review [how to use virtual networks with Cognitive Services](../../../cognitive-services-virtual-networks.md). ++ To set up a Translator resource for VNet service endpoint scenarios, you need the following resources: ++* [A regional Translator resource (global isn't supported)](../../create-translator-resource.md). +* [VNet and networking settings for the Translator resource](#configure-virtual-networks-resource-networking-settings). ++## Configure virtual networks resource networking settings ++To start, you need to add all virtual networks that are allowed access via the service endpoint to the Translator resource networking properties.
To enable access to a Translator resource via the VNet, you need to enable the `Microsoft.CognitiveServices` service endpoint type for the required subnets of your virtual network. Doing so routes all subnet traffic related to Cognitive Services through the private global network. If you intend to access any other Cognitive Services resources from the same subnet, make sure these resources are also configured to allow your virtual network. ++> [!NOTE] +> +> * If a virtual network isn't added as *allowed* in the Translator resource networking properties, it won't have access to the Translator resource via the service endpoint, even if the `Microsoft.CognitiveServices` service endpoint is enabled for the virtual network. +> * If the service endpoint is enabled but the virtual network isn't allowed, the Translator resource won't be accessible for the virtual network through a public IP address, regardless of your other network security settings. +> * Enabling the `Microsoft.CognitiveServices` endpoint routes all traffic related to Cognitive Services through the private global network. Thus, the virtual network should be explicitly allowed to access the resource. +> * This guidance applies for all Cognitive Services resources, not just for Translator resources. ++Let's get started: ++1. Navigate to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. ++1. Select a regional Translator resource. ++1. From the **Resource Management** group in the left side panel, select **Networking**. ++ :::image type="content" source="../media/how-to/resource-management-networking.png" alt-text="Screenshot of the networking selection under Resource Management in the Azure portal."::: ++1. From the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**. 
++ :::image type="content" source="../media/how-to/firewalls-virtual-network.png" alt-text="Screenshot of the firewalls and virtual network page in the Azure portal."::: ++ > [!NOTE] + > To use Virtual Network service endpoints, you need to select the **Selected Networks and Private Endpoints** network security option. No other options are supported. ++1. Select **Add existing virtual network** or **Add new virtual network** and provide the required parameters. ++ * Complete the process by selecting **Add** for an existing virtual network or **Create** for a new one. ++ * If you add an existing virtual network, the `Microsoft.CognitiveServices` service endpoint is automatically enabled for the selected subnets. ++ * If you create a new virtual network, the **default** subnet is automatically configured for the `Microsoft.CognitiveServices` service endpoint. This operation can take a few minutes. ++ > [!NOTE] + > As described in the [previous section](#configure-virtual-networks-resource-networking-settings), when you configure a virtual network as *allowed* for the Translator resource, the `Microsoft.CognitiveServices` service endpoint is automatically enabled. If you later disable it, you need to re-enable it manually to restore the service endpoint access to the Translator resource (and to other Cognitive Services resources). ++1. Now, when you choose the **Selected Networks and Private Endpoints** tab, you can see your enabled virtual network and subnets under the **Virtual networks** section. ++1. To check the service endpoint: ++ * From the **Resource Management** group in the left side panel, select **Networking**. ++ * Select your **virtual network** and then select the desired **subnet**. ++ :::image type="content" source="../media/how-to/select-subnet.png" alt-text="Screenshot of subnet selection section in the Azure portal."::: ++ * A new **Subnets** window appears.
++ * Select **Service endpoints** from the **Settings** menu located on the left side panel. ++ :::image type="content" source="../media/how-to/service-endpoints.png" alt-text="Screenshot of the **Subnets** selection from the **Settings** menu in the Azure portal."::: ++1. From the **Settings** menu in the left side panel, choose **Service Endpoints** and, in the main window, check that your virtual network subnet is included in the `Microsoft.CognitiveServices` list. ++## Use the Custom Translator portal ++The following table describes Custom Translator project accessibility per Translator resource **Networking** → **Firewalls and virtual networks** security setting: ++ :::image type="content" source="../media/how-to/allow-network-access.png" alt-text="Screenshot of allowed network access section in the Azure portal."::: ++> [!IMPORTANT] + > If you configure **Selected Networks and Private Endpoints** via the **Networking** → **Firewalls and virtual networks** tab, you can't use the Custom Translator portal with your Translator resource. However, you can still use the Translator resource outside of the Custom Translator portal. ++| Translator resource network security setting | Custom Translator portal accessibility | +|--|--| +| All networks | No restrictions | +| Selected Networks and Private Endpoints | Accessible from allowed VNet IP addresses | +| Disabled | Not accessible | ++To use Custom Translator without relaxing network access restrictions on your production Translator resource, consider this workaround: ++* Create another Translator resource for development that can be used on a public network. ++* Prepare your custom model in the Custom Translator portal on the development resource. ++* Copy the model from your development resource to your production resource using the [Custom Translator non-interactive REST API](https://microsofttranslator.github.io/CustomTranslatorApiSamples/) `workspaces` → `copy authorization and models` → `copy functions`.
++Congratulations! You learned how to use Azure VNet service endpoints with Custom Translator. ++## Learn more ++Visit the [**Custom Translator API**](https://microsofttranslator.github.io/CustomTranslatorApiSamples/) page to view our non-interactive REST APIs. |
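The portal steps above ultimately populate the resource's network ACLs with the allowed subnets. As a hedged sketch, the allowed-network list can be expressed as a fragment like the following; the payload shape is an assumption based on the Cognitive Services account ARM schema, and the subscription, resource group, VNet, and subnet names are placeholders:

```python
# Sketch: the networkAcls fragment that the portal steps above configure
# on the Translator resource. The shape is an assumption based on the
# Microsoft.CognitiveServices account ARM schema; all IDs are placeholders.
subnet_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/myResourceGroup"
    "/providers/Microsoft.Network/virtualNetworks/myVnet"
    "/subnets/default"
)

network_acls = {
    "defaultAction": "Deny",  # block any network not explicitly allowed
    "virtualNetworkRules": [{"id": subnet_id}],
    "ipRules": [],
}
```

With `defaultAction` set to `Deny`, only traffic from the listed subnets (which must have the `Microsoft.CognitiveServices` service endpoint enabled) reaches the resource.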
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/quickstart.md | -Translator is a cloud-based neural machine translation service that is part of the Azure Cognitive Services family of REST API that can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this quickstart, you'll learn to build custom solutions for your applications across all [supported languages](../language-support.md). +Translator is a cloud-based neural machine translation service that is part of the Azure Cognitive Services family of REST APIs that can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this quickstart, learn to build custom solutions for your applications across all [supported languages](../language-support.md). ## Prerequisites - To use the [Custom Translator](https://portal.customtranslator.azure.ai/) portal, you'll need the following resources: + To use the [Custom Translator](https://portal.customtranslator.azure.ai/) portal, you need the following resources: * A [Microsoft account](https://signup.live.com). * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) * Once you have an Azure subscription, [create a Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.- * You'll need the key and endpoint from the resource to connect your application to the Translator service. You'll paste your key and endpoint into the code below later in the quickstart.
You can find these values on the Azure portal **Keys and Endpoint** page: + * You need the key and endpoint from the resource to connect your application to the Translator service. Paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page: :::image type="content" source="../media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page."::: For more information, *see* [how to create a Translator resource](../how-to-crea ## Custom Translator portal ->[!Note] ->Custom Translator does not support creating workspace for a Translator Text API resource created inside an [Enabled VNet](../../../api-management/api-management-using-with-vnet.md?tabs=stv2). - Once you have the above prerequisites, sign in to the [Custom Translator](https://portal.customtranslator.azure.ai/) portal to create workspaces, build projects, upload files, train models, and publish your custom solution. You can read an overview of translation and custom translation, learn some tips, and watch a getting started video in the [Azure AI technical blog](https://techcommunity.microsoft.com/t5/azure-ai/customize-a-translation-to-make-sense-in-a-specific-context/ba-p/2811956). You can read an overview of translation and custom translation, learn some tips, 1. [**Create a workspace**](#create-a-workspace). A workspace is a work area for composing and building your custom translation system. A workspace can contain multiple projects, models, and documents. All the work you do in Custom Translator is done inside a specific workspace. -1. [**Create a project**](#create-a-project). A project is a wrapper for models, documents, and tests. Each project includes all documents that are uploaded into that workspace with the correct language pair. For example, if you have both an English-to-Spanish project and a Spanish-to-English project, the same documents will be included in both projects. +1. 
[**Create a project**](#create-a-project). A project is a wrapper for models, documents, and tests. Each project includes all documents that are uploaded into that workspace with the correct language pair. For example, if you have both an English-to-Spanish project and a Spanish-to-English project, the same documents are included in both projects. 1. [**Upload parallel documents**](#upload-documents). Parallel documents are pairs of documents where one (target) is the translation of the other (source). One document in the pair contains sentences in the source language and the other document contains sentences translated into the target language. It doesn't matter which language is marked as "source" and which language is marked as "target"; a parallel document can be used to train a translation system in either direction. -1. [**Train your model**](#train-your-model). A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. When you train a model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator will automatically assemble tuning and testing data. It will use a random subset of sentences from your training documents, and exclude these sentences from the training data itself. A 10,000 parallel sentence is the minimum requirement to train a model. +1. [**Train your model**](#train-your-model). A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. When you train a model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator automatically assembles tuning and testing data. It uses a random subset of sentences from your training documents, and excludes these sentences from the training data itself.
A minimum of 10,000 parallel sentences is required to train a model. 1. [**Test (human evaluate) your model**](#test-your-model). The testing set is used to compute the [BLEU](beginners-guide.md#what-is-a-bleu-score) score. This score indicates the quality of your translation system. You can read an overview of translation and custom translation, learn some tips, ## Create a project -Once the workspace is created successfully, you'll be taken to the **Projects** page. +Once the workspace is created successfully, you're taken to the **Projects** page. -You'll create English-to-German project to train a custom model with only a [training](training-and-model.md#training-document-type-for-custom-translator) document type. +You create an English-to-German project to train a custom model with only a [training](training-and-model.md#training-document-type-for-custom-translator) document type. 1. Select **Create project**. You'll create English-to-German project to train a custom model with only a [tra ## Upload documents -In order to create a custom model, you need to upload all or a combination of [training](training-and-model.md#training-document-type-for-custom-translator), [tuning](training-and-model.md#tuning-document-type-for-custom-translator), [testing](training-and-model.md#testing-dataset-for-custom-translator), and [dictionary](concepts/dictionaries.md) document types. --In this quickstart, you'll upload [training](training-and-model.md#training-document-type-for-custom-translator) documents for customization. +In order to create a custom model, you need to upload all or a combination of [training](training-and-model.md#training-document-type-for-custom-translator), [tuning](training-and-model.md#tuning-document-type-for-custom-translator), [testing](training-and-model.md#testing-dataset-for-custom-translator), and [dictionary](concepts/dictionaries.md) document types for customization.
>[!Note] > You can use our sample training, phrase and sentence dictionaries dataset, [Customer sample English-to-German datasets](https://github.com/MicrosoftTranslator/CustomTranslatorSampleDatasets), for this quickstart. However, for production, it's better to upload your own training dataset. Now you're ready to train your English-to-German model. 1. After successful model training, select **Model details** from the left navigation menu. -1. Select the model name *en-de with sample data*. Review training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You'll use the `Category ID` to make translation requests. +1. Select the model name *en-de with sample data*. Review training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You use the `Category ID` to make translation requests. -1. Evaluate the model [BLEU](beginners-guide.md#what-is-a-bleu-score) score. The test set **BLEU score** is the custom model score and **Baseline BLEU** is the pre-trained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model. +1. Evaluate the model [BLEU](beginners-guide.md#what-is-a-bleu-score) score. The test set **BLEU score** is the custom model score and **Baseline BLEU** is the pretrained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model. >[!Note] >If you train with our shared customer sample datasets, BLEU score will be different than the image. Once your training has completed successfully, inspect the test set translated s 1. Select **Test model** from the left navigation menu. 2. Select "en-de with sample data"-3. 
Human evaluate translation from **New model** (custom model), and **Baseline model** (our pre-trained baseline used for customization) against **Reference** (target translation from the test set) +3. Human-evaluate the translation from the **New model** (custom model) and the **Baseline model** (our pretrained baseline used for customization) against the **Reference** (target translation from the test set). ## Publish your model Publishing your model makes it available for use with the Translator API. A proj ## Next steps > [!div class="nextstepaction"]-> [Learn how to manage workspaces](how-to/create-manage-workspace.md) +> [Learn how to manage workspaces](how-to/create-manage-workspace.md) |
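The BLEU score this quickstart relies on compares n-gram overlap between a candidate translation and a reference. As a rough illustration of what the metric measures, here is a simplified sentence-level sketch in Python; the service's actual scorer uses corpus-level statistics and proper smoothing, so treat this only as an approximation of the idea:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: clipped n-gram precision times a
    brevity penalty. Real evaluators add smoothing and corpus-level stats."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        total = len(cand) - n + 1
        if total <= 0:
            return 0.0
        ref_counts = Counter(ngrams(ref, n))
        cand_counts = Counter(ngrams(cand, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        # A tiny floor stands in for proper smoothing when there is no overlap.
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    brevity = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * math.exp(sum(log_precisions) / max_n)
```

An identical candidate and reference score 1.0, while a translation sharing no n-grams with the reference scores near 0, which is why a higher custom-model BLEU than the baseline BLEU indicates an improvement.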
cognitive-services | Entity Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/entity-metadata.md | + + Title: Entity Metadata provided by Named Entity Recognition ++description: Learn about entity metadata in the NER feature. ++++++ Last updated : 06/13/2023++++# Entity Metadata ++The Entity Metadata object captures optional additional information about detected entities, providing resolutions specifically for numeric and temporal entities. This attribute is populated only when there's supplementary data available, enhancing the comprehensiveness of the detected entities. It's important to handle cases where the Metadata attribute may be empty or absent, as its presence isn't guaranteed for every entity. ++Currently, metadata components handle resolutions to a standard format for an entity. Entities can be expressed in various forms and resolutions provide standard predictable formats for common quantifiable types. For example, "eighty" and "80" should both resolve to the integer `80`. ++You can use NER resolutions to implement actions or retrieve further information. For example, your service can use datetime entities to extract dates and times that are provided to a meeting scheduling system. ++> [!NOTE] +> Entity Metadata is only supported starting from **_api-version=2023-04-15-preview_**. For older API versions, you may check the [Entity Resolutions article](./entity-resolutions.md). ++This article documents the resolution objects returned for each entity category or subcategory under the metadata object. ++## Numeric Entities ++### Age ++Examples: "10 years old", "23 months old", "sixty Y.O." 
++```json +"metadata": { + "unit": "Year", + "value": 10 + } +``` ++Possible values for "unit": +- Year +- Month +- Week +- Day +++### Currency ++Examples: "30 Egyptian pounds", "77 USD" ++```json +"metadata": { + "unit": "Egyptian pound", + "ISO4217": "EGP", + "value": 30 + } +``` ++Possible values for "unit" and "ISO4217": +- [ISO 4217 reference](https://docs.1010data.com/1010dataReferenceManual/DataTypesAndFormats/currencyUnitCodes.html). ++## Datetime/Temporal entities ++Datetime includes several different subtypes that return different response objects. ++### Date ++Specific days. ++Examples: "January 1 1995", "12 april", "7th of October 2022", "tomorrow" ++```json +"metadata": { + "dateValues": [ + { + "timex": "1995-01-01", + "value": "1995-01-01" + } + ] + } +``` ++Whenever an ambiguous date is provided, you're offered different options for your resolution. For example, "12 April" could refer to any year. Resolution provides this year and the next as options. The `timex` value `XXXX` indicates no year was specified in the query. ++```json +"metadata": { + "dateValues": [ + { + "timex": "XXXX-04-12", + "value": "2022-04-12" + }, + { + "timex": "XXXX-04-12", + "value": "2023-04-12" + } + ] + } +``` ++Ambiguity can occur even for a given day of the week. For example, saying "Monday" could refer to last Monday or this Monday. Once again the `timex` value indicates no year or month was specified, and uses a day of the week identifier (W) to indicate the first day of the week. ++```json +"metadata" :{ + "dateValues": [ + { + "timex": "XXXX-WXX-1", + "value": "2022-10-03" + }, + { + "timex": "XXXX-WXX-1", + "value": "2022-10-10" + } + ] + } +``` +++### Time ++Specific times. ++Examples: "9:39:33 AM", "seven AM", "20:03" ++```json +"metadata": { + "timex": "T09:39:33", + "value": "09:39:33" + } +``` ++### Datetime ++Specific date and time combinations. 
++Examples: "6 PM tomorrow", "8 PM on January 3rd", "Nov 1 19:30" ++```json +"metadata": { + "timex": "2022-10-07T18", + "value": "2022-10-07 18:00:00" + } +``` ++Similar to dates, you can have ambiguous datetime entities. For example, "May 3rd noon" could refer to any year. Resolution provides this year and the next as options. The `timex` value **XXXX** indicates no year was specified. ++```json +"metadata": { + "dateValues": [ + { + "timex": "XXXX-05-03T12", + "value": "2022-05-03 12:00:00" + }, + { + "timex": "XXXX-05-03T12", + "value": "2023-05-03 12:00:00" + } + ] + } +``` ++### Datetime ranges ++A datetime range is a period with a beginning and end date, time, or datetime. ++Examples: "from january 3rd 6 AM to april 25th 8 PM 2022", "between Monday to Thursday", "June", "the weekend" ++The "duration" parameter indicates the time passed in seconds (S), minutes (M), hours (H), or days (D). This parameter is only returned when an explicit start and end datetime are in the query. "Next week" would only return with "begin" and "end" parameters for the week. ++```json +"metadata": { + "duration": "PT2702H", + "begin": "2022-01-03 06:00:00", + "end": "2022-04-25 20:00:00" + } +``` ++### Set ++A set is a recurring datetime period. Sets don't resolve to exact values, as they don't indicate an exact datetime. ++Examples: "every Monday at 6 PM", "every Thursday", "every weekend" ++For "every Monday at 6 PM", the `timex` value indicates no specified year with the starting **XXXX**, then every Monday through **WXX-1** to determine first day of every week, and finally **T18** to indicate 6 PM. 
++```json +"metadata": { + "timex": "XXXX-WXX-1T18", + "value": "not resolved" + } +``` ++## Dimensions ++Examples: "24 km/hr", "44 square meters", "sixty six kilobytes" ++```json +"metadata": { + "unit": "KilometersPerHour", + "value": 24 + } +``` ++Possible values for the "unit" field values: ++- **For Measurements**: + - SquareKilometer + - SquareHectometer + - SquareDecameter + - SquareMeter + - SquareDecimeter + - SquareCentimeter + - SquareMillimeter + - SquareInch + - SquareFoot + - SquareMile + - SquareYard + - Acre ++- **For Information**: + - Bit + - Kilobit + - Megabit + - Gigabit + - Terabit + - Petabit + - Byte + - Kilobyte + - Megabyte + - Gigabyte + - Terabyte + - Petabyte + +- **For Length, width, height**: + - Kilometer + - Hectometer + - Decameter + - Meter + - Decimeter + - Centimeter + - Millimeter + - Micrometer + - Nanometer + - Picometer + - Mile + - Yard + - Inch + - Foot + - Light year + - Pt ++- **For Speed**: + - MetersPerSecond + - KilometersPerHour + - KilometersPerMinute + - KilometersPerSecond + - MilesPerHour + - Knot + - FootPerSecond + - FootPerMinute + - YardsPerMinute + - YardsPerSecond + - MetersPerMillisecond + - CentimetersPerMillisecond + - KilometersPerMillisecond ++- **For Volume**: + - CubicMeter + - CubicCentimeter + - CubicMillimiter + - Hectoliter + - Decaliter + - Liter + - Deciliter + - Centiliter + - Milliliter + - CubicYard + - CubicInch + - CubicFoot + - CubicMile + - FluidOunce + - Teaspoon + - Tablespoon + - Pint + - Quart + - Cup + - Gill + - Pinch + - FluidDram + - Barrel + - Minim + - Cord + - Peck + - Bushel + - Hogshead ++- **For Weight**: + - Kilogram + - Gram + - Milligram + - Microgram + - Gallon + - MetricTon + - Ton + - Pound + - Ounce + - Grain + - Pennyweight + - LongTonBritish + - ShortTonUS + - ShortHundredweightUS + - Stone + - Dram +++## Ordinal ++Examples: "3rd", "first", "last" ++```json +"metadata": { + "offset": "3", + "relativeTo": "Start", + "value": "3" + } +``` ++Possible values for 
"relativeTo": +- Start +- End ++## Temperature ++Examples: "88 deg fahrenheit", "twenty three degrees celsius" ++```json +"metadata": { + "unit": "Fahrenheit", + "value": 88 + } +``` ++Possible values for "unit": +- Celsius +- Fahrenheit +- Kelvin +- Rankine |
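Because the article stresses that the metadata attribute may be empty or absent, client code should treat it as optional. The following is a minimal Python sketch against hand-written sample entities shaped like the JSON snippets above (not a live API response; the `"Number"` unit for "eighty" is an assumption for illustration):

```python
def get_metadata_value(entity):
    """Return (unit, value) from an entity's metadata, or None when absent.

    The metadata object is optional, so missing or empty metadata is
    an expected case, not an error.
    """
    metadata = entity.get("metadata")
    if not metadata:
        return None
    if "unit" in metadata and "value" in metadata:
        return metadata["unit"], metadata["value"]
    return None

# Sample entities shaped like the documented responses (hypothetical values).
entities = [
    {"text": "30 Egyptian pounds",
     "metadata": {"unit": "Egyptian pound", "ISO4217": "EGP", "value": 30}},
    {"text": "eighty", "metadata": {"unit": "Number", "value": 80}},
    {"text": "Contoso"},  # no metadata at all
]

resolved = [get_metadata_value(e) for e in entities]
```

Guarding on the attribute this way keeps the same code path working for entities with rich resolutions and for entities that carry none.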
cognitive-services | Ga Preview Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/ga-preview-mapping.md | + + Title: Preview API overview ++description: Learn about the NER preview API. ++++++ Last updated : 06/14/2023+++++# Preview API changes ++Use this article to get an overview of the new API changes starting from the `2023-04-15-preview` version. This API change mainly introduces two new concepts (`entity types` and `entity tags`) replacing the `category` and `subcategory` fields in the current Generally Available API. ++## Entity types +Entity types represent the lowest (or finest) granularity at which the entity has been detected and can be considered the base class of the detected entity. ++## Entity tags +Entity tags further identify an entity: a detected entity is tagged with its entity type plus additional tags that differentiate it. The entity tags list could be considered to include categories, subcategories, sub-subcategories, and so on. ++## Changes from generally available API to preview API +The changes introduce better flexibility for named entity recognition, including: +* More granular entity recognition through introducing the tags list, where an entity could be tagged by more than one entity tag. +* Overlapping entities, where an entity could be recognized as more than one entity type; if so, the entity is returned twice. If an entity was recognized to belong to two entity tags under the same entity type, both entity tags are returned in the tags list. +* Filtering entities using entity tags. You can learn more about this in [this article](../how-to-call.md#select-which-entities-to-be-returned-preview-api-only). +* Metadata Objects, which contain additional information about the entity but currently only act as a wrapper for the existing entity resolution feature. 
You can learn more about this new feature [here](entity-metadata.md). ++## Generally available to preview API entity mappings +You can see a comparison between the structure of the entity categories/types in the [Supported Named Entity Recognition (NER) entity categories and entity types article](./named-entity-categories.md). Below is a table describing the mappings between the results you would expect to see from the Generally Available API and the Preview API. ++| Type | Tags | +|-|-| +| Date | Temporal, Date | +| DateRange | Temporal, DateRange | +| DateTime | Temporal, DateTime | +| DateTimeRange | Temporal, DateTimeRange | +| Duration | Temporal, Duration | +| SetTemporal | Temporal, SetTemporal | +| Time | Temporal, Time | +| TimeRange | Temporal, TimeRange | +| City | GPE, Location, City | +| State | GPE, Location, State | +| CountryRegion | GPE, Location, CountryRegion | +| Continent | GPE, Location, Continent | +| GPE | Location, GPE | +| Location | Location | +| Airport | Structural, Location | +| Structural | Location, Structural | +| Geological | Location, Geological | +| Age | Numeric, Age | +| Currency | Numeric, Currency | +| Number | Numeric, Number | +| NumberRange | Numeric, NumberRange | +| Percentage | Numeric, Percentage | +| Ordinal | Numeric, Ordinal | +| Temperature | Numeric, Dimension, Temperature | +| Speed | Numeric, Dimension, Speed | +| Weight | Numeric, Dimension, Weight | +| Height | Numeric, Dimension, Height | +| Length | Numeric, Dimension, Length | +| Volume | Numeric, Dimension, Volume | +| Area | Numeric, Dimension, Area | +| Information | Numeric, Dimension, Information | +| Address | Address | +| Person | Person | +| PersonType | PersonType | +| Organization | Organization | +| Product | Product | +| ComputingProduct | Product, ComputingProduct | +| IP | IP | +| Email | Email | +| URL | URL | +| Skill | Skill | +| Event | Event | +| CulturalEvent | Event, CulturalEvent | +| SportsEvent | Event, SportsEvent | +| NaturalEvent 
| Event, NaturalEvent | + |
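When migrating client code from the GA API to the preview API, the mapping table above is straightforward to encode as a lookup. A small illustrative sketch (only a few rows of the table reproduced; extend it as needed):

```python
# A few rows from the GA-to-preview mapping table above:
# preview entity type -> full list of entity tags it carries.
TYPE_TO_TAGS = {
    "Date": ["Temporal", "Date"],
    "City": ["GPE", "Location", "City"],
    "Temperature": ["Numeric", "Dimension", "Temperature"],
    "ComputingProduct": ["Product", "ComputingProduct"],
}

def has_tag(entity_type, tag):
    """True when a preview-API entity of this type carries the given tag."""
    return tag in TYPE_TO_TAGS.get(entity_type, [])
```

For example, code that previously branched on the GA `City` subcategory can instead test whether a preview entity carries the `Location` or `GPE` tag.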
cognitive-services | Named Entity Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/named-entity-categories.md | -# Supported Named Entity Recognition (NER) entity categories +# Supported Named Entity Recognition (NER) entity categories and entity types -Use this article to find the entity categories that can be returned by [Named Entity Recognition](../how-to-call.md) (NER). NER runs a predictive model to identify and categorize named entities from an input document. +Use this article to find the entity categories that can be returned by [Named Entity Recognition](../how-to-call.md) (NER). NER runs a predictive model to identify and categorize named entities from an input document. +> [!NOTE] +> * Starting from API version 2023-04-15-preview, the category and subcategory fields are replaced with entity types and tags to introduce better flexibility. ++# [Generally Available API](#tab/ga-api) + ## Category: Person This category contains the following entity: The entity in this category can have the following subcategories. :::column-end::: :::row-end::: +# [Preview API](#tab/preview-api) ++## Supported Named Entity Recognition (NER) entity categories ++Use this article to find the entity types and the additional tags that can be returned by [Named Entity Recognition](../how-to-call.md) (NER). NER runs a predictive model to identify and categorize named entities from an input document. ++### Type: Address ++Specific street-level mentions of locations: house/building numbers, streets, avenues, highways, intersections referenced by name. ++### Type: Numeric ++Numeric values. 
++This entity type could be tagged by the following entity tags: ++#### Age ++**Description:** Ages ++#### Currency ++**Description:** Currencies ++#### Number ++**Description:** Numbers without a unit ++#### NumberRange ++**Description:** Range of numbers ++#### Percentage ++**Description:** Percentages ++#### Ordinal ++**Description:** Ordinal Numbers ++#### Temperature ++**Description:** Temperatures ++#### Dimension ++**Description:** Dimensions or measurements ++This entity tag also supports tagging the entity type with the following tags: ++|Entity tag |Details | +|--|-| +|Length |Length of an object| +|Weight |Weight of an object| +|Height |Height of an object| +|Speed |Speed of an object | +|Area |Area of an object | +|Volume |Volume of an object| +|Information|Unit of measure for digital information| ++## Type: Temporal ++Dates and times of day ++This entity type could be tagged by the following entity tags: ++#### Date ++**Description:** Calendar dates ++#### Time ++**Description:** Times of day ++#### DateTime ++**Description:** Calendar dates with time ++#### DateRange ++**Description:** Date range ++#### TimeRange ++**Description:** Time range ++#### DateTimeRange ++**Description:** Date Time range ++#### Duration ++**Description:** Durations ++#### SetTemporal ++**Description:** Set, repeated times ++## Type: Event ++Events with a timed period ++This entity type could be tagged by the following entity tags: ++#### SocialEvent ++**Description:** Social events ++#### CulturalEvent ++**Description:** Cultural events ++#### NaturalEvent ++**Description:** Natural events ++## Type: Location ++Particular point or place in physical space ++This entity type could be tagged by the following entity tags: +#### GPE ++**Description:** GeoPolitialEntity ++This entity tag also supports tagging the entity type with the following tags: ++|Entity tag |Details | +|-|-| +|City |Cities | +|State |States | +|CountryRegion|Countries/Regions | +|Continent |Continents | 
++#### Structural ++**Description:** Manmade structures ++This entity tag also supports tagging the entity type with the following tags: ++|Entity tag |Details | +|-|-| +|Airport |Airports | ++#### Geological ++**Description:** Geographic and natural features ++This entity tag also supports tagging the entity type with the following tags: ++|Entity tag |Details | +|-|-| +|River |Rivers | +|Ocean |Oceans | +|Desert |Deserts | ++## Type: Organization ++Corporations, agencies, and other groups of people defined by some established organizational structure ++This entity type could be tagged by the following entity tags: ++#### MedicalOrganization ++**Description:** Medical companies and groups ++#### StockExchange ++**Description:** Stock exchange groups ++#### SportsOrganization ++**Description:** Sports-related organizations ++## Type: Person ++Names of individuals ++## Type: PersonType ++Human roles classified by group membership ++## Type: Email ++Email addresses ++## Type: URL ++URLs to websites ++## Type: IP ++Network IP addresses ++## Type: PhoneNumber ++Phone numbers ++## Type: Product ++Commercial, consumable objects ++This entity type could be tagged by the following entity tags: ++#### ComputingProduct ++**Description:** Computing products ++## Type: Skill ++Capabilities, skills, or expertise ++ ## Next steps |
cognitive-services | How To Call | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/how-to-call.md | Title: How to perform Named Entity Recognition (NER) -description: This article will show you how to extract named entities from text. +description: This article shows you how to extract named entities from text. -The NER feature can evaluate unstructured text, and extract named entities from text in several pre-defined categories, for example: person, location, event, product, and organization. +The NER feature can evaluate unstructured text, and extract named entities from text in several predefined categories, for example: person, location, event, product, and organization. ## Development options The NER feature can evaluate unstructured text, and extract named entities from ### Specify the NER model -By default, this feature will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../concepts/model-lifecycle.md). +By default, this feature uses the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../concepts/model-lifecycle.md). ### Input languages -When you submit documents to be processed, you can specify which of [the supported languages](language-support.md) they're written in. if you don't specify a language, key phrase extraction will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../concepts/multilingual-emoji-support.md). +When you submit documents to be processed, you can specify which of [the supported languages](language-support.md) they're written in. If you don't specify a language, named entity recognition defaults to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../concepts/multilingual-emoji-support.md). 
## Submitting data Analysis is performed upon receipt of the request. Using the NER feature synchro [!INCLUDE [asynchronous-result-availability](../includes/async-result-availability.md)] -The API will attempt to detect the [defined entity categories](concepts/named-entity-categories.md) for a given document language. +The API attempts to detect the [defined entity categories](concepts/named-entity-categories.md) for a given document language. ## Getting NER results -When you get results from NER, you can stream the results to an application or save the output to a file on the local system. The API response will include [recognized entities](concepts/named-entity-categories.md), including their categories and sub-categories, and confidence scores. +When you get results from NER, you can stream the results to an application or save the output to a file on the local system. The API response includes [recognized entities](concepts/named-entity-categories.md), including their categories and subcategories, and confidence scores. ++## Select which entities to be returned (Preview API only) ++Starting with **API version 2023-04-15-preview**, the API attempts to detect the [defined entity types and tags](concepts/named-entity-categories.md) for a given document language. For more flexibility, entity types and tags replace the categories and subcategories structure that the older models use to define entities. You can also specify which entities are detected and returned by using the optional `includeList` and `excludeList` parameters with the appropriate entity types. The following example would detect only `Location`. You can specify one or more [entity types](concepts/named-entity-categories.md) to be returned. Given the types and tags hierarchy introduced for this version, you have the flexibility to filter on different granularity levels as follows: ++**Input:** ++> [!NOTE] +> In this example, it returns only the **Location** entity type. 
++```bash +{ + "kind": "EntityRecognition", + "parameters": + { + "includeList" : + [ + "Location" + ] + }, + "analysisInput": + { + "documents": + [ + { + "id":"1", + "language": "en", + "text": "We went to Contoso foodplace located at downtown Seattle last week for a dinner party, and we adore the spot! They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! The pasta I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their online menu at www.contosofoodplace.com, call 112-555-0176 or send email to order@contosofoodplace.com! The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!" + } + ] + } +} ++``` ++The above example would return entities falling under the `Location` entity type, such as the `GPE`, `Structural`, and `Geological` tagged entities, as [outlined by entity types and tags](concepts/named-entity-categories.md). We could also further filter the returned entities using one of the entity tags for the `Location` entity type, such as filtering on the `GPE` tag only: ++```bash ++ "parameters": + { + "includeList" : + [ + "GPE" + ] + } + +``` ++This method returns all `Location` entities falling only under the `GPE` tag and ignores any other entity falling under the `Location` type that is tagged with any other entity tag, such as `Structural` or `Geological` tagged `Location` entities. We could also further drill down on our results by using the `excludeList` parameter. `GPE` tagged entities could be tagged with the following tags: `City`, `State`, `CountryRegion`, `Continent`. 
We could, for example, exclude `Continent` and `CountryRegion` tags for our example: ++```bash ++ "parameters": + { + "includeList" : + [ + "GPE" + ], + "excludeList" : + [ + "Continent", + "CountryRegion" + ] + } + +``` ++Using these parameters, we can successfully filter on only the `Location` entity types, since the `GPE` entity tag included in the `includeList` parameter falls under the `Location` type. We then filter on only geopolitical entities and exclude any entities tagged with `Continent` or `CountryRegion` tags. ## Service and data limits |
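The request bodies shown above can also be assembled programmatically before being posted to the service. A sketch, using only the shapes documented in this article (the helper name and the sample text reuse this article's example; they aren't part of the API):

```python
import json

def build_ner_request(documents, include=None, exclude=None):
    """Assemble an EntityRecognition request body for the preview API.

    `include` and `exclude` map to the optional includeList/excludeList
    parameters described above; either may be omitted.
    """
    parameters = {}
    if include:
        parameters["includeList"] = list(include)
    if exclude:
        parameters["excludeList"] = list(exclude)
    return {
        "kind": "EntityRecognition",
        "parameters": parameters,
        "analysisInput": {
            "documents": [
                {"id": str(i + 1), "language": "en", "text": text}
                for i, text in enumerate(documents)
            ]
        },
    }

body = build_ner_request(
    ["We went to Contoso foodplace located at downtown Seattle last week."],
    include=["GPE"],
    exclude=["Continent", "CountryRegion"],
)
payload = json.dumps(body)  # ready to POST with api-version=2023-04-15-preview
```

Building the body this way keeps the filter lists in one place, so switching from filtering on `Location` to filtering on `GPE` is a one-argument change.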
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md | Use this article to learn which natural languages are supported by the NER featu > [!NOTE] > * Languages are added as new [model versions](how-to-call.md#specify-the-ner-model) are released. -> * The language support below is for model version `2023-02-01-preview`. +> * The language support below is for model version `2023-02-01-preview` for the Generally Available API. +> * You can additionally find the language support for the Preview API in the second tab. ## NER language support +# [Generally Available API](#tab/ga-api) + |Language|Language Code|Supports resolution|Notes| |:-|:-|:-|:-| |Afrikaans|`af`| | | Use this article to learn which natural languages are supported by the NER featu |Vietnamese|`vi`| | | |Welsh|`cy`| | | +# [Preview API](#tab/preview-api) ++|Language|Language Code|Supports metadata|Notes| +|:-|:-|:-|:-| +|Afrikaans|`af`|✓|| +|Albanian|`sq`|✓|| +|Amharic|`am`|✓|| +|Arabic|`ar`|✓|| +|Armenian|`hy`|✓|| +|Assamese|`as`|✓|| +|Azerbaijani|`az`|✓|| +|Basque|`eu`|✓|| +|Belarusian (new)|`be`|✓|| +|Bengali|`bn`|✓|| +|Bosnian|`bs`|✓|| +|Breton (new)|`br`|✓|| +|Bulgarian|`bg`|✓|| +|Burmese|`my`|✓|`zh` also accepted| +|Catalan|`ca`|✓|| +|Chinese (Simplified)|`zh-Hans`|✓|| +|Chinese (Traditional)|`zh-Hant`|✓|| +|Croatian|`hr`|✓|| +|Czech|`cs`|✓|| +|Danish|`da`|✓|| +|Dutch|`nl`|✓|| +|English|`en`|✓|| +|Esperanto (new)|`eo`|✓|| +|Estonian|`et`|✓|| +|Filipino|`fil`|✓|| +|Finnish|`fi`|✓|| +|French|`fr`|✓|| +|Galician|`gl`|✓|| +|Georgian|`ka`|✓|| +|German|`de`|✓|| +|Greek|`el`|✓|| +|Gujarati|`gu`|✓|| +|Hausa (new)|`ha`|✓|| +|Hebrew|`he`|✓|| +|Hindi|`hi`|✓|| +|Hungarian|`hu`|✓|| +|Indonesian|`id`|✓|| +|Irish|`ga`|✓|| +|Italian|`it`|✓|| +|Japanese|`ji`|✓|| +|Javanese (new)|`jv`|✓|| +|Kannada|`kn`|✓|| +|Kazakh|`kk`|✓|| +|Khmer|`km`|✓|| +|Korean|`ko`|✓|| +|Kurdish 
(Kurmanji)|`ku`|✓|| +|Kyrgyz|`ky`|✓|| +|Lao|`lo`|✓|| +|Latin (new)|`la`|✓|| +|Latvian|`lv`|✓|| +|Lithuanian|`lt`|✓|| +|Macedonian|`mk`|✓|nb also accepted| +|Malagasy|`mg`|✓|| +|Malay|`ms`|✓|| +|Malayalam|`ml`|✓|| +|Marathi|`mr`|✓|| +|Mongolian|`mn`|✓|| +|Nepali|`ne`|✓|pt also accepted| +|Norwegian|`no`|✓|| +|Odia|`or`|✓|| +|Oromo (new)|`om`|✓|| +|Pashto|`ps`|✓|| +|Persian|`fa`|✓|| +|Polish|`pl`|✓|| +|Portuguese (Brazil)|`pt-BR`|✓|| +|Portuguese (Portugal)|`pt-PT`|✓|| +|Punjabi|`pa`|✓|| +|Romanian|`ro`|✓|| +|Russian|`ru`|✓|| +|Sanskrit (new)|`sa`|✓|| +|Scottish Gaelic (new)|`gd`|✓|| +|Serbian|`sr`|✓|| +|Sindhi (new)|`sd`|✓|| +|Sinhala (new)|`si`|✓|| +|Slovak|`sk`|✓|| +|Slovenian|`sl`|✓|| +|Somali|`so`|✓|| +|Spanish|`es`|✓|| +|Sundanese (new)|`su`|✓|| +|Swahili|`sw`|✓|| +|Swedish|`sv`|✓|| +|Tamil|`ta`|✓|| +|Telugu|`te`|✓|| +|Thai|`th`|✓|| +|Turkish|`tr`|✓|| +|Ukrainian|`uk`|✓|| +|Urdu|`ur`|✓|| +|Uyghur|`ug`|✓|| +|Uzbek|`uz`|✓|| +|Vietnamese|`vi`|✓|| +|Welsh|`cy`|✓|| +|Western Frisian (new)|`fy`|✓|| +|Xhosa (new)|`xh`|✓|| +|Yiddish (new)|`yi`|✓|| +++ ## Next steps [NER feature overview](overview.md) |
cognitive-services | How To Call For Conversations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md | When you get results from PII detection, you can stream the results to an applic |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) | |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.1.0b2) | -4. After you've installed the client library, use the following samples on GitHub to start calling the API. - - * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples/Sample8_AnalyzeConversation_ConversationPII_Transcript.md) - * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples/sample_conv_pii_transcript_input.py) - -5. See the following reference documentation for more information on the client, and return object: +4. See the following reference documentation for more information on the client, and return object: * [C#](/dotnet/api/azure.ai.language.conversations) * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio) |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/language-support.md | Use this article to learn which natural languages are supported by the PII and c ## PII language support -| Language | Language code | Starting with model version | Notes | +| Language | Language code | Starting with model version | Notes | |:-|:-:|:-:|::| | English | `en` | 2022-05-15-preview | |+| French | `fr` | XXXX-XX-XX-preview | | +| German | `de` | XXXX-XX-XX-preview | | +| Spanish | `es` | XXXX-XX-XX-preview | | |
cognitive-services | Document Summarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/document-summarization.md | curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/ } } ]+} ' ``` If you do not specify `sentenceCount`, the model determines the summary length. Note that `sentenceCount` is an approximation of the number of sentences in the output summary, with a range of 1 to 20. |
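Since `sentenceCount` must fall between 1 and 20, client code can clamp the value before building the task parameters. A minimal sketch, assuming an extractive-summarization task shape like the curl example above (the helper name is hypothetical):

```python
def build_summarization_task(sentence_count=None):
    """Build ExtractiveSummarization task parameters.

    sentenceCount is optional; when provided it must be in the range 1-20,
    so out-of-range values are clamped here rather than rejected by the API.
    """
    parameters = {}
    if sentence_count is not None:
        parameters["sentenceCount"] = max(1, min(20, sentence_count))
    return {"kind": "ExtractiveSummarization", "parameters": parameters}
```

Omitting the argument leaves `parameters` empty, which matches the documented behavior of letting the model determine the summary length.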
cognitive-services | Chatgpt Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/chatgpt-quickstart.md | Title: 'Quickstart - Get started using ChatGPT and GPT-4 with Azure OpenAI Service' + Title: 'Quickstart - Get started using GPT-35-Turbo and GPT-4 with Azure OpenAI Service' -description: Walkthrough on how to get started with ChatGPT and GPT-4 on Azure OpenAI Service. +description: Walkthrough on how to get started with GPT-35-Turbo and GPT-4 on Azure OpenAI Service. zone_pivot_groups: openai-quickstart-new recommendations: false -# Quickstart: Get started using ChatGPT and GPT-4 with Azure OpenAI Service +# Quickstart: Get started using GPT-35-Turbo and GPT-4 with Azure OpenAI Service Use this article to get started using Azure OpenAI. |
cognitive-services | Abuse Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/abuse-monitoring.md | description: Learn about the abuse monitoring capabilities of Azure OpenAI Servi - Last updated 06/16/2023 |
cognitive-services | Advanced Prompt Engineering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/advanced-prompt-engineering.md | Title: Prompt engineering techniques with Azure OpenAI -description: Learn about the options for how to use prompt engineering with GPT-3, ChatGPT, and GPT-4 models +description: Learn about the options for how to use prompt engineering with GPT-3, GPT-35-Turbo, and GPT-4 models - Last updated 04/20/2023 While the principles of prompt engineering can be generalized across many differ - Chat Completion API. - Completion API. -Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the ChatGPT and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries. +Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the GPT-35-Turbo and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries. -The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules. Technically the ChatGPT models can be used with either APIs, but we strongly recommend using the Chat Completion API for these models. Technically the GPT-35-Turbo models can be used with either API, but we strongly recommend using the Chat Completion API for these models. 
To learn more, please consult our [in-depth guide on using these APIs](../how-to/chatgpt.md). The techniques in this guide will teach you strategies for increasing the accuracy and grounding of responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when using prompt engineering effectively you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to certain use cases. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#limitations), is just as important as understanding how to leverage their strengths. |
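The diff above distinguishes the two input formats: the Completion API takes a free-form string, while the Chat Completion API takes a chat-like transcript stored as an array of dictionaries with roles. A minimal sketch of the same request expressed both ways, as plain Python data structures (no service call is made; the prompt content is illustrative only):

```python
# Completion API style: a single free-form string with no required format.
completion_prompt = (
    "You are a helpful assistant.\n\n"
    "Q: What is the capital of France?\nA:"
)

# Chat Completion API style: a transcript stored as a list of
# dictionaries, each tagged with a role (system, user, or assistant).
chat_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Each message carries exactly the keys the chat transcript format uses.
assert all(set(m) == {"role", "content"} for m in chat_messages)
assert [m["role"] for m in chat_messages] == ["system", "user"]
```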
cognitive-services | Legacy Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/legacy-models.md | + + Title: Azure OpenAI Service legacy models ++description: Learn about the legacy models in Azure OpenAI. ++ Last updated : 07/06/2023+++++recommendations: false +keywords: +++# Azure OpenAI Service legacy models ++Azure OpenAI Service offers a variety of models for different use cases. The following models are not available for new deployments beginning July 6, 2023. Deployments created prior to July 6, 2023 remain available to customers until July 5, 2024. We recommend customers migrate to the replacement models prior to the July 5, 2024 retirement. ++## GPT-3.5 ++The impacted GPT-3.5 models are the following. The replacement for the GPT-3.5 models is GPT-3.5 Turbo Instruct when that model becomes available. ++- `text-davinci-002` +- `text-davinci-003` +- `code-davinci-002` ++## GPT-3 ++The impacted GPT-3 models are the following. The replacement for the GPT-3 models is GPT-3.5 Turbo Instruct when that model becomes available. ++- `text-ada-001` +- `text-babbage-001` +- `text-curie-001` +- `text-davinci-001` +- `code-cushman-001` ++## Embedding models ++The embedding models below will be retired effective July 5, 2024. Customers should migrate to `text-embedding-ada-002` (version 2). ++- [Similarity](#similarity-embedding) +- [Text search](#text-search-embedding) +- [Code search](#code-search-embedding) ++Each family includes models across a range of capability. The following list indicates the length of the numerical vector returned by the service, based on model capability: ++| Base Model | Model(s) | Dimensions | +|||| +| Ada | | 1024 | +| Babbage | | 2048 | +| Curie | | 4096 | +| Davinci | | 12288 | +++### Similarity embedding ++These models are good at capturing semantic similarity between two or more pieces of text. 
++| Use cases | Models | +||| +| Clustering, regression, anomaly detection, visualization | `text-similarity-ada-001` <br> `text-similarity-babbage-001` <br> `text-similarity-curie-001` <br> `text-similarity-davinci-001` <br>| ++### Text search embedding ++These models help measure whether long documents are relevant to a short search query. There are two input types supported by this family: `doc`, for embedding the documents to be retrieved, and `query`, for embedding the search query. ++| Use cases | Models | +||| +| Search, context relevance, information retrieval | `text-search-ada-doc-001` <br> `text-search-ada-query-001` <br> `text-search-babbage-doc-001` <br> `text-search-babbage-query-001` <br> `text-search-curie-doc-001` <br> `text-search-curie-query-001` <br> `text-search-davinci-doc-001` <br> `text-search-davinci-query-001` <br> | ++### Code search embedding ++Similar to text search embedding models, there are two input types supported by this family: `code`, for embedding code snippets to be retrieved, and `text`, for embedding natural language search queries. ++| Use cases | Models | +||| +| Code search and relevance | `code-search-ada-code-001` <br> `code-search-ada-text-001` <br> `code-search-babbage-code-001` <br> `code-search-babbage-text-001` | ++## Model summary table and region availability ++Region availability is for customers with deployments of the models prior to July 6, 2023. 
++### GPT-3.5 models ++| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | +| | | - | -- | - | +| text-davinci-002 | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 | +| text-davinci-003 | East US, West Europe | N/A | 4,097 | Jun 2021 | +| code-davinci-002 | East US, West Europe | N/A | 8,001 | Jun 2021 | ++### GPT-3 models +++| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | +| | | - | -- | - | +| ada | N/A | N/A | 2,049 | Oct 2019| +| text-ada-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019| +| babbage | N/A | N/A | 2,049 | Oct 2019 | +| text-babbage-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 | +| curie | N/A | N/A | 2,049 | Oct 2019 | +| text-curie-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 | +| davinci | N/A | N/A | 2,049 | Oct 2019| +| text-davinci-001 | South Central US, West Europe | N/A | | | +++### Codex models ++| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | +| | | | | | +| code-cushman-001 | South Central US, West Europe | N/A | 2,048 | | ++### Embedding models ++| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | +| | | | | | +| text-similarity-ada-001| East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| text-similarity-babbage-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| text-similarity-curie-001 | East US, South Central US, West Europe | N/A | 2046 | Aug 2020 | +| text-similarity-davinci-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| text-search-ada-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| text-search-ada-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| text-search-babbage-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| 
text-search-babbage-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| text-search-curie-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| text-search-curie-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| text-search-davinci-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| text-search-davinci-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| code-search-ada-code-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| code-search-ada-text-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| code-search-babbage-code-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | +| code-search-babbage-text-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | |
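The legacy embedding models above return a fixed-length numerical vector per input (1024 dimensions for Ada-class models, up to 12288 for Davinci-class, per the dimensions table). Similarity between two texts is then commonly scored as the cosine of the angle between their vectors. A minimal pure-Python sketch, with toy 4-dimensional vectors standing in for real model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors; a real Ada-class model returns 1024 dimensions.
v_doc = [0.1, 0.3, 0.5, 0.1]
v_query = [0.1, 0.2, 0.6, 0.1]

score = cosine_similarity(v_doc, v_query)
assert 0.9 < score <= 1.0  # near-identical direction -> score close to 1
```

This is the scoring step behind the `doc`/`query` input types the text-search family describes: documents are embedded once with the `doc` input type, the query is embedded with `query`, and results are ranked by this score.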
cognitive-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md | Title: Azure OpenAI Service models description: Learn about the different model capabilities that are available with Azure OpenAI. - Previously updated : 06/30/2023 Last updated : 07/06/2023 keywords: # Azure OpenAI Service models -Azure OpenAI provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are available in all regions currently. Refer to the [model capability table](#model-capabilities) in this article for a full breakdown. +Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. Model availability varies by region. For GPT-3 and other models retiring in July 2024, see [Azure OpenAI Service legacy models](./legacy-models.md). -| Model family | Description | +| Models | Description | |--|--|-| [GPT-4](#gpt-4-models) | A set of models that improve on GPT-3.5 and can understand as well as generate natural language and code. | -| [GPT-3](#gpt-3-models) | A series of models that can understand and generate natural language. This includes the new [ChatGPT model](#chatgpt-gpt-35-turbo). | +| [GPT-4](#gpt-4) | A set of models that improve on GPT-3.5 and can understand as well as generate natural language and code. | +| [GPT-3.5](#gpt-35) | A set of models that improve on GPT-3 and can understand as well as generate natural language and code. | +| [Embeddings](#embeddings-models) | A set of models that can convert text into numerical vector form to facilitate text similarity. | | [DALL-E](#dall-e-models-preview) (Preview) | A series of models in preview that can generate original images from natural language. 
|-| [Codex](#codex-models) | A series of models that can understand and generate code, including translating natural language to code. | -| [Embeddings](#embeddings-models) | A set of models that can understand and use embeddings. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently, we offer three families of Embeddings models for different functionalities: similarity, text search, and code search. | -## Model capabilities +## GPT-4 -Each model family has a series of models that are further distinguished by capability. These capabilities are typically identified by names, and the alphabetical order of these names generally signifies the relative capability and cost of that model within a given model family. For example, GPT-3 models use names such as Ada, Babbage, Curie, and Davinci to indicate relative capability and cost. Davinci is more capable and more expensive than Curie, which in turn is more capable and more expensive than Babbage, and so on. + GPT-4 can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like GPT-3.5 Turbo, GPT-4 is optimized for chat and works well for traditional completions tasks. Use the Chat Completions API to use GPT-4. To learn more about how to interact with GPT-4 and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md). -> [!NOTE] -> Any task that can be performed by a less capable model like Ada can be performed by a more capable model like Curie or Davinci. +Due to high demand, access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4) ++- `gpt-4` +- `gpt-4-32k` ++The `gpt-4` model supports 8192 max input tokens and the `gpt-4-32k` model supports up to 32,768 tokens. 
++## GPT-3.5 ++GPT-3.5 models can understand and generate natural language or code. The most capable and cost effective model in the GPT-3.5 family is GPT-3.5 Turbo, which has been optimized for chat and works well for traditional completions tasks as well. We recommend using GPT-3.5 Turbo over [legacy GPT-3.5 and GPT-3 models](./legacy-models.md). ++- `gpt-35-turbo` +- `gpt-35-turbo-16k` ++The `gpt-35-turbo` model supports 4096 max input tokens and the `gpt-35-turbo-16k` model supports up to 16,384 tokens. ++Like GPT-4, use the Chat Completions API to use GPT-3.5 Turbo. To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md). ++## Embeddings models ++> [!IMPORTANT] +> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model. +++Currently, we offer three families of Embeddings models for different functionalities: + The following list indicates the length of the numerical vector returned by the service, based on model capability: ++| Base Model | Model(s) | Dimensions | +|||| +| Ada | models ending in -001 (Version 1) | 1024 | +| Ada | text-embedding-ada-002 (Version 2) | 1536 | ++## DALL-E (Preview) ++The DALL-E models, currently in preview, generate images from text prompts that the user provides. +++## Model summary table and region availability ++> [!IMPORTANT] +> South Central US is temporarily unavailable for creating new resources due to high demand. 
++### GPT-4 models ++These models can only be used with the Chat Completion API. ++| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | +| | | | | | +| `gpt-4` <sup>1,</sup><sup>2</sup> (0314) | East US, France Central | N/A | 8,192 | September 2021 | +| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (0314) | East US, France Central | N/A | 32,768 | September 2021 | +| `gpt-4` <sup>1</sup> (0613) | East US, France Central | N/A | 8,192 | September 2021 | +| `gpt-4-32k` <sup>1</sup> (0613) | East US, France Central | N/A | 32,768 | September 2021 | -## Naming convention +<sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br> +<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired on January 4, 2024. See [model updates](#model-updates) for model upgrade behavior. ++### GPT-3.5 models -Azure OpenAI model names typically correspond to the following standard naming convention: +GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can also be used with the Completions API. GPT3.5 Turbo (0613) only supports the Chat Completions API. -`{capability}-{family}[-{input-type}]-{identifier}` +| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | +| | | - | -- | - | +| `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 | +| `gpt-35-turbo` (0613) | East US, France Central, UK South | N/A | 4,096 | Sep 2021 | +| `gpt-35-turbo-16k` (0613) | East US, France Central, UK South | N/A | 16,384 | Sep 2021 | -| Element | Description | -| | | -| `{capability}` | The model capability of the model. For example, [GPT-3 models](#gpt-3-models) uses `text`, while [Codex models](#codex-models) use `code`.| -| `{family}` | The relative family of the model. 
For example, GPT-3 models include `ada`, `babbage`, `curie`, and `davinci`.| -| `{input-type}` | ([Embeddings models](#embeddings-models) only) The input type of the embedding supported by the model. For example, text search embedding models support `doc` and `query`.| -| `{identifier}` | The version identifier of the model. | +<sup>1</sup> Version `0301` of gpt-35-turbo will be retired on January 4, 2024. See [model updates](#model-updates) for model upgrade behavior. -For example, our most powerful GPT-3 model is called `text-davinci-003`, while our most powerful Codex model is called `code-davinci-002`. -> The older versions of GPT-3 models named `ada`, `babbage`, `curie`, and `davinci` that don't follow the standard naming convention are primarily intended for fine tuning. For more information, see [Learn how to customize a model for your application](../how-to/fine-tuning.md). +### Embeddings models -## Finding what models are available +These models can only be used with Embedding API requests. ++> [!NOTE] +> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model. 
++| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | +| | | | | | +| text-embedding-ada-002 (version 2) | East US, South Central US, West Europe | N/A |8,191 | Sep 2021 | +| text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 | ++### DALL-E models (Preview) ++| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (characters) | Training Data (up to) | +| | | | | | +| dalle2 | East US | N/A | 1000 | N/A | ++## Working with models ++### Finding what models are available You can get a list of models that are available for both inference and fine-tuning by your Azure OpenAI resource by using the [Models List API](/rest/api/cognitiveservices/azureopenaistable/models/list). -## Model updates +### Model updates Azure OpenAI now supports automatic updates for select model deployments. On models where automatic update support is available, a model version drop-down will be visible in Azure OpenAI Studio under **Create new deployment** and **Edit deployment**: Azure OpenAI now supports automatic updates for select model deployments. On mod When **Auto-update to default** is selected your model deployment will be automatically updated within two weeks of a new version being released. -If you are still in the early testing phases for completion and chat completion based models we recommend deploying models with **auto-update to default** set whenever it is available. For embeddings models while we recommend using the latest model version, you should choose when you want to upgrade since embeddings generated with an earlier model version will not be interchangeable with the new version. +If you are still in the early testing phases for completion and chat completion based models, we recommend deploying models with **auto-update to default** set whenever it is available. 
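The embeddings note above stresses that Version 1 and Version 2 of `text-embedding-ada-002` are not interchangeable: document embedding and document search must use the same version. One way to enforce that is to tag every stored vector with the model version that produced it and filter on that tag at query time. A minimal sketch; the store layout and function name are illustrative, not a real API:

```python
# Hypothetical in-memory store; each entry records which model version
# produced the vector, so mixed-version comparisons can be refused.
store = [
    {"id": "doc-1", "model": "text-embedding-ada-002-v2", "vector": [0.1, 0.9]},
    {"id": "doc-2", "model": "text-embedding-ada-002-v1", "vector": [0.4, 0.6]},
]

def compatible_docs(query_model, docs):
    """Only documents embedded with the same model version as the query."""
    return [d for d in docs if d["model"] == query_model]

# A query embedded with Version 2 only ever scores against Version 2 vectors.
matches = compatible_docs("text-embedding-ada-002-v2", store)
assert [d["id"] for d in matches] == ["doc-1"]
```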
### Specific model version As your use of Azure OpenAI evolves, and you start to build and integrate with applications you will likely want to manually control model updates so that you can first test and validate that model performance is remaining consistent for your use case prior to upgrade. -When you select a specific model version for a deployment this version will remain selected until you either choose to manually update yourself, or once you reach the expiration date for the model. When the deprecation/expiration date is reached the model will auto-upgrade to the latest available version. +When you select a specific model version for a deployment this version will remain selected until you either choose to manually update yourself, or once you reach the retirement date for the model. When the retirement date is reached the model will auto-upgrade to the default version at the time of retirement. -### GPT-35-Turbo 0301 and GPT-4 0314 expiration +### GPT-35-Turbo 0301 and GPT-4 0314 retirement -The original `gpt-35-turbo` (`0301`) and both `gpt-4` (`0314`) models will expire no earlier than October 15th, 2023. Upon expiration, deployments will automatically be upgraded to the default version. If you would like your deployment to stop accepting completion requests rather than upgrading, then you will be able to set the model upgrade option to expire through the API. We will publish guidelines on this by September 1. +The `gpt-35-turbo` (`0301`) and both `gpt-4` (`0314`) models will be retired on January 4, 2024. Upon retirement, deployments will automatically be upgraded to the default version at the time of retirement. If you would like your deployment to stop accepting completion requests rather than upgrading, then you will be able to set the model upgrade option to expire through the API. We will publish guidelines on this by September 1. 
### Viewing deprecation dates PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{ **Request body** -This is only a subset of the available request body parameters. For the full list of the parameters you can refer to the [REST API spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json). +This is only a subset of the available request body parameters. For the full list of the parameters, you can refer to the [REST API spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json). |Parameter|Type| Description | |--|--|--| curl -X PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-0 } ``` -## Finding the right model --We recommend starting with the most capable model in a model family to confirm whether the model capabilities meet your requirements. Then you can stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities. --## GPT-4 models -- GPT-4 can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like gpt-35-turbo, GPT-4 is optimized for chat but works well for traditional completions tasks. --Due to high demand access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4) --- `gpt-4`-- `gpt-4-32k`--The `gpt-4` supports 8192 max input tokens and the `gpt-4-32k` supports up to 32,768 tokens. --## GPT-3 models --The GPT-3 models can understand and generate natural language. The service offers four model capabilities, each with different levels of power and speed suitable for different tasks. 
Davinci is the most capable model, while Ada is the fastest. In the order of greater to lesser capability, the models are: --- `text-davinci-003`-- `text-curie-001`-- `text-babbage-001`-- `text-ada-001`--While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting, because it produces the best results and validate the value that Azure OpenAI can provide. Once you have a prototype working, you can then optimize your model choice with the best latency/performance balance for your application. --### <a id="gpt-3-davinci"></a>Davinci --Davinci is the most capable model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, like summarization for a specific audience and creative content generation, Davinci produces the best results. The increased capabilities provided by Davinci require more compute resources, so Davinci costs more and isn't as fast as other models. --Another area where Davinci excels is in understanding the intent of text. Davinci is excellent at solving many kinds of logic problems and explaining the motives of characters. Davinci has been able to solve some of the most challenging AI problems involving cause and effect. --**Use for**: Complex intent, cause and effect, summarization for audience --### Curie --Curie is powerful, yet fast. While Davinci is stronger when it comes to analyzing complicated text, Curie is capable for many nuanced tasks like sentiment classification and summarization. Curie is also good at answering questions and performing Q&A and as a general service chatbot. --**Use for**: Language translation, complex classification, text sentiment, summarization --### Babbage --Babbage can perform straightforward tasks like simple classification. 
It's also capable when it comes to semantic search, ranking how well documents match up with search queries. -**Use for**: Moderate classification, semantic search classification -### Ada -Ada is usually the fastest model and can perform tasks like parsing text, address correction and certain kinds of classification tasks that don't require too much nuance. Ada's performance can often be improved by providing more context. -**Use for**: Parsing text, simple classification, address correction, keywords -### ChatGPT (gpt-35-turbo) --The ChatGPT model (gpt-35-turbo) is a language model designed for conversational interfaces and the model behaves differently than previous GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT model is conversation-in and message-out. The model expects a prompt string formatted in a specific chat-like transcript format, and returns a completion that represents a model-written message in the chat. --To learn more about the ChatGPT model and how to interact with the Chat API check out our [in-depth how-to](../how-to/chatgpt.md). --### DALL-E models (Preview) --The DALL-E models, currently in preview, generate images from text prompts that the user provides. --## Codex models --The Codex models are descendants of our base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub. --They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and Shell. In the order of greater to lesser capability, the Codex models are: --- `code-davinci-002`-- `code-cushman-001`--### <a id="codex-davinci"></a>Davinci --Similar to GPT-3, Davinci is the most capable Codex model and can perform any task the other models can perform, often with less instruction.

For applications requiring deep understanding of the content, Davinci produces the best results. Greater capabilities require more compute resources, so Davinci costs more and isn't as fast as other models. --### Cushman --Cushman is powerful, yet fast. While Davinci is stronger when it comes to analyzing complicated tasks, Cushman is a capable model for many code generation tasks. Cushman typically runs faster and cheaper than Davinci, as well. --## Embeddings models --> [!IMPORTANT] -> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model. --Currently, we offer three families of Embeddings models for different functionalities: --- [Similarity](#similarity-embedding)-- [Text search](#text-search-embedding)-- [Code search](#code-search-embedding)--Each family includes models across a range of capability. The following list indicates the length of the numerical vector returned by the service, based on model capability: --| Base Model | Model(s) | Dimensions | -|||| -| Ada | models ending in -001 (Version 1) | 1024 | -| Ada | text-embedding-ada-002 (Version 2) | 1536 | -| Babbage | | 2048 | -| Curie | | 4096 | -| Davinci | | 12288 | --Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is both faster and cheaper. --### Similarity embedding --These models are good at capturing semantic similarity between two or more pieces of text. 
--| Use cases | Models | -||| -| Clustering, regression, anomaly detection, visualization | `text-similarity-ada-001` <br> `text-similarity-babbage-001` <br> `text-similarity-curie-001` <br> `text-similarity-davinci-001` <br>| --### Text search embedding --These models help measure whether long documents are relevant to a short search query. There are two input types supported by this family: `doc`, for embedding the documents to be retrieved, and `query`, for embedding the search query. --| Use cases | Models | -||| -| Search, context relevance, information retrieval | `text-search-ada-doc-001` <br> `text-search-ada-query-001` <br> `text-search-babbage-doc-001` <br> `text-search-babbage-query-001` <br> `text-search-curie-doc-001` <br> `text-search-curie-query-001` <br> `text-search-davinci-doc-001` <br> `text-search-davinci-query-001` <br> | --### Code search embedding --Similar to text search embedding models, there are two input types supported by this family: `code`, for embedding code snippets to be retrieved, and `text`, for embedding natural language search queries. --| Use cases | Models | -||| -| Code search and relevance | `code-search-ada-code-001` <br> `code-search-ada-text-001` <br> `code-search-babbage-code-001` <br> `code-search-babbage-text-001` | --When using our embeddings models, keep in mind their limitations and risks. --## Model Summary table and region availability --> [!IMPORTANT] -> South Central US is temporarily unavailable for creating new resources due to high demand. --### GPT-3 Models --These models can be used with Completion API requests. `gpt-35-turbo` is the only model that can be used with both Completion API requests and the Chat Completion API. 
--| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | -| | | - | -- | - | -| ada | N/A | N/A | 2,049 | Oct 2019| -| text-ada-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019| -| babbage | N/A | N/A | 2,049 | Oct 2019 | -| text-babbage-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 | -| curie | N/A | N/A | 2,049 | Oct 2019 | -| text-curie-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 | -| davinci | N/A | N/A | 2,049 | Oct 2019| -| text-davinci-001 | South Central US, West Europe | N/A | | | -| text-davinci-002 | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 | -| text-davinci-003 | East US, West Europe | N/A | 4,097 | Jun 2021 | -| text-davinci-fine-tune-002 | N/A | N/A | | | -| gpt-35-turbo<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 | -| gpt-35-turbo (0613) | East US, France Central, UK South | N/A | 4,096 | Sep 2021 | -| gpt-35-turbo-16k (0613) | East US, France Central, UK South | N/A | 16,384 | Sep 2021 | --<sup>1</sup> Version `0301` of gpt-35-turbo will be deprecated no earlier than October 15th, 2023 in favor of version `0613`. --### GPT-4 Models --These models can only be used with the Chat Completion API. 
--| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | -| | | | | | -| `gpt-4` <sup>1,</sup><sup>2</sup> (0314) | East US, France Central | N/A | 8,192 | September 2021 | -| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (0314) | East US, France Central | N/A | 32,768 | September 2021 | -| `gpt-4` <sup>1</sup> (0613) | East US, France Central | N/A | 8,192 | September 2021 | -| `gpt-4-32k` <sup>1</sup> (0613) | East US, France Central | N/A | 32,768 | September 2021 | --<sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br> -<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be deprecated no earlier than October 15th, 2023 in favor of version `0613`. --### Dall-E Models --| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (characters) | Training Data (up to) | -| | | | | | -| dalle2 | East US | N/A | 1000 | N/A | ---### Codex Models --These models can only be used with Completions API requests. --| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | -| | | | | | -| code-cushman-001<sup>1</sup> | South Central US, West Europe | Currently unavailable | 2,048 | | -| code-davinci-002 | East US, West Europe | N/A | 8,001 | Jun 2021 | --<sup>1</sup> The model is available for fine-tuning by request only. Currently we aren't accepting new requests to fine-tune the model. --### Embeddings Models --These models can only be used with Embedding API requests. --> [!NOTE] -> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. 
Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model. --| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | -| | | | | | -| text-embedding-ada-002 (version 2) | East US, South Central US | N/A |8,191 | Sep 2021 | -| text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 | -| text-similarity-ada-001| East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-similarity-babbage-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-similarity-curie-001 | East US, South Central US, West Europe | N/A | 2046 | Aug 2020 | -| text-similarity-davinci-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-search-ada-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-search-ada-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-search-babbage-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-search-babbage-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-search-curie-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-search-curie-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-search-davinci-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| text-search-davinci-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| code-search-ada-code-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| code-search-ada-text-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| code-search-babbage-code-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | -| code-search-babbage-text-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 | ## Next steps |
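Because Version 1 and Version 2 embeddings are not interchangeable, similarity scores are only meaningful between vectors produced by the same model version. A minimal sketch of the comparison step, using short hypothetical vectors (real `text-embedding-ada-002` vectors have 1,536 dimensions):

```python
import math

def cosine_similarity(a, b):
    # Embeddings from the same model/version live in the same vector space,
    # so cosine similarity between them is meaningful.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embedding vectors, for illustration only.
doc_vec = [0.1, 0.3, 0.5]
query_vec = [0.2, 0.25, 0.55]
score = cosine_similarity(doc_vec, query_vec)
```

Mixing vectors from Version 1 and Version 2 would make `score` meaningless, which is why document embedding and document search must use the same model version.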
cognitive-services | Prompt Engineering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/prompt-engineering.md | Title: Azure OpenAI Service | Introduction to Prompt engineering description: Learn how to use prompt engineering to optimize your work with Azure OpenAI Service. - Last updated 03/21/2023 As you develop more complex prompts, it's helpful to keep this fundamental behav ### Prompt components -When using the Completion API while there's no differentiation between different parts of the prompt, it can still be useful for learning and discussion to identify underlying prompt components. With the [Chat Completion API](../how-to/chatgpt.md) there are distinct sections of the prompt that are sent to the API in the form of an array of dictionaries with associated roles: system, user, and assistant. This guidance will focus more generally on how to think about prompt construction rather than providing prescriptive guidance that is specific to one API over another. +While the Completion API doesn't differentiate between different parts of the prompt, it can still be useful for learning and discussion to identify underlying prompt components. With the [Chat Completion API](../how-to/chatgpt.md) there are distinct sections of the prompt that are sent to the API in the form of an array of dictionaries with associated roles: system, user, and assistant. This guidance focuses more generally on how to think about prompt construction rather than providing prescriptive guidance that is specific to one API over another. It's also important to understand that while there could be other valid ways to dissect prompts, the goal of this breakdown is to provide a relatively simple way to think about prompt construction. With the Completion API, all the components are optional, but at least one must be present and most prompts include more than one component. 
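The Chat Completion API's array-of-dictionaries format described above can be sketched as follows; the helper name and message contents are illustrative only:

```python
# Hypothetical helper illustrating the Chat Completion message format:
# an array of dictionaries, each with a "role" (system, user, or assistant)
# and a "content" string.
def build_chat_prompt(system_message, user_message):
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message},
    ]

messages = build_chat_prompt(
    "You are a helpful assistant.",
    "Can you please tell me how to get to the museum?",
)
```

With the Completion API, by contrast, the same instructions and content would all be concatenated into a single prompt string.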
There can be some grey area between components as well. The order presented below roughly corresponds to how commonly each component is used, from most to least. Primary content refers to some sort of text that is being processed or transform |--|--| | Can you please tell me how to get to the museum?<br>Translate to French: |`Pouvez-vous s'il vous plaît me dire comment aller au musée?` | -Primary content can also be much longer. In the following example the primary content is the introduction section from the [Wikipedia entry on time complexity](https://en.wikipedia.org/wiki/Time_complexity), which is almost 300 words long. This content has been abbreviated in our table for ease of display. +Primary content can also be much longer. In the following example, the primary content is the introduction section from the [Wikipedia entry on time complexity](https://en.wikipedia.org/wiki/Time_complexity), which is almost 300 words long. This content has been abbreviated in our table for ease of display. | Prompt | Completion| |--|--| | In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by [...]<br>Summarize the previous text in a few sentences:| Time complexity is a measure of how long an algorithm takes to run. It is typically expressed as a function of the size of the input. The most common measure is worst-case time complexity, which is the maximum amount of time required for inputs of a given size.|-| In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by [...]<br>TL;DR (Too long didn't read) | The time complexity of an algorithm is the amount of time it takes to run an algorithm. 
Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. The time complexity is generally expressed as a function of the size of the input.| +| In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by [...]<br>`TL;DR` (Too long didn't read) | The time complexity of an algorithm is the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. The time complexity is generally expressed as a function of the size of the input.| GPT models can also handle primary content that is structured. In the example below, a TSV about beers in Belgium (abbreviated for display in this article) is passed to the model as part of the prompt. It is capable of properly interpreting the content and answering questions about the data. GPT models can also handle primary content that is structured. In the example be ### Examples -Successful prompts often rely on the practice of “one-shot” or “few-shot” learning. This refers to the inclusion of one or more examples of the desired behavior of the model, typically by including input and output pairs. This is not learning in the sense that the model is permanently changed, but rather that the examples better condition the model to respond as desired for only the current inference. The use of prompts with no examples is sometimes referred to as “zero-shot” learning. Please note that with the Chat Completion API few-shot learning examples are typically added to the messages array in the form of example user/assistant interactions after the initial system message. 
+Successful prompts often rely on the practice of “one-shot” or “few-shot” learning. This refers to the inclusion of one or more examples of the desired behavior of the model, typically by including input and output pairs. This is not learning in the sense that the model is permanently changed, but rather that the examples better condition the model to respond as desired for only the current inference. The use of prompts with no examples is sometimes referred to as “zero-shot” learning. Note that with the Chat Completion API few-shot learning examples are typically added to the messages array in the form of example user/assistant interactions after the initial system message. | Learning Type| Prompt| Completion| |- |-|--| Supporting content is information that the model can utilize to influence the ou | Supporting Content| Prompt | Completion| |--|-|--| | Without | Workshop list:<br>Title Description<br> Planning Workshop Kickoff Please join us to kick off our workshop! Hear from our leaders about cross group priorities and learnings for the year ahead. The invitation to this event has already been sent to everyone in […]<br>Summarize the workshops above, grouped by topic: | Workshop list: <br> Planning Workshop Kickoff <br> -Planning Workshop Wrap-up<br>- Core MeetUp<br>- Customer Driven Perspectives<br>-Impact and Key Focus Areas<br>-Best practices<br>-Accelerating app adoption […] |-| With | Workshop list: <br>Title Description<br>Planning Workshop Kickoff Please join us to kick off our workshop! Hear from our leaders about cross group priorities and learnings for the year ahead. The invitation to this event has already been sent to everyone in […] <br>My Important Topics: prompt engineering, search, GPT models<br>Summarize the workshops above, grouped by my important topics:| 1. Prompt Engineering: <br>- New prompt engineering tips<br>- The art of prompt engineering<br>2. Search:<br>-Intro to vector search with embedding<br>3. 
GPT Models:<br>- Intro to GPT-4<br>- ChatGPT in-depth.| +| With | Workshop list: <br>Title Description<br>Planning Workshop Kickoff Please join us to kick off our workshop! Hear from our leaders about cross group priorities and learnings for the year ahead. The invitation to this event has already been sent to everyone in […] <br>My Important Topics: prompt engineering, search, GPT models<br>Summarize the workshops above, grouped by my important topics:| 1. Prompt Engineering: <br>- New prompt engineering tips<br>- The art of prompt engineering<br>2. Search:<br>-Intro to vector search with embedding<br>3. GPT Models:<br>- Intro to GPT-4<br>- GPT-35-Turbo in-depth.| ## Best practices Supporting content is information that the model can utilize to influence the ou - **Be Descriptive**. Use analogies. - **Double Down**. Sometimes you may need to repeat yourself to the model. Give instructions before and after your primary content, use an instruction and a cue, etc. - **Order Matters**. The order in which you present information to the model may impact the output. Whether you put instructions before your content (“summarize the following…”) or after (“summarize the above…”) can make a difference in output. Even the order of few-shot examples can matter. This is referred to as recency bias.-- **Give the model an “out”**. It can sometimes be helpful to give the model an alternative path if it is unable to complete the assigned task. For example, when asking a question over a piece of text you might include something like "respond with ‘not found’ if the answer is not present". This can help the model avoid generating false responses.+- **Give the model an “out”**. It can sometimes be helpful to give the model an alternative path if it is unable to complete the assigned task. For example, when asking a question over a piece of text you might include something like "respond with ‘not found’ if the answer is not present." 
This can help the model avoid generating false responses. ## Space efficiency |
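The few-shot pattern described in the Examples section above — example user/assistant turns added after the initial system message — can be sketched as a messages array; the classification task and message contents here are hypothetical:

```python
# Few-shot examples for the Chat Completion API: example user/assistant
# interactions follow the system message and condition the model for
# the current inference only (the model itself is not changed).
few_shot_messages = [
    {"role": "system", "content": "You classify workshop titles by topic."},
    # Example pair 1: input and desired output.
    {"role": "user", "content": "New prompt engineering tips"},
    {"role": "assistant", "content": "Topic: Prompt Engineering"},
    # Example pair 2.
    {"role": "user", "content": "Intro to vector search with embeddings"},
    {"role": "assistant", "content": "Topic: Search"},
    # The actual query to answer comes last.
    {"role": "user", "content": "Intro to GPT-4"},
]
```

Removing the two example pairs would turn this into a zero-shot prompt.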
cognitive-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/use-your-data.md | recommendations: false # Azure OpenAI on your data (preview) -Azure OpenAI on your data enables you to run supported chat models such as ChatGPT and GPT-4 on your data without needing to train or fine-tune models. Running models on your data enables you to chat on top of, and analyze your data with greater accuracy and speed. By doing so, you can unlock valuable insights that can help you make better business decisions, identify trends and patterns, and optimize your operations. One of the key benefits of Azure OpenAI on your data is its ability to tailor the content of conversational AI. +Azure OpenAI on your data enables you to run supported chat models such as GPT-35-Turbo and GPT-4 on your data without needing to train or fine-tune models. Running models on your data enables you to chat on top of, and analyze your data with greater accuracy and speed. By doing so, you can unlock valuable insights that can help you make better business decisions, identify trends and patterns, and optimize your operations. One of the key benefits of Azure OpenAI on your data is its ability to tailor the content of conversational AI. To get started, [connect your data source](../use-your-data-quickstart.md) using [Azure OpenAI Studio](https://oai.azure.com/) and start asking questions and chatting on your data. Because the model has access to, and can reference specific sources to support i ## What is Azure OpenAI on your data -Azure OpenAI on your data works with OpenAI's powerful ChatGPT (gpt-35-turbo) and GPT-4 language models, enabling them to provide responses based on your data. You can access Azure OpenAI on your data using a REST API or the web-based interface in the [Azure OpenAI Studio](https://oai.azure.com/) to create a solution that connects to your data to enable an enhanced chat experience. 
+Azure OpenAI on your data works with OpenAI's powerful GPT-35-Turbo and GPT-4 language models, enabling them to provide responses based on your data. You can access Azure OpenAI on your data using a REST API or the web-based interface in the [Azure OpenAI Studio](https://oai.azure.com/) to create a solution that connects to your data to enable an enhanced chat experience. One of the key features of Azure OpenAI on your data is its ability to retrieve and utilize data in a way that enhances the model's output. Azure OpenAI on your data, together with Azure Cognitive Search, determines what data to retrieve from the designated data source based on the user input and provided conversation history. This data is then augmented and resubmitted as a prompt to the OpenAI model, with retrieved information being appended to the original prompt. Although retrieved data is being appended to the prompt, the resulting input is still processed by the model like any other prompt. Once the data has been retrieved and the prompt has been submitted to the model, the model uses this information to provide a completion. See the [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext) article for more information. ## Data source options -Azure OpenAI on your data uses an [Azure Cognitive Search](/azure/search/search-what-is-azure-search) index to determine what data to retrieve based on user inputs and provided conversation history. We recommend using Azure OpenAI Studio to create your index from a blob storage or local files. See the [quickstart article](../use-your-data-quickstart.md?pivots=programming-language-studio) for more information. +Azure OpenAI on your data uses an [Azure Cognitive Search](/azure/search/search-what-is-azure-search) index to determine what data to retrieve based on user inputs and provided conversation history. 
We recommend using Azure OpenAI Studio to create your index from a blob storage or local files. See the [quickstart article](../use-your-data-quickstart.md?pivots=programming-language-studio) for more information. ## Ingesting your data into Azure cognitive search |
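A request to the preview "on your data" endpoint might carry a body like the following sketch; the field names reflect the preview REST API shape and may change, and the endpoint, key, and index values are placeholders:

```python
# Sketch of a request body for the Azure OpenAI "on your data" preview
# extensions endpoint. The Azure Cognitive Search endpoint, admin key,
# and index name below are placeholders, not real values.
request_body = {
    "dataSources": [
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": "https://<your-search-resource>.search.windows.net",
                "key": "<search-admin-key>",
                "indexName": "<your-index>",
            },
        }
    ],
    "messages": [
        {"role": "user", "content": "What does my data say about Q2 revenue?"}
    ],
}
```

The service uses the `dataSources` entry to decide what to retrieve from the index, appends the retrieved content to the prompt, and then processes the augmented prompt like any other.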
cognitive-services | Encrypt Data At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/encrypt-data-at-rest.md | |
cognitive-services | Chatgpt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/chatgpt.md | Title: How to work with the ChatGPT and GPT-4 models + Title: How to work with the GPT-35-Turbo and GPT-4 models -description: Learn about the options for how to use the ChatGPT and GPT-4 models +description: Learn about the options for how to use the GPT-35-Turbo and GPT-4 models keywords: ChatGPT zone_pivot_groups: openai-chat -# Learn how to work with the ChatGPT and GPT-4 models +# Learn how to work with the GPT-35-Turbo and GPT-4 models -The ChatGPT and GPT-4 models are language models that are optimized for conversational interfaces. The models behave differently than the older GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT and GPT-4 models are conversation-in and message-out. The models expect input formatted in a specific chat-like transcript format, and return a completion that represents a model-written message in the chat. While this format was designed specifically for multi-turn conversations, you'll find it can also work well for non-chat scenarios too. +The GPT-35-Turbo and GPT-4 models are language models that are optimized for conversational interfaces. The models behave differently than the older GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the GPT-35-Turbo and GPT-4 models are conversation-in and message-out. The models expect input formatted in a specific chat-like transcript format, and return a completion that represents a model-written message in the chat. While this format was designed specifically for multi-turn conversations, you'll find it can also work well for non-chat scenarios too. 
In Azure OpenAI there are two different options for interacting with these types of models: - Chat Completion API. - Completion API with Chat Markup Language (ChatML). -The Chat Completion API is a new dedicated API for interacting with the ChatGPT and GPT-4 models. This API is the preferred method for accessing these models. **It is also the only way to access the new GPT-4 models**. +The Chat Completion API is a new dedicated API for interacting with the GPT-35-Turbo and GPT-4 models. This API is the preferred method for accessing these models. **It is also the only way to access the new GPT-4 models**. -ChatML uses the same [completion API](../reference.md#completions) that you use for other models like text-davinci-002, it requires a unique token based prompt format known as Chat Markup Language (ChatML). This provides lower level access than the dedicated Chat Completion API, but also requires additional input validation, only supports ChatGPT (gpt-35-turbo) models, and **the underlying format is more likely to change over time**. +ChatML uses the same [completion API](../reference.md#completions) that you use for other models like text-davinci-002, but it requires a unique token-based prompt format known as Chat Markup Language (ChatML). This provides lower-level access than the dedicated Chat Completion API, but also requires additional input validation, only supports gpt-35-turbo models, and **the underlying format is more likely to change over time**. -This article walks you through getting started with the new ChatGPT and GPT-4 models. It's important to use the techniques described here to get the best results. 
If you try to interact with the models the same way you did with the older model series, the models will often be verbose and provide less useful responses. ::: zone pivot="programming-language-chat-completions" |
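To illustrate the difference between the two options, the following sketch converts a Chat Completion style messages array into a ChatML-style prompt string for the Completions API. The special tokens shown (`<|im_start|>`, `<|im_end|>`) follow the format documented for gpt-35-turbo, and as noted above the underlying format may change over time:

```python
# Illustrative conversion from Chat Completion messages to the ChatML
# token-based prompt format used with the Completions API (gpt-35-turbo only).
def to_chatml(messages):
    parts = ["<|im_start|>{role}\n{content}\n<|im_end|>".format(**m) for m in messages]
    # The prompt ends by opening an assistant turn for the model to complete.
    return "\n".join(parts) + "\n<|im_start|>assistant\n"

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the difference between these APIs?"},
])
```

With the Chat Completion API you would pass the messages array directly instead of serializing it yourself, which is one reason that API is preferred.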
cognitive-services | Completions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/completions.md | While all prompts result in completions, it can be helpful to think of text comp Vertical farming provides a novel solution for producing food locally, reducing transportation costs and ``` -This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/models.md#codex-models) section in [Models](../concepts/models.md). +This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/legacy-models.md#codex-models) section in [Models](../concepts/models.md). ``` import React from 'react'; |
cognitive-services | Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/quota.md | When a deployment is created, the assigned TPM will directly map to the tokens-p The flexibility to distribute TPM globally within a subscription and region has allowed Azure OpenAI Service to loosen other restrictions: - The maximum resources per region are increased to 30.-- The limit on creating no more than one deployments of the same model in a resource has been removed.+- The limit on creating no more than one deployment of the same model in a resource has been removed. ## Assign quota |
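The regional TPM pooling described above can be sketched as a simple bookkeeping check; the function and quota figures are hypothetical illustrations, not the service's actual enforcement logic:

```python
# Hypothetical TPM bookkeeping: the tokens-per-minute (TPM) assigned across
# all deployments in a region must fit within the regional quota, but multiple
# deployments of the same model are now allowed within one resource.
def can_create_deployment(regional_quota_tpm, existing_deployments_tpm, requested_tpm):
    return sum(existing_deployments_tpm) + requested_tpm <= regional_quota_tpm

# Illustrative numbers only: 240K TPM regional quota, two existing deployments.
ok = can_create_deployment(240_000, [100_000, 60_000], 50_000)
too_big = can_create_deployment(240_000, [100_000, 60_000], 100_000)
```

Here `ok` is allowed (210K of 240K used) while `too_big` would exceed the regional pool and be rejected.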
cognitive-services | Switching Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/switching-endpoints.md | for text in inputs: ## Next steps -* Learn more about how to work with ChatGPT and the GPT-4 models with [our how-to guide](../how-to/chatgpt.md). +* Learn more about how to work with GPT-35-Turbo and the GPT-4 models with [our how-to guide](../how-to/chatgpt.md). * For more examples, check out the [Azure OpenAI Samples GitHub repository](https://aka.ms/AOAICodeSamples) |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md | Title: What is Azure OpenAI Service? description: Apply advanced language models to a variety of use cases with Azure OpenAI --++ Previously updated : 06/28/2023 Last updated : 07/06/2023 recommendations: false keywords: keywords: # What is Azure OpenAI Service? -Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series. In addition, the new GPT-4 and ChatGPT (gpt-35-turbo) model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio. +Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-35-Turbo, and Embeddings model series, which have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio. 
### Features overview | Feature | Azure OpenAI | | | |-| Models available | **NEW GPT-4 series** <br> GPT-3 base series <br>**NEW ChatGPT (gpt-35-turbo)**<br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.| +| Models available | **GPT-4 series** <br>**GPT-35-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.| | Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman <br> Davinci <br>**Fine-tuning is currently unavailable to new customers**.| | Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | | Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). | The service provides users access to several different models. Each model provid GPT-4 models are the latest available models. Due to high demand, access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4) -The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and increasing order of speed. --The Codex series of models is a descendant of GPT-3 and has been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our [models concept page](./concepts/models.md). - The DALL-E models, currently in preview, generate images from text prompts that the user provides. +Learn more about each model on our [models concept page](./concepts/models.md). + ## Next steps Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md). |
cognitive-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md | curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM ## Chat completions -Create completions for chat messages with the ChatGPT and GPT-4 models. +Create completions for chat messages with the GPT-35-Turbo and GPT-4 models. **Create chat completions** |
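A sketch of how such a chat completions request can be assembled (no network call is made here); the resource, deployment, and key values are placeholders, and `2023-05-15` is used as an example API version:

```python
import json

# Placeholders -- substitute your own resource, deployment, and key.
resource = "YOUR_RESOURCE_NAME"
deployment = "YOUR_DEPLOYMENT_NAME"
api_version = "2023-05-15"

# Azure OpenAI chat completions endpoint for a specific deployment.
url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/chat/completions?api-version={api_version}"
)
headers = {"Content-Type": "application/json", "api-key": "YOUR_API_KEY"}
body = json.dumps({
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
    ]
})
# An HTTP client, e.g. requests.post(url, headers=headers, data=body),
# would send this request; the completion arrives in the JSON response.
```

The same request shape works for both GPT-35-Turbo and GPT-4 deployments.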
cognitive-services | Use Your Data Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/use-your-data-quickstart.md | Title: 'Use your own data with Azure OpenAI Service' + Title: 'Use your own data with Azure OpenAI service' description: Use this article to import and use your data in Azure OpenAI. If you want to clean up and remove an OpenAI or Azure Cognitive Search resource, ## Next steps - Learn more about [using your data in Azure OpenAI Service](./concepts/use-your-data.md)-- [Chat app sample code on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main).+- [Chat app sample code on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main). |
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md | keywords: ### Use Azure OpenAI on your own data (preview) -- [Azure OpenAI on your data](./concepts/use-your-data.md) is now available in preview, enabling you to chat with OpenAI models such as ChatGPT and GPT-4 and receive responses based on your data. +- [Azure OpenAI on your data](./concepts/use-your-data.md) is now available in preview, enabling you to chat with OpenAI models such as GPT-35-Turbo and GPT-4 and receive responses based on your data. ### New versions of gpt-35-turbo and gpt-4 models If you are currently using the `2023-03-15-preview` API, we recommend migrating - **GPT-4 series models are now available in preview on Azure OpenAI**. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4). These models are currently available in the East US and South Central US regions. -- **New Chat Completion API for ChatGPT and GPT-4 models released in preview on 3/21**. To learn more checkout the [updated quickstarts](./quickstart.md) and [how-to article](./how-to/chatgpt.md).+- **New Chat Completion API for GPT-35-Turbo and GPT-4 models released in preview on 3/21**. To learn more, check out the [updated quickstarts](./quickstart.md) and [how-to article](./how-to/chatgpt.md). -- **ChatGPT (gpt-35-turbo) preview**. To learn more checkout the [how-to article](./how-to/chatgpt.md).+- **GPT-35-Turbo preview**. To learn more, check out the [how-to article](./how-to/chatgpt.md). - Increased training limits for fine-tuning: The max training job size (tokens in training file) x (# of epochs) is 2 billion tokens for all models. We have also increased the max training job from 120 to 720 hours. - Adding additional use cases to your existing access.  Previously, the process for adding new use cases required customers to reapply to the service. 
Now, we're releasing a new process that allows you to quickly add new use cases to your use of the service. This process follows the established Limited Access process within Azure Cognitive Services. [Existing customers can attest to any and all new use cases here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUM003VEJPRjRSOTZBRVZBV1E5N1lWMk1XUyQlQCN0PWcu). Please note that this is required anytime you would like to use the service for a new use case you did not originally apply for. |
cognitive-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md | Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
communication-services | Voice And Video Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md | The call summary log contains data to help you identify key properties of all ca | `endpointType` | This value describes the properties of each endpoint that's connected to the call. It can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. | | `sdkVersion` | The version string for the Communication Services Calling SDK version that each relevant endpoint uses (for example, `"1.1.00.20212500"`). | | `osVersion` | A string that represents the operating system and version of each endpoint device. |-| `participantTenantId` | The ID of the Microsoft tenant associated with the participant. This field is used to guide cross-tenant redaction. +| `participantTenantId` | The ID of the Microsoft tenant associated with the identity of the participant. The tenant can either be the Azure tenant that owns the ACS resource or the Microsoft tenant of an M365 identity. This field is used to guide cross-tenant redaction. +|`participantType` | Description of the participant as a combination of its client (Azure Communication Services (ACS) or Teams), and its identity, (ACS or Microsoft 365). Possible values include: ACS (ACS identity and ACS SDK), Teams (Teams identity and Teams client), ACS as Teams external user (ACS identity and ACS SDK in Teams call or meeting), and ACS as Microsoft 365 user (M365 identity and ACS client). +| `pstnPartcipantCallType `|It represents the type and direction of PSTN participants including Emergency calling, direct routing, transfer, forwarding, etc.| ### Call diagnostic log schema |
communication-services | Calling Sdk Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md | The Azure Communication Services Calling SDK supports the following streaming co | - | | -- | | **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video and 1 screen sharing | 1 video + 1 screen sharing | | **Maximum # of incoming remote streams that can be rendered simultaneously** | 4 videos + 1 screen sharing | 6 videos + 1 screen sharing |+| **Maximum # of incoming remote streams that can be rendered simultaneously - public preview WebSDK [1.14.1](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1141-beta1-2023-06-01) or greater** | 9 videos + 1 screen sharing | 6 videos + 1 screen sharing | While the Calling SDK doesn't enforce these limits, your users may experience performance degradation if they're exceeded. |
communication-services | Calling Widget Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-overview.md | + + Title: Get started with a click to call experience using Azure Communication Services ++description: Learn how to create a Calling Widget experience with the Azure Communication Services CallComposite to facilitate click to call. +++++ Last updated : 06/05/2023+++++# Get started with a click to call experience using Azure Communication Services +++ ++This project aims to guide developers on creating a seamless click to call experience using the Azure Communication UI Library. ++Depending on your requirements, you may need to offer your customers an easy way to reach you without any complex setup. ++Click to call is a simple yet effective concept that facilitates instant interaction with customer support, financial advisors, and other customer-facing teams. The goal of this tutorial is to assist you in making interactions with your customers just a click away. ++If you wish to try it out, you can download the code from [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-click-to-call). ++Following this tutorial will: ++- Allow you to control your customers' audio and video experience depending on your customer scenario +- Move your customer's call into a new window so they can continue browsing while on the call +++This tutorial is broken down into three parts: ++- Creating your widget +- Using post messaging to start a calling experience in a new window +- Embedding your calling experience ++## Prerequisites ++- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). +- [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). 
Use the `node --version` command to check your version. +++### Set up the project ++Only use this step if you are creating a new application. ++To set up the React app, we use the `create-react-app` command line tool. This tool creates an easy-to-run TypeScript application powered by React. ++```bash
+# Create an Azure Communication Services App powered by React.
+npx create-react-app ui-library-click-to-call-app --template typescript
++# Change to the directory of the newly created App.
+cd ui-library-click-to-call-app
+```
++### Get your dependencies ++Then you need to update the dependency array in the `package.json` to include some beta and alpha packages from Azure Communication Services for this to work:
+```json
+"@azure/communication-calling": "1.14.1-beta.1",
+"@azure/communication-chat": "1.3.2-beta.2",
+"@azure/communication-react": "1.7.0-beta.1",
+"@azure/communication-calling-effects": "1.0.1",
+"@fluentui/react-icons": "~2.0.203",
+"@fluentui/react": "~8.98.3",
+```
++Once you run these commands, you're all set to start working on your new project. In this tutorial, we are modifying the files in the `src` directory. 
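One note before building the app: this tutorial hard-codes an Azure Communication Services access token for local testing. In production, you would typically fetch credentials from your own trusted token service instead. A minimal hedged sketch (the `/api/token` endpoint and the response shape are assumptions, not part of the sample):

```typescript
// Hypothetical sketch: fetching Azure Communication Services credentials from
// your own token service instead of hard-coding them in the client.
// The endpoint path and response shape are assumptions -- adapt to your service.
interface UserCredentials {
  token: string;
  communicationUserId: string;
}

async function fetchCredentials(endpoint: string): Promise<UserCredentials> {
  const response = await fetch(endpoint);
  if (!response.ok) {
    throw new Error(`Token service returned ${response.status}`);
  }
  return (await response.json()) as UserCredentials;
}
```

A component could call `fetchCredentials("/api/token")` on mount and fall back to the spinner state until the credentials arrive.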
+++## Initial app setup ++To get started, we replace the provided `App.tsx` content with a main page that will: ++- Store all of the Azure Communication information that we need to create a CallAdapter to power our Calling experience +- Control the different pages of our application +- Register the different fluent icons we use in the UI library and some new ones for our purposes ++`src/App.tsx` ++```ts +// imports needed +import { CallAdapterLocator } from '@azure/communication-react'; +import './App.css'; +import { useEffect, useMemo, useState } from 'react'; +import { CommunicationIdentifier, CommunicationUserIdentifier } from '@azure/communication-common'; +import { Spinner, Stack, initializeIcons, registerIcons } from '@fluentui/react'; +import { CallAdd20Regular, Dismiss20Regular } from '@fluentui/react-icons'; +``` ++```ts +type AppPages = "calling-widget" | "new-window-call"; ++registerIcons({ + icons: { dismiss: <Dismiss20Regular />, callAdd: <CallAdd20Regular /> }, +}); +initializeIcons(); +function App() { + const [page, setPage] = useState<AppPages>("calling-widget"); ++ /** + * Token for local user. + */ + const token = "<Enter your Azure Communication Services token here>"; ++ /** + * User identifier for local user. + */ + const userId: CommunicationIdentifier = { + communicationUserId: "<Enter your user Id>", + }; ++ /** + * This decides where the call will be going. This supports many different calling modalities in the Call Composite. + * + * - Teams meeting locator: {meetingLink: 'url to join link for a meeting'} + * - Azure Communication Services group call: {groupId: 'GUID that defines the call'} + * - Azure Communication Services Rooms call: {roomId: 'guid that represents a rooms call'} + * - Teams adhoc, Azure communications 1:n, PSTN calls all take a participants locator: {participantIds: ['Array of participant id's to call']} + * + * You can call teams voice apps like a Call queue with the participants locator. 
+ */
+ const locator: CallAdapterLocator = {
+ participantIds: ["<Enter Participant Id's here>"],
+ };
++ /**
+ * The phone number needed from your Azure Communication Services resource to start a PSTN call. Can be created under the phone numbers.
+ *
+ * For more information on phone numbers and Azure Communication Services go to this link: https://learn.microsoft.com/en-us/azure/communication-services/concepts/telephony/plan-solution
+ *
+ * This can be left alone if not making a PSTN call.
+ */
+ const alternateCallerId = "<Enter your alternate CallerId here>";
++ switch (page) {
+ case "calling-widget": {
+ return (
+ <Stack verticalAlign='center' style={{ height: "100%", width: "100%" }}>
+ <Spinner
+ label={"Getting user credentials from server"}
+ ariaLive="assertive"
+ labelPosition="top"
+ />
+ </Stack>
+ );
+ }
+ case "new-window-call": {
+ return (
+ <Stack verticalAlign='center' style={{ height: "100%", width: "100%" }}>
+ <Spinner
+ label={"Getting user credentials from server"}
+ ariaLive="assertive"
+ labelPosition="top"
+ />
+ </Stack>
+ );
+ }
+ default: {
+ return <>Something went wrong!</>
+ }
+ }
+}
++export default App;
+```
+In this snippet, we register two new icons, `<Dismiss20Regular/>` and `<CallAdd20Regular/>`. These new icons are used inside the widget component that we are creating later. ++### Running the app ++We can then test to see that the basic application is working by running:
++```bash
+# Install the new dependencies
+npm install
++# run the React app
+npm run start
+```
++Once the app is running, you can see it on `http://localhost:3000` in your browser. You should see a little spinner saying `Getting user credentials from server` as a test message. ++## Next steps ++> [!div class="nextstepaction"]
+> [Part 1: Creating your widget](./calling-widget-tutorial-part-1-creating-your-widget.md) |
communication-services | Calling Widget Tutorial Part 1 Creating Your Widget | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial-part-1-creating-your-widget.md | + + Title: Part 1 creating your widget ++description: Learn how to construct your own custom widget for your click to call experience - Part 1. +++++ Last updated : 06/05/2023+++++# Part 1 creating your widget +++To begin, we're going to make a new component. This component will serve as the widget for initiating the click to call experience. ++We're using our own widget setup for this tutorial, but you can expand the functionality to suit your needs. Our widget performs the following actions:
+- Display a custom logo. This can be replaced with another image or branding of your choosing. Feel free to download the image from the code if you would like to use our image.
+- Let the user decide if they want to include video in the call.
+- Obtain the user's consent regarding the possibility of the call being recorded.
++The first step is to create a new directory called `src/components`. Within this directory, we're going to create a new file named `CallingWidgetComponent.tsx`. We'll then proceed to set up the widget component with the following imports: ++`CallingWidgetComponent.tsx`
+```ts
+// imports needed
+import { IconButton, PrimaryButton, Stack, TextField, useTheme, Checkbox, Icon } from '@fluentui/react';
+import React, { useEffect, useState } from 'react';
+```
++Now let's introduce an interface containing the props that the component uses. ++`CallingWidgetComponent.tsx`
+```ts
+export interface clickToCallComponentProps {
+ /**
+ * Handler to start a new call.
+ */
+ onRenderStartCall: () => void;
+ /**
+ * Custom render function for displaying logo.
+ * @returns
+ */
+ onRenderLogo?: () => JSX.Element;
+ /**
+ * Handler to set displayName for the user in the call. 
+ * @param displayName
+ * @returns
+ */
+ onSetDisplayName?: (displayName: string | undefined) => void;
+ /**
+ * Handler to set whether to use video in the call.
+ */
+ onSetUseVideo?: (useVideo: boolean) => void;
+}
+```
++Each callback controls different behaviors for the calling experience. ++- `onRenderStartCall` - This callback is used to trigger any handlers in your app to do things like create a new window for your click to call experience.
+- `onRenderLogo` - This is used as a rendering callback to have a custom logo or image render inside the widget when getting user information.
+- `onSetDisplayName` - We use this callback to set the `displayName` of the participant when they're calling your support center.
+- `onSetUseVideo` - Finally, this callback controls whether the user has camera and screen sharing controls in our tutorial (more on that later).
++Finally, we add the body of the component. ++`src/components/CallingWidgetComponent.tsx`
+```ts
+/**
+ * Widget for Calling Widget
+ * @param props
+ */
+export const CallingWidgetComponent = (
+ props: clickToCallComponentProps
+): JSX.Element => {
+ const { onRenderStartCall, onRenderLogo, onSetDisplayName, onSetUseVideo } =
+ props;
++ const [widgetState, setWidgetState] = useState<"new" | "setup">();
+ const [displayName, setDisplayName] = useState<string>();
+ const [consentToData, setConsentToData] = useState<boolean>(false);
++ const theme = useTheme();
++ useEffect(() => {
+ if (widgetState === "new" && onSetUseVideo) {
+ onSetUseVideo(false);
+ }
+ }, [widgetState, onSetUseVideo]);
++ /** widget template for when widget is open, put any fields here for user information desired */
+ if (widgetState === "setup" && onSetDisplayName && onSetUseVideo) {
+ return (
+ <Stack
+ styles={clicktoCallSetupContainerStyles(theme)}
+ tokens={{ childrenGap: "1rem" }}
+ >
+ <IconButton
+ styles={collapseButtonStyles}
+ iconProps={{ iconName: "Dismiss" }}
+ onClick={() => setWidgetState("new")}
+ />
+ 
<Stack tokens={{ childrenGap: "1rem" }} styles={logoContainerStyles}> + <Stack style={{ transform: "scale(1.8)" }}> + {onRenderLogo && onRenderLogo()} + </Stack> + </Stack> + <TextField + label={"Name"} + required={true} + placeholder={"Enter your name"} + onChange={(_, newValue) => { + setDisplayName(newValue); + }} + /> + <Checkbox + styles={checkboxStyles(theme)} + label={ + "Use video - Checking this box will enable camera controls and screen sharing" + } + onChange={(_, checked?: boolean | undefined) => { + onSetUseVideo(!!checked); + }} + ></Checkbox> + <Checkbox + required={true} + styles={checkboxStyles(theme)} + label={ + "By checking this box you are consenting that we collect data from the call for customer support reasons" + } + onChange={(_, checked?: boolean | undefined) => { + setConsentToData(!!checked); + }} + ></Checkbox> + <PrimaryButton + styles={startCallButtonStyles(theme)} + onClick={() => { + if (displayName && consentToData) { + onSetDisplayName(displayName); + onRenderStartCall(); + } + }} + > + StartCall + </PrimaryButton> + </Stack> + ); + } ++ /** default waiting state for the widget */ + return ( + <Stack + horizontalAlign="center" + verticalAlign="center" + styles={clickToCallContainerStyles(theme)} + onClick={() => { + setWidgetState("setup"); + }} + > + <Stack + horizontalAlign="center" + verticalAlign="center" + style={{ + height: "4rem", + width: "4rem", + borderRadius: "50%", + background: theme.palette.themePrimary, + }} + > + <Icon iconName="callAdd" styles={callIconStyles(theme)} /> + </Stack> + </Stack> + ); +}; +``` ++### Time for some styles ++Once you have your component, you need some styles to give it a visually appealing look. For this, we'll create a new folder named `src/styles`. Within this folder we'll create a new file called `CallingWidgetComponent.styles.ts` and add the following styles. 
++`src/styles/CallingWidgetComponent.styles.ts` ++```ts +// needed imports +import { IButtonStyles, ICheckboxStyles, IIconStyles, IStackStyles, Theme } from '@fluentui/react'; +``` +`CallingWidgetComponent.styles.ts` +```ts +export const checkboxStyles = (theme: Theme): ICheckboxStyles => { + return { + label: { + color: theme.palette.neutralPrimary, + }, + }; +}; ++export const clickToCallContainerStyles = (theme: Theme): IStackStyles => { + return { + root: { + width: "5rem", + height: "5rem", + padding: "0.5rem", + boxShadow: theme.effects.elevation16, + borderRadius: "50%", + bottom: "1rem", + right: "1rem", + position: "absolute", + overflow: "hidden", + cursor: "pointer", + ":hover": { + boxShadow: theme.effects.elevation64, + }, + }, + }; +}; ++export const clicktoCallSetupContainerStyles = (theme: Theme): IStackStyles => { + return { + root: { + width: "18rem", + minHeight: "20rem", + maxHeight: "25rem", + padding: "0.5rem", + boxShadow: theme.effects.elevation16, + borderRadius: theme.effects.roundedCorner6, + bottom: 0, + right: "1rem", + position: "absolute", + overflow: "hidden", + cursor: "pointer", + }, + }; +}; ++export const callIconStyles = (theme: Theme): IIconStyles => { + return { + root: { + paddingTop: "0.2rem", + color: theme.palette.white, + transform: "scale(1.6)", + }, + }; +}; ++export const startCallButtonStyles = (theme: Theme): IButtonStyles => { + return { + root: { + background: theme.palette.themePrimary, + borderRadius: theme.effects.roundedCorner6, + borderColor: theme.palette.themePrimary, + }, + textContainer: { + color: theme.palette.white, + }, + }; +}; ++export const logoContainerStyles: IStackStyles = { + root: { + margin: "auto", + padding: "0.2rem", + height: "5rem", + width: "10rem", + zIndex: 0, + }, +}; ++export const collapseButtonStyles: IButtonStyles = { + root: { + position: "absolute", + top: "0.2rem", + right: "0.2rem", + zIndex: 1, + }, +}; +``` ++These styles should already be added to the widget as seen in the 
snippet earlier. If you added the snippet as-is, these styles just need to be imported into the `CallingWidgetComponent.tsx` file. ++`CallingWidgetComponent.tsx`
+```ts
++// add to other imports
+import {
+ clicktoCallSetupContainerStyles,
+ checkboxStyles,
+ startCallButtonStyles,
+ clickToCallContainerStyles,
+ callIconStyles,
+ logoContainerStyles,
+ collapseButtonStyles
+} from '../styles/CallingWidgetComponent.styles';
++```
++### Adding the widget to the app ++Now we create a new folder `src/views` and add a new file for one of our pages, `CallingWidgetScreen.tsx`. This screen acts as our home page for the app where the user can start a new call. ++We want to add the following props to the page: ++`CallingWidgetScreen.tsx` ++```ts
+export interface CallingWidgetPageProps {
+ token: string;
+ userId:
+ | CommunicationUserIdentifier
+ | MicrosoftTeamsUserIdentifier;
+ callLocator: CallAdapterLocator;
+ alternateCallerId?: string;
+}
+```
++These properties are fed by the values that we set in `App.tsx`. We'll use these props to make post messages to the app when we want to start a call in a new window (more on this later). 
+
+Next, let's add the page content: ++`CallingWidgetScreen.tsx`
+```ts
+// imports needed
+import { CommunicationUserIdentifier, MicrosoftTeamsUserIdentifier } from '@azure/communication-common';
+import { Stack, Text } from '@fluentui/react';
+import React, { useCallback, useEffect, useMemo, useState } from 'react';
+import { CallingWidgetComponent } from '../components/CallingWidgetComponent';
+import { CallAdapterLocator } from '@azure/communication-react';
+import hero from '../hero.svg';
+```
+```ts
+export const CallingWidgetScreen = (props: CallingWidgetPageProps): JSX.Element => {
+ const { token, userId, callLocator, alternateCallerId } = props;
++ const [userDisplayName, setUserDisplayName] = useState<string>();
+ const [useVideo, setUseVideo] = useState<boolean>(false);
+ // we also want to make this memoized version of the args for the new window.
+ const adapterParams = useMemo(() => {
+ const args = {
+ userId: userId as CommunicationUserIdentifier,
+ displayName: userDisplayName ?? "",
+ token,
+ locator: callLocator,
+ alternateCallerId,
+ };
+ return args;
+ }, [userId, userDisplayName, token, callLocator, alternateCallerId]);
++ return (
+ <Stack
+ style={{ height: "100%", width: "100%", padding: "3rem" }}
+ tokens={{ childrenGap: "1.5rem" }}
+ >
+ <Stack style={{ margin: "auto" }}>
+ <Stack
+ style={{ padding: "3rem" }}
+ horizontal
+ tokens={{ childrenGap: "2rem" }}
+ >
+ <Text style={{ marginTop: "auto" }} variant="xLarge">
+ Welcome to a Calling Widget sample
+ </Text>
+ <img
+ style={{ width: "7rem", height: "auto" }}
+ src={hero}
+ alt="kcup logo"
+ />
+ </Stack>
++ <Text>
+ Welcome to a Calling Widget sample for the Azure Communication Services UI
+ Library. 
This sample has the ability to:
+ </Text>
+ <ul>
+ <li>
+ Ad hoc call Teams users with a tenant set that allows for external
+ calls
+ </li>
+ <li>Joining Teams interop meetings as an Azure Communication Services user</li>
+ <li>Make a calling Widget PSTN call to a help phone line</li>
+ <li>Join an Azure Communication Services group call</li>
+ </ul>
+ <Text>
+ As a user all you need to do is click the widget below, enter your
+ display name for the call - this will act as your caller ID, and
+ action the <b>start call</b> button.
+ </Text>
+ </Stack>
+ <Stack
+ horizontal
+ tokens={{ childrenGap: "1.5rem" }}
+ style={{ overflow: "hidden", margin: "auto" }}
+ >
+ <CallingWidgetComponent
+ onRenderStartCall={() => {}}
+ onRenderLogo={() => {
+ return (
+ <img
+ style={{ height: "4rem", width: "4rem", margin: "auto" }}
+ src={hero}
+ alt="logo"
+ />
+ );
+ }}
+ onSetDisplayName={setUserDisplayName}
+ onSetUseVideo={setUseVideo}
+ />
+ </Stack>
+ </Stack>
+ );
+};
+```
+This page provides general information on the current capabilities of our calling experiences, along with the addition of our previously created widget component. ++To integrate the widget screen, we simply update the existing `'calling-widget'` case in the root of the app, `App.tsx`, by adding the new view. 
++`App.tsx` +```ts +// add this with the other imports ++import { CallingWidgetScreen } from './views/CallingWidgetScreen'; ++``` ++```ts + + case 'calling-widget': { + if (!token || !userId || !locator) { + return ( + <Stack verticalAlign='center' style={{height: '100%', width: '100%'}}> + <Spinner label={'Getting user credentials from server'} ariaLive="assertive" labelPosition="top" />; + </Stack> + ) + } + return <CallingWidgetScreen token={token} userId={userId} callLocator={locator} alternateCallerId={alternateCallerId}/>; +} + +``` ++Once you have set the arguments defined in `App.tsx`, run the app with `npm run start` to see the changes: ++ ++Then when you action the widget button, you should see: ++ ++Yay! We have made the control surface for the widget! Next, we'll discuss what we need to add to make this widget start a call in a new window. ++> [!div class="nextstepaction"] +> [Part 2: Creating a new window calling experience](./calling-widget-tutorial-part-2-creating-new-window-experience.md) |
communication-services | Calling Widget Tutorial Part 2 Creating New Window Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial-part-2-creating-new-window-experience.md | + + Title: Part 2 creating a new window calling experience ++description: Learn how to deal with post messaging and React to create a new window calling experience with the CallComposite - Part 2. +++++ Last updated : 06/05/2023+++++# Part 2 creating a new window calling experience +++Now that we have a running application with our widget on the home page, we'll talk about starting the calling experience for your users with a new window. This scenario allows you to give your customer the ability to browse while still seeing your call in a new window. This can be useful in situations where your users use video and screen sharing. ++To begin, we'll create a new view in the `src/views` folder called `NewWindowCallScreen.tsx`. This new screen will be used by the `App.tsx` file to go into a new call with the arguments provided to it using our `CallComposite`. If desired, the `CallComposite` can be swapped with a stateful client and UI component experience, but that isn't covered in this tutorial. For more information, see our [storybook documentation](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-statefulcallclient--page) about the stateful client. 
++`src/views/NewWindowCallScreen.tsx` +```ts +// imports needed +import { CommunicationUserIdentifier, AzureCommunicationTokenCredential } from '@azure/communication-common'; +import { + CallAdapter, + CallAdapterLocator, + CallComposite, + useAzureCommunicationCallAdapter +} from '@azure/communication-react'; +import { Spinner, Stack } from '@fluentui/react'; +import React, { useMemo } from 'react'; +``` +```ts +export const NewWindowCallScreen = (props: { + adapterArgs: { + userId: CommunicationUserIdentifier; + displayName: string; + token: string; + locator: CallAdapterLocator; + alternateCallerId?: string; + }; + useVideo: boolean; +}): JSX.Element => { + const { adapterArgs, useVideo } = props; ++ const credential = useMemo(() => { + try { + return new AzureCommunicationTokenCredential(adapterArgs.token); + } catch { + console.error("Failed to construct token credential"); + return undefined; + } + }, [adapterArgs.token]); ++ const args = useMemo(() => { + return { + userId: adapterArgs.userId, + displayName: adapterArgs.displayName, + credential, + token: adapterArgs.token, + locator: adapterArgs.locator, + alternateCallerId: adapterArgs.alternateCallerId, + }; + }, [ + adapterArgs.userId, + adapterArgs.displayName, + credential, + adapterArgs.token, + adapterArgs.locator, + adapterArgs.alternateCallerId, + ]); +++ const afterCreate = (adapter: CallAdapter): Promise<CallAdapter> => { + adapter.on("callEnded", () => { + adapter.dispose(); + window.close(); + }); + adapter.joinCall(true); + return new Promise((resolve, reject) => resolve(adapter)); + }; ++ const adapter = useAzureCommunicationCallAdapter(args, afterCreate); ++ if (!adapter) { + return ( + <Stack + verticalAlign="center" + styles={{ root: { height: "100vh", width: "100vw" } }} + > + <Spinner + label={"Creating adapter"} + ariaLive="assertive" + labelPosition="top" + /> + </Stack> + ); + } + return ( + <Stack styles={{ root: { height: "100vh", width: "100vw" } }}> + <CallComposite + options={{ + 
callControls: {
+ cameraButton: useVideo,
+ screenShareButton: useVideo,
+ moreButton: false,
+ peopleButton: false,
+ displayType: "compact",
+ },
+ localVideoTileOptions: {
+ position: !useVideo ? "hidden" : "floating",
+ },
+ }}
+ adapter={adapter}
+ />
+ </Stack>
+ );
+};
+```
++To configure our `CallComposite` to fit in the Calling Widget, we need to make some changes. Depending on your use case, we have a number of customizations that can change the user experience. This sample chooses to hide the local video tile, camera, and screen sharing controls if the user opts out of video for their call. In addition to these configurations on the `CallComposite`, we use the `afterCreate` function defined in the snippet to automatically join the call. This bypasses the configuration screen and drops the user into the call with their mic live, and automatically closes the window when the call ends. Just remove the call to `adapter.joinCall(true);` from the `afterCreate` function and the configuration screen shows as normal. Next, let's talk about how to get this screen the information once we have our `CallComposite` configured. ++To make sure we are passing around data correctly, let's create some handlers to send post messages between the parent window and child window to signal that we want some information. See diagram: ++ ++This flow illustrates that if the child window has spawned, it needs to ask for the arguments. This behavior is due to React: if the parent window sends a message right after creating the child window, the call adapter arguments are lost before the application mounts. The adapter arguments are lost because the listener in the new window isn't set until after a render pass completes. More on where these event handlers are created later. ++Now we want to update the splash screen we created earlier. First, we add a reference to the new child window that we create. 
+
+`CallingWidgetScreen.tsx` ++```ts
+ 
+ const [userDisplayName, setUserDisplayName] = useState<string>();
+ const newWindowRef = useRef<Window | null>(null);
+ const [useVideo, setUseVideo] = useState<boolean>(false);
+ 
+```
++Next, we create a handler to pass to our widget. It creates a new window and starts the process of sending post messages. ++`CallingWidgetScreen.tsx`
+```ts
+ 
+ const startNewWindow = useCallback(() => {
+ const startNewSessionString = 'newSession=true';
+ newWindowRef.current = window.open(
+ window.origin + `/?${startNewSessionString}`,
+ 'call screen',
+ 'width=500, height=450'
+ );
+ }, []);
+ 
+```
++This handler opens a new window and places a new query arg in the window URL so that the main application knows that it's time to start a new call. The path that you give the window can be a new path in your application where your calling experience exists. For us, this is the `NewWindowCallScreen.tsx` file, but it can also be a React app on its own. ++Next, we add a `useEffect` hook that creates an event handler listening for new post messages from the child window. ++`CallingWidgetScreen.tsx`
+```ts
+ 
+ useEffect(() => {
+ window.addEventListener('message', (event) => {
+ if (event.origin !== window.origin) {
+ return;
+ }
+ if (event.data === 'args please') {
+ const data = {
+ userId: adapterParams.userId,
+ displayName: adapterParams.displayName,
+ token: adapterParams.token,
+ locator: adapterParams.locator,
+ alternateCallerId: adapterParams.alternateCallerId,
+ useVideo: useVideo
+ };
+ console.log(data);
+ newWindowRef.current?.postMessage(data, window.origin);
+ }
+ });
+ }, [adapterParams, adapterParams.locator, adapterParams.displayName, useVideo]);
+ 
+```
++This handler listens for events from the child window. 
(**NOTE: if the origin of the message isn't your app's origin, return without responding.**) If the child window asks for arguments, we respond with the arguments needed to construct an `AzureCommunicationCallAdapter`. ++Finally, on this screen, let's add the `startNewWindow` handler to the widget so that it knows to create the new window. We do this by adding the property to the template of the widget screen as shown below. ++`CallingWidgetScreen.tsx`
+```ts
+ 
+ <Stack horizontal tokens={{ childrenGap: '1.5rem' }} style={{ overflow: 'hidden', margin: 'auto' }}>
+ <CallingWidgetComponent
+ onRenderStartCall={startNewWindow}
+ onRenderLogo={() => {
+ return (
+ <img
+ style={{ height: '4rem', width: '4rem', margin: 'auto' }}
+ src={hero}
+ alt="logo"
+ />
+ );
+ }}
+ onSetDisplayName={setUserDisplayName}
+ onSetUseVideo={setUseVideo}
+ />
+ </Stack>
+ 
+```
++Next, we need to make sure that our application can listen for and ask for the messages from what would be the parent window. To start, you might recall that we added a new query parameter to the URL of the application, `newSession=true`. To use this and have our app look for that in the URL, we need to create a utility function to parse out that parameter. Once we do that, we'll use it to make our application behave differently when it's received. ++To do that, let's add a new folder `src/utils` and in this folder, we add the file `AppUtils.ts`. In this file, let's put the following function: ++`AppUtils.ts`
+```ts
+/**
+ * get go ahead to request for adapter args from url
+ * @returns
+ */
+export const getStartSessionFromURL = (): boolean | undefined => {
+ const urlParams = new URLSearchParams(window.location.search);
+ return urlParams.get("newSession") === "true";
+};
+```
++This function looks at our application's URL and checks whether the parameter we're looking for is there. If desired, you can also stick some other parameters in there to extend other functionality for your application. 
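The parsing above boils down to a `URLSearchParams` lookup. As a standalone illustration (reimplemented here so it runs outside a browser, with the search string passed in explicitly instead of read from `window.location`):

```typescript
// Standalone sketch of the check performed by getStartSessionFromURL:
// the session flag is set only when the query string contains newSession=true.
const isNewSession = (search: string): boolean =>
  new URLSearchParams(search).get("newSession") === "true";
```

For example, `isNewSession("?newSession=true")` returns `true`, while `isNewSession("")` and `isNewSession("?newSession=false")` return `false`.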
+
+
+As well, we'll want to add a new type in here to track the different pieces needed to create an `AzureCommunicationCallAdapter`. This type can also be simplified if you are using our calling stateful client, though that approach isn't covered in this tutorial. ++`AppUtils.ts`
+```ts
+/**
+ * Properties needed to create a call screen for an Azure Communication Services CallComposite.
+ */
+export type AdapterArgs = {
+ token: string;
+ userId: CommunicationIdentifier;
+ locator: CallAdapterLocator;
+ displayName?: string;
+ alternateCallerId?: string;
+};
+```
++Once we have added these two things, we can go back to the `App.tsx` file to make some more updates. ++The first thing we want to do is update `App.tsx` to use that new utility function that we created in `AppUtils.ts`. We want to use a `useMemo` hook for the `startSession` parameter so that it's fetched exactly once and not at every render. The fetch of `startSession` is done like so: ++`App.tsx`
+```ts
+// you will need to add these imports
+import { useMemo } from 'react';
+import { AdapterArgs, getStartSessionFromURL } from './utils/AppUtils';
++```
++```ts
++ const startSession = useMemo(() => {
+ return getStartSessionFromURL();
+ }, []);
++```
++Following this, we want to add some state to make sure that we're tracking the new arguments for the adapter. We pass these arguments to the `NewWindowCallScreen.tsx` view that we made earlier so it can construct an adapter. We also add state to track whether the user wants to use video controls. ++`App.tsx`
+```ts
+/**
+ * Properties needed to start an Azure Communication Services CallAdapter. When these are set the app will go to the Call screen for the
+ * click to call scenario. Call screen should create the credential that will be used in the call for the user. 
+ */
+ const [adapterArgs, setAdapterArgs] = useState<AdapterArgs | undefined>();
+ const [useVideo, setUseVideo] = useState<boolean>(false);
+```
++We now want to add an event listener to `App.tsx` to listen for post messages. Insert a `useEffect` hook with an empty dependency array so that we add the listener only once on the initial render. ++`App.tsx`
+```ts
+import { CallAdapterLocator } from "@azure/communication-react";
+import { CommunicationIdentifier } from '@azure/communication-common';
+```
+```ts
++ useEffect(() => {
+ window.addEventListener('message', (event) => {
+ if (event.origin !== window.location.origin) {
+ return;
+ }
++ if ((event.data as AdapterArgs).userId && (event.data as AdapterArgs).displayName !== '') {
+ console.log(event.data);
+ setAdapterArgs({
+ userId: (event.data as AdapterArgs).userId as CommunicationUserIdentifier,
+ displayName: (event.data as AdapterArgs).displayName,
+ token: (event.data as AdapterArgs).token,
+ locator: (event.data as AdapterArgs).locator,
+ alternateCallerId: (event.data as AdapterArgs).alternateCallerId
+ });
+ setUseVideo(!!event.data.useVideo);
+ }
+ });
+ }, []);
++```
+Next, we want to add two more `useEffect` hooks to `App.tsx`. These two hooks will:
+- Ask the parent window of the application for the `AzureCommunicationCallAdapter` arguments. We use the `window.opener` reference, since this hook checks whether it's running in the child window.
+- Check whether the arguments are appropriately set by the event listener that fetches them from the post message, then start a call and change the app page to the call screen. 
++ ++`App.tsx` +```ts ++ useEffect(() => { + if (startSession) { + console.log('asking for args'); + if (window.opener) { + window.opener.postMessage('args please', window.opener.origin); + } + } + }, [startSession]); ++ useEffect(() => { + if (adapterArgs) { + console.log('starting session'); + setPage('new-window-call'); + } + }, [adapterArgs]); ++``` +Finally, once we have done that, we want to add the new screen that we made earlier to the template as well. We also want to make sure that we do not show the calling widget screen if the `startSession` parameter is found; using the parameter this way avoids a flash for the user. ++`App.tsx` +```ts +// add with other imports ++import { NewWindowCallScreen } from './views/NewWindowCallScreen'; ++``` ++```ts ++ switch (page) { + case 'calling-widget': { + if (!token || !userId || !locator || startSession !== false) { + return ( + <Stack verticalAlign='center' style={{ height: '100%', width: '100%' }}> + <Spinner label={'Getting user credentials from server'} ariaLive="assertive" labelPosition="top" /> + </Stack> + ) + } + return <CallingWidgetScreen token={token} userId={userId} callLocator={locator} alternateCallerId={alternateCallerId}/>; + } + case 'new-window-call': { + if (!adapterArgs) { + return ( + <Stack verticalAlign='center' style={{ height: '100%', width: '100%' }}> + <Spinner label={'Getting user credentials from server'} ariaLive="assertive" labelPosition="top" /> + </Stack> + ) + } + return ( + <NewWindowCallScreen + adapterArgs={{ + userId: adapterArgs.userId as CommunicationUserIdentifier, + displayName: adapterArgs.displayName ?? 
'', + token: adapterArgs.token, + locator: adapterArgs.locator, + alternateCallerId: adapterArgs.alternateCallerId + }} + useVideo={useVideo} + /> + ); + } + } ++``` +Now, when the application runs in a new window, it sees that it's supposed to start a call so it will: +- Ask for the different Adapter arguments from the parent window +- Make sure that the adapter arguments are set appropriately and start a call ++Now when you pass in the arguments, set your `displayName`, and click `Start Call` you should see the following screens: ++ ++With this new window experience, your users are able to: +- continue using other tabs in their browser or other applications and still be able to see your call +- resize the window to fit their viewing needs such as increasing the size to better see a screen share ++This concludes the tutorial for click to call with a new window experience. Next will be an optional step to embed the calling surface into the widget itself keeping your users on their current page. ++If you would like to learn more about the Azure Communication Services UI library, check out our [storybook documentation](https://azure.github.io/communication-ui-library/?path=/story/overview--page). ++> [!div class="nextstepaction"] +> [Part 3: Embedding your calling experience](./calling-widget-tutorial-part-3-embedding-your-calling-experience.md) |
communication-services | Calling Widget Tutorial Part 3 Embedding Your Calling Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial-part-3-embedding-your-calling-experience.md | + + Title: Part 3 (optional) embedding your calling experience ++description: Learn how to embed a calling experience inside your new widget - Part 3. +++++ Last updated : 06/05/2023++++++# Part 3 (optional) embedding your calling experience +++Finally, in this optional section of the tutorial, we'll talk about making an embedded version of the calling surface. We'll continue from where we left off in the last section and make some modifications to our existing screens. ++To start, let's take a look at the props for `CallingWidgetComponent.tsx`; these need to be updated to have the widget hold the calling surface. We'll make two changes. +- Add a new prop, `adapterArgs`, for the adapter arguments needed for the `AzureCommunicationCallAdapter`. +- Make `onRenderStartCall` optional, which makes it easier to switch back to the new-window experience later. ++`CallingWidgetComponent.tsx` ++```ts +export interface CallingWidgetComponentProps { + /** + * Arguments for creating an AzureCommunicationCallAdapter for your calling experience. + */ + adapterArgs: AdapterArgs; + /** + * If provided, used to create a new window for the call experience. If not provided, + * the current window is used. + */ + onRenderStartCall?: () => void; + /** + * Custom render function for displaying logo. + * @returns + */ + onRenderLogo?: () => JSX.Element; + /** + * Handler to set displayName for the user in the call. + * @param displayName + * @returns + */ + onSetDisplayName?: (displayName: string | undefined) => void; + /** + * Handler to set whether to use video in the call. 
+ */ + onSetUseVideo?: (useVideo: boolean) => void; +} +``` ++Now, we'll need to introduce some logic that uses these arguments to make sure we're starting a call appropriately. This includes adding state to create an `AzureCommunicationCallAdapter` inside the widget itself, so it will look a lot like the logic in `NewWindowCallScreen.tsx`. Adding the adapter to the widget will look something like this: ++`CallingWidgetComponent.tsx` +```ts +// add this to the other imports ++import { CommunicationUserIdentifier, AzureCommunicationTokenCredential } from '@azure/communication-common'; +import { + CallAdapter, + CallAdapterLocator, + CallComposite, + useAzureCommunicationCallAdapter, + AzureCommunicationCallAdapterArgs +} from '@azure/communication-react'; +import { AdapterArgs } from '../utils/AppUtils'; +// let's update our React imports as well +import React, { useCallback, useEffect, useMemo, useState } from 'react'; ++``` +```ts ++ const credential = useMemo(() => { + try { + return new AzureCommunicationTokenCredential(adapterArgs.token); + } catch { + console.error('Failed to construct token credential'); + return undefined; + } + }, [adapterArgs.token]); ++ const callAdapterArgs = useMemo(() => { + return { + userId: adapterArgs.userId, + credential: credential, + locator: adapterArgs.locator, + displayName: displayName, + alternateCallerId: adapterArgs.alternateCallerId + }; + }, [adapterArgs.locator, adapterArgs.userId, adapterArgs.alternateCallerId, credential, displayName]); ++ const adapter = useAzureCommunicationCallAdapter(callAdapterArgs as AzureCommunicationCallAdapterArgs); ++``` ++Let's also add an `afterCreate` function like before, to do a few things with our adapter once it's constructed. Since we're now interacting with state in the widget, we'll want to wrap the function in a React `useCallback` so we're not redefining it on every render pass. 
In our case, our function will reset the widget to the `'new'` state when the call ends and clear the user's `displayName` so they can start a new session. You could, however, return it to the `'setup'` state with the old `displayName` so that the app can easily call again. ++`CallingWidgetComponent.tsx` +```ts ++ const afterCreate = useCallback(async (adapter: CallAdapter): Promise<CallAdapter> => { + adapter.on('callEnded', () => { + setDisplayName(undefined); + setWidgetState('new'); + adapter.dispose(); + }); + return adapter; + }, []); ++ // this replaces the adapter declaration we added earlier + const adapter = useAzureCommunicationCallAdapter(callAdapterArgs as AzureCommunicationCallAdapterArgs, afterCreate); ++``` ++Once we again have an adapter, we'll need to update the template to account for a new widget state, which means adding to the different modes that the widget itself can hold. We'll add a new `'inCall'` state like so: ++`CallingWidgetComponent.tsx` +```ts ++const [widgetState, setWidgetState] = useState<'new' | 'setup' | 'inCall'>('new'); ++``` ++Next, we'll need to add logic to the widget's start call button that checks which mode the call should start in: new window or embedded. That logic is as follows: ++`CallingWidgetComponent.tsx` +```ts ++ <PrimaryButton + styles={startCallButtonStyles(theme)} + onClick={() => { + if (displayName && consentToData && onRenderStartCall) { + onSetDisplayName(displayName); + onRenderStartCall(); + } else if (displayName && consentToData && adapter) { + setWidgetState('inCall'); + adapter?.joinCall(); + } + }} + > + Start Call + </PrimaryButton> ++``` ++We'll also want to introduce some internal state to the widget for the local user's video controls. ++`CallingWidgetComponent.tsx` +```ts +const [useLocalVideo, setUseLocalVideo] = useState<boolean>(false); +``` ++Next, let's go back to our style sheet for the widget. We'll need to add new styles to allow the `CallComposite` to grow to its minimum size. 
++`CallingWidgetComponent.styles.ts` +```ts +export const clickToCallInCallContainerStyles = (theme: Theme): IStackStyles => { + return { + root: { + width: '35rem', + height: '25rem', + padding: '0.5rem', + boxShadow: theme.effects.elevation16, + borderRadius: theme.effects.roundedCorner6, + bottom: 0, + right: '1rem', + position: 'absolute', + overflow: 'hidden', + cursor: 'pointer', + background: theme.semanticColors.bodyBackground + } + } +} +``` ++Finally, in the widget we'll need to add a section to the template for when the widget is in the `'inCall'` state that we added earlier. Our template should now look as follows: ++`CallingWidgetComponent.tsx` +```ts +if (widgetState === 'setup' && onSetDisplayName && onSetUseVideo) { + return ( + <Stack styles={clicktoCallSetupContainerStyles(theme)} tokens={{ childrenGap: '1rem' }}> + <IconButton + styles={collapseButtonStyles} + iconProps={{ iconName: 'Dismiss' }} + onClick={() => setWidgetState('new')} + /> + <Stack tokens={{ childrenGap: '1rem' }} styles={logoContainerStyles}> + <Stack style={{ transform: 'scale(1.8)' }}>{onRenderLogo && onRenderLogo()}</Stack> + </Stack> + <TextField + label={'Name'} + required={true} + placeholder={'Enter your name'} + onChange={(_, newValue) => { + setDisplayName(newValue); + }} + /> + <Checkbox + styles={checkboxStyles(theme)} + label={'Use video - Checking this box will enable camera controls and screen sharing'} + onChange={(_, checked?: boolean | undefined) => { + onSetUseVideo(!!checked); + setUseLocalVideo(true); + }} + ></Checkbox> + <Checkbox + required={true} + styles={checkboxStyles(theme)} + label={ + "By checking this box, you are consenting that we'll collect data from the call for customer support reasons" + } + onChange={(_, checked?: boolean | undefined) => { + setConsentToData(!!checked); + }} + ></Checkbox> + <PrimaryButton + styles={startCallButtonStyles(theme)} + onClick={() => { + if (displayName && consentToData && onRenderStartCall) { + 
onSetDisplayName(displayName); + onRenderStartCall(); + } else if (displayName && consentToData && adapter) { + setWidgetState('inCall'); + adapter?.joinCall(); + } + }} + > + Start Call + </PrimaryButton> + </Stack> + ); + } ++ if (widgetState === 'inCall' && adapter) { + return ( + <Stack styles={clickToCallInCallContainerStyles(theme)}> + <CallComposite adapter={adapter} options={{ + callControls: { + cameraButton: useLocalVideo, + screenShareButton: useLocalVideo, + moreButton: false, + peopleButton: false, + displayType: 'compact' + }, + localVideoTileOptions: { position: !useLocalVideo ? 'hidden' : 'floating' } + }}></CallComposite> + </Stack> + ); + } ++ return ( + <Stack + horizontalAlign="center" + verticalAlign="center" + styles={clickToCallContainerStyles(theme)} + onClick={() => { + setWidgetState('setup'); + }} + > + <Stack + horizontalAlign="center" + verticalAlign="center" + style={{ height: '4rem', width: '4rem', borderRadius: '50%', background: theme.palette.themePrimary }} + > + <Icon iconName="callAdd" styles={callIconStyles(theme)} /> + </Stack> + </Stack> + ); +``` +Now that we have updated our widget to be more versatile, we'll want to take another look at `CallingWidgetScreen.tsx` and make some adjustments to how we're calling the widget. To turn on the new embedded experience, we'll do two things: +- Remove the start call handler that we provided earlier +- Provide the adapter arguments to the widget that we would normally emit through our post messages. 
++That looks like this: ++`CallingWidgetScreen.tsx` +```ts ++ <Stack horizontal tokens={{ childrenGap: '1.5rem' }} style={{ overflow: 'hidden', margin: 'auto' }}> + <CallingWidgetComponent + adapterArgs={adapterParams} + onRenderLogo={() => { + return ( + <img + style={{ height: '4rem', width: '4rem', margin: 'auto' }} + src={hero} + alt="logo" + /> + ); + }} + onSetDisplayName={setUserDisplayName} + onSetUseVideo={setUseVideo} + /> + </Stack> ++``` +Now that we have made these changes, we can start our app again (if it's shut down) with `npm run start`. If we go through the start call process like we did before, we should see the following when starting the call: ++ ++Like before, this is a call starting with the video controls enabled. ++Thanks for following the different tutorials here. This concludes the quickstart guide for click to call with the Azure Communication Services UI Library. |
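The `adapterParams` object passed to the widget above is defined earlier in the tutorial and isn't shown in this excerpt. As a rough, hypothetical sketch of its shape (the `buildAdapterParams` helper is an illustration, not part of the tutorial; field names follow the `AdapterArgs` type from `AppUtils.ts`, with the identifier and locator typed loosely so the snippet stands alone):

```typescript
// Hypothetical helper illustrating the shape of adapterParams. The field
// names follow the AdapterArgs type from AppUtils.ts; userId and locator
// are typed loosely here so the sketch is self-contained.
type AdapterParams = {
  token: string;
  userId: unknown;
  locator: unknown;
  alternateCallerId?: string;
};

// Bundles the values fetched from the server into the object the widget expects.
function buildAdapterParams(
  token: string,
  userId: unknown,
  locator: unknown,
  alternateCallerId?: string
): AdapterParams {
  return { token, userId, locator, alternateCallerId };
}
```

In the real screen, this object would typically be memoized with `useMemo` keyed on those values so the widget isn't re-rendered needlessly.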
confidential-computing | Anjuna | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/anjuna.md | + + Title: Anjuna Security +description: Confidential computing solutions from Anjuna Security on Azure ++++++ Last updated : 03/29/2023++++# Anjuna Security +++## Overview ++Anjuna's goal is to give companies the freedom to run applications in the cloud with complete data security and privacy. Anjuna believes that Confidential Computing should be the foundational fabric of the cloud, fostering secure and reliable operations for organizations of all types. Through collaboration with Microsoft Azure, Anjuna is dedicated to delivering solutions that transform security into a business enabler, offering simplified adoption without compromising on data protection. ++You can learn more about Anjuna Security in [our partner webinar here](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Anjuna_Security). ++## Anjuna Confidential Computing Platform ++Anjuna Confidential Computing Platform is a breakthrough software solution. It allows you to run applications on [Azure confidential computing](../overview.md) instances powered by [AMD SEV-SNP](../confidential-vm-overview.md) or by [Intel SGX](../application-development.md) CPUs with enhanced ease of use, operational efficiency, and security posture. Anjuna seamlessly integrates with Azure confidential computing instances, creating a protected execution environment that intrinsically secures applications. With Anjuna, data is kept secure because it's encrypted (in use, in transit, and at rest) and kept private because it's isolated from anyone with access to the infrastructure. Through built-in cryptographic attestation, Anjuna empowers enterprises to directly control application-level trust policies, ensuring that only trusted code can access sensitive data. 
+ +The result is a revolutionary approach that brings security directly into compute, eliminating exposure to a wide range of threats from insiders, third parties, malicious software and more. ++Get started today with the Azure Marketplace solution - there's a [SaaS offering you can check out here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/anjuna1646713490052.anjuna_cc_saas?tab=Overview), and a [managed application here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/anjuna1646713490052.anjuna_cc_mgdapp?tab=Overview). +++## Learn more ++- Learn more about [Anjuna Security](https://www.anjuna.io/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. |
confidential-computing | Beekeeperai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/beekeeperai.md | + + Title: BeeKeeper AI +description: Confidential computing solutions from BeeKeeper AI on Azure ++++++ Last updated : 03/29/2023++++# BeeKeeperAI, Inc. +++## Overview ++BeeKeeperAI safely accelerates artificial intelligence (AI) algorithm development and deployment by using [Azure confidential computing](../index.yml) capabilities within a zero trust and sightless computing environment. ++Their EscrowAI collaboration platform enables algorithm developers to compute securely and ethically on real-world, protected data. The platform enables optimal algorithm development, deployment, and ongoing monitoring for use cases that require: ++- Data that can't be deidentified (genomic, retinal, social determinants of health) +- Small datasets that are difficult to deidentify (rare disease) +- Data that is too sensitive to risk exposure (mental health) +- Deidentification efforts that are time consuming or too costly ++EscrowAI is a SaaS offering available in the Azure Cloud environment requiring little time to activate. You can try it today from the [Azure Marketplace solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/beekeeperaiinc1643748994169.beekeeper_ai?tab=Overview). ++You can also check out how BeeKeeperAI speeds healthcare AI development with Azure confidential computing and [Intel SGX](../confidential-computing-enclaves.md), in our Customer Stories published [here](https://customers.microsoft.com/en-us/story/1503405357498110670-beekeeper-ai-healthcare-microsoft-security-solutions). ++## Learn more ++- Learn more about [BeeKeeperAI, Inc. here](https://www.beekeeperai.com/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. |
confidential-computing | Decentriq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/decentriq.md | + + Title: Decentriq +description: Confidential computing solutions from Decentriq on Azure ++++++ Last updated : 03/29/2023++++# Decentriq +++## Overview ++Decentriq is an enterprise SaaS platform for confidential data collaboration, powered by [Azure Confidential Computing](../index.yml). ++With Decentriq, organizations can join and analyze data with external partners in Data Clean Rooms: ultra-secure spaces where data is neither accessible nor shared. A unique combination of advanced privacy technologies, such as confidential computing, synthetic data, and differential privacy, works together to enforce compliance, security, and control, unlocking the use of even the most sensitive data. ++You can learn more about Decentriq in [our partner webinar here](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Decentriq). ++## Decentriq Data Clean Rooms for healthcare ++In the healthcare ecosystem, Decentriq's Data Clean Rooms offer a data privacy-compliant and secure way for partners to analyze data, without sharing the sensitive data itself or proprietary algorithms. By enforcing compliance and control with strongest-in-market privacy technologies, it unlocks real-world data for innovations in diagnostics, treatment, and patient care. ++In a straightforward SaaS environment designed to plug into existing research processes and data pipelines, organizations can initiate compliant data partnerships twice as fast and uncover novel insights at unprecedented speeds. Data scientists have full analytical flexibility while powerful privacy technologies keep the data encrypted and verifiably confidential, even while in use. 
++Decentriq Data Clean Rooms for Healthcare is a solution for life sciences companies, hospitals, researchers, real-world evidence teams, insurers, and others who can't afford to compromise security and privacy for analytical flexibility and the ability to scale easily. Get started today with the Azure Marketplace solution; [you can check it out here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dqtechnologiesag1586942031480.decentriq_healthcare_cleanroom?tab=Overview). ++## Decentriq Data Clean Rooms for media & advertising ++Especially for privacy-conscious and highly regulated companies, Decentriq's Data Clean Rooms enable brands to join their first-party customer data with publishers' data to find and activate their ideal audiences. In the clean rooms, data and targeting models remain verifiably confidential and inaccessible, even to Decentriq. Control and compliance are enforced by Confidential Computing and other advanced privacy technologies, making it possible to use even the most sensitive data and easing approvals with data protection and legal departments. ++With privacy-preserving lookalikes, brands can use their first-party data to extend the reach of their target audiences while maintaining precision and quality, leading to more effective media buys. ++Audiences are immediately actionable from within a no-code SaaS environment. Setup takes only five minutes, with no support needed from an engineering team. ++Get started today with the Azure Marketplace solution, [you can check it out here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dqtechnologiesag1586942031480.decentriq_media_cleanroom?tab=Overview). +++## Learn more ++- Learn more about [Decentriq](https://www.decentriq.com/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. |
confidential-computing | Edgeless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/edgeless.md | + + Title: Edgeless +description: Confidential computing solutions from Edgeless on Azure ++++++ Last updated : 03/29/2023++++# Edgeless Systems ++## Overview ++Edgeless Systems is a cybersecurity startup on the mission to build easy-to-use, open-source tools that make confidential computing accessible to everyone. They develop innovative software that enables new and exciting forms of trustworthy data processing. ++You can learn more about Edgeless Systems in [our blog here](https://techcommunity.microsoft.com/t5/azure-confidential-computing/introducing-edgelessdb-a-database-designed-for-confidential/ba-p/2813631). ++## Scalable confidential apps on AKS with EGo and MarbleRun ++In this webinar, Felix Schuster, CEO and cofounder of Edgeless Systems, gives an introduction to the EGo and MarbleRun products. EGo lets anyone create an SGX-enabled app in five minutes. MarbleRun makes it possible to scale and manage SGX-enabled apps on the Azure Kubernetes Service (AKS), effectively extending the properties of a single SGX enclave to an entire Kubernetes cluster. As this webinar shows, this makes it possible to build privacy-preserving and scalable AI training on AKS. ++Head to the [webinar here to learn more](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Edgeless_Systems). ++## Constellation, the world's first Confidential Kubernetes +Constellation keeps all data in a Kubernetes deployment encrypted at rest, in transit, and during processing in memory. Constellation protects the integrity of the control plane and the workload. Finally, Constellation makes these properties easily verifiable. With Constellation, your Kubernetes cluster is shielded as a whole from the cloud environment and is protected against both hackers and insider threats. 
++Head to the [blog here to learn more](https://techcommunity.microsoft.com/t5/azure-confidential-computing/confidential-computing-at-scale-with-open-source-confidential/ba-p/3641021). ++## Get started with Edgeless Systems on Azure today ++You can learn more and get started with these [Azure Marketplace solutions, here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/edgelesssystems.edb?tab=Overview). You can also find code and the docs on their [GitHub](https://github.com/edgelesssys). ++## Learn more ++- Learn more about [Edgeless Systems](https://www.edgeless.systems/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. + |
confidential-computing | Enclaive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/enclaive.md | + + Title: Enclaive +description: Confidential computing solutions from Enclaive on Azure ++++++ Last updated : 03/29/2023++++# Enclaive ++## Overview +++You can learn more about Enclaive in [our blog here](https://techcommunity.microsoft.com/t5/azure-confidential-computing/enclaive-s-the-base-developing-confidential-cloud-applications/ba-p/3658799). ++## Data-in-Use Encrypting Mosquitto (Confidential Compute VM) +Enclaive Mosquitto* (Confidential Compute Enterprise Enclave 4 SGX) protects IoT data from insider attacks. Azure confidential computing instances offer the opportunity to quickly protect any application from insider threats, using Intel® Software Guard Extensions (SGX)-enabled CPUs and Enclaive’s Enterprise Enclaves software. With a single command, Enclaive automatically creates a secure enclave that isolates and encrypts all application resources in runtime, at rest, and on the network to achieve the strongest end-to-end data protection available. No changes to the application code or SDKs are required. ++Get started today with the [Azure Marketplace solution, here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/enclaivegmbh1643578052639.vm-mosquitto-sgx?tab=Overview). ++## NGINX* for confidential cloud computing (VM) +Enclaive NGINX* (Confidential Compute Enterprise Enclave 4 SGX) protects data from insider attacks. Azure confidential computing instances offer the opportunity to quickly protect any application from insider threats, using Intel® Software Guard Extensions (SGX)-enabled CPUs and Enclaive’s Enterprise Enclaves software. With a single command, Enclaive automatically creates a secure enclave that isolates and encrypts all application resources in runtime, at rest, and on the network to achieve the strongest end-to-end data protection available. 
No changes to the application code or SDKs are required. ++Head to the [Azure Marketplace solution, here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/enclaivegmbh1643578052639.vm-nginx-sgx?tab=Overview). Here, you can also find runtime environments for [Python](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/enclaivegmbh1643578052639.vm-python-sgx?tab=Overview), [Java](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/enclaivegmbh1643578052639.vm-java-sgx?tab=Overview), [Rust](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/enclaivegmbh1643578052639.vm-rust-sgx?tab=Overview), [Golang](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/enclaivegmbh1643578052639.vm-go-sgx?tab=Overview), [NodeJS](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/enclaivegmbh1643578052639.vm-nodejs-sgx?tab=Overview) and more. ++## Enabling cookieless eCommerce: Confidential WordPress-SGX +WordPress is the mother of Content Management Systems (CMS) and still one of the most widely spread technologies for private Web Sites and Online Shops. Thanks to a large development community and open source ecosystem, WordPress evolved into a Web tool for easy eCommerce and for building marketplaces literally with a single click. ++You can learn more in this [webinar here](https://vshow.on24.com/vshow/Azure_Confidential/#exhibits/enclaive_GmbH). ++## Get started with Enclaive on Azure today ++All Enclaive solutions on Azure can be found at the [Azure Marketplace solutions, here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/edgelesssystems.edb?tab=Overview). You can also find code and the docs on their [GitHub](https://github.com/enclaive). ++## Learn more ++- Learn more about [Enclaive](https://enclaive.io/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. |
confidential-computing | Fortanix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/fortanix.md | + + Title: Fortanix +description: Confidential computing solutions from Fortanix on Azure ++++++ Last updated : 03/29/2023++++# Fortanix ++## Overview ++Fortanix secures data, wherever it is. Fortanix’s data-first approach helps businesses of all sizes to modernize their security solutions on-premises, in the cloud, and everywhere in-between. Enterprises worldwide, especially in privacy-sensitive industries like healthcare, fintech, financial services, government, and retail, trust Fortanix for data security, privacy, and compliance. +++You can learn more about Fortanix through our [webinars here](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Fortanix_Inc). ++## Fortanix Confidential Computing Node Agent ++Fortanix Node Agent is software deployed on Azure confidential computing DC VMs to manage the compute node and applications running in secure enclaves. The node agent is compatible with Fortanix Confidential Computing Manager, which enables running containerized apps in Intel SGX secure enclaves using Azure confidential computing. The Node Agent helps verify the hardware and software running on the compute nodes. The Node Agent also assists with application attestation and visibility for Fortanix Confidential Computing Manager. ++Get started today with the [Azure Marketplace solution, here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/fortanix.rte_node_agent?tab=Overview) or check out our [quickstart, here](../how-to-fortanix-confidential-computing-manager-node-agent.md). ++## Fortanix Confidential Computing Manager on Azure +The Fortanix Confidential Computing Manager enables applications to run in confidential computing environments, verifies the integrity of those environments, and manages the enclave application life-cycle. 
The solution orchestrates critical security policies such as identity verification, data access control, and code attestation for enclaves that are required for confidential computing. Unlike other approaches, Fortanix provides the flexibility to run and manage the broadest set of applications, including existing applications, new enclave-native applications, and prepackaged applications. ++Head to the [Azure Marketplace solution for more](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/fortanix.em_managed?tab=Overview). You can also check out our [quickstart, here](../how-to-fortanix-confidential-computing-manager.md). ++## Fortanix Data Security Manager +Fortanix Data Security Manager is a unified Key Management, HSM, Tokenization, and Secrets Management solution secured with Intel® SGX. Fortanix protects sensitive data across public, hybrid, multicloud, and private cloud environments with a single point of management and control. Additionally, an Azure cluster of Fortanix Data Security Manager can be connected seamlessly to an on-premises cluster of Fortanix Data Security Manager, other third-party legacy HSMs, or Azure Managed HSM to achieve FIPS 140-2 Level 3 HSM security with software-defined simplicity. ++You can learn more and get started with these [Azure Marketplace solutions, here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/fortanix.fortanix-sdkms-sgx?tab=Overview). ++++## Learn more ++- Learn more about [Fortanix](https://www.fortanix.com/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. + |
confidential-computing | Habu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/habu.md | + + Title: Habu +description: Confidential computing solutions from Habu on Azure ++++++ Last updated : 03/29/2023++++# Habu +++## Overview ++Data clean rooms allow organizations to share data and collaborate on analytics without compromising privacy and security. Habu is a pioneer in data clean rooms, offering a fully interoperable cloud solution that allows multiple organizations to collaborate without moving data. They now provide clean rooms that support [Azure confidential computing](../overview.md) on [AMD powered confidential VMs](../confidential-vm-overview.md) to increase data privacy protection. ++Collaboration partners can now participate in cross-cloud, cross-region data sharing - with protections against unauthorized access to data across partners, cloud providers, and even Habu. You can hear more from Habu's Chief Product Officer, Matthew Karasick, on their [partnership with Azure here](https://build.microsoft.com/en-US/sessions/4cdcea58-d6fa-43f9-a1ea-27a8983e3f57?source=partnerdetail). ++You can also get started on their [Azure Marketplace solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/habuinc1663874067667.habu?tab=Overview), today. +++## Learn more ++- Learn more about [Habu](https://habu.com/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. |
confidential-computing | Mithril | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/mithril.md | + + Title: Mithril Security +description: Confidential computing solutions from Mithril Security on Azure ++++++ Last updated : 03/29/2023++++# Mithril Security +++## Overview ++Mithril Security provides tooling to help SaaS vendors serve AI models inside secure enclaves, providing an on-premises level of security and control to data owners. Data owners can use their SaaS AI solutions while remaining compliant and in control of their data. ++ Mithril Security is also a key [Azure confidential computing](../overview.md) partner solution that enables [confidential data clean rooms](../multi-party-data.md). Learn more about their offering [here](https://blindbox.mithrilsecurity.io/en/latest/). ++You can learn more about Mithril Security in [our partner webinar here](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Mithril_Security). ++## Learn more ++- Learn more about [Mithril Security](https://www.mithrilsecurity.io/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. |
confidential-computing | Opaque | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/opaque.md | + + Title: Opaque +description: Confidential computing solutions from Opaque on Azure ++++++ Last updated : 03/29/2023++++# Opaque Systems, Inc. +++## Overview ++Opaque makes confidential data useful by enabling secure analytics and machine learning on encrypted data. With Opaque Systems, you can analyze encrypted data in the cloud using popular tools like Apache Spark, while ensuring that your data is never exposed unencrypted to anybody else: not the cloud provider, not system administrators with root access, not even to Opaque! Analyze encrypted, structured data securely using Spark SQL. Run arbitrary SQL queries, complex analytics, and ETL pipelines on encrypted data loaded from multiple sources. ++You can learn more about Opaque in [our partner webinar here](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Opaque). ++## Opaque analytics ++By combining encrypted data from several sources and training models on the joint dataset, you can generate insights that would otherwise be impossible. This product enables secure collaboration with partners, data providers, data processors, different business units, and 3rd parties. The data remains encrypted from your storage system all the way to the cloud platform's CPU run-time memory. ++Get started today with the Azure Marketplace solution, [you can check it out here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/opaquesystemsinc1638314744398.opaque_analytics_001?tab=Overview). +++## Learn more ++- Learn more about [Opaque Systems, Inc](https://opaque.co/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. |
confidential-computing | Partner Pages Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/partner-pages-index.md | + + Title: Azure confidential computing partners +description: Learn about how Azure confidential computing partners build on the Azure infrastructure to solve customer problems +++++ Last updated : 03/29/2023+++++# Azure confidential computing partners ++Azure confidential computing enables an ecosystem of partners that build on our privacy preserving infrastructure to provide additional capabilities for customers. Learn more about our partners, their unique solutions to your use cases and links to get started with their Azure Marketplace solutions. ++- [Anjuna](../partner-pages/anjuna.md) Confidential Computing Platform is the solution for protecting your workloads from prying eyes and unauthorized tampering in the cloud. Your workloads stay confidential and trusted during execution so that you can embrace the cloud and innovate faster without the threat of code and data exposure. ++- [BeeKeeperAI](../partner-pages/beekeeperai.md) - Accelerating healthcare AI through a secure collaboration platform for algorithm owners and data stewards. ++- [Decentriq](../partner-pages/decentriq.md) enables companies around the globe to collaborate with other organizations on their most sensitive datasets and create value for their clients. The technologies we apply make it impossible for anyone to see the sensitive data, us included. ++- [Edgeless Systems](../partner-pages/edgeless.md) allows you to take cloud security and compliance to the next level, easily and at scale. ++- [Enclaive](../partner-pages/enclaive.md) System's revolutionary enclavation technology establishes the highest level of application security and data privacy. Our apps are so secure, even the host is unable to look inside. 
++- [Fortanix](../partner-pages/fortanix.md), a leading confidential computing solutions provider, based in Santa Clara (CA), provides a Confidential AI platform that allows data teams to work with their sensitive data sets to train and run AI models in a confidential manner. ++- [Habu](../partner-pages/habu.md) works with a wide range of collaborators across the ecosystem including leading platforms for activation, data and identity companies, agencies and consultancies, and major clouds. Fast-track business growth with a platform that integrates with your existing tools and technology investments. ++- [Mithril Security](../partner-pages/mithril.md) helps software vendors sell SaaS to enterprises, thanks to our secure enclave deployment tooling, which provides SaaS on-prem levels of security and control for customers. ++- [Opaque Systems](../partner-pages/opaque.md) is a confidential computing and data clean room platform that enables secure data sharing, multi-party analytics and machine learning on encrypted data. ++- [Scone](../partner-pages/scone.md) confidential computing platform facilitates always encrypted execution: one can run services and applications such that neither the data nor the code is ever accessible as plain text - not even for root users. |
confidential-computing | Scone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/scone.md | + + Title: Scontain +description: Confidential computing solutions from Scontain on Azure ++++++ Last updated : 03/29/2023++++# Scontain ++## Overview ++Scontain sells the SCONE platform and confidential services like Spark and MariaDB. It supports its customers in setting up confidential multi-stakeholder computing and confidential machine learning. Scontain helps its customers to build confidential services and to educate developers. ++You can learn more about Scontain in [partner webinar here](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Scontain_GmbH). ++## SCONE platform for Azure confidential computing ++SCONE supports the development and operations of modern confidential cloud-native applications and multi-party confidential computing. It enables service providers and software developers to transform their applications into confidential applications running inside TEE hardware enclaves (for example, [Intel SGX](../confidential-computing-enclaves.md)) without requiring source code changes. The platform supports all common programming languages and has excellent performance. ++You can try the SCONE platform on Azure today, check out their page on [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/scontainug1595751515785.scone?tab=Overview). ++## SCONE Confidential PySpark on Kubernetes +The SCONE Confidential PySpark Virtual Machine includes everything you need to evaluate our confidential PySpark offering on a Kubernetes cluster. Run distributed tasks on large datasets while protecting your Spark application code and data. Remote attestation ensures that your workload hasn't been tampered with when deployed to an untrusted host - for example a VM instance or a Kubernetes node that runs on the cloud. 
In this process, attestation evidence provided by Intel SGX hardware is analyzed by an attestation provider, such as Intel or Microsoft Azure Attestation. ++Head to this offering on [Azure Marketplace to learn more](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/scontainug1595751515785.scone-pyspark?tab=Overview). ++## SCONE Confidential Computing Playground +The SCONE Playground Virtual Machine has everything you need to evaluate the SCONE Confidential Computing Platform. The Virtual Machine includes our internal tooling (scone-build and sconify-image for effortlessly transforming standard container images into confidential ones), preloaded container images and Helm charts, a local Kubernetes cluster, as well as many practical examples, from simple "Hello World" applications to complex, distributed, multi-stakeholder Machine Learning scenarios with TensorFlow and Spark. This way one can try our solutions without the need to set up everything from scratch (install all the tooling, manage access tokens, set up Kubernetes clusters, and so on). ++This is the easiest way to get started with SCONE, now available on [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/scontainug1595751515785.scone-demos?tab=Overview). +++## Learn more ++- Learn more about [Scontain](https://scontain.com/). ++- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners. + |
container-apps | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md | IP addresses are broken down into the following types: | Public inbound IP address | Used for app traffic in an external deployment, and management traffic in both internal and external deployments. | | Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Outbound IPs aren't guaranteed and may change over time. Using a NAT gateway or other proxy for outbound traffic from a Container App environment is only supported on the workload profile environment. | | Internal load balancer IP address | This address only exists in an internal deployment. |-| App-assigned IP-based TLS/SSL addresses | These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured. | ## Subnet |
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
container-apps | Scale App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md | Adding or editing scaling rules creates a new revision of your container app. A ## Scale definition -Scaling is defined by the combination of limits and rules. +Scaling is defined by the combination of limits, rules, and behavior. - **Limits** are the minimum and maximum possible number of replicas per revision as your container app scales. Scaling is defined by the combination of limits and rules. [Scale rules](#scale-rules) are implemented as HTTP, TCP, or custom. +- **Behavior** is how the rules and limits are combined together to determine scale decisions over time. ++ [Scale behavior](#scale-behavior) explains how scale decisions are calculated. + As you define your scaling rules, keep in mind the following items: - You aren't billed usage charges if your container app scales to zero. If you don't create a scale rule, the default scale rule is applied to your cont > [!IMPORTANT] > Make sure you create a scale rule or set `minReplicas` to 1 or more if you don't enable ingress. If ingress is disabled and you don't define a `minReplicas` or a custom scale rule, then your container app will scale to zero and have no way of starting back up. -## Considerations +## Scale behavior -- In "multiple revision" mode, adding a new scale trigger creates a new revision of your application but your old revision remains available with the old scale rules. Use the **Revision management** page to manage traffic allocations.+Scaling behavior has the following defaults: -- No usage charges are incurred when an application scales to zero. 
For more pricing information, see [Billing in Azure Container Apps](billing.md).+| Parameter | Value | +|--|--| +| Polling interval | 30 seconds | +| Cool down period | 300 seconds | +| Scale up stabilization window | 0 seconds | +| Scale down stabilization window | 300 seconds | +| Scale up step | 1, 4, 100% of current | +| Scale down step | 100% of current | +| Scaling algorithm | `desiredReplicas = ceil(currentMetricValue / targetMetricValue)` | ++- **Polling interval** is how frequently event sources are queried by KEDA. This value doesn't apply to HTTP and TCP scale rules. +- **Cool down period** is how long after the last event was observed before the application scales down to its minimum replica count. +- **Scale up stabilization window** is how long to wait before performing a scale up decision once scale up conditions were met. +- **Scale down stabilization window** is how long to wait before performing a scale down decision once scale down conditions were met. +- **Scale up step** is the rate new instances are added at. It starts with 1, 4, 8, 16, 32, ... up to the configured maximum replica count. +- **Scale down step** is the rate at which replicas are removed. By default 100% of replicas that need to shut down are removed. +- **Scaling algorithm** is the formula used to calculate the current desired number of replicas. ++### Example -### Unsupported KEDA capabilities +For the following scale rule: -- KEDA ScaledJobs aren't supported. For more information, see [KEDA Scaling Jobs](https://keda.sh/docs/concepts/scaling-jobs/#overview).+```json +"minReplicas": 0, +"maxReplicas": 20, +"rules": [ + { + "name": "azure-servicebus-queue-rule", + "custom": { + "type": "azure-servicebus", + "metadata": { + "queueName": "my-queue", + "namespace": "service-bus-namespace", + "messageCount": "5" + } + } + } +] +``` ++Starting with an empty queue, KEDA takes the following steps in a scale up scenario: ++1. Check `my-queue` every 30 seconds. +1. 
If the queue length equals 0, go back to (1). +1. If the queue length is > 0, scale the app to 1. +1. If the queue length is 50, calculate `desiredReplicas = ceil(50/5) = 10`. +1. Scale app to `min(maxReplicaCount, desiredReplicas, max(4, 2*currentReplicaCount))` +1. Go back to (1). ++If the app was scaled to the maximum replica count of 20, scaling goes through the same previous steps. Scale down only happens if the condition was satisfied for 300 seconds (scale down stabilization window). Once the queue length is 0, KEDA waits for 300 seconds (cool down period) before scaling the app to 0. ++## Considerations ++- In "multiple revisions" mode, adding a new scale trigger creates a new revision of your application but your old revision remains available with the old scale rules. Use the **Revision management** page to manage traffic allocations. ++- No usage charges are incurred when an application scales to zero. For more pricing information, see [Billing in Azure Container Apps](billing.md). ### Known limitations |
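The scale-up walkthrough above can be sketched as a small Python helper. This is an illustrative reconstruction of the documented algorithm, not part of any SDK: `desired_replicas` is a hypothetical function that applies the formula `ceil(currentMetricValue / targetMetricValue)` together with the `max(4, 2 * currentReplicaCount)` step limit shown in the example.

```python
import math

def desired_replicas(queue_length: int, target_per_replica: int,
                     current: int, max_replicas: int) -> int:
    """Sketch of the KEDA-style scale decision described above."""
    if queue_length == 0:
        # Scaling to zero is still subject to the cool down period.
        return 0
    desired = math.ceil(queue_length / target_per_replica)
    # Scale up is rate-limited to max(4, 2 * currentReplicaCount) per step.
    step_limit = max(4, 2 * current)
    return min(max_replicas, desired, step_limit)

# Queue length 50, target 5 messages per replica (as in the rule above):
print(desired_replicas(50, 5, 1, 20))  # 4  - limited by the scale up step
print(desired_replicas(50, 5, 4, 20))  # 8  - step limit doubles each round
print(desired_replicas(50, 5, 8, 20))  # 10 - ceil(50/5), within the step limit
```

Successive polling intervals converge on `ceil(50/5) = 10` replicas, matching the walkthrough.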
container-apps | Tutorial Dev Services Kafka | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-dev-services-kafka.md | Azure CLI commands and Bicep template fragments are featured in this tutorial. I # [Bash](#tab/bash) ```bash- az rest \ - --method PUT \ - --url "/subscriptions/$(az account show --output tsv --query id)/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.App/containerApps/$KAFKA_SVC?api-version=2023-04-01-preview" \ - --body "{\"location\": \"$LOCATION\", \"properties\": {\"environmentId\": \"$ENVIRONMENT_ID\", \"configuration\": {\"service\": {\"type\": \"kafka\"}}}}" + az containerapp service kafka create \ + --name "$KAFKA_SVC" \ + --resource-group "$RESOURCE_GROUP" \ + --environment "$ENVIRONMENT" ``` # [Bicep](#tab/bicep) When you create the app, you'll set it up to use `./kafka-topics.sh`, `./kafka-c az containerapp create \ --name "$KAFKA_CLI_APP" \ --image mcr.microsoft.com/k8se/services/kafka:3.4 \+ --bind "$KAFKA_SVC" \ --environment "$ENVIRONMENT" \ --resource-group "$RESOURCE_GROUP" \ --min-replicas 1 \ --max-replicas 1 \ --command "/bin/sleep" "infinity"- - az rest \ - --method PATCH \ - --headers "Content-Type=application/json" \ - --url "/subscriptions/$(az account show --output tsv --query id)/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.App/containerApps/$KAFKA_CLI_APP?api-version=2023-04-01-preview" \ - --body "{\"properties\": {\"template\": {\"serviceBinds\": [{\"serviceId\": \"/subscriptions/$(az account show --output tsv --query id)/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.App/containerApps/$KAFKA_SVC\"}]}}}" ``` # [Bicep](#tab/bicep) |
container-instances | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md | |
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
cosmos-db | Migrate Dotnet V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v3.md | await client.CreateDocumentAsync( * `Microsoft.Azure.Documents.Document` -Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types. +Because the .NET v3 SDK allows users to configure [a custom serialization engine](migrate-dotnet-v3.md#customize-serialization), there's no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types. * `Microsoft.Azure.Documents.Resource` The `FeedOptions` class in SDK v2 has now been renamed to `QueryRequestOptions` |`FeedOptions.EnableCrossPartitionQuery`|Removed. Default behavior in SDK 3.0 is that cross-partition queries will be executed without the need to enable the property specifically. | |`FeedOptions.PopulateQueryMetrics`|Removed. It is now enabled by default and part of the [diagnostics](troubleshoot-dotnet-sdk.md#capture-diagnostics).| |`FeedOptions.RequestContinuation`|Removed. It is now promoted to the query methods themselves. |-|`FeedOptions.JsonSerializerSettings`|Removed. 
Serialization can be customized through a [custom serializer](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializer) or [serializer options](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializeroptions).| +|`FeedOptions.JsonSerializerSettings`|Removed. See how to [customize serialization](#customize-serialization) for additional information.| |`FeedOptions.PartitionKeyRangeId`|Removed. Same outcome can be obtained from using [FeedRange](change-feed-pull-model.md#use-feedrange-for-parallelization) as input to the query method.| |`FeedOptions.DisableRUPerMinuteUsage`|Removed.| The v3 SDK has built-in support for the bulk executor library, allowing you to u For more information, see [how to migrate from the bulk executor library to bulk support in Azure Cosmos DB .NET V3 SDK](how-to-migrate-from-bulk-executor-library.md) +### Customize serialization +The .NET V2 SDK allows setting *JsonSerializerSettings* in *RequestOptions* at the operational level used to deserialize the result document: ++```csharp +// .NET V2 SDK +var result = await container.ReplaceDocumentAsync(document, new RequestOptions { JsonSerializerSettings = customSerializerSettings }) +``` ++The .NET SDK v3 provides a [serializer interface](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializer) to fully customize the serialization engine, or more generic [serialization options](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializeroptions) as part of the client construction. 
++Customizing the serialization at the operation level can be achieved through the use of Stream APIs: ++```csharp +// .NET V3 SDK +using(Response response = await this.container.ReplaceItemStreamAsync(stream, "itemId", new PartitionKey("itemPartitionKey")) +{ ++ using(Stream stream = response.ContentStream) + { + using (StreamReader streamReader = new StreamReader(stream)) + { + // Read the stream and do dynamic deserialization based on type with a custom Serializer + } + } +} +``` + ## Code snippet comparisons The following code snippet shows the differences in how resources are created between the .NET v2 and v3 SDKs: |
cosmos-db | Quickstart Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-spark.md | df.show() For more information related to schema inference, see the full [schema inference configuration](https://github.com/Azure/azure-sdk-for-jav#schema-inference-config) documentation. +## Raw JSON support for Spark Connector + When working with Cosmos DB, you may come across documents that contain an array of entries with potentially different structures. These documents typically have an array called "tags" that contains items with varying structures, along with a "tag_id" field that serves as an entity type identifier. To handle patching operations efficiently in Spark, you can use a custom function that handles the patching of such documents. ++**Sample document that can be used** +++``` +{ + "id": "Test01", + "document_type": "tag", + "tags": [ + { + "tag_id": "key_val", + "params": "param1=val1;param2=val2" + }, + { + "tag_id": "arrays", + "tags": "tag1,tag2,tag3" + } + ] +} +``` ++#### [Python](#tab/python) ++```python ++def init_sequences_db_config(): + #Configure Config for Cosmos DB Patch and Query + global cfgSequencePatch + cfgSequencePatch = {"spark.cosmos.accountEndpoint": cosmosEndpoint, + "spark.cosmos.accountKey": cosmosMasterKey, + "spark.cosmos.database": cosmosDatabaseName, + "spark.cosmos.container": cosmosContainerNameTarget, + "spark.cosmos.write.strategy": "ItemPatch", # Partial update all documents based on the patch config + "spark.cosmos.write.bulk.enabled": "true", + "spark.cosmos.write.patch.defaultOperationType": "Replace", + "spark.cosmos.read.inferSchema.enabled": "false" + } + +def adjust_tag_array(rawBody): + print("test adjust_tag_array") + array_items = json.loads(rawBody)["tags"] + print(json.dumps(array_items)) + + output_json = [{}] ++ for item in array_items: + output_json_item = {} + # Handle different tag types + if item["tag_id"] == "key_val": + output_json_item.update({"tag_id" : 
item["tag_id"]}) + params = item["params"].split(";") + for p in params: + key_val = p.split("=") + element = {key_val[0]: key_val[1]} + output_json_item.update(element) ++ if item["tag_id"] == "arrays": + tags_array = item["tags"].split(",") + output_json_item.update({"tags": tags_array}) + + output_json.append(output_json_item) ++ # convert to raw json + return json.dumps(output_json) +++init_sequences_db_config() ++native_query = "SELECT c.id, c.tags, c._ts from c where EXISTS(SELECT VALUE t FROM t IN c.tags WHERE IS_DEFINED(t.tag_id))".format() ++# the custom query will be processed against the Cosmos endpoint +cfgSequencePatch["spark.cosmos.read.customQuery"] = native_query +# Cosmos DB patch column configs +cfgSequencePatch["spark.cosmos.write.patch.columnConfigs"] = "[col(tags_new).path(/tags).op(set).rawJson]" ++# load df +df_relevant_sequences = spark.read.format("cosmos.oltp").options(**cfgSequencePatch).load() +print(df_relevant_sequences) +df_relevant_sequences.show(20, False) +if not df_relevant_sequences.isEmpty(): + print("Found sequences to patch") + + # prepare udf function + tags_udf= udf(lambda a: adjust_tag_array(a), StringType()) ++ df_relevant_sequences.show(20, False) ++ # apply udf function for patching raw json + df_relevant_sequences_adjusted = df_relevant_sequences.withColumn("tags_new", tags_udf("_rawBody")) + df_relevant_sequences_adjusted.show(20, False) ++ # write df + output_df = df_relevant_sequences_adjusted.select("id","tags_new") + output_df.write.format("cosmos.oltp").mode("Append").options(**cfgSequencePatch).save() ++``` +#### [Scala](#tab/scala) +```scala +var cfgSequencePatch = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint, + "spark.cosmos.accountKey" -> cosmosMasterKey, + "spark.cosmos.database" -> cosmosDatabaseName, + "spark.cosmos.container" -> cosmosContainerName, + "spark.cosmos.write.strategy" -> "ItemPatch", // Partial update all documents based on the patch config + "spark.cosmos.write.bulk.enabled" -> 
"false", + "spark.cosmos.write.patch.defaultOperationType" -> "Replace", + "spark.cosmos.read.inferSchema.enabled" -> "false" +) ++def patchTags(rawJson: String): String = { + implicit val formats = DefaultFormats + val json = JsonMethods.parse(rawJson) + val tagsArray = (json \ "tags").asInstanceOf[JArray] + var outList = new ListBuffer[Map[String, Any]] ++ tagsArray.arr.foreach { tag => + val tagId = (tag \ "tag_id").extract[String] + var outMap = Map.empty[String, Any] ++ // Handle different tag types + tagId match { + case "key_val" => + val params = (tag \ "params").extract[String].split(";") + for (p <- params) { + val paramVal = p.split("=") + outMap += paramVal(0) -> paramVal(1) + } + case "arrays" => + val tags = (tag \ "tags").extract[String] + val tagList = tags.split(",") + outMap += "arrays" -> tagList + case _ => {} + } + outList += outMap + } + // convert to raw json + write(outList) +} ++val nativeQuery = "SELECT c.id, c.tags, c._ts from c where EXISTS(SELECT VALUE t FROM t IN c.tags WHERE IS_DEFINED(t.tag_id))" ++// the custom query will be processed against the Cosmos endpoint +cfgSequencePatch += "spark.cosmos.read.customQuery" -> nativeQuery ++//Cosmos DB patch column configs +cfgSequencePatch += "spark.cosmos.write.patch.columnConfigs" -> "[col(tags_new).path(/tags).op(set).rawJson]" ++// load df +val dfRelevantSequences = spark.read.format("cosmos.oltp").options(cfgSequencePatch).load() +dfRelevantSequences.show(20, false) ++if(!dfRelevantSequences.isEmpty){ + println("Found sequences to patch") ++ // prepare udf function + val patchTagsUDF = udf(patchTags _) ++ // apply udf function for patching raw json + val dfRelevantSequencesAdjusted = dfRelevantSequences.withColumn("tags_new", patchTagsUDF(dfRelevantSequences("_rawBody"))) + + dfRelevantSequencesAdjusted.show(20, false) + + var outputDf = dfRelevantSequencesAdjusted.select("id","tags_new") ++ // write df + 
outputDf.write.format("cosmos.oltp").mode("Append").options(cfgSequencePatch).save() +} ++``` ++ ## Configuration reference The Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL has a complete configuration reference that provides more advanced settings for writing and querying data, serialization, streaming using change feed, partitioning and throughput management and more. For a complete listing with details, see our [Spark Connector Configuration Reference](https://aka.ms/azure-cosmos-spark-3-config) on GitHub. |
cosmos-db | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md | Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
cosmos-db | Vercel Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vercel-integration.md | Use this guide if you have already identified the Vercel project(s) or want to i ## Integrate Cosmos DB with Vercel using marketplace template -We have an [Azure Cosmos DB Next.js Starter](https://aka.ms/azurecosmosdb-vercel-template), which a great ready-to-use template with guided structure and configuration, saving you time and effort in setting up the initial project setup. Click on Deploy to Deploy on Vercel and View Repo to view the (source code)[https://github.com/Azure/azurecosmosdb-vercel-starter]. +We have an [Azure Cosmos DB Next.js Starter](https://aka.ms/azurecosmosdb-vercel-template), which is a great ready-to-use template with guided structure and configuration, saving you time and effort in the initial project setup. Click on Deploy to Deploy on Vercel and View Repo to view the [source code](https://github.com/Azure/azurecosmosdb-vercel-starter). 1. Choose the GitHub repository, where you want to clone the starter repo. :::image type="content" source="./media/integrations/vercel/create-git-repository.png" alt-text="Screenshot to create the repository." lightbox="./media/integrations/vercel/create-git-repository.png"::: |
cost-management-billing | Link Partner Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md | description: Track engagements with Azure customers by linking a partner ID to t Previously updated : 12/05/2022 Last updated : 07/06/2023 C:\ az managementpartner update --partner-id 12345 C:\ az managementpartner delete --partner-id 12345 ``` -## Next steps --Join the discussion in the [Microsoft Partner Community](https://aka.ms/PALdiscussion) to receive updates or send feedback. - ## Frequently asked questions **What PAL identity permissions are needed to show revenue?** PAL can be as granular as a resource instance. For example, a single virtual machine. However, PAL is set on a user account. The scope of the Azure Consumed Revenue (ACR) measurement is whatever administrative permissions that a user account has within the environment. An administrative scope can be subscription, resource group, or resource instance using standard Azure RBAC roles. +In other words, PAL association can happen for all RBAC roles. The roles determine eligibility for partner incentives. For more information about eligibility, see [Partner Incentives](https://aka.ms/partnerincentives). + For example, if you're partner, your customer might hire you to do a project. Your customer can give you an administrative account to deploy, configure, and support an application. Your customer can scope your access to a resource group. If you use PAL and associate your MPN ID with the administrative account, Microsoft measures the consumed revenue from the services within the resource group. If the Azure AD identity that was used for PAL is deleted or disabled, the ACR attribution stops for the partner on the associated resources. 
PAL association only adds the partner's ID to the credential already provisioned a **What happens if the PAL identity is deleted?** If the partner network ID, also called MPN ID, is deleted, then all the recognition mechanisms, including Azure Consumed Revenue (ACR) attribution, stop working.++## Next steps ++Join the discussion in the [Microsoft Partner Community](https://aka.ms/PALdiscussion) to receive updates or send feedback. |
cost-management-billing | Reservation Discount Azure Sql Dw | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-azure-sql-dw.md | Title: How reservation discounts apply to Azure Synapse Analytics (data warehousing only) | Microsoft Docs -description: Learn how reservation discounts apply to Azure Synapse Analytics to help save you money. + Title: How reservation discounts apply to Azure Synapse Analytics (data warehousing only) +description: Learn how reservation discounts apply to Azure Synapse Analytics to help you save money. Previously updated : 12/06/2022 Last updated : 07/06/2023 After you buy Azure Synapse Analytics reserved capacity, the reservation discoun The Azure Synapse Analytics reserved capacity discount is applied to running data warehouses on an hourly basis. If you don't have a warehouse deployed for an hour, then the reserved capacity is wasted for that hour. It doesn't carry over. -After purchase, the reservation that you buy is matched to Azure Synapse Analytics usage emitted by running warehouses at any point in time. If you shut down some warehouses, then reservation discounts automatically apply to any other matching warehouses. +After purchase, the reservation is matched to Azure Synapse Analytics usage emitted by running warehouses at any point in time. If you shut down some warehouses, then reservation discounts automatically apply to any other matching warehouses. For warehouses that don't run for a full hour, the reservation is automatically applied to other matching instances in that hour. For warehouses that don't run for a full hour, the reservation is automatically The following examples show how the Azure Synapse Analytics reserved capacity discount applies, depending on the deployments. -- **Example 1**: You purchase 5 units of 100 cDWU reserved capacity. You run a DW1500c Azure Synapse Analytics instance for an hour. 
In this case, usage is emitted for 15 units of 100 cDWU usage. The reservation discount applies to the 5 units that you used. You are charged using pay-as-you-go rates for the remaining 10 units of 100 cDWU usage that you used. In other words, partial coverage is possible for multiple reservations.+- **Example 1**: You purchase five units of 100 cDWU reserved capacity. You run a DW1500c Azure Synapse Analytics instance for an hour. In this case, usage is emitted for 15 units of 100 cDWU usage. The reservation discount applies to the five units that you used. You're charged using pay-as-you-go rates for the remaining 10 units of 100 cDWU usage that you used. In other words, partial coverage is possible for multiple reservations. -- **Example 2**: You purchase 5 units of 100 cDWU reserved capacity. You run two DW100c Azure Synapse Analytics instances for an hour. In this case, two usage events are emitted for 1 unit of 100 cDWU usage. Both usage events get reserved capacity discounts. The remaining 3 units of 100 cDWU reserved capacity are wasted and don't carry over for future use. In other words, a single reservation can get matched to multiple Azure Synapse Analytics instances.+- **Example 2**: You purchase five units of 100 cDWU reserved capacity. You run two DW100c Azure Synapse Analytics instances for an hour. In this case, two usage events are emitted for one unit of 100 cDWU usage. Both usage events get reserved capacity discounts. The remaining three units of 100 cDWU reserved capacity are wasted and don't carry over for future use. In other words, a single reservation can get matched to multiple Azure Synapse Analytics instances. -- **Example 3**: You purchase 1 unit of 100 cDWU reserved capacity. You run two DW100c Azure Synapse Analytics instances. Each runs for 30 minutes. In this case, both usage events get reserved capacity discounts. No usage is charged using pay-as-you-go rates.+- **Example 3**: You purchase one unit of 100 cDWU reserved capacity. 
You run two DW100c Azure Synapse Analytics instances. Each runs for 30 minutes. In this case, both usage events get reserved capacity discounts. No usage is charged using pay-as-you-go rates. ++When you apply a management group scope and have multiple Synapse Dedicated Pools running concurrently, your reservation applies to the usage on a first-come, first-served basis. Any usage beyond what's covered by your reservation is charged at pay-as-you-go rates. ## Need help? Contact us |
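The reservation examples above all reduce to the same hourly arithmetic: reserved units cover matching usage first, any excess is billed at pay-as-you-go rates, and leftover reserved units are wasted for that hour. A minimal illustrative sketch:

```python
def apply_reservation(reserved_units, used_units):
    """Split one hour's usage into reserved-covered, pay-as-you-go, and wasted units."""
    covered = min(reserved_units, used_units)
    pay_as_you_go = used_units - covered
    wasted = reserved_units - covered  # doesn't carry over to the next hour
    return covered, pay_as_you_go, wasted

# Example 1: 5 reserved units of 100 cDWU; a DW1500c emits 15 units for the hour.
assert apply_reservation(5, 15) == (5, 10, 0)

# Example 2: 5 reserved units; two DW100c instances emit 2 units in total.
assert apply_reservation(5, 2) == (2, 0, 3)

# Example 3: 1 reserved unit; two DW100c instances each run 30 minutes (2 * 0.5 = 1 unit).
assert apply_reservation(1, 1) == (1, 0, 0)
```

The sketch assumes all usage matches the reservation's scope; actual billing applies the same split per matching instance within the hour.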
data-factory | Deactivate Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/deactivate-activity.md | + + Title: Deactivate an Activity in Azure Data Factory ++description: Learn how to deactivate an activity to exclude it from pipeline runs and validation ++++++ Last updated : 07/01/2023+++# Deactivate an Activity +++You can now deactivate one or more activities in a pipeline, and we skip them during validation and during pipeline runs. This feature significantly improves pipeline developer efficiency, allowing customers to comment out part of the pipeline without deleting it from the canvas. You may choose to reactivate them at a later time. ++++## Deactivate and Reactivate ++There are two ways to deactivate an activity. ++First, you may deactivate a single activity from its **General** tab. ++- Select the activity you want to deactivate +- Under the **General** tab, select _Inactive_ for _Activity state_ +- Pick a state for _Mark activity as_. Choose from _Succeeded_, _Failed_ or _Skipped_ ++++Alternatively, you can deactivate multiple activities with a right-click. ++- Press down the _Ctrl_ key to multi-select. Using your mouse, left click on all activities you want to deactivate +- Right click to bring up the drop-down menu +- Select _Deactivate_ to deactivate them all +- To fine-tune the settings for _Mark activity as_, go to the **General** tab of the activity, and make appropriate changes +++To reactivate the activities, choose _Active_ for the _Activity State_, and they revert back to their previous behaviors, as expected. ++## Behaviors +++An inactive activity behaves differently in a pipeline. ++1. On canvas, the inactive activity is grayed out, with an _Inactive sign_ placed next to the activity type +1. On canvas, a status sign (Succeeded, Failed or Skipped) is placed on the box, to visualize the _Mark activity as_ setting +1. The activity is excluded from pipeline validation.
Hence, you don't need to provide all required fields for an inactive activity. +1. During debug runs and pipeline runs, the activity won't actually execute. Instead, it runs a placeholder line item, with the reserved status **Inactive** +1. The branching option is controlled by the _Mark activity as_ option. In other words: + * if you mark the activity as _Succeeded_, the _UponSuccess_ or _UponCompletion_ branch runs + * if you mark the activity as _Failed_, the _UponFailure_ or _UponCompletion_ branch runs + * if you mark the activity as _Skipped_, the _UponSkip_ branch runs + * for more information, see [Conditional Execution](tutorial-pipeline-failure-error-handling.md#conditional-paths) +++## Best practices ++Deactivation is a powerful tool for pipeline developers. It allows developers to "comment out" part of the code, without permanently deleting the activities. It shines in the following scenarios: ++- When developing a pipeline, developers can add placeholder inactive activities before filling in all the required fields. For instance, I need a Copy activity from SQL Server to Data warehouse, but I haven't set up all the connections yet. So I use an _inactive_ copy activity as the placeholder for the iterative development process. +- After deployment, developers can comment out certain activities that are constantly causing trouble, to avoid costly retries. For instance, my on-premises SQL server is having network connection issues, and I know my copy activities will fail for certain. I may want to deactivate the copy activity, to avoid retry requests from flooding the brittle system. ++### Known limitations ++An inactive activity never actually runs. This means the activity won't have an output or an error field. Any references to these fields throw errors downstream. ++## Next steps ++Learn more about Azure Data Factory and Synapse pipelines. ++- [Conditional Execution](tutorial-pipeline-failure-error-handling.md) |
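The branching rules for _Mark activity as_ listed above can be summarized as a small lookup table (illustrative only; the real evaluation happens inside the Data Factory runtime):

```python
# Which dependency conditions fire downstream of an inactive activity,
# based on its "Mark activity as" setting. Illustrative model only.
BRANCHES = {
    "Succeeded": {"UponSuccess", "UponCompletion"},
    "Failed": {"UponFailure", "UponCompletion"},
    "Skipped": {"UponSkip"},
}

def branches_to_run(mark_activity_as):
    """Return the dependency conditions that fire for the given setting."""
    return BRANCHES[mark_activity_as]

print(sorted(branches_to_run("Succeeded")))  # ['UponCompletion', 'UponSuccess']
```

Note that _UponCompletion_ fires for both _Succeeded_ and _Failed_, but not for _Skipped_, matching the conditional-path semantics linked above.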
data-factory | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md | |
data-factory | Self Hosted Integration Runtime Automation Scripts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-automation-scripts.md | To automate installation of Self-hosted Integration Runtime on local machines (o ## Prerequisites * Launch PowerShell on your local machine. To run the scripts, you need to choose **Run as Administrator**.-* [Download](https://www.microsoft.com/download/details.aspx?id=39717) the self-hosted integration runtime software. Copy the path where the downloaded file is. +* [Download](https://www.microsoft.com/download/details.aspx?id=39717) the self-hosted integration runtime software. Copy the path where the downloaded file is. * You also need an **authentication key** to register the self-hosted integration runtime. * For automating manual updates, you need to have a pre-configured self-hosted integration runtime. -## Scripts introduction +## Scripts introduction > [!NOTE] > These scripts are created using the [documented command line utility](./create-self-hosted-integration-runtime.md#set-up-an-existing-self-hosted-ir-via-local-powershell) in the self-hosted integration runtime. If needed, you can customize these scripts to suit your automation needs. Install and register a new self-hosted integration runtime node using **[Install * For automating manual updates: Update the self-hosted IR node with a specific version or to the latest version **[script-update-gateway.ps1](https://github.com/Azure/Azure-DataFactory/blob/main/SamplesV2/SelfHostedIntegrationRuntime/AutomationScripts/script-update-gateway.ps1)** - This is also supported in case you have turned off the auto-update, or want to have more control over updates. The script can be used to update the self-hosted integration runtime node to the latest version or to a specified higher version (downgrade doesn't work).
It accepts an argument for specifying the version number (example: -version 3.13.6942.1). When no version is specified, it always updates the self-hosted IR to the latest version found in the [downloads](https://www.microsoft.com/download/details.aspx?id=39717). > [!NOTE]- > Only the last 3 versions can be specified. Ideally this is used to update an existing node to the latest version. **IT ASSUMES THAT YOU HAVE A REGISTERED SELF HOSTED IR**. + > Only the last 3 versions can be specified. Ideally this is used to update an existing node to the latest version. **IT ASSUMES THAT YOU HAVE A REGISTERED SELF HOSTED IR**. ## Usage examples ### For automating setup-1. Download the self-hosted IR from [here](https://www.microsoft.com/download/details.aspx?id=39717). -1. Specify the path where the above downloaded SHIR MSI (installation file) is. For example, if the path is *C:\Users\username\Downloads\IntegrationRuntime_4.7.7368.1.msi*, then you can use below PowerShell command-line example for this task: +1. Download the [self-hosted IR](https://www.microsoft.com/download/details.aspx?id=39717). +1. Specify the path where the above downloaded SHIR MSI (installation file) is. For example, if the path is *C:\Users\username\Downloads\IntegrationRuntime_4.7.7368.1.msi*, then you can use the following PowerShell command-line example for this task: ```powershell PS C:\windows\system32> C:\Users\username\Desktop\InstallGatewayOnLocalMachine.ps1 -path "C:\Users\username\Downloads\IntegrationRuntime_4.7.7368.1.msi" -authKey "[key]" Update the self-hosted IR node with a specific version or to the latest version :::image type="content" source="media/self-hosted-integration-runtime-automation-scripts/integration-runtime-configure.png" alt-text="configure integration runtime"::: 1.
When the installation and key registration completes, you'll see *Succeed to install gateway* and *Succeed to register gateway* results in your local PowerShell.- [:::image type="content" source="media/self-hosted-integration-runtime-automation-scripts/script-1-run-result.png#lightbox" alt-text="script 1 run result](media/self-hosted-integration-runtime-automation-scripts/script-1-run-result.png)"::: + :::image type="content" source="media/self-hosted-integration-runtime-automation-scripts/script-1-run-result.png" alt-text="script 1 run result" lightbox="media/self-hosted-integration-runtime-automation-scripts/script-1-run-result.png"::: ### For automating manual updates This script is used to update/install + register the latest self-hosted integration runtime. The script run performs the following steps: You can follow the command-line example below to use this script: ```powershell PS C:\windows\system32> C:\Users\username\Desktop\script-update-gateway.ps1- ``` + ``` * Download and install gateway of specified version: ```powershell PS C:\windows\system32> C:\Users\username\Desktop\script-update-gateway.ps1 -version 3.13.6942.1- ``` - If your current version is already the latest one, you'll see the following result, suggesting no update is required. + ``` + If your current version is already the latest one, you'll see the following result, suggesting no update is required. [:::image type="content" source="media/self-hosted-integration-runtime-automation-scripts/script-2-run-result.png#lightbox" alt-text="script 2 run result](media/self-hosted-integration-runtime-automation-scripts/script-2-run-result.png)"::: |
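The note above says only the last three released versions can be passed with `-version`. That constraint can be sketched as follows (hypothetical helper and version list; the actual validation lives inside script-update-gateway.ps1):

```python
# Hypothetical check mirroring the "only last 3 versions" rule for -version.
# Illustrative only -- not code from the actual PowerShell script.

def is_allowed_version(requested, released_versions):
    """released_versions is ordered newest-first; only the 3 newest are accepted."""
    return requested in released_versions[:3]

# Hypothetical release history, newest first.
released = ["3.13.7000.1", "3.13.6942.1", "3.13.6900.2", "3.13.6800.0"]

print(is_allowed_version("3.13.6942.1", released))  # True
print(is_allowed_version("3.13.6800.0", released))  # False (older than last 3)
```

When no version is requested at all, the script simply takes the head of the list, i.e. the latest available download.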
data-lake-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
data-lake-store | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
databox-online | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md | Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
databox | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md | Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
ddos-protection | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md | |
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
defender-for-iot | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md | This procedure describes how to add a trial license for Defender for IoT to your **To add a trial license**: -1. Go to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) **Marketplace**. +1. Go to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) **Billing > Purchase services**. If you don't have this option, select **Marketplace** instead. -1. Select **All products** and search for **Microsoft Defender for IoT**. --1. Locate the **Microsoft Defender for IoT - OT Site License - Large Site** item. +1. Search for **Microsoft Defender for IoT** and locate the **Microsoft Defender for IoT - OT Site License - Large Site** item. 1. Select **Details** > **Start free trial** > **Try now** to start the trial. |
defender-for-iot | How To Manage Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md | This procedure describes how to purchase Defender for IoT licenses in the Micros **To purchase Defender for IoT licenses**: -1. Go to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) **Marketplace**. +1. Go to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) **Billing > Purchase services**. If you don't have this option, select **Marketplace** instead. -1. Select **All products** and search for **Microsoft Defender for IoT**. +1. Search for **Microsoft Defender for IoT**, and then locate the **Microsoft Defender for IoT** license for your site size. -1. Locate the **Microsoft Defender for IoT** license for your site size, and then follow the options through to buy the license and add it to your Microsoft 365 products. -- Make sure to select the number of licenses you want to purchase, based on the number of sites you want to monitor at the selected size. +1. Follow the options through to buy the license and add it to your Microsoft 365 products. Make sure to select the number of licenses you want to purchase, based on the number of sites you want to monitor at the selected size. > [!IMPORTANT] > All license management procedures are done from the Microsoft 365 admin center, including buying, canceling, renewing, setting to auto-renew, auditing, and more. For more information, see the [Microsoft 365 admin center help](/microsoft-365/admin/). |
event-grid | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md | Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
event-hubs | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md | Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
hdinsight | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md | Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
healthcare-apis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md | Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
healthcare-apis | Deploy Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-arm-template.md | In this quickstart, learn how to: To begin your deployment and complete the quickstart, you must have the following prerequisites: -- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).+* An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). -- **Owner** or **Contributor and User Access Administrator** role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)+* **Owner** or **Contributor and User Access Administrator** role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) -- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).+* The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). When you have these prerequisites, you're ready to configure the ARM template by using the **Deploy to Azure** button. 
When deployment is completed, the following resources and access roles are creat After you have successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings. - * To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md). +* To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md). - * To learn about the FHIR destination mapping, see [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md). +* To learn about the FHIR destination mapping, see [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md). ## Next steps In this quickstart, you learned how to deploy the MedTech service in the Azure portal using an ARM template with the **Deploy to Azure** button. -To learn about other methods for deploying the MedTech service, see +To learn about other methods of deploying the MedTech service, see > [!div class="nextstepaction"]-> [Choose a deployment method for the MedTech service](deploy-choose-method.md) +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) ++For an overview of the MedTech service device data processing stages, see ++> [!div class="nextstepaction"] +> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) ++For frequently asked questions (FAQs) about the MedTech service, see ++> [!div class="nextstepaction"] +> [Frequently asked questions about the MedTech service](frequently-asked-questions.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Bicep Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-bicep-powershell-cli.md | Complete the following five steps to deploy the MedTech service using the Azure When deployment is completed, the following resources and access roles are created in the Bicep file deployment: -* Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*. +* Azure Event Hubs namespace and event hub. In this deployment, the event hub is named *devicedata*. * Event hub consumer group. In this deployment, the consumer group is named *$Default*. For example: `az group delete --resource-group BicepTestDeployment` ## Next steps -In this quickstart, you learned about how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file. +In this quickstart, you learned how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file. -To learn about other methods for deploying the MedTech service, see +To learn about other methods of deploying the MedTech service, see > [!div class="nextstepaction"]-> [Choose a deployment method for the MedTech service](deploy-choose-method.md) +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) ++For an overview of the MedTech service device data processing stages, see ++> [!div class="nextstepaction"] +> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) ++For frequently asked questions (FAQs) about the MedTech service, see ++> [!div class="nextstepaction"] +> [Frequently asked questions about the MedTech service](frequently-asked-questions.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Choose Method | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-choose-method.md | The MedTech service provides multiple methods for deployment into Azure. Each de In this quickstart, learn about these deployment methods: * Azure Resource Manager template (ARM template) including an Azure IoT Hub using the **Deploy to Azure** button. -* ARM template using the **Deploy to Azure** button -* ARM template using Azure PowerShell or the Azure CLI -* Bicep file using Azure PowerShell or the Azure CLI -* Azure portal +* ARM template using the **Deploy to Azure** button. +* ARM template using Azure PowerShell or the Azure CLI. +* Bicep file using Azure PowerShell or the Azure CLI. +* Azure portal. ## Deployment overview To learn more about deploying the MedTech service using the Azure portal, see [D In this quickstart, you learned about the different types of deployment methods for the MedTech service. -To learn about the MedTech service, see +To learn about other methods of deploying the MedTech service, see > [!div class="nextstepaction"]-> [What is the MedTech service?](overview.md) +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) ++For an overview of the MedTech service device data processing stages, see ++> [!div class="nextstepaction"] +> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) ++For frequently asked questions (FAQs) about the MedTech service, see ++> [!div class="nextstepaction"] +> [Frequently asked questions about the MedTech service](frequently-asked-questions.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Json Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-json-powershell-cli.md | Complete the following five steps to deploy the MedTech service using the Azure When deployment is completed, the following resources and access roles are created in the ARM template deployment: -* Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*. +* Azure Event Hubs namespace and event hub. In this deployment, the event hub is named *devicedata*. * Event hub consumer group. In this deployment, the consumer group is named *$Default*. For example: `az group delete --resource-group ArmTestDeployment` In this quickstart, you learned how to use Azure PowerShell or Azure CLI to deploy an instance of the MedTech service using an ARM template. -To learn about other methods for deploying the MedTech service, see +To learn about other methods of deploying the MedTech service, see > [!div class="nextstepaction"]-> [Choose a deployment method for the MedTech service](deploy-choose-method.md) +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) ++For an overview of the MedTech service device data processing stages, see ++> [!div class="nextstepaction"] +> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) ++For frequently asked questions (FAQs) about the MedTech service, see ++> [!div class="nextstepaction"] +> [Frequently asked questions about the MedTech service](frequently-asked-questions.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Manual Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-portal.md | -You may prefer to deploy the MedTech service using the Azure portal if you: +In this quickstart, learn how to deploy the MedTech service and required resources using the Azure portal. -* Need to track every step of the provisioning process. -* Want to customize or troubleshoot your deployment. --In this quickstart, the MedTech service deployment using the Azure portal is divided into the following three sections: +The MedTech service deployment using the Azure portal is divided into the following three sections: * [Deploy prerequisite resources](#deploy-prerequisite-resources) * [Configure and deploy the MedTech service](#configure-and-deploy-the-medtech-service) As a prerequisite, you need an Azure subscription and have been granted the prop The first step is to deploy the MedTech service prerequisite resources: -* Azure resource group. -* Azure Event Hubs namespace and event hub. -* Azure Health Data Services workspace. -* Azure Health Data Services FHIR service. +* Azure resource group +* Azure Event Hubs namespace and event hub +* Azure Health Data Services workspace +* Azure Health Data Services FHIR service Once the prerequisite resources are available, deploy: -* Azure Health Data Services MedTech service +* Azure Health Data Services MedTech service ### Deploy a resource group Follow these four steps to fill in the **Basics** tab configuration: 2. Select the **Event Hubs Namespace**. - The **Event Hubs Namespace** is the name of the *Event Hubs namespace* that you previously deployed. For this example, we're using *eh-azuredocsdemo* for our MedTech service device messages. + The **Event Hubs Namespace** is the name of the *Event Hubs namespace* that you previously deployed. For this example, we're using the name *eh-azuredocsdemo*. 3. Select the **Event Hubs name**.
- The **Event Hubs name** is the name of the event hub that you previously deployed within the Event Hubs Namespace. For this example, we're using *devicedata* for our MedTech service device messages. + The **Event Hubs name** is the name of the event hub that you previously deployed within the Event Hubs Namespace. For this example, we're using the name *devicedata*. 4. Select the **Consumer group**. Under the **Destination** tab, use these values to enter the destination propert * Next, enter the **Destination name**. - The **Destination name** is a friendly name for the destination. Enter a unique name for your destination. In this example, the **Destination name** is + The **Destination name** is a friendly name for the destination. Enter a unique name for your destination. In this example, the **Destination name** is *fs-azuredocsdemo*. * Next, select the **Resolution type**. - **Resolution type** specifies how the MedTech service associates device data with FHIR Device resources and FHIR Patient resources. The MedTech service reads Device and Patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/r4/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/r4/patient-definitions.html#Patient.identifier). If an [encounter identifier](https://hl7.org/fhir/r4/encounter-definitions.html#Encounter.identifier) is specified and extracted from the device data payload, it's linked to the observation if an encounter exists on the FHIR service with that identifier. If the encounter identifier is successfully normalized, but no FHIR Encounter exists with that encounter identifier, a **FhirResourceNotFound** exception is thrown. + **Resolution type** specifies how the MedTech service associates device data with Device resources and Patient resources.
The MedTech service reads Device and Patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/r4/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/r4/patient-definitions.html#Patient.identifier). If an [encounter identifier](https://hl7.org/fhir/r4/encounter-definitions.html#Encounter.identifier) is specified and extracted from the device data payload, it's linked to the observation if an encounter exists on the FHIR service with that identifier. If the [encounter identifier](../../healthcare-apis/release-notes.md#medtech-service) is successfully normalized, but no FHIR Encounter exists with that encounter identifier, a **FhirResourceNotFound** exception is thrown. Device and Patient resources can be resolved by choosing a **Resolution type** of **Create** and **Lookup**: Valid and conforming device and FHIR destination mappings have to be provided to ## Next steps -This article described the deployment steps needed to get started using the MedTech service. +In this article, you learned how to deploy the MedTech service and required resources using the Azure portal. To learn about other methods of deploying the MedTech service, see |
healthcare-apis | Device Messages Through Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md | To learn how to get an Azure AD access token and view FHIR resources in your FHI In this tutorial, you deployed an ARM template in the Azure portal, connected to your IoT hub, created a device, sent a test message, and reviewed your MedTech service metrics. -To learn about other methods for deploying the MedTech service, see +To learn about other methods of deploying the MedTech service, see -> [!div class="nextstepaction"] -> [Choose a deployment method for the MedTech service](deploy-choose-method.md) +> [!div class="nextstepaction"] +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) ++For an overview of the MedTech service device data processing stages, see ++> [!div class="nextstepaction"] +> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) ++For frequently asked questions (FAQs) about the MedTech service, see ++> [!div class="nextstepaction"] +> [Frequently asked questions about the MedTech service](frequently-asked-questions.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Overview Of Device Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-mapping.md | -The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The device mapping is the first type and controls mapping values in the device data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The [FHIR destination mapping](how-to-configure-fhir-mappings.md) is the second type and controls the mapping for [FHIR Observations](https://www.hl7.org/fhir/observation.html). +The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager (ARM) API. The device mapping is the first type and controls mapping values in the device data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The [FHIR destination mapping](overview-of-fhir-destination-mapping.md) is the second type and controls the mapping for [FHIR Observations](https://www.hl7.org/fhir/observation.html). > [!NOTE] > The device and FHIR destination mappings are re-evaluated each time a device message is processed. Any updates to either mapping will take effect immediately. The MedTech service requires two types of [JSON](https://www.json.org/) mappings The device mapping contains collections of expression templates used to extract device message data into an internal, normalized format for further evaluation. 
Each device message received is evaluated against **all** expression templates in the collection. This evaluation means that a single device message can be separated into multiple outbound messages that can be mapped to multiple FHIR Observations in the FHIR service. > [!TIP]-> For more information about how the MedTech service processes device message data into FHIR Observations for persistence on the FHIR service, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). +> For more information about how the MedTech service processes device message data into FHIR Observations for persistence in the FHIR service, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). This diagram provides an illustration of what happens during the normalization stage within the MedTech service. The normalization process validates the device mapping before allowing it to be |values[].valueExpression|True |True | |values[].required |True |True | +> [!IMPORTANT] +> The **Resolution type** specifies how the MedTech service associates device data with Device resources and Patient resources. The MedTech service reads Device and Patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/r4/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/r4/patient-definitions.html#Patient.identifier). If an [encounter identifier](https://hl7.org/fhir/r4/encounter-definitions.html#Encounter.identifier) is specified and extracted from the device data payload, it's linked to the observation if an encounter exists on the FHIR service with that identifier. If the [encounter identifier](../../healthcare-apis/release-notes.md#medtech-service) is successfully normalized, but no FHIR Encounter exists with that encounter identifier, a **FhirResourceNotFound** exception is thrown. 
For more information on configuring the MedTech service **Resolution type**, see [Configure the Destination tab](deploy-manual-portal.md#configure-the-destination-tab). + > [!NOTE] > The `values[].valueName, values[].valueExpression`, and `values[].required` elements are only required if you have a value entry in the array. It's valid to have no values mapped. These elements are used when the telemetry being sent is an event. > |
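The fan-out behavior described above (each device message is evaluated against every expression template in the collection) can be sketched in plain JavaScript. This is an illustration only: real device mappings declare JsonPath or JMESPath expressions in JSON templates, and the field names below (`heartRate`, `bodyTemperature`) are invented for the example.

```javascript
// Illustrative only: a simplified stand-in for the MedTech service
// normalization stage. Real device mappings use JsonPath/JMESPath
// expressions in JSON templates, not JavaScript functions.
const deviceMessage = {
  deviceId: 'device01',
  heartRate: 78,
  bodyTemperature: 37.1,
  measuredAt: '2023-07-06T12:00:00Z'
};

// Each "template" binds one measurement type (typeName) to fields
// in the incoming payload.
const templates = [
  {
    typeName: 'heartrate',
    matches: (msg) => msg.heartRate !== undefined,
    extract: (msg) => ({ value: msg.heartRate, unit: 'count/min' })
  },
  {
    typeName: 'bodytemperature',
    matches: (msg) => msg.bodyTemperature !== undefined,
    extract: (msg) => ({ value: msg.bodyTemperature, unit: 'degC' })
  }
];

// Every message is evaluated against ALL templates, so one inbound
// message can fan out into multiple normalized measurements.
const normalized = templates
  .filter((t) => t.matches(deviceMessage))
  .map((t) => ({
    typeName: t.typeName,
    deviceId: deviceMessage.deviceId,
    occurrenceTimeUtc: deviceMessage.measuredAt,
    ...t.extract(deviceMessage)
  }));
```

Here one message yields two normalized objects, which downstream can become two separate FHIR Observations.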
healthcare-apis | Overview Of Fhir Destination Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-fhir-destination-mapping.md | CollectionFhir is the root template type used by the MedTech service FHIR destin CodeValueFhir is currently the only template supported in the FHIR destination mapping. It allows you to define codes, the effective period, and the value of the observation. Multiple value types are supported: [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept), [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity), and [String](https://www.hl7.org/fhir/datatypes.html#string). Along with these configurable values, the identifier for the Observation resource and linking to the proper Device and Patient resources are handled automatically. +> [!IMPORTANT] +> The **Resolution type** specifies how the MedTech service associates device data with Device resources and Patient resources. The MedTech service reads Device and Patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/r4/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/r4/patient-definitions.html#Patient.identifier). If an [encounter identifier](https://hl7.org/fhir/r4/encounter-definitions.html#Encounter.identifier) is specified and extracted from the device data payload, it's linked to the observation if an encounter exists on the FHIR service with that identifier. If the [encounter identifier](../../healthcare-apis/release-notes.md#medtech-service) is successfully normalized, but no FHIR Encounter exists with that encounter identifier, a **FhirResourceNotFound** exception is thrown. For more information on configuring the MedTech service **Resolution type**, see [Configure the Destination tab](deploy-manual-portal.md#configure-the-destination-tab).
+ |Element|Description|Required| |:|:-|:-| |**typeName**| The type of measurement this template should bind to. Note: There should be at least one device mapping template that has this same `typeName`. The `typeName` element is used to link a FHIR destination mapping template to one or more device mapping templates. Device mapping templates with the same `typeName` element generate normalized data that is evaluated with a FHIR destination mapping template that has the same `typeName`.|True| |
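The `typeName` linkage described above, where normalized data is routed to the FHIR destination template with the matching `typeName`, can be sketched as follows. This is a simplified illustration, not the real CodeValueFhir schema; the codes and field names are placeholders chosen for the example.

```javascript
// Illustrative only: routing a normalized measurement to the FHIR
// destination template whose typeName matches. Shapes are simplified
// placeholders, not the actual CodeValueFhir template schema.
const fhirTemplates = [
  { typeName: 'heartrate', code: '8867-4', unit: 'count/min' },
  { typeName: 'bodytemperature', code: '8310-5', unit: 'Cel' }
];

const normalizedMeasurement = {
  typeName: 'heartrate',
  deviceId: 'device01',
  value: 78
};

// Pick the destination template with the same typeName.
const destination = fhirTemplates.find(
  (t) => t.typeName === normalizedMeasurement.typeName
);

// Shape a minimal Observation-like object from the match.
const observation = destination && {
  resourceType: 'Observation',
  code: destination.code,
  valueQuantity: { value: normalizedMeasurement.value, unit: destination.unit }
};
```

If no destination template shares the `typeName`, the normalized data has nowhere to land, which is why the table above marks `typeName` as required.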
iot-hub | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md | Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
key-vault | Javascript Developer Guide Backup Delete Restore Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-backup-delete-restore-key.md | + + Title: Back up and restore keys with Azure Key Vault +description: Back up, delete, restore, and purge keys with Azure Key Vault and the client SDK. ++++++ Last updated : 07/06/2023++#Customer intent: As a JavaScript developer who is new to Azure, I want to back up and restore keys in the Key Vault with the SDK. +++# Back up, delete, and restore keys in Azure Key Vault with JavaScript ++Create the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) with the appropriate [programmatic authentication credentials](javascript-developer-guide-get-started.md#authorize-access-and-connect-to-key-vault), then use the client to back up, delete, restore, and purge a key in Azure Key Vault. ++## Back up, delete, purge, and restore a key ++Before deleting a key and its versions, back up the key and serialize the backup to a secure data store. Once the key is backed up, delete the key and all its versions. If the vault uses soft-delete, you can wait for the purge date to pass or purge the key manually. Once the key is purged, you can restore the key and all its versions from the backup. To restore the key before it's purged, you don't need the backup object; instead, you can recover the soft-deleted key and all its versions.
++```javascript +// Authenticate to Azure Key Vault +const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++// Create key +const keyName = `myKey-${Date.now()}`; +const key = await client.createRsaKey(keyName); +console.log(`${key.name} is created`); ++// Backup key and all versions (as Uint8Array) +const keyBackup = await client.backupKey(keyName); +console.log(`${key.name} is backed up`); ++// Delete key - wait until delete is complete +await (await client.beginDeleteKey(keyName)).pollUntilDone(); +console.log(`${key.name} is deleted`); ++// Purge soft-deleted key +await client.purgeDeletedKey(keyName); +console.log(`Soft-deleted key, ${key.name}, is purged`); ++if (keyBackup) { + // Restore key and all versions from the backup + const { name, key, properties } = await client.restoreKeyBackup(keyBackup); + console.log(`${name} is restored from backup, latest version is ${properties.version}`); + + // do something with key +} +``` ++## Next steps ++* [Encrypt and decrypt key with JavaScript SDK](javascript-developer-guide-encrypt-decrypt-key.md) |
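The backup blob returned by `backupKey` is a `Uint8Array`, and `restoreKeyBackup` expects the same bytes back. One way to serialize it for a text-based secure data store is a base64 round trip; this sketch shows only the serialization step, with literal bytes standing in for a real backup blob:

```javascript
// Illustrative only: serialize a key backup blob (Uint8Array) to a
// base64 string for storage, then deserialize it later. In real code
// keyBackup would come from client.backupKey(keyName).
const keyBackup = new Uint8Array([1, 2, 3, 250, 251, 252]);

// Serialize for a text-based secure store.
const serialized = Buffer.from(keyBackup).toString('base64');

// Later: deserialize back to the Uint8Array expected by restoreKeyBackup.
const restoredBytes = new Uint8Array(Buffer.from(serialized, 'base64'));
```

How and where you persist the serialized string (Blob Storage, a database, and so on) is up to your application; treat it as sensitive material.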
key-vault | Javascript Developer Guide Create Update Rotate Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-create-update-rotate-key.md | + + Title: Create, update, or rotate Azure Key Vault keys with JavaScript +description: Create, update properties of, or rotate keys with JavaScript. ++++++ Last updated : 07/06/2023++#Customer intent: As a JavaScript developer who is new to Azure, I want to create, update, or rotate a key in the Key Vault with the SDK. +++# Create, rotate, and update properties of a key in Azure Key Vault with JavaScript ++Create the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) with the appropriate [programmatic authentication credentials](javascript-developer-guide-get-started.md#authorize-access-and-connect-to-key-vault), then use the client to set, update, and rotate a key in Azure Key Vault. ++To rotate a key means to create a new version of the key and set that version as the latest version. The previous version isn't deleted, but it's no longer the active version. ++## Create a key with a rotation policy ++To create a key in Azure Key Vault, use the [createKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-createkey) method of the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) class. Set any properties with the optional [createKeyOptions](/javascript/api/%40azure/keyvault-keys/createkeyoptions) object. After the key is created, update the key with a rotation policy. ++A [KeyVaultKey](/javascript/api/@azure/keyvault-keys/keyvaultkey) is returned. Update the key using [updateKeyRotationPolicy](/javascript/api/@azure/keyvault-keys/keyclient) with a policy, which can include notification.
++Convenience create methods are available for the following key types, which set properties associated with that key type: ++* [createKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-createkey) +* [createEcKey](/javascript/api/@azure/keyvault-keys/keyclient#createeckey) +* [createOctKey](/javascript/api/@azure/keyvault-keys/keyclient#createoctkey) +* [createRsaKey](/javascript/api/@azure/keyvault-keys/keyclient#creatersakey) +++```javascript +// Azure client libraries +import { DefaultAzureCredential } from '@azure/identity'; +import { + CreateKeyOptions, + KeyClient, + KeyRotationPolicyProperties, + KnownKeyOperations, + KnownKeyTypes +} from '@azure/keyvault-keys'; ++// Day/time manipulation +import dayjs from 'dayjs'; +import duration from 'dayjs/plugin/duration'; +dayjs.extend(duration); ++// Authenticate to Azure Key Vault +const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++// Name of key +const keyName = `mykey-${Date.now().toString()}`; ++// Set key options +const keyOptions: CreateKeyOptions = { +enabled: true, +expiresOn: dayjs().add(1, 'year').toDate(), +exportable: false, +tags: { + project: 'test-project' +}, +keySize: 2048, +keyOps: [ + KnownKeyOperations.Encrypt, + KnownKeyOperations.Decrypt + // KnownKeyOperations.Verify, + // KnownKeyOperations.Sign, + // KnownKeyOperations.Import, + // KnownKeyOperations.WrapKey, + // KnownKeyOperations.UnwrapKey +] +}; ++// Set key type +const keyType = KnownKeyTypes.RSA; // 'EC', 'EC-HSM', 'RSA', 'RSA-HSM', 'oct', 'oct-HSM' ++// Create key +const key = await client.createKey(keyName, keyType, keyOptions); +if (key) { + // Set rotation policy properties: KeyRotationPolicyProperties + const rotationPolicyProperties: KeyRotationPolicyProperties = { + expiresIn: 'P90D', + lifetimeActions: [ + { + action: 'Rotate', + timeAfterCreate: 'P30D' + }, + { + action: 'Notify', + 
timeBeforeExpiry: dayjs.duration({ days: 7 }).toISOString() + } + ]}; + + // Set rotation policy: KeyRotationPolicy + const keyRotationPolicy = await client.updateKeyRotationPolicy( + key.name, + rotationPolicyProperties + ); + console.log(keyRotationPolicy); +} +``` ++## Manually rotate key ++When you need to rotate the key, use the [rotateKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-rotatekey) method. This creates a new version of the key and sets that version as the active version. ++```javascript +// Azure client libraries +import { DefaultAzureCredential } from '@azure/identity'; +import { + KeyClient +} from '@azure/keyvault-keys'; ++// Authenticate to Azure Key Vault +const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++// Get existing key +let key = await client.getKey(`MyKey`); +console.log(key); ++if(key?.name){ ++ // rotate key + key = await client.rotateKey(key.name); + console.log(key); +} +``` ++## Update key properties ++Update properties of the latest version of the key with the [updateKeyProperties](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-updatekeyproperties-1) or update a specific version of a key with [updateKeyProperties](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-updatekeyproperties). Any [UpdateKeyPropertiesOptions](/javascript/api/@azure/keyvault-keys/updatekeypropertiesoptions) properties not specified are left unchanged. This doesn't change the key value. 
++```javascript +// Azure client libraries +import { DefaultAzureCredential } from '@azure/identity'; +import { + KeyClient +} from '@azure/keyvault-keys'; ++// Authenticate to Azure Key Vault +const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++// Get existing key +const key = await client.getKey('MyKey'); ++if (key) { ++ // + const updateKeyPropertiesOptions = { + enabled: false, + // expiresOn, + // keyOps, + // notBefore, + // releasePolicy, + tags: { + ...key.properties.tags, subproject: 'Health and wellness' + } + } + + // update properties of latest version + await client.updateKeyProperties( + key.name, + updateKeyPropertiesOptions + ); + + // update properties of specific version + await client.updateKeyProperties( + key.name, + key?.properties?.version, + { + enabled: true + } + ); +} +``` ++## Update key value ++To update a key value, use the [rotateKey](#manually-rotate-key) method. Make sure to pass the new value with all the properties you want to keep from the current version of the key. Any current properties not set in additional calls to rotateKey will be lost. ++This generates a new version of a key. The returned [KeyVaultKey](/javascript/api/@azure/keyvault-keys/keyvaultkey) object includes the new version ID. ++## Next steps ++* [Get a key with JavaScript SDK](javascript-developer-guide-get-key.md) |
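The rotation policy values in the example above (`expiresIn: 'P90D'`, `timeAfterCreate: 'P30D'`) are ISO 8601 durations. A minimal sketch, not part of the SDK, of decoding the day-based form the policy uses:

```javascript
// Illustrative only: extract the day count from a simple ISO 8601
// duration such as 'P90D', as used by expiresIn and timeAfterCreate.
// Handles only the day-based form shown in the rotation policy above.
function durationDays(iso) {
  const match = /^P(\d+)D$/.exec(iso);
  if (!match) throw new Error(`unsupported duration: ${iso}`);
  return Number(match[1]);
}

// The example policy rotates 30 days after creation and expires at 90 days.
const expiresDays = durationDays('P90D');
const rotateDays = durationDays('P30D');
const daysBetweenRotationAndExpiry = expiresDays - rotateDays;
```

Libraries like dayjs (used in the example above) can produce these strings for you; the point here is only what the strings mean.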
key-vault | Javascript Developer Guide Enable Disable Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-enable-disable-key.md | + + Title: Enable and disable keys using Azure Key Vault keys with JavaScript +description: Enable and disable keys for cryptographic operations in JavaScript. ++++++ Last updated : 07/06/2023++#Customer intent: As a JavaScript developer who is new to Azure, I want to enable and disable cryptographic operations using a key in the Key Vault with the SDK. +++# Enable and disable a key in Azure Key Vault with JavaScript ++To enable a key for use with cryptographic operations in Azure Key Vault, use the [updateKeyProperties](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-updatekeyproperties) method of the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) class. ++## Enable a key ++To enable a key in Azure Key Vault, use the [updateKeyProperties](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-updatekeyproperties) method of the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) class. ++```javascript +const properties = await keyClient.updateKeyProperties( + keyName, + version, // optional, remove to update the latest version + { enabled: true } +); +``` ++Refer to the [update key properties](javascript-developer-guide-create-update-rotate-key.md#update-key-properties) example for the full code example. ++## Disable a new key ++To disable a key at creation, use the [createKey](javascript-developer-guide-create-update-rotate-key.md#create-a-key-with-a-rotation-policy) method and pass [createKeyOptions](/javascript/api/%40azure/keyvault-keys/createkeyoptions) with `enabled: false`.
++```javascript +const keyVaultKey = await keyClient.createKey(keyName, keyType, { enabled: false }); +``` ++## Disable an existing key ++To disable a key in Azure Key Vault, use the [updateKeyProperties](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-updatekeyproperties) method of the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) class. ++```javascript +const properties = await keyClient.updateKeyProperties( + keyName, + version, // optional, remove to update the latest version + { enabled: false } +); +``` ++Refer to the [update key properties](javascript-developer-guide-create-update-rotate-key.md#update-key-properties) example for the full code example. ++## Next steps ++* [List keys with JavaScript SDK](javascript-developer-guide-list-key-version.md) |
key-vault | Javascript Developer Guide Encrypt Decrypt Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-encrypt-decrypt-key.md | + + Title: Encrypt and decrypt using Azure Key Vault keys with JavaScript +description: Encrypt and decrypt data with keys in JavaScript. ++++++ Last updated : 07/06/2023++#Customer intent: As a JavaScript developer who is new to Azure, I want to encrypt and decrypt data using a key in the Key Vault with the SDK. +++# Encrypt and decrypt data using a key in Azure Key Vault with JavaScript ++Create the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) with the appropriate [programmatic authentication credentials](javascript-developer-guide-get-started.md#authorize-access-and-connect-to-key-vault), then create a [CryptographyClient](/javascript/api/@azure/keyvault-keys/cryptographyclient) and use it to encrypt and decrypt data with a key in Azure Key Vault. ++## Select an encryption algorithm ++To make the best use of the SDK and its provided enums and types, select your encryption algorithm before continuing to the next section. ++* RSA - Rivest–Shamir–Adleman +* AES GCM - Advanced Encryption Standard Galois Counter Mode +* AES CBC - Advanced Encryption Standard Cipher Block Chaining ++Use the [KnownEncryptionAlgorithms](/javascript/api/@azure/keyvault-keys/knownencryptionalgorithms) enum to select a specific algorithm. ++```javascript +import { + KnownEncryptionAlgorithms +} from '@azure/keyvault-keys'; ++const myAlgorithm = KnownEncryptionAlgorithms.RSAOaep256 +``` ++## Get encryption key ++[Create](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-createkey) or [get](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-getkey) your [KeyVaultKey](/javascript/api/@azure/keyvault-keys/keyvaultkey) encryption key from the Key Vault to use with encryption and decryption.
++## Encrypt and decrypt with a key ++Encryption requires one of the following parameter objects: ++* [RsaEncryptParameters](/javascript/api/@azure/keyvault-keys/rsaencryptparameters) +* [AesGcmEncryptParameters](/javascript/api/@azure/keyvault-keys/aesgcmdecryptparameters) +* [AesCbcEncryptParameters](/javascript/api/@azure/keyvault-keys/aescbcencryptparameters) ++All three parameter objects require the `algorithm` and the `plaintext` used to encrypt. An example of RSA encryption parameters is shown below. ++```javascript +import { DefaultAzureCredential } from '@azure/identity'; +import { + CryptographyClient, + KeyClient, + KnownEncryptionAlgorithms +} from '@azure/keyvault-keys'; ++// get service client using AZURE_KEYVAULT_NAME environment variable +const credential = new DefaultAzureCredential(); +const serviceClient = new KeyClient( +`https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, +credential +); ++// get existing key +const keyVaultKey = await serviceClient.getKey('myRsaKey'); ++if (keyVaultKey?.name) { ++ // get encryption client + const encryptClient = new CryptographyClient(keyVaultKey, credential); + + // set data to encrypt + const originalInfo = 'Hello World'; + + // set encryption algorithm + const algorithm = KnownEncryptionAlgorithms.RSAOaep256; + + // encrypt settings: RsaEncryptParameters | AesGcmEncryptParameters | AesCbcEncryptParameters + const encryptParams = { + algorithm, + plaintext: Buffer.from(originalInfo) + }; + + // encrypt + const encryptResult = await encryptClient.encrypt(encryptParams); + + // ... hand off encrypted result to another process + // ... other process needs to decrypt data ++ // decrypt settings: DecryptParameters + const decryptParams = { + algorithm, + ciphertext: encryptResult.result + }; + + // decrypt + const decryptResult = await encryptClient.decrypt(decryptParams); + console.log(decryptResult.result.toString()); +} +``` ++The **encryptParams** object sets the parameters for encryption. 
Use the following encrypt parameter objects to set properties. ++* [RsaEncryptParameters](/javascript/api/@azure/keyvault-keys/rsaencryptparameters) +* [AesGcmEncryptParameters](/javascript/api/@azure/keyvault-keys/aesgcmencryptparameters) +* [AesCbcEncryptParameters](/javascript/api/@azure/keyvault-keys/aescbcencryptparameters) ++The **decryptParams** object sets the parameters for decryption. Use the following decrypt parameter objects to set properties. ++* [RsaDecryptParameters](/javascript/api/@azure/keyvault-keys/rsadecryptparameters) +* [AesGcmDecryptParameters](/javascript/api/@azure/keyvault-keys/aesgcmdecryptparameters) +* [AesCbcDecryptParameters](/javascript/api/@azure/keyvault-keys/aescbcdecryptparameters) ++## Next steps ++* [Sign and verify with key with JavaScript SDK](javascript-developer-guide-sign-verify-key.md) |
key-vault | Javascript Developer Guide Get Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-get-key.md | + + Title: Get Azure Key Vault keys with JavaScript +description: Get the latest version or any version of a key with JavaScript. ++++++ Last updated : 07/06/2023++#Customer intent: As a JavaScript developer who is new to Azure, I want to get a key from the Key Vault with the SDK. +++# Get a key in Azure Key Vault with JavaScript ++Create the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) with the appropriate [programmatic authentication credentials](javascript-developer-guide-get-started.md#authorize-access-and-connect-to-key-vault), then use the client to get a key from Azure Key Vault. ++## Get key ++You can get the latest version of a key or a specific version of a key with the [getKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-getkey) method. The version is within the [properties](/javascript/api/@azure/keyvault-keys/keyproperties) of the [KeyVaultKey](/javascript/api/@azure/keyvault-keys/keyvaultkey) object.
++* Get latest version: `await client.getKey(name);` +* Get specific version: `await client.getKey(name, { version });` ++```javascript +// Azure client libraries +import { DefaultAzureCredential } from '@azure/identity'; +import { + KeyClient, +} from '@azure/keyvault-keys'; ++// Authenticate to Azure Key Vault +const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++const name = `myRsaKey`; ++// Get latest key +const latestKey = await client.getKey(name); +console.log(`${latestKey.name} version is ${latestKey.properties.version}`); ++// Get previous key by version id +const keyPreviousVersionId = '2f2ec6d43db64d66ad8ffa12489acc8b'; +const keyByVersion = await client.getKey(name, { + version: keyPreviousVersionId +}); +console.log(`Previous key version is ${keyByVersion.properties.version}`); +``` ++## Get all versions of a key ++To get all versions of a key in Azure Key Vault, use the [listPropertiesOfKeyVersions](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-listpropertiesofkeyversions) method of the KeyClient class to get an iterable list of the key's versions' properties. Each item is a [KeyProperties](/javascript/api/@azure/keyvault-keys/keyproperties) object, which doesn't include the version's value. If you want the version's value, use the version returned in the properties to get the key's value with the getKey method.
++|Method|Returns value| Returns properties| +|--|--|--| +|[getKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-getKey)|Yes|Yes| +|[listPropertiesOfKeyVersions](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-listpropertiesofkeyversions)|No|Yes| ++```javascript +// Azure client libraries +import { DefaultAzureCredential } from '@azure/identity'; +import { + KeyClient, +} from '@azure/keyvault-keys'; ++// Authenticate to Azure Key Vault +const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++const name = `myRsaKey`; ++for await (const keyProperties of client.listPropertiesOfKeyVersions(name)) { + const thisVersion = keyProperties.version; + + const { key } = await client.getKey(name, { + version: thisVersion + }); ++ // do something with version's key value +} +``` ++## Get disabled key ++Use the following table to understand what you can do with a disabled key. ++|Allowed|Not allowed| +|--|--| +|Enable key<br>Update properties|Get value| +++## Next steps ++* [Enabled and disable key from JavaScript SDK](javascript-developer-guide-enable-disable-key.md) |
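`listPropertiesOfKeyVersions` returns an async iterator, which is why the example above consumes it with `for await`. The same consumption pattern is shown below against a mock iterator so it runs without a vault; the property shape is a simplified illustration.

```javascript
// Illustrative only: a mock async generator standing in for
// client.listPropertiesOfKeyVersions(name), so the `for await`
// consumption pattern can be shown without a live Key Vault.
async function* mockListPropertiesOfKeyVersions() {
  yield { name: 'myRsaKey', version: 'v1', enabled: false };
  yield { name: 'myRsaKey', version: 'v2', enabled: true };
}

async function collectVersions() {
  const versions = [];
  for await (const keyProperties of mockListPropertiesOfKeyVersions()) {
    // Only properties arrive here; fetch a version's key material
    // separately with getKey(name, { version: keyProperties.version }).
    versions.push(keyProperties.version);
  }
  return versions;
}

collectVersions().then((versions) => console.log(versions));
```

The real client yields `KeyProperties` objects the same way, one per version, without the key material.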
key-vault | Javascript Developer Guide Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-get-started.md | + + Title: Getting started with Azure Key Vault key in JavaScript +description: Set up your environment, install npm packages, and authenticate to Azure to get started using Key Vault keys in JavaScript ++++++ Last updated : 07/06/2023++#Customer intent: As a JavaScript developer who is new to Azure, I want to know the high level steps necessary to use Key Vault keys in JavaScript. ++# Get started with Azure Key Vault keys in JavaScript + +This article shows you how to connect to Azure Key Vault by using the Azure Key Vault keys client library for JavaScript. Once connected, your code can operate on keys in the vault. ++[API reference](/javascript/api/overview/azure/keyvault-keys-readme) | [Package (npm)](https://www.npmjs.com/package/@azure/keyvault-keys) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/keyvault/keyvault-keys) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/keyvault/keyvault-keys/samples/v4) | [Give feedback](https://github.com/Azure/azure-sdk-for-js/issues) + +## Prerequisites + +- An Azure subscription - [create one for free](https://azure.microsoft.com/free). +- [Azure Key Vault](../general/quick-create-cli.md) instance. Review [the access policies](../general/assign-access-policy.md) on your Key Vault to include the permissions necessary for the specific tasks performed in code. +- [Node.js version LTS](https://nodejs.org/) ++## Set up your project ++1. Open a command prompt and change into your project folder. Change `YOUR-DIRECTORY` to your folder name: ++ ```bash + cd YOUR-DIRECTORY + ``` ++1. If you don't have a `package.json` file already in your directory, initialize the project to create the file: ++ ```bash + npm init -y + ``` ++1. 
Install the Azure Key Vault keys client library for JavaScript: ++ ```bash + npm install @azure/keyvault-keys + ``` ++1. To use passwordless connections with Azure AD, install the Azure Identity client library for JavaScript: ++ ```bash + npm install @azure/identity + ``` ++## Authorize access and connect to Key Vault ++Azure Active Directory (Azure AD) provides the most secure connection by managing the connection identity ([**managed identity**](../../active-directory/managed-identities-azure-resources/overview.md)). This **passwordless** functionality allows you to develop an application that doesn't require any secrets stored in the code. ++Before programmatically authenticating to Azure to use Azure Key Vault keys, make sure you set up your environment. +++#### [Developer authentication](#tab/developer-auth) +++#### [Production authentication](#tab/production-auth) ++Use the [DefaultAzureCredential](https://www.npmjs.com/package/@azure/identity#DefaultAzureCredential) in production; it selects the appropriate credential mechanism for the hosting environment. ++++## Build your application ++As you build your application, your code interacts with two types of resources: ++- [**KeyVaultKey**](/javascript/api/@azure/keyvault-keys/keyvaultkey), which includes: + - ID, name, and value. + - Allowed operations. + - Type, such as `EC`, `EC-HSM`, `RSA`, `RSA-HSM`, `oct`, `oct-HSM`. + - Properties, as a KeyProperties object. +- [**KeyProperties**](/javascript/api/@azure/keyvault-keys/keyproperties), which includes the key's metadata, such as its name, version, tags, expiration date, and whether it's enabled.
++If you need the value of the KeyVaultKey, use methods that return the [KeyVaultKey](/javascript/api/@azure/keyvault-keys/keyvaultkey): ++* [getKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-getkey) ++## Object model ++The Azure Key Vault keys client library for JavaScript includes the following clients: ++* [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient): The KeyClient object is the top object in the SDK. This client allows you to perform key management tasks such as creating, rotating, deleting, and listing keys. +* [CryptographyClient](/javascript/api/@azure/keyvault-keys/cryptographyclient): This client uses a key to perform cryptographic operations such as encrypting, decrypting, signing, and verifying data, and wrapping and unwrapping keys. +++## Create a KeyClient object ++Once your local environment and Key Vault authorization are set up, create a JavaScript file, which includes the [@azure/identity](https://www.npmjs.com/package/@azure/identity) and the [@azure/keyvault-keys](https://www.npmjs.com/package/@azure/keyvault-keys) packages. Create a credential, such as the [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential), to implement passwordless connections to your vault. Use that credential to authenticate with a [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) object. ++```javascript +// Include required dependencies +import { DefaultAzureCredential } from '@azure/identity'; +import { KeyClient } from '@azure/keyvault-keys'; ++// Authenticate to Azure +// Create KeyClient +const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential + ); ++// Get key +const key = await client.getKey("MyKeyName"); +``` ++## Create a CryptographyClient object ++The CryptographyClient object is the operational object in the SDK. It uses your key to perform cryptographic actions such as encrypting, decrypting, signing, verifying, wrapping, and unwrapping.
++Use the same credential you created for your KeyClient, along with the retrieved key, to create a [CryptographyClient](/javascript/api/@azure/keyvault-keys/cryptographyclient) to perform operations. ++```javascript +// Include required dependencies +import { DefaultAzureCredential } from '@azure/identity'; +import { + CryptographyClient, + KeyClient, + KnownEncryptionAlgorithms +} from '@azure/keyvault-keys'; ++// Authenticate to Azure +// Create KeyClient +const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential + ); ++// Get key +const key = await client.getKey("MyKeyName"); ++if (key?.name) { ++ // Create cryptography client from the key + const encryptClient = new CryptographyClient(key, credential); ++ // Encrypt data + const encryptParams = { + algorithm: KnownEncryptionAlgorithms.RSAOaep256, + plaintext: Buffer.from("Hello world!") + }; + const encryptResult = await encryptClient.encrypt(encryptParams); +} +``` ++## See also ++- [Package (npm)](https://www.npmjs.com/package/@azure/keyvault-keys) +- [Samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/keyvault/keyvault-keys/samples/v4) +- [API reference](/javascript/api/overview/azure/keyvault-keys-readme) +- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/keyvault/keyvault-keys) +- [Give feedback](https://github.com/Azure/azure-sdk-for-js/issues) ++## Next steps ++* [Create a key](javascript-developer-guide-create-update-rotate-key.md) |
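The client constructors in this article assume the `AZURE_KEYVAULT_NAME` environment variable is set. A small, hypothetical guard that fails fast with a clear message can make that assumption explicit (the validation pattern is an assumption of this sketch; Key Vault names use letters, digits, and hyphens):

```javascript
// Hypothetical helper: build the vault URL from a Key Vault name, failing
// fast when the environment variable is missing or malformed.
function buildVaultUrl(vaultName) {
  if (!vaultName || !/^[A-Za-z0-9-]+$/.test(vaultName)) {
    throw new Error('Set AZURE_KEYVAULT_NAME to the name of your Key Vault');
  }
  return `https://${vaultName}.vault.azure.net`;
}

// Usage sketch, assuming the packages from this article are installed:
// const client = new KeyClient(buildVaultUrl(process.env.AZURE_KEYVAULT_NAME), new DefaultAzureCredential());
```

Failing at startup with a descriptive error is easier to diagnose than the authentication error the SDK would raise later against a URL like `https://undefined.vault.azure.net`.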
key-vault | Javascript Developer Guide Import Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-import-key.md | + + Title: Import keys using Azure Key Vault keys with JavaScript +description: Import keys using Azure Key Vault keys with JavaScript. ++++++ Last updated : 07/06/2023++#Customer intent: As a JavaScript developer who is new to Azure, I want to import a key to the Key Vault with the SDK. +++# Import keys in Azure Key Vault with JavaScript ++Create the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) with the appropriate [programmatic authentication credentials](javascript-developer-guide-get-started.md#authorize-access-and-connect-to-key-vault). ++## Import a key ++A best practice is to allow Key Vault to generate your keys. If you need to migrate a key to Key Vault, the key needs to be in the JWK format with any Base64 values converted to UInt8Array values. ++The JSON Web Key (JWK), represented in the SDK as a [JsonWebKey](/javascript/api/@azure/keyvault-keys/jsonwebkey) object, contains a well-known public key, which can be used to validate the signature of a signed JWT. 
++```javascript +// Azure client libraries +import { DefaultAzureCredential } from '@azure/identity'; +import { + KeyClient +} from '@azure/keyvault-keys'; ++// Authenticate to Azure Key Vault +const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++const keyName = `MyImportedKey`; ++// Key must be in Jwk format +const keyInJWKFormat = +{ + "p": Buffer.from("21DSdJucATc4p6OyajUDAunbanGfY0TjeELnDUfCrqFElWOX0lSw4Hy52eJkkvahqk6sUrZUa82QhJn607RPQLLIU08OUhbgvLkvhLeQYT38Tzshoefn6IoQypOk0Gn0pQ00A-nVbb7kFLx8PgT-6QA46llOqF5FZ395NjzW3V8", "base64"), + "kty": "RSA", + "q": Buffer.from("z1VpOnSDbcQWjuLzjDEyLWKCxskd00r2bzvYtyS593c4qD1KrguiO-3HWLtMIz5vv3861892IT6XvLJYZLR5inoXfEIpKY-0DSLC5vXtbyvbJoHm72ONJpZRP-6iVHsyNrIm6ZjKi4xKip8fulGcwXwSHA4NgC5X9cwKcAmPxo0", "base64"), + "d": Buffer.from("OcfB9Yv8qgB3p4LHK0pBYl1B3zhM80mq9_lk-6dewc9UZtNaWhc8j6H3IFFT2CdSFobywV87YUXcOpawEVcKCuXaXy5N2aO9qa-xz5yQYacV3T3DALgAyLPwW0AqN0l2neRPTmu38PqRl7_s1-7Y4XYmx8Cn1mELXNw_MURBRtA7DY-qLd_31OdxR37NUYfWmMWCC37DzMDXuoaWIOIPnZ0QUW2MTt4YXMOYD22dZWV5JFtrFPCb19E2FjlgT4oS4N0AUFldVq73fx8igXNAzq3dDSudg3q8eNWxsO9OCkw38rYgK2A5Fw4Km324JaPuZfuN8SlrMo5A_VXKRobp2Q", "base64"), + "e": Buffer.from("AQAB", "base64"), + "use": "enc", + "qi": Buffer.from("HfdGlVI9nKucgkHj9qQJJpwQG8a0DWiZQ8BwnHjgUwQCN7d85Vzc7gr-bidpg3NRRo1yVeeS7NO0wFpYMVUCoeh8Q6UdhhFz_C8gzzWHETPeJ6vV-3oKMaVXweFU16hwCrUI-rOTuoYTkARnNr-ZNjsgTYMbLVtJOgO8wF402rI", "base64"), + "dp": Buffer.from("yulVPjP2u5022st2uBMCDUEHE826VSMYfl0P3talBeMJTFpPznczCw_6998hhGORobuWbhRpuTAA5N5-Fj8-EDMZaxK6wjKOja2cjGM1vvKVrUydSmoAw8Jx1KuTkoxloAu-M1y2bgpuhcz5-nuuyS6-efxU7SwDdMWZBRh3B2s", "base64"), + "alg": Buffer.from("RSA1_5", "base64"), + "dq": Buffer.from("FqRYMocI11Ljt8T3Hec9eJFagMTz2eBE207o0s9S88B0UoMnBazFkc_cxkbmAK9P2tTVIz5Hw0enoHbFinHfGA1PRUWgYyaLXifeqwROYqaibykehCQWBRHDW7z-w0UU7b4026vQ6r5uYYcRGvLQsJyRCblLJiVpe7FFroiMx_0", "base64"), + "n": 
Buffer.from("sZ-GKGT9icg6-74JLMuRoRiPMJ9r0MSG8T8XAg7ANx46EqhX3kzoUYqFrV2tSD4VqSVlgg8pyDm0bTZeT8t-ScCWsIz8snWAqNmIOSOOSURO33c0_1Pe0XQSGTL96oBv6E6kqdSVSuypcAqfTB2Ms8XukCl-taUGFkId918fV4cDvBWdekaf1DbmG3D05vjfqNG-ZXYnJlgRG4Soz5RrNEWkftcdWcj8Jg7kDCYKXCcYJbyaT13vdW7A10_gY6AgmZT0Y2DJeb8qyhMT_WPnXz8fURbE8U2-fLcKXD-RFUJcHOYftcKM9dF-8UUNI_64kegynTJNdjaLv89LsKBnUw", "base64"), +}; ++const key = await client.importKey(keyName, keyInJWKFormat); +console.log(key?.name); +``` ++Learn more about JWK ++* [JWT](https://jwt.io/introduction/) +* [Create JWK](https://mkjwk.org/) +++## Next steps ++* [Backup, delete, and restore a key](javascript-developer-guide-backup-delete-restore-key.md) |
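The import example above requires the JWK's Base64-encoded string fields to be converted to `Buffer`/`UInt8Array` values. A minimal sketch of that conversion, assuming a standard JWK whose fields are base64url-encoded strings and Node.js 15.7+ for `base64url` support (the `jwkStringsToBuffers` helper is hypothetical, not part of the SDK):

```javascript
// Hypothetical helper (not part of @azure/keyvault-keys): convert the
// base64url-encoded string fields of a standard JWK into the Buffer values
// that the SDK's JsonWebKey shape expects.
function jwkStringsToBuffers(jwk, fields = ['n', 'e', 'd', 'p', 'q', 'dp', 'dq', 'qi']) {
  const converted = { ...jwk };
  for (const field of fields) {
    if (typeof converted[field] === 'string') {
      // Node.js 15.7+ understands the 'base64url' encoding directly
      converted[field] = Buffer.from(converted[field], 'base64url');
    }
  }
  return converted;
}

// Usage sketch, assuming `client` is an authenticated KeyClient and
// `externalJwk` came from an external tool:
// const key = await client.importKey(keyName, jwkStringsToBuffers(externalJwk));
```

Non-numeric JWK fields such as `kty` pass through unchanged.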
key-vault | Javascript Developer Guide List Key Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-list-key-version.md | + + Title: List keys using Azure Key Vault keys with JavaScript +description: List keys in JavaScript. ++++++ Last updated : 07/06/2023++#Customer intent: As a JavaScript developer who is new to Azure, I want to list keys to the Key Vault with the SDK. +++# List keys and versions in Azure Key Vault with JavaScript ++Create the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) with the appropriate [programmatic authentication credentials](javascript-developer-guide-get-started.md#authorize-access-and-connect-to-key-vault). ++## List all keys ++List current version of all keys with the iterable [listPropertiesOfKeys](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-listpropertiesofkeys). ++```javascript +import { KeyClient, CreateKeyOptions, KeyVaultKey } from '@azure/keyvault-keys'; +import { DefaultAzureCredential } from '@azure/identity'; ++const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++// Get latest version of (not soft-deleted) keys +for await (const keyProperties of client.listPropertiesOfKeys()) { + console.log(keyProperties.version); +} +``` ++The returned [KeyProperties](/javascript/api/@azure/keyvault-keys/keyproperties) object includes the key version. ++## List all keys by page ++To list all keys in Azure Key Vault, use the [listPropertiesOfKeys](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-listpropertiesofkeys) method to get secret properties a page at a time by setting the [PageSettings](/javascript/api/@azure/core-paging/pagesettings) object. 
++```javascript +import { KeyClient } from '@azure/keyvault-keys'; +import { DefaultAzureCredential } from '@azure/identity'; ++const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++let page = 1; +const maxPageSize = 5; ++// Get latest version of not-deleted keys +for await (const keyProperties of client.listPropertiesOfKeys().byPage({maxPageSize})) { + console.log(`Page ${page++}`); + + for (const props of keyProperties) { + console.log(`${props.name}`); + } +} +``` ++The returned [KeyProperties](/javascript/api/@azure/keyvault-keys/keyproperties) object includes the key version. ++## List all versions of a key ++To list all versions of a key in Azure Key Vault, use the [listPropertiesOfKeyVersions](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-listpropertiesofkeyversions) method. ++```javascript +import { KeyClient } from '@azure/keyvault-keys'; +import { DefaultAzureCredential } from '@azure/identity'; ++const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++const keyName = 'myRsaKey'; ++// Get all versions of key +for await (const versionProperties of client.listPropertiesOfKeyVersions( + keyName +)) { + console.log(`\tversion: ${versionProperties.version} created on ${versionProperties.createdOn}`); +} +``` ++The returned [KeyProperties](/javascript/api/@azure/keyvault-keys/keyproperties) object includes the key version. ++Refer to the [List all keys by page](#list-all-keys-by-page) example to see how to page through the results. ++## List deleted keys ++To list all deleted keys in Azure Key Vault, use the [listDeletedKeys](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-listdeletedkeys) method.
++```javascript +import { KeyClient } from '@azure/keyvault-keys'; +import { DefaultAzureCredential } from '@azure/identity'; ++const credential = new DefaultAzureCredential(); +const client = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++for await (const deletedKey of client.listDeletedKeys()) { + console.log( + `Deleted: ${deletedKey.name} deleted on ${deletedKey.properties.deletedOn}, to be purged on ${deletedKey.properties.scheduledPurgeDate}` + ); +} +``` ++The deletedKey object is a [DeletedKey](/javascript/api/@azure/keyvault-keys/deletedkey) object, which includes the KeyProperties object plus deletion-specific properties such as: ++* `deletedOn` - The time when the key was deleted. +* `scheduledPurgeDate` - The date when the key is scheduled to be purged. After a key is purged, it can't be [recovered](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-beginrecoverdeletedkey). If you [backed up the key](javascript-developer-guide-backup-delete-restore-key.md), you can restore it with the same name and all its versions. ++Refer to the [List all keys by page](#list-all-keys-by-page) example to see how to page through the results. ++## Next steps ++* [Import key with JavaScript SDK](javascript-developer-guide-import-key.md) |
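The SDK's list iterators support paging natively through `byPage`. Conceptually, paging groups an async stream of items into fixed-size batches; an illustrative, SDK-free sketch of that grouping (the `toPages` helper is hypothetical, not part of the SDK):

```javascript
// Illustrative helper (not part of @azure/keyvault-keys): group items from
// any async iterable into pages of at most maxPageSize items, mirroring
// what byPage({ maxPageSize }) does for the client's list methods.
async function toPages(asyncIterable, maxPageSize) {
  const pages = [];
  let page = [];
  for await (const item of asyncIterable) {
    page.push(item);
    if (page.length === maxPageSize) {
      pages.push(page);
      page = [];
    }
  }
  if (page.length > 0) pages.push(page);
  return pages;
}

// Hedged SDK usage, assuming `client` is an authenticated KeyClient:
// for await (const page of client.listDeletedKeys().byPage({ maxPageSize: 5 })) {
//   for (const deletedKey of page) { console.log(deletedKey.name); }
// }
```

In real code, prefer the built-in `byPage` over collecting pages manually, since it maps each page to a single service request.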
key-vault | Javascript Developer Guide Sign Verify Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-sign-verify-key.md | + + Title: Sign and verify using Azure Key Vault keys with JavaScript +description: Sign and verify data with keys in JavaScript. ++++++ Last updated : 07/06/2023++#Customer intent: As a JavaScript developer who is new to Azure, I want to sign and verify data using a key to the Key Vault with the SDK. +++# Sign and verify data using a key in Azure Key Vault with JavaScript ++Create the [KeyClient](/javascript/api/@azure/keyvault-keys/keyclient) with the appropriate [programmatic authentication credentials](javascript-developer-guide-get-started.md#authorize-access-and-connect-to-key-vault), then create a [CryptographyClient](/javascript/api/@azure/keyvault-keys/cryptographyclient) use the client to set, update, and rotate a key in Azure Key Vault. ++## Signing data ++A few suggestions for signing data: ++* Hash large data before signing +* Hash one-way data before signing such as passwords +* Small 2-way data can be signed directly +++## Sign and verify large or one-way data with key ++To sign and verify your digested message, use the following methods: ++For digested messages: +* [sign](/javascript/api/@azure/keyvault-keys/cryptographyclient#@azure-keyvault-keys-cryptographyclient-sign) to sign the digest of a message. This is useful for large data or one-way data such as passwords. +* [verify](/javascript/api/@azure/keyvault-keys/cryptographyclient#@azure-keyvault-keys-cryptographyclient-verify) to verify the digest of a message. 
++```javascript +import { createHash } from "crypto"; +import { DefaultAzureCredential } from '@azure/identity'; +import { + CryptographyClient, + KeyClient, + KnownSignatureAlgorithms +} from '@azure/keyvault-keys'; ++// get service client +const credential = new DefaultAzureCredential(); +const serviceClient = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++// get existing key +const keyVaultKey = await serviceClient.getKey('MyRsaKey'); ++if (keyVaultKey?.name) { ++ // get encryption client with key + const cryptoClient = new CryptographyClient(keyVaultKey, credential); + + // get digest + const digestableData = "MyLargeOrOneWayData"; + const digest = createHash('sha256') + .update(digestableData) + .update(process.env.SYSTEM_SALT || '') + .digest(); + + // sign digest + const { result: signature } = await cryptoClient.sign(KnownSignatureAlgorithms.RS256, digest); + + // store signed digest in database ++ // verify signature + const { result: verified } = await cryptoClient.verify(KnownSignatureAlgorithms.RS256, digest, signature); + console.log(`Verification ${verified ? 'succeeded' : 'failed'}.`); +} +``` ++## Sign and verify small data with key ++To sign and verify your data, use the following methods: ++For data: +* [signData](/javascript/api/@azure/keyvault-keys/cryptographyclient#@azure-keyvault-keys-cryptographyclient-signdata) to sign a block of data. +* [verifyData](/javascript/api/@azure/keyvault-keys/cryptographyclient#@azure-keyvault-keys-cryptographyclient-verifydata) to verify data. 
++```javascript +import { createHash } from "crypto"; +import { DefaultAzureCredential } from '@azure/identity'; +import { + CryptographyClient, + KeyClient, + KnownSignatureAlgorithms +} from '@azure/keyvault-keys'; ++// get service client +const credential = new DefaultAzureCredential(); +const serviceClient = new KeyClient( + `https://${process.env.AZURE_KEYVAULT_NAME}.vault.azure.net`, + credential +); ++// get existing key +const keyVaultKey = await serviceClient.getKey('MyRsaKey'); ++if (keyVaultKey?.name) { ++ // get encryption client with key + const cryptoClient = new CryptographyClient(keyVaultKey, credential); + + const data = 'Hello you bright big beautiful world!'; + + // sign + const { result: signature } = await cryptoClient.signData( + KnownSignatureAlgorithms.RS256, + Buffer.from(data, 'utf8') + ); + + // verify signature + const { result: verified } = await cryptoClient.verifyData( + KnownSignatureAlgorithms.RS256, + Buffer.from(data, 'utf8'), + signature + ); + console.log(`Verification ${verified ? 'succeeded' : 'failed'}.`); +} +``` ++## Next steps ++* [Key types, algorithms, and operations](about-keys-details.md) |
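The digest-signing examples above hash large data with SHA-256 before calling `sign` with RS256. That hashing step can be isolated into a small helper; the optional salt mirrors the `SYSTEM_SALT` environment variable used in the examples (an assumption of these samples, not an SDK requirement):

```javascript
import { createHash } from 'crypto';

// Produce the 32-byte SHA-256 digest that the RS256 sign() example expects.
// The optional salt mirrors the SYSTEM_SALT variable used in this article.
function sha256Digest(data, salt = '') {
  return createHash('sha256').update(data).update(salt).digest();
}

// Usage sketch, assuming `cryptoClient` is a CryptographyClient as above:
// const digest = sha256Digest('MyLargeOrOneWayData', process.env.SYSTEM_SALT || '');
// const { result: signature } = await cryptoClient.sign(KnownSignatureAlgorithms.RS256, digest);
```

Because `sign` operates on a fixed-size digest, this keeps the payload sent to Key Vault small regardless of how large the original data is.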
key-vault | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md | Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
lab-services | Add Lab Creator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/add-lab-creator.md | Title: Add a user as a lab creator in Azure Lab Services -description: This article shows how to add a user to the Lab Creator role for a lab plan in Azure Lab Services. The lab creators can create labs within this lab plan. + Title: Assign a lab creator ++description: This article shows how to add a user to the Lab Creator role for a lab plan in Azure Lab Services. Lab creators can create labs within the lab plan. ++++ Previously updated : 11/19/2021 Last updated : 07/04/2023 -This article shows you how to add users as lab creators to a lab account or lab plan in Azure Lab Services. These users then can create labs and manage those labs. +This article describes how to add users as lab creators to a lab account or lab plan in Azure Lab Services. Users with the Lab Creator role can create labs and manage labs for the lab account or lab plan. ## Prerequisites -- To add lab creators to a lab plan, your Azure account needs to have the [Owner](./concept-lab-services-role-based-access-control.md#owner-role) Azure RBAC role assigned on the resource group. Learn more about the [Azure Lab Services built-in roles](./reliability-in-azure-lab-services.md).+- To add lab creators to a lab plan, your Azure account needs to have the [Owner](./concept-lab-services-role-based-access-control.md#owner-role) Azure RBAC role assigned on the resource group. Learn more about the [Azure Lab Services built-in roles](./concept-lab-services-role-based-access-control.md). ## Add Azure AD user account to Lab Creator role If you're using a lab account, assign the Lab Creator role on the lab account. ## Add a guest user as a lab creator -You might need to add an external user as a lab creator. If that is the case, you'll need to add them as a guest account on the Azure AD attached to the subscription. 
The following types of email accounts might be used: +If you need to add an external user as a lab creator, you need to add the external user as a guest account in the Azure Active Directory that is linked to your Azure subscription. -- A Microsoft email account, such as `@outlook.com`, `@hotmail.com`, `@msn.com`, or `@live.com`.-- A non-Microsoft email account, such as one provided by Yahoo or Google. However, these types of accounts must be linked with a Microsoft account.-- A GitHub account. This account must be linked with a Microsoft account.+The following types of email accounts can be used: -For instructions to add someone as a guest account in Azure AD, see [Quickstart: Add guest users in the Azure portal - Azure AD](../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md). If using an email account that's provided by your university's Azure AD, you don't have to add them as a guest account. +- A Microsoft-domain email account, such as *outlook.com*, *hotmail.com*, *msn.com*, or *live.com*. +- A non-Microsoft email account, such as one provided by Yahoo! or Google. The user needs to [link the account with a Microsoft account](./how-to-manage-labs.md#use-a-non-organizational-account-as-a-lab-creator). +- A GitHub account. The user needs to [link the account with a Microsoft account](./how-to-manage-labs.md#use-a-non-organizational-account-as-a-lab-creator). -Once the user has an Azure AD account, [add the Azure AD user account to Lab Creator role](#add-azure-ad-user-account-to-lab-creator-role). +To add a guest user as a lab creator: -> [!IMPORTANT] -> Only lab creators need an account in Azure AD connected to the subscription. For account requirements for students see [Tutorial: Access a lab in Azure Lab Services](tutorial-connect-lab-virtual-machine.md). --### Using a non-Microsoft email account --Educators can use non-Microsoft email accounts to register and sign in to a lab.
However, the sign-in to the Lab Services portal requires that educators first create a Microsoft account that's linked to their non-Microsoft email address. --Many educators might already have a Microsoft account linked to their non-Microsoft email addresses. For example, educators already have a Microsoft account if they have used their email address with Microsoft's other products or services, such as Office, Skype, OneDrive, or Windows. --When educators sign in to the Lab Services portal, they are prompted for their email address and password. If the educator attempts to sign in with a non-Microsoft account that does not have a Microsoft account linked, the educator will receive the following error message: -- +1. Follow these steps to [add guest users to Azure Active Directory](/azure/active-directory/external-identities/b2b-quickstart-add-guest-users-portal). + If using an email account that's provided by your university's Azure AD, you don't have to add them as a guest account. -To sign up for a Microsoft account, educators should go to [https://signup.live.com](https://signup.live.com). -### Using a GitHub Account +1. Follow these steps to [assign the Lab Creator role to the Azure AD user account](#add-azure-ad-user-account-to-lab-creator-role). -Educators can also use an existing GitHub account to register and sign in to a lab. If the educator already has a Microsoft account linked to their GitHub account, then they can sign in and provide their password as shown in the previous section. If they have not yet linked their GitHub account to a Microsoft account, they should select **Sign-in options**: -- --On the **Sign-in options** page, select **Sign in with GitHub**. -- --Finally, they are prompted to create a Microsoft account that's linked to their GitHub account. It happens automatically when the educator selects **Next**. The educator is then immediately signed in and connected to the lab.
+> [!IMPORTANT] +> Only lab creators need an account in Azure AD connected to the Azure subscription. For account requirements for lab users see [Access a lab in Azure Lab Services](./how-to-access-lab-virtual-machine.md). ## Next steps See the following articles: - [As a lab owner, create and manage labs](how-to-manage-labs.md) - [As a lab owner, set up and publish templates](how-to-create-manage-template.md)-- [As a lab owner, configure and control usage of a lab](how-to-configure-student-usage.md)+- [As a lab owner, configure and control usage of a lab](how-to-manage-lab-users.md) - [As a lab user, access labs](how-to-use-lab.md) |
lab-services | Class Type Arcgis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-arcgis.md | The steps in this section show how to set up the template VM: 3. Set up external backup storage for students. Students can save files directly to their assigned VM since all changes that they make are saved across sessions. However, we recommend that students back up their work to storage that is external from their VM for a few reasons: - To enable students to access their work after the class and lab ends. - - In case the student gets their VM into a bad state and their image needs to be [reset](how-to-manage-vm-pool.md#reset-vms). + - In case the student gets their VM into a bad state and their image needs to be [reset](how-to-manage-vm-pool.md#reset-lab-vms). With ArcGIS, each student should back up the following files at the end of each work session: |
lab-services | Classroom Labs Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-concepts.md | To create labs in Azure Lab Services, your Azure account needs to have the Lab C You use the Azure Lab Services website (https://labs.azure.com) to create labs for a lab plan. Alternately, you can also [configure Microsoft Teams integration](./how-to-configure-teams-for-lab-plans.md) or [Canvas integration](./how-to-configure-canvas-for-lab-plans.md) with Azure Lab Services to create labs directly in Microsoft Teams or Canvas. -By default, access to lab virtual machines is restricted. For a lab, you can [configure the list of lab users](./how-to-configure-student-usage.md) that have access to the lab. +By default, access to lab virtual machines is restricted. For a lab, you can [configure the list of lab users](./how-to-manage-lab-users.md) that have access to the lab. Get started by [creating a lab using the Azure portal](quick-create-connect-lab.md). |
lab-services | Classroom Labs Fundamentals 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals-1.md | Each lab is isolated by its own virtual network. If the lab has a [peered virtu Lab Services handles the studentΓÇÖs ability to perform actions like start and stop on their virtual machines. It also controls access to their VM connection information. -Lab Services also handles the registration of students to the service. There are currently two different access settings: restricted and nonrestricted. For more information, see the [manage lab users](how-to-configure-student-usage.md#send-invitations-to-users) article. Restricted access means Lab Services verifies that the students are added as user before allowing access. Nonrestricted means any user can register as long as they have the registration link and there's capacity in the lab. Nonrestricted can be useful for hackathon events. +Lab Services also handles the registration of students to the service. There are currently two different access settings: restricted and nonrestricted. For more information, see the [manage lab users](how-to-manage-lab-users.md#send-invitations-to-users) article. Restricted access means Lab Services verifies that the students are added as user before allowing access. Nonrestricted means any user can register as long as they have the registration link and there's capacity in the lab. Nonrestricted can be useful for hackathon events. Student VMs that are hosted in the lab have a username and password set by the creator of the lab. Alternately, the creator of the lab can allow registered students to choose their own password on first sign-in. |
lab-services | Classroom Labs Fundamentals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals.md | Azure Lab Services manages access to lab virtual machines at different levels: - Start or stop a lab VM. Azure Lab Services grants lab users permission to perform such actions on their own virtual machines. The service also controls access to the lab virtual machine connection information. -- Register for a lab. Azure Lab Services offers two different access settings: restricted and nonrestricted. *Restricted access* means that Azure Lab Services verifies that lab users are added to the lab before allowing access. *Nonrestricted access* means that any user can register for a lab by using the lab registration link, if there's capacity in the lab. Nonrestricted access can be useful for hackathon events. For more information, see the [manage lab users](how-to-configure-student-usage.md#send-invitations-to-users) article.+- Register for a lab. Azure Lab Services offers two different access settings: restricted and nonrestricted. *Restricted access* means that Azure Lab Services verifies that lab users are added to the lab before allowing access. *Nonrestricted access* means that any user can register for a lab by using the lab registration link, if there's capacity in the lab. Nonrestricted access can be useful for hackathon events. For more information, see the [manage lab users](how-to-manage-lab-users.md#send-invitations-to-users) article. - Virtual machine credentials. Lab virtual machines that are hosted in the lab have a username and password set by the creator of the lab. Alternately, the creator of the lab can allow registered users to choose their own password on first sign-in. |
lab-services | Classroom Labs Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-scenarios.md | Azure Lab Services uses Azure Role-Based Access (Azure RBAC) to manage access to Depending on your organizational structure, responsibilities, and skill level, there might be different options to map these permissions to your organizational roles or personas, such as administrators, or educators. The scenarios and diagrams also include students to show where they fit in the process, although they don't require Azure AD permissions. -The following sections give different examples of assigning permissions across an organization. Azure Lab Services enables you to flexibly assign permissions beyond these typical scenarios to match your organizational set up. +The following sections give different examples of assigning permissions across an organization. Azure Lab Services enables you to flexibly assign permissions beyond these typical scenarios to match your organizational setup. ### Scenario 1: Splitting responsibilities between IT department and educators The following table shows the corresponding mapping of organization roles to Azu | | Lab Contributor | Optionally, assign to an educator to create and manage all labs (when assigned at the resource group level). | | | Lab Operator | Optionally, assign to other educators to manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. | | | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reset/start/stop/connect lab VMs. |-| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-configure-student-usage.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). 
| +| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-manage-lab-users.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. | ### Scenario 2: The IT department owns the entire lab creation process As mentioned in [scenario 1](#scenario-1-splitting-responsibilities-between-it-d Get started as an administrator with the [Quickstart: create and connect to a lab](./quick-create-connect-lab.md). -Get started as an educator and [add students to a lab](./how-to-configure-student-usage.md), or [create a lab schedule](./how-to-create-schedules.md). +Get started as an educator and [add students to a lab](./how-to-manage-lab-users.md), or [create a lab schedule](./how-to-create-schedules.md). :::image type="content" source="./media/classroom-labs-scenarios/lab-services-process-education-roles-scenario2.png" alt-text="Diagram that shows lab creation steps where admins own the entire process."::: The following table shows the corresponding mapping of organization roles to Azu | | Lab Operator | Optionally, assign to other administrator to manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. | | Educator | Lab Operator | Manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. | | | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reset/start/stop/connect lab VMs. |-| Student | | Students don't need an Azure AD role. 
Educators [grant students access](./how-to-configure-student-usage.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | +| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-manage-lab-users.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. | ### Scenario 3: The educator owns the entire lab creation process In this scenario, the educator manages their Azure subscription and manages the entire process of creating the Azure Lab Services lab plan and lab. This scenario might be useful in cases where educators are comfortable with creating Azure resources, and creating and customizing labs. -Get started as an administrator with the [Quickstart: create and connect to a lab](./quick-create-connect-lab.md) and then [add students to a lab](./how-to-configure-student-usage.md), and [create a lab schedule](./how-to-create-schedules.md). +Get started as an administrator with the [Quickstart: create and connect to a lab](./quick-create-connect-lab.md) and then [add students to a lab](./how-to-manage-lab-users.md), and [create a lab schedule](./how-to-create-schedules.md). 
:::image type="content" source="./media/classroom-labs-scenarios/lab-services-process-education-roles-scenario3.png" alt-text="Diagram that shows lab creation steps where educators own the entire process."::: The following table shows the corresponding mapping of organization roles to Azu | | | | | Educator | - Subscription Owner<br/>- Subscription Contributor | Create lab plan in Azure portal. As an Owner, you can also fully manage all labs. | | | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reset/start/stop/connect lab VMs. |-| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-configure-student-usage.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | +| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-manage-lab-users.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. | ## Next steps |
lab-services | Concept Migrating Physical Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-migrating-physical-labs.md | There are multiple benefits of using single-purpose labs (for example, one class - Lab VMs only contain the software that is needed for their purpose. This simplifies the set-up and maintenance of labs by lab creators, and provides more clarity for lab users. -- Access to each individual lab is controlled. Lab users are only granted access to labs and software they need. Learn how to [add and manage lab users](./how-to-configure-student-usage.md).+- Access to each individual lab is controlled. Lab users are only granted access to labs and software they need. Learn how to [add and manage lab users](./how-to-manage-lab-users.md). - Further optimize costs by taking advantage of the following features: - [Schedules](./how-to-create-schedules.md) are used to automatically start and stop all VMs within a lab according to each class's schedule. - - [Quotas](./how-to-configure-student-usage.md#set-quotas-for-users) allow you to control the amount of time that each class's students can access VMs outside of their scheduled hours. + - [Quotas](./how-to-manage-lab-users.md#set-quotas-for-users) allow you to control the amount of time that each class's students can access VMs outside of their scheduled hours. ## Example use case |
lab-services | Cost Management Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/cost-management-guide.md | For Azure Lab Services, cost management can be broken down into two distinct are ## Estimate the lab costs -Each lab dashboard has a **Costs & Billing** section that lays out a rough estimate of what the lab will cost. The estimate uses the number of [schedules](classroom-labs-concepts.md#schedule), [quota hours](classroom-labs-concepts.md#quota), [extra quota for individual students](how-to-configure-student-usage.md#set-additional-quotas-for-specific-users), and [lab capacity](how-to-manage-vm-pool.md#set-lab-capacity) when calculating the cost estimate. Changes to the number of quota hours, schedules or lab capacity will affect the cost estimate value. +Each lab dashboard has a **Costs & Billing** section that lays out a rough estimate of what the lab will cost. The estimate uses the number of [schedules](classroom-labs-concepts.md#schedule), [quota hours](classroom-labs-concepts.md#quota), [extra quota for individual students](how-to-manage-lab-users.md#set-additional-quotas-for-specific-users), and [lab capacity](how-to-manage-vm-pool.md#change-lab-capacity) when calculating the cost estimate. Changes to the number of quota hours, schedules or lab capacity will affect the cost estimate value. If users don't consume their assigned quota hours, you are only charged for the quota hours that lab users consumed. |
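The inputs that the dashboard estimate uses (schedules, quota hours, lab capacity) can be combined into a rough back-of-the-envelope upper bound. The sketch below is illustrative only: the hourly price is hypothetical and the exact formula Azure Lab Services applies isn't stated in the article; the assumption here is that every VM runs for the scheduled hours and each user may additionally consume up to their quota hours.

```python
def estimate_lab_cost(scheduled_hours, quota_hours, lab_capacity, price_per_hour):
    """Rough upper-bound cost estimate for a lab.

    scheduled_hours -- total hours covered by lab schedules
    quota_hours     -- quota hours allotted to each lab user
    lab_capacity    -- number of lab VMs
    price_per_hour  -- hypothetical hourly price of the chosen VM size
    """
    max_hours_per_vm = scheduled_hours + quota_hours
    return lab_capacity * max_hours_per_vm * price_per_hour

# Example: 20 VMs, 30 scheduled hours, 10 quota hours each, hypothetical $0.50/hour
print(estimate_lab_cost(30, 10, 20, 0.5))  # 400.0
```

As the article notes, actual billing only charges for quota hours that lab users consume, so real costs are typically lower than this upper bound.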
lab-services | Hackathon Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/hackathon-labs.md | To use Lab Services for your hackathon, ensure that both lab plan and your lab a - **Set VM capacity according to number of participants**. - Ensure that your lab virtual machine capacity is set based on the number of participants you expect at your hackathon. When you publish the template virtual machine, it can take several hours to create all of the lab virtual machines. It's recommended that you create the lab and lab VMs well in advance of the start of the hackathon. For more information, see [Set lab capacity](how-to-manage-vm-pool.md#set-lab-capacity). + Ensure that your lab virtual machine capacity is set based on the number of participants you expect at your hackathon. When you publish the template virtual machine, it can take several hours to create all of the lab virtual machines. It's recommended that you create the lab and lab VMs well in advance of the start of the hackathon. For more information, see [Set lab capacity](how-to-manage-vm-pool.md#change-lab-capacity). - **Decide whether to restrict lab access**. - By default, access to the lab is restricted. This feature requires you to add all of your hackathon participants' emails to the list before they can register and access the lab using the registration link. If you have a hackathon where you don't know the specific participants, you can choose to disable the restrict access option. In this case, anyone can register directly to the lab by using the registration link. For more information, see the [how-to guide on adding users](how-to-configure-student-usage.md). + By default, access to the lab is restricted. This feature requires you to add all of your hackathon participants' emails to the list before they can register and access the lab using the registration link. 
If you have a hackathon where you don't know the specific participants, you can choose to disable the restrict access option. In this case, anyone can register directly to the lab by using the registration link. For more information, see the [how-to guide on adding users](how-to-manage-lab-users.md). - **Verify schedule, quota, and autoshutdown settings**. To use Lab Services for your hackathon, ensure that both lab plan and your lab a **Schedule**: A [schedule](how-to-create-schedules.md) allows you to automatically control when your labs' machines are started and shut down. By default, no schedule is configured when you create a new lab. However, you should ensure that your lab's schedule is set according to what makes sense for your hackathon. For example, if your hackathon starts on Saturday at 8:00 AM and ends on Sunday at 5:00 PM, create a schedule that automatically starts the machine at 7:30 AM on Saturday (about 30 minutes before the start of the hackathon) and shuts it down at 5:00 PM on Sunday. You might also decide not to use a schedule at all and rely on quota time. - **Quota**: The [quota](how-to-configure-student-usage.md#set-quotas-for-users) controls the number of hours that participants have access to a lab virtual machine outside of the scheduled hours. + **Quota**: The [quota](how-to-manage-lab-users.md#set-quotas-for-users) controls the number of hours that participants have access to a lab virtual machine outside of the scheduled hours. 
If the quota is reached while a participant is using it, the machine is automatically shut down and the participant is unable to restart it, unless the quota is increased. By default, when you create a lab, the quota is set to 10 hours. Configure the quota to allow enough time for the duration of the hackathon, especially if you haven't created a schedule. **Autoshutdown**: When enabled, the [autoshutdown](how-to-enable-shutdown-disconnect.md) setting causes Windows virtual machines to automatically shut down after a certain period of time once a participant has disconnected from their RDP session. By default, this setting is disabled. This section outlines the steps to complete the day of your hackathon. Provide your participants with the following information so that participants can access their lab VMs. - - The lab's registration link. For more information, see [how-to guide on sending invitations to users](how-to-configure-student-usage.md#send-invitations-to-users). + - The lab's registration link. For more information, see [how-to guide on sending invitations to users](how-to-manage-lab-users.md#send-invitations-to-users). - Credentials to use for connecting to the machine. This step only applies if the lab was configured with the same credentials for all lab VMs. - Instructions on how to connect to the lab VM. For OS-specific instructions, see [Connect to a lab VM](connect-virtual-machine.md). |
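The quota behavior described above (a VM is shut down once a participant exhausts their quota outside scheduled hours) can be sketched as a simple check. This is an illustrative model only, not the service's actual implementation:

```python
def vm_should_shut_down(consumed_hours, quota_hours, in_scheduled_hours):
    """Illustrative model of the quota rule: a VM keeps running during
    scheduled hours regardless of quota; outside of them, it is stopped
    once the participant's consumed time reaches the quota."""
    if in_scheduled_hours:
        return False
    return consumed_hours >= quota_hours

# A new lab defaults to a 10-hour quota
assert vm_should_shut_down(10, 10, in_scheduled_hours=False) is True
assert vm_should_shut_down(4, 10, in_scheduled_hours=False) is False
assert vm_should_shut_down(12, 10, in_scheduled_hours=True) is False
```

This also illustrates why a generous quota matters for hackathons with no schedule: with no scheduled hours, quota is the only thing keeping VMs available.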
lab-services | How To Access Lab Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-access-lab-virtual-machine.md | + + Title: Access a lab ++description: Learn how to access a lab in Azure Lab Services. Use Teams, Canvas, or the Lab Services website to view, start, stop, and connect to a lab. +++++ Last updated : 06/29/2023+++# Access a lab virtual machine in Azure Lab Services ++This article describes how you can access your lab virtual machines in Azure Lab Services. Use Teams, Canvas, or the Azure Lab Services website to view, start, stop, and connect to a lab virtual machine. ++## Prerequisites ++- To register for a lab, you need a lab registration link. +- To view, start, stop, and connect to a lab VM, you need to register for the lab and have an assigned lab VM. ++## Access a lab virtual machine ++# [Lab Services website](#tab/lab-services-website) ++In the Azure Lab Services website, you can view and manage your assigned lab virtual machines. To access the Azure Lab Services website: ++1. Go to the Azure Lab Services website (https://labs.azure.com) in a web browser. ++1. Sign in with the email address that was granted access to the lab by the lab creator. ++> [!IMPORTANT] +> If you have received a lab registration link from the lab creator, you first need to go through a one-time registration process before you can access your labs. The registration process depends on how the lab creator configured the lab. ++### Register for the lab +++After the registration finishes, confirm that you see the lab virtual machine in **My virtual machines**. ++### User account types ++Azure Lab Services supports different email account types when registering for a lab: ++- An organizational email account that's provided by your Azure Active Directory instance. +- A Microsoft-domain email account, such as *outlook.com*, *hotmail.com*, *msn.com*, or *live.com*. 
+- A non-Microsoft email account, such as one provided by Yahoo! or Google. You need to link your account with a Microsoft account. ++#### Use a non-Microsoft email account +++# [Teams](#tab/teams) ++When you access a lab in Microsoft Teams, you're automatically registered for the lab, based on your team membership in Microsoft Teams. ++To access your lab in Teams: ++1. Sign into Microsoft Teams with your organizational account. ++1. Select the team and channel that contain the lab. ++1. Select the **Azure Lab Services** tab to view your lab virtual machines. ++ :::image type="content" source="./media/how-to-access-lab-virtual-machine/teams-view-lab.png" alt-text="Screenshot of lab in Teams after it's published."::: ++ You might see a message that the lab isn't available. This error can occur when the lab isn't published yet by the lab creator, or if the Teams membership information still needs to synchronize. ++# [Canvas](#tab/canvas) ++When you access a lab in [Canvas](https://www.instructure.com/canvas), you're automatically registered for the lab, based on your course membership in Canvas. Azure Lab Services supports test users in Canvas and the ability for the educator to act as another user. ++To access your lab in Canvas: ++1. Sign into Canvas by using your Canvas credentials. ++1. Go to the course, and then open the **Azure Lab Services** app. ++ :::image type="content" source="./media/how-to-access-lab-virtual-machine/canvas-view-lab.png" alt-text="Screenshot of a lab in the Canvas portal."::: ++ You might see a message that the lab isn't available. This error can occur when the lab isn't published yet by the lab creator, or if the Canvas course membership still needs to synchronize. ++++## View lab VM details ++When you access your lab, either through the Azure Lab Services website, Microsoft Teams, or Canvas, you get the list of lab virtual machines that are assigned to you. 
+++For each lab VM, you can view the following information: ++- Lab name: this name is assigned by the lab creator when creating the lab. +- Operating system: an icon represents the operating system of the lab VM. +- Quota hours: a progress bar shows your assigned and consumed number of quota hours. Learn more about the [quota hours](#view-quota-hours). +- Lab VM status: indicates whether the lab VM is starting, running, or stopped. ++In addition, you can also perform specific actions on the lab VM: ++- Start or stop the lab VM: learn more about [starting and stopping a lab VM](#start-or-stop-the-lab-vm). +- Connect to the lab VM: select the computer icon to connect to the lab VM with remote desktop or SSH. Learn more about [connecting to the lab VM](./connect-virtual-machine.md). +- Reset or troubleshoot the lab VM: learn more how you [reset or troubleshoot the lab VM](./how-to-reset-and-redeploy-vm.md) when you experience problems. ++## View quota hours ++Quota hours are the extra time allotted to you outside of the [scheduled time](./classroom-labs-concepts.md#schedule) for the lab. For example, the time outside of classroom time, to complete homework. ++On the lab VM tile, you can view your consumption of [quota hours](how-to-manage-lab-users.md#set-quotas-for-users) in the progress bar. The progress bar color and the message give an indication of the usage: ++| Status | Description | +| | -- | +| The progress bar is grayed out | A class is in progress, based on the lab schedule. You don't consume any quota hours during scheduled hours.<br/><br/>:::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-class-in-progress.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when a schedule started the VM."::: | +| The progress bar is red | You've consumed all your quota hours. 
If there's a lab schedule, then you can only access the lab VM during the scheduled hours.<br/><br/>:::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-red-color.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when there's quota usage."::: | +| The progress bar is blue | No class is currently in progress and you still have quota hours available to access the lab VM.<br/><br/> :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-blue-color.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when quota has been partially used."::: | +| The text **Available during classes only** is shown | There are no quota hours allocated to the lab. You can only access the lab VM during the scheduled hours for the lab.<br/><br/>:::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/available-during-class.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when there's no quota."::: | ++## Start or stop the lab VM ++You can start and stop a lab virtual machine from the **My virtual machines** page. If the lab creator configured a lab schedule, the lab VM is automatically started and stopped during the scheduled hours. ++Alternately, you can also stop a lab VM by using the operating system shutdown command from within the lab VM. The preferred method to stop a lab VM is to use the **My virtual machines** page to avoid incurring additional costs. ++> [!WARNING] +> If you use the OS shutdown command inside the lab VM, you might still incur costs. The preferred method is to use the stop action on the **My virtual machines** page. When you use lab plans, Azure Lab Services will detect when the lab VM is shut down, marks the lab VM as stopped, and billing stops. ++To start or stop a lab VM: ++1. Go to the **My virtual machines** page in Teams, Canvas, or the [Azure Lab Services website](https://labs.azure.com). ++1. 
Use the toggle control next to the lab VM status to start or stop the lab VM. ++ While the VM is starting or stopping, the control is inactive. ++ Starting or stopping the lab VM might take some time to complete. ++ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/start-vm.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the status toggle and status label on the VM tile."::: ++1. After the operation finishes, confirm that the lab VM status is correct. ++ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-running.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the status label on the VM tile."::: ++## Connect to the lab VM ++When the lab virtual machine is running, you can remotely connect to the VM. Depending on the lab VM operating system configuration, you can connect by using remote desktop (RDP) or secure shell (SSH). ++If there are no quota hours available, you can't start the lab VM outside the scheduled lab hours and can't connect to the lab VM. ++Learn more about how to [connect to a lab VM](connect-virtual-machine.md). ++## Next steps ++- Learn how to [change your lab VM password](./how-to-set-virtual-machine-passwords-student.md) +- Learn how to [reset or troubleshoot your lab VM](./how-to-reset-and-redeploy-vm.md) +- Learn about [key concepts in Azure Lab Services](./classroom-labs-concepts.md), such as quota hours or lab schedules. |
lab-services | How To Access Vm For Students Within Canvas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-access-vm-for-students-within-canvas.md | - Title: Access a VM (student view) in Azure Lab Services from Canvas -description: Learn how to access a VM (student view) in Azure Lab Services from Canvas. - Previously updated : 11/01/2021---# Access a VM (student view) in Azure Lab Services from Canvas ---When a lab is created within [Canvas](https://www.instructure.com/canvas), students can view and access all the VMs provisioned by the course educator. Once the lab is published and VMs are created, students will be automatically assigned a VM. Students can view and access the VMs assigned to them by selecting the tab containing **Azure Lab Services** app. --Students must access their VMs through Canvas. Their Canvas credentials will be used to log into Azure Lab Services. For further instructions about connecting to your VM, see [Tutorial: Access a lab in Azure Lab Services](tutorial-connect-lab-virtual-machine.md) --Azure Lab Services supports test users in Canvas and the ability for the educator to act as another user. --## Lab unavailable --If the lab hasn't been published or a synced in a while, students may see a message indicating the lab isn't available yet. Educators should [publish](tutorial-setup-lab.md#publish-lab) and [sync users](how-to-manage-user-lists-within-canvas.md#sync-users) to solve the problem. ---## Next steps --For more information, see the following articles: --- [Use Azure Lab Services within Canvas overview](lab-services-within-canvas-overview.md)-- [Get started and create a lab within Canvas](how-to-configure-canvas-for-lab-plans.md)-- [Manage lab user lists within Canvas](how-to-manage-user-lists-within-canvas.md)-- [Manage lab's VM pool within Canvas](how-to-manage-vm-pool-within-canvas.md)-- [Create and manage lab schedules within Canvas](how-to-create-schedules-within-canvas.md) |
lab-services | How To Access Vm For Students Within Teams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-access-vm-for-students-within-teams.md | - Title: Access a VM (student view) in Azure Lab from Teams -description: Learn how to access a VM (student view) in Azure Lab from Teams. - Previously updated : 03/01/2022----# Access a VM (student view) in Azure Lab from Teams --When a lab is created within Teams, users can view and access all the VMs provisioned by the team owner. When the lab is published and VMs are created, users are automatically registered to the lab. A VM will be assigned when they first sign into Azure Lab Services. Users can view and access the VMs assigned to them by selecting the tab containing **Azure Lab Services** app. ---Students see a message if the lab hasn't been published yet. Lab is also seen as unable if sync is yet to be triggered after they're added to the team. ---## Next steps --- As a student, [start the VM](tutorial-connect-lab-virtual-machine.md#start-the-vm).-- As a student, [connect to a lab VM](connect-virtual-machine.md). |
lab-services | How To Configure Student Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-student-usage.md | - Title: Manage lab users- -description: Learn how to manage lab users in Azure Lab Services. Configure the number of lab users, manage user registrations, and specify the number of hours they can use their lab VM. ----- Previously updated : 03/02/2023---# Manage lab users in Azure Lab Services --This article describes how to manage lab users in Azure Lab Services. Learn how to add users to a lab, manage their registration status, and how to specify the number of additional hours they can use the virtual machine (VM). --The workflow for letting lab users access a lab consists of the following steps: --1. Specify the list of lab users that can access the lab -1. Invite users to the lab by sending a lab registration link -1. Lab users register for the lab by using the registration link -1. Specify a lab schedule or quota hours to control when users can access their lab VM --By default, access to a lab is restricted. Only users that are in the list of lab users can register for a lab, and get access to the lab virtual machine (VM). You can disable restricted access for a lab, which lets any user register for a lab if they have the registration link. --You can [add users from an Azure Active Directory (Azure AD) group](#add-users-to-a-lab-from-an-azure-ad-group), or [manually add a list of users by email](#add-users-manually). If you enable Azure Lab Services integration with [Microsoft Teams](./how-to-manage-labs-within-teams.md) or [Canvas](./how-to-manage-labs-within-canvas.md), Azure Lab Services automatically grants user access to the lab and assigns a lab VM based on their membership in Microsoft or Canvas. In this case, you don't have to specify the lab user list, and users don't have to register for the lab. --Azure Lab Services supports up to 400 users per lab. 
--## Prerequisites ---## Add users to a lab from an Azure AD group --You can sync a lab user list to an existing Azure AD group. When you use an Azure AD group, you don't have to manually add or delete users in the lab settings. --You can create an Azure AD group within your organization's Azure AD to manage access to organizational resources and cloud-based apps. To learn more, see [Azure AD groups](../active-directory/fundamentals/active-directory-manage-groups.md). If your organization uses Microsoft Office 365 or Azure services, your organization already has admins who manage your Azure Active Directory. --### Sync users with Azure AD group --When you sync a lab with an Azure AD group, Azure Lab Services pulls all users inside the Azure AD group into the lab as lab users. Only people in the Azure AD group have access to the lab. The user list automatically refreshes every 24 hours to match the latest membership of the Azure AD group. You can also manually synchronize the list of lab users at any time. --The option to synchronize the list of lab users with an Azure AD group is only available if you haven't added users to the lab manually or through a CSV import yet. Make sure there are no users in the lab user list. --To sync a lab with an existing Azure AD group: --1. Sign in to the [Azure Lab Services website](https://labs.azure.com/). --1. Select the lab you want to work with. --1. In the left pane, select **Users**, and then select **Sync from group**. -- :::image type="content" source="./media/how-to-configure-student-usage/add-users-sync-group.png" alt-text="Screenshot that shows how to add users by syncing from an Azure AD group."::: --1. Select the Azure AD group you want to sync users with from the list of groups. 
-- If you don't see any Azure AD groups in the list, this could be because of the following reasons: -- - You're a guest user in Azure Active Directory (usually if you're outside the organization that owns the Azure AD), and you're not allowed to search for groups inside the Azure AD. In this case, you can't add an Azure AD group to the lab. - - Azure AD groups you created through Microsoft Teams don't show up in this list. You can add the Azure Lab Services app inside Microsoft Teams to create and manage labs directly from within Microsoft Teams. Learn more about [managing a lab's user list from within Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams). --1. Select **Add** to sync the lab users with the Azure AD group. -- Azure Lab Services automatically pulls the list of users from Azure AD, and refreshes the list every 24 hours. -- Optionally, you can select **Sync** in the **Users** tab to manually synchronize to the latest changes in the Azure AD group. - -Users are auto-registered to the lab and VMs are automatically assigned when the VM pool syncs with the Azure AD group. Educators don't need to send invitations and students don't need to register for the lab separately. --### Automatic management of virtual machines based on changes to the Azure AD group --When you synchronize a lab with an Azure AD group, Azure Lab Services automatically manages the number of lab VMs based on the number of users in the group. You can't manually update the lab capacity in this case. --When a user is added to the Azure AD group, Azure Lab Services automatically adds a lab VM for that user. When a user is no longer a member of the Azure AD group, the lab VM for that user is automatically deleted from the lab. --## Add users manually --You can add lab users manually by providing their email address in the lab configuration or by uploading a CSV file. --### Add users by email address --1. 
In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. --1. Select **Users**, and then select **Add users manually**. -- :::image type="content" source="./media/how-to-configure-student-usage/add-users-manually.png" alt-text="Screenshot that shows how to add users manually."::: --1. Select **Add by email address**, and then enter the users' email addresses on separate lines or on a single line separated by semicolons. -- :::image type="content" source="./media/how-to-configure-student-usage/add-users-email-addresses.png" alt-text="Screenshot that shows how to add users' email addresses in the Lab Services website." lightbox="./media/how-to-configure-student-usage/add-users-email-addresses.png"::: --1. Select **Add**. -- The list displays the email addresses and registration status of the lab users. After a user registers for the lab, the list also displays the user's name. -- :::image type="content" source="./media/how-to-configure-student-usage/list-of-added-users.png" alt-text="Screenshot that shows the lab user list in the Lab Services website." lightbox="./media/how-to-configure-student-usage/list-of-added-users.png"::: --### Add users by uploading a CSV file --You can also add users by uploading a CSV file that contains their email addresses. --You use a CSV text file to store comma-separated (CSV) tabular data (numbers and text). Instead of storing information in column fields (such as in spreadsheets), a CSV file stores information separated by commas. Each line in a CSV file has the same number of comma-separated *fields*. You can use Microsoft Excel to easily create and edit CSV files. --1. Use Microsoft Excel or a text editor of your choice to create a CSV file with the users' email addresses in one column. -- :::image type="content" source="./media/how-to-configure-student-usage/csv-file-with-users.png" alt-text="Screenshot that shows the list of users in a CSV file."::: --1.
In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. --1. Select **Users**, select **Add users**, and then select **Upload CSV**. --1. Select the CSV file with the users' email addresses, and then select **Open**. -- The **Add users** page shows the email address list from the CSV file. --1. Select **Add**. -- The **Users** page now shows the list of lab users you uploaded. -- :::image type="content" source="./media/how-to-configure-student-usage/list-of-added-users.png" alt-text="Screenshot that shows the list of added users in the Users page in the Lab Services website." lightbox="./media/how-to-configure-student-usage/list-of-added-users.png"::: --## Send invitations to users --If the **Restrict access** option is enabled for the lab, only listed users can use the registration link to register to the lab. This option is enabled by default. --To send a registration link to new users, use one of the methods in the following sections. --### Invite all users --You can invite all users to the lab by sending an email via the Azure Lab Services website. The email contains the lab registration link, and an optional message. --To invite all users: --1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. --1. Select **Users**, and then select **Invite all**. -- :::image type="content" source="./media/how-to-configure-student-usage/invite-all-button.png" alt-text="Screenshot that shows the Users page in the Azure Lab Services website, highlighting the Invite all button." lightbox="./media/how-to-configure-student-usage/invite-all-button.png"::: --1. In the **Send invitation by email** window, enter an optional message, and then select **Send**. -- The email automatically includes the registration link. To get and save the registration link separately, select the ellipsis (**...**) at the top of the **Users** pane, and then select **Registration link**. 
-- :::image type="content" source="./media/how-to-configure-student-usage/send-email.png" alt-text="Screenshot that shows the Send registration link by email window in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/send-email.png"::: -- The **Invitation** column of the **Users** list displays the invitation status for each added user. The status should change to **Sending** and then to **Sent on \<date>**. --### Invite selected users --Instead of inviting all users, you can also invite specific users and get a registration link that you can share with other people. --To invite selected users: --1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. --1. Select **Users**, and then select one or more users from the list. --1. In the row for the user you selected, select the **envelope** icon or, on the toolbar, select **Invite**. -- :::image type="content" source="./media/how-to-configure-student-usage/invite-selected-users.png" alt-text="Screenshot that shows how to invite selected users to a lab in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/invite-selected-users.png"::: --1. In the **Send invitation by email** window, enter an optional **message**, and then select **Send**. -- :::image type="content" source="./media/how-to-configure-student-usage/send-invitation-to-selected-users.png" alt-text="Screenshot that shows the Send invitation email for selected users in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/send-invitation-to-selected-users.png"::: -- The **Users** pane displays the status of this operation in the **Invitation** column of the table. The invitation email includes the registration link that users can use to register with the lab. --### Get the registration link --You can get the lab registration link from the Azure Lab Services website, and send it by using your own email application. --1. 
In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. --1. Select **Users**, and then select **Registration link**. -- :::image type="content" source="./media/how-to-configure-student-usage/registration-link-button.png" alt-text="Screenshot that shows how to get the lab registration link in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/registration-link-button.png"::: --1. In the **User registration** window, select **Copy**, and then select **Done**. -- :::image type="content" source="./media/how-to-configure-student-usage/registration-link.png" alt-text="Screenshot that shows the User registration window in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/registration-link.png"::: -- The link is copied to the clipboard. In your email application, paste the registration link, and then send the email to a user so that they can register for the class. --## View registered users --To view the list of lab users that have already registered for the lab by using the lab registration link: --1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. --1. Select **Users** to view the list of lab users. -- The list shows the lab users with their registration status. The user status should show **Registered**, and their name should also be available after registration. -- :::image type="content" source="./media/tutorial-track-usage/registered-users.png" alt-text="Screenshot that shows the list of registered users for a lab in the Azure Lab Services website." lightbox="./media/tutorial-track-usage/registered-users.png"::: -- > [!NOTE] - > If you [republish a lab](how-to-create-manage-template.md#publish-the-template-vm) or [Reset VMs](how-to-manage-vm-pool.md#reset-vms), the users remain registered for the lab's VMs.
However, the contents of the VMs will be deleted and the VMs will be recreated with the template VM's image. --## Set quotas for users --Quotas enable lab users to use the lab for a number of hours outside of scheduled times. For example, users might access the lab to complete their homework. Learn more about [quota hours](./classroom-labs-concepts.md#quota). --You can set an hour quota for a user in one of two ways: --1. In the **Users** pane, select **Quota per user: \<number> hour(s)** on the toolbar. --1. In the **Quota per user** window, specify the number of hours you want to give to each user outside the scheduled time. -- :::image type="content" source="./media/how-to-configure-student-usage/quota-per-user.png" alt-text="Screenshot that shows the Quota per user window in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/quota-per-user.png"::: -- > [!IMPORTANT] - > The [scheduled running time of VMs](how-to-create-schedules.md) does not count against the quota that's allotted to a user. The quota is for the time outside of scheduled hours that a user spends on VMs. --1. Select **Save** to save the changes. -- Notice that the user list shows the updated quota hours for all users. --### Set additional quotas for specific users --You can specify quotas for certain users beyond the common quotas that were set for all users in the preceding section. For example, if you, as a lab creator, set the quota for all users to 10 hours and set an additional quota of 5 hours for a specific user, that user gets 15 (10 + 5) hours of quota. If you change the common quota later to, say, 15, the user gets 20 (15 + 5) hours of quota. Remember that this overall quota is outside the scheduled time. The time that a user spends on a lab VM during the scheduled time doesn't count against this quota. --To set additional quotas, do the following: --1. 
In the **Users** pane, select one or more users from the list, and then select **Adjust quota** on the toolbar. --1. In the **Adjust quota** window, enter the number of additional lab hours you want to grant to the selected users, and then select **Apply**. -- :::image type="content" source="./media/how-to-configure-student-usage/additional-quota.png" alt-text="Screenshot that shows the Adjust quota window in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/additional-quota.png"::: --1. Select **Apply** to save the changes. -- Notice that the user list shows the updated quota hours for the users you selected. --## User account types --To add users to a lab, you use their email accounts. Users might have the following types of email accounts: --- An organizational email account that's provided by your university's Azure Active Directory instance.-- A Microsoft-domain email account, such as *outlook.com*, *hotmail.com*, *msn.com*, or *live.com*.-- A non-Microsoft email account, such as one provided by Yahoo! or Google. However, these types of accounts must be linked with a Microsoft account.-- A GitHub account. This account must be linked with a Microsoft account.--### Use a non-Microsoft email account --Users can use non-Microsoft email accounts to register and sign in to a lab. However, the registration requires that they first create a Microsoft account that's linked to their non-Microsoft email address. --Many users might already have a Microsoft account that's linked to their non-Microsoft email address. For example, users already have a Microsoft account if they've used their email address with other Microsoft products or services, such as Office, Skype, OneDrive, or Windows. --When users use the registration link to sign in to a classroom, they're prompted for their email address and password. 
Users who attempt to sign in with a non-Microsoft account that's not linked to a Microsoft account receive the following error message: ---Here's a link for users to [sign up for a Microsoft account](https://signup.live.com). --> [!IMPORTANT] -> When users sign in to a lab, they aren't given the option to create a Microsoft account. For this reason, we recommend that you include this sign-up link, `https://signup.live.com`, in the lab registration email that you send to users who are using non-Microsoft accounts. --### Use a GitHub account --Users can also use an existing GitHub account to register and sign in to a lab. If they already have a Microsoft account linked to their GitHub account, users can sign in and provide their password as shown in the preceding section. --If users haven't yet linked their GitHub account to a Microsoft account, they can do the following: --1. Select the **Sign-in options** link, as shown here: -- :::image type="content" source="./media/how-to-configure-student-usage/signin-options.png" alt-text="Screenshot that shows the Microsoft sign in window, highlighting the Sign-in options link."::: --1. In the **Sign-in options** window, select **Sign in with GitHub**. -- :::image type="content" source="./media/how-to-configure-student-usage/signin-github.png" alt-text="Screenshot that shows the Microsoft sign-in options window, highlighting the option to sign in with GitHub."::: -- At the prompt, users then create a Microsoft account that's linked to their GitHub account. The linking happens automatically when they select **Next**. They're then immediately signed in and connected to the lab. --## Export a list of users to a CSV file --To export the list of users for a lab: --1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. --1. Select **Users**. --1. On the toolbar, select the ellipsis (**...**), and then select **Export CSV**. 
-- :::image type="content" source="./media/how-to-export-users-virtual-machines-csv/users-export-csv.png" alt-text="Screenshot that shows how to export the list of lab users to a CSV file in the Azure Lab Services website." lightbox="./media/how-to-export-users-virtual-machines-csv/users-export-csv.png"::: --## Next steps --See the following articles: --- For administrators: [Create and manage lab plans](how-to-manage-lab-plans.md)-- For lab owners: [Create and manage labs](how-to-manage-labs.md) and [Set up and publish templates](how-to-create-manage-template.md)-- For lab users: [Access labs](how-to-use-lab.md) |
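The CSV import and export flows described in this article expect one email address per row. A minimal sketch of generating and sanity-checking such a file before upload, using only Python's standard library (the file name and addresses are illustrative, not values from the article):

```python
import csv

# Illustrative addresses; a real file would contain your lab users.
emails = [
    "student1@contoso.com",
    "student2@contoso.com",
    "student3@outlook.com",
]

# Write one email address per row, as the Upload CSV dialog expects.
with open("lab-users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for email in emails:
        writer.writerow([email])

# Basic sanity check before uploading: every row has exactly one
# field, and that field looks like an email address.
with open("lab-users.csv", newline="") as f:
    rows = list(csv.reader(f))

assert all(len(row) == 1 and "@" in row[0] for row in rows)
print(f"{len(rows)} addresses ready to upload")  # → 3 addresses ready to upload
```

The same check is useful on the exported CSV, since a stray extra column or blank line is the most common reason an import silently skips users.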
lab-services | How To Create Lab Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-template.md | Last updated 05/10/2022 # Create a lab in Azure Lab Services using an ARM template -In this article, you learn how to use an Azure Resource Manager (ARM) template to create a lab. You learn how to create a lab with Windows 11 Pro image. Once a lab is created, an educator [configures the template](how-to-create-manage-template.md), [adds lab users](how-to-configure-student-usage.md), and [publishes the lab](tutorial-setup-lab.md#publish-lab). For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md). +In this article, you learn how to use an Azure Resource Manager (ARM) template to create a lab. You learn how to create a lab with Windows 11 Pro image. Once a lab is created, an educator [configures the template](how-to-create-manage-template.md), [adds lab users](how-to-manage-lab-users.md), and [publishes the lab](tutorial-setup-lab.md#publish-lab). For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md). [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] |
lab-services | How To Create Manage Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-manage-template.md | See the following articles: - [As an admin, create and manage lab plans](how-to-manage-lab-plans.md) - [As a lab owner, create and manage labs](how-to-manage-labs.md)-- [As a lab owner, configure and control usage of a lab](how-to-configure-student-usage.md)+- [As a lab owner, configure and control usage of a lab](how-to-manage-lab-users.md) - [As a lab user, access labs](how-to-use-lab.md) |
lab-services | How To Create Schedules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-schedules.md | Last updated 06/26/2020 Schedules allow you to configure a lab such that VMs in the lab automatically start and shut down at a specified time. You can define a one-time schedule or a recurring schedule. The following procedures give you steps to create and manage schedules for a lab: > [!IMPORTANT]-> The scheduled running time of VMs does not count against the [quota allotted to a user](how-to-configure-student-usage.md#set-quotas-for-users). The quota is for the time outside of schedule hours that a student spends on VMs. +> The scheduled running time of VMs does not count against the [quota allotted to a user](how-to-manage-lab-users.md#set-quotas-for-users). The quota is for the time outside of schedule hours that a student spends on VMs. ## Set a schedule for the lab See the following articles: - [As an admin, create and manage lab plans](how-to-manage-lab-plans.md) - [As a lab owner, create and manage labs](how-to-manage-labs.md)-- [As a lab owner, configure and control usage of a lab](how-to-configure-student-usage.md)+- [As a lab owner, configure and control usage of a lab](how-to-manage-lab-users.md) - [As a lab user, access labs](how-to-use-lab.md) |
lab-services | How To Enable Nested Virtualization Template Vm Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-using-script.md | If you're using the Medium (Nested Virtualization) VM size for the lab, consider Now that you've configured nested virtualization on the template VM, you can [create nested virtual machines with Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v). See [Microsoft Evaluation Center](https://www.microsoft.com/evalcenter/) to check out available operating systems and software. -You can further configure your lab: --- [Add lab users](./how-to-configure-student-usage.md)-- [Set quota hours](how-to-configure-student-usage.md#set-quotas-for-users)-- [Configure a lab schedule](tutorial-setup-lab.md#add-a-lab-schedule)+- [Add lab users](how-to-manage-lab-users.md) +- [Set quota hours](how-to-manage-lab-users.md#set-quotas-for-users) +- [Configure a lab schedule](./how-to-create-schedules.md) |
lab-services | How To Manage Classroom Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-classroom-labs.md | To switch to a different lab account, select the drop-down next to the lab accou See the following articles: - [As a lab owner, set up and publish templates](how-to-create-manage-template.md)-- [As a lab owner, configure and control usage of a lab](how-to-configure-student-usage.md)+- [As a lab owner, configure and control usage of a lab](how-to-manage-lab-users.md) - [As a lab user, access labs](how-to-use-lab.md) |
lab-services | How To Manage Lab Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-lab-users.md | + + Title: Manage lab users ++description: Learn how to manage lab users in Azure Lab Services. Configure the number of lab users, manage user registrations, and specify the number of hours they can use their lab VM. +++++ Last updated : 06/30/2023+++# Manage lab users in Azure Lab Services ++This article describes how to manage lab users in Azure Lab Services. Learn how to add users to a lab, manage their registration status, and specify the number of hours they can use the virtual machine (VM). ++Azure Lab Services supports different options for managing the list of lab users: ++- Add users manually to the lab by specifying their email address. Optionally, you can upload a CSV file with email addresses. +- Synchronize the list of users with an Azure Active Directory (Azure AD) group. +- Integrate with Microsoft Teams or Canvas and synchronize the user list with the team (Teams) or course (Canvas) membership. ++When you add users to a lab based on their email address, lab users first need to register for the lab by using a lab registration link. This registration process is a one-time operation. After a lab user registers for the lab, they can access their lab in the Azure Lab Services website. ++When you use Teams, Canvas, or an Azure AD group, Azure Lab Services automatically grants users access to the lab and assigns a lab VM based on that membership. In this case, you don't have to specify the lab user list, and users don't have to register for the lab. ++By default, access to a lab is restricted. Only users who are in the list of lab users can register for a lab, and get access to the lab virtual machine (VM). You can disable restricted access for a lab, which lets any user register for a lab if they have the registration link.
++Azure Lab Services supports up to 400 users per lab. +## Prerequisites +++## Manage lab users ++# [Add users manually](#tab/manual) ++### Add users ++You can add lab users manually by providing their email address in the lab configuration or by uploading a CSV file. ++Azure Lab Services supports different email account types when registering for a lab: ++- An organizational email account that's provided by your Azure Active Directory instance. +- A Microsoft-domain email account, such as *outlook.com*, *hotmail.com*, *msn.com*, or *live.com*. +- A non-Microsoft email account, such as one provided by Yahoo! or Google. You need to link your account with a Microsoft account. +- A GitHub account. You need to link your account with a Microsoft account. ++Learn more about the [supported account types](./how-to-access-lab-virtual-machine.md#user-account-types). ++#### Add users by email address ++1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. ++1. Select **Users**, and then select **Add users manually**. ++ :::image type="content" source="./media/how-to-manage-lab-users/add-users-manually.png" alt-text="Screenshot that shows how to add users manually."::: ++1. Select **Add by email address**, enter the users' email addresses on separate lines or on a single line separated by semicolons. ++ :::image type="content" source="./media/how-to-manage-lab-users/add-users-email-addresses.png" alt-text="Screenshot that shows how to add users' email addresses in the Lab Services website." lightbox="./media/how-to-manage-lab-users/add-users-email-addresses.png"::: ++1. Select **Add**. ++ The list displays the email addresses and registration status of the lab users. After a user registers for the lab, the list also displays the user's name. ++ :::image type="content" source="./media/how-to-manage-lab-users/list-of-added-users.png" alt-text="Screenshot that shows the lab user list in the Lab Services website." 
lightbox="./media/how-to-manage-lab-users/list-of-added-users.png"::: ++#### Add users by uploading a CSV file ++You can also add users by uploading a CSV file that contains their email addresses. ++You use a CSV text file to store comma-separated (CSV) tabular data (numbers and text). Instead of storing information in column fields (such as in spreadsheets), a CSV file stores information separated by commas. Each line in a CSV file has the same number of comma-separated *fields*. You can use Microsoft Excel to easily create and edit CSV files. ++1. Use Microsoft Excel or a text editor of your choice to create a CSV file with the users' email addresses in one column. ++ :::image type="content" source="./media/how-to-manage-lab-users/csv-file-with-users.png" alt-text="Screenshot that shows the list of users in a CSV file."::: ++1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. ++1. Select **Users**, select **Add users**, and then select **Upload CSV**. ++1. Select the CSV file with the users' email addresses, and then select **Open**. ++ The **Add users** page shows the email address list from the CSV file. ++1. Select **Add**. ++ The **Users** page now shows the list of lab users you uploaded. ++ :::image type="content" source="./media/how-to-manage-lab-users/list-of-added-users.png" alt-text="Screenshot that shows the list of added users in the Users page in the Lab Services website." lightbox="./media/how-to-manage-lab-users/list-of-added-users.png"::: ++### Send invitations to users ++If the **Restrict access** option is enabled for the lab, only listed users can use the registration link to register for the lab. This option is enabled by default. ++To send a registration link to new users, use one of the methods in the following sections. ++#### Invite all users ++You can invite all users to the lab by sending an email via the Azure Lab Services website.
The email contains the lab registration link, and an optional message. ++> [!TIP] +> When you register for a lab, you aren't given the option to create a new Microsoft account. It's recommended that you include this sign-up link, `https://signup.live.com`, in the lab registration email when you invite users who have non-Microsoft accounts. ++To invite all users: ++1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. ++1. Select **Users**, and then select **Invite all**. ++ :::image type="content" source="./media/how-to-manage-lab-users/invite-all-button.png" alt-text="Screenshot that shows the Users page in the Azure Lab Services website, highlighting the Invite all button." lightbox="./media/how-to-manage-lab-users/invite-all-button.png"::: ++1. In the **Send invitation by email** window, enter an optional message, and then select **Send**. ++ The email automatically includes the registration link. To get and save the registration link separately, select the ellipsis (**...**) at the top of the **Users** pane, and then select **Registration link**. ++ :::image type="content" source="./media/how-to-manage-lab-users/send-email.png" alt-text="Screenshot that shows the Send registration link by email window in the Azure Lab Services website." lightbox="./media/how-to-manage-lab-users/send-email.png"::: ++ The **Invitation** column of the **Users** list displays the invitation status for each added user. The status should change to **Sending** and then to **Sent on \<date>**. ++#### Invite selected users ++To invite selected users and send them a lab registration link: ++1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. ++1. Select **Users**, and then select one or more users from the list. ++1. In the row for the user you selected, select the **envelope** icon or, on the toolbar, select **Invite**. 
++ :::image type="content" source="./media/how-to-manage-lab-users/invite-selected-users.png" alt-text="Screenshot that shows how to invite selected users to a lab in the Azure Lab Services website." lightbox="./media/how-to-manage-lab-users/invite-selected-users.png"::: ++1. In the **Send invitation by email** window, enter an optional **message**, and then select **Send**. ++ :::image type="content" source="./media/how-to-manage-lab-users/send-invitation-to-selected-users.png" alt-text="Screenshot that shows the Send invitation email for selected users in the Azure Lab Services website." lightbox="./media/how-to-manage-lab-users/send-invitation-to-selected-users.png"::: ++ The **Users** pane displays the status of this operation in the **Invitation** column of the table. The invitation email includes the registration link that users can use to register with the lab. ++#### Get the registration link ++You can get the lab registration link from the Azure Lab Services website, and send it by using your own email application. ++1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. ++1. Select **Users**, and then select **Registration link**. ++ :::image type="content" source="./media/how-to-manage-lab-users/registration-link-button.png" alt-text="Screenshot that shows how to get the lab registration link in the Azure Lab Services website." lightbox="./media/how-to-manage-lab-users/registration-link-button.png"::: ++1. In the **User registration** window, select **Copy**, and then select **Done**. ++ :::image type="content" source="./media/how-to-manage-lab-users/registration-link.png" alt-text="Screenshot that shows the User registration window in the Azure Lab Services website." lightbox="./media/how-to-manage-lab-users/registration-link.png"::: ++ The link is copied to the clipboard. In your email application, paste the registration link, and then send the email to a user so that they can register for the class. 
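If you'd rather script this email step than paste the link into a mail client by hand, a minimal sketch using Python's standard library is shown below. The sender address, SMTP server, credentials, and registration URL are all placeholders, not values provided by Azure Lab Services:

```python
import smtplib
from email.message import EmailMessage

def build_invitation(registration_link: str, recipient: str) -> EmailMessage:
    """Compose a lab invitation email that includes the registration link."""
    msg = EmailMessage()
    msg["Subject"] = "Register for your lab"
    msg["From"] = "educator@contoso.com"  # placeholder sender
    msg["To"] = recipient
    msg.set_content(
        "Use this link to register for the lab:\n"
        f"{registration_link}\n\n"
        # Non-Microsoft accounts must first be linked to a Microsoft account.
        "No Microsoft account yet? Sign up at https://signup.live.com"
    )
    return msg

msg = build_invitation(
    "https://labs.azure.com/register/EXAMPLE",  # placeholder registration link
    "student@contoso.com",
)

# Sending is commented out; it needs a real SMTP server and credentials.
# with smtplib.SMTP("smtp.contoso.com", 587) as server:
#     server.starttls()
#     server.login("educator@contoso.com", "app-password")
#     server.send_message(msg)
print(msg["To"])
```

Including the `https://signup.live.com` sign-up link in the body covers users with non-Microsoft email accounts, who must link a Microsoft account before they can register.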
++### View registered users ++To view the list of lab users that have already registered for the lab by using the lab registration link: ++1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. ++1. Select **Users** to view the list of lab users. ++ The list shows the lab users with their registration status. The user status should show **Registered**, and their name should also be available after registration. ++ > [!NOTE] + > If you [republish a lab](how-to-create-manage-template.md#publish-the-template-vm) or [Reset VMs](how-to-manage-vm-pool.md#reset-lab-vms), the users remain registered for the lab's VMs. However, the contents of the VMs will be deleted and the VMs will be recreated with the template VM's image. ++# [Azure AD group](#tab/aad) ++You can manage the lab user list by synchronizing the lab with an Azure AD group. When you use an Azure AD group, you don't have to manually add or delete users in the lab settings. Add or remove users in the Azure AD group to grant or remove a user's access to a lab VM. ++You can create an Azure AD group within your organization's Azure AD to manage access to organizational resources and cloud-based apps. To learn more, see [Azure AD groups](../active-directory/fundamentals/active-directory-manage-groups.md). If your organization uses Microsoft Office 365 or Azure services, your organization already has admins who manage your Azure Active Directory. ++Lab users don't have to register for their lab and a lab VM is automatically assigned. Lab users can [access the lab directly from the Azure Lab Services website](./how-to-access-lab-virtual-machine.md). ++### Synchronize the lab user list with Azure AD group ++When you sync a lab with an Azure AD group, Azure Lab Services pulls all users inside the Azure AD group into the lab as lab users. Only people in the Azure AD group have access to the lab.
The user list automatically refreshes every 24 hours to match the latest membership of the Azure AD group. You can also manually synchronize the list of lab users at any time. ++The option to synchronize the list of lab users with an Azure AD group is only available if you haven't added users to the lab manually or through a CSV import yet. Make sure there are no users in the lab user list. ++To sync a lab with an existing Azure AD group: ++1. Sign in to the [Azure Lab Services website](https://labs.azure.com/). ++1. Select your lab. ++1. In the left pane, select **Users**, and then select **Sync from group**. ++ :::image type="content" source="./media/how-to-manage-lab-users/add-users-sync-group.png" alt-text="Screenshot that shows how to add users by syncing from an Azure AD group."::: ++1. Select the Azure AD group you want to sync users with from the list of groups. ++ If you don't see any Azure AD groups in the list, this could be for one of the following reasons: ++ - You're a guest user in Azure Active Directory (usually if you're outside the organization that owns the Azure AD), and you're not allowed to search for groups inside the Azure AD. In this case, you can't add an Azure AD group to the lab. + - Azure AD groups you created through Microsoft Teams don't show up in this list. You can add the Azure Lab Services app inside Microsoft Teams to create and manage labs directly from within Microsoft Teams. Learn more about [managing a lab's user list from within Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams). ++1. Select **Add** to sync the lab users with the Azure AD group. ++ Azure Lab Services automatically pulls the list of users from Azure AD, and refreshes the list every 24 hours. ++ Optionally, you can select **Sync** in the **Users** tab to manually synchronize with the latest changes in the Azure AD group.
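The sync behavior described above (group members become lab users, and leavers are removed along with their VM) amounts to a set reconciliation between the Azure AD group and the current lab user list. A hypothetical sketch of that bookkeeping, not the service's actual implementation:

```python
def reconcile(group_members: set[str], lab_users: set[str]) -> tuple[set[str], set[str]]:
    """Return (users to add, users to remove) so the lab matches the Azure AD group."""
    to_add = group_members - lab_users      # new group members each get a lab VM
    to_remove = lab_users - group_members   # users who left the group lose their VM
    return to_add, to_remove

# Illustrative membership data.
group = {"ada@contoso.com", "grace@contoso.com", "alan@contoso.com"}
lab = {"ada@contoso.com", "edsger@contoso.com"}

add, remove = reconcile(group, lab)
print(sorted(add))     # grace and alan joined the group
print(sorted(remove))  # edsger left the group
```

Because the service repeats this reconciliation on every 24-hour refresh (or manual **Sync**), the lab's VM count always converges to the group's membership count, which is why lab capacity can't be edited by hand in this mode.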
++### Automatic VM management based on Azure AD group ++When you synchronize a lab with an Azure AD group, Azure Lab Services automatically manages the number of lab VMs based on the number of users in the Azure AD group. ++When a user is added to the Azure AD group, Azure Lab Services automatically adds a lab VM for that user. When a user is no longer a member of the Azure AD group, the lab VM for that user is automatically deleted from the lab. ++You can't manually add or remove lab users, or update the lab capacity when synchronizing with an Azure AD group. ++# [Teams](#tab/teams) ++When you create a lab in Teams, Azure Lab Services automatically grants users access to the lab based on their team membership in Teams. When you use Teams, you can't manually add or delete users in the lab settings. Add or remove users to a team to assign or remove access for a user to a lab VM. ++Lab users don't have to register for their lab and a lab VM is automatically assigned. Lab users can [access the lab directly from within Teams](./how-to-access-lab-virtual-machine.md). ++Before you can use labs in Teams, follow these steps to [configure Teams for using Azure Lab Services](./how-to-configure-teams-for-lab-plans.md). ++### Synchronize the lab user list with Teams ++Azure Lab Services automatically synchronizes the membership information with the lab user list every 24 hours. Lab creators can select **Sync** in the **Users** tab to manually trigger a sync, for example when the team membership is updated. +++### Automatic VM management based on team membership ++When you create labs in Teams, Azure Lab Services also automatically manages the number of lab VMs based on the number of users in the team. ++When a user is added in Teams, Azure Lab Services automatically adds a lab VM for that user. When a user is no longer a member, the lab VM for that user is automatically deleted from the lab. 
++You can't manually add or remove lab users, or update the lab capacity when creating labs in Teams. ++# [Canvas](#tab/canvas) ++When you create a lab in Canvas, Azure Lab Services automatically grants users access to the lab based on their course membership in Canvas. When you use Canvas, you can't manually add or delete users in the lab settings. Add or remove users for a course in Canvas to assign or remove access for a user to a lab VM. ++Lab users don't have to register for their lab, and a lab VM is automatically assigned. Lab users can [access the lab directly from within Canvas](./how-to-access-lab-virtual-machine.md). ++Before you can use labs in Canvas, follow these steps to [configure Canvas for using Azure Lab Services](./how-to-configure-canvas-for-lab-plans.md). ++### Synchronize the lab user list with Canvas ++Azure Lab Services automatically synchronizes the membership information with the lab user list every 24 hours. Lab creators can select **Sync** in the **Users** tab to manually trigger a sync, for example when the course membership is updated. +++### Automatic VM management based on course membership ++When you create labs in Canvas, Azure Lab Services also automatically manages the number of lab VMs based on the number of users in the course. ++When a user is added in Canvas, Azure Lab Services automatically adds a lab VM for that user. When a user is no longer a member, the lab VM for that user is automatically deleted from the lab. ++You can't manually add or remove lab users, or update the lab capacity when creating labs in Canvas. ++++## Set quotas for users ++Quota hours enable lab users to use the lab for a number of hours outside of scheduled lab times. For example, users might access the lab to complete their homework. Learn more about [quota hours](./classroom-labs-concepts.md#quota). ++To set an hourly quota for all lab users: ++1. In the **Users** pane, select **Quota per user: \<number> hour(s)** on the toolbar. 
++1. In the **Quota per user** window, specify the number of hours you want to give to each user outside the scheduled time. ++ :::image type="content" source="./media/how-to-manage-lab-users/quota-per-user.png" alt-text="Screenshot that shows the Quota per user window in the Azure Lab Services website." lightbox="./media/how-to-manage-lab-users/quota-per-user.png"::: ++ > [!IMPORTANT] + > The [scheduled running time of VMs](how-to-create-schedules.md) does not count against the quota that's allotted to a user. The quota is for the time outside of scheduled hours that a user spends on VMs. ++### Set additional quotas for specific users ++You can specify extra quota hours for individual users, beyond the quotas you define at the lab level. For example, if you set the quota for all users to 10 hours and set an additional quota of 5 hours for a specific user, that user gets 15 (10 + 5) hours of quota. If you later change the common quota to, say, 15 hours, the user gets 20 (15 + 5) hours of quota. ++Remember that this overall quota is outside the scheduled time. The time that a user spends on a lab VM during the scheduled time doesn't count against this quota. ++To set additional quotas for a user: ++1. In the **Users** pane, select one or more users from the list, and then select **Adjust quota** on the toolbar. ++1. In the **Adjust quota** window, enter the number of additional lab hours you want to grant to the selected users. ++ :::image type="content" source="./media/how-to-manage-lab-users/additional-quota.png" alt-text="Screenshot that shows the Adjust quota window in the Azure Lab Services website." lightbox="./media/how-to-manage-lab-users/additional-quota.png"::: ++1. Select **Apply** to save the changes. ++ Notice that the user list shows the updated quota hours for the users you selected. ++## Export the list of users to a CSV file ++To export the list of users for a lab: ++1. Select the lab you want to work with. ++1. 
Select **Users**. ++1. On the toolbar, select the ellipsis (**...**), and then select **Export CSV**. ++ :::image type="content" source="./media/how-to-manage-lab-users/users-export-csv.png" alt-text="Screenshot that shows how to export the list of lab users to a CSV file in the Azure Lab Services website." lightbox="./media/how-to-manage-lab-users/users-export-csv.png"::: ++## Next steps ++- Learn how to [add lab schedules](./how-to-create-schedules.md) +- Learn how to [manage lab VM pools](./how-to-manage-vm-pool.md) +- Learn how to [access a lab VM](./how-to-access-lab-virtual-machine.md) |
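The quota arithmetic described under *Set quotas for users* (a lab-level quota plus optional per-user additional hours) can be sketched in a few lines. This is an illustrative model only, not Azure Lab Services code:

```python
def effective_quota(lab_quota_hours: int, additional_hours: int = 0) -> int:
    """Quota hours a lab user gets outside scheduled time: the quota set
    for all users at the lab level, plus any extra hours granted to that user."""
    return lab_quota_hours + additional_hours

# A 10-hour lab quota plus a 5-hour individual adjustment gives 15 hours;
# raising the common quota to 15 later gives that same user 20 hours.
assert effective_quota(10, 5) == 15
assert effective_quota(15, 5) == 20
```

Scheduled running time is intentionally absent from the model: it never draws down the quota.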
lab-services | How To Manage Labs Within Canvas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-labs-within-canvas.md | - Title: Create and manage labs in Canvas- -description: Learn how to create and manage Azure Lab Services labs in Canvas. Manage user lists, VM pools, and configure lab schedules for your labs. - Previously updated : 11/29/2022------# Create and manage Azure Lab Services labs in Canvas --In this article, you learn how to create and manage Azure Lab Services labs in Canvas. As an educator, you can create new labs or configure existing labs in the Canvas interface. Manage user lists, VM pools, and configure lab schedules for your labs. Students can then access their labs directly from within Canvas. Learn more about the [benefits of using Azure Lab Services within Canvas](./lab-services-within-canvas-overview.md). --This article describes how to: --- [Create a lab](#create-a-lab-in-canvas).-- [Manage lab user lists](#manage-lab-user-lists-in-canvas).-- [Manage a lab virtual machine (VM) pool](#manage-a-lab-vm-pool-in-canvas).-- [Configure lab schedules and settings](#configure-lab-schedules-and-settings-in-canvas).-- [Delete a lab](#delete-a-lab).--For more information about adding lab plans to Canvas, see [Configure Canvas to access Azure Lab Services lab plans](./how-to-configure-canvas-for-lab-plans.md). ---## Prerequisites ---- The Azure Lab Services Canvas app is enabled. Learn how to [configure Canvas for Azure Lab Services](./how-to-configure-canvas-for-lab-plans.md).--## Create a lab in Canvas --Once Azure Lab Services is added to your course, you'll see **Azure Lab Services** in the course navigation menu. If you're authenticated in Canvas as an educator, you'll see this sign in screen before you can use the service. You'll need to sign in here with an Azure AD account or Microsoft account that has been added as a Lab Creator. 
- :::image type="content" source="./media/how-to-manage-labs-within-canvas/welcome-to-lab-services.png" alt-text="Screenshot that shows the welcome page in Canvas."::: --For instructions on how to create a lab, see [Create a lab](./tutorial-setup-lab.md). Make sure to verify the resource group to use before creating the lab. --> [!IMPORTANT] -> Labs must be created using the Azure Lab Services app in Canvas. Labs created from the Azure Lab Services portal aren't visible from Canvas. --The student list for the course is automatically synced with the course roster. For more information, see [Manage Lab Services user lists from Canvas](how-to-manage-user-lists-within-canvas.md). A lab VM will also be created for the course educator. --## Manage lab user lists in Canvas --When you [create a lab within Canvas](#create-a-lab-in-canvas), Azure Lab Services automatically syncs the lab user list with the course membership. Azure Lab Services adds or deletes users from the lab user list as per changes to the course membership when the sync operation completes. --Azure Lab Services automatically synchronizes the course membership with the lab user list every 24 hours. Educators can select **Sync** in the **Users** tab to manually trigger a sync, for example when the team membership is updated. ---Once the automatic or manual sync is complete, adjustments are made to the lab depending on whether the lab has been [published](tutorial-setup-lab.md#publish-lab) or not. 
--If the lab has *not* been published at least once: --- Users will be added or deleted from the lab user list as per changes to the course membership.--If the lab has been published at least once: --- Users will be added or deleted from the lab user list as per changes to the course membership.-- New VMs will be created if there are any new students added to the course.-- VM will be deleted if any student has been deleted from the course.-- Lab capacity will be automatically updated as needed.--## Manage a lab VM pool in Canvas --Virtual Machine (VM) creation starts as soon as you publish the template VM. Azure Lab Services creates a number of VMs, known as the VM pool, equivalent to the number of users in the lab user list. --The lab capacity (number of VMs in the lab) automatically matches the course membership. Whenever you add or remove a user from the course, the capacity increases or decreases accordingly. For more information, see [How to manage users within Canvas](#manage-lab-user-lists-in-canvas). --After publishing the lab and VM creation completes, Azure Lab Services automatically registers users to the lab. Azure Lab Services assigns a lab VM to a user when they first access the **Azure Lab Services** tab in Canvas. --Educators can access student VMs directly from the **Virtual machine pool** tab. For more information, see [Manage a VM pool in Azure Lab Services](how-to-manage-vm-pool.md) ---As part of the publish process, Canvas educators are assigned their own lab VMs. The VM can be accessed by clicking on the **My Virtual Machines** button (top/right corner of the screen). ---## Configure lab schedules and settings in Canvas --Lab schedules allow you to configure a classroom lab such that the VMs automatically start and shut down at a specified time. You can define a one-time schedule or a recurring schedule. 
--Lab schedules affect lab virtual machines in the following ways: --- A template VM isn't included in schedules.-- Only assigned VMs are started. If a machine isn't claimed by a user (student), the VM won't start during the scheduled hours.-- All virtual machines, whether claimed by a user or not, are stopped based on the lab schedule.--The scheduled running time of VMs does not count against the [quota](classroom-labs-concepts.md#quota) given to a user. The quota is for the time outside of schedule hours that a student spends on VMs. --Educators can create, edit, and delete lab schedules within Canvas as in the Azure Lab Services portal. For more information on scheduling, see [Creating and managing schedules](how-to-create-schedules.md). --> [!IMPORTANT] -> Schedules apply at the course level. If you have many sections of a course, consider using [automatic shutdown policies](how-to-configure-auto-shutdown-lab-plans.md) and/or [quota hours](how-to-configure-student-usage.md#set-quotas-for-users). --## Delete a lab --To delete a lab that was created in Canvas, you use the [Azure Lab Services web portal](https://labs.azure.com). For more information, see [Delete a lab in the Azure Lab Services portal](manage-labs.md#delete-a-lab). --> [!IMPORTANT] -> Uninstalling the Azure Lab Services app from the course will not result in deletion of the lab. Deletion of the course won't cause deletion of the lab either. Users can still access the lab VMs on the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). --## Troubleshooting --This section outlines common error messages that you may see, along with the steps to resolve them. --- Insufficient permissions to create lab.-- In Canvas, an educator will see a message indicating that they don't have sufficient permission. Educators should contact their Azure admin so they can be [added as a **Lab Creator**](quick-create-resources.md#add-a-user-to-the-lab-creator-role). 
For example, educators can be added as a **Lab Creator** to the resource group that contains their lab. --- Message that there isn't enough capacity to create lab VMs.-- [Request a limit increase](capacity-limits.md#request-a-limit-increase), which needs to be done by an Azure Lab Services administrator. --- Student sees warning that the lab isn't available yet.-- In Canvas, you'll see the following message if the educator hasn't published the lab yet. Educators must [publish the lab](tutorial-setup-lab.md#publish-lab) and [sync users](how-to-manage-user-lists-within-canvas.md#sync-users) for students to have access to a lab. -- :::image type="content" source="./media/how-to-manage-labs-within-canvas/troubleshooting-lab-isnt-available-yet.png" alt-text="Troubleshooting -> This lab is not available yet."::: --- Student or educator is prompted to grant access.-- Before a student or educator can first access their lab, some browsers require that they first grant Azure Lab Services access to the browser's local storage. To grant access, educators and students should click the **Grant access** button when they are prompted: -- :::image type="content" source="./media/how-to-manage-labs-within-canvas/canvas-grant-access-prompt.png" alt-text="Screenshot of page to grant Azure Lab Services access to use local storage for the browser."::: -- Educators and students will see the message **Access granted** when access is successfully granted to Azure Lab Services. The educator or student should then reload the browser window to start using Azure Lab Services. -- :::image type="content" source="./media/how-to-manage-labs-within-canvas/canvas-access-granted-success.png" alt-text="Screenshot of access granted page in Azure Lab Services."::: -- > [!IMPORTANT] - > Ensure that students and educators are using an up-to-date version of their browser. 
For older browser versions, students and educators may experience issues when granting access to Azure Lab Services. -- - Educator isn't prompted for their credentials after they click sign-in. - - When an educator accesses Azure Lab Services within their course, they may be prompted to sign in. Ensure that the browser's settings allow popups from the URL of your Canvas instance; otherwise, the popup may be blocked by default. -- :::image type="content" source="./media/how-to-manage-labs-within-canvas/canvas-sign-in.png" alt-text="Azure Lab Services sign-in screen."::: --## Next steps --- [Use Azure Lab Services within Canvas overview](lab-services-within-canvas-overview.md)-- As an admin, [configure Canvas to access Azure Lab Services lab plans](./how-to-configure-canvas-for-lab-plans.md).-- As a student, [access a lab VM within Teams](how-to-access-vm-for-students-within-teams.md). |
lab-services | How To Manage Labs Within Teams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-labs-within-teams.md | - Title: Create and manage labs in Microsoft Teams- -description: Learn how to create and manage Azure Lab Services labs in Microsoft Teams. Manage user lists, VM pools, and configure lab schedules for your labs. - Previously updated : 11/18/2022------# Create and manage Azure Lab Services labs in Microsoft Teams --In this article, you learn how to create and manage Azure Lab Services labs in Microsoft Teams. As an educator, you can create new labs or configure existing labs in the Teams interface. Manage user lists, VM pools, and configure lab schedules for your labs. Students can then access their labs directly from within Teams. Learn more about the [benefits of using Azure Lab Services within Teams](./lab-services-within-teams-overview.md). --This article describes how to: --- [Create a lab in Microsoft Teams](#create-a-lab-in-teams).-- [Manage lab user lists](#manage-lab-user-lists-in-teams).-- [Manage a lab virtual machine (VM) pool](#manage-a-lab-vm-pool-in-teams).-- [Configure lab schedules and settings](#configure-lab-schedules-and-settings-in-teams).-- [Delete a lab](#delete-a-lab).--For more information about adding lab plans to Microsoft Teams, see [Configure Microsoft Teams to access Azure Lab Services lab plans](./how-to-configure-teams-for-lab-plans.md). ---## Prerequisites ---- The Azure Lab Services Teams app is added to your Teams channel. Learn how to [configure Teams for Azure Lab Services](./how-to-configure-teams-for-lab-plans.md).--## Create a lab in Teams --As an educator, you can create a new lab in Teams or by using the Azure Lab Services web portal (https://labs.azure.com). For more information, see how to [create and publish a lab](./tutorial-setup-lab.md). 
--## Manage lab user lists in Teams --When you [create a lab within Teams](#create-a-lab-in-teams), Azure Lab Services automatically syncs the lab user list with the team membership. Everyone on the team, including owners, members, and guests are automatically added to the lab user list. Azure Lab Services adds or deletes users from the lab user list as per changes to the team membership when the sync operation completes. --Azure Lab Services automatically synchronizes the team membership with the lab user list every 24 hours. Educators can select **Sync** in the **Users** tab to manually trigger a sync, for example when the team membership is updated. ---Azure Lab Services automatically updates the lab capacity after publishing the lab: --- If there are any new additions to the team, new VMs are created.-- If a user is deleted from the team, the associated VM is deleted.--## Manage a lab VM pool in Teams --Virtual Machine (VM) creation starts as soon as you publish the template VM. Azure Lab Services creates a number of VMs, known as the VM pool, equivalent to the number of users in the lab user list. --The lab capacity (number of VMs in the lab) automatically matches the team membership. Whenever you add or remove a user from the team, the capacity increases or decreases accordingly. For more information, see [How to manage users within Teams](#manage-lab-user-lists-in-teams). --After publishing the lab and VM creation completes, Azure Lab Services automatically registers users to the lab. Azure Lab Services assigns a lab VM to a user when they first access the **Azure Lab Services** tab in Teams. --To publish a template VM in Teams: --1. Go to the **Azure Lab Services** tab in your team. -1. Select the **Template** tab, and then select **Publish**. -1. In the **Publish template** window, select **Publish**. --As an educator, you can access student VMs directly from the **Virtual machine pool** tab. You can start, stop, reset, or connect to a student VM. 
Educators can also access VMs assigned to themselves either from the **Virtual machine pool** tab, or by selecting **My Virtual Machines** in the top-right corner. ---## Configure lab schedules and settings in Teams --Lab schedules allow you to configure a classroom lab such that the VMs automatically start and shut down at a specified time. You can define a one-time schedule or a recurring schedule. --Lab schedules affect lab virtual machines in the following ways: --- A template VM isn't included in schedules.-- Only assigned VMs are started. If a machine isn't claimed by a user (student), the VM won't start during the scheduled hours.-- All virtual machines, whether claimed by a user or not, are stopped based on the lab schedule.--> [!IMPORTANT] -> The scheduled run time of VMs doesn't count against the quota allotted to a user. The allotted quota is for the time that a student spends on VMs outside of schedule hours. --### Create a lab schedule --As an educator, you can create, edit, and delete lab schedules within Teams or in the Azure Lab Services web portal (https://labs.azure.com). In Teams, go to the **Schedule** tab, and then select **Add scheduled event** to add a schedule for a lab. ---Learn more about [creating and managing schedules](how-to-create-schedules.md). --### Configure automatic shutdown and disconnect settings --You can enable several automatic shutdown cost control features to prevent extra costs when the VMs aren't being actively used. The combination of the following three automatic shutdown and disconnect features catches most of the cases where users accidentally leave their virtual machines running: --- Automatically disconnect users from virtual machines that the OS considers idle.-- Automatically shut down virtual machines when users disconnect.-- Automatically shut down virtual machines that are started but users don't connect.--In Teams, go to the **Settings** tab to configure these settings. 
For more information, see the article on [configuring auto-shutdown settings for a lab](how-to-enable-shutdown-disconnect.md). --## Delete a lab --To delete a lab that was created in Teams, you use the [Azure Lab Services web portal](https://labs.azure.com). For more information, see [Delete a lab in the Azure Lab Services portal](manage-labs.md#delete-a-lab). --Azure Lab Services also triggers lab deletion for labs you created in Teams, when the team is deleted. The lab is automatically deleted within 24 hours after the team deletion, when the automatic user list sync is triggered. --Users can't access their VMs through the [Azure Lab Services web portal](https://labs.azure.com) if the team or the lab is deleted. --> [!IMPORTANT] -> Deletion of the tab or uninstalling the app will not result in deletion of the lab. Users can still access the lab VMs on the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). --## Next steps --- As an admin, [configure Teams to access Azure Lab Services lab plans](./how-to-configure-teams-for-lab-plans.md).-- As a student, [access a lab VM within Teams](how-to-access-vm-for-students-within-teams.md). |
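The membership-driven VM management described for Teams (and likewise for Azure AD groups and Canvas) amounts to reconciling the VM pool against the current membership list on each sync: new members get a VM, and VMs belonging to removed members are deleted. A conceptual sketch of that reconciliation, not Azure Lab Services' actual implementation:

```python
def reconcile_vm_pool(members: set[str], vm_assignments: dict[str, str]) -> dict[str, str]:
    """Return the VM pool after a sync: exactly one VM per current member.
    VMs for users who left are dropped; new members get a (placeholder) VM."""
    # Keep only VMs whose assigned user is still a member.
    pool = {user: vm for user, vm in vm_assignments.items() if user in members}
    # Create a VM for each member who doesn't have one yet.
    for user in members - pool.keys():
        pool[user] = f"vm-{user}"  # hypothetical name for a newly created lab VM
    return pool

# 'carl' left the team, so his VM is removed; 'ben' joined, so a VM is added.
pool = reconcile_vm_pool({"ana", "ben"}, {"ana": "vm-ana", "carl": "vm-carl"})
```

Lab capacity in this model is simply `len(pool)`, which is why you can't set it manually for synced labs.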
lab-services | How To Manage Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-labs.md | This article describes how to create and delete labs. It also shows you how to v ## View all labs +# [Lab Services website](#tab/lab-services-website) + 1. Navigate to the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com).+ 1. Select **Sign in**. Select or enter a **user ID** that is a member of the **Lab Creator** role in the lab plan, and enter your password. Azure Lab Services supports organizational accounts and Microsoft accounts. [!INCLUDE [Select a tenant](./includes/multi-tenant-support.md)] This article describes how to create and delete labs. It also shows you how to v > [!NOTE] > If you're granted access but are unable to view the labs from other people, you might select *All labs* instead of *My labs* in the **Show** filter. +# [Teams](#tab/teams) ++To access your lab in Teams: ++1. Sign in to Microsoft Teams with your organizational account. ++1. Select the team and channel that contain the lab. ++1. Select the **Azure Lab Services** tab. ++ Confirm that you see all labs for the lab plan that's associated with the Teams channel. ++ :::image type="content" source="./media/how-to-manage-labs/teams-view-labs.png" alt-text="Screenshot that shows the list of labs in Microsoft Teams."::: ++# [Canvas](#tab/canvas) ++1. Sign in to Canvas, and select your course. ++1. Select **Azure Lab Services** from the course navigation menu. ++ Confirm that you see all labs for the lab plan that's associated with the course. ++ :::image type="content" source="./media/how-to-manage-labs/canvas-view-labs.png" alt-text="Screenshot that shows the list of labs in Canvas."::: ++++## Use a non-organizational account as a lab creator ++You can access the Azure Lab Services website to create and manage labs without an organizational account (a guest account). 
In this case, you need a Microsoft account, or a GitHub or non-Microsoft email account that is linked to a Microsoft account. ++### Use a non-Microsoft email account +++### Use a GitHub Account ++ ## Delete a lab 1. On the tile for the lab, select three dots (...) in the corner, and then select **Delete**. To switch to a different group, select the left drop-down and choose the lab pla See the following articles: - [As a lab owner, set up and publish templates](how-to-create-manage-template.md)-- [As a lab owner, configure and control usage of a lab](how-to-configure-student-usage.md)+- [As a lab owner, configure and control usage of a lab](how-to-manage-lab-users.md) - [As a lab user, access labs](how-to-use-lab.md) |
lab-services | How To Manage Vm Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-vm-pool.md | Title: Manage a VM pool in Azure Lab Services -description: Learn how to manage a VM pool in Azure Lab Services + Title: Manage a lab VM pool ++description: Learn how to manage a lab VM pool in Azure Lab Services and change the number of lab virtual machines that are available for lab users. ++++ Previously updated : 07/21/2022- Last updated : 07/04/2023 -# Manage a VM pool in Lab Services +# Manage a lab virtual machine pool in Azure Lab Services -The **Virtual machine pool** page of a lab allows educators to set how many VMs are available for use and manage the state of those VMs. +The lab virtual machine pool represents the set of lab virtual machines (VMs) that are available for lab users to connect to. The lab VM creation starts when you publish a lab template, or when you update the lab capacity. Learn how to change the capacity of the lab and modify the number of lab virtual machines, or manage the state of individual lab VMs. -- Start and stop all the VMs at once.-- Start and stop specific VMs.-- Reset a VM.-- Connect to a student's VM.+When you synchronize the lab user list with an Azure AD group, or create a lab in Teams or Canvas, Azure Lab Services manages the lab VM pool automatically based on membership. ++When you manage a lab VM pool, you can: ++- Start and stop all or selected lab VMs. +- Reset a VM +- Connect to a lab user's VM. - Change the lab capacity. -VMs can be in one of a few states. +Lab VMs can be in one of a few states. -- **Unassigned**. These VMs aren't assigned to students yet. These VMs won't be started when a schedule runs.-- **Stopped**. VM is turned off and not available for use.-- **Starting**. VM is starting. It's not yet available for use.-- **Running**. VM is running and available for use.-- **Stopping**. VM is stopping and not available for use.+- **Unassigned**. 
The lab VM isn't assigned to a lab user yet. The lab VM doesn't automatically start with the lab schedule. +- **Stopped**. The lab VM is turned off and not available for use. +- **Starting**. The lab VM is starting. It's not yet available for use. +- **Running**. The lab VM is running and is available for use. +- **Stopping**. The lab VM is stopping and not available for use. > [!WARNING]-> Turning on a student VM will not affect the quota for the student. Make sure to stop all VMs manually or use a [schedule](how-to-create-schedules.md) to avoid unexpected costs. +> When you start a lab VM, it doesn't affect the available [quota hours](./classroom-labs-concepts.md#quota) for the lab user. Make sure to stop all lab VMs manually or use a [schedule](how-to-create-schedules.md) to avoid unexpected costs. ++## Prerequisites +++## Change lab capacity ++When you synchronize the lab user list with an Azure AD group, or create a lab in Teams or Canvas, Azure Lab Services manages the lab VM pool automatically based on membership. When you add or remove a user, the lab capacity increases or decreases accordingly. Lab users are also automatically registered and assigned to their lab VM. ++If you manage the lab user list manually, you can change the lab capacity to adjust the number of lab VMs that are available for lab users. ++1. Go to the **Virtual machine pool** page for the lab. ++1. Select **Lab capacity** on the toolbar. ++1. In the **Lab capacity** window, update the number of lab VMs. ++ :::image type="content" source="./media/how-to-manage-vm-pool/virtual-machine-pool-update-lab-capacity.png" alt-text="Screenshot of Lab capacity window."::: ++## Manually start lab VMs ++To manually start all lab VMs: ++1. Go to the **Virtual machine pool** page for the lab. ++1. Select the **Start all** button at the top of the page. 
++ :::image type="content" source="./media/how-to-set-virtual-machine-passwords/start-all-vms-button.png" alt-text="Screenshot that shows the Virtual machine pool page and the Start all button is highlighted."::: ++To start individual lab VMs: ++1. Go to the **Virtual machine pool** page for the lab. ++1. In the list of lab VMs, select the state toggle control for individual lab VMs. ++ The toggle text changes to **Starting** as the VM starts up, and then **Running** once the VM has started. ++1. Alternately, select multiple VMs using the checks to the left of the **Name** column, and then select the **Start** button at the top of the page. ++## Manually stop lab VMs ++To manually stop all lab VMs: ++1. Go to the **Virtual machine pool** page for the lab. ++1. Select the **Stop all** button to stop all of the lab VMs. ++ :::image type="content" source="./media/how-to-set-virtual-machine-passwords/stop-all-vms-button.png" alt-text="Screenshot that shows the Virtual machine pool page and the Stop all button is highlighted."::: ++To start individual lab VMs: ++1. Go to the **Virtual machine pool** page for the lab. -## Manually starting VMs +1. In the list of lab VMs, select the state toggle control for individual lab VMs. -You can start all VMs in a lab by selecting the **Start all** button at the top of the page. + The toggle text changes to **Stopping** as the VM starts up, and then **Stopped** once the VM has shut down. +1. Alternately, select multiple VMs using the checks to the left of the **Name** column, and then select the **Stop** button at the top of the page. -Individual VMs can be started by clicking the state toggle. The toggle will read **Starting** as the VM starts up, and then **Running** once the VM has started. You can also select multiple VMs using the checks to the left of the **Name** column. Once the VMs are checked, select the **Start** button at the top of the screen. 
+## Reset lab VMs -## Manually stopping VMs +When you reset a lab VM, Azure Lab Services shuts down the lab VM, deletes it, and recreates a new lab VM from the original template VM. You can think of a reset as a refresh of the entire lab VM. -You can select the **Stop all** button to stop all of the VMs. +> [!CAUTION] +> After you reset a lab VM, all the data that's saved on the OS disk (usually the C: drive on Windows), and the temporary disk (usually the D: drive on Windows), is lost. Learn how to [store the user data outside the lab VM](/azure/lab-services/troubleshoot-access-lab-vm#store-user-data-outside-the-lab-vm). +To reset one or more lab VMs: -Individual VMs can be stopped by clicking the state toggle. The toggle will read **Stopping** as the VM shuts down, and then **Stopped** once the VM has shutdown. You can also select multiple VMs using the checks to the left of the **Name** column. Once the VMs are checked, select the **Stop** button at the top of the screen. +1. Go to the **Virtual machine pool** page for the lab. -## Reset VMs +1. Select **Reset** in the toolbar. -To reset one or more VMs, select them in the list, and then select **Reset** on the toolbar. + :::image type="content" source="./media/how-to-set-virtual-machine-passwords/reset-vm-button.png" alt-text="Screenshot of virtual machine pool. Reset button is highlighted."::: +1. On the **Reset virtual machine(s)** dialog box, select **Reset**. -On the **Reset virtual machine(s)** dialog box, select **Reset**. + :::image type="content" source="./media/how-to-set-virtual-machine-passwords/reset-vms-dialog.png" alt-text="Screenshot of reset virtual machine confirmation dialog."::: +### Redeploy lab VMs -### Redeploy VMs +When you use [lab plans](./lab-services-whats-new.md), lab users can now redeploy their lab VM. This operation is labeled **Troubleshoot** in Azure Lab Services. 
When you redeploy a lab VM, Azure Lab Services will shut down the VM, move the VM to a new node within the Azure infrastructure, and then power it back on. -In the [April 2022 Update](lab-services-whats-new.md), redeploying VMs replaces the previous reset VM behavior. In the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com), the command is named **Troubleshoot** and is available in the student's view of their VMs. For more information and instructions on how students can redeploy their VMs, see: [Redeploy VMs](how-to-reset-and-redeploy-vm.md#redeploy-vms). +Learn how [lab users can redeploy their lab VM](./how-to-reset-and-redeploy-vm.md#redeploy-vms). -## Connect to VMs +## Connect to lab VMs -Educators can connect to a student VM as long as it's turned on. Verify the student *isn't* connected to the VM first. By connecting to the VM, you can access local files on the VM and help students troubleshoot issues. +You can connect to a lab user's VM, for example to access local files on the lab VM and help lab users troubleshoot issues. To connect to a lab VM, it must be running. -To connect to the student VM, hover the mouse on the VM in the list and select the **Connect** button. For further instructions based on the operating system you're using, see [Connect to a lab VM](connect-virtual-machine.md). +1. Go to the **Virtual machine pool** page for the lab. -## Set lab capacity +1. Verify that the lab user is *not* connected to the lab VM. -To change the lab capacity (number of VMs in the lab), select **Lab capacity** on the toolbar and update number of VMs on the **Lab capacity** window on the right. +1. Hover over the lab VM in the list, and then select the **Connect** button. + For further instructions based on the operating system you're using, see [Connect to a lab VM](connect-virtual-machine.md). 
-If using [Teams](./how-to-manage-labs-within-teams.md#manage-a-lab-vm-pool-in-teams) or [Canvas](how-to-manage-vm-pool-within-canvas.md) integration, lab capacity will automatically be updated when Azure Lab Services syncs the user list. +## Export the list of lab VMs -## Export list of VMs +1. Go to the **Virtual machine pool** page for the lab. -1. Switch to the **Virtual machine pool** tab. -2. Select **...** (ellipsis) on the toolbar and then select **Export CSV**. +1. Select **...** (ellipsis) on the toolbar, and then select **Export CSV**. :::image type="content" source="./media/how-to-manage-vm-pool/virtual-machines-export-csv.png" alt-text="Screenshot of virtual machine pool page in Azure Lab Services. The Export CSV menu item is highlighted."::: If using [Teams](./how-to-manage-labs-within-teams.md#manage-a-lab-vm-pool-in-te See the following articles: - [As a lab owner, set up and publish templates](how-to-create-manage-template.md)-- [As a lab owner, configure and control usage of a lab](how-to-configure-student-usage.md)+- [As a lab owner, configure and control usage of a lab](how-to-manage-lab-users.md) - [As a lab user, access labs](how-to-use-lab.md) |
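The **Export CSV** step above produces a file you can process offline. A hedged sketch of reading such an export with Python's standard `csv` module — the column names (`Name`, `State`) are assumptions, since the article doesn't document the export schema; adjust them to match your actual file:

```python
import csv
import io

# Hypothetical sample of an exported lab VM list. The real export's
# column headers may differ — this is illustrative only.
sample_export = """Name,State
Lab-VM-0,Stopped
Lab-VM-1,Running
"""

def count_running(csv_text: str) -> int:
    """Count rows whose State column reads Running."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(1 for row in reader if row["State"] == "Running")

print(count_running(sample_export))  # 1
```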
lab-services | How To Set Virtual Machine Passwords Student | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-set-virtual-machine-passwords-student.md | Students can also set the password by clicking the overflow menu (**vertical thre ## Next steps -To learn about other student usage options that a lab owner can configure, see the following article: [Configure student usage](how-to-configure-student-usage.md). +To learn about other student usage options that a lab owner can configure, see the following article: [Configure student usage](how-to-manage-lab-users.md). |
lab-services | How To Set Virtual Machine Passwords | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-set-virtual-machine-passwords.md | By enabling the **Use same password for all virtual machines** option on this pa ## Next steps -To learn about other student usage options you (as a lab owner) can configure, see the following article: [Configure student usage](how-to-configure-student-usage.md). +To learn about other student usage options you (as a lab owner) can configure, see the following article: [Configure student usage](how-to-manage-lab-users.md). To learn about how students can reset passwords for their VMs, see [Set or reset password for virtual machines in labs (students)](how-to-set-virtual-machine-passwords-student.md). |
lab-services | How To Use Lab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-lab.md | - Title: How to access and manage a lab VM- -description: Learn how to register for a lab. Also learn how to view, start, stop, and connect to all the lab VMs assigned to you. ----- Previously updated : 02/01/2023---# Access a lab in Azure Lab Services --Before you can access a lab in Azure Lab Services, you first need to register for the lab. In this article, you learn how to register for a lab, connect to a lab virtual machine (VM), start and stop the lab VM, and how to monitor your quota hours. --## Prerequisites --- To register for a lab, you need a lab registration link.-- To view, start, stop, and connect to a lab VM, you need to register for the lab and have an assigned lab VM.--## Register for the lab --To get access to a lab and connect to the lab VM from the Azure Lab Services website, you first need to register for the lab by using a lab registration link. The lab creator can [provide the registration link for the lab](./how-to-configure-student-usage.md#send-invitations-to-users). --To register for a lab by using the registration link: --1. Open the lab registration URL in a browser. -- After you complete the lab registration, you no longer need the registration link. Instead, you can navigate to the Azure Lab Services website (https://labs.azure.com) to access your labs. -- :::image type="content" source="./media/how-to-use-lab/register-lab.png" alt-text="Screenshot of registration link for lab."::: --1. Sign in to the service using your organizational or school account to complete the registration. -- > [!NOTE] - > You need a Microsoft account to use Azure Lab Services, unless you're using Canvas. If you try to use a non-Microsoft account, such as a Yahoo or Google account, to sign in to the portal, follow the instructions to create a Microsoft account that's linked to your non-Microsoft account. 
Then, follow the steps to complete the lab registration process. --1. After the registration finishes, confirm that you see the lab virtual machine in **My virtual machines**. -- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/accessible-vms.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services."::: --## View your lab virtual machines --You can view all the lab virtual machines assigned to you in the Azure Lab Services website. Alternatively, if your organization uses Azure Lab Services with Microsoft Teams or Canvas, learn how you can [access your lab VMs in Microsoft Teams](./how-to-access-vm-for-students-within-teams.md) or [access your lab VMs in Canvas](./how-to-access-vm-for-students-within-canvas.md). --1. Go to the [Azure Lab Services website](https://labs.azure.com). --1. The page has a tile for each lab VM that you have access to. The VM tile shows the VM details and provides access to functionality for controlling the lab VM: -- - In the top-left, notice the name of the lab. The lab creator specifies the lab name when creating the lab. - - In the top-right, you can see an icon that represents the operating system (OS) of the VM. - - In the center, you can see a progress bar that shows your [quota hours consumption](#view-quota-hours). - - In the bottom-left, you can see the status of the lab VM and a control to [start or stop the VM](#start-or-stop-the-vm). - - In the bottom-right, you have the control to [connect to the lab VM](./connect-virtual-machine.md) with remote desktop (RDP) or secure shell (SSH). - - Also in the bottom-right, you can [reset or troubleshoot the lab VM](./how-to-reset-and-redeploy-vm.md), if you experience problems with the VM. 
-- :::image type="content" source="./media/how-to-use-lab/lab-services-virtual-machine-tile.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the VM tile sections."::: --## Start or stop the VM --As a lab user, you can start or stop a lab VM from the Azure Lab Services website. Alternatively, you can stop a lab VM by using the operating system shutdown command from within the lab VM. The preferred method to stop a lab VM is to use the [Azure Lab Services website](https://labs.azure.com) to avoid incurring additional costs. --> [!TIP] -> With the [April 2022 Updates](lab-services-whats-new.md), Azure Lab Services will detect when a lab user shuts down their VM using the OS shutdown command. After a long delay to ensure the VM wasn't being restarted, the lab VM will be marked as stopped and billing will discontinue. --To start or stop a lab VM in the Azure Lab Services website: --1. Go to the [Azure Lab Services website](https://labs.azure.com). --1. Use the toggle control in the bottom-left of the VM tile to start or stop the lab VM. -- Depending on the current status of the lab VM, the toggle control starts or stops the VM. While the VM is starting or stopping, the control is inactive. -- Starting or stopping the lab VM might take some time to complete. -- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/start-vm.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the status toggle and status label on the VM tile."::: --1. After the operation finishes, confirm that the lab VM status is correct. 
-- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-running.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the status label on the VM tile."::: --## Connect to the VM --Depending on the lab VM operating system configuration, you can use remote desktop (RDP) or secure shell (SSH) to connect to your lab VM. Learn more about how to [connect to a lab VM](connect-virtual-machine.md). --## View quota hours --On the lab VM tile in the [Azure Lab Services website](https://labs.azure.com), you can view your consumption of [quota hours](how-to-configure-student-usage.md#set-quotas-for-users) in the progress bar. Quota hours are the extra time allotted to you outside of the [scheduled time](./classroom-labs-concepts.md#schedule) for the lab. For example, time outside of class to complete homework. --The color of the progress bar and the text under the progress bar change depending on the scenario: --- A class is in progress, according to the lab schedules: the progress bar is grayed out to indicate that you're not using quota hours.-- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-class-in-progress.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when a schedule started the VM."::: --- The lab has no quota (zero hours): the text **Available during classes only** shows in place of the progress bar.-- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/available-during-class.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when there's no quota."::: --- You ran out of quota: the color of the progress bar is **red**.-- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-red-color.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when there's quota usage."::: --- No class is in progress, according to the lab schedules: the color of the progress bar 
is **blue** to indicate that it's outside the scheduled time for the lab, and some of the quota time was used.-- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-blue-color.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when quota has been partially used."::: --## Next steps --See the following articles: --- [As an admin, create and manage lab plans](how-to-manage-lab-plans.md)-- [As a lab owner, create and manage labs](how-to-manage-labs.md)-- [As a lab owner, set up and publish templates](how-to-create-manage-template.md)-- [As a lab owner, configure and control usage of a lab](how-to-configure-student-usage.md) |
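The quota progress-bar scenarios above can be summarized as a small decision function. This is an illustrative sketch only — the four outcomes come from the article, but the evaluation order is an assumption:

```python
def progress_bar_style(class_in_progress: bool, quota_hours: float,
                       used_hours: float) -> str:
    """Sketch of the progress-bar states described in the article.
    The decision order is an assumption; the four outcomes are from the text."""
    if quota_hours == 0:
        return "text: Available during classes only"
    if class_in_progress:
        return "grayed out"          # scheduled time doesn't consume quota
    if used_hours >= quota_hours:
        return "red"                 # quota exhausted
    return "blue"                    # outside schedule, quota partially used
```

For example, a lab with a 10-hour quota and 2 hours used, viewed outside scheduled class time, shows the blue bar.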
lab-services | How To Windows Shutdown | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-windows-shutdown.md | Last updated 06/02/2023 In this article, you learn how to remove the shutdown command from the Windows Start menu in lab virtual machines in Azure Lab Services. When a lab user performs a shutdown in the operating system instead of stopping the lab virtual machine, the shutdown might interfere with the Azure Lab Services cost control measures. -Azure Lab Services provides different cost control measures, such as [lab schedules](./how-to-create-schedules.md), [quota hours](./how-to-configure-student-usage.md#set-quotas-for-users), and [automatic shutdown policies](./how-to-enable-shutdown-disconnect.md). +Azure Lab Services provides different cost control measures, such as [lab schedules](./how-to-create-schedules.md), [quota hours](./how-to-manage-lab-users.md#set-quotas-for-users), and [automatic shutdown policies](./how-to-enable-shutdown-disconnect.md). When the Windows shutdown command is used to turn off a lab virtual machine, the service considers the lab virtual machine to still be running and accumulating costs. Instead, lab users should use the [stop functionality of the lab virtual machine](./how-to-use-lab.md#start-or-stop-the-vm). To prevent inadvertently shutting down the lab virtual machine, you can remove the shutdown command from the Windows Start menu. |
lab-services | Lab Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-overview.md | Azure Lab Services supports the following key capabilities and features: - **Advanced virtual networking support**. [Configure advanced networking](./tutorial-create-lab-with-advanced-networking.md) for your labs to apply network traffic control, network ports management, or access resources in a virtual or internal network. For example, your labs might have to connect to an on-premises licensing server. -- **Cost optimization and analysis**. Azure Lab Services uses a consumption-based [cost model](cost-management-guide.md) and you pay only for lab virtual machines when they're running. Further optimize your costs for running labs by [automatically shutting down lab virtual machines](./how-to-configure-auto-shutdown-lab-plans.md), and by configuring [schedules](./how-to-create-schedules.md) and [usage quotas](./how-to-configure-student-usage.md#set-quotas-for-users) to limit the number of hours the labs can be used.+- **Cost optimization and analysis**. Azure Lab Services uses a consumption-based [cost model](cost-management-guide.md) and you pay only for lab virtual machines when they're running. Further optimize your costs for running labs by [automatically shutting down lab virtual machines](./how-to-configure-auto-shutdown-lab-plans.md), and by configuring [schedules](./how-to-create-schedules.md) and [usage quotas](./how-to-manage-lab-users.md#set-quotas-for-users) to limit the number of hours the labs can be used. ## Use cases |
lab-services | Lab Services Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-whats-new.md | We've made fundamental improvements for the service to boost performance, reliab **[Updates to lab owner experience](how-to-manage-labs.md)**. Choose to skip the template creation process when creating a new lab if you already have an image ready to use. We've also added the ability to add a non-admin user to lab VMs. -**[Updates to student experience](how-to-manage-vm-pool.md#redeploy-vms)**. Students can now redeploy their VM without losing data. We also updated the registration experience for some scenarios. A lab VM is assigned to students *automatically* if the lab is set up to use Azure AD group sync, Teams, or Canvas. +**[Updates to student experience](how-to-manage-vm-pool.md#redeploy-lab-vms)**. Students can now redeploy their VM without losing data. We also updated the registration experience for some scenarios. A lab VM is assigned to students *automatically* if the lab is set up to use Azure AD group sync, Teams, or Canvas. **SDKs**. The Azure Lab Services PowerShell is now integrated with the [Az PowerShell module](/powershell/azure/release-notes-azureps). Also, check out the C# SDK. |
lab-services | Migrate To 2022 Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/migrate-to-2022-update.md | Update reports to include the new cost entry type, `Microsoft.LabServices/labs`, - As an admin, [create a lab plan](quick-create-resources.md). - As an admin, [manage your lab plan](how-to-manage-lab-plans.md).-- As an educator, [configure and control usage of a lab](how-to-configure-student-usage.md).+- As an educator, [configure and control usage of a lab](how-to-manage-lab-users.md). |
lab-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md | Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/21/2023 Last updated : 07/06/2023 |
lab-services | Setup Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/setup-guide.md | After you understand the requirements for your class's lab, you're ready to set - [Send invitations to users](./tutorial-setup-lab.md#send-invitation-emails) - [Manage Lab Services user lists in Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) - For information about the types of accounts that students can use, see [Student accounts](./how-to-configure-student-usage.md#user-account-types). + For information about the types of accounts that students can use, see [Student accounts](./how-to-access-lab-virtual-machine.md#user-account-types). 1. **Set cost controls**. To set a schedule, establish quotas, and enable automatic shutdown, see the following tutorials: After you understand the requirements for your class's lab, you're ready to set > [!NOTE] > Depending on the operating system you've installed, a VM might take several minutes to start. To ensure that a lab VM is ready for use during your scheduled hours, we recommend that you start it 30 minutes in advance. - - [Set quotas for users](./how-to-configure-student-usage.md#set-quotas-for-users) and [set additional quotas for specific users](./how-to-configure-student-usage.md#set-additional-quotas-for-specific-users) + - [Set quotas for users](./how-to-manage-lab-users.md#set-quotas-for-users) and [set additional quotas for specific users](./how-to-manage-lab-users.md#set-additional-quotas-for-specific-users) - [Enable automatic shutdown on disconnect](./how-to-enable-shutdown-disconnect.md) |
lab-services | Specify Marketplace Images 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/specify-marketplace-images-1.md | To disable one or more images: - As an educator, [create and manage labs](how-to-manage-classroom-labs.md). - As an educator, [configure and publish templates](how-to-create-manage-template.md).-- As an educator, [configure and control usage of a lab](how-to-configure-student-usage.md).+- As an educator, [configure and control usage of a lab](how-to-manage-lab-users.md). - As a student, [access labs](how-to-use-lab.md). |
lab-services | Specify Marketplace Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/specify-marketplace-images.md | To disable one or more images: - As an educator, [create and manage labs](how-to-manage-classroom-labs.md). - As an educator, [configure and publish templates](how-to-create-manage-template.md).-- As an educator, [configure and control usage of a lab](how-to-configure-student-usage.md).+- As an educator, [configure and control usage of a lab](how-to-manage-lab-users.md). - As a student, [access labs](how-to-use-lab.md). |
lab-services | Tutorial Access Lab Virtual Machine Teams Canvas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-access-lab-virtual-machine-teams-canvas.md | + + Title: "Tutorial: Access lab VM from Teams/Canvas" ++description: Learn how to access a VM (student view) in Azure Lab Services from Canvas. +++++ Last updated : 07/04/2023+++# Tutorial: Access a lab VM from Teams or Canvas ++In this tutorial, you learn how to access a lab virtual machine by using the Azure Lab Services app in Teams or Canvas. After you start the lab VM, you can then remotely connect to the lab VM by using secure shell (SSH). +++> [!div class="checklist"] +> * Access the lab in Teams or Canvas +> * Start the lab VM +> * Connect to the lab VM ++## Prerequisites ++- A lab that was created in the Teams or Canvas. Complete the steps in [Tutorial: Create and publish a lab in Teams or Canvas](./tutorial-setup-lab-teams-canvas.md) to create a lab. ++## Access a lab ++# [Teams](#tab/teams) ++When you access a lab in Microsoft Teams, you're automatically registered for the lab, based on your team membership in Microsoft Teams. ++To access your lab in Teams: ++1. Sign into Microsoft Teams with your organizational account. ++1. Select the team and channel that contain the lab. ++1. Select the **Azure Lab Services** tab to view your lab virtual machines. ++ :::image type="content" source="./media/tutorial-access-lab-virtual-machine-teams-canvas/teams-view-lab.png" alt-text="Screenshot of lab in Teams after it's published."::: ++ You might see a message that the lab isn't available. This error can occur when the lab isn't published yet by the lab creator, or if the Teams membership information still needs to synchronize. ++# [Canvas](#tab/canvas) ++When you access a lab in [Canvas](https://www.instructure.com/canvas), you're automatically registered for the lab, based on your course membership in Canvas. 
Azure Lab Services supports test users in Canvas and the ability for the educator to act as another user. ++To access your lab in Canvas: ++1. Sign in to Canvas by using your Canvas credentials. ++1. Go to the course, and then open the **Azure Lab Services** app. ++ :::image type="content" source="./media/tutorial-access-lab-virtual-machine-teams-canvas/canvas-view-lab.png" alt-text="Screenshot of a lab in the Canvas portal."::: ++ You might see a message that the lab isn't available. This error can occur when the lab isn't published yet by the lab creator, or if the Canvas course membership still needs to synchronize. ++++## Start the lab VM ++You can start a lab virtual machine from the **My virtual machines** page. If the lab creator configured a lab schedule, the lab VM is automatically started and stopped during the scheduled hours. ++To start the lab VM: ++1. Go to the **My virtual machines** page in Teams or Canvas. ++1. Use the toggle control next to the lab VM status to start the lab VM. ++ While the VM is starting, the control is inactive. Starting the lab VM might take some time to complete. ++ :::image type="content" source="./media/tutorial-access-lab-virtual-machine-teams-canvas/start-vm.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the status toggle and status label on the VM tile."::: ++1. After the operation finishes, confirm that the lab VM status is *Running*. ++ :::image type="content" source="./media/tutorial-access-lab-virtual-machine-teams-canvas/vm-running.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the status label on the VM tile."::: ++## Connect to the lab VM ++When the lab virtual machine is running, you can remotely connect to the VM. Depending on the lab VM operating system configuration, you can connect by using remote desktop (RDP) or secure shell (SSH). 
++If there are no quota hours available, you can't start the lab VM outside the scheduled lab hours and can't connect to the lab VM. ++Learn more about how to [connect to a lab VM](connect-virtual-machine.md). ++## Next steps ++- [Access lab virtual machines in Azure Lab Services](./how-to-access-lab-virtual-machine.md) +- [Connect remotely to a lab virtual machine](./connect-virtual-machine.md) |
lab-services | Tutorial Connect Lab Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-connect-lab-virtual-machine.md | Title: 'Tutorial: Access a lab in Azure Lab Services' + Title: 'Tutorial: Register & access a lab' -description: In this tutorial, learn how you can register for a lab in Azure Lab Services and connect to the lab virtual machine. +description: In this tutorial, learn how to register for a lab in Azure Lab Services and connect to the lab virtual machine from the Azure Lab Services website. Previously updated : 02/17/2023 Last updated : 06/29/2023 -# Tutorial: Access a lab in Azure Lab Services from the Lab Services website +# Tutorial: Register and access a lab in the Azure Lab Services website -In this tutorial, learn how you can register for a lab as a lab user, and then start and connect to lab virtual machine (VM) by using the Azure Lab Services website. +Azure Lab Services supports inviting lab users based on their email address, by syncing with an Azure Active Directory group, or by integrating with Teams or Canvas. In this tutorial, you learn how to register for a lab with your email address, view the lab in the Azure Lab Services website, and connect to the lab virtual machine with a remote desktop client or SSH. -If you're using Microsoft Teams or Canvas with Azure Lab Services, learn how you can [access your lab from Microsoft Teams](./how-to-access-vm-for-students-within-teams.md) or how you can [access your lab from Canvas](./how-to-access-vm-for-students-within-canvas.md). -> [!div class="checklist"] -> * Register to the lab -> * Start the VM -> * Connect to the VM +If you're using Microsoft Teams or Canvas with Azure Lab Services, learn more in our [Tutorial: access your lab from Microsoft Teams or Canvas](./how-to-access-vm-for-students-within-teams.md). 
-## Register to the lab +> [!div class="checklist"] +> * Register for the lab by using an email address +> * Access the lab in the Azure Lab Services website +> * Start the lab VM +> * Connect to the lab VM -Before you can use the lab from the Azure Lab Services website, you need to first register for the lab by using a registration link. +## Prerequisites -To register for a lab by using the registration link: +- A lab that was created in the Azure Lab Services website. Follow the steps to create a lab and invite users in [Tutorial: Create a lab for classroom training](./tutorial-setup-lab.md). -1. Navigate to the registration URL that you received from the lab creator. +- You've received a lab registration link. - You have to register for each lab that you want to access. After you complete registration for a lab, you no longer need the registration link for that lab. +## Register for the lab - :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/register-lab.png" alt-text="Screenshot of browser with example registration link for Azure Lab Services, highlighting the registration link."::: -1. Sign in to the service using your organizational or school account to complete the registration. +## Access the lab in the Azure Lab Services website - > [!NOTE] - > You need a Microsoft account to use Azure Lab Services, unless you're using Canvas. If you try to use your non-Microsoft account, such as Yahoo or Google accounts, to sign in to the portal, follow the instructions to create a Microsoft account that's linked to your non-Microsoft account. Then, follow the steps to complete the lab registration process. +After the registration process finishes, you can now view the labs you have access to. Once you've registered for the lab, you can directly access your labs from the Azure Lab Services website (https://labs.azure.com). -1. After the registration finishes, confirm that you see the lab virtual machine in **My virtual machines**. +1. 
Select **My virtual machines** and confirm that you can see your lab virtual machine. - After you complete the registration, you can directly access your lab VMs by using the Azure Lab Services website (https://labs.azure.com). + The page has a tile for each of your lab virtual machines and shows the lab name, operating system, and the VM status. :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/accessible-vms.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services portal."::: -1. On the **My virtual machines** page, you can see a tile for your lab VM. Confirm that the VM is in the **Stopped** state. +1. Confirm that the lab VM is in the **Stopped** state. - The VM tile shows the lab VM details, such as the lab name, operating system, and status. The VM tile also enables you to perform specific actions on the lab VM, such starting and stopping it. + The VM tile enables you to perform specific actions on the lab VM, such as starting and stopping it. :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-in-stopped-state.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services website, highlighting the stopped state."::: -## Start the VM +## Start the lab VM -Before you can connect to a lab VM, the VM must be running. +Before you can connect to a lab VM, the lab VM must be running. To start the lab VM from the Azure Lab Services website: -1. Go to the [Azure Lab Services website](https://labs.azure.com). --1. Start the VM by selecting the status toggle control. -- Starting the lab VM might take some time. +1. Start the VM by selecting the status toggle control. Starting the lab VM might take some time. 
:::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/start-vm.png" alt-text="Screenshot of My virtual machines page in the Azure Lab Services website, highlighting the VM state toggle."::: To start the lab VM from the Azure Lab Services website: :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-running.png" alt-text="Screenshot of My virtual machines page in the Azure Lab Services website, highlighting the VM is running."::: -## Connect to the VM +## Connect to the lab VM -You can now connect to the lab VM. You can retrieve the connection information from the Azure Lab Services website. +Now that the lab VM is running, you can connect to it with a remote desktop client or SSH, depending on the operating system. -1. Go to the [Azure Lab Services website](https://labs.azure.com). +To retrieve the connection information from the Azure Lab Services website: -1. Select the connect button in the lower right of the VM tile to retrieve the connection information. +1. Select the connect button in the lower right of the lab VM tile to retrieve the connection information. :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/connect-vm.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services website, highlighting the Connect button."::: You can now connect to the lab VM. You can retrieve the connection information f ## Next steps -In this tutorial, you accessed a lab using the registration link you got from the lab creator. When done with the VM, you stop the lab VM from the Azure Lab Services website. +In this tutorial, you registered for a lab using the registration link you got from the lab creator. You then accessed the lab in the Azure Lab Services website and connected to the lab VM with a remote desktop client or SSH. 
->[!div class="nextstepaction"] ->[Stop the VM](how-to-use-lab.md#start-or-stop-the-vm) +- Learn about the different ways to [access a lab](./how-to-use-lab.md) +- Learn how to [connect to a lab VM with SSH or RDP](./connect-virtual-machine.md) +- Learn how to [stop a lab VM](how-to-use-lab.md#start-or-stop-the-vm) |
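The connect button described above surfaces a host name, port, and the user name you use to sign in. As a rough sketch of the SSH path for a Linux lab VM (the host, port, and user name below are hypothetical placeholders, not values from this article — copy the real ones from the **Connect** dialog in the Azure Lab Services website):

```shell
# Hypothetical connection values -- replace with the details shown in the
# lab VM's "Connect" dialog in the Azure Lab Services website.
LAB_VM_HOST="example-lab.westus2.cloudapp.azure.com"
LAB_VM_PORT=49152          # lab VMs listen on a lab-specific port, not 22
LAB_VM_USER="student"

# Build the SSH command; run it interactively to open the session.
SSH_CMD="ssh -p ${LAB_VM_PORT} ${LAB_VM_USER}@${LAB_VM_HOST}"
echo "${SSH_CMD}"
```

For Windows lab VMs, the same dialog instead offers an RDP file that opens in a remote desktop client.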
lab-services | Tutorial Create Lab With Advanced Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-create-lab-with-advanced-networking.md | First, let's start and connect to a lab VM from each lab. Complete the followin 1. Select the **State** slider to change the state from **Stopped** to **Starting**. > [!NOTE]- > When an educator turns on a student VM, quota for the student isn't affected. Quota for a user specifies the number of lab hours available to a student outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-configure-student-usage.md?#set-quotas-for-users). + > When an educator turns on a student VM, quota for the student isn't affected. Quota for a user specifies the number of lab hours available to a student outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-manage-lab-users.md?#set-quotas-for-users). 1. Once the **State** is **Running**, select the connect icon for the running VM. Open the downloaded RDP file to connect to the VM. For more information about connection experiences on different operating systems, see [Connect to a lab VM](connect-virtual-machine.md). :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/virtual-machine-pool-running-vm.png" alt-text="Screenshot of virtual machine pool page for Azure Lab Services lab."::: If you're not going to continue to use this application, delete the virtual netw ## Next steps >[!div class="nextstepaction"]->[Add students to the labs](how-to-configure-student-usage.md) +>[Add students to the labs](how-to-manage-lab-users.md) |
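The downloaded RDP file mentioned above opens in the Windows Remote Desktop client. Equivalently, the connection can be started from a command prompt with `mstsc`; as a sketch (the host and port below are hypothetical placeholders — the real values are embedded in the `.rdp` file Azure Lab Services generates for your lab VM):

```shell
# Hypothetical values -- the "Connect" option in Azure Lab Services downloads
# an .rdp file containing the real host and port for your lab VM.
LAB_VM_HOST="example-lab.westus2.cloudapp.azure.com"
LAB_VM_PORT=49152

# On Windows, the same connection can be launched from a command prompt with
# mstsc, the built-in Remote Desktop client:
RDP_CMD="mstsc /v:${LAB_VM_HOST}:${LAB_VM_PORT}"
echo "${RDP_CMD}"
```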
lab-services | Tutorial Setup Lab Teams Canvas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab-teams-canvas.md | + + Title: "Tutorial: Create a lab in Teams or Canvas" ++description: Learn how to create a lab by using the Teams or Canvas app for Azure Lab Services. + Last updated : 07/04/2023+++++# Tutorial: Create a lab with the Azure Lab Services app in Teams or Canvas ++With Azure Lab Services, you can create labs directly from within Microsoft Teams or Canvas. In this tutorial, you use the Azure Lab Services app for Microsoft Teams or Canvas to create and publish a lab. After you complete this tutorial, lab users can directly access their lab virtual machine from Teams or Canvas. ++With the Azure Lab Services app for Teams or Canvas, you can create and manage labs without having to leave the Teams or Canvas environment, and lab user management is synchronized based on team or course membership. Lab users are automatically registered for a lab and have a lab VM assigned to them. They can also access their lab VM directly from within Teams or Canvas. +++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Configure the Azure Lab Services app +> * Create a lab in Teams or Canvas +> * Publish the lab to create the lab VMs ++## Prerequisites +++# [Teams](#tab/teams) ++- To add the Azure Lab Services Teams app to a channel, your account needs to be an owner of the team in Microsoft Teams. ++# [Canvas](#tab/canvas) ++- To add the Azure Lab Services app to Canvas, your Canvas account needs [Admin permissions](https://community.canvaslms.com/t5/Canvas-Basics-Guide/What-is-the-Admin-role/ta-p/78). ++++## Configure the Azure Lab Services app ++# [Teams](#tab/teams) ++Before you can create and manage labs in Teams, you need to configure Teams to use the Azure Lab Services app and to grant access to your lab plan. 
Follow these steps to [configure Teams for Azure Lab Services](./how-to-configure-teams-for-lab-plans.md). ++After you've configured Teams, you can access the Azure Lab Services app from a channel in Teams. All users who are members of the team are automatically added as lab users and assigned a lab virtual machine. ++In the next step, you use the Azure Lab Services app to create a lab. ++# [Canvas](#tab/canvas) ++Before you can create and manage labs in Canvas, you need to configure Canvas to use the Azure Lab Services app and to grant access to your lab plan. Follow these steps to [configure Canvas for Azure Lab Services](./how-to-configure-canvas-for-lab-plans.md). ++After you've configured Canvas, you can access the Azure Lab Services app from a course in Canvas. All course members are automatically added as lab users and assigned a lab virtual machine. ++In the next step, you use the Azure Lab Services app to create a lab. ++++## Access the Azure Lab Services app ++# [Teams](#tab/teams) ++1. Open Microsoft Teams, and select your team and channel. ++ You should see the **Azure Lab Services** tab. ++1. Select the **Azure Lab Services** tab. ++ If you don't have any labs, you should see the welcome page. Otherwise, you can see the list of labs you created earlier. ++ :::image type="content" source="./media/tutorial-setup-lab-teams-canvas/teams-azure-lab-services-tab.png" alt-text="Screenshot that shows the Azure Lab Services tab in Teams."::: ++ > [!TIP] + > Use the **Show** filter to switch between your labs and all labs you have access to. ++# [Canvas](#tab/canvas) ++1. Sign in to Canvas, and select your course. ++ If you're authenticated in Canvas as an educator, you'll see a sign-in screen before you can use the Azure Lab Services app. Sign in here with an Azure AD account or Microsoft account that was added as a lab creator. ++1. Select **Azure Lab Services** from the course navigation menu. 
++ If you don't have any labs, you should see the welcome page. Otherwise, you can see the list of labs you created earlier. ++ :::image type="content" source="./media/tutorial-setup-lab-teams-canvas/welcome-to-lab-services.png" alt-text="Screenshot that shows the welcome page in Canvas."::: ++++## Create a new lab ++A lab contains the configuration and settings for creating lab VMs. All lab VMs within a lab are identical. You use the Azure Lab Services app to create a lab in the lab plan. ++> [!IMPORTANT] +> You can only see labs in Teams or Canvas that you created with the Azure Lab Services app. If you created a lab in the Azure Lab Services website, it is not visible in Teams or Canvas. ++1. Select **Create lab** to start creating a new lab. ++1. On the **New Lab** page, enter the following information, and then select **Next**: ++ | Field | Description | + | | -- | + | **Name** | Enter *programming-101*. | + | **Virtual machine image** | Select *Windows Server 2022 Datacenter*. | + | **Virtual machine size** | Select *Small*. | + | **Location** | Leave the default value. | ++ Some virtual machine sizes might not be available depending on the lab plan region and your [subscription core limit](./how-to-request-capacity-increase.md). Learn more about [virtual machine sizes in the administrator's guide](./administrator-guide.md#vm-sizing). ++ You can [enable or disable specific virtual machine images](./specify-marketplace-images.md#enable-and-disable-images) by configuring the lab plan. ++1. On the **Virtual machine credentials** page, specify a default **username** and **password**, and then select **Next**. ++ By default, all the lab VMs use the same credentials. ++ > [!IMPORTANT] + > Make a note of the username and password. They won't be shown again. ++1. On the **Lab policies** page, leave the default values and select **Next**. 
++ The default settings enable secure shell (SSH) access to the lab virtual machine, provide users with 10 quota hours, and shut down the lab VMs when there's no activity. ++1. On the **Template virtual machine settings** page, select **Use virtual machine image without customization**. ++ In this tutorial, you use the VM image as-is, known as a *templateless VM*. Azure Lab Services also supports creating a *template VM*, which lets you make configuration changes or install software on top of the VM image. ++ :::image type="content" source="./media/tutorial-setup-lab-teams-canvas/templateless-virtual-machine-settings.png" alt-text="Screenshot of the Template virtual machine settings page, with the option selected to create a templateless VM."::: ++1. Select **Finish** to start the lab creation. It might take several minutes for the lab creation to finish. ++1. When the lab creation finishes, you can see the lab details on the **Template** page. ++ :::image type="content" source="./media/tutorial-setup-lab-teams-canvas/templateless-template.png" alt-text="Screenshot of the Template page for a templateless lab."::: ++## Publish the lab ++Azure Lab Services doesn't create the lab virtual machines until you publish the lab. When you publish the lab, the lab virtual machines are created and assigned to the individual lab users. ++To publish the lab: + |