Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Add Api Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md | Content-type: application/json { "version": "1.0.0", "action": "ShowBlockPage",- "userMessage": "There was a problem with your request. You are not able to sign up at this time.", + "userMessage": "There was a problem with your request. You are not able to sign up at this time. Please contact your system administrator", } |
active-directory-b2c | Add Sign Up And Sign In Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md | Watch this video to learn how the user sign-up and sign-in policy works. ## Prerequisites +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. ::: zone pivot="b2c-user-flow" |
active-directory-b2c | Configure Authentication Sample Python Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md | This article uses a sample Python web application to illustrate how to add Azure ## Overview -OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. You can use OIDC to securely sign users in to an application. This web app sample uses the [Microsoft Authentication Library (MSAL) for Python](https://github.com/AzureAD/microsoft-authentication-library-for-python). The MSAL for Python simplifies adding authentication and authorization support to Python web apps. +OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. You can use OIDC to securely sign users in to an application. This web app sample uses the [identity package for Python](https://pypi.org/project/identity/) to simplify adding authentication and authorization support to Python web apps. The sign-in flow involves the following steps: 1. After users sign in successfully, Azure AD B2C returns an authorization code to the app. 1. The app exchanges the authorization code with an ID token, validates the ID token, reads the claims, and then returns a secure page to users. --### Sign-out -- ## Prerequisites -A computer that's running: --* [Visual Studio Code](https://code.visualstudio.com/) or another code editor -* [Python](https://www.python.org/downloads/) 3.9 or above +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. +- [Python 3.7+](https://www.python.org/downloads/) ## Step 1: Configure your user flow Extract the sample file to a folder where the total length of the path is 260 or In the project's root directory, follow these steps: 1. Rename the *app_config.py* file to *app_config.py.OLD*.-1. Rename the *app_config_b2c.py* file to *app_config.py*. --Open the *app_config.py* file. This file contains information about your Azure AD B2C identity provider. Update the following app settings properties: --|Key |Value | -||| -|`b2c_tenant`| The first part of your Azure AD B2C [tenant name]( tenant-management-read-tenant-name.md#get-your-tenant-name) (for example, `contoso`).| -|`CLIENT_ID`| The web API application ID from [step 2.1](#step-21-register-the-app).| -|`CLIENT_SECRET`| The client secret value you created in [step 2.2](#step-22-create-a-web-app-client-secret). To help increase security, consider storing it instead in an environment variable, as recommended in the comments. | -|`*_user_flow`|The user flows or custom policy you created in [step 1](#step-1-configure-your-user-flow).| -| | | --Your final configuration file should look like the following Python code: --```python -import os --b2c_tenant = "contoso" -signupsignin_user_flow = "B2C_1_signupsignin" -editprofile_user_flow = "B2C_1_profileediting" -resetpassword_user_flow = "B2C_1_passwordreset" -authority_template = "https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{user_flow}" --CLIENT_ID = "11111111-1111-1111-1111-111111111111" # Application (client) ID of app registration --CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxx" # Placeholder - for use ONLY during testing. -``` +1. Rename the *app_config_b2c.py* file to *app_config.py*. 
This file contains information about your Azure AD B2C identity provider. ++1. Create an `.env` file in the root folder of the project using `.env.sample.b2c` as a guide. ++ ```shell + FLASK_DEBUG=True + TENANT_NAME=<tenant name> + CLIENT_ID=<client id> + CLIENT_SECRET=<client secret> + SIGNUPSIGNIN_USER_FLOW=B2C_1_signupsignin1 + EDITPROFILE_USER_FLOW=B2C_1_profile_editing + RESETPASSWORD_USER_FLOW=B2C_1_reset_password + ``` -> [!IMPORTANT] -> As noted in the code snippet comments, we recommend that you *do not store secrets in plaintext* in your application code. The hard-coded variable is used in the code sample *for convenience only*. Consider using an environment variable or a secret store, such as an Azure key vault. + |Key |Value | + ||| + |`TENANT_NAME`| The first part of your Azure AD B2C [tenant name](tenant-management-read-tenant-name.md#get-your-tenant-name) (for example, `contoso`). | + |`CLIENT_ID`| The web API application ID from [step 2.1](#step-21-register-the-app).| + |`CLIENT_SECRET`| The client secret value you created in [step 2.2](#step-22-create-a-web-app-client-secret). | + |`*_USER_FLOW`|The user flows you created in [step 1](#step-1-configure-your-user-flow).| + | | | + The environment variables are referenced in *app_config.py*, and are kept in a separate *.env* file to keep them out of source control. The provided *.gitignore* file prevents the *.env* file from being checked in. ## Step 5: Run the sample web app CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxx" # Placeholder - for use ONLY during t The console window displays the port number of the locally running application: ```console- * Serving Flask app "app" (lazy loading) - * Environment: production + * Debug mode: on WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.- * Debug mode: off * Running on `http://localhost:5000/` (Press CTRL+C to quit) ``` To enable your app to sign in with Azure AD B2C and call a web API, you must reg The app registrations and the application architecture are described in the following diagrams: - + [!INCLUDE [active-directory-b2c-app-integration-call-api](../../includes/active-directory-b2c-app-integration-call-api.md)] The app registrations and the application architecture are described in the foll ### Step 6.4: Configure your web API -This sample acquires an access token with the relevant scopes, which the web app can use for a web API. To call a web API from the code, use an existing web API or create a new one. For more information, see [Enable authentication in your own web API by using Azure AD B2C](enable-authentication-web-api.md). +This sample acquires an access token with the relevant scopes, which the web app can use for a web API. This sample itself does *not* act as a web API. Instead, you must use an existing web API or create a new one. For a tutorial on creating a web API in your B2C tenant, see [Enable authentication in your own web API by using Azure AD B2C](enable-authentication-web-api.md). ### Step 6.5: Configure the sample app with the web API Open the *app_config.py* file. 
This file contains information about your Azure AD B2C identity provider. Update the following app settings properties: |Key |Value | |||-|`ENDPOINT`| The URI of your web API (for example, `https://localhost:5000/getAToken`).| -|`SCOPE`| The web API [scopes](#step-62-configure-scopes) that you created.| +|`ENDPOINT`| The URI of your web API (for example, `https://localhost:6000/hello`).| +|`SCOPE`| The web API [scopes](#step-62-configure-scopes) that you created (for example, `["https://contoso.onmicrosoft.com/tasks-api/tasks.read", "https://contoso.onmicrosoft.com/tasks-api/tasks.write"]`).| | | | -Your final configuration file should look like the following Python code: --```python -import os --b2c_tenant = "contoso" -signupsignin_user_flow = "B2C_1_signupsignin" -editprofile_user_flow = "B2C_1_profileediting" -resetpassword_user_flow = "B2C_1_passwordreset" -authority_template = "https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{user_flow}" --CLIENT_ID = "11111111-1111-1111-1111-111111111111" # Application (client) ID of app registration --CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxx" # Placeholder - for use ONLY during testing. --### More code here --# This is the API resource endpoint -ENDPOINT = 'https://localhost:5000' ---SCOPE = ["https://contoso.onmicrosoft.com/api/demo.read", "https://contoso.onmicrosoft.com/api/demo.write"] -``` - ### Step 6.6: Run the sample app 1. In your console or terminal, switch to the directory that contains the sample. -1. Stop the app. and then rerun it. -1. Select **Call Microsoft Graph API**. +1. If the app isn't still running, restart it using the command from Step 5. +1. Select **Call a downstream API**. -  +  ## Step 7: Deploy your application In a production application, the app registration redirect URI is ordinarily a p You can add and modify redirect URIs in your registered applications at any time. The following restrictions apply to redirect URIs: -* The reply URL must begin with the scheme `https`. -* The reply URL is case-sensitive. Its case must match the case of the URL path of your running application. +* The redirect URL must begin with the scheme `https`. +* The redirect URL is case-sensitive. Its case must match the case of the URL path of your running application. ## Next steps * Learn how to [Configure authentication options in a Python web app by using Azure AD B2C](enable-authentication-python-web-app-options.md). |
active-directory | Plan Cloud Hr Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md | Azure AD uses this integration to enable the following cloud HR application (app - **Provision cloud-only users to Azure AD:** In scenarios where Active Directory isn't used, provision users directly from the cloud HR app to Azure AD. - **Write back to the cloud HR app:** Write the email addresses and username attributes from Azure AD back to the cloud HR app. +The following video provides guidance on planning your HR-driven provisioning integrations. ++> [!VIDEO https://www.youtube-nocookie.com/embed/HsdBt40xEHs] + > [!NOTE] > This deployment plan shows you how to deploy your cloud HR app workflows with Azure AD user provisioning. For information on how to deploy automatic user provisioning to software as a service (SaaS) apps, see [Plan an automatic user provisioning deployment](./plan-auto-user-provisioning.md). You also need a valid Azure AD Premium P1 or higher subscription license for eve | Videos | [What is user provisioning in Azure Active Directory?](https://youtu.be/_ZjARPpI6NI) | | | [How to deploy user provisioning in Azure Active Directory](https://youtu.be/pKzyts6kfrw) | | Tutorials | [List of tutorials on how to integrate SaaS apps with Azure AD](../saas-apps/tutorial-list.md) |-| | [Tutorial: Configure Workday for automatic user provisioning](../saas-apps/workday-inbound-tutorial.md#frequently-asked-questions-faq) | +| | [Tutorial: Configure automatic user provisioning with Workday](../saas-apps/workday-inbound-tutorial.md) | +| | [Tutorial: Configure automatic user provisioning with SAP SuccessFactors](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) | | FAQ | [Automated user provisioning](../app-provisioning/user-provisioning.md#what-applications-and-systems-can-i-use-with-azure-ad-automatic-user-provisioning) | | | [Provisioning from Workday to Azure AD](../saas-apps/workday-inbound-tutorial.md#frequently-asked-questions-faq) | |
active-directory | Concept Certificate Based Authentication Certificateuserids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md | IIF(IsPresent([alternativeSecurityId]), ## Look up certificateUserIds using Microsoft Graph queries -Tenant admins can run MS Graph queries to find all the users with a given certificateUserId value. +Authorized callers can run Microsoft Graph queries to find all the users with a given certificateUserId value. On the Microsoft Graph [user](/graph/api/resources/user) object, the collection of certificateUserIds is stored in the **authorizationInfo** property. -GET all user objects that have the value 'bob@contoso.com' value in certificateUserIds: +To retrieve all user objects that have the value 'bob@contoso.com' in certificateUserIds: -```http -GET https://graph.microsoft.com/v1.0/users?$filter=certificateUserIds/any(x:x eq 'bob@contoso.com') -``` - -```http -GET https://graph.microsoft.com/v1.0/users?$filter=startswith(certificateUserIds, 'bob@contoso.com') +```msgraph-interactive +GET https://graph.microsoft.com/v1.0/users?$filter=authorizationInfo/certificateUserIds/any(x:x eq 'bob@contoso.com')&$count=true +ConsistencyLevel: eventual ``` -```http -GET https://graph.microsoft.com/v1.0/users?$filter=certificateUserIds eq 'bob@contoso.com' -``` +You can also use the `not` and `startsWith` operators to match the filter condition. To filter against the certificateUserIds object, the request must include the `$count=true` query string and the **ConsistencyLevel** header set to `eventual`. -## Update certificate user IDs using Microsoft Graph queries -PATCH the user object certificateUserIds value for a given userId +## Update certificateUserIds using Microsoft Graph queries ++Run a PATCH request to update the certificateUserIds for a given user. #### Request body: ```http-PATCH https://graph.microsoft.us/v1.0/users/{id} +PATCH https://graph.microsoft.com/v1.0/users/{id} Content-Type: application/json-{ - "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users(authorizationInfo,department)/$entity", - "department": "Accounting", +{ "authorizationInfo": { "certificateUserIds": [ "X509:<PN>123456789098765@mil" |
active-directory | Howto Sspr Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md | Configure both the **Notify users on password resets** and the **Notify all admi > - Public: msonlineservicesteam@microsoft.com > - China: msonlineservicesteam@oe.21vianet.com > - Government: msonlineservicesteam@azureadnotifications.us+> > If you observe issues in receiving notifications, please check your spam settings. ### Customization settings |
active-directory | Azureadjoin Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/azureadjoin-plan.md | Device management for Azure AD joined devices is based on a mobile device manage There are two approaches for managing Azure AD joined devices: - **MDM-only** - A device is exclusively managed by an MDM provider like Intune. All policies are delivered as part of the MDM enrollment process. For Azure AD Premium or EMS customers, MDM enrollment is an automated step that is part of an Azure AD join.-- **Co-management** - A device is managed by an MDM provider and Microsoft Endpoint Configuration Manager. In this approach, the Microsoft Endpoint Configuration Manager agent is installed on an MDM-managed device to administer certain aspects.+- **Co-management** - A device is managed by an MDM provider and Microsoft Configuration Manager. In this approach, the Microsoft Configuration Manager agent is installed on an MDM-managed device to administer certain aspects. If you're using Group Policies, evaluate your GPO and MDM policy parity by using [Group Policy analytics](/mem/intune/configuration/group-policy-analytics) in Microsoft Intune. Review supported and unsupported policies to determine whether you can use an MD If your MDM solution isn't available through the Azure AD app gallery, you can add it following the process outlined in [Azure Active Directory integration with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm). -Through co-management, you can use Microsoft Endpoint Configuration Manager to manage certain aspects of your devices while policies are delivered through your MDM platform. Microsoft Intune enables co-management with Microsoft Endpoint Configuration Manager. For more information on co-management for Windows 10 or newer devices, see [What is co-management?](/configmgr/core/clients/manage/co-management-overview). If you use an MDM product other than Intune, check with your MDM provider on applicable co-management scenarios. +Through co-management, you can use Microsoft Configuration Manager to manage certain aspects of your devices while policies are delivered through your MDM platform. Microsoft Intune enables co-management with Microsoft Configuration Manager. For more information on co-management for Windows 10 or newer devices, see [What is co-management?](/configmgr/core/clients/manage/co-management-overview). If you use an MDM product other than Intune, check with your MDM provider on applicable co-management scenarios. **Recommendation:** Consider MDM-only management for Azure AD joined devices. |
active-directory | Concept Azure Ad Join | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-join.md | Any organization can deploy Azure AD joined devices no matter the size or indust Azure AD joined devices are signed in to using an organizational Azure AD account. Access to resources can be controlled based on Azure AD account and [Conditional Access policies](../conditional-access/howto-conditional-access-policy-compliant-device.md) applied to the device. -Administrators can secure and further control Azure AD joined devices using Mobile Device Management (MDM) tools like Microsoft Intune or in co-management scenarios using Microsoft Endpoint Configuration Manager. These tools provide a means to enforce organization-required configurations like: +Administrators can secure and further control Azure AD joined devices using Mobile Device Management (MDM) tools like Microsoft Intune or in co-management scenarios using Microsoft Configuration Manager. These tools provide a means to enforce organization-required configurations like: - Requiring storage to be encrypted - Password complexity |
active-directory | Howto Hybrid Join Downlevel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-join-downlevel.md | You also must enable **Allow updates to status bar via script** in the user’s To register Windows downlevel devices, organizations must install [Microsoft Workplace Join for non-Windows 10 computers](https://www.microsoft.com/download/details.aspx?id=53554). Microsoft Workplace Join for non-Windows 10 computers is available in the Microsoft Download Center. -You can deploy the package by using a software distribution system like [Microsoft Endpoint Configuration Manager](/configmgr/). The package supports the standard silent installation options with the `quiet` parameter. The current branch of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations. +You can deploy the package by using a software distribution system like [Microsoft Configuration Manager](/configmgr/). The package supports the standard silent installation options with the `quiet` parameter. The current branch of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations. The installer creates a scheduled task on the system that runs in the user context. The task is triggered when the user signs in to Windows. The task silently joins the device with Azure AD by using the user credentials after it authenticates with Azure AD. |
active-directory | Hybrid Azuread Join Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-control.md | If your Azure AD is federated with AD FS, you first need to configure client-sid To register Windows down-level devices, organizations must install [Microsoft Workplace Join for non-Windows 10 computers](https://www.microsoft.com/download/details.aspx?id=53554) available on the Microsoft Download Center. -You can deploy the package by using a software distribution system like [Microsoft Endpoint Configuration Manager](/configmgr/). The package supports the standard silent installation options with the quiet parameter. The current branch of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations. +You can deploy the package by using a software distribution system like [Microsoft Configuration Manager](/configmgr/). The package supports the standard silent installation options with the quiet parameter. The current branch of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations. The installer creates a scheduled task on the system that runs in the user context. The task is triggered when the user signs in to Windows. The task silently joins the device with Azure AD with the user credentials after authenticating with Azure AD. |
active-directory | Plan Device Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/plan-device-deployment.md | Review supported and unsupported platforms for integrated devices: | Device management tools | Azure AD registered | Azure AD joined | Hybrid Azure AD joined | | | :: | :: | :: | | [Mobile Device Management (MDM)](/windows/client-management/mdm/azure-active-directory-integration-with-mdm) <br>Example: Microsoft Intune |  |  |  | -| [Co-management with Microsoft Intune and Microsoft Endpoint Configuration Manager](/mem/configmgr/comanage/overview) <br>(Windows 10 or newer) | |  |  | +| [Co-management with Microsoft Intune and Microsoft Configuration Manager](/mem/configmgr/comanage/overview) <br>(Windows 10 or newer) | |  |  | | [Group policy](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831791(v=ws.11))<br>(Windows only) | | |  | We recommend that you consider [Microsoft Intune Mobile Application management (MAM)](/mem/intune/apps/app-management) with or without device management for registered iOS or Android devices. |
active-directory | Active Directory Ops Guide Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md | Like a user in your organization, a device is a core identity you want to protec You can carry out this goal by bringing device identities and managing them in Azure AD by using one of the following methods: -- Organizations can use [Microsoft Intune](/intune/what-is-intune) to manage the device and enforce compliance policies, attest device health, and set conditional access policies based on whether the device is compliant. Microsoft Intune can manage iOS devices, Mac desktops (Via JAMF integration), Windows desktops (natively using Mobile Device Management for Windows 10, and co-management with Microsoft Endpoint Configuration Manager) and Android mobile devices.-- [Hybrid Azure AD join](../devices/hybrid-azuread-join-managed-domains.md) provides management with Group Policies or Microsoft Endpoint Configuration Manager in an environment with Active Directory domain-joined computers devices. Organizations can deploy a managed environment either through PHS or PTA with Seamless SSO. Bringing your devices to Azure AD maximizes user productivity through SSO across your cloud and on-premises resources while enabling you to secure access to your cloud and on-premises resources with [Conditional Access](../conditional-access/overview.md) at the same time.+- Organizations can use [Microsoft Intune](/intune/what-is-intune) to manage the device and enforce compliance policies, attest device health, and set conditional access policies based on whether the device is compliant. Microsoft Intune can manage iOS devices, Mac desktops (via JAMF integration), Windows desktops (natively using Mobile Device Management for Windows 10, and co-management with Microsoft Configuration Manager) and Android mobile devices. +- [Hybrid Azure AD join](../devices/hybrid-azuread-join-managed-domains.md) provides management with Group Policies or Microsoft Configuration Manager in an environment with Active Directory domain-joined computers. Organizations can deploy a managed environment either through PHS or PTA with Seamless SSO. Bringing your devices to Azure AD maximizes user productivity through SSO across your cloud and on-premises resources while enabling you to secure access to your cloud and on-premises resources with [Conditional Access](../conditional-access/overview.md) at the same time. If you have domain-joined Windows devices that aren't registered in the cloud, or domain-joined Windows devices that are registered in the cloud but without conditional access policies, then you should register the unregistered devices and, in either case, [use Hybrid Azure AD join as a control](../conditional-access/require-managed-devices.md) in your conditional access policies. |
active-directory | Resilience Client App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-client-app.md | Title: Increase the resilience of authentication and authorization in client applications you develop -description: Guidance for increasing resiliency of authentication and authorization in client application using the Microsoft identity platform +description: Learn to increase resiliency of authentication and authorization in client applications using the Microsoft identity platform -+ Previously updated : 11/23/2020 Last updated : 03/02/2023 # Increase the resilience of authentication and authorization in client applications you develop -This section provides guidance on building resilience into client applications that use the Microsoft identity platform and Azure Active Directory to sign in users and perform actions on behalf of those users. +Learn to build resilience into client applications that use the Microsoft identity platform and Azure Active Directory (Azure AD) to sign in users and perform actions on behalf of those users. ## Use the Microsoft Authentication Library (MSAL) -The [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) is a key part of the [Microsoft identity platform](../develop/index.yml). It simplifies and manages acquiring, managing, caching, and refreshing tokens, and uses best practices for resilience. MSAL is designed to enable a secure solution without developers having to worry about the implementation details. +The Microsoft Authentication Library (MSAL) is part of the Microsoft identity platform. MSAL acquires, manages, caches, and refreshes tokens; it uses best practices for resilience. MSAL helps developers create secure solutions. -MSAL caches tokens and uses a silent token acquisition pattern. It also automatically serializes the token cache on platforms that natively provide secure storage like Windows UWP, iOS and Android. Developers can customize the serialization behavior when using [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization), [MSAL.NET](../develop/msal-net-token-cache-serialization.md), [MSAL for Java](../develop/msal-java-token-cache-serialization.md), and [MSAL for Python](../develop/msal-python-token-cache-serialization.md). +Learn more: - +* [Overview of the Microsoft Authentication Library](../develop/msal-overview.md) +* [What is the Microsoft identity platform?](../develop/v2-overview.md) +* [Microsoft identity platform documentation](../develop/index.yml) -When using MSAL, token caching, refreshing, and silent acquisition is supported automatically. You can use simple patterns to acquire the tokens necessary for modern authentication. We support many languages, and you can find a sample that matches your language and scenario on our [Samples](../develop/sample-v2-code.md) page. +MSAL caches tokens and uses a silent token acquisition pattern. MSAL serializes the token cache on operating systems that natively provide secure storage like Universal Windows Platform (UWP), iOS, and Android. 
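
To make the silent-first pattern concrete, here's a minimal sketch using MSAL for Python; the client ID, authority, and scopes are placeholder values for illustration, not taken from this article.

```python
import msal

# Placeholder values for illustration only; use your own app registration.
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
AUTHORITY = "https://login.microsoftonline.com/common"
SCOPES = ["User.Read"]

app = msal.PublicClientApplication(CLIENT_ID, authority=AUTHORITY)

# Try the cache first: silent acquisition needs no prompt and often no network call.
result = None
accounts = app.get_accounts()
if accounts:
    result = app.acquire_token_silent(SCOPES, account=accounts[0])

# Fall back to interactive sign-in only when the cache and refresh token can't help.
if not result:
    result = app.acquire_token_interactive(SCOPES)

if "access_token" in result:
    print("Token acquired; expires in", result["expires_in"], "seconds")
else:
    print("Error:", result.get("error"), result.get("error_description"))
```
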
Customize the serialization behavior when you're using: ++* Microsoft.Identity.Web +* MSAL.NET +* MSAL for Java +* MSAL for Python ++Learn more: ++* [Token cache serialization](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization) +* [Token cache serialization in MSAL.NET](../develop/msal-net-token-cache-serialization.md) +* [Custom token cache serialization in MSAL for Java](../develop/msal-java-token-cache-serialization.md) +* [Custom token cache serialization in MSAL for Python](../develop/msal-python-token-cache-serialization.md) ++  ++When you're using MSAL, token caching, refreshing, and silent acquisition are supported. Use simple patterns to acquire the tokens for authentication. There's support for many languages. Find a code sample that matches your language and scenario in [Microsoft identity platform code samples](../develop/sample-v2-code.md). ## [C#](#tab/csharp) return myMSALObj.acquireTokenSilent(request).catch(error => { -MSAL can in some cases proactively refresh tokens. When Microsoft Identity issues a long-lived token, it can send information to the client for the optimal time to refresh the token ("refresh\_in"). MSAL will proactively refresh the token based on this information. The app will continue to run while the old token is valid but will have a longer timeframe during which to make another successful token acquisition. --### Stay up to date +MSAL can proactively refresh tokens. When the Microsoft identity platform issues a long-lived token, it can send the client the optimal time to refresh the token (refresh\_in). The app continues to run while the old token is valid, and has a longer timeframe in which to acquire another token. -Developers should have a process for updating to the latest MSAL release. Authentication is part of your app security and your app needs to stay current with the security improvements contained in new MSAL releases. This is generally good practice for libraries under continuous development and doing so will ensure you have the most up to date code with respect to app resilience. As Microsoft Identity continues to innovate on ways for applications to be more resilient, apps that use the latest MSAL will be the most prepared to build on these innovations. +### MSAL releases -[Check the latest MSAL.js version and release notes](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases) +We recommend developers build a process to update to the latest MSAL release, because authentication is part of app security. Use this practice for libraries under continuous development to improve app resilience. 
-[Check the latest MSAL .NET version and release notes](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/releases) +Find the latest version and release notes: -[Check the latest MSAL Python version and release notes](https://github.com/AzureAD/microsoft-authentication-library-for-python/releases) +* [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases) +* [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/releases) +* [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python/releases) +* [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) +* [microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc/releases) +* [microsoft-authentication-library-for-android](https://github.com/AzureAD/microsoft-authentication-library-for-android/releases) +* [microsoft-identity-web](https://github.com/AzureAD/microsoft-identity-web/releases) -[Check the latest MSAL Java version and release notes](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) +## Resilient patterns for token handling -[Check the latest MSAL iOS and macOS version and release notes](https://github.com/AzureAD/microsoft-authentication-library-for-objc/releases) +If you don't use MSAL, use resilient patterns for token handling. The MSAL library implements best practices. -[Check the latest MSAL Android version and release notes](https://github.com/AzureAD/microsoft-authentication-library-for-android/releases) +Generally, applications using modern authentication call an endpoint to retrieve tokens that authenticate the user, or authorize the application to call protected APIs. MSAL handles authentication and implements patterns to improve resilience. If you don't use MSAL, use the guidance in this section for best practices. Otherwise, MSAL implements best practices automatically. -[Check the latest MSAL Angular version and release notes](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases) +### Cache tokens -[Check the latest Microsoft.Identity.Web version and release notes](https://github.com/AzureAD/microsoft-identity-web/releases) +Ensure apps cache tokens accurately from the Microsoft identity platform. After your app receives tokens, the HTTP response with tokens has an `expires_in` property that indicates how long to cache and reuse the token. Confirm applications don't attempt to decode an API access token. -## Use resilient patterns for token handling +  -If you are not using MSAL, you can use these resilient patterns for token handling. These best practices are implemented automatically by the MSAL library. +Cached tokens prevent unnecessary traffic between an app and the Microsoft identity platform. This scenario makes the app less susceptible to token acquisition failures by reducing token acquisition calls. Cached tokens improve application performance, because the app blocks on token acquisition less frequently. Users remain signed in to your application for the token lifetime. 
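
If you're not using MSAL, a cache that honors `expires_in` can be as simple as the following sketch; `acquire` stands in for a hypothetical function your app uses to call the token endpoint.

```python
import time

class SimpleTokenCache:
    """Reuse a token for its full lifetime, based on the expires_in
    value from the token response, instead of decoding the token."""

    def __init__(self, skew_seconds=300):
        self._token = None
        self._expires_at = 0.0
        self._skew = skew_seconds  # renew slightly early to absorb clock drift

    def get_token(self, acquire):
        # Return the cached token while it's still valid.
        if self._token and time.time() < self._expires_at - self._skew:
            return self._token
        # Otherwise call the token endpoint; acquire() returns the response dict.
        response = acquire()
        self._token = response["access_token"]
        self._expires_at = time.time() + int(response["expires_in"])
        return self._token
```
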
-In general, an application that uses modern authentication will call an endpoint to retrieve tokens that authenticate the user or authorize the application to call protected APIs. MSAL is meant to handle the details of authentication and implements several patterns to improve resilience of this process. Use the guidance in this section to implement best practices if you choose to use a library other than MSAL. If you use MSAL, you get all of these best-practices for free, as MSAL implements them automatically. +### Serialize and persist tokens -### Cache tokens +Ensure apps serialize their token cache securely to persist the tokens between app instances. Reuse tokens during their lifetime. Refresh tokens and access tokens are issued for many hours. During this time, users might start your application several times. When an app starts, confirm it looks for a valid access or refresh token. This increases app resilience and performance. -Apps should properly cache tokens received from Microsoft Identity. When your app receives tokens, the HTTP response that contains the tokens also contains an "expires_in" property that tells the application how long to cache, and reuse, the token. It is important that applications use the "expires_in" property to determine the lifespan of the token. Application must never attempt to decode an API access token. +Learn more: - +* [Refresh the access tokens](../develop/v2-oauth2-auth-code-flow.md#refresh-the-access-token) +* [Microsoft identity platform access tokens](../develop/access-tokens.md) -Using the cached token prevents unnecessary traffic between your app and Microsoft Identity, and makes your app less susceptible to token acquisition failures by reducing the number of token acquisition calls. Cached tokens also improve your application's performance as the app needs to block on acquiring tokens less. Your user can stay signed-in to your application for the length of that token's lifetime. +  -### Serialize and persist tokens +Ensure persistent token storage is access controlled and encrypted to the owning user or process identity. On various operating systems, there are credential storage features. -Apps should securely serialize their token cache to persist the tokens between instances of the app. Tokens can be reused as long as they are within their valid lifetime. [Refresh tokens](../develop/v2-oauth2-auth-code-flow.md#refresh-the-access-token), and, increasingly, [access tokens](../develop/access-tokens.md), are issued for many hours. This valid time can span a user starting your application many times. When your app starts, it should check to see if there is a valid access or refresh token that can be used. This will increase the app's resilience and performance as it avoids any unnecessary calls to Microsoft Identity. 
The persistent token storage should be access controlled and encrypted to the owning user or process identity. On platforms like mobile, Windows and Mac, the developer should take advantage of built-in capabilities for storing credentials. 
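
With MSAL for Python, persistence can be wired up through `SerializableTokenCache`, as in this sketch; the plaintext file path is for illustration only, and a real app should use the encrypted, access-controlled storage described above.

```python
import atexit
import os
import msal

CACHE_PATH = "token_cache.bin"  # illustration only; prefer encrypted, OS-protected storage

# Load any cache persisted by an earlier run of the app.
cache = msal.SerializableTokenCache()
if os.path.exists(CACHE_PATH):
    with open(CACHE_PATH, "r") as cache_file:
        cache.deserialize(cache_file.read())

# Persist the cache at exit, but only if MSAL changed it during this run.
def persist_cache():
    if cache.has_state_changed:
        with open(CACHE_PATH, "w") as cache_file:
            cache_file.write(cache.serialize())

atexit.register(persist_cache)

app = msal.PublicClientApplication(
    "11111111-1111-1111-1111-111111111111",  # placeholder client ID
    token_cache=cache,
)
```
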
+  -### Acquire tokens silently +Silent token acquisition starts with a valid token from the app token cache. If there's no valid token, the app attempts to acquire a token using an available refresh token, and the token endpoint. If neither option is available, the app acquires a token using the `prompt=none` parameter. This action uses the authorization endpoint, but no UI appears for the user. If possible, the Microsoft identity platform provides a token to the app without user interaction. If no method results in a token, then the user manually reauthenticates. -The process of authenticating a user or retrieving authorization to call an API can require multiple steps in Microsoft Identity. For example, when the user signs in for the first time they may need to enter credentials and perform a multi-factor authentication via a text message. Each step adds a dependency on the resource that provides that service. The best experience for the user, and the one with the least dependencies, is to attempt to acquire a token silently to avoid these extra steps before requesting user interaction. +> [!NOTE] +> In general, ensure apps don't use prompts like 'login' and 'consent'. These prompts force user interaction, when no interaction is required. - +## Response code handling -Acquiring a token silently starts with using a valid token from the app's token cache. If there is no valid token available, the app should attempt to acquire a token using a refresh token, if available, and the token endpoint. If neither of these options is available, the app should acquire a token using the "prompt=none" parameter. This will use the authorization endpoint, but not show any UI to the user. If the Microsoft Identity can provide a token to the app without interacting with the user, it will. If none of these methods result in a token, then a user will need to re-authenticate interactively. +Use the following sections to learn about response codes. -> [!NOTE] -> In general, apps should avoid using prompts like "login" and "consent" as they will force user interaction even when no interaction is required. +### HTTP 429 response code -## Handle service responses properly +There are error responses that affect resilience. If your application receives an HTTP 429 response code, Too Many Requests, Microsoft identity platform is throttling your requests. If an app makes too many requests, it's throttled to prevent the app from receiving tokens. Don't allow an app to attempt token acquisition, before the **Retry-After** response field time is complete. Often, a 429 response indicates the application isn't caching and reusing tokens correctly. Confirm how tokens are cached and reused in the application. -While applications should handle all error responses, there are some responses that can impact resilience. If your application receives an HTTP 429 response code, Too Many Requests, Microsoft Identity is throttling your requests. If your app continues to make too many requests, it will continue to be throttled preventing your app from receiving tokens. Your application should not attempt to acquire a token again until after the time, in seconds, in the Retry-After response field has passed. Receiving a 429 response is often an indication that the application is not caching and reusing tokens correctly. Developers should review how tokens are cached and reused in the application. +### HTTP 5x response code -When an application receives an HTTP 5xx response code the app must not enter a fast retry loop. 
When present, the application should honor the same Retry-After handling as it does for a 429 response. If no Retry-After header is provided by the response, we recommend implementing an exponential back-off retry with the first retry at least 5 seconds after the response. +If an application receives an HTTP 5xx response code, the app must not enter a fast retry loop. Use the same handling for a 429 response. If no Retry-After header appears, implement an exponential back-off retry with the first retry at least 5 seconds after the response. -When a request times out applications should not retry immediately. Implement an exponential back-off retry with the first retry at least 5 seconds after the response. +When a request times out, immediate retries are discouraged. Implement an exponential back-off retry, with the first retry at least 5 seconds after the response. -## Evaluate options for retrieving authorization related information +## Retrieving authorization related information -Many applications and APIs need specific information about the user to make authorization decisions. There are a few ways for an application to get this information. Each method has its advantages and disadvantages. Developers should weigh these to determine which strategy is best for resilience in their app. +Many applications and APIs need user information to authorize. Available methods have advantages and disadvantages. ### Tokens -Identity (ID) tokens and access tokens contain standard claims that provide information about the subject. These are documented in [Microsoft identity platform ID tokens](../develop/id-tokens.md) and [Microsoft identity platform access tokens](../develop/access-tokens.md). If the information your app needs is already in the token, then the most efficient technique for retrieving that data is to use token claims as that will save the overheard of an additional network call to retrieve information separately. Fewer network calls mean higher overall resilience for the application. +Identity (ID) tokens and access tokens have standard claims that provide information. If needed information is in the token, the most efficient technique is token claims, because that prevents another network call. Fewer network calls mean better resilience. ++Learn more: ++* [Microsoft identity platform ID tokens](../develop/id-tokens.md) +* [Microsoft identity platform access tokens](../develop/access-tokens.md) > [!NOTE]-> Some applications call the UserInfo endpoint to retrieve claims about the user that authenticated. The information available in the ID token that your app can receive is a superset of the information it can get from the UserInfo endpoint. Your app should use the ID token to get information about the user instead of calling the UserInfo endpoint. +> Some applications call the UserInfo endpoint to retrieve claims about the authenticated user. The information in the ID token is a superset of information from the UserInfo endpoint. Enable apps to use the ID token instead of calling the UserInfo endpoint. ++Augment standard token claims with optional claims, such as groups. The **Application Group** option includes groups assigned to the application. The **All** or **Security groups** options include groups from all apps in the same tenant, which can add many groups to the token. Evaluate the effect, because it can negate the efficiency of requesting groups in the token by causing token bloat and requiring more calls to get the groups. 
-An app developer can augment standard token claims with [optional claims](../develop/active-directory-optional-claims.md). One common optional claim is [groups](../develop/active-directory-optional-claims.md#configuring-groups-optional-claims). There are several ways to add group claims. The "Application Group" option only includes groups assigned to the application. The "All" or "Security groups" options include groups from all apps in the same tenant, which can add many groups to the token. It is important to evaluate the effect in your case, as it can potentially negate the efficiency gained by requesting groups in the token by causing token bloat and even requiring additional calls to get the full list of groups. +Learn more: -Instead of using groups in your token you can instead use and include app roles. Developers can define [app roles](../develop/howto-add-app-roles-in-azure-ad-apps.md) for their apps and APIs which the customer can manage from their directory using the portal or APIs. IT Pros can then assign roles to different users and groups to control who has access to what content and functionality. When a token is issued for the application or API, the roles assigned to the user will be available in the roles claim in the token. Getting this information directly in a token can save additional APIs calls. +* [Provide optional claims to your app](../develop/active-directory-optional-claims.md) +* [Configuring groups optional claims](../develop/active-directory-optional-claims.md#configuring-groups-optional-claims) -Finally, IT Admins can also add claims based on specific information in a tenant. For example, an enterprise can have an extension to have an enterprise specific User ID. +We recommend you use and include app roles, which customers manage by using the portal or APIs. Assign roles to users and groups to control access. When a token is issued, the assigned roles are in the token roles claim. Information derived from a token prevents more API calls. -In all cases, adding information from the directory directly to a token can be efficient and increase the apps resilience by reducing the number of dependencies the app has. On the other hand, it does not address any resilience issues from being unable to acquire a token. You should only add optional claims for the main scenarios of your application. If the app requires information only for the admin functionality, then it is best for the application to obtain that information only as needed. +See, [Add app roles to your application and receive them in the token](../develop/howto-add-app-roles-in-azure-ad-apps.md) ++Add claims based on tenant information. For example, an enterprise can add an extension claim for an enterprise-specific user ID. ++Adding information from the directory to a token is efficient and increases resiliency by reducing dependencies. It doesn't address resilience issues due to an inability to acquire a token. Add optional claims for the application's primary scenarios. If the app requires information for administrative functionality, the application can obtain that information as needed. ### Microsoft Graph -Microsoft Graph provides a unified API endpoint to access the Microsoft 365 data that describes the patterns of productivity, identity and security in an organization. Applications that use Microsoft Graph can potentially use any of the information across Microsoft 365 for authorization decisions. +Microsoft Graph has a unified API endpoint to access Microsoft 365 data about productivity patterns, identity, and security. 
Applications using Microsoft Graph can use Microsoft 365 information for authorization. ++Apps require one token to access Microsoft 365, which is more resilient than previous APIs for Microsoft 365 components like Microsoft Exchange or Microsoft SharePoint that required multiple tokens. -Apps require just a single token to access all of Microsoft 365. This is more resilient than using the older APIs that are specific to Microsoft 365 components like Microsoft Exchange or Microsoft SharePoint where multiple tokens are required. +When using Microsoft Graph APIs, use a Microsoft Graph SDK, which simplifies building resilient applications that access Microsoft Graph. -When using Microsoft Graph APIs, we suggest your use a [Microsoft Graph SDK](/graph/sdks/sdks-overview). The Microsoft Graph SDKs are designed to simplify building high-quality, efficient, and resilient applications that access Microsoft Graph. +See, [Microsoft Graph SDK overview](/graph/sdks/sdks-overview) -For authorization decisions, developers should consider when to use the claims available in a token as an alternative to some Microsoft Graph calls. As mentioned above, developers could request groups, app roles, and optional claims in their tokens. In terms of resilience, using Microsoft Graph for authorization requires additional network calls that rely on Microsoft Identity (to get the token to access Microsoft Graph) as well as Microsoft Graph itself. However, if your application already relies on Microsoft Graph as its data layer, then relying on the Graph for authorization is not an additional risk to take. +For authorization, consider using token claims instead of some Microsoft Graph calls. Request groups, app roles, and optional claims in tokens. Microsoft Graph for authorization requires more network calls that rely on the Microsoft identity platform and Microsoft Graph. However, if your application relies on Microsoft Graph as its data layer, then Microsoft Graph for authorization isn't an additional risk. ## Use broker authentication on mobile devices -On mobile devices, using an authentication broker like Microsoft Authenticator will improve resilience. The broker adds benefits above what is available with other options such as the system browser or an embedded WebView. The authentication broker can utilize a [primary refresh token](../devices/concept-primary-refresh-token.md) (PRT) that contains claims about the user and the device and can be used to get authentication tokens to access other applications from the device. When a PRT is used to request access to an application, its device and MFA claims are trusted by Azure AD. This increases resilience by avoiding additional steps to authenticate the device again. Users won't be challenged with multiple MFA prompts on the same device, therefore increasing resilience by reducing dependencies on external services and improving the user experience. +On mobile devices, an authentication broker like Microsoft Authenticator improves resilience. The authentication broker uses a primary refresh token (PRT) with claims about the user and device. Use the PRT to get authentication tokens to access other applications from the device. When a PRT requests application access, Azure Active Directory (Azure AD) trusts its device and MFA claims. This increases resilience by reducing steps to authenticate the device. Users aren't challenged with multiple MFA prompts on the same device. 
++See, [What is a Primary Refresh Token?](../devices/concept-primary-refresh-token.md) ++  - +MSAL supports broker authentication. Learn more: -Broker authentication is automatically supported by MSAL. You can find more information on using brokered authentication on the following pages: +* [SSO through Authentication broker on iOS](../develop/single-sign-on-macos-ios.md#sso-through-authentication-broker-on-ios) +* [Enable cross-app SSO on Android using MSAL](../develop/msal-android-single-sign-on.md) -- [Configure SSO on macOS and iOS](../develop/single-sign-on-macos-ios.md#sso-through-authentication-broker-on-ios)-- [How to enable cross-app SSO on Android using MSAL](../develop/msal-android-single-sign-on.md)+## Continuous Access Evaluation -## Adopt Continuous Access Evaluation +Continuous Access Evaluation (CAE) increases application security and resilience with long-lived tokens. With CAE, an access token can be revoked based on critical events and policy evaluation, rather than relying on short token lifetimes. For some resource APIs, because risk and policy are evaluated in real time, CAE increases token lifetime up to 28 hours. MSAL proactively refreshes long-lived tokens. -[Continuous Access Evaluation (CAE)](../conditional-access/concept-continuous-access-evaluation.md) is a recent development that can increase application security and resilience with long-lived tokens. CAE is an emerging industry standard being developed in the Shared Signals and Events Working Group of the OpenID Foundation. With CAE, an access token can be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation), rather than relying on a short token lifetime. For some resource APIs, because risk and policy are evaluated in real time, CAE can substantially increase token lifetime up to 28 hours. As resource APIs and applications adopt CAE, Microsoft Identity will be able to issue access tokens that are revocable and are valid for extended periods of time. These long-lived tokens will be proactively refreshed by MSAL. -While CAE is in early phases, it is possible to [develop client applications today that will benefit from CAE](../develop/app-resilience-continuous-access-evaluation.md) when the resources (APIs) the application uses adopt CAE. As more resources adopt CAE, your application will be able to acquire CAE enabled tokens for those resources as well. The Microsoft Graph API, and [Microsoft Graph SDKs](/graph/sdks/sdks-overview), will preview CAE capability early 2021. If you would like to participate in the public preview of Microsoft Graph with CAE, you can let us know you are interested here: [https://aka.ms/GraphCAEPreview](https://aka.ms/GraphCAEPreview). 
+* [Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) +* [Securing applications with Continuous Access Evaluation](/security/zero-trust/develop/secure-with-cae) +* [Critical event evaluation](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) +* [Conditional Access policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation) +* [How to use CAE enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) -If you develop resource APIs, we encourage you to participate in the [Shared Signals and Events WG](https://openid.net/wg/sse/). We are working with this group to enable the sharing of security events between Microsoft Identity and resource providers. +If you develop resource APIs, go to openid.net for [Shared Signals – A Secure Webhooks Framework](https://openid.net/wg/sse/). ## Next steps -- [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md)-- [Build resilience into daemon applications](resilience-daemon-app.md)-- [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md)-- [Build resilience in your CIAM systems](resilience-b2c.md)+* [How to use CAE enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) +* [Increase the resilience of authentication and authorization in daemon applications you develop](resilience-daemon-app.md) +* [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md) +* [Build resilience in your customer identity and access management with Azure AD B2C](resilience-b2c.md) |
active-directory | Resilience Daemon App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-daemon-app.md | Title: Increase the resilience of authentication and authorization in daemon applications you develop -description: Guidance for increasing resiliency of authentication and authorization in daemon application using the Microsoft identity platform +description: Learn to increase authentication and authorization resiliency in daemon application using the Microsoft identity platform -+ Previously updated : 11/23/2020 Last updated : 03/03/2023 # Increase the resilience of authentication and authorization in daemon applications you develop -This article provides guidance on how developers can use the Microsoft identity platform and Azure Active Directory to increase the resilience of daemon applications. This includes background processes, services, server to server apps, and applications without users. +Learn to use the Microsoft identity platform and Azure Active Directory (Azure AD) to increase the resilience of daemon applications. Find information about background processes, services, server to server apps, and applications without users. - +See, [What is the Microsoft identity platform?](../develop/v2-overview.md) -## Use Managed Identities for Azure Resources +The following diagram illustrates a daemon application making a call to Microsoft identity platform. -Developers building daemon apps on Microsoft Azure can use [Managed Identities for Azure Resources](../managed-identities-azure-resources/overview.md). Managed Identities eliminate the need for developers to manage secrets and credentials. The feature improves resilience by avoiding mistakes around certificate expiry, rotation errors, or trust. It also has several built-in features meant specifically to increase resilience. +  -Managed Identities use long lived access tokens and information from Microsoft Identity to proactively acquire new tokens within a large window of time before the existing token expires. Your app can continue to run while attempting to acquire a new token. +## Managed identities for Azure resources -Managed Identities also use regional endpoints to improve performance and resilience against out-of-region failures. Using a regional endpoint helps to keep all traffic inside a geographical area. For example, if your Azure Resource is in WestUS2, all the traffic, including Microsoft Identity generated traffic, should stay in WestUS2. This eliminates possible points of failure by consolidating the dependencies of your service. +If you're building daemon apps on Microsoft Azure, use managed identities for Azure resources, which handle secrets and credentials. The feature improves resilience by handling certificate expiry, rotation, or trust. -## Use the Microsoft Authentication Library +See, [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) -Developers of daemon apps who do not use Managed Identities can use the [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md), which makes implementing authentication and authorization simple, and automatically uses best practices for resilience. MSAL will make the process of providing the required Client Credentials easier. For example, your application does not need to implement creating and signing JSON Web Token assertions when using certificate-based credentials. 
+Managed identities use long-lived access tokens and information from Microsoft identity platform to acquire new tokens before tokens expire. Your app runs while acquiring new tokens. -### Use Microsoft.Identity.Web for .NET Developers +Managed identities use regional endpoints, which help prevent out-of-region failures by consolidating service dependencies. Regional endpoints help keep traffic in a geographical area. For example, if your Azure resource is in WestUS2, all traffic stays in WestUS2. -Developers building daemon apps on ASP.NET Core can use the [Microsoft.Identity.Web](../develop/microsoft-identity-web.md) library. This library is built on top of MSAL to make implementing authorization even easier for ASP.NET Core apps. It includes several [distributed token cache](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization#distributed-token-cache) strategies for distributed apps that can run in multiple regions. +## Microsoft Authentication Library ++If you develop daemon apps and don't use managed identities, use the Microsoft Authentication Library (MSAL) for authentication and authorization. MSAL eases the process of providing client credentials. For example, your application doesn't need to create and sign JSON web token assertions with certificate-based credentials. ++See, [Overview of the Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) ++### Microsoft.Identity.Web for .NET developers ++If you develop daemon apps on ASP.NET Core, use the Microsoft.Identity.Web library to ease authorization. It includes distributed token cache strategies for distributed apps that run in multiple regions. ++Learn more: ++* [Microsoft Identity Web authentication library](../develop/microsoft-identity-web.md) +* [Distributed token cache](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization#distributed-token-cache) ## Cache and store tokens -If you are not using MSAL to implement authentication and authorization, you can implement some best practices for caching and storing tokens. MSAL implements and follows these best practices automatically. +If you don't use MSAL for authentication and authorization, there are best practices for caching and storing tokens. MSAL implements and follows these best practices. ++An application acquires tokens from an identity provider (IdP) to authorize the application to call protected APIs. When your app receives tokens, the response with the tokens contains an `expires_in` property that tells the application how long to cache, and reuse, the token. Ensure applications use the `expires_in` property to determine token lifespan. Confirm applications don't attempt to decode an API access token. Using the cached token prevents unnecessary traffic between an app and Microsoft identity platform. Users are signed in to your application for the token's lifetime. ++## HTTP 429 and 5xx error codes ++Use the following sections to learn about HTTP 429 and 5xx error codes. -An application acquires tokens from an Identity provider to authorize the application to call protected APIs. When your app receives tokens, the response that contains the tokens also contains an "expires\_in" property that tells the application how long to cache, and reuse, the token. It is important that applications use the "expires\_in" property to determine the lifespan of the token. Application must never attempt to decode an API access token. 
Using the cached token prevents unnecessary traffic between your app and Microsoft Identity. Your user can stay signed-in to your application for the length of that token's lifetime. +### HTTP 429 -## Properly handle service responses +There are HTTP errors that affect resilience. If your application receives an HTTP 429 error code, Too Many Requests, Microsoft identity platform is throttling your requests, which prevents your app from receiving tokens. Ensure your apps don't attempt to acquire a token until the time in the **Retry-After** response field expires. The 429 error often indicates the application doesn't cache and reuse tokens correctly. -Finally, while applications should handle all error responses, there are some responses that can impact resilience. If your application receives an HTTP 429 response code, Too Many Requests, Microsoft Identity is throttling your requests. If your app continues to make too many requests, it will continue to be throttled preventing your app from receiving tokens. Your application should not attempt to acquire a token again until after the time, in seconds, in the "Retry-After" response field has passed. Receiving a 429 response is often an indication that the application is not caching and reusing tokens correctly. Developers should review how tokens are cached and reused in the application. +### HTTP 5xx -When an application receives an HTTP 5xx response code the app must not enter a fast retry loop. When present, the application should honor the same "Retry-After" handling as it does for a 429 response. If no "Retry-After" header is provided by the response, we recommend implementing an exponential back-off retry with the first retry at least 5 seconds after the response. +If an application receives an HTTP 5xx error code, the app must not enter a fast retry loop. Ensure applications wait until the **Retry-After** field expires. If the response provides no Retry-After header, use an exponential back-off retry with the first retry at least 5 seconds after the response. -When a request times out applications should not retry immediately. Implement an exponential back-off retry with the first retry at least 5 seconds after the response. +When a request times out, confirm that applications don't retry immediately. Use the previously cited exponential back-off retry. ## Next steps -- [Build resilience into applications that sign-in users](resilience-client-app.md)-- [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md)-- [Build resilience in your CIAM systems](resilience-b2c.md)+* [Increase the resilience of authentication and authorization in client applications you develop](resilience-client-app.md) +* [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md) +* [Build resilience in your customer identity and access management with Azure AD B2C](resilience-b2c.md) |
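The retry guidance above reduces to a small loop: honor **Retry-After** when the service supplies it, and otherwise back off exponentially starting at 5 seconds. A hedged Python sketch, where the token URL and request payload are placeholders for your real token request:

```python
import time

import requests


def request_token_with_retries(token_url: str, payload: dict, max_retries: int = 5) -> requests.Response:
    """Retry token requests per the 429/5xx guidance above.

    token_url and payload are placeholders for your real token request.
    """
    for attempt in range(max_retries):
        response = requests.post(token_url, data=payload, timeout=30)

        # Success, or an error that retrying won't fix: return immediately.
        if response.status_code not in (429, 500, 502, 503, 504):
            return response

        retry_after = response.headers.get("Retry-After")
        if retry_after is not None and retry_after.isdigit():
            # Honor the service's instruction: wait the stated number of seconds.
            delay = int(retry_after)
        else:
            # No usable Retry-After header: exponential back-off, with the
            # first retry at least 5 seconds after the response.
            delay = 5 * (2 ** attempt)
        time.sleep(delay)

    return response  # last response after exhausting retries
```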
active-directory | Road To The Cloud Implement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-implement.md | You and your team might feel compelled to change your current employee provision ## Devices -Client workstations are traditionally joined to Active Directory and managed via Group Policy objects (GPOs) or device management solutions such as Microsoft Endpoint Configuration Manager. Your teams will establish a new policy and process to prevent newly deployed workstations from being domain joined. Key points include: +Client workstations are traditionally joined to Active Directory and managed via Group Policy objects (GPOs) or device management solutions such as Microsoft Configuration Manager. Your teams will establish a new policy and process to prevent newly deployed workstations from being domain joined. Key points include: * Mandate [Azure AD join](../devices/concept-azure-ad-join.md) for new Windows client workstations to achieve "no more domain join." For more information, see [Learn more about cloud-native endpoints](/mem/cloud-n ## Applications -Traditionally, application servers are often joined to an on-premises Active Directory domain so that they can use Windows Integrated Authentication (Kerberos or NTLM), directory queries through LDAP, and server management through GPO or Microsoft Endpoint Configuration Manager. +Traditionally, application servers are often joined to an on-premises Active Directory domain so that they can use Windows Integrated Authentication (Kerberos or NTLM), directory queries through LDAP, and server management through GPO or Microsoft Configuration Manager. The organization has a process to evaluate Azure AD alternatives when it's considering new services, apps, or infrastructure. Directives for a cloud-first approach to applications should be as follows. (New on-premises applications or legacy applications should be a rare exception when no modern alternative exists.) |
active-directory | Road To The Cloud Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md | This project focuses on migrating SSO capability from WAM systems to Azure AD. T ### Define an application server management strategy -In terms of infrastructure management, on-premises environments often use a combination of Group Policy objects (GPOs) and Microsoft Endpoint Configuration Manager features to segment management duties. For example, duties can be segmented into security policy management, update management, configuration management, and monitoring. +In terms of infrastructure management, on-premises environments often use a combination of Group Policy objects (GPOs) and Microsoft Configuration Manager features to segment management duties. For example, duties can be segmented into security policy management, update management, configuration management, and monitoring. Active Directory is for on-premises IT environments, and Azure AD is for cloud-based IT environments. One-to-one parity of features isn't present here, so you can manage application servers in several ways. Use the following table to determine what Azure-based tools you can use to repla | Management area | On-premises (Active Directory) feature | Equivalent Azure AD feature | | - | - | -|-| Security policy management| GPO, Microsoft Endpoint Configuration Manager| [Microsoft 365 Defender for Cloud](https://azure.microsoft.com/services/security-center/) | -| Update management| Microsoft Endpoint Configuration Manager, Windows Server Update Services| [Azure Automation Update Management](../../automation/update-management/overview.md) | -| Configuration management| GPO, Microsoft Endpoint Configuration Manager| [Azure Automation State Configuration](../../automation/automation-dsc-overview.md) | +| Security policy management| GPO, Microsoft Configuration Manager| [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/) | +| Update management| Microsoft Configuration Manager, Windows Server Update Services| [Azure Automation Update Management](../../automation/update-management/overview.md) | +| Configuration management| GPO, Microsoft Configuration Manager| [Azure Automation State Configuration](../../automation/automation-dsc-overview.md) | | Monitoring| System Center Operations Manager| [Azure Monitor Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) | Here's more information that you can use for application server management: * If you must wait to migrate or perform a partial migration, you can use GPOs with [Azure AD DS](https://azure.microsoft.com/services/active-directory-ds/). -If you require management of application servers with Microsoft Endpoint Configuration Manager, you can't achieve this by using Azure AD DS. Microsoft Endpoint Configuration Manager isn't supported to run in an Azure AD DS environment. Instead, you'll need to extend your on-premises Active Directory instance to a domain controller running on an Azure VM. Or, you'll need to deploy a new Active Directory instance to an Azure IaaS virtual network. +If you require management of application servers with Microsoft Configuration Manager, you can't achieve this by using Azure AD DS. Microsoft Configuration Manager isn't supported to run in an Azure AD DS environment. 
Instead, you'll need to extend your on-premises Active Directory instance to a domain controller running on an Azure VM. Or, you'll need to deploy a new Active Directory instance to an Azure IaaS virtual network. ### Define the migration strategy for legacy applications |
active-directory | Road To The Cloud Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-posture.md | -Active Directory, Azure Active Directory (Azure AD), and other Microsoft tools are at the core of identity and access management (IAM). For example, Active Directory Domain Services (AD DS) and Microsoft Endpoint Configuration Manager provide device management in Active Directory. In Azure AD, Intune provides the same capability. +Active Directory, Azure Active Directory (Azure AD), and other Microsoft tools are at the core of identity and access management (IAM). For example, Active Directory Domain Services (AD DS) and Microsoft Configuration Manager provide device management in Active Directory. In Azure AD, Intune provides the same capability. As part of most modernization, migration, or Zero Trust initiatives, organizations shift IAM activities from using on-premises or infrastructure-as-a-service (IaaS) solutions to using built-for-the-cloud solutions. For an IT environment that uses Microsoft products and services, Active Directory and Azure AD play a role. |
active-directory | Plan Hybrid Identity Design Considerations Nextsteps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-nextsteps.md | Monitoring the following resources often provides the latest news and updates on * [Microsoft Enterprise Mobility blog](https://cloudblogs.microsoft.com/ENTERPRISEMOBILITY/) * [Microsoft In The Cloud blog](https://cloudblogs.microsoft.com/) * [Microsoft Intune blog](https://techcommunity.microsoft.com/t5/intune-customer-success/welcome-to-the-new-intune-customer-success-blog/ba-p/281367)-* [Microsoft Endpoint Configuration Manager blog](https://techcommunity.microsoft.com/t5/Configuration-Manager-Blog/bg-p/ConfigurationManagerBlog) +* [Microsoft Configuration Manager blog](https://techcommunity.microsoft.com/t5/Configuration-Manager-Blog/bg-p/ConfigurationManagerBlog) ## See also [Design considerations overview](plan-hybrid-identity-design-considerations-overview.md) |
active-directory | Concept Pim For Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md | Azure AD role-assignable group feature is not part of Azure AD Privileged Identity Management. Groups can be role-assignable or non-role-assignable. The group can be enabled in PIM for Groups or not enabled in PIM for Groups. These are independent properties of the group. Any Azure AD security group and any Microsoft 365 group (except dynamic groups and groups synchronized from on-premises environment) can be enabled in PIM for Groups. The group does not have to be a role-assignable group to be enabled in PIM for Groups. -If you want to assign an Azure AD role to a group, it has to be role-assignable. Even if you do not intend to assign an Azure AD role to the group but the group provides access to sensitive resources, it is still recommended to consider creating the group as role-assignable. This is because of extra protections role-assignable groups have – see "What are Azure AD role-assignable groups?" in the section above. +If you want to assign an Azure AD role to a group, it has to be role-assignable. Even if you do not intend to assign an Azure AD role to the group but the group provides access to sensitive resources, it is still recommended to consider creating the group as role-assignable. This is because of extra protections role-assignable groups have – see ["What are Azure AD role-assignable groups?"](#what-are-azure-ad-role-assignable-groups) in the section above. Up until January 2023, it was required that every Privileged Access Group (former name for this PIM for Groups feature) had to be a role-assignable group. This restriction is currently removed. Because of that, it is now possible to enable more than 500 groups per tenant in PIM, but only up to 500 groups can be role-assignable. |
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | This article lists the Azure AD built-in roles you can assign to allow managemen > | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Can see only tenant level aggregates in Microsoft 365 Usage Analytics and Productivity Score. | 75934031-6c7e-415a-99d7-48dbd49e875e | > | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins. | fe930be7-5e62-47db-91af-98c3a49a38b1 | > | [Virtual Visits Administrator](#virtual-visits-administrator) | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app. | e300d9e7-4a2b-4295-9eff-f1c78b36cc98 |+> | [Viva Goals Administrator](#viva-goals-administrator) | Manage and configure all aspects of Microsoft Viva Goals. | 92b086b3-e367-4ef2-b869-1de128fb986e | > | [Windows 365 Administrator](#windows-365-administrator) | Can provision and manage all aspects of Cloud PCs. | 11451d60-acb2-45eb-a7d6-43d0f0125c13 | > | [Windows Update Deployment Administrator](#windows-update-deployment-administrator) | Can create and manage all aspects of Windows Update deployments through the Windows Update for Business deployment service. | 32696413-001a-46ae-978c-ce0f6b3620d2 | > | [Yammer Administrator](#yammer-administrator) | Manage all aspects of the Yammer service. | 810a2642-a034-447f-a5e8-41beaa378541 | Virtual Visits are a simple way to schedule and manage online and video appointm > | microsoft.virtualVisits/allEntities/allProperties/allTasks | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | +## Viva Goals Administrator ++Assign the Viva Goals Administrator role to users who need to do the following tasks: ++- Manage and configure all aspects of the Microsoft Viva Goals application +- Configure Microsoft Viva Goals admin settings +- Read Azure AD tenant information +- Monitor Microsoft 365 service health +- Create and manage Microsoft 365 service requests ++For more information, see [Roles and permissions in Viva Goals](/viva/goals/roles-permissions-in-viva-goals) and [Introduction to Microsoft Viva Goals](/viva/goals/intro-to-ms-viva-goals). ++> [!div class="mx-tableFixed"] +> | Actions | Description | +> | | | +> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | +> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | + ## Windows 365 Administrator Users with this role have global permissions on Windows 365 resources, when the service is present. Additionally, this role contains the ability to manage users and devices in order to associate policy, as well as create and manage groups. 
Privileged Auth Admin | | | | | :heavy_check_mark: | Privileged Role Admin | | | | | :heavy_check_mark: | :heavy_check_mark: Reports Reader | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:-User<br/>(no admin role, but member or owner of a role-assignable group) | | | | | :heavy_check_mark: | :heavy_check_mark: +User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | | | | | :heavy_check_mark: | :heavy_check_mark: User Admin | | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Usage Summary Reports Reader | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: All custom roles | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Privileged Auth Admin | | | :heavy_check_mark: | :heavy_check_mark Privileged Role Admin | | | :heavy_check_mark: | :heavy_check_mark: Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:-User<br/>(no admin role, but member or owner of a role-assignable group) | | | :heavy_check_mark: | :heavy_check_mark: +User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | | | :heavy_check_mark: | :heavy_check_mark: User Admin | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Usage Summary Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: All custom roles | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
active-directory | Sap Successfactors Inbound Provisioning Cloud Only Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md | The objective of this tutorial is to show the steps you need to perform to provi >[!NOTE] >Use this tutorial if the users you want to provision from SuccessFactors are cloud-only users who don't need an on-premises AD account. If the users require only on-premises AD account or both AD and Azure AD account, then please refer to the tutorial on [configure SAP SuccessFactors to Active Directory](sap-successfactors-inbound-provisioning-tutorial.md#overview) user provisioning. +The following video provides a quick overview of the steps involved when planning your provisioning integration with SAP SuccessFactors. ++> [!VIDEO https://www.youtube-nocookie.com/embed/66v2FR2-QrY] + ## Overview The [Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with the [SuccessFactors Employee Central](https://www.successfactors.com/products-services/core-hr-payroll/employee-central.html) in order to manage the identity life cycle of users. |
active-directory | Sap Successfactors Inbound Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md | The objective of this tutorial is to show the steps you need to perform to provi >[!NOTE] >Use this tutorial if the users you want to provision from SuccessFactors need an on-premises AD account and optionally an Azure AD account. If the users from SuccessFactors only need Azure AD account (cloud-only users), then please refer to the tutorial on [configure SAP SuccessFactors to Azure AD](sap-successfactors-inbound-provisioning-cloud-only-tutorial.md) user provisioning. +The following video provides a quick overview of the steps involved when planning your provisioning integration with SAP SuccessFactors. ++> [!VIDEO https://www.youtube-nocookie.com/embed/66v2FR2-QrY] ## Overview |
active-directory | Workday Inbound Cloud Only Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-cloud-only-tutorial.md | The objective of this tutorial is to show the steps you need to perform to provi >[!NOTE] >Use this tutorial if the users you want to provision from Workday are cloud-only users who don't need an on-premises AD account. If the users require only on-premises AD account or both AD and Azure AD account, then please refer to the tutorial on [configure Workday to Active Directory](workday-inbound-tutorial.md) user provisioning. +The following video provides a quick overview of the steps involved when planning your provisioning integration with Workday. ++> [!VIDEO https://www.youtube-nocookie.com/embed/TfndXBlhlII] + ## Overview The [Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with the [Workday Human Resources API](https://community.workday.com/sites/default/files/file-hosting/productionapi/Human_Resources/v21.1/Get_Workers.html) in order to provision user accounts. The Workday user provisioning workflows supported by the Azure AD user provisioning service enable automation of the following human resources and identity lifecycle management scenarios: |
active-directory | Workday Inbound Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-tutorial.md | The objective of this tutorial is to show the steps you need to perform to provi >* If the users from Workday only need Azure AD account (cloud-only users), then please refer to the tutorial on [configure Workday to Azure AD](workday-inbound-cloud-only-tutorial.md) user provisioning. >* To configure writeback of attributes such as email address, username and phone number from Azure AD to Workday, please refer to the tutorial on [configure Workday writeback](workday-writeback-tutorial.md). +The following video provides a quick overview of the steps involved when planning your provisioning integration with Workday. ++> [!VIDEO https://www.youtube-nocookie.com/embed/TfndXBlhlII] + ## Overview The [Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with the [Workday Human Resources API](https://community.workday.com/sites/default/files/file-hosting/productionapi/Human_Resources/v21.1/Get_Workers.html) in order to provision user accounts. The Workday user provisioning workflows supported by the Azure AD user provisioning service enable automation of the following human resources and identity lifecycle management scenarios: This section describes the end-to-end user provisioning solution architecture fo Configuring Workday to Active Directory user provisioning requires considerable planning covering different aspects such as: * Setup of the Azure AD Connect provisioning agent * Number of Workday to AD user provisioning apps to deploy-* Selecting the the right matching identifier, attribute mapping, transformation and scoping filters +* Selecting the right matching identifier, attribute mapping, transformation and scoping filters Please refer to the [cloud HR deployment plan](../app-provisioning/plan-cloud-hr-provision.md) for comprehensive guidelines and recommended best practices. |
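To make the matching-identifier and transformation planning point above concrete, attribute mappings in Azure AD provisioning use an expression language. As an illustrative sketch only (the Workday attribute name and domain are placeholders, not values from this tutorial), a UPN could be built from a Workday identifier like this:

```
Join("@", [UserID], "contoso.com")
```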
active-directory | Nist Authenticator Assurance Level 3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-3.md | To meet the requirement for reauthentication, regardless of user activity, Micro Use NIST for compensating controls to confirm subscriber presence: -* Set a session inactivity timeout of 15 minutes: Lock the device at the OS level by using Microsoft Endpoint Configuration Manager, Group Policy Object (GPO), or Intune. For the subscriber to unlock it, require local authentication. +* Set a session inactivity timeout of 15 minutes: Lock the device at the OS level by using Microsoft Configuration Manager, Group Policy Object (GPO), or Intune. For the subscriber to unlock it, require local authentication. * Set timeout, regardless of activity, by running a scheduled task using Configuration Manager, GPO, or Intune. Lock the machine after 12 hours, regardless of activity. |
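For teams scripting the OS-level lock directly instead of authoring it in GPO or Intune, the 15-minute inactivity lock maps to the Windows machine inactivity limit. A sketch, assuming direct registry configuration and 900 seconds (15 minutes):

```console
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v InactivityTimeoutSecs /t REG_DWORD /d 900 /f
```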
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | The traditional [Azure Container Networking Interface (CNI)](./configure-azure-c With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution saves a significant number of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS. > [!NOTE]-> Azure CNI Overlay is currently **_unavailable_** in the following regions: -> - South Central US -> - West US +> Azure CNI Overlay is currently **_unavailable_** in the **West US** region. All other public regions are supported. ## Overview of overlay networking Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space | Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking | | Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency | | Kubernetes Network Policies | Azure Network Policies, Calico, Cilium | Calico |-| OS platforms supported | Linux and Windows Server 2022 | Linux only | +| OS platforms supported | Linux and Windows Server 2022 | Linux only | ## IP address planning The following are additional factors to consider when planning pods IP address space * **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that's used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the `kubernetes.default.svc.cluster.local` address. +## Network security groups ++Pod to pod traffic with Azure CNI Overlay is not encapsulated and subnet [network security group][nsg] rules are applied. If the subnet NSG contains deny rules that would impact the pod CIDR traffic, make sure the following rules are in place to ensure proper cluster functionality (in addition to all [AKS egress requirements][aks-egress]): ++* Traffic from the node CIDR to the node CIDR on all ports and protocols +* Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing) +* Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS) ++Traffic from a pod to any destination outside of the pod CIDR block will utilize SNAT to set the source IP to the IP of the node where the pod is running. ++If you wish to restrict traffic between workloads in the cluster, [network policies][aks-network-policies] are the recommended solution. An Azure CLI sketch of the NSG rules above appears after the upgrade section below. + ## Maximum pods per node You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool. The default for Azure CNI Overlay is 30. The maximum value that you can specify in Azure CNI Overlay is 250, and the minimum value is 10. The maximum pods per node value configured during creation of a node pool applies to the nodes in that node pool only. 
location="westcentralus" az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 ``` +## Upgrade existing clusters ++To update an existing cluster to use Azure CNI overlay, there are a couple of prerequisites: ++1. The cluster must use Azure CNI without the pod subnet feature. +1. The cluster is _not_ using network policies. +1. The Overlay Pod CIDR needs to be an address range that _does not_ overlap with the existing cluster's VNet. ++To update a cluster, run the following Azure CLI command. ++```azurecli +az aks update --name $clusterName --resource-group $resourceGroup --network-plugin azure --network-plugin-mode overlay --pod-cidr $overlayPodCidr +``` ++This will perform a rolling upgrade of nodes in **all** node pools simultaneously to Azure CNI overlay and should be treated like a node image upgrade. During the upgrade, traffic from an Overlay pod to a CNI v1 pod will be SNATed (Source Network Address Translation). + ## Next steps To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md). To learn how to utilize AKS with your own Container Network Interface (CNI) plug [az-provider-register]: /cli/azure/provider#az-provider-register [az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show+[aks-egress]: limit-egress-traffic.md +[aks-network-policies]: use-network-policies.md +[nsg]: /azure/virtual-network/network-security-groups-overview |
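As referenced in the network security groups section above, the three required allowances can be expressed as explicit NSG rules. A hedged Azure CLI sketch; the NSG name, rule priorities, node CIDR (10.240.0.0/16), and pod CIDR (192.168.0.0/16) are assumptions for illustration, not values from this article:

```azurecli
# Allow pod-to-pod traffic (covers pod-to-service and DNS); names and CIDRs are placeholders
az network nsg rule create --resource-group $resourceGroup --nsg-name myClusterNsg \
  --name AllowPodToPod --priority 100 --direction Inbound --access Allow --protocol '*' \
  --source-address-prefixes 192.168.0.0/16 --destination-address-prefixes 192.168.0.0/16 \
  --source-port-ranges '*' --destination-port-ranges '*'

# Allow node-to-pod traffic (required for service traffic routing)
az network nsg rule create --resource-group $resourceGroup --nsg-name myClusterNsg \
  --name AllowNodeToPod --priority 110 --direction Inbound --access Allow --protocol '*' \
  --source-address-prefixes 10.240.0.0/16 --destination-address-prefixes 192.168.0.0/16 \
  --source-port-ranges '*' --destination-port-ranges '*'

# Allow node-to-node traffic
az network nsg rule create --resource-group $resourceGroup --nsg-name myClusterNsg \
  --name AllowNodeToNode --priority 120 --direction Inbound --access Allow --protocol '*' \
  --source-address-prefixes 10.240.0.0/16 --destination-address-prefixes 10.240.0.0/16 \
  --source-port-ranges '*' --destination-port-ranges '*'
```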
aks | Coredns Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md | description: Learn how to customize CoreDNS to add subdomains or extend custom D Previously updated : 03/15/2019 Last updated : 03/03/2023 #Customer intent: As a cluster operator or developer, I want to learn how to customize the CoreDNS configuration to add sub domains or extend to custom DNS endpoints within my network-Azure Kubernetes Service (AKS) uses the [CoreDNS][coredns] project for cluster DNS management and resolution with all *1.12.x* and higher clusters. Previously, the kube-dns project was used. This kube-dns project is now deprecated. For more information about CoreDNS customization and Kubernetes, see the [official upstream documentation][corednsk8s]. +Azure Kubernetes Service (AKS) uses the [CoreDNS][coredns] project for cluster DNS management and resolution with all *1.12.x* and higher clusters. For more information about CoreDNS customization and Kubernetes, see the [official upstream documentation][corednsk8s]. -As AKS is a managed service, you cannot modify the main configuration for CoreDNS (a *CoreFile*). Instead, you use a Kubernetes *ConfigMap* to override the default settings. To see the default AKS CoreDNS ConfigMaps, use the `kubectl get configmaps --namespace=kube-system coredns -o yaml` command. +AKS is a managed service, so you can't modify the main configuration for CoreDNS (a *CoreFile*). Instead, you use a Kubernetes *ConfigMap* to override the default settings. To see the default AKS CoreDNS ConfigMaps, use the `kubectl get configmaps --namespace=kube-system coredns -o yaml` command. -This article shows you how to use ConfigMaps for basic customization options of CoreDNS in AKS. This approach differs from configuring CoreDNS in other contexts such as using the CoreFile. Verify the version of CoreDNS you are running as the configuration values may change between versions. +This article shows you how to use ConfigMaps for basic CoreDNS customization options in AKS. This approach differs from configuring CoreDNS in other contexts, such as the CoreFile. > [!NOTE]-> `kube-dns` offered different [customization options][kubednsblog] via a Kubernetes config map. CoreDNS is **not** backwards compatible with kube-dns. Any customizations you previously used must be updated for use with CoreDNS. +> Previously, kube-dns was used for cluster DNS management and resolution, but it's now deprecated. `kube-dns` offered different [customization options][kubednsblog] via a Kubernetes config map. CoreDNS is **not** backwards compatible with kube-dns. Any customizations you previously used must be updated for CoreDNS. ## Before you begin -This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal]. +* This article assumes that you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal]. +* Verify the version of CoreDNS you're running. The configuration values may change between versions. +* When you create configurations like the examples below, your names in the *data* section must end in *.server* or *.override*. 
This naming convention is defined in the default AKS CoreDNS ConfigMap, which you can view using the `kubectl get configmaps --namespace=kube-system coredns -o yaml` command. -When creating a configuration like the examples below, your names in the *data* section must end in either *.server* or *.override*. This naming convention is defined in the default AKS CoreDNS Configmap which you can view using the `kubectl get configmaps --namespace=kube-system coredns -o yaml` command. --## What is supported/unsupported +## Plugin support All built-in CoreDNS plugins are supported. No add-on/third party plugins are supported. ## Rewrite DNS -One scenario you have is to perform on-the-fly DNS name rewrites. In the following example, replace `<domain to be written>` with your own fully qualified domain name. Create a file named `corednsms.yaml` and paste the following example configuration: +You can customize CoreDNS with AKS to perform on-the-fly DNS name rewrites. -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: coredns-custom - namespace: kube-system -data: - test.server: | # you may select any name here, but it must end with the .server file extension - <domain to be rewritten>.com:53 { - log - errors - rewrite stop { - name regex (.*)\.<domain to be rewritten>.com {1}.default.svc.cluster.local - answer name (.*)\.default\.svc\.cluster\.local {1}.<domain to be rewritten>.com - } - forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name - } -``` +1. Create a file named `corednsms.yaml` and paste the following example configuration. Make sure to replace `<domain to be rewritten>` with your own fully qualified domain name. -> [!IMPORTANT] -> If you redirect to a DNS server, such as the CoreDNS service IP, that DNS server must be able to resolve the rewritten domain name. + ```yaml + apiVersion: v1 + kind: ConfigMap + metadata: + name: coredns-custom + namespace: kube-system + data: + test.override: | + <domain to be rewritten>.com:53 { + log + errors + rewrite stop { + name regex (.*)\.<domain to be rewritten>.com {1}.default.svc.cluster.local + answer name (.*)\.default\.svc\.cluster\.local {1}.<domain to be rewritten>.com + } + forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name + } + ``` -Create the ConfigMap using the [kubectl apply configmap][kubectl-apply] command and specify the name of your YAML manifest: + > [!IMPORTANT] + > If you redirect to a DNS server, such as the CoreDNS service IP, that DNS server must be able to resolve the rewritten domain name. -```console -kubectl apply -f corednsms.yaml -``` +2. Create the ConfigMap using the [`kubectl apply configmap`][kubectl-apply] command and specify the name of your YAML manifest. -To verify the customizations have been applied, use the [kubectl get configmaps][kubectl-get] and specify your *coredns-custom* ConfigMap: + ```console + kubectl apply -f corednsms.yaml + ``` -``` -kubectl get configmaps --namespace=kube-system coredns-custom -o yaml -``` +3. Verify the customizations have been applied using the [`kubectl get configmaps`][kubectl-get] and specify your *coredns-custom* ConfigMap. -Now force CoreDNS to reload the ConfigMap. The [kubectl delete pod][kubectl delete] command isn't destructive and doesn't cause down time. The `kube-dns` pods are deleted, and the Kubernetes Scheduler then recreates them. 
These new pods contain the change in TTL value. + ```console + kubectl get configmaps --namespace=kube-system coredns-custom -o yaml + ``` -```console -kubectl delete pod --namespace kube-system -l k8s-app=kube-dns -``` +4. Force CoreDNS to reload the ConfigMap using the [`kubectl delete pod`][kubectl delete] command and the `kube-dns` label. This command deletes the `kube-dns` pods, and then the Kubernetes Scheduler recreates them. The new pods contain the change in TTL value. -> [!Note] -> The command above is correct. While we're changing `coredns`, the deployment is under the **kube-dns** label. + ```console + kubectl delete pod --namespace kube-system -l k8s-app=kube-dns + ``` ## Custom forward server -If you need to specify a forward server for your network traffic, you can create a ConfigMap to customize DNS. In the following example, update the `forward` name and address with the values for your own environment. Create a file named `corednsms.yaml` and paste the following example configuration: +If you need to specify a forward server for your network traffic, you can create a ConfigMap to customize DNS. -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: coredns-custom - namespace: kube-system -data: - test.server: | # you may select any name here, but it must end with the .server file extension - <domain to be rewritten>.com:53 { - forward foo.com 1.1.1.1 - } -``` +1. Create a file named `corednsms.yaml` and paste the following example configuration. Make sure to replace the `forward` name and the address with the values for your own environment. -As in the previous examples, create the ConfigMap using the [kubectl apply configmap][kubectl-apply] command and specify the name of your YAML manifest. Then, force CoreDNS to reload the ConfigMap using the [kubectl delete pod][kubectl delete] for the Kubernetes Scheduler to recreate them: + ```yaml + apiVersion: v1 + kind: ConfigMap + metadata: + name: coredns-custom + namespace: kube-system + data: + test.server: | # you may select any name here, but it must end with the .server file extension + <domain to be rewritten>.com:53 { + forward foo.com 1.1.1.1 + } + ``` -```console -kubectl apply -f corednsms.yaml -kubectl delete pod --namespace kube-system --selector k8s-app=kube-dns -``` +2. Create the ConfigMap using the [`kubectl apply configmap`][kubectl-apply] command and specify the name of your YAML manifest. ++ ```console + kubectl apply -f corednsms.yaml + ``` ++3. Force CoreDNS to reload the ConfigMap using the [`kubectl delete pod`][kubectl delete] so the Kubernetes Scheduler can recreate them. ++ ```console + kubectl delete pod --namespace kube-system -l k8s-app=kube-dns + ``` ## Use custom domains You may want to configure custom domains that can only be resolved internally. For example, you may want to resolve the custom domain *puglife.local*, which isn't a valid top-level domain. Without a custom domain ConfigMap, the AKS cluster can't resolve the address. -In the following example, update the custom domain and IP address to direct traffic to with the values for your own environment. Create a file named `corednsms.yaml` and paste the following example configuration: +1. Create a new file named `corednsms.yaml` and paste the following example configuration. Make sure to update the custom domain and IP address with the values for your own environment. 
-```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: coredns-custom - namespace: kube-system -data: - puglife.server: | # you may select any name here, but it must end with the .server file extension - puglife.local:53 { - errors - cache 30 - forward . 192.11.0.1 # this is my test/dev DNS server - } -``` + ```yaml + apiVersion: v1 + kind: ConfigMap + metadata: + name: coredns-custom + namespace: kube-system + data: + puglife.server: | # you may select any name here, but it must end with the .server file extension + puglife.local:53 { + errors + cache 30 + forward . 192.11.0.1 # this is my test/dev DNS server + } + ``` -As in the previous examples, create the ConfigMap using the [kubectl apply configmap][kubectl-apply] command and specify the name of your YAML manifest. Then, force CoreDNS to reload the ConfigMap using the [kubectl delete pod][kubectl delete] for the Kubernetes Scheduler to recreate them: +2. Create the ConfigMap using the [`kubectl apply configmap`][kubectl-apply] command and specify the name of your YAML manifest. -```console -kubectl apply -f corednsms.yaml -kubectl delete pod --namespace kube-system --selector k8s-app=kube-dns -``` + ```console + kubectl apply -f corednsms.yaml + ``` ++3. Force CoreDNS to reload the ConfigMap using the [`kubectl delete pod`][kubectl delete] so the Kubernetes Scheduler can recreate them. ++ ```console + kubectl delete pod --namespace kube-system -l k8s-app=kube-dns + ``` ## Stub domains -CoreDNS can also be used to configure stub domains. In the following example, update the custom domains and IP addresses with the values for your own environment. Create a file named `corednsms.yaml` and paste the following example configuration: +CoreDNS can also be used to configure stub domains. -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: coredns-custom - namespace: kube-system -data: - test.server: | # you may select any name here, but it must end with the .server file extension - abc.com:53 { - errors - cache 30 - forward . 1.2.3.4 - } - my.cluster.local:53 { - errors - cache 30 - forward . 2.3.4.5 - } +1. Create a file named `corednsms.yaml` and paste the following example configuration. Make sure to update the custom domains and IP addresses with the values for your own environment. -``` + ```yaml + apiVersion: v1 + kind: ConfigMap + metadata: + name: coredns-custom + namespace: kube-system + data: + test.server: | # you may select any name here, but it must end with the .server file extension + abc.com:53 { + errors + cache 30 + forward . 1.2.3.4 + } + my.cluster.local:53 { + errors + cache 30 + forward . 2.3.4.5 + } -As in the previous examples, create the ConfigMap using the [kubectl apply configmap][kubectl-apply] command and specify the name of your YAML manifest. Then, force CoreDNS to reload the ConfigMap using the [kubectl delete pod][kubectl delete] for the Kubernetes Scheduler to recreate them: + ``` -```console -kubectl apply -f corednsms.yaml -kubectl delete pod --namespace kube-system --selector k8s-app=kube-dns -``` +2. Create the ConfigMap using the [`kubectl apply configmap`][kubectl-apply] command and specify the name of your YAML manifest. ++ ```console + kubectl apply -f corednsms.yaml + ``` ++3. Force CoreDNS to reload the ConfigMap using the [`kubectl delete pod`][kubectl delete] so the Kubernetes Scheduler can recreate them. 
++ ```console + kubectl delete pod --namespace kube-system -l k8s-app=kube-dns + ``` ## Hosts plugin -As all built-in plugins are supported this means that the CoreDNS [Hosts][coredns hosts] plugin is available to customize as well: +All built-in plugins are supported, so the [CoreDNS hosts][coredns hosts] plugin is available to customize as well. ```yaml apiVersion: v1 data: ## Troubleshooting -For general CoreDNS troubleshooting steps, such as checking the endpoints or resolution, see [Debugging DNS Resolution][coredns-troubleshooting]. +For general CoreDNS troubleshooting steps, such as checking the endpoints or resolution, see [Debugging DNS resolution][coredns-troubleshooting]. -To enable DNS query logging, apply the following configuration in your coredns-custom ConfigMap: +### Enable DNS query logging -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: coredns-custom - namespace: kube-system -data: - log.override: | # you may select any name here, but it must end with the .override file extension - log -``` +1. Add the following configuration to your coredns-custom ConfigMap: -After you apply the configuration changes, use the `kubectl logs` command to view the CoreDNS debug logging. For example: + ```yaml + apiVersion: v1 + kind: ConfigMap + metadata: + name: coredns-custom + namespace: kube-system + data: + log.override: | # you may select any name here, but it must end with the .override file extension + log + ``` -```console -kubectl logs --namespace kube-system --selector k8s-app=kube-dns -``` +2. Apply the configuration changes and force CoreDNS to reload the ConfigMap using the following commands: ++ ```console + # Apply configuration changes + kubectl apply -f corednsms.yaml ++ # Force CoreDNS to reload the ConfigMap + kubectl delete pod --namespace kube-system -l k8s-app=kube-dns + ``` ++3. View the CoreDNS debug logging using the `kubectl logs` command. ++ ```console + kubectl logs --namespace kube-system -l k8s-app=kube-dns + ``` ## Next steps To learn more about core network concepts, see [Network concepts for application [kubednsblog]: https://www.danielstechblog.io/using-custom-dns-server-for-domain-specific-name-resolution-with-azure-kubernetes-service/ [coredns]: https://coredns.io/ [corednsk8s]: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-[dnscache]: https://coredns.io/plugins/cache/ [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete |
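After applying any of these ConfigMap customizations and recycling the CoreDNS pods, you can sanity-check resolution end to end with a throwaway test pod. A sketch, assuming the public busybox image is reachable from your nodes; swap in your rewritten, custom, or stub domain to test it:

```console
kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
```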
aks | Internal Lb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md | internal-app LoadBalancer 10.0.184.168 10.240.0.25 80:30225/TCP 4m For more information on configuring your load balancer in a different subnet, see [Specify a different subnet][different-subnet] -## Connect Azure Private Link service to internal load balancer (Preview) +## Connect Azure Private Link service to internal load balancer ### Before you begin You must have the following resources: -* Azure CLI version 2.0.59 or later. * Kubernetes version 1.22.x or later. * An existing resource group with a VNet and subnet. This resource group is where you'll [create the private endpoint](#create-a-private-endpoint-to-the-private-link-service). If you don't have these resources, see [Create a virtual network and subnet][aks-vnet-subnet]. |
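With those prerequisites met, the Private Link service is typically requested declaratively through annotations on the internal load balancer's Service manifest. A hedged sketch; the Service name, port, and selector are placeholders, and the annotations shown are the Azure cloud provider's internal-LB and PLS opt-ins:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app-pls
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"  # keep the load balancer internal
    service.beta.kubernetes.io/azure-pls-create: "true"              # request a Private Link service for it
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
```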
aks | Quickstart Helm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md | Title: Develop on Azure Kubernetes Service (AKS) with Helm description: Use Helm with AKS and Azure Container Registry to package and run application containers in a cluster. Previously updated : 12/17/2021 Last updated : 03/03/2023 # Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm [Helm][helm] is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers like *APT* and *Yum*, Helm manages Kubernetes charts, which are packages of pre-configured Kubernetes resources. -In this quickstart, you'll use Helm to package and run an application on AKS. For more details on installing an existing application using Helm, see the [Install existing applications with Helm in AKS][helm-existing] how-to guide. +In this quickstart, you'll use Helm to package and run an application on AKS. For more details on installing an existing application using Helm, see [Install existing applications with Helm in AKS][helm-existing]. ## Prerequisites In this quickstart, you'll use Helm to package and run an application on AKS. Fo * [Helm v3 installed][helm-install]. ## Create an Azure Container Registry-You'll need to store your container images in an Azure Container Registry (ACR) to run your application in your AKS cluster using Helm. Provide your own registry name unique within Azure and containing 5-50 alphanumeric characters. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput. ++You'll need to store your container images in an Azure Container Registry (ACR) to run your application in your AKS cluster using Helm. Provide your own registry name unique within Azure and containing 5-50 alphanumeric characters. Only lowercase characters are allowed. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput. ### [Azure CLI](#tab/azure-cli) -The below example uses [az acr create][az-acr-create] to create an ACR named *MyHelmACR* in *MyResourceGroup* with the *Basic* SKU. +The below example uses the [`az acr create`][az-acr-create] command to create an ACR named *myhelmacr* in *myResourceGroup* with the *Basic* SKU. ```azurecli-interactive-az group create --name MyResourceGroup --location eastus -az acr create --resource-group MyResourceGroup --name MyHelmACR --sku Basic +az group create --name myResourceGroup --location eastus +az acr create --resource-group myResourceGroup --name myhelmacr --sku Basic ``` -Output will be similar to the following example. Take note of your *loginServer* value for your ACR since you'll use it in a later step. In the below example, *myhelmacr.azurecr.io* is the *loginServer* for *MyHelmACR*. +Your output will be similar to the following example output. Take note of your *loginServer* value for your ACR since you'll use it in a later step. ```console { Output will be similar to the following example. Take note of your *loginServer* ### [Azure PowerShell](#tab/azure-powershell) -The below example uses the [New-AzContainerRegistry][new-azcontainerregistry] cmdlet to create an ACR named *MyHelmACR* in *MyResourceGroup* with the *Basic* SKU. +The below example uses the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet to create an ACR named *myhelmacr* in *myResourceGroup* with the *Basic* SKU. 
```azurepowershell-interactive-New-AzResourceGroup -Name MyResourceGroup -Location eastus -New-AzContainerRegistry -ResourceGroupName MyResourceGroup -Name MyHelmACR -Sku Basic +New-AzResourceGroup -Name myResourceGroup -Location eastus +New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myhelmacr -Sku Basic ``` -Output will be similar to the following example. Take note of your *LoginServer* value for your ACR since you'll use it in a later step. In the below example, *myhelmacr.azurecr.io* is the *LoginServer* for *MyHelmACR*. +Your output will be similar to the following example. Take note of your *LoginServer* value for your ACR since you'll use it in a later step. ```output Registry Name Sku LoginServer CreationDate Provisioni AdminUserE StorageAccountName ngState nabled - -- - - -MyHelmACR Basic myhelmacr.azurecr.io 5/30/2022 9:16:14 PM Succeeded False +myhelmacr Basic myhelmacr.azurecr.io 5/30/2022 9:16:14 PM Succeeded False ``` ## Create an AKS cluster -Your new AKS cluster needs access to your ACR to pull the container images and run them. Use the following command to: -* Create an AKS cluster called *MyAKS* and attach *MyHelmACR*. -* Grant the *MyAKS* cluster access to your *MyHelmACR* ACR. +Your new AKS cluster needs access to your ACR to pull the container images and run them. ### [Azure CLI](#tab/azure-cli) +Use the [`az aks create`][az-aks-create] command with the `--attach-acr` parameter to create an AKS cluster called *myAKSCluster* and grant the cluster access to the *myhelmacr* ACR. + ```azurecli-interactive-az aks create --resource-group MyResourceGroup --name MyAKS --location eastus --attach-acr MyHelmACR --generate-ssh-keys +az aks create --resource-group myResourceGroup --name myAKSCluster --location eastus --attach-acr myhelmacr --generate-ssh-keys ``` ### [Azure PowerShell](#tab/azure-powershell) +Use the [`New-AzAksCluster`][new-azakscluster] cmdlet with the `-AcrNameToAttach` parameter to create an AKS cluster called *myAKSCluster* and grant the cluster access to the *myhelmacr* ACR. + ```azurepowershell-interactive-New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus -AcrNameToAttach MyHelmACR -GenerateSshKey +New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -Location eastus -AcrNameToAttach myhelmacr -GenerateSshKey ``` ## Connect to your AKS cluster -To connect a Kubernetes cluster locally, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. +To connect to a Kubernetes cluster locally, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. ### [Azure CLI](#tab/azure-cli) -1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command: +1. Install `kubectl` locally using the [`az aks install-cli`][az-aks-install-cli] command. ```azurecli az aks install-cli ``` -2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command example gets credentials for the AKS cluster named *MyAKS* in the *MyResourceGroup*: +2. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. 
The following command gets credentials for the AKS cluster named *myAKSCluster* in *myResourceGroup*: ```azurecli-interactive- az aks get-credentials --resource-group MyResourceGroup --name MyAKS + az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ``` ### [Azure PowerShell](#tab/azure-powershell) -1. Install `kubectl` locally using the [Install-AzAksKubectl][install-azakskubectl] cmdlet: +1. Install `kubectl` locally using the [`Install-AzAksKubectl`][install-azakskubectl] cmdlet. ```azurepowershell Install-AzAksKubectl ``` -2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following command example gets credentials for the AKS cluster named *MyAKS* in the *MyResourceGroup*: +2. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. The following command gets credentials for the AKS cluster named *myAKSCluster* in *myResourceGroup*: ```azurepowershell-interactive- Import-AzAksCredential -ResourceGroupName MyResourceGroup -Name MyAKS + Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ``` ## Download the sample application -This quickstart uses the [Azure Vote application][azure-vote-app]. Clone the application from GitHub and navigate to the `azure-vote` directory. +This quickstart uses the [Azure Vote application][azure-vote-app]. Clone the application from GitHub and navigate to the `azure-vote` directory using the following commands: ```console git clone https://github.com/Azure-Samples/azure-voting-app-redis.git cd azure-voting-app-redis/azure-vote/ ## Build and push the sample application to the ACR -Using the preceding Dockerfile, run the [az acr build][az-acr-build] command to build and push an image to the registry. The `.` at the end of the command provides the location of the source code directory path (in this case, the current directory). The `--file` parameter takes in the path of the Dockerfile relative to this source code directory path. +Using the preceding Dockerfile, run the [`az acr build`][az-acr-build] command to build and push an image to the registry. The `.` at the end of the command provides the location of the source code directory path (in this case, the current directory). The `--file` parameter takes in the path of the Dockerfile relative to this source code directory path. ```azurecli-interactive-az acr build --image azure-vote-front:v1 --registry MyHelmACR --file Dockerfile . +az acr build --image azure-vote-front:v1 --registry myhelmacr --file Dockerfile . ``` > [!NOTE] az acr build --image azure-vote-front:v1 --registry MyHelmACR --file Dockerfile ## Create your Helm chart -Generate your Helm chart using the `helm create` command. +1. Generate your Helm chart using the `helm create` command. -```console -helm create azure-vote-front -``` + ```console + helm create azure-vote-front + ``` -Update *azure-vote-front/Chart.yaml* to add a dependency for the *redis* chart from the `https://charts.bitnami.com/bitnami` chart repository and update `appVersion` to `v1`. For example: +2. Update *azure-vote-front/Chart.yaml* to add a dependency for the *redis* chart from the `https://charts.bitnami.com/bitnami` chart repository and update `appVersion` to `v1`. For example: -> [!NOTE] -> The container image versions shown in this guide have been tested to work with this example but may not be the latest version available. 
--```yml -apiVersion: v2 -name: azure-vote-front -description: A Helm chart for Kubernetes --dependencies: - - name: redis - version: 17.3.17 - repository: https://charts.bitnami.com/bitnami --... -# This is the version number of the application being deployed. This version number should be -# incremented each time you make changes to the application. -appVersion: v1 -``` + > [!NOTE] + > The container image versions shown in this guide have been tested to work with this example but may not be the latest version available. -Update your helm chart dependencies using `helm dependency update`: + ```yaml + apiVersion: v2 + name: azure-vote-front + description: A Helm chart for Kubernetes -```console -helm dependency update azure-vote-front -``` + dependencies: + - name: redis + version: 17.3.17 + repository: https://charts.bitnami.com/bitnami -Update *azure-vote-front/values.yaml*: -* Add a *redis* section to set the image details, container port, and deployment name. -* Add a *backendName* for connecting the frontend portion to the *redis* deployment. -* Change *image.repository* to `<loginServer>/azure-vote-front`. -* Change *image.tag* to `v1`. -* Change *service.type* to *LoadBalancer*. + ... + # This is the version number of the application being deployed. This version number should be + # incremented each time you make changes to the application. + appVersion: v1 + ``` -For example: +3. Update your helm chart dependencies using the `helm dependency update` command. -```yml -# Default values for azure-vote-front. -# This is a YAML-formatted file. -# Declare variables to be passed into your templates. + ```console + helm dependency update azure-vote-front + ``` -replicaCount: 1 -backendName: azure-vote-backend-master -redis: - image: - registry: mcr.microsoft.com - repository: oss/bitnami/redis - tag: 6.0.8 - fullnameOverride: azure-vote-backend - auth: - enabled: false --image: - repository: myhelmacr.azurecr.io/azure-vote-front - pullPolicy: IfNotPresent - tag: "v1" -... -service: - type: LoadBalancer - port: 80 -... -``` +4. Update *azure-vote-front/values.yaml* with the following changes: -Add an `env` section to *azure-vote-front/templates/deployment.yaml* for passing the name of the *redis* deployment. --```yml -... - containers: - - name: {{ .Chart.Name }} - securityContext: - {{- toYaml .Values.securityContext | nindent 12 }} - image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - env: - - name: REDIS - value: {{ .Values.backendName }} -... -``` + * Add a *redis* section to set the image details, container port, and deployment name. + * Add a *backendName* for connecting the frontend portion to the *redis* deployment. + * Change *image.repository* to `<loginServer>/azure-vote-front`. + * Change *image.tag* to `v1`. + * Change *service.type* to *LoadBalancer*. ++ For example: ++ ```yaml + # Default values for azure-vote-front. + # This is a YAML-formatted file. + # Declare variables to be passed into your templates. ++ replicaCount: 1 + backendName: azure-vote-backend-master + redis: + image: + registry: mcr.microsoft.com + repository: oss/bitnami/redis + tag: 6.0.8 + fullnameOverride: azure-vote-backend + auth: + enabled: false ++ image: + repository: myhelmacr.azurecr.io/azure-vote-front + pullPolicy: IfNotPresent + tag: "v1" + ... + service: + type: LoadBalancer + port: 80 + ... + ``` ++5. 
Add an `env` section to *azure-vote-front/templates/deployment.yaml* for passing the name of the *redis* deployment. ++ ```yaml + ... + containers: + - name: {{ .Chart.Name }} + securityContext: + {{- toYaml .Values.securityContext | nindent 12 }} + image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} + env: + - name: REDIS + value: {{ .Values.backendName }} + ... + ``` ## Run your Helm chart -Install your application using your Helm chart using the `helm install` command. +1. Install your application with your Helm chart using the `helm install` command. -```console -helm install azure-vote-front azure-vote-front/ -``` + ```console + helm install azure-vote-front azure-vote-front/ + ``` -It takes a few minutes for the service to return a public IP address. Monitor progress using the `kubectl get service` command with the `--watch` argument. +2. It takes a few minutes for the service to return a public IP address. Monitor progress using the `kubectl get service` command with the `--watch` argument. -```console -$ kubectl get service azure-vote-front --watch -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -azure-vote-front LoadBalancer 10.0.18.228 <pending> 80:32021/TCP 6s -... -azure-vote-front LoadBalancer 10.0.18.228 52.188.140.81 80:32021/TCP 2m6s -``` + ```console + $ kubectl get service azure-vote-front --watch + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + azure-vote-front LoadBalancer 10.0.18.228 <pending> 80:32021/TCP 6s + ... + azure-vote-front LoadBalancer 10.0.18.228 52.188.140.81 80:32021/TCP 2m6s + ``` -Navigate to your application's load balancer in a browser using the `<EXTERNAL-IP>` to see the sample application. +3. Navigate to your application's load balancer in a browser using the `<EXTERNAL-IP>` to see the sample application. ## Delete the cluster ### [Azure CLI](#tab/azure-cli) -Use the [az group delete][az-group-delete] command to remove the resource group, the AKS cluster, the container registry, the container images stored in the ACR, and all related resources. +Use the [`az group delete`][az-group-delete] command to remove the resource group, the AKS cluster, the container registry, the container images stored in the ACR, and all related resources. ```azurecli-interactive-az group delete --name MyResourceGroup --yes --no-wait +az group delete --name myResourceGroup --yes --no-wait ``` ### [Azure PowerShell](#tab/azure-powershell) -Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, the AKS cluster, the container registry, the container images stored in the ACR, and all related resources. +Use the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet to remove the resource group, the AKS cluster, the container registry, the container images stored in the ACR, and all related resources. ```azurepowershell-interactive-Remove-AzResourceGroup -Name MyResourceGroup +Remove-AzResourceGroup -Name myResourceGroup ``` > [!NOTE]-> If the AKS cluster was created with system-assigned managed identity (default identity option used in this quickstart), the identity is managed by the platform and does not require removal. -> -> If the AKS cluster was created with service principal as the identity option instead, then when you delete the cluster, the service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. 
+> If the AKS cluster was created with system-assigned managed identity (default identity option used in this quickstart), the identity is managed by the platform and doesn't require removal. +> +> If the AKS cluster was created with service principal as the identity option instead, the service principal used by the AKS cluster isn't removed when you delete the cluster. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. ## Next steps -For more information about using Helm, see the Helm documentation. --> [!div class="nextstepaction"] -> [Helm documentation][helm-documentation] +For more information about using Helm, see the [Helm documentation][helm-documentation]. [azure-cli-install]: /cli/azure/install-azure-cli [azure-powershell-install]: /powershell/azure/install-az-ps [az-acr-create]: /cli/azure/acr#az_acr_create [new-azcontainerregistry]: /powershell/module/az.containerregistry/new-azcontainerregistry+[new-azakscluster]: /powershell/module/az.aks/new-azakscluster [az-acr-build]: /cli/azure/acr#az_acr_build [az-group-delete]: /cli/azure/group#az_group_delete [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials+[az-aks-create]: /cli/azure/aks#az_aks_create [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli [install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool |
aks | Trusted Access Feature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/trusted-access-feature.md | Title: Enable Azure resources to access Azure Kubernetes Service (AKS) clusters description: Learn how to use the Trusted Access feature to enable Azure resources to access Azure Kubernetes Service (AKS) clusters. Previously updated : 02/23/2023 Last updated : 03/03/2023 az aks trustedaccess rolebinding create --resource-group <AKS resource group> - az aks trustedaccess rolebinding create \ -g myResourceGroup \ --cluster-name myAKSCluster -n test-binding \--s /subscriptions/000-000-000-000-000/resourceGroups/myResourceGroup/providers/Microsoft.MachineLearningServices/workspaces/MyMachineLearning \+--source-resource-id /subscriptions/000-000-000-000-000/resourceGroups/myResourceGroup/providers/Microsoft.MachineLearningServices/workspaces/MyMachineLearning \ --roles Microsoft.Compute/virtualMachineScaleSets/test-node-reader,Microsoft.Compute/virtualMachineScaleSets/test-admin ``` |
api-management | Api Version Retirement Sep 2023 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/api-version-retirement-sep-2023.md | After 30 September 2023, if you prefer not to update your tools, scripts, and pr * **ARM, Bicep, or Terraform templates** - Update the template to use API version 2021-08-01 or later. -* **Azure CLI** - Run `az version -help` to check your version. If you're running version 2.38.0 or later, no action is required. Use the `az upgrade` command to upgrade the Azure CLI if necessary. For more information, see [How to update the Azure CLI](/cli/azure/update-azure-cli). +* **Azure CLI** - Run `az version` to check your version. If you're running version 2.38.0 or later, no action is required. Use the `az upgrade` command to upgrade the Azure CLI if necessary. For more information, see [How to update the Azure CLI](/cli/azure/update-azure-cli). * **Azure PowerShell** - Run `Get-Module -ListAvailable -Name Az` to check your version. If you're running version 8.1.0 or later, no action is required. Use `Update-Module -Name Az -Repository PSGallery` to update the module if necessary. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps). After 30 September 2023, if you prefer not to update your tools, scripts, and pr ## Next steps -See all [upcoming breaking changes and feature retirements](overview.md). +See all [upcoming breaking changes and feature retirements](overview.md). |
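As a quick companion to the Azure PowerShell check in the row above, here's a minimal sketch using only the cmdlets the article names; the 8.1.0 threshold comes from the article itself.

```azurepowershell-interactive
# Show the newest installed Az module version; 8.1.0 or later needs no action
Get-Module -ListAvailable -Name Az | Sort-Object -Property Version -Descending | Select-Object -First 1

# Update the Az module from the PowerShell Gallery if it's older
Update-Module -Name Az -Repository PSGallery
```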
app-service | App Service Asp Net Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-asp-net-migration.md | These tools are developed to support different kinds of scenarios, focused on di ## Migrate from multiple servers at-scale > [!NOTE]-> [Learn how to migrate .NET apps to App Service using the .NET migration tutorial.](../migrate/tutorial-migrate-webapps.md) +> [Learn how to migrate .NET apps to App Service using the .NET migration tutorial.](../migrate/tutorial-modernize-asp-net-appservice-code.md) > Azure Migrate recently announced at-scale, agentless discovery, and assessment of ASP.NET web apps. You can now easily discover ASP.NET web apps running on Internet Information Services (IIS) servers in a VMware environment and assess them for migration to Azure App Service. Assessments will help you determine the web app migration readiness, migration blockers, remediation guidance, recommended SKU, and hosting costs. At-scale migration resources for are found below. Once you have successfully assessed readiness, you should proceed with migration of ASP.NET web apps to Azure App Services. -There are existing tools which enable migration of a standalone ASP.Net web app or multiple ASP.NET web apps hosted on a single IIS server as explained in [Migrate .NET apps to Azure App Service](../migrate/tutorial-migrate-webapps.md). With introduction of At-Scale or bulk migration feature integrated with Azure Migrate we are now opening up the possibilities to migrate multiple ASP.NET application hosted on multiple on-premises IIS servers. +There are existing tools which enable migration of a standalone ASP.NET web app or multiple ASP.NET web apps hosted on a single IIS server as explained in [Migrate .NET apps to Azure App Service](../migrate/tutorial-modernize-asp-net-appservice-code.md). With introduction of At-Scale or bulk migration feature integrated with Azure Migrate we are now opening up the possibilities to migrate multiple ASP.NET application hosted on multiple on-premises IIS servers. Azure Migrate provides at-scale, agentless discovery, and assessment of ASP.NET web apps. You can discover ASP.NET web apps running on Internet Information Services (IIS) servers in a VMware environment and assess them for migration to Azure App Service. Assessments will help you determine the web app migration readiness, migration blockers, remediation guidance, recommended SKU, and hosting costs. At-scale migration resources for are found below. Bulk migration provides the following key capabilities: | [Create an Azure App Service assessment](../migrate/how-to-create-azure-app-service-assessment.md) | | [Tutorial to assess web apps for migration to Azure App Service](../migrate/tutorial-assess-webapps.md) | | [Discover software inventory on on-premises servers with Azure Migrate](../migrate/how-to-discover-applications.md) |-| [Migrate .NET apps to App Service](../migrate/tutorial-migrate-webapps.md) | +| [Migrate .NET apps to App Service](../migrate/tutorial-modernize-asp-net-appservice-code.md) | | **Blog** | | [Discover and assess ASP.NET apps at-scale with Azure Migrate](https://azure.microsoft.com/blog/discover-and-assess-aspnet-apps-atscale-with-azure-migrate/) | | **FAQ** | |
app-service | App Service Hybrid Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-hybrid-connections.md | ms.assetid: 66774bde-13f5-45d0-9a70-4e9536a4f619 Last updated 2/10/2022 -+ # Azure App Service Hybrid Connections |
app-service | App Service Web Tutorial Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md | ms.assetid: a820e400-06af-4852-8627-12b3db4a8e70 ms.devlang: csharp Last updated 01/31/2023-+ # Tutorial: Host a RESTful API with CORS in Azure App Service |
app-service | Deploy Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-best-practices.md | keywords: azure app service, web app, deploy, deployment, pipelines, build ms.assetid: bb51e565-e462-4c60-929a-2ff90121f41d Last updated 07/31/2019+ # Deployment Best Practices |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md | |
app-service | Ip Address Change Inbound | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/ip-address-change-inbound.md | description: If your inbound IP address is going to be changed, learn what to do Last updated 06/28/2018-+ # How to prepare for an inbound IP address change |
app-service | Ip Address Change Outbound | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/ip-address-change-outbound.md | description: If your outbound IP address is going to be changed, learn what to d Last updated 06/28/2018-+ # How to prepare for an outbound IP address change |
app-service | Ip Address Change Ssl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/ip-address-change-ssl.md | description: If your TLS/SSL IP address is going to be changed, learn what to do Last updated 06/28/2018-+ # How to prepare for a TLS/SSL IP address change |
app-service | Manage Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-disaster-recovery.md | description: Learn how Azure App Service helps you maintain business continuity Last updated 06/09/2020-+ #Customer intent: As an Azure service administrator, I want to recover my App Service app from a region-wide failure in Azure. |
app-service | Networking Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking-features.md | ms.assetid: 5c61eed1-1ad1-4191-9f71-906d610ee5b7 Last updated 01/23/2023 + # App Service networking features |
app-service | Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md | ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Last updated 02/09/2023 + # Using Private Endpoints for App Service apps |
app-service | Operating System Functionality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/operating-system-functionality.md | description: Learn about the OS functionality in Azure App Service on Windows. F ms.assetid: 39d5514f-0139-453a-b52e-4a1c06d8d914 Last updated 01/21/2022-+ # Operating system functionality on Azure App Service |
app-service | Overview Access Restrictions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-access-restrictions.md | |
app-service | Overview Authentication Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md | ms.assetid: b7151b57-09e5-4c77-a10c-375a262f17e5 Last updated 02/03/2023 -+ # Authentication and authorization in Azure App Service and Azure Functions |
app-service | Overview Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md | keywords: app service, azure app service, diagnostics, support, web app, trouble Last updated 10/18/2019-+ # Azure App Service diagnostics overview |
app-service | Overview Hosting Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-hosting-plans.md | keywords: app service, azure app service, scale, scalable, scalability, app serv ms.assetid: dea3f41e-cf35-481b-a6bc-33d7fc9d01b1 Last updated 10/01/2020-+ # Azure App Service plan overview |
app-service | Overview Inbound Outbound Ips | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-inbound-outbound-ips.md | Title: Inbound/Outbound IP addresses description: Learn how inbound and outbound IP addresses are used in Azure App Service, when they change, and how to find the addresses for your app.+ Last updated 08/25/2020-+ |
app-service | Overview Local Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-local-cache.md | tags: optional ms.assetid: e34d405e-c5d4-46ad-9b26-2a1eda86ce80 Last updated 03/04/2016-+ # Azure App Service Local Cache overview |
app-service | Overview Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-monitoring.md | keywords: app service, azure app service, monitoring, diagnostic settings, suppo Last updated 02/25/2022 + # Azure App Service monitoring overview |
app-service | Overview Patch Os Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-patch-os-runtime.md | Title: OS and runtime patching cadence description: Learn how Azure App Service updates the OS and runtimes, what runtimes and patch level your apps has, and how you can get update announcements. Last updated 01/21/2021-+ |
app-service | Overview Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-security.md | description: Learn about how App Service helps secure your app, and how you can keywords: azure app service, web app, mobile app, api app, function app, security, secure, secured, compliance, compliant, certificate, certificates, https, ftps, tls, trust, encryption, encrypt, encrypted, ip restriction, authentication, authorization, authn, autho, msi, managed service identity, managed identity, secrets, secret, patching, patch, patches, version, isolation, network isolation, ddos, mitm Last updated 08/24/2018-+ # Security in Azure App Service |
app-service | Overview Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md | |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md | description: Learn how Azure App Service helps you develop and host web applicat ms.assetid: 94af2caf-a2ec-4415-a097-f60694b860b3 Last updated 07/21/2021-+ # App Service overview |
app-service | Reference Dangling Subdomain Prevention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-dangling-subdomain-prevention.md | description: Describes options for dangling subdomain prevention on Azure App Se Last updated 10/14/2022 + |
app-service | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md | description: Lists Azure Policy Regulatory Compliance controls available for Azu Last updated 02/14/2023 -+ # Azure Policy Regulatory Compliance controls for Azure App Service |
app-service | Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-recommendations.md | |
application-gateway | Configuration Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md | Subnet Size /24 = 256 IP addresses - 5 reserved from the platform = 251 availabl > [!TIP] > It is possible to change the subnet of an existing Application Gateway within the same virtual network. You can do this using Azure PowerShell or Azure CLI. For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway) +### DNS Servers for name resolution +The virtual network resource supports [DNS server](../virtual-network/manage-virtual-network.md#view-virtual-networks-and-settings-using-the-azure-portal) configuration, allowing you to choose between Azure-provided default or Custom DNS servers. The instances of your application gateway also honor this DNS configuration for any name resolution. Thus, after you change this setting, you must restart ([Stop](/powershell/module/az.network/Stop-AzApplicationGateway) and [Start](/powershell/module/az.network/start-azapplicationgateway)) your application gateway for these changes to take effect on the instances. + ### Virtual network permission Since the application gateway resource is deployed inside a virtual network, we also perform a check to verify the permission on the provided virtual network resource. This validation is performed during both creation and management operations. You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify the users or service principals that operate application gateways also have at least **Microsoft.Network/virtualNetworks/subnets/join/action** permission on the Virtual Network or Subnet. |
automation | Automation Hrw Run Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md | There are two ways to use the Managed Identities in Hybrid Runbook Worker script > This will **NOT** work in an Automation Account which has been configured with an Automation account Managed Identity. As soon as the Automation account Managed Identity is enabled, it is no longer possible to use the Arc Managed Identity, and it is then **only** possible to use the Automation Account System-Assigned Managed Identity as mentioned in option 1 above. >[!NOTE]->By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence?view=azps-7.3.2&preserve-view=true). -For instance, a runbook with `Get-AzVM` can return all the VMs in the subscription with no call to `Connect-AzAccount`, and the user would be able to access Azure resources without having to authenticate within that runbook. You can disable context autosave in Azure PowerShell, as detailed [here](/powershell/azure/context-persistence?view=azps-7.3.2&preserve-view=true#save-azure-contexts-across-powershell-sessions). +>By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence). For instance, a runbook with `Get-AzVM` can return all the VMs in the subscription with no call to `Connect-AzAccount`, and the user would be able to access Azure resources without having to authenticate within that runbook. You can disable context autosave in Azure PowerShell, as detailed [here](/powershell/azure/context-persistence#save-azure-contexts-across-powershell-sessions). ### Use runbook authentication with Hybrid Worker Credentials |
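To illustrate the context-autosave guidance in the note above, here's a minimal runbook preamble sketch; it assumes the Hybrid Runbook Worker can authenticate with a system-assigned managed identity, as the article discusses.

```powershell
# Keep the Azure context in this process only, so it isn't persisted on the worker's disk
Disable-AzContextAutosave -Scope Process | Out-Null

# Authenticate explicitly rather than relying on a previously saved context
# (assumes a system-assigned managed identity is available to the worker)
Connect-AzAccount -Identity

# Calls like this now run under the identity connected above, not a leftover context
Get-AzVM
```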
automation | Migrate Existing Agent Based Hybrid Worker To Extension Based Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md | You can use the following PowerShell cmdlets to manage Hybrid Runbook Worker and | PowerShell cmdlet | Description | | -- | -- |-|[`Get-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/get-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Gets Hybrid Runbook Worker group| -|[`Remove-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/remove-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Removes Hybrid Runbook Worker group| -|[`Set-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/set-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Updates Hybrid Worker group with Hybrid Worker credentials| -|[`New-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/new-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Creates new Hybrid Runbook Worker group| -|[`Get-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/get-azautomationhybridrunbookworker?view=azps-9.1.0) | Gets Hybrid Runbook Worker| -|[`Move-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/move-azautomationhybridrunbookworker?view=azps-9.1.0) | Moves Hybrid Worker from one group to other| -|[`New-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/new-azautomationhybridrunbookworker?view=azps-9.1.0) | Creates new Hybrid Runbook Worker| -|[`Remove-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/remove-azautomationhybridrunbookworker?view=azps-9.1.0)| Removes Hybrid Runbook Worker| +|[`Get-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/get-azautomationhybridrunbookworkergroup) | Gets Hybrid Runbook Worker group| +|[`Remove-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/remove-azautomationhybridrunbookworkergroup) | Removes Hybrid Runbook Worker group| +|[`Set-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/set-azautomationhybridrunbookworkergroup) | Updates Hybrid Worker group with Hybrid Worker credentials| +|[`New-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/new-azautomationhybridrunbookworkergroup) | Creates new Hybrid Runbook Worker group| +|[`Get-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/get-azautomationhybridrunbookworker) | Gets Hybrid Runbook Worker| +|[`Move-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/move-azautomationhybridrunbookworker) | Moves Hybrid Worker from one group to other| +|[`New-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/new-azautomationhybridrunbookworker) | Creates new Hybrid Runbook Worker| +|[`Remove-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/remove-azautomationhybridrunbookworker)| Removes Hybrid Runbook Worker| After creating new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker. |
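A minimal usage sketch for two of the cmdlets in the table above; the resource group, Automation account, and worker group names are placeholders, and the parameter names reflect the current Az.Automation module rather than anything stated in the article.

```azurepowershell-interactive
# Create a new Hybrid Runbook Worker group (placeholder names)
New-AzAutomationHybridRunbookWorkerGroup -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -Name "myHybridWorkerGroup"

# List the workers registered in that group
Get-AzAutomationHybridRunbookWorker -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" -HybridRunbookWorkerGroupName "myHybridWorkerGroup"
```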
automation | Migrate Run As Accounts Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md | Title: Migrate from a Run As account to Managed identities description: This article describes how to migrate from a Run As account to managed identities in Azure Automation. Previously updated : 02/15/2023 Last updated : 03/03/2023 The following steps include an example to show how a graphical runbook that uses 1. Replace the Run As connection that uses `AzureRunAsConnection` and the connection asset that internally uses the PowerShell `Get-AutomationConnection` cmdlet with the `Connect-AzAccount` cmdlet. -1. Add identity support for use in the runbook by using the `Connect-AzAccount` activity from the `Az.Accounts` cmdlet that uses the PowerShell code to connect to the managed identity. +1. Add identity support for use in the runbook by adding a new code activity as mentioned in the following step, which leverages the `Connect-AzAccount` cmdlet to connect to the Managed Identity. :::image type="content" source="./media/migrate-run-as-account-managed-identity/add-functionality-inline.png" alt-text="Screenshot of adding functionality to a graphical runbook." lightbox="./media/migrate-run-as-account-managed-identity/add-functionality-expanded.png"::: For example, in the runbook **Start Azure V2 VMs** in the runbook gallery, you m For more information, see the sample runbook name **AzureAutomationTutorialWithIdentityGraphical** that's created with the Automation account. +> [!NOTE] +> AzureRM PowerShell modules are retiring on 29 February 2024. If you are using AzureRM PowerShell modules in Graphical runbooks, you must upgrade them to use Az PowerShell modules. [Learn more](https://learn.microsoft.com/powershell/azure/migrate-from-azurerm-to-az?view=azps-9.4.0). ## Next steps |
automation | Update Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md | If updates run locally, try removing and reinstalling the agent on the machine b ### I know updates are available, but they don't show as available on my machines -This often happens if machines are configured to get updates from WSUS or Microsoft Endpoint Configuration Manager but WSUS and Configuration Manager haven't approved the updates. +This often happens if machines are configured to get updates from WSUS or Microsoft Configuration Manager but WSUS and Configuration Manager haven't approved the updates. You can check to see if the machines are configured for WSUS and SCCM by cross-referencing the `UseWUServer` registry key to the registry keys in the [Configuring Automatic Updates by Editing the Registry section of this article](https://support.microsoft.com/help/328010/how-to-configure-automatic-updates-by-using-group-policy-or-registry-s). |
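For the cross-reference described above, a hedged sketch of reading those policy values on an affected machine; the paths are the standard Windows Update policy locations, and the properties are simply absent if no WSUS/Configuration Manager policy applies.

```powershell
# UseWUServer = 1 means the machine is pointed at WSUS/Configuration Manager for updates
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU' `
    -Name UseWUServer -ErrorAction SilentlyContinue

# The WSUS endpoints themselves live one key up
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -ErrorAction SilentlyContinue |
    Select-Object WUServer, WUStatusServer
```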
automation | Mecmintegration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/mecmintegration.md | Title: Integrate Azure Automation Update Management with Microsoft Endpoint Configuration Manager -description: This article tells how to configure Microsoft Endpoint Configuration Manager with Update Management to deploy software updates to manager clients. + Title: Integrate Azure Automation Update Management with Microsoft Configuration Manager +description: This article tells how to configure Microsoft Configuration Manager with Update Management to deploy software updates to managed clients. Last updated 07/14/2021 -# Integrate Update Management with Microsoft Endpoint Configuration Manager +# Integrate Update Management with Microsoft Configuration Manager -Customers who have invested in Microsoft Endpoint Configuration Manager to manage PCs, servers, and mobile devices also rely on its strength and maturity in managing software updates as part of their software update management (SUM) cycle. +Customers who have invested in Microsoft Configuration Manager to manage PCs, servers, and mobile devices also rely on its strength and maturity in managing software updates as part of their software update management (SUM) cycle. -You can report and update managed Windows servers by creating and pre-staging software update deployments in Microsoft Endpoint Configuration Manager, and get detailed status of completed update deployments using [Update Management](overview.md). If you use Microsoft Endpoint Configuration Manager for update compliance reporting, but not for managing update deployments with your Windows servers, you can continue reporting to Microsoft Endpoint Configuration Manager while security updates are managed with Azure Automation Update Management. +You can report and update managed Windows servers by creating and pre-staging software update deployments in Microsoft Configuration Manager, and get detailed status of completed update deployments using [Update Management](overview.md). If you use Microsoft Configuration Manager for update compliance reporting, but not for managing update deployments with your Windows servers, you can continue reporting to Microsoft Configuration Manager while security updates are managed with Azure Automation Update Management. >[!NOTE]->While Update Management supports update assessment and patching of Windows Server 2008 R2, it does not support clients managed by Microsoft Endpoint Configuration Manager running this operating system. +>While Update Management supports update assessment and patching of Windows Server 2008 R2, it does not support clients managed by Microsoft Configuration Manager running this operating system. ## Prerequisites * You must have [Azure Automation Update Management](overview.md) added to your Automation account.-* Windows servers currently managed by your Microsoft Endpoint Configuration Manager environment also need to report to the Log Analytics workspace that also has Update Management enabled. -* This feature is enabled in Microsoft Endpoint Configuration Manager current branch version 1606 and higher. To integrate your Microsoft Endpoint Configuration Manager central administration site or a standalone primary site with Azure Monitor logs and import collections, review [Connect Configuration Manager to Azure Monitor logs](../../azure-monitor/logs/collect-sccm.md). 
-* Windows agents must either be configured to communicate with a Windows Server Update Services (WSUS) server or have access to Microsoft Update if they don't receive security updates from Microsoft Endpoint Configuration Manager. +* Windows servers currently managed by your Microsoft Configuration Manager environment also need to report to the Log Analytics workspace that also has Update Management enabled. +* This feature is enabled in Microsoft Configuration Manager current branch version 1606 and higher. To integrate your Microsoft Configuration Manager central administration site or a standalone primary site with Azure Monitor logs and import collections, review [Connect Configuration Manager to Azure Monitor logs](../../azure-monitor/logs/collect-sccm.md). +* Windows agents must either be configured to communicate with a Windows Server Update Services (WSUS) server or have access to Microsoft Update if they don't receive security updates from Microsoft Configuration Manager. -How you manage clients hosted in Azure IaaS with your existing Microsoft Endpoint Configuration Manager environment primarily depends on the connection you have between Azure datacenters and your infrastructure. This connection affects any design changes you may need to make to your Microsoft Endpoint Configuration Manager infrastructure and related cost to support those necessary changes. To understand what planning considerations you need to evaluate before proceeding, review [Configuration Manager on Azure - Frequently Asked Questions](/configmgr/core/understand/configuration-manager-on-azure#networking). +How you manage clients hosted in Azure IaaS with your existing Microsoft Configuration Manager environment primarily depends on the connection you have between Azure datacenters and your infrastructure. This connection affects any design changes you may need to make to your Microsoft Configuration Manager infrastructure and related cost to support those necessary changes. To understand what planning considerations you need to evaluate before proceeding, review [Configuration Manager on Azure - Frequently Asked Questions](/configmgr/core/understand/configuration-manager-on-azure#networking). -## Manage software updates from Microsoft Endpoint Configuration Manager +## Manage software updates from Microsoft Configuration Manager -Perform the following steps if you are going to continue managing update deployments from Microsoft Endpoint Configuration Manager. Azure Automation connects to Microsoft Endpoint Configuration Manager to apply updates to the client computers connected to your Log Analytics workspace. Update content is available from the client computer cache as if the deployment were managed by Microsoft Endpoint Configuration Manager. +Perform the following steps if you are going to continue managing update deployments from Microsoft Configuration Manager. Azure Automation connects to Microsoft Configuration Manager to apply updates to the client computers connected to your Log Analytics workspace. Update content is available from the client computer cache as if the deployment were managed by Microsoft Configuration Manager. -1. Create a software update deployment from the top-level site in your Microsoft Endpoint Configuration Manager hierarchy using the process described in [Deploy software updates](/configmgr/sum/deploy-use/deploy-software-updates). 
The only setting that must be configured differently from a standard deployment is the **Installation deadline** option in Endpoint Configuration Manager. It needs to be set to a future date to ensure only Automation Update Management initiates the update deployment. This setting is described under [Step 4, Deploy the software update group](/configmgr/sum/deploy-use/manually-deploy-software-updates#BKMK_4DeployUpdateGroup). +1. Create a software update deployment from the top-level site in your Microsoft Configuration Manager hierarchy using the process described in [Deploy software updates](/configmgr/sum/deploy-use/deploy-software-updates). The only setting that must be configured differently from a standard deployment is the **Installation deadline** option in Configuration Manager. It needs to be set to a future date to ensure only Automation Update Management initiates the update deployment. This setting is described under [Step 4, Deploy the software update group](/configmgr/sum/deploy-use/manually-deploy-software-updates#BKMK_4DeployUpdateGroup). -2. In Endpoint Configuration Manager, configure the **User notifications** option to prevent displaying notifications on the target machines. We recommend setting the **Hide in Software Center and all notifications** option to avoid a logged on user from being notified of a scheduled update deployment and manually deploying those updates. This setting is described under [Step 4, Deploy the software update group](/configmgr/sum/deploy-use/manually-deploy-software-updates#BKMK_4DeployUpdateGroup). +2. In Configuration Manager, configure the **User notifications** option to prevent displaying notifications on the target machines. We recommend setting the **Hide in Software Center and all notifications** option to avoid a logged on user from being notified of a scheduled update deployment and manually deploying those updates. This setting is described under [Step 4, Deploy the software update group](/configmgr/sum/deploy-use/manually-deploy-software-updates#BKMK_4DeployUpdateGroup). -3. In Azure Automation, select **Update Management**. Create a new deployment following the steps described in [Creating an Update Deployment](deploy-updates.md#schedule-an-update-deployment) and select **Imported groups** on the **Type** dropdown to select the appropriate Microsoft Endpoint Configuration Manager collection. Keep in mind the following important points: +3. In Azure Automation, select **Update Management**. Create a new deployment following the steps described in [Creating an Update Deployment](deploy-updates.md#schedule-an-update-deployment) and select **Imported groups** on the **Type** dropdown to select the appropriate Microsoft Configuration Manager collection. Keep in mind the following important points: - a. If a maintenance window is defined on the selected Microsoft Endpoint Configuration Manager device collection, members of the collection honor it instead of the **Duration** setting defined in the scheduled deployment. + a. If a maintenance window is defined on the selected Microsoft Configuration Manager device collection, members of the collection honor it instead of the **Duration** setting defined in the scheduled deployment. b. Members of the target collection must have a connection to the Internet (either direct, through a proxy server or through the Log Analytics gateway). 
After completing the update deployment through Azure Automation, the target comp ## Manage software updates from Azure Automation -To manage updates for Windows Server VMs that are Microsoft Endpoint Configuration Manager clients, you need to configure client policy to disable the Software Update Management feature for all clients managed by Update Management. By default, client settings target all devices in the hierarchy. For more information about this policy setting and how to configure it, review [How to configure client settings in Configuration Manager](/configmgr/core/clients/deploy/configure-client-settings). +To manage updates for Windows Server VMs that are Microsoft Configuration Manager clients, you need to configure client policy to disable the Software Update Management feature for all clients managed by Update Management. By default, client settings target all devices in the hierarchy. For more information about this policy setting and how to configure it, review [How to configure client settings in Configuration Manager](/configmgr/core/clients/deploy/configure-client-settings). -After performing this configuration change, you create a new deployment following the steps described in [Creating an Update Deployment](deploy-updates.md#schedule-an-update-deployment) and select **Imported groups** on the **Type** drop-down to select the appropriate Microsoft Endpoint Configuration Manager collection. +After performing this configuration change, you create a new deployment following the steps described in [Creating an Update Deployment](deploy-updates.md#schedule-an-update-deployment) and select **Imported groups** on the **Type** drop-down to select the appropriate Microsoft Configuration Manager collection. ## Next steps |
automation | Operating System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md | The following table lists operating systems not supported by Update Management: |Operating system |Notes | |||-|Windows client | Client operating systems (such as Windows 7 and Windows 10) aren't supported.<br>For Azure Virtual Desktop, the recommended method to manage updates is [Microsoft Endpoint Configuration Manager](../../virtual-desktop/configure-automatic-updates.md) for Windows 10 client machine patch management. | +|Windows client | Client operating systems (such as Windows 7 and Windows 10) aren't supported.<br>For Azure Virtual Desktop, the recommended method to manage updates is [Microsoft Configuration Manager](../../virtual-desktop/configure-automatic-updates.md) for Windows 10 client machine patch management. | |Windows Server 2016 Nano Server | Not supported. | |Azure Kubernetes Service Nodes | Not supported. Use the patching process described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](../../aks/node-updates-kured.md)| The section describes operating system-specific requirements. For additional gui Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Microsoft Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. -You can use Update Management with Microsoft Endpoint Configuration Manager. To learn more about integration scenarios, see [Integrate Update Management with Windows Endpoint Configuration Manager](mecmintegration.md). The [Log Analytics agent for Windows](../../azure-monitor/agents/agent-windows.md) is required for Windows servers managed by sites in your Configuration Manager environment. +You can use Update Management with Microsoft Configuration Manager. To learn more about integration scenarios, see [Integrate Update Management with Microsoft Configuration Manager](mecmintegration.md). The [Log Analytics agent for Windows](../../azure-monitor/agents/agent-windows.md) is required for Windows servers managed by sites in your Configuration Manager environment. By default, Windows VMs that are deployed from Azure Marketplace are set to receive automatic updates from Windows Update Service. This behavior doesn't change when you add Windows VMs to your workspace. If you don't actively manage updates by using Update Management, the default behavior (to automatically apply updates) applies. |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md | The following table summarizes the supported connected sources with Update Manag | Linux |Yes |Update Management collects information about system updates from Linux machines with the Log Analytics agent and installation of required updates on supported distributions.<br> Machines need to report to a local or remote repository. | | Operations Manager management group |Yes |Update Management collects information about software updates from agents in a connected management group.<br/><br/>A direct connection from the Operations Manager agent to Azure Monitor logs isn't required. Log data is forwarded from the management group to the Log Analytics workspace. | -The machines assigned to Update Management report how up to date they are based on what source they are configured to synchronize with. Windows machines need to be configured to report to either [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or [Microsoft Update](https://www.update.microsoft.com), and Linux machines need to be configured to report to a local or public repository. You can also use Update Management with Microsoft Endpoint Configuration Manager, and to learn more see [Integrate Update Management with Windows Endpoint Configuration Manager](mecmintegration.md). +The machines assigned to Update Management report how up to date they are based on what source they are configured to synchronize with. Windows machines need to be configured to report to either [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or [Microsoft Update](https://www.update.microsoft.com), and Linux machines need to be configured to report to a local or public repository. You can also use Update Management with Microsoft Configuration Manager; to learn more, see [Integrate Update Management with Microsoft Configuration Manager](mecmintegration.md). If the Windows Update Agent (WUA) on the Windows machine is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft Update, the results might differ from what Microsoft Update shows. This behavior is the same for Linux machines that are configured to report to a local repo instead of a public repo. On a Windows machine, the compliance scan is run every 12 hours by default. For a Linux machine, the compliance scan is performed every hour by default. If the Log Analytics agent is restarted, a compliance scan is started within 15 minutes. When a machine completes a scan for update compliance, the agent forwards the information in bulk to Azure Monitor Logs. sudo yum -q --security check-update ## Integrate Update Management with Configuration Manager -Customers who have invested in Microsoft Endpoint Configuration Manager for managing PCs, servers, and mobile devices also rely on the strength and maturity of Configuration Manager to help manage software updates. To learn how to integrate Update Management with Configuration Manager, see [Integrate Update Management with Windows Endpoint Configuration Manager](mecmintegration.md). +Customers who have invested in Microsoft Configuration Manager for managing PCs, servers, and mobile devices also rely on the strength and maturity of Configuration Manager to help manage software updates. 
To learn how to integrate Update Management with Configuration Manager, see [Integrate Update Management with Windows Configuration Manager](mecmintegration.md). ## Third-party updates on Windows |
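Because Update Management forwards compliance results to Azure Monitor Logs, you can inspect them with a Kusto query against the `Update` table. A hedged sketch (the table and column names are the ones Update Management populates; the workspace GUID is a placeholder, and `az monitor log-analytics query` requires the `log-analytics` CLI extension):

```azurecli
az monitor log-analytics query \
    --workspace <workspace-guid> \
    --analytics-query "Update | where UpdateState == 'Needed' | summarize MissingUpdates = count() by Computer | order by MissingUpdates desc"
```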
azure-arc | Automated Integration Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md | At a high-level, the launcher performs the following sequence of steps: 3. Perform CRD metadata scan to discover existing Arc and Arc Data Services Custom Resources 4. Clean up any existing Custom Resources in Kubernetes, and subsequent resources in Azure. If there's any mismatch between the credentials in `.test.env` and the resources existing in the cluster, quit. 5. Generate a unique set of environment variables based on timestamp for Arc Cluster name, Data Controller and Custom Location/Namespace. Prints out the environment variables, obfuscating sensitive values (e.g. Service Principal Password etc.)-6. a. For Direct Mode - Onboard the Cluster to Azure Arc, then deploys the Controller via the [unified experience](create-data-controller-direct-cli.md?tabs=linux#deployunified-experience) +6. a. For Direct Mode - Onboard the Cluster to Azure Arc, then deploy the controller. + b. For Indirect Mode: deploy the Data Controller 7. Once the Data Controller is `Ready`, generate a set of Azure CLI ([`az arcdata dc debug`](/cli/azure/arcdata/dc/debug?view=azure-cli-latest&preserve-view=true)) logs and store them locally, labeled as `setup-complete` - as a baseline. 8. Use the `TESTS_DIRECT/INDIRECT` environment variable from `.test.env` to launch a set of parallelized Sonobuoy test runs based on a space-separated array (`TESTS_(IN)DIRECT`). These runs execute in a new `sonobuoy` namespace, using an `arc-sb-plugin` pod that contains the Pytest validation tests. |
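For step 8, the launcher expects the test lists as space-separated values. A hypothetical `.test.env` excerpt (the variable names come from the step above; the suite names are placeholders):

```console
TESTS_DIRECT="sql-mi-direct postgres-direct"
TESTS_INDIRECT="sql-mi-indirect"
```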
azure-arc | Configure Transparent Data Encryption Manually | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-manually.md | This article describes how to enable transparent data encryption on a database c Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created and be connected to it. -- [An Azure Arc-enabled SQL Managed Instance created](./create-sql-managed-instance.md)+- [Create an Azure Arc-enabled SQL Managed Instance](./create-sql-managed-instance.md) - [Connect to Azure Arc-enabled SQL Managed Instance](./connect-managed-instance.md) ## Turn on transparent data encryption on a database in the managed instance |
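For orientation, enabling TDE on a SQL Server-based database generally follows the standard master key, certificate, and database encryption key sequence. A minimal sketch with placeholder names, not the article's exact procedure:

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE MyTdeCert WITH SUBJECT = 'TDE certificate';

USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE MyTdeCert;
ALTER DATABASE MyDatabase SET ENCRYPTION ON;
```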
azure-arc | Configure Transparent Data Encryption Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-sql-managed-instance.md | Turning on the TDE feature does the following: Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created and be connected to it. -- [An Azure Arc-enabled SQL Managed Instance created](./create-sql-managed-instance.md)+- [Create an Azure Arc-enabled SQL Managed Instance](./create-sql-managed-instance.md) - [Connect to Azure Arc-enabled SQL Managed Instance](./connect-managed-instance.md) ## Limitations |
azure-arc | Create Data Controller Direct Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-azure-portal.md | The progress of Azure Arc data controller deployment can be monitored as follows ## Next steps -[Create an Azure Arc-enabled SQL managed instance](create-sql-managed-instance.md) +[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md) [Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) |
azure-arc | Create Data Controller Direct Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md | Creating an Azure Arc data controller in direct connectivity mode involves the f 1. Create a custom location. 1. Create the data controller. -You can create them individually or in a unified experience. --## Deploy - unified experience --In the unified experience, you can create the Arc data controller extension, custom location, and Arc data controller all in one command as follows: -+Create the Arc data controller extension, custom location, and Arc data controller all in one command as follows: ##### [Linux](#tab/linux) az arcdata dc create --name arc-dc1 --resource-group $ENV:resourceGroup --custom -## Deploy - individual experience - -### Step 1: Create an Azure Arc-enabled data services extension --Use the k8s-extension CLI to create a data services extension. --#### Set environment variables --Set the following environment variables, which will be then used in later steps. --Following are two sets of environment variables. The first set of variables identifies your Azure subscription, resource group, cluster name, location, extension, and namespace. The second defines credentials to access the metrics and logs dashboards. --The environment variables include passwords for log and metric services. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters. ---##### [Linux](#tab/linux) --```console -## variables for Azure subscription, resource group, cluster name, location, extension, and namespace. -export subscription=<Your subscription ID> -export resourceGroup=<Your resource group> -export clusterName=<name of your connected Kubernetes cluster> -export location=<Azure location> -export adsExtensionName="<extension name>" -export namespace="<namespace>" -## variables for logs and metrics dashboard credentials -export AZDATA_LOGSUI_USERNAME=<username for Kibana dashboard> -export AZDATA_LOGSUI_PASSWORD=<password for Kibana dashboard> -export AZDATA_METRICSUI_USERNAME=<username for Grafana dashboard> -export AZDATA_METRICSUI_PASSWORD=<password for Grafana dashboard> -``` --##### [Windows (PowerShell)](#tab/windows) --``` PowerShell -## variables for Azure location, extension and namespace -$ENV:subscription="<Your subscription ID>" -$ENV:resourceGroup="<Your resource group>" -$ENV:clusterName="<name of your connected Kubernetes cluster>" -$ENV:location="<Azure location>" -$ENV:adsExtensionName="<name of Data controller extension" -$ENV:namespace="<namespace where you will deploy the extension and data controller>" -## variables for Metrics and Monitoring dashboard credentials -$ENV:AZDATA_LOGSUI_USERNAME="<username for Kibana dashboard>" -$ENV:AZDATA_LOGSUI_PASSWORD="<password for Kibana dashboard>" -$ENV:AZDATA_METRICSUI_USERNAME="<username for Grafana dashboard>" -$ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>" -``` -- --#### Create the Arc data services extension --The following command creates the Arc data services extension. 
--##### [Linux](#tab/linux) --```azurecli -az k8s-extension create --cluster-name ${clusterName} --resource-group ${resourceGroup} --name ${adsExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --auto-upgrade-minor-version false --scope cluster --release-namespace ${namespace} --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper -az k8s-extension show --resource-group ${resourceGroup} --cluster-name ${clusterName} --name ${adsExtensionName} --cluster-type connectedclusters -``` --##### [Windows (PowerShell)](#tab/windows) --```azurecli -az k8s-extension create --cluster-name $ENV:clusterName --resource-group $ENV:resourceGroup --name $ENV:adsExtensionName --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --auto-upgrade-minor-version false --scope cluster --release-namespace $ENV:namespace --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper -az k8s-extension show --resource-group $ENV:resourceGroup --cluster-name $ENV:clusterName --name $ENV:adsExtensionName --cluster-type connectedclusters -``` ----##### Deploy Azure Arc data services extension using private container registry and credentials --Use the below command if you are deploying from your private repository: --```azurecli -az k8s-extension create --cluster-name "<connected cluster name>" --resource-group "<resource group>" --name "<extension name>" --cluster-type connectedClusters --auto-upgrade false --auto-upgrade-minor-version false --extension-type microsoft.arcdataservices --scope cluster --release-namespace "<namespace>" --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper --config imageCredentials.registry=<registry info> --config imageCredentials.username=<username> --config systemDefaultValues.image=<registry/repo/arc-bootstrapper:<imagetag>> --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug -``` --For example: --```azurecli -az k8s-extension create --cluster-name "my-connected-cluster" --resource-group "my-resource-group" --name "arc-data-services" --cluster-type connectedClusters --auto-upgrade false --auto-upgrade-minor-version false --extension-type microsoft.arcdataservices --scope cluster --release-namespace "arc" --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper --config imageCredentials.registry=mcr.microsoft.com --config imageCredentials.username=arcuser --config systemDefaultValues.image=mcr.microsoft.com/arcdata/arc-bootstrapper:latest --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug -``` ---> [!NOTE] -> The Arc data services extension install can take a few minutes to complete. --#### Verify the Arc data services extension is created --You can verify the status of the deployment of the Azure Arc-enabled data services extension. Use the Azure portal or the kubectl CLI. --##### Check status from Azure portal --1. Log in to the Azure portal and browse to the resource group where the Kubernetes connected cluster resource is located. -1. Select the Azure Arc-enabled Kubernetes cluster (Type = "Kubernetes - Azure Arc") where the extension was deployed. -1. In the navigation on the left side, under **Settings**, select **Extensions**. -1. The portal shows the extension that was created earlier in an installed state. --##### Check status using kubectl CLI --1. Connect to your Kubernetes cluster via a Terminal window. -1.
Run the below command and ensure: - - The namespace mentioned above is created -- and -- - The `bootstrapper` pod state is **running** before proceeding to the next step. -- ``` console - kubectl get pods --namespace <name of namespace used in the json template file above> - ``` --For example, the following command gets the pods from the `arc` namespace. --```console -#Example: -kubectl get pods --namespace arc -``` --### Retrieve the managed identity and grant roles --When the Arc data services extension is created, Azure creates a managed identity. You need to assign certain roles to this managed identity for usage and/or metrics to be uploaded. --#### Retrieve managed identity of the Arc data controller extension --```azurecli -$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group <resource group> --cluster-name <connectedclustername> --cluster-type connectedClusters --name <name of extension> | convertFrom-json).identity.principalId -#Example -$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group myresourcegroup --cluster-name myconnectedcluster --cluster-type connectedClusters --name ads-extension | convertFrom-json).identity.principalId -``` --#### Assign role to the managed identity --Run the below command to assign the **Contributor** and **Monitoring Metrics Publisher** roles: --```azurecli -az role assignment create --assignee $Env:MSI_OBJECT_ID --role "Contributor" --scope "/subscriptions/$ENV:subscription/resourceGroups/$ENV:resourceGroup" -az role assignment create --assignee $Env:MSI_OBJECT_ID --role "Monitoring Metrics Publisher" --scope "/subscriptions/$ENV:subscription/resourceGroups/$ENV:resourceGroup" -``` --### Step 2: Create a custom location using `customlocation` CLI extension --A custom location is an Azure resource that is equivalent to a namespace in a Kubernetes cluster. Custom locations are used as a target to deploy resources to or from Azure. Learn more about custom locations in the [Custom locations on top of Azure Arc-enabled Kubernetes documentation](../kubernetes/conceptual-custom-locations.md).
--#### Set environment variables --##### [Linux](#tab/linux) --```azurecli -export clName=mycustomlocation -export hostClusterId=$(az connectedk8s show --resource-group ${resourceGroup} --name ${clusterName} --query id -o tsv) -export extensionId=$(az k8s-extension show --resource-group ${resourceGroup} --cluster-name ${clusterName} --cluster-type connectedClusters --name ${adsExtensionName} --query id -o tsv) -az customlocation create --resource-group ${resourceGroup} --name ${clName} --namespace ${namespace} --host-resource-id ${hostClusterId} --cluster-extension-ids ${extensionId} --location ${location} -``` --##### [Windows (PowerShell)](#tab/windows) --```azurecli -$ENV:clName="mycustomlocation" -$ENV:hostClusterId=(az connectedk8s show --resource-group $ENV:resourceGroup --name $ENV:clusterName --query id -o tsv) -$ENV:extensionId=(az k8s-extension show --resource-group $ENV:resourceGroup --cluster-name $ENV:clusterName --cluster-type connectedClusters --name $ENV:adsExtensionName --query id -o tsv) -az customlocation create --resource-group $ENV:resourceGroup --name $ENV:clName --namespace $ENV:namespace --host-resource-id $ENV:hostClusterId --cluster-extension-ids $ENV:extensionId -``` ----### Validate the custom location is created --From the terminal, run the below command to list the custom locations, and validate that the **Provisioning State** shows Succeeded: --```azurecli -az customlocation list -o table -``` --### Create certificates for logs and metrics UI dashboards --Optionally, you can specify certificates for logs and metrics UI dashboards. See [Provide certificates for monitoring](monitor-certificates.md) for examples. The December, 2021 release introduces this option. --### Step 3: Create the Azure Arc data controller --After the extension and custom location are created, proceed to deploy the Azure Arc data controller as follows. 
--```azurecli -az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --profile-name <profile name> --auto-upload-metrics true --custom-location <name of custom location> --storage-class <storageclass> -# Example -az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --custom-location mycustomlocation --storage-class mystorageclass -``` --If you want to create the Azure Arc data controller using a custom configuration template, follow the steps described in [Create custom configuration profile](create-custom-configuration-template.md) and provide the path to the file as follows: ---```azurecli -az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --custom-location <name of custom location> -# Example -az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --custom-location mycustomlocation -``` - ## Monitor the status of Azure Arc data controller deployment The deployment status of the Arc data controller on the cluster can be monitored as follows: kubectl get datacontrollers --namespace arc [Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) -[Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md) +[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md) + |
azure-arc | Create Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md | Title: Create an Azure SQL managed instance on Azure Arc -description: Create an Azure SQL managed instance on Azure Arc + Title: Create an Azure Arc-enabled SQL Managed Instance +description: Deploy Azure Arc-enabled SQL Managed Instance Last updated 07/30/2021 -# Create an Azure SQL managed instance on Azure Arc +# Create an Azure Arc-enabled SQL Managed Instance [!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)] |
azure-arc | Migrate To Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-to-managed-instance.md | GO [Start by creating a Data Controller](create-data-controller-indirect-cli.md) -[Already created a Data Controller? Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md) +[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md) |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md | To see the regions that currently support Azure Arc-enabled data services, go to [Plan your Azure Arc data services deployment](plan-azure-arc-data-services.md) (requires installing the client tools first) -[Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first) +[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first) [Create an Azure Database for PostgreSQL server on Azure Arc](create-postgresql-server.md) (requires creation of an Azure Arc data controller first) |
azure-arc | Plan Azure Arc Data Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md | In order to experience Azure Arc-enabled data services, you'll need to complete 1. Create data services. - For example, [Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md). + For example, [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md). 1. Connect with Azure Data Studio. |
azure-arc | Point In Time Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md | Point-in-time restore to Azure Arc-enabled SQL Managed Instance has the followin [Start by creating a Data Controller](create-data-controller-indirect-cli.md) -[Already created a Data Controller? Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md) +[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md) |
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md | To see how all Azure Arc-enabled components are validated, see [Validation progr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-| [Unity XT](https://www.dell.com/en-us/dt/storage/unity.htm) |1.25.4|1.15.0_2023-01-10|16.0.816.19223 |Not validated| -| [PowerStore T](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance.htm) |1.25.4|1.15.0_2023-01-10|16.0.816.19223 |Not validated| +| [Unity XT](https://www.dell.com/en-us/dt/storage/unity.htm) |1.24.3|1.15.0_2023-01-10|16.0.816.19223 |Not validated| +| [PowerStore T](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance.htm) |1.24.3|1.15.0_2023-01-10|16.0.816.19223 |Not validated| | [PowerFlex](https://www.dell.com/en-us/dt/storage/powerflex.htm) |1.21.5|1.4.1_2022-03-08|15.0.2255.119 | 12.3 (Ubuntu 12.3-1) | | [PowerStore X](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance/powerstore-x-series.htm)|1.20.6|1.0.0_2021-07-30|15.0.2148.140 | 12.3 (Ubuntu 12.3-1) | To see how all Azure Arc-enabled components are validated, see [Validation progr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-| TKG 2.1.0 | 1.26.0 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04) +| TKG 2.1.0 | 1.24.9 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04) | TKG-1.6.0 | 1.23.8 | 1.11.0_2022-09-13 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1) | TKGm v1.5.3 | 1.22.8 | 1.9.0_2022-07-12 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)| |
azure-arc | View Billing Data In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-billing-data-in-azure.md | In the indirectly connected mode, billing data is periodically exported out of t To upload billing data to Azure, the following should happen first: 1. Create an Azure Arc-enabled data service if you don't have one already. For example, create one of the following:- - [Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md) + - [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md) - [Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) 2. Wait at least 2 hours after the creation of the data service so that the billing telemetry collection process can collect some billing data. 3. Follow the steps described in [Upload resource inventory, usage data, metrics and logs to Azure Monitor](upload-metrics-and-logs-to-azure-monitor.md) to get set up with the prerequisites for uploading usage/billing/logs data, and then proceed to [Upload usage data to Azure](upload-usage-data.md) to upload the billing data. |
azure-arc | Identity Access Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/identity-access-overview.md | description: "Understand identity and access options for Arc-enabled Kubernetes # Azure Arc-enabled Kubernetes identity and access overview -You can authenticate, authorize, and control access to your Azure Arc-enabled Kubernetes clusters. Kubernetes role-based access control (Kubernetes RBAC) lets you grant users, groups, and service accounts access to only the resources they need. You can further enhance the security and permissions structure by using Azure Active Directory and Azure role-based access control (RBAC). +You can authenticate, authorize, and control access to your Azure Arc-enabled Kubernetes clusters. Kubernetes role-based access control (Kubernetes RBAC) lets you grant users, groups, and service accounts access to only the resources they need. You can further enhance the security and permissions structure by using Azure Active Directory and Azure role-based access control (Azure RBAC). While Kubernetes RBAC works only on Kubernetes resources within your cluster, Azure RBAC works on resources across your Azure subscription. |
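To illustrate the Kubernetes RBAC half of that picture, granting a user read-only access to pods in one namespace takes a role plus a binding. A hedged sketch with placeholder names:

```console
# Create a namespaced role that can only read pods, then bind it to one user.
kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace dev
kubectl create rolebinding read-pods --role=pod-reader --user=jane@contoso.com --namespace dev
```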
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md | Title: "Overview of Azure Arc-enabled Kubernetes" Previously updated : 05/03/2022 Last updated : 03/03/2022 description: "This article provides an overview of Azure Arc-enabled Kubernetes." # What is Azure Arc-enabled Kubernetes? -Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers (such as GCP or AWS) or clusters running on your on-premises data center (such as VMware vSphere or Azure Stack HCI) to Azure through the Arc platform. +Azure Arc-enabled Kubernetes allows you to attach Kubernetes clusters running anywhere so that you can manage and configure them in Azure. -When you connect a Kubernetes cluster to Azure, it will: +Once your Kubernetes clusters are connected to Azure, at scale you can: -* Be represented in Azure Resource Manager by a unique ID -* Be placed in an Azure subscription and resource group -* Receive tags just like any other Azure resource +* View all [connected Kubernetes clusters](quickstart-connect-cluster.md) running outside of Azure for inventory, grouping, and tagging, along with Azure Kubernetes Service (AKS) clusters. -Azure Arc-enabled Kubernetes supports industry-standard SSL to secure data in transit. For the connected clusters, cluster extensions, and custom locations, data at rest is stored encrypted in an Azure Cosmos DB database to ensure confidentiality. --Azure Arc-enabled Kubernetes supports the following scenarios for connected clusters: --* Single pane of glass to view all [connected Kubernetes clusters](quickstart-connect-cluster.md) running outside of Azure for inventory, grouping, and tagging, along with Azure Kubernetes Service (AKS) clusters. --* Deploy applications and apply configuration using [GitOps-based configuration management](tutorial-use-gitops-connected-cluster.md). +* Configure clusters and deploy applications using [GitOps-based configuration management](tutorial-use-gitops-connected-cluster.md). * View and monitor your clusters using [Azure Monitor for containers](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json). Azure Arc-enabled Kubernetes supports the following scenarios for connected clus * Ensure governance through applying policies with [Azure Policy for Kubernetes](../../governance/policy/concepts/policy-for-kubernetes.md?toc=/azure/azure-arc/kubernetes/toc.json). -* Manage access by using [Azure Active Directory for authentication and authorization checks](azure-rbac.md) on your cluster. +* Grant access and [connect](cluster-connect.md) to your Kubernetes clusters from anywhere, and manage access by using [Azure role-based access control (RBAC)](azure-rbac.md) on your cluster. -* Securely access your Kubernetes cluster from anywhere without opening inbound port on firewall using [Cluster Connect](cluster-connect.md). +* Deploy machine learning workloads using [Azure Machine Learning for Kubernetes clusters](../../machine-learning/how-to-attach-kubernetes-anywhere.md?toc=/azure/azure-arc/kubernetes/toc.json). 
-* Deploy [Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) on top of your cluster for observability and policy enforcement on service-to-service interactions +* Deploy services to your cluster to take advantage of specific hardware and comply with data residency requirements: -* Deploy machine learning workloads using [Azure Machine Learning for Kubernetes clusters](../../machine-learning/how-to-attach-kubernetes-anywhere.md?toc=/azure/azure-arc/kubernetes/toc.json). + * [Azure Arc-enabled data services](../dat) + * [Azure Machine Learning for Kubernetes clusters](../../machine-learning/how-to-attach-kubernetes-anywhere.md?toc=/azure/azure-arc/kubernetes/toc.json) + * [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) + * [App Services on Azure Arc](../../app-service/overview-arc-integration.md) + * [Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) -* Create [custom locations](./custom-locations.md) as target locations for deploying Azure Arc-enabled data services (SQL Managed Instances, PostgreSQL server (preview)), [App Services on Azure Arc](../../app-service/overview-arc-integration.md) (including web, function, and logic apps), and [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md). +## Azure Arc connection +When the [Azure Arc agents are deployed to the cluster](quickstart-connect-cluster.md), an outbound connection to Azure is initiated, using industry-standard SSL to secure data in transit. ++Once connected to Azure, the cluster will be represented as its own resource in Azure Resource Manager and can be organized using resource groups and tagging. ## Supported Kubernetes distributions -Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. The Azure Arc team has worked with [key industry partners to validate conformance](./validation-program.md) of their Kubernetes distributions with Azure Arc-enabled Kubernetes. +Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. This includes clusters running on other public cloud providers (such as GCP or AWS) and clusters running on your on-premises data center (such as VMware vSphere or Azure Stack HCI). The Azure Arc team has worked with [key industry partners to validate conformance](./validation-program.md) of their Kubernetes distributions with Azure Arc-enabled Kubernetes. + ## Next steps -Learn how to connect your existing Kubernetes cluster to Azure Arc. -> [!div class="nextstepaction"] -> [Connect an existing Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md) +* Explore the [Cloud Adoption Framework for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-kubernetes/eslz-arc-kubernetes-identity-access-management) +* [Connect an existing Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md) |
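The Azure Arc connection described above reduces to a single CLI step once the cluster and its outbound connectivity are in place. A minimal sketch with placeholder names, assuming the `connectedk8s` extension is available:

```azurecli
az extension add --name connectedk8s
az connectedk8s connect --name my-cluster --resource-group my-resource-group
```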
azure-arc | Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md | The latest version of the Azure Connected Machine agent for Windows-based machin #### Microsoft Update configuration -The recommended way of keeping the Windows agent up to date is to automatically obtain the latest version through Microsoft Update. This allows you to utilize your existing update infrastructure (such as Microsoft Endpoint Configuration Manager or Windows Server Update Services) and include Azure Connected Machine agent updates with your regular OS update schedule. +The recommended way of keeping the Windows agent up to date is to automatically obtain the latest version through Microsoft Update. This allows you to utilize your existing update infrastructure (such as Microsoft Configuration Manager or Windows Server Update Services) and include Azure Connected Machine agent updates with your regular OS update schedule. Windows Server doesn't check for updates in Microsoft Update by default. To receive automatic updates for the Azure Connected Machine Agent, you must configure the Windows Update client on the machine to check for other Microsoft products. For Windows Servers that belong to a domain and connect to the Internet to check The next time computers in your selected scope refresh their policy, they will start to check for updates in both Windows Update and Microsoft Update. -For organizations that use Microsoft Endpoint Configuration Manager (MECM) or Windows Server Update Services (WSUS) to deliver updates to their servers, you need to configure WSUS to synchronize the Azure Connected Machine Agent packages and approve them for installation on your servers. Follow the guidance for [Windows Server Update Services](/windows-server/administration/windows-server-update-services/manage/setting-up-update-synchronizations#to-specify-update-products-and-classifications-for-synchronization) or [MECM](/mem/configmgr/sum/get-started/configure-classifications-and-products#to-configure-classifications-and-products-to-synchronize) to add the following products and classifications to your configuration: +For organizations that use Microsoft Configuration Manager (MECM) or Windows Server Update Services (WSUS) to deliver updates to their servers, you need to configure WSUS to synchronize the Azure Connected Machine Agent packages and approve them for installation on your servers. Follow the guidance for [Windows Server Update Services](/windows-server/administration/windows-server-update-services/manage/setting-up-update-synchronizations#to-specify-update-products-and-classifications-for-synchronization) or [MECM](/mem/configmgr/sum/get-started/configure-classifications-and-products#to-configure-classifications-and-products-to-synchronize) to add the following products and classifications to your configuration: * **Product Name**: Azure Connected Machine Agent (select all 3 sub-options) * **Classifications**: Critical Updates, Updates |
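For a single machine, you can opt the Windows Update client into Microsoft Update without Group Policy by registering the Microsoft Update service, as the row above describes. A hedged sketch using the well-known Microsoft Update service ID (run in an elevated PowerShell session):

```powershell
$ServiceManager = New-Object -ComObject 'Microsoft.Update.ServiceManager'
# 7 combines the registration flags (pending + online + register with Automatic Updates)
$ServiceManager.AddService2('7971f918-a847-4430-9279-4a52d1efe18d', 7, '') | Out-Null
```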
azure-arc | Manage Automatic Vm Extension Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md | If you continue to have trouble upgrading an extension, you can [disable automat ### Timing of automatic extension upgrades -When a new version of a VM extension is published, it becomes available for installation and manual upgrade on Arc-enabled servers. For servers that already have the extension installed and automatic extension upgrade enabled, it may take up to 5 weeks for every server with that extension to get the automatic upgrade. Upgrades are issued in batches across Azure regions and subscriptions, so you may see the extension get upgraded on some of your servers before others. If you need to upgrade an extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-cli.md#upgrade-extensions) or [Azure CLI](manage-vm-extensions-powershell.md#upgrade-extension). +When a new version of a VM extension is published, it becomes available for installation and manual upgrade on Arc-enabled servers. For servers that already have the extension installed and automatic extension upgrade enabled, it may take up to 5 weeks for every server with that extension to get the automatic upgrade. Upgrades are issued in batches across Azure regions and subscriptions, so you may see the extension get upgraded on some of your servers before others. If you need to upgrade an extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extension) or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions). ## Supported extensions |
azure-arc | Onboard Configuration Manager Custom Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-custom-task.md | -Microsoft Endpoint Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager offers the custom task sequence as a flexible paradigm for application deployment. +Microsoft Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager offers the custom task sequence as a flexible paradigm for application deployment. You can use a custom task sequence that deploys the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers. |
azure-arc | Onboard Configuration Manager Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-powershell.md | -Microsoft Endpoint Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager has an integrated ability to run PowerShell scripts. +Microsoft Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager has an integrated ability to run PowerShell scripts. You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers. |
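The script that Configuration Manager runs typically wraps the Connected Machine agent's own CLI. A hedged sketch of the core onboarding call with placeholder values (in practice, pull the service principal secret from a secure store rather than hard-coding it):

```powershell
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --service-principal-id "<appId>" `
    --service-principal-secret "<secret>" `
    --tenant-id "<tenantId>" `
    --subscription-id "<subscriptionId>" `
    --resource-group "<resourceGroup>" `
    --location "<region>"
```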
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | The Azure Arc service and Azure Connected Machine Agent are supported on Windows * Connected to a power source * Powered on -For example, a computer running Windows 11 that's responsible for digital signage, point-of-sale solutions, and general back office management tasks is a good candidate for Azure Arc. End-user productivity machines, such as a laptop, which may go offline for long periods of time, shouldn't use Azure Arc and instead should consider [Microsoft Intune](/mem/intune) or [Microsoft Endpoint Configuration Manager](/mem/configmgr). +For example, a computer running Windows 11 that's responsible for digital signage, point-of-sale solutions, and general back office management tasks is a good candidate for Azure Arc. End-user productivity machines, such as a laptop, which may go offline for long periods of time, shouldn't use Azure Arc and instead should consider [Microsoft Intune](/mem/intune) or [Microsoft Configuration Manager](/mem/configmgr). ### Short-lived servers and virtual desktop infrastructure |
azure-cache-for-redis | Cache Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-managed-identity.md | az redis identity assign \--mi-system-assigned \--name MyCacheName \--resource-g ## Enable managed identity using Azure PowerShell -Use Azure PowerShell for creating a new cache with managed identity or updating an existing cache to use managed identity. For more information, see [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache?view=azps-7.1.0&preserve-view=true) or [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache?view=azps-7.1.0&preserve-view=true). +Use Azure PowerShell for creating a new cache with managed identity or updating an existing cache to use managed identity. For more information, see [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache) or [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache). For example, to update a cache to use system-managed identity, use the following PowerShell command: |
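The PowerShell command elided above plausibly looks like the following sketch; `-IdentityType` is the Az.RedisCache parameter for managed identity, and the names are placeholders:

```powershell
# Update an existing cache to use a system-assigned managed identity.
Set-AzRedisCache -Name "MyCacheName" -ResourceGroupName "MyResourceGroup" -IdentityType "SystemAssigned"
```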
azure-functions | Create First Function Vs Code Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md | In this section, you use Visual Studio Code to create a local Azure Functions pr 1. For **Select a .NET runtime**, choose from one of the following options: - | .NET runtime | Process model | Description | - | | | | - | **.NET 6.0 (LTS)** | [In-process](functions-dotnet-class-library.md) | _In-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions. Function code runs in the same process as the Functions host. | - | **.NET 6.0 Isolated (LTS)** | [Isolated worker process](dotnet-isolated-process-guide.md) | Functions run on .NET 6, but in a separate process from the Functions host. | - | **.NET 7.0 Isolated** | [Isolated worker process](dotnet-isolated-process-guide.md) | Because .NET 7 isn't an LTS version of .NET, your functions must run in an isolated process on .NET 7. | - | **.NET Framework Isolated** | [Isolated worker process](dotnet-isolated-process-guide.md) | Choose this option when your functions need to use libraries only supported on the .NET Framework. | + | Option | .NET version | Process model | Description | + | | | | | + | **.NET 6.0 (LTS)** | .NET 6 | [In-process](functions-dotnet-class-library.md) | _In-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions. Function code runs in the same process as the Functions host. | + | **.NET 6.0 Isolated (LTS)** | .NET 6 | [Isolated worker process](dotnet-isolated-process-guide.md) | Functions run on .NET 6, but in a separate process from the Functions host. | + | **.NET 7.0 Isolated** | .NET 7 | [Isolated worker process](dotnet-isolated-process-guide.md) | Because .NET 7 isn't an LTS version of .NET, your functions must run in an isolated process on .NET 7. | + | **.NET Framework Isolated** | .NET Framework 4.8 | [Isolated worker process](dotnet-isolated-process-guide.md) | Choose this option when your functions need to use libraries only supported on the .NET Framework. | The two process models use different APIs, and each process model uses a different template when generating the function project code. If you don't see these options, press F1 and type `Preferences: Open user settings`, then search for `Azure Functions: Project Runtime` and make sure that the default runtime version is set to `~4`. |
azure-functions | Durable Functions Storage Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md | Additional properties may be set to customize the connection. See [Common proper ### Configuring the Netherite storage provider -To use the Netherite storage provider, you must first add a reference to the [Microsoft.Azure.DurableTask.Netherite.AzureFunctions](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions) NuGet package in your **csproj** file (.NET apps) or your **extensions.proj** file (JavaScript, Python, and PowerShell apps). +Enabling the Netherite storage provider requires a configuration change in your `host.json`. For C# users, it also requires an additional installation step. ++#### `host.json` Configuration The following host.json example shows the minimum configuration required to enable the Netherite storage provider. The following host.json example shows the minimum configuration required to enab For more detailed setup instructions, see the [Netherite getting started documentation](https://microsoft.github.io/durabletask-netherite/#/?id=getting-started). +#### Install the Netherite extension (.NET only) ++> [!NOTE] +> If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), you should ignore this section as Extension Bundles removes the need for manual Extension management. ++You'll need to install the latest version of the Netherite Extension on NuGet. This usually means including a reference to it in your `.csproj` file and building the project. ++The Extension package to install depends on the .NET worker you are using: +- For the _in-process_ .NET worker, install [`Microsoft.Azure.DurableTask.Netherite.AzureFunctions`](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions). +- For the _isolated_ .NET worker, install [`Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite`](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite). + ### Configuring the MSSQL storage provider -To use the MSSQL storage provider, you must first add a reference to the [Microsoft.DurableTask.SqlServer.AzureFunctions](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions) NuGet package in your **csproj** file (.NET apps) or your **extensions.proj** file (JavaScript, Python, and PowerShell apps). +Enabling the MSSQL storage provider requires a configuration change in your `host.json`. For C# users, it also requires an additional installation step. ++#### `host.json` Configuration The following example shows the minimum configuration required to enable the MSSQL storage provider. The following example shows the minimum configuration required to enable the MSS For more detailed setup instructions, see the [MSSQL provider's getting started documentation](https://microsoft.github.io/durabletask-mssql/#/quickstart). +#### Install the Durable Task MSSQL extension (.NET only) ++> [!NOTE] +> If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), you should ignore this section as Extension Bundles removes the need for manual Extension management. ++You'll need to install the latest version of the MSSQL storage provider Extension on NuGet. This usually means including a reference to it in your `.csproj` file and building the project. 
++The Extension package to install depends on the .NET worker you are using: +- For the _in-process_ .NET worker, install [`Microsoft.DurableTask.SqlServer.AzureFunctions`](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions). +- For the _isolated_ .NET worker, install [`Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer`](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer). + ## Comparing storage providers There are many significant tradeoffs between the various supported storage providers. The following table can be used to help you understand these tradeoffs and decide which storage provider is best for your needs. |
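The minimal Netherite `host.json` block referenced above isn't shown in this excerpt; based on the provider's documented settings it plausibly reads as follows (the two connection setting names are assumptions to verify against the Netherite getting-started docs):

```json
{
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "Netherite",
        "storageConnectionName": "AzureWebJobsStorage",
        "eventHubsConnectionName": "EventHubsConnection"
      }
    }
  }
}
```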
azure-functions | Quickstart Mssql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-mssql.md | Durable Functions supports several [storage providers](durable-functions-storage ## Note on data migration -Migration of [Task Hub data](durable-functions-task-hubs.md) across storage providers is not currently supported. Function apps with existing runtime data will start with a fresh, empty task hub after switching to the MSSQL backend. Similarly, the task hub contents created with MSSQL cannot be preserved when switching to a different storage provider. +Migration of [Task Hub data](durable-functions-task-hubs.md) across storage providers isn't currently supported. Function apps with existing runtime data will start with a fresh, empty task hub after switching to the MSSQL backend. Similarly, the task hub contents created with MSSQL can't be preserved when switching to a different storage provider. ## Prerequisites -The following steps assume that you are starting with an existing Durable Functions app and are familiar with how to operate it. +The following steps assume that you're starting with an existing Durable Functions app and are familiar with how to operate it. In particular, this quickstart assumes that you have already: 1. Created an Azure Functions project on your local machine. 2. Added Durable Functions to your project with an [orchestrator function](durable-functions-bindings.md#orchestration-trigger) and a [client function](durable-functions-bindings.md#orchestration-client) that triggers it. 3. Configured the project for local debugging. -If this is not the case, we suggest you start with one of the following articles, which provides detailed instructions on how to achieve all the requirements above: +If this isn't the case, we suggest you start with one of the following articles, which provides detailed instructions on how to achieve all the requirements above: - [Create your first durable function - C#](durable-functions-create-first-csharp.md) - [Create your first durable function - JavaScript](quickstart-js-vscode.md) If this is not the case, we suggest you start with one of the following articles > [!NOTE] > If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), you should ignore this section as Extension Bundles removes the need for manual Extension management. -You will need to install the latest version of the `Microsoft.DurableTask.SqlServer.AzureFunctions` [Extension on NuGet](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions) on your app. This usually means to include a reference to it in your `.csproj` file and building the project. +You'll need to install the latest version of the MSSQL storage provider Extension on NuGet. This usually means including a reference to it in your `.csproj` file and building the project. ++The Extension package to install depends on the .NET worker you're using: +- For the _in-process_ .NET worker, install [`Microsoft.DurableTask.SqlServer.AzureFunctions`](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions). +- For the _isolated_ .NET worker, install [`Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer`](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer). 
You can install the Extension using the following [Azure Functions Core Tools CLI](../functions-run-local.md#install-the-azure-functions-core-tools) command ```cmd-func extensions install --package Microsoft.DurableTask.SqlServer.AzureFunctions --version <latestVersionOnNuget> +func extensions install --package <package name depending on your worker model> --version <latest version> ``` For more information on installing Azure Functions Extensions via the Core Tools CLI, see [this guide](../functions-run-local.md#install-extensions). Below is an example `local.settings.json` assigning the default Docker-based SQL ### Update host.json -Edit the storage provider section of the `host.json` file so it sets the `type` to `mssql`. We'll also specify the connection string variable name, `SQLDB_Connection`, under `connectionStringName`. We'll set `createDatabaseIfNotExists` to `true`; this setting creates a database named `DurableDB` if one does not already exists, with collation `Latin1_General_100_BIN2_UTF8`. +Edit the storage provider section of the `host.json` file so it sets the `type` to `mssql`. We'll also specify the connection string variable name, `SQLDB_Connection`, under `connectionStringName`. We'll set `createDatabaseIfNotExists` to `true`; this setting creates a database named `DurableDB` if one doesn't already exist, with collation `Latin1_General_100_BIN2_UTF8`. ```json { |
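The `host.json` block opened above is truncated in this excerpt; given the settings it names (`type`, `connectionStringName`, `createDatabaseIfNotExists`), the section plausibly reads:

```json
{
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "mssql",
        "connectionStringName": "SQLDB_Connection",
        "createDatabaseIfNotExists": true
      }
    }
  }
}
```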
azure-functions | Quickstart Netherite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-netherite.md | If this isn't the case, we suggest you start with one of the following articles, > [!NOTE] > If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), you should ignore this section as Extension Bundles removes the need for manual Extension management. -You'll need to install the latest version of the `Microsoft.Azure.DurableTask.Netherite.AzureFunctions` [Extension on NuGet](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions) on your app. This usually means to include a reference to it in your `.csproj` file and building the project. +You'll need to install the latest version of the Netherite Extension on NuGet. This usually means including a reference to it in your `.csproj` file and building the project. ++The Extension package to install depends on the .NET worker you are using: +- For the _in-process_ .NET worker, install [`Microsoft.Azure.DurableTask.Netherite.AzureFunctions`](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions). +- For the _isolated_ .NET worker, install [`Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite`](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite). You can install the Extension using the following [Azure Functions Core Tools CLI](../functions-run-local.md#install-the-azure-functions-core-tools) command ```cmd-func extensions install --package Microsoft.Azure.DurableTask.Netherite.AzureFunctions --version <latestVersionOnNuget> +func extensions install --package <package name depending on your worker model> --version <latest version> ``` For more information on installing Azure Functions Extensions via the Core Tools CLI, see [this guide](../functions-run-local.md#install-extensions). |
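The `.csproj` reference mentioned above is usually added with `dotnet add package`; a hedged example for the in-process worker (substitute the isolated-worker package name where appropriate):

```cmd
dotnet add package Microsoft.Azure.DurableTask.Netherite.AzureFunctions
```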
azure-functions | Quickstart Python Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md | In this article, you learn how to use the Visual Studio Code Azure Functions ext :::image type="content" source="./media/quickstart-python-vscode/functions-vs-code-complete.png" alt-text="Screenshot of the running durable function in Azure."::: > [!NOTE]-> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to have a more idiomatic and intuitive. To learn more, see Azure Functions Python [developer guide](../functions-reference-python.md?pivots=python-mode-decorators). +> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for Python programmers. To learn more, see Azure Functions Python [developer guide](../functions-reference-python.md?pivots=python-mode-decorators). ## Prerequisites |
azure-functions | Functions Bindings Cosmosdb V2 Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md | Title: Azure Cosmos DB trigger for Functions 2.x and higher description: Learn to use the Azure Cosmos DB trigger in Azure Functions. Previously updated : 11/29/2022 Last updated : 03/03/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers Apps using [Azure Cosmos DB extension version 4.x](./functions-bindings-cosmosdb ```cs namespace CosmosDBSamplesV2 {+ // Customize the model with your own desired properties public class ToDoItem {- public string Id { get; set; } + public string id { get; set; } public string Description { get; set; } } } namespace CosmosDBSamplesV2 if (input != null && input.Count > 0) { log.LogInformation("Documents modified " + input.Count);- log.LogInformation("First document Id " + input[0].Id); + log.LogInformation("First document Id " + input[0].id); } } } Here's the binding data in the *function.json* file: Here's the C# script code: ```cs- #r "Microsoft.Azure.DocumentDB.Core" - using System;- using Microsoft.Azure.Documents; using System.Collections.Generic; using Microsoft.Extensions.Logging; - public static void Run(IReadOnlyList<Document> documents, ILogger log) + // Customize the model with your own desired properties + public class ToDoItem + { + public string id { get; set; } + public string Description { get; set; } + } ++ public static void Run(IReadOnlyList<ToDoItem> documents, ILogger log) { log.LogInformation("Documents modified " + documents.Count);- log.LogInformation("First document Id " + documents[0].Id); + log.LogInformation("First document Id " + documents[0].id); } ``` |
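The *function.json* contents referenced by the C# script example aren't shown in this excerpt; for extension version 4.x, a matching trigger binding plausibly looks like the following (property names follow the 4.x binding schema; the database, container, and connection setting names are placeholders):

```json
{
  "type": "cosmosDBTrigger",
  "name": "documents",
  "direction": "in",
  "connection": "CosmosDBConnection",
  "databaseName": "ToDoItems",
  "containerName": "Items",
  "createLeaseContainerIfNotExists": true
}
```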
azure-functions | Functions Create Your First Function Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md | Title: "Quickstart: Create your first C# function in Azure using Visual Studio" description: "In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions." ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 01/05/2023 Last updated : 02/28/2023 ms.devlang: csharp The Azure Functions project template in Visual Studio creates a C# class library 1. In **Additional information** choose from one of the following options for **Functions worker**: - | .NET runtime | Process model | Description | - | | | | - | **.NET 6.0 (Long Term Support)** | [In-process](functions-dotnet-class-library.md) | _In-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions. Function code runs in the same process as the Functions host. | - | **.NET 6.0 Isolated (Long Term Support)** | [Isolated worker process](dotnet-isolated-process-guide.md) | Functions run on .NET 6, but in a separate process from the Functions host. | - | **.NET 7.0 Isolated** | [Isolated worker process](dotnet-isolated-process-guide.md) | Because .NET 7 isn't an LTS version of .NET, your functions must run in an isolated process on .NET 7. | - | **.NET Framework Isolated v4** | [Isolated worker process](dotnet-isolated-process-guide.md) | Choose this option when your functions need to use libraries only supported on the .NET Framework. | - | **.NET Core 3.1 (Long Term Support)** | [In-process](functions-dotnet-class-library.md) | .NET Core 3.1 is no longer a supported version of .NET and isn't supported by Functions version 4.x. Use .NET 6.0 instead. | - | **.NET Framework v1** | [In-process](functions-dotnet-class-library.md) | Choose this option when your functions need to use libraries only supported on older versions of .NET Framework. Requires version 1.x of the Functions runtime. | + | Option | .NET version | Process model | Description | + | | | | | + | **.NET 6.0 (Long Term Support)** | .NET 6 | [In-process](functions-dotnet-class-library.md) | _In-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions. Function code runs in the same process as the Functions host. | + | **.NET 6.0 Isolated (Long Term Support)** | .NET 6 | [Isolated worker process](dotnet-isolated-process-guide.md) | Functions run on .NET 6, but in a separate process from the Functions host. | + | **.NET 7.0 Isolated** | .NET 7 | [Isolated worker process](dotnet-isolated-process-guide.md) | Because .NET 7 isn't an LTS version of .NET, your functions must run in an isolated process on .NET 7. | + | **.NET Framework Isolated v4** | .NET Framework 4.8 | [Isolated worker process](dotnet-isolated-process-guide.md) | Choose this option when your functions need to use libraries only supported on the .NET Framework. | + | **.NET Core 3.1 (Long Term Support)** | .NET Core 3.1 | [In-process](functions-dotnet-class-library.md) | .NET Core 3.1 is no longer a supported version of .NET and isn't supported by Functions version 4.x. Use .NET 6.0 instead. 
| + | **.NET Framework v1** | .NET Framework | [In-process](functions-dotnet-class-library.md) | Choose this option when your functions need to use libraries only supported on older versions of .NET Framework. Requires version 1.x of the Functions runtime. | The two process models use different APIs, and each process model uses a different template when generating the function project code. If you don't see options for .NET 6.0 and later .NET runtime versions, you may need to [update your Azure Functions tools installation](https://developercommunity.visualstudio.com/t/Sometimes-the-Visual-Studio-functions-wo/10224478?). |
azure-functions | Functions Hybrid Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-hybrid-powershell.md | Hybrid connections are configured from the networking section of the function ap :::image type="content" source="./media/functions-hybrid-powershell/hybrid-connection-overview.png" alt-text="Add a hybrid connection." border="true"::: -1. Enter information about the hybrid connection as shown right after the following screenshot. You have the option of making the **Endpoint Host** setting match the host name of the on-premises server to make it easier to remember the server later when you're running remote commands. The port matches the default Windows remote management service port that was defined on the server earlier. +1. Enter information about the hybrid connection as shown in the following screenshot. For **Endpoint Host**, use the host name of the on-premises server for which you created the self-signed certificate. You'll have connection issues when the certificate name and the host name of the on-premises server don't match. The port matches the default Windows remote management service port that was defined on the server earlier. :::image type="content" source="./media/functions-hybrid-powershell/add-hybrid-connection.png" alt-text="Add hybrid connection." border="true"::: |
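Since the connection only succeeds when the certificate name matches the **Endpoint Host**, it can help to see how that matching pair is typically produced on the server. The following PowerShell sketch is not part of the original article, and the host name `contososerver01` is a placeholder for your own server:

```powershell
# Hypothetical host name - use your own on-premises server's name.
$hostName = "contososerver01"

# Create a self-signed certificate whose subject matches the server's host name.
$cert = New-SelfSignedCertificate -DnsName $hostName -CertStoreLocation "Cert:\LocalMachine\My"

# Enable PowerShell remoting and bind the certificate to a WinRM HTTPS listener
# on the default Windows remote management SSL port (5986).
Enable-PSRemoting -Force
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * `
    -CertificateThumbPrint $cert.Thumbprint -Force
```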
azure-functions | Migrate Version 1 Version 4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md | After you make these changes, your updated project should look like the followin <OutputType>Exe</OutputType> </PropertyGroup> <ItemGroup>- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.8.0" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.10.0" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.0.13" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.7.0" /> </ItemGroup> <ItemGroup> |
azure-maps | How To Creator Feature Stateset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-feature-stateset.md | + + Title: Create a feature stateset ++description: How to create a feature stateset using the Creator REST API. ++ Last updated : 03/03/2023++++++# Create a feature stateset ++[Feature statesets] define dynamic properties and values on specific features that support them. This article explains how to create a stateset that defines values and corresponding styles for a property, and how to change a property's state. ++## Prerequisites ++* Successful completion of [Query datasets with WFS API]. +* The `datasetId` obtained in the [Check the dataset creation status] section of the *Use Creator to create indoor maps* tutorial. ++>[!IMPORTANT] +> +> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services]. +> * In the URL examples in this article you will need to replace: +> * `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +> * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status] section of the *Use Creator to create indoor maps* tutorial. ++## Create the feature stateset ++To create a stateset: ++Create a new **HTTP POST Request** that uses the [Stateset API]. The request should look like the following URL: ++```http +https://us.atlas.microsoft.com/featurestatesets?api-version=2.0&datasetId={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key} +``` ++Next, set the `Content-Type` to `application/json` in the **Header** of the request. ++If using a tool like [Postman], it should look like this: +++Finally, in the **Body** of the HTTP request, include the style information in raw JSON format. This applies different colors to the `occupied` property depending on its value: ++```json +{ + "styles":[ + { + "keyname":"occupied", + "type":"boolean", + "rules":[ + { + "true":"#FF0000", + "false":"#00FF00" + } + ] + } + ] +} +``` ++After the response returns successfully, copy the `statesetId` from the response body. In the next section, you'll use the `statesetId` to change the `occupied` property state of the unit with feature `id` "UNIT26". If using Postman, it will appear as follows: +++## Update a feature state ++In this section, you'll learn how to update the `occupied` state of the unit with feature `id` "UNIT26". To do this, create a new **HTTP PUT Request** calling the [Feature Statesets API]. The request should look like the following URL (replace `{statesetId}` with the `statesetId` obtained in [Create a feature stateset](#create-a-feature-stateset)): ++```http +https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} +``` ++Next, set the `Content-Type` to `application/json` in the **Header** of the request. ++If using a tool like [Postman], it should look like this: +++Finally, in the **Body** of the HTTP request, include the state information in raw JSON format. This sets the value of the `occupied` state: ++```json +{ + "states": [ + { + "keyName": "occupied", + "value": true, + "eventTimestamp": "2020-11-14T17:10:20" + } + ] +} +``` ++>[!NOTE] +> The update will be saved only if the posted time stamp is after the time stamp of the previous request. 
++Once the HTTP request is sent and the update completes, you'll receive a `200 OK` HTTP status code. If you implemented [dynamic styling] for an indoor map, the update displays at the specified time stamp in your rendered map. ++## Additional information ++* For information on how to retrieve the state of a feature using its feature ID, see [Feature State - List States]. +* For information on how to delete the stateset and its resources, see [Feature State - Delete Stateset]. +* For information on using the Azure Maps Creator [Feature State service] to apply styles that are based on the dynamic properties of indoor map data features, see the how-to article [Implement dynamic styling for Creator indoor maps]. ++* For more information on the different Azure Maps Creator services discussed in this article, see [Creator Indoor Maps]. ++## Next steps ++Learn how to implement dynamic styling for indoor maps. ++> [!div class="nextstepaction"] +> [dynamic styling] ++[Access to Creator Services]: how-to-manage-creator.md#access-to-creator-services +[Query datasets with WFS API]: how-to-creator-wfs.md +[Stateset API]: /rest/api/maps/v2/feature-state/create-stateset +[Feature Statesets API]: /rest/api/maps/v2/feature-state/update-states +[Feature statesets]: /rest/api/maps/v2/feature-state +[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status +[dynamic styling]: indoor-map-dynamic-styling.md +[Feature State - List States]: /rest/api/maps/v2/feature-state/list-states +[Feature State - Delete Stateset]: /rest/api/maps/v2/feature-state/delete-stateset +[Feature State service]: /rest/api/maps/v2/feature-state +[Implement dynamic styling for Creator indoor maps]: indoor-map-dynamic-styling.md +[Creator Indoor Maps]: creator-indoor-maps.md +[Postman]: https://www.postman.com/ |
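If you'd rather script the stateset creation than use Postman, a minimal PowerShell sketch of the same POST request might look like the following; the key and dataset ID values are placeholders for your own:

```powershell
# Placeholders - substitute your own values.
$subscriptionKey = "<Your-Azure-Maps-Subscription-key>"
$datasetId       = "<datasetId>"

# The style rules from the article: red when occupied, green when vacant.
$body = @'
{
  "styles": [
    {
      "keyname": "occupied",
      "type": "boolean",
      "rules": [ { "true": "#FF0000", "false": "#00FF00" } ]
    }
  ]
}
'@

$uri = "https://us.atlas.microsoft.com/featurestatesets?api-version=2.0&datasetId=$datasetId&subscription-key=$subscriptionKey"

# The response body carries the statesetId used when updating feature states.
$response = Invoke-RestMethod -Method Post -Uri $uri -ContentType "application/json" -Body $body
$response.statesetId
```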
azure-maps | How To Creator Wayfinding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md | Title: Indoor Maps wayfinding service description: How to use the wayfinding service to plot and display routes for indoor maps in Microsoft Azure Maps Creator--++ Last updated 10/25/2022 |
azure-maps | How To Creator Wfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md | + + Title: Query datasets using the Web Feature Service ++description: How to query datasets with Web Feature Service (WFS) ++ Last updated : 03/03/2023++++++# Query datasets using the Web Feature Service ++This article describes how to query Azure Maps Creator [datasets] using [Web Feature Service (WFS)]. You can use the WFS API to query for all feature collections or a specific collection within a dataset. For example, you can use WFS to find all mid-size meeting rooms in a specific building and floor level. ++## Prerequisites ++* Successful completion of [Tutorial: Use Creator to create indoor maps]. +* The `datasetId` obtained in the [Check dataset creation status] section of the *Use Creator to create indoor maps* tutorial. ++This article uses the same sample indoor map as the [Tutorial: Use Creator to create indoor maps]. ++>[!IMPORTANT] +> +> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services]. +> * In the URL examples in this article you will need to replace: +> * `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +> * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status] section of the *Use Creator to create indoor maps* tutorial. ++## Query for feature collections ++To query all collections in your dataset, create a new **HTTP GET Request**: ++Enter the following URL to the [WFS API]: ++```http +https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0 +``` ++The response body is returned in GeoJSON format and contains all collections in the dataset. For simplicity, the example here only shows the `unit` collection. To see an example that contains all collections, see [WFS Describe Collections API]. To learn more about any collection, you can select any of the URLs inside the `links` element. ++```json +{ +"collections": [ + { + "name": "unit", + "description": "A physical and non-overlapping area which might be occupied and traversed by a navigating agent. Can be a hallway, a room, a courtyard, etc. It is surrounded by physical obstruction (wall), unless the is_open_area attribute is equal to true, and one must add openings where the obstruction shouldn't be there. If is_open_area attribute is equal to true, all the sides are assumed open to the surroundings and walls are to be added where needed. Walls for open areas are represented as a line_element or area_element with is_obstruction equal to true.", + "links": [ + { + "href": "https://atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/definition?api-version=1.0", + "rel": "describedBy", + "title": "Metadata catalogue for unit" + }, + { + "href": "https://atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/items?api-version=1.0", + "rel": "data", + "title": "unit" + }, + { + "href": "https://atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit?api-version=1.0", + "rel": "self", + "title": "Metadata catalogue for unit" + } + ] + }, +``` ++## Query for unit feature collection ++In this section, you'll query the [WFS API] for the `unit` feature collection. 
++To query the unit collection in your dataset, create a new **HTTP GET Request**: ++```http +https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/items?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0 +``` ++After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". You'll use "UNIT26" as your feature `id` when you [Update a feature state]. ++```json +{ + "type": "FeatureCollection", + "features": [ + { + "type": "Feature", + "geometry": { + "type": "Polygon", + "coordinates": ["..."] + }, + "properties": { + "original_id": "b7410920-8cb0-490b-ab23-b489fd35aed0", + "category_id": "CTG8", + "is_open_area": true, + "navigable_by": [ + "pedestrian" + ], + "route_through_behavior": "allowed", + "level_id": "LVL14", + "occupants": [], + "address_id": "DIR1", + "name": "157" + }, + "id": "UNIT26", + "featureType": "" + }, {"..."} + ] +} +``` ++> [!div class="nextstepaction"] +> [How to create a feature stateset] ++[datasets]: /rest/api/maps/v2/dataset +[WFS API]: /rest/api/maps/v2/wfs + [Web Feature Service (WFS)]: /rest/api/maps/v2/wfs +[Tutorial: Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md +[Check dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status +[Access to Creator Services]: how-to-manage-creator.md#access-to-creator-services +[WFS Describe Collections API]: /rest/api/maps/v2/wfs/get-collection-definition +[Update a feature state]: how-to-creator-feature-stateset.md#update-a-feature-state +[How to create a feature stateset]: how-to-creator-feature-stateset.md |
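As a scripted alternative to Postman, a minimal PowerShell sketch of the same two GET requests could look like this; the key and dataset ID are placeholders:

```powershell
# Placeholders - substitute your own values.
$subscriptionKey = "<Your-Azure-Maps-Subscription-key>"
$datasetId       = "<datasetId>"
$base            = "https://us.atlas.microsoft.com/wfs/datasets/$datasetId"

# List every feature collection in the dataset.
$collections = Invoke-RestMethod -Method Get -Uri "$base/collections?subscription-key=$subscriptionKey&api-version=2.0"
$collections.collections | ForEach-Object { $_.name }

# Query the unit collection and print each feature id (for example, "UNIT26").
$units = Invoke-RestMethod -Method Get -Uri "$base/collections/unit/items?subscription-key=$subscriptionKey&api-version=2.0"
$units.features | ForEach-Object { $_.id }
```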
azure-maps | How To Dataset Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md | -Azure Maps Creator enables users to import their indoor map data in GeoJSON format with [Facility Ontology 2.0][Facility Ontology], which can then be used to create a [dataset][dataset-concept]. +Azure Maps Creator enables users to import their indoor map data in GeoJSON format with [Facility Ontology 2.0][Facility Ontology], which can then be used to create a [dataset]. > [!NOTE] > This article explains how to create a dataset from a GeoJSON package. For information on additional steps required to complete an indoor map, see [Next steps](#next-steps). ## Prerequisites -- Basic understanding of [Creator for indoor maps](creator-indoor-maps.md).+- Basic understanding of [Creator for indoor maps]. - Basic understanding of [Facility Ontology 2.0][Facility Ontology].-- [Azure Maps account][Azure Maps account].+- [Azure Maps account]. - [Azure Maps Creator resource][Creator resource].-- [Subscription key][Subscription key].+- [Subscription key]. - Zip package containing all required GeoJSON files. If you don't have GeoJSON- files, you can download the [Contoso building sample][Contoso building sample]. + files, you can download the [Contoso building sample]. >[!IMPORTANT] > For more information on the GeoJSON package, see the [Geojson zip package requir ### Upload the GeoJSON package -Use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload the drawing package to Azure Maps Creator account. +Use the [Data Upload API] to upload the GeoJSON package to your Azure Maps Creator account. -The Data Upload API is a long running transaction that implements the pattern defined in [Creator Long-Running Operation API V2](creator-long-running-operation-v2.md). +The Data Upload API is a long-running transaction that implements the pattern defined in [Creator Long-Running Operation API V2]. To upload the GeoJSON package: -1. Execute the following HTTP POST request that uses the [Data Upload API](/rest/api/maps/data-v2/upload): +1. Execute the following HTTP POST request that uses the [Data Upload API]: ```http https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=zip&subscription-key={Your-Azure-Maps-Subscription-key} https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversio ## Geojson zip package requirements -The GeoJSON zip package consists of one or more [RFC 7946][RFC 7946] compliant GeoJSON files, one for each feature class, all in the root directory (subdirectories aren't supported), compressed with standard Zip compression and named using the `.ZIP` extension. +The GeoJSON zip package consists of one or more [RFC 7946] compliant GeoJSON files, one for each feature class, all in the root directory (subdirectories aren't supported), compressed with standard Zip compression and named using the `.ZIP` extension. Each feature class file must match its definition in the [Facility ontology 2.0][Facility ontology] and each feature must have a globally unique identifier. Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) and underscore (_) characters. > [!TIP]-> If you want to be certain you have a globally unique identifier (GUID), consider creating it by running a GUID generating tool such as the Guidgen.exe command line program (Available with [Visual Studio][Visual Studio]). 
Guidgen.exe never produces the same number twice, no matter how many times it is run or how many different machines it runs on. +> If you want to be certain you have a globally unique identifier (GUID), consider creating it by running a GUID generating tool such as the Guidgen.exe command line program (Available with [Visual Studio]). Guidgen.exe never produces the same number twice, no matter how many times it is run or how many different machines it runs on. ### Facility ontology 2.0 validations in the Dataset -[Facility ontology][Facility ontology] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. When importing a GeoJSON package, anytime a feature is added or modified, a series of validations run. This includes referential integrity checks as well as geometry and attribute validations. These validations are described in more detail below. +[Facility ontology] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. When importing a GeoJSON package, anytime a feature is added or modified, a series of validations run. This includes referential integrity checks and geometry and attribute validations. These validations are described in more detail below. - The maximum number of features that can be imported into a dataset at a time is 150,000. - The facility area can be between 4 and 4,000 Sq Km.-- The top level element is [facility][facility], which defines each building in the file *facility.geojson*.+- The top level element is [facility], which defines each building in the file *facility.geojson*. - Each facility has one or more levels, which are defined in the file *levels.geojson*. - Each level must be inside the facility.-- Each [level][level] contain [units][unit], [structures][structure], [verticalPenetrations][verticalPenetration] and [openings][opening]. All of the items defined in the level must be fully contained within the Level geometry.- - `unit` can consist of an array of items such as hallways, offices and courtyards, which are defined by [area][areaElement], [line][lineElement] or [point][pointElement] elements. Units are defined in the file *unit.goejson*. +- Each [level] contains [units], [structures], [verticalPenetrations] and [openings]. All of the items defined in the level must be fully contained within the Level geometry. + - `unit` can consist of an array of items such as hallways, offices and courtyards, which are defined by [area], [line] or [point] elements. Units are defined in the file *unit.geojson*. - All `unit` elements must be fully contained within their level and intersect with their children. - `structure` defines physical, non-overlapping areas that can't be navigated through, such as a wall. Structures are defined in the file *structure.geojson*. - `verticalPenetration` represents a method of navigating vertically between levels, such as stairs and elevators, and is defined in the file *verticalPenetration.geojson*. Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) 
## Next steps > [!div class="nextstepaction"]-> [Create a tileset](tutorial-creator-indoor-maps.md#create-a-tileset) --> [!div class="nextstepaction"] -> [Query datasets with WFS API](tutorial-creator-wfs.md) --> [!div class="nextstepaction"] -> [Create a feature stateset](tutorial-creator-feature-stateset.md) +> [Create a tileset] [Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples-[unit]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit -[structure]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure +[units]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit +[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure [level]: creator-facility-ontology.md?pivots=facility-ontology-v2#level [facility]: creator-facility-ontology.md?pivots=facility-ontology-v2#facility-[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration -[opening]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening -[areaElement]: creator-facility-ontology.md?pivots=facility-ontology-v2#areaelement -[lineElement]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement -[pointElement]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement +[verticalPenetrations]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration +[openings]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening +[area]: creator-facility-ontology.md?pivots=facility-ontology-v2#areaelement +[line]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement +[point]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement [conversion]: tutorial-creator-indoor-maps.md#convert-a-drawing-package [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) [Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account [Facility Ontology]: creator-facility-ontology.md?pivots=facility-ontology-v2 [RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html-[dataset-concept]: creator-indoor-maps.md#datasets +[dataset]: creator-indoor-maps.md#datasets [Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create [Dataset Create]: /rest/api/maps/v2/dataset/create [Visual Studio]: https://visualstudio.microsoft.com/downloads/+[Data Upload API]: /rest/api/maps/data-v2/upload +[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md +[Creator for indoor maps]: creator-indoor-maps.md +[Create a tileset]: tutorial-creator-indoor-maps.md#create-a-tileset |
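To sketch the upload step outside of a raw HTTP client, a non-authoritative PowerShell example might look like the following; the package file name is a placeholder, and the assumption that the operation status URL comes back in an `Operation-Location` header follows the long-running operation pattern referenced above:

```powershell
$subscriptionKey = "<Your-Azure-Maps-Subscription-key>"
$uri = "https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=zip&subscription-key=$subscriptionKey"

# POST the zipped GeoJSON package as the request body. Because the Data Upload API
# is a long-running operation, poll the URL returned in the Operation-Location header.
$response = Invoke-WebRequest -Method Post -Uri $uri -InFile ".\ContosoPackage.zip" -ContentType "application/octet-stream"
$response.Headers["Operation-Location"]
```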
azure-maps | Indoor Map Dynamic Styling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md | Title: Implement dynamic styling for Azure Maps Creator indoor maps description: Learn how to implement dynamic styling for Creator indoor maps -- Previously updated : 10/28/2021++ Last updated : 03/03/2023 -You can use Azure Maps Creator [Feature State service](/rest/api/maps/v2/feature-state) to apply styles that are based on the dynamic properties of indoor map data features. For example, you can render facility meeting rooms with a specific color to reflect occupancy status. This article describes how to dynamically render indoor map features with the [Feature State service](/rest/api/maps/v2/feature-state) and the [Indoor Web module](how-to-use-indoor-module.md). +You can use the Azure Maps Creator [Feature State service] to apply styles that are based on the dynamic properties of indoor map data features. For example, you can render facility meeting rooms with a specific color to reflect occupancy status. This article describes how to dynamically render indoor map features with the [Feature State service] and the [Indoor Web module]. ## Prerequisites -1. [Create an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) -2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. -3. [Create a Creator resource](how-to-manage-creator.md) -4. Download the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). -5. [Create an indoor map](tutorial-creator-indoor-maps.md) to obtain a `tilesetId` and `statesetId`. -6. Build a web application by following the steps in [How to use the Indoor Map module](how-to-use-indoor-module.md). +- A `statesetId`. For more information, see [How to create a feature stateset]. +- A web application. For more information, see [How to use the Indoor Map module]. -This tutorial uses the [Postman](https://www.postman.com/) application, but you may choose a different API development environment. +This article uses the [Postman] application, but you may choose a different API development environment. ## Implement dynamic styling -After you complete the prerequisites, you should have a simple web application configured with your subscription key, `tilesetId`, and `statesetId`. +After you complete the prerequisites, you should have a simple web application configured with your subscription key and `statesetId`. ### Select features -To implement dynamic styling, a feature - such as a meeting or conference room - must be referenced by its feature `id`. You use the feature `id` to update the dynamic property or *state* of that feature. To view the features defined in a dataset, you can use one of the following methods: +You reference a feature, such as a meeting or conference room, by its ID to implement dynamic styling. Use the feature ID to update the dynamic property or *state* of that feature. To view the features defined in a dataset, use one of the following methods: -* WFS API (Web Feature service). You can use the [WFS API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](https://docs.opengeospatial.org/DRAFTS/17-069r4.html). The WFS API is helpful for querying features within a dataset. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level. 
+- WFS API (Web Feature service). Use the [WFS API] to query datasets. WFS follows the [Open Geospatial Consortium API Features]. The WFS API is helpful for querying features within a dataset. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level. -* Implement customized code that a user can use to select features on a map using your web application. We use this option in this article. +- Implement customized code that a user can use to select features on a map using your web application, as demonstrated in this article. -The following script implements the mouse-click event. The code retrieves the feature `id` based on the clicked point. In your application, you can insert the code after your Indoor Manager code block. Run your application, and then check the console to obtain the feature `id` of the clicked point. +The following script implements the mouse-click event. The code retrieves the feature ID based on the clicked point. In your application, you can insert the code after your Indoor Manager code block. Run your application, and then check the console to obtain the feature ID of the clicked point. ```javascript /* Upon a mouse click, log the feature properties to the browser's console. */ map.events.add("click", function(e){ }); ``` -The [Create an indoor map](tutorial-creator-indoor-maps.md) tutorial configured the feature stateset to accept state updates for `occupancy`. +The [Create an indoor map] tutorial configured the feature stateset to accept state updates for `occupancy`. -In the next section, we'll set the occupancy *state* of office `UNIT26` to `true` and office `UNIT27` to `false`. +In the next section, you'll set the occupancy *state* of office `UNIT26` to `true` and office `UNIT27` to `false`. ### Set occupancy status - We'll now update the state of the two offices, `UNIT26` and `UNIT27`: +Update the state of the two offices, `UNIT26` and `UNIT27`: 1. In the Postman app, select **New**. In the next section, we'll set the occupancy *state* of office `UNIT26` to `true 3. Enter a **Request name** for the request, such as *POST Data Upload*. -4. Enter the following URL to the [Feature Update States API](/rest/api/maps/v2/feature-state/update-states) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `statesetId` with the `statesetId`): +4. Enter the following URL to the [Feature Update States API] (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `{statesetId}` with your stateset ID): ```http https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} In the next section, we'll set the occupancy *state* of office `UNIT26` to `true ``` >[!IMPORTANT]- >The update will be saved only if the posted time stamp is after the time stamp used in previous feature state update requests for the same feature `ID`. + >The update will be saved only if the posted time stamp is after the time stamp used in previous feature state update requests for the same feature ID. 10. Change the URL you used in step 7 by replacing `UNIT26` with `UNIT27`: In the next section, we'll set the occupancy *state* of office `UNIT26` to `true The web application that you previously opened in a browser should now reflect the updated state of the map features: -* Office `UNIT27`(142) should appear green. 
+- Office `UNIT26`(143) should appear red.  -[See live demo](https://samples.azuremaps.com/?sample=creator-indoor-maps) +[See live demo] ## Next steps Learn more by reading: > [!div class="nextstepaction"]-> [Creator for indoor mapping](creator-indoor-maps.md) --See the references for the APIs mentioned in this article: --> [!div class="nextstepaction"] -> [Data Upload](creator-indoor-maps.md#upload-a-drawing-package) --> [!div class="nextstepaction"] -> [Data Conversion](creator-indoor-maps.md#convert-a-drawing-package) --> [!div class="nextstepaction"] -> [Dataset](creator-indoor-maps.md#datasets) --> [!div class="nextstepaction"] -> [Tileset](creator-indoor-maps.md#tilesets) --> [!div class="nextstepaction"] -> [Feature State set](creator-indoor-maps.md#feature-statesets) --> [!div class="nextstepaction"] -> [WFS service](creator-indoor-maps.md#web-feature-service-api) +> [Creator for indoor maps](creator-indoor-maps.md) ++[Feature State service]: /rest/api/maps/v2/feature-state +[Indoor Web module]: how-to-use-indoor-module.md +<!--[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account +[A Creator resource]: how-to-manage-creator.md +[Sample Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples--> +[How to use the Indoor Map module]: how-to-use-indoor-module.md +[Postman]: https://www.postman.com/ +[How to create a feature stateset]: how-to-creator-feature-stateset.md +[See live demo]: https://samples.azuremaps.com/?sample=creator-indoor-maps +[Feature Update States API]: /rest/api/maps/v2/feature-state/update-states +[Create an indoor map]: tutorial-creator-indoor-maps.md +[Open Geospatial Consortium API Features]: https://docs.opengeospatial.org/DRAFTS/17-069r4.html +[WFS API]: /rest/api/maps/v2/wfs |
azure-maps | Tutorial Creator Feature Stateset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-feature-stateset.md | - Title: 'Tutorial: Create a feature stateset'- -description: The third tutorial on Microsoft Azure Maps Creator. How to create a feature stateset. -- Previously updated : 01/28/2022------# Tutorial: Create a feature stateset --[Feature statesets](/rest/api/maps/v2/feature-state) define dynamic properties and values on specific features that support them. In this Tutorial, you'll: --> [!div class="checklist"] -> -> * Create a stateset that defines boolean values and corresponding styles for the **occupancy** property. -> * Change the `occupancy` property state of the desired unit. --## Prerequisites --* Successful completion of [Tutorial: Query datasets with WFS API](tutorial-creator-wfs.md). -* The `datasetId` obtained in the [Check the dataset creation status](tutorial-creator-indoor-maps.md#check-the-dataset-creation-status) section of the *Use Creator to create indoor maps* tutorial. --This tutorial uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment. -->[!IMPORTANT] -> -> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). -> * In the URL examples in this article you will need to replace: -> * `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. -> * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status](tutorial-creator-indoor-maps.md#check-the-dataset-creation-status) section of the *Use Creator to create indoor maps* tutorial --## Create a feature stateset --To create a stateset: --1. In the Postman app, create a new **HTTP Request** and save it as *POST Create Stateset*. --2. Select the **POST** HTTP method. --3. Enter the following URL to the [Stateset API](/rest/api/maps/v2/feature-state/create-stateset). The request should look like the following URL: -- ```http - https://us.atlas.microsoft.com/featurestatesets?api-version=2.0&datasetId={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key} - ``` --4. Select the **Headers** tab. --5. In the **KEY** field, select `Content-Type`. --6. In the **VALUE** field, select `application/json`. -- :::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png"alt-text="A screenshot of Postman showing the Header tab of the POST request that shows the Content Type Key with a value of application forward slash json."::: --7. Select the **Body** tab. --8. Select **raw** and **JSON**. --9. Copy the following JSON styles, and then paste them in the **Body** window: -- ```json - { - "styles":[ - { - "keyname":"occupied", - "type":"boolean", - "rules":[ - { - "true":"#FF0000", - "false":"#00FF00" - } - ] - } - ] - } - ``` --10. Select **Send**. --11. After the response returns successfully, copy the `statesetId` from the response body. In the next section, you'll use the `statesetId` to change the `occupancy` property state of the unit with feature `id` "UNIT26". 
-- :::image type="content" source="./media/tutorial-creator-indoor-maps/response-stateset-id.png"alt-text="A screenshot of Postman showing the resource Stateset ID value in the responses body."::: --## Update a feature state --To update the `occupied` state of the unit with feature `id` "UNIT26": --1. In the Postman app, create a new **HTTP Request** and save it as *PUT Set Stateset*. --2. Select the **PUT** HTTP method. --3. Enter the following URL to the [Feature Statesets API](/rest/api/maps/v2/feature-state/create-stateset). The request should look like the following URL (replace `{statesetId`} with the `statesetId` obtained in [Create a feature stateset](#create-a-feature-stateset)): -- ```http - https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} - ``` --4. Select the **Headers** tab. --5. In the **KEY** field, select `Content-Type`. --6. In the **VALUE** field, select `application/json`. -- :::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png"alt-text="Header tab information for stateset creation."::: --7. Select the **Body** tab. --8. Select **raw** and **JSON**. --9. Copy the following JSON style, and then paste it in the **Body** window: -- ```json - { - "states": [ - { - "keyName": "occupied", - "value": true, - "eventTimestamp": "2020-11-14T17:10:20" - } - ] - } - ``` -- >[!NOTE] - > The update will be saved only if the time posted stamp is after the time stamp of the previous request. --10. Select **Send**. --11. After the update completes, you'll receive a `200 OK` HTTP status code. If you implemented [dynamic styling](indoor-map-dynamic-styling.md) for an indoor map, the update displays at the specified time stamp in your rendered map. --## Additional information --* For information on how to retrieve the state of a feature using its feature id, see [Feature State - List States](/rest/api/maps/v2/feature-state/list-states). -* For information on how to delete the stateset and its resources, see [Feature State - Delete Stateset](/rest/api/maps/v2/feature-state/delete-stateset) . -* For information on using the Azure Maps Creator [Feature State service](/rest/api/maps/v2/feature-state) to apply styles that are based on the dynamic properties of indoor map data features, see how to article [Implement dynamic styling for Creator indoor maps](indoor-map-dynamic-styling.md). --* For more information on the different Azure Maps Creator services discussed in this tutorial, see [Creator Indoor Maps](creator-indoor-maps.md). |
azure-maps | Tutorial Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md | Title: 'Tutorial: Use Microsoft Azure Maps Creator to create indoor maps' description: Tutorial on how to use Microsoft Azure Maps Creator to create indoor maps--++ Last updated 01/28/2022 This tutorial describes how to create indoor maps for use in Microsoft Azure Map > * Create a tileset from the data in your dataset. > * Get the default map configuration ID from your tileset. -In the next tutorials in the Creator series you'll learn to: --> * Query the Azure Maps Web Feature Service (WFS) API to learn about your map features. -> * Create a feature stateset that can be used to set the states of features in your dataset. -> * Update the state of a given map feature. - > [!TIP] > You can also create a dataset from a GeoJSON package. For more information, see [Create a dataset using a GeoJson package (Preview)](how-to-dataset-geojson.md). Once your tileset creation completes, you can get the `mapConfigurationId` using For more information, see [Map configuration](creator-indoor-maps.md#map-configuration) in the indoor maps concepts article. -<!--For additional information, see [Create custom styles for indoor maps](how-to-create-custom-styles.md).--> --## Additional information --* For additional information see the how to [Use the Azure Maps Indoor Maps module](how-to-use-indoor-module.md) article. -* See [Azure IoT Maps Creator Functional API](/rest/api/maps-creator/) for additional information on the Creator REST API. - ## Next steps -To learn how to query Azure Maps Creator [datasets](/rest/api/maps/v2/dataset) using [WFS API](/rest/api/maps/v2/wfs) in the next Creator tutorial. --> [!div class="nextstepaction"] -> [Tutorial: Query datasets with WFS API](tutorial-creator-wfs.md) - > [!div class="nextstepaction"]-> [Create custom styles for indoor maps](how-to-create-custom-styles.md) +> [Use the Azure Maps Indoor Maps module with custom styles](how-to-use-indoor-module.md) |
azure-maps | Tutorial Creator Wfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-wfs.md | - Title: 'Tutorial: Query datasets with WFS API'- -description: The second tutorial on Microsoft Azure Maps Creator. How to Query datasets with WFS API -- Previously updated : 01/28/2022------# Tutorial: Query datasets with WFS API --This tutorial describes how to query Azure Maps Creator [datasets](/rest/api/maps/v2/dataset) using [WFS API](/rest/api/maps/v2/wfs). You can use the WFS API to query features within a dataset. For example, you can use WFS to find all mid-size meeting rooms in a specific building and floor level. --In this tutorial, you'll learn how to: --> [!div class="checklist"] -> -> * Query the Azure Maps Web Feature Service (WFS) API to query for all feature collections. -> * Query the Azure Maps Web Feature Service (WFS) API to query for a specific collection. --First you'll query all collections, and then you'll query for the `unit` collection. --## Prerequisites --* Successful completion of [Tutorial: Use Creator to create indoor maps](tutorial-creator-indoor-maps.md). -* The `datasetId` obtained in [Check dataset creation status](tutorial-creator-indoor-maps.md#check-the-dataset-creation-status) section of the previous tutorial. --This tutorial uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment. -->[!IMPORTANT] -> -> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). -> * In the URL examples in this article you will need to replace: -> * `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. -> * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status](tutorial-creator-indoor-maps.md#check-the-dataset-creation-status) section of the *Use Creator to create indoor maps* tutorial --## Query for feature collections --To query all collections in your dataset: --1. In the Postman app, create a new **HTTP Request** and save it as *GET Dataset Collections*. --2. Select the **GET** HTTP method. --3. Enter the following URL to [WFS API](/rest/api/maps/v2/wfs). The request should look like the following URL: -- ```http - https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0 - ``` --4. Select **Send**. --5. The response body is returned in GeoJSON format and contains all collections in the dataset. For simplicity, the example here only shows the `unit` collection. To see an example that contains all collections, see [WFS Describe Collections API](/rest/api/maps/v2/wfs/get-collection-definition). To learn more about any collection, you can select any of the URLs inside the `links` element. -- ```json - { - "collections": [ - { - "name": "unit", - "description": "A physical and non-overlapping area which might be occupied and traversed by a navigating agent. Can be a hallway, a room, a courtyard, etc. It is surrounded by physical obstruction (wall), unless the is_open_area attribute is equal to true, and one must add openings where the obstruction shouldn't be there. If is_open_area attribute is equal to true, all the sides are assumed open to the surroundings and walls are to be added where needed. 
Walls for open areas are represented as a line_element or area_element with is_obstruction equal to true.", - "links": [ - { - "href": "https://atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/definition?api-version=1.0", - "rel": "describedBy", - "title": "Metadata catalogue for unit" - }, - { - "href": "https://atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/items?api-version=1.0", - "rel": "data", - "title": "unit" - } - { - "href": "https://atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit?api-version=1.0", - "rel": "self", - "title": "Metadata catalogue for unit" - } - ] - }, - ``` --## Query for unit feature collection --In this section, you'll query [WFS API](/rest/api/maps/v2/wfs) for the `unit` feature collection. --To query the unit collection in your dataset: --1. In the Postman app, create a new **HTTP Request** and save it as *GET Unit Collection*. --2. Select the **GET** HTTP method. --3. Enter the following URL: -- ```http - https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/items?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0 - ``` --4. Select **Send**. --5. After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". You'll use "UNIT26" as your feature `id` when you [Update a feature state](tutorial-creator-feature-stateset.md#update-a-feature-state) in the next tutorial. -- ```json - { - "type": "FeatureCollection", - "features": [ - { - "type": "Feature", - "geometry": { - "type": "Polygon", - "coordinates": ["..."] - }, - "properties": { - "original_id": "b7410920-8cb0-490b-ab23-b489fd35aed0", - "category_id": "CTG8", - "is_open_area": true, - "navigable_by": [ - "pedestrian" - ], - "route_through_behavior": "allowed", - "level_id": "LVL14", - "occupants": [], - "address_id": "DIR1", - "name": "157" - }, - "id": "UNIT26", - "featureType": "" - }, {"..."} - ] - } - ``` --## Additional information --See [WFS](/rest/api/maps/v2/wfs) for information on the Creator Web Feature Service REST API. --## Next steps --To learn how to use feature statesets to define dynamic properties and values on specific features in the final Creator tutorial. --> [!div class="nextstepaction"] -> [Tutorial: Create a feature stateset](tutorial-creator-feature-stateset.md) |
azure-monitor | Data Collection Rule Azure Monitor Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md | You can define a data collection rule to send data from multiple machines to mul | Action | Command | |:|:|-| Get rules | [Get-AzDataCollectionRule](/powershell/module/az.monitor/get-azdatacollectionrule?view=azps-5.4.0&preserve-view=true) | -| Create a rule | [New-AzDataCollectionRule](/powershell/module/az.monitor/new-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) | -| Update a rule | [Set-AzDataCollectionRule](/powershell/module/az.monitor/set-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) | -| Delete a rule | [Remove-AzDataCollectionRule](/powershell/module/az.monitor/remove-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) | -| Update "Tags" for a rule | [Update-AzDataCollectionRule](/powershell/module/az.monitor/update-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) | +| Get rules | [Get-AzDataCollectionRule](/powershell/module/az.monitor/get-azdatacollectionrule) | +| Create a rule | [New-AzDataCollectionRule](/powershell/module/az.monitor/new-azdatacollectionrule) | +| Update a rule | [Set-AzDataCollectionRule](/powershell/module/az.monitor/set-azdatacollectionrule) | +| Delete a rule | [Remove-AzDataCollectionRule](/powershell/module/az.monitor/remove-azdatacollectionrule) | +| Update "Tags" for a rule | [Update-AzDataCollectionRule](/powershell/module/az.monitor/update-azdatacollectionrule) | **Data collection rule associations** | Action | Command | |:|:|-| Get associations | [Get-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/get-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) | -| Create an association | [New-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/new-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) | -| Delete an association | [Remove-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/remove-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) | +| Get associations | [Get-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/get-azdatacollectionruleassociation) | +| Create an association | [New-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/new-azdatacollectionruleassociation) | +| Delete an association | [Remove-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/remove-azdatacollectionruleassociation) | ### [Azure CLI](#tab/cli) |
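For instance, a hedged sketch (all resource names are placeholders) that fetches a rule and associates it with a virtual machine; the parameter names reflect one common version of the Az.Monitor module:

```powershell
# Placeholder names - substitute your own resource group, rule, and VM.
$dcr = Get-AzDataCollectionRule -ResourceGroupName "my-resource-group" -RuleName "my-dcr"

# Associate the rule with a VM so the agent on it starts collecting per the rule.
New-AzDataCollectionRuleAssociation -AssociationName "my-vm-dcr-association" `
    -RuleId $dcr.Id `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
```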
azure-monitor | Javascript Framework Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md | If a custom `PageView` duration isn't provided, `PageView` duration defaults to ### Sample app -Check out the [Application Insights React demo](https://github.com/Azure-Samples/application-insights-react-demo). +Check out the [Application Insights React demo](https://github.com/microsoft/applicationinsights-react-js/tree/main/sample/applicationinsights-react-sample). ## [React Native](#tab/reactnative) If a custom `PageView` duration isn't provided, `PageView` duration defaults to ## Next steps - To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md).-- To learn about the Kusto Query Language and querying data in Log Analytics, see the [Log query overview](../../azure-monitor/logs/log-query-overview.md).+- To learn about the Kusto Query Language and querying data in Log Analytics, see the [Log query overview](../../azure-monitor/logs/log-query-overview.md). |
azure-monitor | Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md | After a few moments, the new setting appears in your list of settings for this r # [PowerShell](#tab/powershell) -Use the [New-AzDiagnosticSetting](/powershell/module/az.monitor/new-azdiagnosticsetting?view=azps-9.1.0&preserve-view=true) cmdlet to create a diagnostic setting with [Azure PowerShell](../powershell-samples.md). See the documentation for this cmdlet for descriptions of its parameters. +Use the [New-AzDiagnosticSetting](/powershell/module/az.monitor/new-azdiagnosticsetting) cmdlet to create a diagnostic setting with [Azure PowerShell](../powershell-samples.md). See the documentation for this cmdlet for descriptions of its parameters. > [!IMPORTANT] > You can't use this method for an activity log. Instead, use [Create diagnostic setting in Azure Monitor by using an Azure Resource Manager template](./resource-manager-diagnostic-settings.md) to create a Resource Manager template and deploy it with PowerShell. |
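For example, a hedged sketch of the cmdlet in use (resource IDs are placeholders, and the settings-object helper cmdlets assume a recent Az.Monitor version):

```powershell
# Placeholder resource IDs - substitute your own.
$resourceId  = "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.KeyVault/vaults/my-vault"
$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace"

# Route all logs and metrics for the resource to a Log Analytics workspace.
$log    = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup "allLogs"
$metric = New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category "AllMetrics"

New-AzDiagnosticSetting -Name "send-to-workspace" -ResourceId $resourceId `
    -WorkspaceId $workspaceId -Log $log -Metric $metric
```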
azure-monitor | Computer Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/computer-groups.md | You can create a computer group in Azure Monitor using any of the methods in the | Log query |Create a log query that returns a list of computers. | | Log Search API |Use the Log Search API to programmatically create a computer group based on the results of a log query. | | Active Directory |Automatically scan the group membership of any agent computers that are members of an Active Directory domain and create a group in Azure Monitor for each security group. (Windows machines only)|-| Configuration Manager | Import collections from Microsoft Endpoint Configuration Manager and create a group in Azure Monitor for each. | +| Configuration Manager | Import collections from Microsoft Configuration Manager and create a group in Azure Monitor for each. | | Windows Server Update Services |Automatically scan WSUS servers or clients for targeting groups and create a group in Azure Monitor for each. | ### Log query |
azure-monitor | Data Retention Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md | az monitor log-analytics workspace table update --subscription ContosoSID --reso # [PowerShell](#tab/PowerShell-1) -Use the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable?view=azps-9.1.0) cmdlet to set the retention and archive duration for a table. This example sets the table's interactive retention to 30 days, and the total retention to two years, which means that the archive duration is 23 months: +Use the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable) cmdlet to set the retention and archive duration for a table. This example sets the table's interactive retention to 30 days, and the total retention to two years, which means that the archive duration is 23 months: ```powershell Update-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -TableName AzureMetrics -RetentionInDays 30 -TotalRetentionInDays 730 ``` -To reapply the workspace's default interactive retention value to the table and reset its total retention to 0, run the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable?view=azps-9.1.0) cmdlet with the `-RetentionInDays` and `-TotalRetentionInDays` parameters set to `-1`. +To reapply the workspace's default interactive retention value to the table and reset its total retention to 0, run the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable) cmdlet with the `-RetentionInDays` and `-TotalRetentionInDays` parameters set to `-1`. For example: az monitor log-analytics workspace table show --subscription ContosoSID --resour # [PowerShell](#tab/PowerShell-2) -To get the retention policy of a particular table, run the [Get-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/get-azoperationalinsightstable?view=azps-9.1.0) cmdlet. +To get the retention policy of a particular table, run the [Get-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/get-azoperationalinsightstable) cmdlet. For example: |
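A minimal sketch of that call, reusing the placeholder names from the earlier examples:

```powershell
# Show the retention settings for the AzureMetrics table.
Get-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -TableName AzureMetrics |
    Select-Object Name, RetentionInDays, TotalRetentionInDays
```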
azure-resource-manager | Resource Name Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md | In the following tables, the term alphanumeric refers to: > [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |-> | spring | resource group | 4-32 | Lowercase letters, numbers, and hyphens. | +> | spring | global | 4-32 | Lowercase letters, numbers, and hyphens. | ## Microsoft.Authorization |
azure-resource-manager | Tls Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tls-support.md | For more detailed guidance, see the [checklist to deprecate older TLS versions ## Next steps * [Solving the TLS 1.0 Problem, 2nd Edition](/security/engineering/solving-tls1-problem) – deep dive into migrating to TLS 1.2.-* [How to enable TLS 1.2 on clients](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client) – for Microsoft Endpoint Configuration Manager. +* [How to enable TLS 1.2 on clients](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client) – for Microsoft Configuration Manager. * [Configure Transport Layer Security (TLS) for a client application](../../storage/common/transport-layer-security-configure-client-version.md) – contains instructions to update TLS version in PowerShell. * [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment) – contains information on updating TLS version for WinHTTP. * [Transport Layer Security (TLS) best practices with the .NET Framework](/dotnet/framework/network-programming/tls) – best practices when configuring security protocols for applications targeting .NET Framework. |
backup | Backup Blobs Storage Account Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-ps.md | See the [prerequisites](./blob-backup-configure-manage.md#before-you-start) and A Backup vault is a storage entity in Azure that holds backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure blobs. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data. -Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. In this article, we will create a backup vault _TestBkpVault_ in region _westus_, under the resource group _testBkpVaultRG_. Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault?view=azps-5.9.0&preserve-view=true) command to create a backup vault.Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. In this article, we will create a backup vault _TestBkpVault_ in region _westus_, under the resource group _testBkpVaultRG_. Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault) command to create a backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). ```azurepowershell-interactive $storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant/GeoRedundant -DataStoreType VaultStore After creation of vault, let's create a backup policy to protect Azure blobs. > [!IMPORTANT] > Read [this section](blob-backup-configure-manage.md#before-you-start) before proceeding to create the policy and configuring backups for Azure blobs. -To understand the inner components of a backup policy for Azure blob backup, retrieve the policy template using the [Get-AzDataProtectionPolicyTemplate](/powershell/module/az.dataprotection/get-azdataprotectionpolicytemplate?view=azps-5.9.0&preserve-view=true) command. This command returns a default policy template for a given datasource type. Use this policy template to create a new policy. +To understand the inner components of a backup policy for Azure blob backup, retrieve the policy template using the [Get-AzDataProtectionPolicyTemplate](/powershell/module/az.dataprotection/get-azdataprotectionpolicytemplate) command. This command returns a default policy template for a given datasource type. Use this policy template to create a new policy. ```azurepowershell-interactive $policyDefn = Get-AzDataProtectionPolicyTemplate -DatasourceType AzureBlob TargetDataStoreCopySetting : > [!NOTE] > Restoring over long durations may lead to restore operations taking longer to complete. Also, the time that it takes to restore a set of data is based on the number of write and delete operations made during the restore period. 
For example, an account with one million objects with 3,000 objects added per day and 1,000 objects deleted per day will require approximately two hours to restore to a point 30 days in the past.<br><br>We don't recommend a retention period or a restore point more than 90 days in the past for an account with this rate of change. -Once the policy object has all the desired values, proceed to create a new policy from the policy object using the [New-AzDataProtectionBackupPolicy](/powershell/module/az.dataprotection/new-azdataprotectionbackuppolicy?view=azps-5.9.0&preserve-view=true) command. +Once the policy object has all the desired values, proceed to create a new policy from the policy object using the [New-AzDataProtectionBackupPolicy](/powershell/module/az.dataprotection/new-azdataprotectionbackuppolicy) command. ```azurepowershell-interactive New-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name blobBkpPolicy -Policy $policyDefn You need to assign a few permissions via RBAC to vault (represented by vault MSI ### Prepare the request -Once all the relevant permissions are set, the configuration of backup is performed in 2 steps. First, we prepare the relevant request by using the relevant vault, policy, storage account using the [Initialize-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/initialize-azdataprotectionbackupinstance?view=azps-5.9.0&preserve-view=true) command. Then, we submit the request to protect the blobs within the storage account using the [New-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/new-azdataprotectionbackupinstance?view=azps-5.9.0&preserve-view=true) command. +Once all the relevant permissions are set, the configuration of backup is performed in two steps. First, we prepare the relevant request with the relevant vault, policy, and storage account by using the [Initialize-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/initialize-azdataprotectionbackupinstance) command. Then, we submit the request to protect the blobs within the storage account using the [New-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/new-azdataprotectionbackupinstance) command. ```azurepowershell-interactive $instance = Initialize-AzDataProtectionBackupInstance -DatasourceType AzureBlob -DatasourceLocation $TestBkpvault.Location -PolicyId $blobBkpPol[0].Id -DatasourceId $SAId |
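The excerpt ends with the prepared request; a minimal sketch of the submit step the article describes, reusing `$instance` from above, would be:

```azurepowershell-interactive
# Submit the prepared request to start protecting the blobs in the storage account
New-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -BackupInstance $instance
```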
backup | Backup Managed Disks Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-ps.md | For information on the Azure Disk backup region availability, supported scenario A Backup vault is a storage entity in Azure that holds backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data. -Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. In this article, we will create a backup vault "TestBkpVault" in "westus" region under the resource group "testBkpVaultRG". Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault?view=azps-5.7.0&preserve-view=true) command to create a backup vault.Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. In this article, we will create a backup vault "TestBkpVault" in "westus" region under the resource group "testBkpVaultRG". Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault) command to create a backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). ```azurepowershell-interactive $storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant/GeoRedundant -DataStoreType VaultStore After creating the vault, let's create a backup policy to protect Azure disks. ## Create a Backup policy -To understand the inner components of a backup policy for Azure disk backup, retrieve the policy template using the command [Get-AzDataProtectionPolicyTemplate](/powershell/module/az.dataprotection/get-azdataprotectionpolicytemplate?view=azps-5.7.0&preserve-view=true). This command returns a default policy template for a given datasource type. Use this policy template to create a new policy. +To understand the inner components of a backup policy for Azure disk backup, retrieve the policy template using the [Get-AzDataProtectionPolicyTemplate](/powershell/module/az.dataprotection/get-azdataprotectionpolicytemplate) command. This command returns a default policy template for a given datasource type. Use this policy template to create a new policy. ```azurepowershell-interactive $policyDefn = Get-AzDataProtectionPolicyTemplate -DatasourceType AzureDisk Azure Disk Backup offers multiple backups per day. If you require more frequent To know more details about policy creation, refer to the [Azure Disk Backup policy](backup-managed-disks.md#create-backup-policy) document. -If you want to edit the hourly frequency or the retention period, use the [Edit-AzDataProtectionPolicyTriggerClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicytriggerclientobject?view=azps-5.7.0&preserve-view=true) and/or [Edit-AzDataProtectionPolicyRetentionRuleClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicyretentionruleclientobject?view=azps-5.7.0&preserve-view=true) commands.
Once the policy object has all the desired values, proceed to create a new policy from the policy object using the [New-AzDataProtectionBackupPolicy](/powershell/module/az.dataprotection/new-azdataprotectionbackuppolicy?view=azps-5.7.0&preserve-view=true). +If you want to edit the hourly frequency or the retention period, use the [Edit-AzDataProtectionPolicyTriggerClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicytriggerclientobject) and/or [Edit-AzDataProtectionPolicyRetentionRuleClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicyretentionruleclientobject) commands. Once the policy object has all the desired values, proceed to create a new policy from the policy object using the [New-AzDataProtectionBackupPolicy](/powershell/module/az.dataprotection/new-azdataprotectionbackuppolicy) command. ```azurepowershell-interactive New-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name diskBkpPolicy -Policy $policyDefn $snapshotrg = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/snapshotrg" #### Backup vault -The Backup vaults require permissions on disk and the snapshot resource group to be able to trigger snapshots and manage their lifecycle. The system-assigned managed identity of the vault is used for assigning such permissions. Use the [Update-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/update-azrecoveryservicesvault?view=azps-5.7.0&preserve-view=true) command to enable system-assigned managed identity for the recovery services vault. +The Backup vault requires permissions on the disk and the snapshot resource group to be able to trigger snapshots and manage their lifecycle. The system-assigned managed identity of the vault is used for assigning such permissions. Use the [Update-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/update-azrecoveryservicesvault) command to enable system-assigned managed identity for the recovery services vault. ### Assign permissions To configure backup of managed disks, ensure the following prerequisites: ### Prepare the request -Once all the relevant permissions are set, the configuration of backup is performed in 2 steps. First, we prepare the relevant request by using the relevant vault, policy, disk and snapshot resource group using the [Initialize-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/initialize-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command. Then, we submit the request to protect the disk using the [New-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/new-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command. +Once all the relevant permissions are set, the configuration of backup is performed in two steps. First, we prepare the relevant request with the relevant vault, policy, disk, and snapshot resource group by using the [Initialize-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/initialize-azdataprotectionbackupinstance) command. Then, we submit the request to protect the disk using the [New-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/new-azdataprotectionbackupinstance) command.
```azurepowershell-interactive $instance = Initialize-AzDataProtectionBackupInstance -DatasourceType AzureDisk -DatasourceLocation $TestBkpvault.Location -PolicyId $diskBkpPol[0].Id -DatasourceId $DiskId diskrg-PSTestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166 Microsoft.DataProtection/ ## Run an on-demand backup -Fetch the relevant backup instance on which the user desires to trigger a backup using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) +Fetch the relevant backup instance on which the user desires to trigger a backup using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance) command. ```azurepowershell-interactive $instance = Get-AzDataProtectionBackupInstance -SubscriptionId "xxxx-xxx-xxx" -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name "BackupInstanceName" Name : Default ObjectType : AzureRetentionRule ``` -Trigger an on-demand backup using the [Backup-AzDataProtectionBackupInstanceAdhoc](/powershell/module/az.dataprotection/backup-azdataprotectionbackupinstanceadhoc?view=azps-5.7.0&preserve-view=true) command. +Trigger an on-demand backup using the [Backup-AzDataProtectionBackupInstanceAdhoc](/powershell/module/az.dataprotection/backup-azdataprotectionbackupinstanceadhoc) command. ```azurepowershell-interactive $AllInstances = Get-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name Backup-AzDataProtectionBackupInstanceAdhoc -BackupInstanceName $AllInstances[0]. ## Tracking jobs -Track all the jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob?view=azps-5.7.0&preserve-view=true) command. You can list all jobs and fetch a particular job detail. +Track all the jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob) command. You can list all jobs and fetch a particular job detail. -You can also use Az.ResourceGraph to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.7.0&preserve-view=true) command to get the relevant job which can be across any backup vault. +You can also use Az.ResourceGraph to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph) command to get the relevant job, which can be in any backup vault. ```azurepowershell-interactive $job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName "testBkpVaultRG" -Vault $TestBkpVault.Name -DatasourceType AzureDisk -Operation OnDemandBackup |
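The permissions mentioned under *Assign permissions* can be granted with the built-in Azure Disk Backup roles; this is a minimal sketch that assumes the vault object exposes its managed identity as `Identity.PrincipalId` and reuses `$DiskId` and `$snapshotrg` from the article:

```azurepowershell-interactive
# Let the vault's managed identity read the source disk
New-AzRoleAssignment -ObjectId $TestBkpVault.Identity.PrincipalId -RoleDefinitionName "Disk Backup Reader" -Scope $DiskId

# Let it create and manage snapshots in the snapshot resource group
New-AzRoleAssignment -ObjectId $TestBkpVault.Identity.PrincipalId -RoleDefinitionName "Disk Snapshot Contributor" -Scope $snapshotrg
```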
backup | Backup Postgresql Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-postgresql-ps.md | Backup vault is a storage entity in Azure that stores the backup data for variou Before you create a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. -In this article, we'll create a Backup vault *TestBkpVault*, in *westus* region, under the resource group *testBkpVaultRG*. Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault?view=azps-5.7.0&preserve-view=true) command to create a Backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +In this article, we'll create a Backup vault *TestBkpVault*, in *westus* region, under the resource group *testBkpVaultRG*. Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault) command to create a Backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). ```azurepowershell-interactive $storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant/GeoRedundant -DataStoreType VaultStore The resultant PowerShell object is as follows: ### Retrieving the Policy template -To understand the inner components of a backup policy for Azure PostgreSQL database backup, retrieve the policy template using the [Get-AzDataProtectionPolicyTemplate](/powershell/module/az.dataprotection/get-azdataprotectionpolicytemplate?view=azps-5.7.0&preserve-view=true) command. This command returns a default policy template for a given datasource type. Use this policy template to create a new policy. +To understand the inner components of a backup policy for Azure PostgreSQL database backup, retrieve the policy template using the [Get-AzDataProtectionPolicyTemplate](/powershell/module/az.dataprotection/get-azdataprotectionpolicytemplate) command. This command returns a default policy template for a given datasource type. Use this policy template to create a new policy. ```azurepowershell-interactive $policyDefn = Get-AzDataProtectionPolicyTemplate -DatasourceType AzureDatabaseForPostgreSQL TargetDataStoreCopySetting : {} #### Modify the schedule -The default policy template offers a backup once per week. You can modify the schedule for the backup to happen multiple days per week. To change the schedule, use the [Edit-AzDataProtectionPolicyTriggerClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicytriggerclientobject?view=azps-6.5.0&preserve-view=true) command. +The default policy template offers a backup once per week. You can modify the schedule for the backup to happen multiple days per week. To change the schedule, use the [Edit-AzDataProtectionPolicyTriggerClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicytriggerclientobject) command. The following example modifies the weekly backup so that backups happen on every Sunday, Wednesday, and Friday of every week, as sketched below. The schedule date array specifies the dates, and the days of the week of those dates are taken as the days of the week to run the backup. You also need to specify that these schedules should repeat every week. Therefore, the schedule interval is "1" and the interval type is "Weekly".
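A minimal sketch of building that trigger object, with dates chosen so their weekdays fall on Sunday, Wednesday, and Friday (the `Edit-AzDataProtectionPolicyTriggerClientObject` call that follows applies it to the policy):

```azurepowershell-interactive
# Dates whose weekdays are Sunday, Wednesday, and Friday
$schDates = @(
    (Get-Date -Year 2023 -Month 3 -Day 5 -Hour 22 -Minute 0 -Second 0),
    (Get-Date -Year 2023 -Month 3 -Day 8 -Hour 22 -Minute 0 -Second 0),
    (Get-Date -Year 2023 -Month 3 -Day 10 -Hour 22 -Minute 0 -Second 0)
)

# Repeat the schedule every week
$trigger = New-AzDataProtectionPolicyTriggerScheduleClientObject -ScheduleDays $schDates -IntervalType Weekly -IntervalCount 1
```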
Edit-AzDataProtectionPolicyTriggerClientObject -Schedule $trigger -Policy $polic So, if you want to add the _archive_ protection, you need to modify the policy template as below. -The default template will have a lifecycle for the initial datastore under the default retention rule. In this scenario, the rule says to delete the backup data after three months. You should add a new retention rule that defines when the data is *moved* to *archive* datastore, that is, backup data is first copied to archive datastore, and then deleted in vault datastore. Also, the rule should define for how long the data is kept in the *archive* datastore. Use the [New-AzDataProtectionRetentionLifeCycleClientObject](/powershell/module/az.dataprotection/new-azdataprotectionretentionlifecycleclientobject?view=azps-6.5.0&preserve-view=true) command to create new lifecycles and use the [Edit-AzDataProtectionPolicyRetentionRuleClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicyretentionruleclientobject?view=azps-6.5.0&preserve-view=true) command to associate them with new the rules or to the existing rules. +The default template will have a lifecycle for the initial datastore under the default retention rule. In this scenario, the rule says to delete the backup data after three months. You should add a new retention rule that defines when the data is *moved* to the *archive* datastore; that is, backup data is first copied to the archive datastore, and then deleted in the vault datastore. Also, the rule should define how long the data is kept in the *archive* datastore. Use the [New-AzDataProtectionRetentionLifeCycleClientObject](/powershell/module/az.dataprotection/new-azdataprotectionretentionlifecycleclientobject) command to create new lifecycles and use the [Edit-AzDataProtectionPolicyRetentionRuleClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicyretentionruleclientobject) command to associate them with new rules or with existing rules. The following example creates a new retention rule named *Monthly* where the first successful backup of every month should be retained in the vault for six months, moved to the archive tier, and kept in the archive tier for 24 months.
The following example creates a new *tag*, along with its criteria (the first successful backup of the month), with the same name as the corresponding retention rule to be applied. Edit-AzDataProtectionPolicyTagClientObject -Policy $policyDefn -Name Monthly -Cr ### Create a new PostgreSQL backup policy -Once the template is modified as per the requirements, use the [New-AzDataProtectionBackupPolicy](/powershell/module/az.dataprotection/new-azdataprotectionbackuppolicy?view=azps-6.5.0&preserve-view=true) command to create a policy using the modified template. +Once the template is modified as per the requirements, use the [New-AzDataProtectionBackupPolicy](/powershell/module/az.dataprotection/new-azdataprotectionbackuppolicy) command to create a policy using the modified template. ```azurepowershell-interactive $polOss = New-AzDataProtectionBackupPolicy -ResourceGroupName testBkpVaultRG -VaultName TestBkpVault -Name "TestOSSPolicy" -Policy $policyDefn You need to connect the Backup vault to the PostgreSQL server, and then access t Once all the relevant permissions are set, the configuration of the backup is performed in two steps. -1. We prepare the relevant request by using the relevant vault, policy, PostgreSQL database using the [Initialize-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/initialize-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command. -1. We submit the request to protect the database using the [New-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/new-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command. +1. We prepare the relevant request with the relevant vault, policy, and PostgreSQL database by using the [Initialize-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/initialize-azdataprotectionbackupinstance) command. +1. We submit the request to protect the database using the [New-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/new-azdataprotectionbackupinstance) command. ```azurepowershell-interactive $instance = Initialize-AzDataProtectionBackupInstance -DatasourceType AzureDatabaseForPostgreSQL -DatasourceLocation $TestBkpvault.Location -PolicyId $polOss[0].Id -DatasourceId $ossId -SecretStoreURI $keyURI -SecretStoreType AzureKeyVault ossrg-empdb11 Microsoft.DataProtection/backupVaults/backupInstances ossrg- ## Run an on-demand backup -Fetch the relevant backup instance on which you need to trigger a backup using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command. +Fetch the relevant backup instance on which you need to trigger a backup using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance) command. ```azurepowershell-interactive $instance = Get-AzDataProtectionBackupInstance -SubscriptionId "xxxx-xxx-xxx" -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name "BackupInstanceName" Name : Default ObjectType : AzureRetentionRule ``` -To trigger an on-demand backup, use the [Backup-AzDataProtectionBackupInstanceAdhoc](/powershell/module/az.dataprotection/backup-azdataprotectionbackupinstanceadhoc?view=azps-5.7.0&preserve-view=true) command. +To trigger an on-demand backup, use the [Backup-AzDataProtectionBackupInstanceAdhoc](/powershell/module/az.dataprotection/backup-azdataprotectionbackupinstanceadhoc) command.
```azurepowershell-interactive $AllInstances = Get-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name Backup-AzDataProtectionBackupInstanceAdhoc -BackupInstanceName $AllInstances[0]. ## Track jobs -Track all jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob?view=azps-5.7.0&preserve-view=true) command. You can list all jobs and fetch a particular job detail. +Track all jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob) command. You can list all jobs and fetch a particular job detail. -You can also use _Az.ResourceGraph_ to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.7.0&preserve-view=true) command to fetch the relevant jobs that are across any backup vault. +You can also use _Az.ResourceGraph_ to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph) command to fetch the relevant jobs, which can be in any backup vault. ```azurepowershell-interactive $job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName "testBkpVaultRG" -Vault $TestBkpVault.Name -DatasourceType AzureDatabaseForPostgreSQL -Operation OnDemandBackup |
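A minimal sketch of the lifecycle objects behind the *Monthly* rule described above (six months in the vault store, a copy to the archive store as the vault copy expires, and 24 months of archive retention):

```azurepowershell-interactive
# Vault store: keep for six months, then copy to archive on expiry
$lifeCycleVault = New-AzDataProtectionRetentionLifeCycleClientObject -SourceDataStore VaultStore -SourceRetentionDurationType Months -SourceRetentionDurationCount 6 -TargetDataStore ArchiveStore -CopyOption CopyOnExpiryOption

# Archive store: keep for 24 months
$lifeCycleArchive = New-AzDataProtectionRetentionLifeCycleClientObject -SourceDataStore ArchiveStore -SourceRetentionDurationType Months -SourceRetentionDurationCount 24

# Attach both lifecycles to a non-default rule named Monthly
Edit-AzDataProtectionPolicyRetentionRuleClientObject -Policy $policyDefn -Name Monthly -LifeCycles $lifeCycleVault, $lifeCycleArchive -IsDefault $false
```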
backup | Backup Vault Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md | In the **Backup Instances** tile, you get a summarized view of all backup instan This section explains how to move a Backup vault (configured for Azure Backup) across Azure subscriptions and resource groups using the Azure portal. >[!Note]->You can also move Backup vaults to a different resource group or subscription using [PowerShell](/powershell/module/az.resources/move-azresource?view=azps-6.3.0&preserve-view=true) and [CLI](/cli/azure/resource#az-resource-move). +>You can also move Backup vaults to a different resource group or subscription using [PowerShell](/powershell/module/az.resources/move-azresource) and [CLI](/cli/azure/resource#az-resource-move). ### Supported regions |
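A minimal sketch of the PowerShell path mentioned in the note, with placeholder resource names:

```azurepowershell-interactive
# Fetch the Backup vault resource and move it to another resource group
$vault = Get-AzResource -ResourceGroupName "sourceRG" -ResourceName "TestBkpVault" -ResourceType "Microsoft.DataProtection/backupVaults"
Move-AzResource -DestinationResourceGroupName "destinationRG" -ResourceId $vault.ResourceId
```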
backup | Disk Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-support-matrix.md | Title: Azure Disk Backup support matrix description: Provides a summary of support settings and limitations of Azure Disk Backup. Previously updated : 03/30/2022 Last updated : 03/03/2023 Azure Disk Backup is available in all public cloud and Sovereign cloud regions. ## Limitations -- Azure Disk Backup is supported for Azure Managed Disks, including shared disks (Shared premium SSDs). Unmanaged disks aren't supported. Currently this solution doesn't support Ultra-disks, including shared ultra-disks, because of lack of snapshot capability.+- Azure Disk Backup is supported for Azure Managed Disks, including shared disks (Shared premium SSDs). Unmanaged disks aren't supported. Currently, this solution doesn't support Premium SSD v2 disks and Ultra-disks, including shared disks, because of the lack of snapshot capability. - Azure Disk Backup supports backup of Write Accelerator disks. However, during restore, the disk is restored as a normal disk. The Write Accelerator cache can be enabled on the disk after mounting it to a VM. Azure Disk Backup is available in all public cloud and Sovereign cloud regions. - Currently, the Original-Location Recovery (OLR) option to restore by replacing existing source disks from where the backups were taken isn't supported. You can restore from a recovery point to create a new disk either in the same resource group as that of the source disk from where the backups were taken or in any other resource group. This is known as Alternate-Location Recovery (ALR). -- Azure Backup for Managed Disks uses incremental snapshots, which are limited to 200 snapshots per disk. To allow you to take on-demand backup aside from scheduled backups, Backup policy limits the total backups to 180. Learn more about [incremental snapshot](../virtual-machines/disks-incremental-snapshots.md#restrictions) for managed disks.+- Azure Backup for Managed Disks uses incremental snapshots, which are limited to 500 snapshots per disk. To allow you to take on-demand backup aside from scheduled backups, Backup policy limits the total backups to 480. Learn more about [incremental snapshot](../virtual-machines/disks-incremental-snapshots.md#restrictions) for managed disks. - Azure [subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machine-disk-limits) apply to the total number of disk snapshots per region per subscription. |
backup | Move To Azure Monitor Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/move-to-azure-monitor-alerts.md | The following example of the vault settings property shows that the classic aler #### Using Azure PowerShell -To modify the alert settings of the vault, use the [Update-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/update-azrecoveryservicesvault?view=azps-8.2.0&preserve-view=true) command. +To modify the alert settings of the vault, use the [Update-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/update-azrecoveryservicesvault) command. The following example helps you to enable built-in Azure Monitor alerts for job failures and disable classic alerts: az backup vault backup-properties set \ You can use the following standard programmatic interfaces supported by Azure Monitor to manage action groups and alert processing rules. - [Azure Monitor REST API reference](/rest/api/monitor/)-- [Azure Monitor PowerShell reference](/powershell/module/az.monitor/?view=azps-8.0.0&preserve-view=true)+- [Azure Monitor PowerShell reference](/powershell/module/az.monitor/) - [Azure Monitor CLI reference](/cli/azure/monitor?view=azure-cli-latest&preserve-view=true) #### Using Azure Resource Manager (ARM)/ Bicep/ REST API As described in earlier sections, you need an action group (notification channel To configure the notification, run the following cmdlet: -1. Create an action group associated with an email ID using the [New-AzActionGroupReceiver](/powershell/module/az.monitor/new-azactiongroupreceiver?view=azps-8.2.0&preserve-view=true) cmdlet and the [Set-AzActionGroup](/powershell/module/az.monitor/set-azactiongroup?view=azps-8.2.0&preserve-view=true) cmdlet. +1. Create an action group associated with an email ID using the [New-AzActionGroupReceiver](/powershell/module/az.monitor/new-azactiongroupreceiver) cmdlet and the [Set-AzActionGroup](/powershell/module/az.monitor/set-azactiongroup) cmdlet. ```powershell $email1 = New-AzActionGroupReceiver -Name 'user1' -EmailReceiver -EmailAddress 'user1@contoso.com' Set-AzActionGroup -Name "testActionGroup" -ResourceGroupName "testRG" -ShortName "testAG" -Receiver $email1 ``` -1. Create an alert processing rule that's linked to the above action group using the [Set-AzAlertProcessingRule](/powershell/module/az.alertsmanagement/set-azalertprocessingrule?view=azps-8.2.0&preserve-view=true) cmdlet. +1. Create an alert processing rule that's linked to the above action group using the [Set-AzAlertProcessingRule](/powershell/module/az.alertsmanagement/set-azalertprocessingrule) cmdlet. ```powershell Set-AzAlertProcessingRule -ResourceGroupName "testRG" -Name "AddActionGroupToSubscription" -Scope "/subscriptions/xxxx-xxx-xxxx" -FilterTargetResourceType "Equals:Microsoft.RecoveryServices/vaults" -Description "Add ActionGroup1 to alerts on all RS vaults in subscription" -Enabled "True" -AlertProcessingRuleType "AddActionGroups" -ActionGroupId "/subscriptions/xxxx-xxx-xxxx/resourcegroups/testRG/providers/microsoft.insights/actiongroups/testActionGroup" |
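The `Update-AzRecoveryServicesVault` example itself is elided in this excerpt; a minimal sketch, assuming the cmdlet's `DisableAzureMonitorAlertsForJobFailure` and `DisableClassicAlerts` parameters, could look like:

```azurepowershell-interactive
# Turn on built-in Azure Monitor alerts for job failures and turn off classic alerts
Update-AzRecoveryServicesVault -ResourceGroupName "testRG" -Name "testVault" -DisableAzureMonitorAlertsForJobFailure $false -DisableClassicAlerts $true
```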
backup | Restore Blobs Storage Account Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-ps.md | $startDate = (Get-Date).AddDays(-30) $endDate = Get-Date ``` -First fetch all instances using [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance?view=azps-5.9.0&preserve-view=true) command and identify the relevant instance. +First, fetch all instances using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance) command and identify the relevant instance. ```azurepowershell-interactive $AllInstances = Get-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name ``` -You can also use Az.Resourcegraph and the [Search-AzDataProtectionBackupInstanceInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionbackupinstanceinazgraph?view=azps-5.9.0&preserve-view=true) command to search across instances in many vaults and subscriptions. +You can also use Az.ResourceGraph and the [Search-AzDataProtectionBackupInstanceInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionbackupinstanceinazgraph) command to search across instances in many vaults and subscriptions. ```azurepowershell-interactive $AllInstances = Search-AzDataProtectionBackupInstanceInAzGraph -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -DatasourceType AzureBlob -ProtectionStatus ProtectionConfigured $DesiredPIT = (Get-Date -Date "2021-04-23T02:47:02.9500000Z") ### Preparing the restore request -Once the point-in-time to restore is fixed, there are multiple options to restore. Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest?view=azps-5.9.0&preserve-view=true) command to prepare the restore request with all relevant details. +Once the point-in-time to restore is fixed, there are multiple options to restore. Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest) command to prepare the restore request with all relevant details. #### Restoring all the blobs to a point-in-time $restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType Azur ### Trigger the restore -Use the [Start-AzDataProtectionBackupInstanceRestore](/powershell/module/az.dataprotection/start-azdataprotectionbackupinstancerestore?view=azps-5.9.0&preserve-view=true) command to trigger the restore with the request prepared above. +Use the [Start-AzDataProtectionBackupInstanceRestore](/powershell/module/az.dataprotection/start-azdataprotectionbackupinstancerestore) command to trigger the restore with the request prepared above. ```azurepowershell-interactive Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $AllInstances[2].BackupInstanceName -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Parameter $restorerequest Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $AllInstances[2] ## Tracking job -Track all jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob?view=azps-5.9.0&preserve-view=true) command. You can list all jobs and fetch a particular job detail.
-You can also use Az.ResourceGraph to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.9.0&preserve-view=true) command to get the relevant job which can be across any backup vault. +You can also use Az.ResourceGraph to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph) command to get the relevant job, which can be in any backup vault. ```azurepowershell-interactive $job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName "testBkpVaultRG" -Vault $TestBkpVault.Name -DatasourceType AzureBlob -Operation Restore |
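The `Initialize-AzDataProtectionRestoreRequest` call is truncated in this excerpt; a minimal sketch of the point-in-time variant for all blobs, reusing `$AllInstances` and `$DesiredPIT` from above (the instance index is a placeholder):

```azurepowershell-interactive
# Build a restore request that restores all blobs to the chosen point in time
$restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureBlob -SourceDataStore OperationalStore -RestoreLocation $TestBkpVault.Location -RestoreType OriginalLocation -BackupInstance $AllInstances[2] -PointInTime $DesiredPIT
```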
backup | Restore Managed Disks Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-managed-disks-ps.md | Assign the relevant permissions for the vault's system-assigned managed identity on ### Fetching the relevant recovery point -Fetch all instances using [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command and identify the relevant instance. +Fetch all instances using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance) command and identify the relevant instance. ```azurepowershell-interactive $AllInstances = Get-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name ``` -You can also use **Az.Resourcegraph** and the [Search-AzDataProtectionBackupInstanceInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionbackupinstanceinazgraph?view=azps-5.7.0&preserve-view=true) command to search across instances in many vaults and subscriptions. +You can also use **Az.ResourceGraph** and the [Search-AzDataProtectionBackupInstanceInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionbackupinstanceinazgraph) command to search across instances in many vaults and subscriptions. ```azurepowershell-interactive $AllInstances = Search-AzDataProtectionBackupInstanceInAzGraph -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -DatasourceType AzureDisk -ProtectionStatus ProtectionConfigured Construct the ARM ID of the new disk to be created with the target resource grou $targetDiskId = /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg/providers/Microsoft.Compute/disks/PSTestDisk2 ``` -Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest?view=azps-5.7.0&preserve-view=true) command to prepare the restore request with all relevant details. +Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest) command to prepare the restore request with all relevant details. ```azurepowershell-interactive $restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureDisk -SourceDataStore OperationalStore -RestoreLocation $TestBkpVault.Location -RestoreType AlternateLocation -TargetResourceId $targetDiskId -RecoveryPoint $rp[0].Name $restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType Azur ### Trigger the restore -Use the [Start-AzDataProtectionBackupInstanceRestore](/powershell/module/az.dataprotection/start-azdataprotectionbackupinstancerestore?view=azps-5.7.0&preserve-view=true) command to trigger the restore with the request prepared above. +Use the [Start-AzDataProtectionBackupInstanceRestore](/powershell/module/az.dataprotection/start-azdataprotectionbackupinstancerestore) command to trigger the restore with the request prepared above. ```azurepowershell-interactive Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $AllInstances[2].BackupInstanceName -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Parameter $restorerequest Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $AllInstances[2] ## Tracking job -Track all the jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob?view=azps-5.7.0&preserve-view=true) command.
You can list all jobs and fetch a particular job detail. +Track all the jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob) command. You can list all jobs and fetch a particular job detail. -You can also use **Az.ResourceGraph** to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.7.0&preserve-view=true) command to get the relevant job, which can be across any backup vault. +You can also use **Az.ResourceGraph** to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph) command to get the relevant job, which can be in any backup vault. ```azurepowershell-interactive $job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName "testBkpVaultRG" -Vault $TestBkpVault.Name -DatasourceType AzureDisk -Operation OnDemandBackup |
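The `$rp` collection used by the restore request above comes from listing recovery points; a minimal sketch, reusing names from the article:

```azurepowershell-interactive
# List recovery points for the chosen backup instance (the article indexes $rp[0])
$rp = Get-AzDataProtectionRecoveryPoint -BackupInstanceName $AllInstances[2].BackupInstanceName -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name
```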
backup | Restore Postgresql Database Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-postgresql-database-ps.md | To restore the recovery point as files to a storage account, the [Backup vault's ### Fetching the relevant recovery point -Fetch all instances using [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command and identify the relevant instance. +Fetch all instances using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance) command and identify the relevant instance. ```azurepowershell-interactive $AllInstances = Get-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name ``` -You can also use **Az.Resourcegraph** and the [Search-AzDataProtectionBackupInstanceInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionbackupinstanceinazgraph?view=azps-5.7.0&preserve-view=true) command to search recovery points across instances in many vaults and subscriptions. +You can also use **Az.ResourceGraph** and the [Search-AzDataProtectionBackupInstanceInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionbackupinstanceinazgraph) command to search recovery points across instances in many vaults and subscriptions. ```azurepowershell-interactive $AllInstances = Search-AzDataProtectionBackupInstanceInAzGraph -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -DatasourceType AzureDatabaseForPostgreSQL -ProtectionStatus ProtectionConfigured Construct the Azure Resource Manager ID (ARM ID) of the new PostgreSQL database $targetOssId = /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg/providers/Microsoft.DBforPostgreSQL/servers/targetossserver/databases/emprestored21 ``` -Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest?view=azps-5.7.0&preserve-view=true) command to prepare the restore request with all relevant details. +Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest) command to prepare the restore request with all relevant details. ```azurepowershell-interactive $OssRestoreReq = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureDatabaseForPostgreSQL -SourceDataStore VaultStore -RestoreLocation $TestBkpVault.Location -RestoreType AlternateLocation -RecoveryPoint $rps[0].Property.RecoveryPointId -TargetResourceId $targetOssId -SecretStoreURI "https://restoreoss-test.vault.azure.net/secrets/dbauth3" -SecretStoreType AzureKeyVault Fetch the URI of the container, within the storage account to which permissions $contURI = "https://testossstorageaccount.blob.core.windows.net/testcontainerrestore" ``` -Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest?view=azps-5.7.0&preserve-view=true) command to prepare the restore request with all relevant details. +Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest) command to prepare the restore request with all relevant details.
```azurepowershell-interactive $OssRestoreAsFilesReq = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureDatabaseForPostgreSQL -SourceDataStore VaultStore -RestoreLocation $TestBkpVault.Location -RestoreType RestoreAsFiles -RecoveryPoint $rps[0].Property.RecoveryPointId -TargetContainerURI $contURI -FileNamePrefix "empdb11_postgresql-westus_1628853549768" $OssRestoreAsFilesFromArchiveReq = Initialize-AzDataProtectionRestoreRequest -Da ### Trigger the restore -Use the [Start-AzDataProtectionBackupInstanceRestore](/powershell/module/az.dataprotection/start-azdataprotectionbackupinstancerestore?view=azps-5.7.0&preserve-view=true) command to trigger the restore with the request prepared above. +Use the [Start-AzDataProtectionBackupInstanceRestore](/powershell/module/az.dataprotection/start-azdataprotectionbackupinstancerestore) command to trigger the restore with the request prepared above. ```azurepowershell-interactive Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $AllInstances[2].BackupInstanceName -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Parameter $OssRestoreReq Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $AllInstances[2] ## Tracking job -Track all the jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob?view=azps-5.7.0&preserve-view=true) command. You can list all jobs and fetch a particular job detail. +Track all the jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob) command. You can list all jobs and fetch a particular job detail. -You can also use *Az.ResourceGraph* to track jobs across all Backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.7.0&preserve-view=true) command to get the relevant job, which is across all backup vault. +You can also use *Az.ResourceGraph* to track jobs across all Backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph) command to get the relevant job, which can be in any Backup vault. ```azurepowershell-interactive $job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName "testBkpVaultRG" -Vault $TestBkpVault.Name -DatasourceType AzureDatabaseForPostgreSQL -Operation OnDemandBackup |
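As a hedged convenience sketch for tracking a job to completion (assuming the graph result exposes an `Id` usable with `Get-AzDataProtectionJob -Id`, and that the job object has a `Status` property):

```azurepowershell-interactive
# Poll the job every 30 seconds until it leaves the InProgress state
do {
    Start-Sleep -Seconds 30
    $jobStatus = (Get-AzDataProtectionJob -Id $job[0].Id -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name).Status
} while ($jobStatus -eq "InProgress")
```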
chaos-studio | Chaos Studio Private Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md | -VNet injection allows Chaos resource provider to inject containerized workloads into your VNet. This means that resources without public endpoints can be accessed via a private IP address on the VNet. Below are the steps you can follow for vnet injection: ++VNet injection allows a Chaos resource provider to inject containerized workloads into your VNet so that resources without public endpoints can be accessed via a private IP address on the VNet. To configure VNet injection: 1. Register the `Microsoft.ContainerInstance` resource provider with your subscription (if applicable). VNet injection allows Chaos resource provider to inject containerized workloads az provider show --namespace 'Microsoft.ContainerInstance' | grep registrationState ``` - You should see output similar to the following: + In the output, you should see something similar to: ```bash "registrationState": "Registered", ``` -2. Re-register the `Microsoft.Chaos` resource provider with your subscription. +1. Register the `Microsoft.Relay` resource provider with your subscription. ++ ```bash + az provider register --namespace 'Microsoft.Relay' --wait + ``` ++ Verify the registration by running the following command: ++ ```bash + az provider show --namespace 'Microsoft.Relay' | grep registrationState + ``` ++ In the output, you should see something similar to: ++ ```bash + "registrationState": "Registered", + ``` ++1. Re-register the `Microsoft.Chaos` resource provider with your subscription. ```bash az provider register --namespace 'Microsoft.Chaos' --wait VNet injection allows Chaos resource provider to inject containerized workloads az provider show --namespace 'Microsoft.Chaos' | grep registrationState ``` - You should see output similar to the following: + In the output, you should see something similar to: ```bash "registrationState": "Registered", ``` -3. Create a subnet named `ChaosStudioSubnet` in the VNet you want to inject into. And delegate the subnet to `Microsoft.ContainerInstance/containerGroups` service. -4. Set the `properties.subnetId` property when you create or update the Target resource. The value should be the resource ID of the subnet created in step 3. +1. Create two subnets in the VNet you want to inject into: ++ - `ChaosStudioContainerSubnet` + - Delegate the subnet to the `Microsoft.ContainerInstance/containerGroups` service. + - This subnet must have an address space of at least /28 + - `ChaosStudioRelaySubnet` + - This subnet must have an address space of at least /28 ++1. Set the `properties.subnets.containerSubnetId` and `properties.subnets.relaySubnetId` properties when you create or update the Target resource. The values should be the resource IDs of the subnets created in the previous step. Replace `$SUBSCRIPTION_ID` with your Azure subscription ID, `$RESOURCE_GROUP` and `$AKS_CLUSTER` with the resource group name and your AKS cluster resource name. Also, replace `$AKS_INFRA_RESOURCE_GROUP` and `$AKS_VNET` with your AKS's infrastructure resource group name and VNet name.
```bash- URL=https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER/providers/Microsoft.Chaos/targets/microsoft-azurekubernetesservicechaosmesh?api-version=2022-10-01-preview - SUBNET_ID=/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$AKS_INFRA_RESOURCE_GROUP/providers/Microsoft.Network/virtualNetworks/$AKS_VNET/subnets/ChaosStudioSubnet - BODY="{ \"properties\": { \"subnetId\": \"$SUBNET_ID\" } }" + CONTAINER_SUBNET_ID=/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$AKS_INFRA_RESOURCE_GROUP/providers/Microsoft.Network/virtualNetworks/$AKS_VNET/subnets/ChaosStudioContainerSubnet + RELAY_SUBNET_ID=/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$AKS_INFRA_RESOURCE_GROUP/providers/Microsoft.Network/virtualNetworks/$AKS_VNET/subnets/ChaosStudioRelaySubnet + BODY="{ \"properties\": { \"subnets\": { \"containerSubnetId\": \"$CONTAINER_SUBNET_ID\", \"relaySubnetId\": \"$RELAY_SUBNET_ID\" } } }" az rest --method put --url $URL --body "$BODY" ``` -5. Start the experiment. +1. Start the experiment. ## Limitations-* At present the VNet injection will only be possible in subscriptions/regions where Azure Container Instances and Azure Relay are available. They are deployed to target regions. -* When you create a Target resource that you would like to enable with VNet injection, you will need Microsoft.Network/virtualNetworks/subnets/write access to the virtual network. For example, if the AKS cluster is deployed to VNet_A, then you must have permissions to create subnets in VNet_A in order to enable VNet injection for the AKS cluster. You will have to specify a subnet (in VNet_A) that the container will be deployed to. +* VNet injection is currently only possible in subscriptions/regions where Azure Container Instances and Azure Relay are available. They're deployed to target regions. +* When you create a Target resource that you'll enable with VNet injection, you need Microsoft.Network/virtualNetworks/subnets/write access to the virtual network. For example, if the AKS cluster is deployed to VNet_A, then you must have permissions to create subnets in VNet_A in order to enable VNet injection for the AKS cluster. You have to specify a subnet (in VNet_A) that the container will be deployed to. Request body when creating a Target resource with VNet injection enabled: +```json +{ + "properties": { + "subnets": { + "containerSubnetId": "/subscriptions/.../subnets/ChaosStudioContainerSubnet", + "relaySubnetId": "/subscriptions/.../subnets/ChaosStudioRelaySubnet" + } + } +} +``` +<!-- +--> ## Next steps Now that you understand how VNet Injection can be achieved for Chaos Studio, you're ready to: |
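If you prefer PowerShell for the subnet-creation step, a hedged sketch (address prefixes and resource names are placeholders):

```azurepowershell-interactive
# Fetch the VNet and add the two subnets Chaos Studio expects
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myAksInfraRG" -Name "myAksVnet"
$delegation = New-AzDelegation -Name "chaosContainerDelegation" -ServiceName "Microsoft.ContainerInstance/containerGroups"
Add-AzVirtualNetworkSubnetConfig -Name "ChaosStudioContainerSubnet" -VirtualNetwork $vnet -AddressPrefix "10.0.10.0/28" -Delegation $delegation | Out-Null
Add-AzVirtualNetworkSubnetConfig -Name "ChaosStudioRelaySubnet" -VirtualNetwork $vnet -AddressPrefix "10.0.10.16/28" | Out-Null
$vnet | Set-AzVirtualNetwork
```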
cloud-shell | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md | description: Overview of features in Azure Cloud Shell ms.contributor: jahelmic Previously updated : 11/14/2022 Last updated : 03/03/2023 vm-linux Cloud Shell allocates machines on a per-request basis and as a result machine st persist across sessions. Since Cloud Shell is built for interactive sessions, shells automatically terminate after 20 minutes of shell inactivity. -<!-- -TODO: -- need to verify Distro - showing Ubuntu currently-- need to verify all experiences described here eg. cd Azure: - I have different results> Azure Cloud Shell runs on **Common Base Linux - Mariner** (CBL-Mariner), Microsoft's Linux distribution for cloud-infrastructure-edge products and services. Microsoft internally compiles all the packages included in the **CBL-Mariner** repository to help guard against supply chain attacks. Tooling has been updated to reflect the new base image-CBL-Mariner. You can get a full list of installed package versions using the following command: -`tdnf list installed`. If these changes affected your Cloud Shell environment, contact Azure Support -or create an issue in the [Cloud Shell repository][12]. +CBL-Mariner. If these changes affected your Cloud Shell environment, contact Azure Support or create +an issue in the [Cloud Shell repository][17]. ## Features first launch. Once completed, Cloud Shell will automatically attach your storage sessions. Use best practices when storing secrets such as SSH keys. Services, like Azure Key Vault, have [tutorials for setup][02]. -[Learn more about persisting files in Cloud Shell.][29] +[Learn more about persisting files in Cloud Shell.][28] ### Azure drive (Azure:) PowerShell in Cloud Shell provides the Azure drive (`Azure:`). You can switch to the Azure drive with `cd Azure:` and back to your home directory with `cd ~`. The Azure drive enables easy discovery and navigation of Azure resources such as Compute, Network, Storage etc. similar to-filesystem navigation. You can continue to use the familiar [Azure PowerShell cmdlets][07] to manage +filesystem navigation. You can continue to use the familiar [Azure PowerShell cmdlets][06] to manage these resources regardless of the drive you are in. Any changes made to the Azure resources, either made directly in Azure portal or through Azure PowerShell cmdlets, are reflected in the Azure drive. You can run `dir -Force` to refresh your resources. -![Screenshot of an Azure Cloud Shell being initialized and a list of directory resources.][26] +![Screenshot of an Azure Cloud Shell being initialized and a list of directory resources.][25] ### Manage Exchange Online PowerShell in Cloud Shell contains a private build of the Exchange Online module. Run `Connect-EXOPSSession` to get your Exchange cmdlets. -![Screenshot of an Azure Cloud Shell running the commands Connect-EXOPSSession and Get-User.][27] +![Screenshot of an Azure Cloud Shell running the commands Connect-EXOPSSession and Get-User.][26] Run `Get-Command -Module tmp_*` PowerShell in Cloud Shell contains a private build of the Exchange Online module > The module name should begin with `tmp_`, if you have installed modules with the same prefix, > their cmdlets will also be surfaced. -![Screenshot of an Azure Cloud Shell running the command Get-Command -Module tmp_*.][28] +![Screenshot of an Azure Cloud Shell running the command Get-Command -Module tmp_*.][27] ### Deep integration with open source tooling and Chef InSpec. 
Try it out from the example walkthroughs. ### Pre-installed tools -<!-- -TODO: -- remove obsolete tools-- separate by bash vs. pwsh-- link to docs rather than github>+The most commonly used tools are preinstalled in Cloud Shell. -Linux tools +#### Azure tools ++Cloud Shell comes with the following Azure command-line tools preinstalled: ++| Tool | Version | Command | +| - | -- | | +| [Azure CLI][08] | 2.45.0 | `az --version` | +| [Azure PowerShell][06] | 9.4.0 | `Get-Module Az -ListAvailable` | +| [AzCopy][04] | 10.15.0 | `azcopy --version` | +| [Azure Functions CLI][01] | 4.0.3971 | `func --version` | +| [Service Fabric CLI][03] | 11.2.0 | `sfctl --version` | +| [Batch Shipyard][09] | 3.9.1 | `shipyard --version` | +| [blobxfer][10] | 1.11.0 | `blobxfer --version` | ++You can verify the version of each tool using the command listed in the table. ++#### Linux tools - bash - zsh Linux tools - tmux - dig -Azure tools --- [Azure CLI][09]-- [AzCopy][04]-- [Azure Functions CLI][05]-- [Service Fabric CLI][03]-- [Batch Shipyard][10]-- [blobxfer][11]+#### Text editors -Text editors --- code (Cloud Shell editor)+- Cloud Shell editor (code) - vim - nano - emacs -Source control +#### Source control -- git+- Git +- GitHub CLI -Build tools +#### Build tools - make - maven - npm - pip -Containers +#### Containers - [Docker Desktop][15]-- [Kubectl][19]-- [Helm][17]+- [Kubectl][20] + [Helm][19] - [DC/OS CLI][14] -Databases +#### Databases - MySQL client - PostgreSql client-- [sqlcmd Utility][09]+- [sqlcmd Utility][08] - [mssql-scripter][18] -Other +#### Other - iPython Client - [Cloud Foundry CLI][13]-- [Terraform][25]-- [Ansible][22]-- [Chef InSpec][23]-- [Puppet Bolt][21]-- [HashiCorp Packer][24]-- [Office 365 CLI][20]--### Language support --| Language | Version | -| - | | -| .NET Core | [6.0.402][16] | -| Go | 1.9 | -| Java | 1.8 | -| Node.js | 8.16.0 | -| PowerShell | [7.2][08] | -| Python | 2.7 and 3.7 (default) | +- [Terraform][24] +- [Ansible][23] +- [Chef InSpec][12] +- [Puppet Bolt][22] +- [HashiCorp Packer][11] +- [Office 365 CLI][21] ++### Preinstalled developer languages ++Cloud Shell comes with the following languages preinstalled: ++| Language | Version | Command | +| - | - | | +| .NET Core | [6.0.405][16] | `dotnet --version` | +| Go | 1.17.13 | `go version` | +| Java | 11.0.18 | `java --version` | +| Node.js | 16.18.1 | `node --version` | +| PowerShell | [7.3.2][07] | `pwsh -Version` | +| Python | 3.9.14 | `python --version` | +| Ruby | 3.1.3p185 | `ruby --version` | ++You can verify the version of each language using the command listed in the table.
## Next steps -- [Bash in Cloud Shell Quickstart][31]-- [PowerShell in Cloud Shell Quickstart][30]-- [Learn about Azure CLI][06]-- [Learn about Azure PowerShell][07]+- [Bash in Cloud Shell Quickstart][30] +- [PowerShell in Cloud Shell Quickstart][29] +- [Learn about Azure CLI][05] +- [Learn about Azure PowerShell][06] <!-- link references -->+[01]: ../azure-functions/functions-run-local.md [02]: ../key-vault/general/manage-with-cli2.md#prerequisites [03]: ../service-fabric/service-fabric-cli.md [04]: ../storage/common/storage-use-azcopy-v10.md-[05]: ../azure-functions/functions-run-local.md -[06]: /cli/azure/ -[07]: /powershell/azure -[08]: /powershell/scripting/whats-new/what-s-new-in-powershell-72 -[09]: /sql/tools/sqlcmd-utility -[10]: https://batch-shipyard.readthedocs.io/en/latest/ -[11]: https://blobxfer.readthedocs.io/en/latest/ -[12]: https://github.com/Azure/CloudShell/issues +[05]: /cli/azure/ +[06]: /powershell/azure +[07]: /powershell/scripting/whats-new/what-s-new-in-powershell-73 +[08]: /sql/tools/sqlcmd-utility +[09]: https://batch-shipyard.readthedocs.io/en/latest/ +[10]: https://blobxfer.readthedocs.io/en/latest/ +[11]: https://developer.hashicorp.com/packer/docs +[12]: https://docs.chef.io/ [13]: https://docs.cloudfoundry.org/cf-cli/ [14]: https://docs.d2iq.com/dkp/2.3/azure-quick-start [15]: https://docs.docker.com/desktop/ [16]: https://dotnet.microsoft.com/download/dotnet/6.0-[17]: https://helm.sh/docs/ +[17]: https://github.com/Azure/CloudShell/issues [18]: https://github.com/microsoft/mssql-scripter/blob/dev/doc/usage_guide.md-[19]: https://kubernetes.io/docs/user-guide/kubectl-overview/ -[20]: https://pnp.github.io/office365-cli/ -[21]: https://puppet.com/docs/bolt/latest/bolt.html -[22]: https://www.ansible.com/microsoft-azure -[23]: https://docs.chef.io/ -[24]: https://developer.hashicorp.com/packer/docs -[25]: https://www.terraform.io/docs/providers/azurerm/ -[26]: media/features/azure-drive.png -[27]: media/features/exchangeonline.png -[28]: medilets.png -[29]: persisting-shell-storage.md -[30]: quickstart-powershell.md -[31]: quickstart.md +[19]: https://helm.sh/docs/ +[20]: https://kubernetes.io/docs/user-guide/kubectl-overview/ +[21]: https://pnp.github.io/office365-cli/ +[22]: https://puppet.com/docs/bolt/latest/bolt.html +[23]: https://www.ansible.com/microsoft-azure +[24]: https://www.terraform.io/docs/providers/azurerm/ +[25]: media/features/azure-drive.png +[26]: media/features/exchangeonline.png +[27]: medilets.png +[28]: persisting-shell-storage.md +[29]: quickstart-powershell.md +[30]: quickstart.md |
cloud-shell | Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/limitations.md | description: Overview of limitations of Azure Cloud Shell ms.contributor: jahelmic Previously updated : 11/14/2022 Last updated : 03/03/2023 vm-linux Azure Cloud Shell has the following known limitations: ## General limitations ### System state and persistence-<!-- -TODO: -- verify the regions>+ The machine that provides your Cloud Shell session is temporary, and it's recycled after your session is inactive for 20 minutes. Cloud Shell requires an Azure file share to be mounted. As a result, your subscription must be able to set up storage resources to access Cloud Shell. Other considerations include: - With mounted storage, only modifications within the `$HOME` directory are persisted.-- Azure file shares can be mounted only from within your [assigned region][02].+- Azure file shares can be mounted only from within your [assigned region][01]. - In Bash, run `env` to find your region set as `ACC_LOCATION`. ### Browser support-<!-- -TODO: -- Do we still support Microsoft Internet Explorer?>-Cloud Shell supports the latest versions of Microsoft Edge, Microsoft Internet Explorer, Google -Chrome, Mozilla Firefox, and Apple Safari. Safari in private mode isn't supported. ++Cloud Shell supports the latest versions of Microsoft Edge, Google Chrome, Mozilla Firefox, and +Apple Safari. Safari in private mode isn't supported. ### Copy and paste -- Windows: <kbd>Ctrl</kbd>-<kbd>C</kbd> to copy is supported but use- <kbd>Shift</kbd>-<kbd>Insert</kbd> to paste. - - FireFox/IE may not support clipboard permissions properly. -- macOS: <kbd>Cmd</kbd>-<kbd>C</kbd> to copy and <kbd>Cmd</kbd>-<kbd>V</kbd> to paste.+- Windows: <kbd>Ctrl</kbd>+<kbd>c</kbd> to copy is supported but use + <kbd>Shift</kbd>+<kbd>Insert</kbd> to paste. + - FireFox may not support clipboard permissions properly. +- macOS: <kbd>Cmd</kbd>+<kbd>c</kbd> to copy and <kbd>Cmd</kbd>+<kbd>v</kbd> to paste. ### Only one shell can be active for a given user Users can only launch one Cloud Shell session at a time. However, you may have multiple instances of Bash or PowerShell running within that session. Switching between Bash or PowerShell using the menu-restarts the Cloud Shell session and terminate the existing session. To avoid losing your current +terminates the existing session and starts a new Cloud Shell instance. To avoid losing your current session, you can run `bash` inside PowerShell and you can run `pwsh` inside of Bash. ### Usage limits Permissions are set as regular users without sudo access. Any installation outsi directory isn't persisted. ## PowerShell limitations-<!-- -TODO: -- outdated info about AzureAD and SQL-- Not running on Windows so the GUI comment not valid>+ ### `AzureAD` module name The `AzureAD` module name is currently `AzureAD.Standard.Preview`, the module provides the same functionality. -### `SqlServer` module functionality --The `SqlServer` module included in Cloud Shell has only prerelease support for PowerShell Core. In -particular, `Invoke-SqlCmd` isn't available yet. - ### Default file location when created from Azure drive You can't create files under the `Azure:` drive. When users create new files using other tools, such-as vim or nano, the files are saved to the `$HOME` by default. 
--### GUI applications aren't supported --If the user runs a command that would create a dialog box, one sees an error message such -as: --> Unable to load DLL 'IEFRAME.dll': The specified module couldn't be found. -+as `vim` or `nano`, the files are saved to the `$HOME` by default. ### Large Gap after displaying progress bar progress bar was previously. ## Next steps -- [Troubleshooting Cloud Shell][05]-- [Quickstart for Bash][04]-- [Quickstart for PowerShell][03]+- [Troubleshooting Cloud Shell][04] +- [Quickstart for Bash][03] +- [Quickstart for PowerShell][02] <!-- link references -->-[02]: persisting-shell-storage.md#mount-a-new-clouddrive -[03]: quickstart-powershell.md -[04]: quickstart.md -[05]: troubleshooting.md +[01]: persisting-shell-storage.md#mount-a-new-clouddrive +[02]: quickstart-powershell.md +[03]: quickstart.md +[04]: troubleshooting.md |
communication-services | Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/authentication.md | Each authorization option is briefly described below: ### Access Key -Access key authentication is suitable for service applications running in a trusted service environment. Your access key can be found in the Azure Communication Services portal. The service application uses it as a credential to initialize the corresponding SDKs. See an example of how it is used in the [Identity SDK](../quickstarts/access-tokens.md). +Access key authentication is suitable for service applications running in a trusted service environment. Your access key can be found in the Azure Communication Services portal. The service application uses it as a credential to initialize the corresponding SDKs. See an example of how it is used in the [Identity SDK](../quickstarts/identity/access-tokens.md). Since the access key is part of the connection string of your resource, authentication with a connection string is equivalent to authentication with an access key. Use our [Trusted authentication service hero sample](../samples/trusted-auth-sam ### User Access Tokens -User access tokens are generated using the Identity SDK and are associated with users created in the Identity SDK. See an example of how to [create users and generate tokens](../quickstarts/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication is different compared to access key and Azure AD authentication in that it is used to authenticate a user rather than a secured Azure resource. +User access tokens are generated using the Identity SDK and are associated with users created in the Identity SDK. See an example of how to [create users and generate tokens](../quickstarts/identity/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication is different compared to access key and Azure AD authentication in that it is used to authenticate a user rather than a secured Azure resource. ## Using identity for monitoring and metrics The user identity is intended to act as a primary key for logs and metrics colle > [Create an Azure Active Directory service principal application from the Azure CLI](../quickstarts/identity/service-principal.md?pivots=platform-azcli) > [!div class="nextstepaction"]-> [Create user access tokens](../quickstarts/access-tokens.md) +> [Create user access tokens](../quickstarts/identity/access-tokens.md) > [!div class="nextstepaction"] > [Trusted authentication service hero sample](../samples/trusted-auth-sample.md) |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md | There are two core parts to chat architecture: 1) Trusted Service and 2) Client :::image type="content" source="../../media/chat-architecture.svg" alt-text="Diagram showing Communication Services' chat architecture."::: + - **Trusted service:** To properly manage a chat session, you need a service that helps you connect to Communication Services by using your resource connection string. This service is responsible for creating chat threads, adding and removing participants, and issuing access tokens to users. More information about access tokens can be found in our [access tokens](../../quickstarts/identity/access-tokens.md) quickstart. - **Client app:** The client application connects to your trusted service and receives the access tokens that are used by users to connect directly to Communication Services. After creating the chat thread and adding users as participants, they can use the client application to connect to the chat thread and send messages. Use real-time notifications feature, which we discuss below, in your client application to subscribe to message & thread updates from other participants. |
communication-services | Credentials Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/credentials-best-practices.md | tokenCredential.dispose() Depending on your scenario, you may want to sign a user out from one or more - To sign a user out from a single service, [dispose](#cleaning-up-resources) of the Credential object.-- To sign a user out from multiple services, implement a signaling mechanism to notify all services to [dispose](#cleaning-up-resources) of the Credential object, and additionally, [revoke all access tokens](../quickstarts/access-tokens.md?tabs=windows&pivots=programming-language-javascript#revoke-access-tokens) for a given identity.+- To sign a user out from multiple services, implement a signaling mechanism to notify all services to [dispose](#cleaning-up-resources) of the Credential object, and additionally, [revoke all access tokens](../quickstarts/identity/access-tokens.md?tabs=windows&pivots=programming-language-javascript#revoke-access-tokens) for a given identity. In this article, you learned how to: To learn more, you may want to explore the following quickstart guides: -* [Create and manage access tokens](../quickstarts/access-tokens.md) +* [Create and manage access tokens](../quickstarts/identity/access-tokens.md) * [Manage access tokens for Teams users](../quickstarts/manage-teams-identity.md) |
communication-services | Identity Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/identity-model.md | If you cache access tokens to a backing store, we recommend using encryption. An ## Next steps -* For an introduction to access token management, see [Create and manage access tokens](../quickstarts/access-tokens.md). +* For an introduction to access token management, see [Create and manage access tokens](../quickstarts/identity/access-tokens.md). * For an introduction to authentication, see [Authenticate to Azure Communication Services](./authentication.md). * For an introduction to data residency and privacy, see [Region availability and data residency](./privacy.md). * To learn how to quickly create identities for testing, see the [quick-create identity quickstart](../quickstarts/identity/quick-create-identity.md). |
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md | The following table shows supported Teams capabilities: ## Next steps -- [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)+- [Authenticate as Teams external user](../../../quickstarts/identity/access-tokens.md) - [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md) - [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md) - [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) |
communication-services | Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/limitations.md | +- [Authenticate as Teams external user](../../../quickstarts/identity/access-tokens.md) - [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md) - [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md) - [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) |
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/overview.md | The [Azure Communication Services Authentication Hero Sample](../../../samples/t The data flow for joining Teams meetings is available at the [client and server architecture page](../../client-and-server-architecture.md). When implementing the experience, you must implement client logic for real-time communication and server logic for authentication. The following articles will guide you in implementing the communication for Teams external users. High-level coding articles:-- [Authenticate as Teams external user](../../../quickstarts/identity/access-token-teams-external-users.md) +- [Authenticate as Teams external user](../../../quickstarts/identity/access-tokens.md) - [Call with Chat Composite](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) Low-level coding articles: Any licensed Teams users can schedule Teams meetings and share the invite with e ## Next steps -- [Authenticate as Teams external user](../../../quickstarts/identity/access-token-teams-external-users.md)+- [Authenticate as Teams external user](../../../quickstarts/identity//access-tokens.md) - [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md) - [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md) - [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) |
communication-services | Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/privacy.md | Azure Communication Services will delete all copies of the most recently sent me ## Next steps -- [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)+- [Authenticate as Teams external user](../../../quickstarts/identity/access-tokens.md) - [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md) - [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md) - [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) |
communication-services | Teams Administration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md | Teams meeting organizers can also configure the Teams meeting options to adjust ## Next steps -- [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)+- [Authenticate as Teams external user](../../../quickstarts/identity/access-tokens.md) - [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md) - [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md) - [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) |
communication-services | Teams Client Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-client-experience.md | Teams external user joining Teams meeting with Azure Communication Services SDKs ## Next steps -- [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)+- [Authenticate as Teams external user](../../../quickstarts/identity/access-tokens.md) - [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md) - [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md) - [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) |
communication-services | Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md | There are two categories of Communication Service data: ### Identities -Azure Communication Services maintains a directory of identities, use the [DeleteIdentity](/rest/api/communication/communication-identity/delete?tabs=HTTP) API to remove them. Deleting an identity will revoke all associated access tokens and delete their chat messages. For more information on how to remove an identity [see this page](../quickstarts/access-tokens.md). +Azure Communication Services maintains a directory of identities, use the [DeleteIdentity](/rest/api/communication/communication-identity/delete?tabs=HTTP) API to remove them. Deleting an identity will revoke all associated access tokens and delete their chat messages. For more information on how to remove an identity [see this page](../quickstarts/identity/access-tokens.md). - DeleteIdentity |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md | An exception policy controls the behavior of a Job based on a trigger and execut [cla]: https://cla.microsoft.com [nuget]: https://www.nuget.org/ [netstandars2mappings]: https://github.com/dotnet/standard/blob/master/docs/versions.md-[useraccesstokens]: ../../quickstarts/access-tokens.md?pivots=programming-language-csharp +[useraccesstokens]: ../../quickstarts/identity/access-tokens.md?pivots=programming-language-csharp [communication_resource_docs]: ../../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows [communication_resource_create_portal]: ../../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows [communication_resource_create_power_shell]: /powershell/module/az.communication/new-azcommunicationservice |
communication-services | Sdk Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md | For more information, see the following SDK overviews: To get started with Azure Communication - [Create an Azure Communication Services resource](../quickstarts/create-communication-resource.md)-- Generate [User Access Tokens](../quickstarts/access-tokens.md)+- Generate [User Access Tokens](../quickstarts/identity/access-tokens.md) |
communication-services | Teams Interop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md | Azure Communication Services interoperability isn't compatible with Teams deploy ## Next steps Find more details for External user interoperability:-- [Get access tokens for external user](../quickstarts/access-tokens.md)+- [Get access tokens for external user](../quickstarts/identity/access-tokens.md) - [Join Teams meeting call as a external user](../quickstarts/voice-video-calling/get-started-teams-interop.md) - [Join Teams meeting chat as a external user](../quickstarts/chat/meeting-interop.md) |
communication-services | Pre Call Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md | The Pre-Call API enables developers to programmatically validate a client's re - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Node.js](https://nodejs.org/) active Long Term Support (LTS) versions are recommended. - An active Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](../../quickstarts/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. (Need to grab connection string from the resource through Azure portal.)+- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](../../quickstarts/identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. (Need to grab connection string from the resource through Azure portal.) ```azurecli-interactive az communication identity token issue --scope voip --connection-string "yourConnectionString" ``` - For details, see [Use Azure CLI to Create and Manage Access Tokens](../../quickstarts/access-tokens.md?pivots=platform-azcli). + For details, see [Use Azure CLI to Create and Manage Access Tokens](../../quickstarts/identity/access-tokens.md?pivots=platform-azcli). ## Accessing Pre-Call APIs |
communication-services | Video Effects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-effects.md | The Azure Communication Calling SDK allows you to create video effects that othe - An Azure account with an active subscription is required. See [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) on how to create an Azure account. - [Node.js](https://nodejs.org/) active Long Term Support (LTS) versions are recommended. - An active Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A User Access Token to instantiate a call client. Learn how to [create and manage user access tokens](../../quickstarts/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. (Need to grab connection string from the resource through Azure portal.)+- A User Access Token to instantiate a call client. Learn how to [create and manage user access tokens](../../quickstarts/identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. (Need to grab connection string from the resource through Azure portal.) - Azure Communication Calling client library is properly set up and configured (https://www.npmjs.com/package/@azure/communication-calling). An example using the Azure CLI to ```azurecli-interactive az communication identity token issue --scope voip --connection-string "yourConnectionString" ```-For details on using the CLI, see [Use Azure CLI to Create and Manage Access Tokens](../../quickstarts/access-tokens.md?pivots=platform-azcli). +For details on using the CLI, see [Use Azure CLI to Create and Manage Access Tokens](../../quickstarts/identity/access-tokens.md?pivots=platform-azcli). ## Install the Calling effects SDK Use the `npm install` command to install the Azure Communication Calling Effects SDK for JavaScript. |
communication-services | Record Every Call | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/record-every-call.md | + + Title: Record a call when it starts ++description: In this how-to document, you can learn how to record a call through Azure Communication Services once it starts. ++++++ Last updated : 03/01/2023+++# Record a call when it starts ++Call recording is often used directly through the UI of a calling application, where the user triggers the recording. For applications within industries like banking or healthcare, call recording is required from the get-go. The service needs to automatically record for compliance purposes. This sample shows how to record a call when it starts. It uses Azure Communication Services and Azure Event Grid to trigger an Azure Function when a call starts. It automatically records every call within your Azure Communication Services resource. ++In this QuickStart, we focus on showcasing the processing of call started events through Azure Functions using Event Grid triggers. We use the Call Automation SDK for Azure Communication Services to start recording. ++The `CallStarted` event, published when a call starts, is formatted in the following way: ++```json ++[ + { + "id": "a8bcd8a3-12d7-46ba-8cde-f6d0bda8feeb", + "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}", + "subject": "call/{serverCallId}/startedBy/8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", + "data": { + "startedBy": { + "communicationIdentifier": { + "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", + "communicationUser": { + "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + } + }, + "role": "{role}" + }, + "serverCallId": "{serverCallId}", + "group": { + "id": "00000000-0000-0000-0000-000000000000" + }, + "room": { + "id": "{roomId}" + }, + "isTwoParty": false, + "correlationId": "{correlationId}", + "isRoomsCall": true + }, + "eventType": "Microsoft.Communication.CallStarted", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2021-09-22T17:02:38.6905856Z" + } +] ++``` ++## Pre-requisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- An active Communication Services resource and connection string. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). +- Install [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli). ++## Setting up our local environment ++1. Using [Visual Studio Code](https://code.visualstudio.com/), install the [Azure Functions Extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). +2. With the extension, create an Azure Function following these [instructions](../../../azure-functions/create-first-function-vs-code-csharp.md).
++ Configure the function with the following instructions: + - Language: C# + - Template: Azure Event Grid Trigger + - Function Name: User defined ++ Once created, you see a function in your directory like this: ++ ```csharp + + using System; + using Microsoft.Azure.WebJobs; + using Microsoft.Azure.WebJobs.Extensions.EventGrid; + using Microsoft.Extensions.Logging; + using Azure.Messaging.EventGrid; + using System.Threading.Tasks; + + namespace Company.Function + { + public static class acs_recording_test + { + [FunctionName("acs_recording_test")] + public static async Task RunAsync([EventGridTrigger]EventGridEvent eventGridEvent, ILogger log) + { + log.LogInformation(eventGridEvent.EventType); + } + + } + } ++ ``` ++## Configure Azure Function to receive `CallStarted` event ++1. Configure the Azure Function to perform actions when the `CallStarted` event triggers. ++```csharp ++ public static async Task RunAsync([EventGridTrigger]EventGridEvent eventGridEvent, ILogger log) + { + if(eventGridEvent.EventType == "Microsoft.Communication.CallStarted") + { + log.LogInformation("Call started"); + var callEvent = eventGridEvent.Data.ToObjectFromJson<CallStartedEvent>(); + + // The CallStartedEvent class is defined in the documentation; the object looks like this: + // public class CallStartedEvent + // { + // public StartedBy startedBy { get; set; } + // public string serverCallId { get; set; } + // public Group group { get; set; } + // public bool isTwoParty { get; set; } + // public string correlationId { get; set; } + // public bool isRoomsCall { get; set; } + // } + // public class Group + // { + // public string id { get; set; } + // } + // public class StartedBy + // { + // public CommunicationIdentifier communicationIdentifier { get; set; } + // public string role { get; set; } + // } + // public class CommunicationIdentifier + // { + // public string rawId { get; set; } + // public CommunicationUser communicationUser { get; set; } + // } + // public class CommunicationUser + // { + // public string id { get; set; } + // } + } + } ++``` ++## Start recording ++1. Create a method to handle `CallStarted` events. This method triggers recording to start when the call starts. ++```csharp ++ public static async Task RunAsync([EventGridTrigger]EventGridEvent eventGridEvent, ILogger log) + { + if(eventGridEvent.EventType == "Microsoft.Communication.CallStarted") + { + log.LogInformation("Call started"); + var callEvent = eventGridEvent.Data.ToObjectFromJson<CallStartedEvent>(); + await startRecordingAsync(callEvent.serverCallId); + } + } ++ public static async Task startRecordingAsync (String serverCallId) + { + CallAutomationClient callAutomationClient = new CallAutomationClient(Environment.GetEnvironmentVariable("ACS_CONNECTION_STRING")); + StartRecordingOptions recordingOptions = new StartRecordingOptions(new ServerCallLocator(serverCallId)); + recordingOptions.RecordingChannel = RecordingChannel.Mixed; + recordingOptions.RecordingContent = RecordingContent.AudioVideo; + recordingOptions.RecordingFormat = RecordingFormat.Mp4; + var startRecordingResponse = await callAutomationClient.GetCallRecording() + .StartRecordingAsync(recordingOptions).ConfigureAwait(false); + } ++``` ++### Running locally ++To run the function locally, you can press `F5` in Visual Studio Code. We use [ngrok](https://ngrok.com/) to hook our locally running Azure Function with Azure Event Grid.
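Before wiring up Event Grid, you can smoke-test the handler by posting the sample `CallStarted` payload shown earlier straight to the local Event Grid webhook endpoint. A minimal sketch, assuming the Functions host is on its default port 7071, the function is named `acs_recording_test` as in the template above, and the sample event array is saved in a hypothetical `callstarted.json` file:

```bash
# Post a mock CallStarted event to the locally running Azure Function.
# The aeg-event-type header marks the request as an Event Grid delivery.
# callstarted.json is a hypothetical file holding the sample event array above.
curl -X POST "http://localhost:7071/runtime/webhooks/EventGrid?functionName=acs_recording_test" \
    -H "Content-Type: application/json" \
    -H "aeg-event-type: Notification" \
    -d @callstarted.json
```

The recording call itself fails with the placeholder `serverCallId`, but this confirms the trigger fires and the event parses before you involve ngrok.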
1. Once the function is running, we configure ngrok. (You need to [download ngrok](https://ngrok.com/download) for your environment.) ++ ```bash ++ ngrok http 7071 ++ ``` ++ Copy the forwarding link that ngrok provides for your running function. ++2. Configure `CallStarted` events through Event Grid within your Azure Communication Services resource. We do this using the [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli). You need the resource ID for your Azure Communication Services resource found in the Azure portal. (The resource ID looks something like: `/subscriptions/<<AZURE SUBSCRIPTION ID>>/resourceGroups/<<RESOURCE GROUP NAME>>/providers/Microsoft.Communication/CommunicationServices/<<RESOURCE NAME>>`) ++ ```bash ++ az eventgrid event-subscription create --name "<<EVENT_SUBSCRIPTION_NAME>>" --endpoint-type webhook --endpoint "<<NGROK URL>>/runtime/webhooks/EventGrid?functionName=<<FUNCTION NAME>>" --source-resource-id "<<RESOURCE_ID>>" --included-event-types Microsoft.Communication.CallStarted ++ ``` ++3. Now that everything is hooked up, test the flow by starting a call on your resource. You should see the console logs on your terminal where the function is running. You can check that the recording is starting by using the [call recording feature](../calling-sdk/record-calls.md?pivots=platform-web) on the calling SDK and checking that the recording state boolean turns TRUE. ++### Deploy to Azure ++To deploy the Azure Function to Azure, you need to follow these [instructions](../../../azure-functions/create-first-function-vs-code-csharp.md#deploy-the-project-to-azure). Once deployed, we configure Event Grid for the Azure Communication Services resource. With the URL for the Azure Function that was deployed (URL found in the Azure portal under the function), we run a similar command: ++```bash ++az eventgrid event-subscription update --name "<<EVENT_SUBSCRIPTION_NAME>>" --endpoint-type azurefunction --endpoint "<<AZ FUNCTION URL>>" --source-resource-id "<<RESOURCE_ID>>" ++``` ++Since we're updating the event subscription we created, make sure to use the same event subscription name you used in the previous step. ++You can test by starting a call in your resource, similar to the previous step. |
communication-services | Call Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/call-transcription.md | When using call transcription you may want to let your users know that a call is - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ::: zone pivot="platform-android" |
communication-services | Callkit Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/callkit-integration.md | description: Steps on how to integrate CallKit with ACS Calling SDK - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).- - A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md). + - A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ## CallKit Integration (within SDK) |
communication-services | Dominant Speaker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/dominant-speaker.md | During an active call, you may want to get a list of active speakers in order to - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ::: zone pivot="platform-web" |
communication-services | Manage Calls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-calls.md | Learn how to manage calls with the Azure Communication Services SDKS. We'll lear - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/access-tokens.md)+- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/identity/access-tokens.md) - Optional: Complete the quickstart for [getting started with adding calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ::: zone pivot="platform-web" |
communication-services | Manage Video | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-video.md | Learn how to manage video calls with the Azure Communication Services SDKS. We'l - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ::: zone pivot="platform-web" |
communication-services | Push Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/push-notifications.md | Here, we'll learn how to enable push notifications for Azure Communication Servi - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ::: zone pivot="platform-android" |
communication-services | Raise Hand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/raise-hand.md | During an active call, you may want to send or receive states from other users. - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ::: zone pivot="platform-web" |
communication-services | Record Calls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/record-calls.md | zone_pivot_groups: acs-web-ios-android - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ::: zone pivot="platform-web" |
communication-services | Teams Interoperability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/teams-interoperability.md | Azure Communication Services SDKs can allow your users to join regular Microsoft - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) To join a Teams meeting, use the `join` method and pass a meeting link or a meeting's coordinates. |
communication-services | Transfer Calls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/transfer-calls.md | During an active call, you may want to transfer the call to another person or nu - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) [!INCLUDE [Transfer calls Client-side JavaScript](./includes/transfer-calls/transfer-calls-web.md)] |
communication-services | Data Loss Prevention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/chat-sdk/data-loss-prevention.md | -# How to integrate with Microsoft Teams Data Loss Prevention policies by subscribing to real-time chat notifications +# How to integrate with Microsoft Teams Data Loss Prevention policies A Microsoft Teams administrator can configure policies for data loss prevention (DLP) to prevent leakage of sensitive information from Teams users in Teams meetings. Developers can integrate chat in Teams meetings with Azure Communication Services for Communication Services users via the Communication Services UI library or custom integration. This article describes how to incorporate data loss prevention without a UI library. You need to subscribe to real-time notifications and listen for message updates. Data Loss Prevention policies only apply to messages sent by Teams users and aren't meant to protect Azure Communication Services users from sending out sensitive information. +#### Data Loss Prevention by subscribing to real-time chat notifications ```javascript let endpointUrl = '<replace with your resource endpoint>'; let chatClient = new ChatClient(endpointUrl, new AzureCommunicationTokenCredenti await chatClient.startRealtimeNotifications(); chatClient.on("chatMessageEdited", (e) => { - if(e.messageBody == "" && e.sender.kind == "microsoftTeamsUser") - // Show UI message blocked + if (e.messageBody == "" && + e.sender.kind == "microsoftTeamsUser") { + // Show UI message blocked + } }); ``` +#### Data Loss Prevention by retrieving previous chat messages +```javascript +const messages = chatThreadClient.listMessages(); +for await (const message of messages) { + if (message.content?.message == "" && + message.sender?.kind == "microsoftTeamsUser") { + // Show UI message blocked + } +} +``` + ## Next steps - [Learn how to enable Microsoft Teams Data Loss Prevention](/microsoft-365/compliance/dlp-microsoft-teams?view=o365-worldwide) |
communication-services | Local Testing Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/event-grid/local-testing-event-grid.md | + + Title: Test your Event Grid handler locally ++description: In this how-to document, you can learn how to locally test your Event Grid handler for Azure Communication Services events with Postman. ++++ Last updated : 02/09/2023+++++# Test your Event Grid handler locally ++Testing Event Grid triggered Azure Functions locally can be complicated. You don't want to have to trigger events over and over to test your flow. It can also get expensive, as triggering those events might require you to perform an action that costs money, like sending an SMS or placing a phone call. To help with testing, we show you how to use Postman to trigger your Azure Function with a payload that mimics the Event Grid event. ++## Pre-requisites ++- Install [Postman](https://www.postman.com/downloads/). +- Have a running Azure Function that can be triggered by Event Grid. If you don't have one, you can follow the [quickstart](../../../azure-functions/functions-bindings-event-grid-trigger.md?tabs=in-process%2Cextensionv3&pivots=programming-language-javascript) to create one. ++The Azure Function can run either in Azure (if you want to test it with some test events) or locally (press `F5` in Visual Studio Code) if you want to test the entire flow. If you want to test the entire flow locally, you need to use [ngrok](https://ngrok.com/) to hook your locally running Azure Function. Configure ngrok by running the command: ++```bash ++ngrok http 7071 ++``` ++## Configure Postman ++1. Open Postman and create a new request. ++2. Select the `POST` method. ++3. Enter the URL of your Azure Function. It can be either the URL of the Azure Function running in Azure or the ngrok URL if you're running it locally. Ensure that you add the function name at the end of the URL: `/runtime/webhooks/EventGrid?functionName=<<FUNCTION_NAME>>`. ++4. Select the `Body` tab and select `raw` and `JSON` from the dropdown. In the body, you add a test schema for the event you want to trigger. For example, if you're testing an Azure Function that is triggered by receiving SMS events, you add the following: ++ ```json + + { + "id": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e", + "topic": "/subscriptions/50ad1522-5c2c-4d9a-a6c8-67c11ecb75b8/resourcegroups/acse2e/providers/microsoft.communication/communicationservices/{communication-services-resource-name}", + "subject": "/phonenumber/15555555555", + "data": { + "MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e", + "From": "15555555555", + "To": "15555555555", + "Message": "Great to connect with ACS events", + "ReceivedTimestamp": "2020-09-18T00:27:45.32Z" + }, + "eventType": "Microsoft.Communication.SMSReceived", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2020-09-18T00:27:47Z" + } + + ``` ++ You can find more information about the different event types used for Azure Communication Services in the [documentation](../../../event-grid/event-schema-communication-services.md). ++5. Select the `Headers` tab and add the following headers: ++ - `Content-Type`: `application/json` + - `aeg-event-type`: `Notification` ++6. Select the `Send` button to trigger the event. ++ At this point, an event should trigger in your Azure Function. You can verify the event by looking at the execution of your Azure Function.
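If you'd rather script this than click through Postman, the same request can be sent with curl — a sketch assuming the function runs locally on the default port 7071 and the mock event above is saved as a hypothetical `event.json` file:

```bash
# Send the mock Event Grid event from the command line instead of Postman.
# URL, body, and headers mirror the Postman steps above.
curl -X POST "http://localhost:7071/runtime/webhooks/EventGrid?functionName=<<FUNCTION_NAME>>" \
    -H "Content-Type: application/json" \
    -H "aeg-event-type: Notification" \
    -d @event.json
```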
You can then validate that the function is doing its job correctly. |
communication-services | View Events Request Bin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/event-grid/view-events-request-bin.md | + + Title: Validate Azure Communication Services events ++description: In this how-to document, you can learn how to validate Azure Communication Services events with RequestBin or Azure Event Viewer. ++++ Last updated : 02/09/2023+++++# Validate Azure Communication Services events ++This document shows you how to validate that your Azure Communication Services resource sends events using Azure Event Grid viewer or RequestBin. ++## Pre-requisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- An active Communication Services resource and connection string. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). +- Install [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli). ++If you already have an Azure Event Grid viewer deployed or would like to have a more robust viewer in place, you can follow instructions to [deploy it](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/). You need the endpoint generated by the Event Grid viewer. ++Alternatively, if you want a quick and easy way to validate your events, you can use [RequestBin](https://requestbin.com/). RequestBin offers two modalities to pick from. If you want to quickly test your events, you can use the [public endpoint](https://requestbin.com/r) setup. These public endpoints make event data accessible to anyone with the URL. If you prefer to keep it private, you can create a RequestBin account and create a private endpoint. For more information, see RequestBin [public vs private endpoints](https://requestbin.com/docs/#public-vs-private-endpoints). ++The next steps are the same for both options. ++## Configure your Azure Communication Services resource to send events to your endpoint ++1. Using [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli), we configure the endpoint we created in the pre-requisites to receive events from your Azure Communication Services resource. You need the resource ID for your Azure Communication Services resource found in the Azure portal. ++ ```bash ++ az eventgrid event-subscription create --name "<<EVENT_SUBSCRIPTION_NAME>>" --endpoint-type webhook --endpoint "<<URL>>" --source-resource-id "<<RESOURCE_ID>>" --included-event-types Microsoft.Communication.SMSReceived ++ ``` ++ In the command, we only added the `Microsoft.Communication.SMSReceived` event type. You can add more event types to the command if you would like to receive more events. For a list of all the event types, see [Azure Communication Services events](../../../event-grid/event-schema-communication-services.md). ++2. (Optional, only if using RequestBin) You need to copy the `validationURL` from the first event that gets posted to your endpoint. Paste that URL into your browser to validate the endpoint. The page should say 'Webhook successfully validated as a subscription endpoint'. ++## View events ++Now that you've configured your endpoint to receive events, you can use it to view events as they come through. |
communication-services | Data Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/data-model.md | The UI Library makes it simple for developers to inject that user data model int - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/access-tokens.md)+- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/identity/access-tokens.md) - Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ::: zone pivot="platform-web" |
communication-services | Localization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/localization.md | Learn how to set up the localization correctly using the UI Library in your appl - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/access-tokens.md)+- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/identity/access-tokens.md) - Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ::: zone pivot="platform-web" |
communication-services | Theming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/theming.md | ACS UI Library uses components and icons from both [Fluent UI](https://developer - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/access-tokens.md)+- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/identity/access-tokens.md) - Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ::: zone pivot="platform-web" |
communication-services | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/troubleshooting.md | When troubleshooting happens for voice or video calls, you may be asked to provi - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/access-tokens.md)+- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/identity/access-tokens.md) - Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ::: zone pivot="platform-web" |
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md | After creating a Communication Services resource you can start building client s | Resource |Description | | | |-|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens authenticate clients against your Azure Communication Services resource. These tokens are provisioned and reissued using Communication Services Identity APIs and SDKs.| +|**[Create your first user access token](./quickstarts/identity/access-tokens.md)**|User access tokens authenticate clients against your Azure Communication Services resource. These tokens are provisioned and reissued using Communication Services Identity APIs and SDKs.| |**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your browser or native apps using the Calling SDK. | |**[Add telephony calling to your app](./quickstarts/telephony/pstn-call.md)**|With Azure Communication Services you can add telephony calling capabilities to your application.| |**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.| |
communication-services | Access Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/access-tokens.md | - Title: Quickstart - Create and manage access tokens- -description: Learn how to manage identities and access tokens by using the Azure Communication Services Identity SDK. ---- Previously updated : 11/17/2021----zone_pivot_groups: acs-azcli-js-csharp-java-python ----# Quickstart: Create and manage access tokens --Access tokens let Azure Communication Services SDKs [authenticate](../concepts/authentication.md) directly against Azure Communication Services as a particular identity. You'll need to create access tokens if you want your users to join a call or chat thread within your application. --In this quickstart, you'll learn how to use the Azure Communication Services SDKs to create identities and manage your access tokens. For production use cases, we recommend that you generate access tokens on a [server-side service](../concepts/client-and-server-architecture.md). -------## Use identity for monitoring and metrics --The user ID is intended to act as a primary key for logs and metrics that are collected through Azure Monitor. To view all of a user's calls, for example, you can set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a single user. --Learn more about [authentication concepts](../concepts/authentication.md), call diagnostics through [log analytics](../concepts/analytics/log-analytics.md), and [metrics](../concepts/metrics.md) that are available to you. --## Clean up resources --To clean up and remove a Communication Services subscription, delete the resource or resource group. Deleting a resource group also deletes any other resources that are associated with it. For more information, see the "Clean up resources" section of [Create and manage Communication Services resources](./create-communication-resource.md#clean-up-resources). --## Next steps --In this quickstart, you learned how to: --> [!div class="checklist"] -> * Manage identities -> * Issue access tokens -> * Use the Communication Services Identity SDK ---> [!div class="nextstepaction"] -> [Add voice calling to your app](./voice-video-calling/getting-started-with-calling.md) --You might also want to: - |
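The access-token quickstart above centers on the Identity SDK flow: create an identity, then issue it a scoped access token. For orientation, here is a minimal Python sketch of that flow, assuming the `azure-communication-identity` package; the connection string is a placeholder you would load from a secure store.

```python
# Minimal sketch of the create-identity / issue-token flow described above,
# assuming the azure-communication-identity package (pip install azure-communication-identity).
from azure.communication.identity import CommunicationIdentityClient

# Placeholder connection string - load yours from a key vault or environment variable.
connection_string = "endpoint=https://<RESOURCE_NAME>.communication.azure.com/;accesskey=<KEY>"
client = CommunicationIdentityClient.from_connection_string(connection_string)

user = client.create_user()                             # create an identity
token_result = client.get_token(user, scopes=["voip"])  # issue a scoped access token

print("User ID:", user.properties["id"])
print("Token expires on:", token_result.expires_on)
```

As the quickstart itself notes, in production this issuance belongs in a server-side service rather than in client code.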
communication-services | Quickstart Botframework Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md | Now that your bot is created and deployed, create a Communication Services resou 1. Complete the steps to [create a Communication Services resource](../../quickstarts/create-communication-resource.md). -1. Create a Communication Services user and issue a [user access token](../../quickstarts/access-tokens.md). Be sure to set the scope to **chat**. *Copy the token string and the user ID string*. +1. Create a Communication Services user and issue a [user access token](../../quickstarts/identity/access-tokens.md). Be sure to set the scope to **chat**. *Copy the token string and the user ID string*. ## Enable the Communication Services Chat channel |
communication-services | Create Communication Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/create-communication-resource.md | In this quickstart you learned how to: > * Delete the resource > [!div class="nextstepaction"]-> [Create your first user access tokens](access-tokens.md) +> [Create your first user access tokens](identity/access-tokens.md) |
communication-services | Access Token Teams External Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/access-token-teams-external-users.md | - Title: Quickstart - Create and manage access tokens for Teams external users- -description: Learn how to manage identities and access tokens for Teams external users by using the Azure Communication Services Identity SDK. ---- Previously updated : 08/05/2022----zone_pivot_groups: acs-azcli-js-csharp-java-python ----# Quickstart: Create and manage access tokens for Teams external users --Teams external users are authenticated as Azure Communication Services users in Teams. With an access token for Azure Communication Services users, you can use chat and calling SDKs to join Teams meeting audio, video, and chat as a Teams external user. The quickstart here is identical to [identity and access token management of Azure Communication Services users](../access-tokens.md). --In this quickstart, you'll learn how to use the Azure Communication Services SDKs to create identities and manage your access tokens. For production use cases, we recommend that you generate access tokens on a [server-side service](../../concepts/client-and-server-architecture.md). -------## Use identity for monitoring and metrics --The user ID is a primary key for logs and metrics collected through Azure Monitor. To view all of a user's calls, for example, you can set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a single user. --Learn more about [authentication concepts](../../concepts/authentication.md), call diagnostics through [log analytics](../../concepts/analytics/log-analytics.md), and [metrics](../../concepts/metrics.md) that are available to you. --## Clean up resources --Delete the resource or resource group to clean up and remove a Communication Services subscription. Deleting a resource group also deletes any other resources that are associated with it. For more information, see the "Clean up resources" section of [Create and manage Communication Services resources](../create-communication-resource.md#clean-up-resources). --## Next steps --In this quickstart, you learned how to: --> [!div class="checklist"] -> * Manage Teams external user identities -> * Issue access tokens for Teams external users -> * Use the Communication Services Identity SDK ---> [!div class="nextstepaction"] -> [Add Teams meeting voice to your app](../voice-video-calling/get-started-teams-interop.md) --You might also want to: - |
communication-services | Access Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/access-tokens.md | + + Title: Quickstart - Create and manage access tokens ++description: Learn how to manage identities and access tokens by using the Azure Communication Services Identity SDK. ++++ Last updated : 11/17/2021++++zone_pivot_groups: acs-azcli-js-csharp-java-python-portal-nocode ++++# Quickstart: Create and manage access tokens ++Access tokens let Azure Communication Services SDKs [authenticate](../../concepts/authentication.md) directly against Azure Communication Services as a particular identity. You'll need to create access tokens if you want your users to join a call or chat thread within your application. ++In this quickstart, you'll learn how to use the Azure Communication Services SDKs to create identities and manage your access tokens. For production use cases, we recommend that you generate access tokens on a [server-side service](../../concepts/client-and-server-architecture.md). +++++++++## Use identity for monitoring and metrics ++The user ID is intended to act as a primary key for logs and metrics that are collected through Azure Monitor. To view all of a user's calls, for example, you can set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a single user. ++Learn more about [authentication concepts](../../concepts/authentication.md), call diagnostics through [log analytics](../../concepts/analytics/log-analytics.md), and [metrics](../../concepts/metrics.md) that are available to you. ++## Clean up resources ++To clean up and remove a Communication Services subscription, delete the resource or resource group. Deleting a resource group also deletes any other resources that are associated with it. For more information, see the "Clean up resources" section of [Create and manage Communication Services resources](../create-communication-resource.md#clean-up-resources). ++To clean up your logic app workflow and related resources, review [how to clean up Logic Apps resources](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#clean-up-resources). +++## Next steps ++In this quickstart, you learned how to: ++> [!div class="checklist"] +> * Issue access tokens +> * Manage identities ++> [!div class="nextstepaction"] +> [Add voice calling to your app](../voice-video-calling/getting-started-with-calling.md) ++You might also want to: ++ - [Learn about authentication](../../concepts/authentication.md) + - [Add chat to your app](../chat/get-started.md) + - [Learn about client and server architecture](../../concepts/client-and-server-architecture.md) + - [Deploy trusted authentication service hero sample](../../samples/trusted-auth-sample.md) +++## Next steps ++In this quickstart, you learned how to create a user, delete a user, issue a user an access token, and remove a user access token using the Azure Communication Services Identity connector. To learn more, check the [Azure Communication Services Identity Connector](/connectors/acsidentity/) documentation. ++To see how tokens are used by other connectors, check out [how to send a chat message](../chat/logic-app.md) from Power Automate using Azure Communication Services. ++To learn more about how to send an email using the Azure Communication Services Email connector, check [Send email message in Power Automate with Azure Communication Services](../email/logic-app.md). |
communication-services | Logic App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/logic-app.md | - Title: Quickstart - Create and Manage Azure Communication Services users and access tokens in Microsoft Power Automate- -description: In this quickstart, learn how to manage users and access tokens in Azure Logic Apps workflows by using the Azure Communication Services Identity connector. ---- Previously updated : 07/20/2022------# Quickstart: Create and Manage Azure Communication Services users and access tokens in Microsoft Power Automate --Access tokens let Azure Communication Services connectors authenticate directly against Azure Communication Services as a particular identity. You'll need to create access tokens if you want to perform actions like send a message in a chat using the [Azure Communication Services Chat](../chat/logic-app.md) connector. -This quickstart shows how to [create a user](#create-user), [delete a user](#delete-a-user), [issue a user an access token](#issue-a-user-access-token) and [remove user access token](#revoke-user-access-tokens) using the [Azure Communication Services Identity](https://powerautomate.microsoft.com/connectors/details/shared_acsidentity/azure-communication-services-identity/) connector. ---## Prerequisites --- An Azure account with an active subscription, or [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- An active Azure Communication Services resource, or [create a Communication Services resource](../create-communication-resource.md).--- An active Logic Apps resource (logic app), or [create a blank logic app but with the trigger that you want to use](../../../logic-apps/quickstart-create-first-logic-app-workflow.md). Currently, the Azure Communication Services Identity connector provides only actions, so your logic app requires a trigger, at minimum.---## Create user --To add a new step in your workflow by using the Azure Communication Services Identity connector, follow these steps in Power Automate with your flow open in edit mode. -1. On the designer, under the step where you want to add the new action, select New step. Alternatively, to add the new action between steps, move your pointer over the arrow between those steps, select the plus sign (+), and select Add an action. --1. In the Choose an operation search box, enter Communication Services Identity. From the actions list, select Create a user. -- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Create user action."::: --1. Provide the Connection String. You can find it in the [Azure portal](https://portal.azure.com/), within your Azure Communication Services resource, under the Keys option in the left menu > Connection String. -- :::image type="content" source="./media/logic-app/azure-portal-connection-string.png" alt-text="Screenshot that shows the Keys page within an Azure Communication Services Resource." lightbox="./media/logic-app/azure-portal-connection-string.png"::: --1. Provide a Connection Name --1. Click **Create** -- This action will output a User ID, which is a Communication Services user identity. - Additionally, if you click "Show advanced options" and select the Token Scope, the action will also output an access token and its expiration time with the specified scope. 
-- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action.png" alt-text="Screenshot that shows the Azure Communication Services connector Create user action."::: -- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action-advanced.png" alt-text="Screenshot that shows the Azure Communication Services connector Create user action advanced options."::: ---## Issue a user access token --After you have a Communication Services identity, you can use the Issue a user access token action to issue an access token. The following steps will show you how: -1. Add a new action and enter Communication Services Identity in the search box. From the actions list, select Issue a user access token. -- :::image type="content" source="./media/logic-app/azure-communications-services-connector-issue-access-token-action.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Issue access token action."::: -- -1. Then, you can use the User ID output from the previous [Create a user](#create-user) step. --1. Specify the token scope: VoIP or chat. [Learn more about tokens and authentication](../../concepts/authentication.md). - - :::image type="content" source="./media/logic-app/azure-communications-services-connector-issue-access-token-action-token-scope.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Issue access token action, specifying the token scope."::: --This will output an access token and its expiration time with the specified scope. --## Revoke user access tokens --After you have a Communication Services identity, you can use the Revoke user access tokens action to revoke access tokens. The following steps will show you how: -1. Add a new action and enter Communication Services Identity in the search box. From the actions list, select Revoke user access tokens. - - :::image type="content" source="./media/logic-app/azure-communications-services-connector-revoke-access-token-action.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Revoke access token action."::: --1. Specify the User ID. -- :::image type="content" source="./media/logic-app/azure-communications-services-connector-revoke-access-token-action-user-id.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Revoke access token action input."::: - -This will revoke all user access tokens for the specified user; there are no outputs for this action. ---## Delete a user --After you have a Communication Services identity, you can use the Delete a user action to delete the user. The following steps will show you how: -1. Add a new action and enter Communication Services Identity in the search box. From the actions list, select Delete a user. -- :::image type="content" source="./media/logic-app/azure-communications-services-connector-delete-user.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Delete user action."::: --1. Specify the User ID. - - :::image type="content" source="./media/logic-app/azure-communications-services-connector-delete-user-id.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Delete user action input."::: -- This will remove the user and revoke all user access tokens for the specified user; there are no outputs for this action. 
---## Test your logic app --To manually start your workflow, on the designer toolbar, select **Run**. The workflow should create a user, issue an access token for that user, then remove it and delete the user. For more information, review [how to run your workflow](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#run-workflow). You can check the outputs of these actions after the workflow runs successfully. --## Clean up resources --To remove a Communication Services subscription, delete the Communication Services resource or resource group. Deleting the resource group also deletes any other resources in that group. For more information, review [how to clean up Communication Services resources](../create-communication-resource.md#clean-up-resources). --To clean up your logic app workflow and related resources, review [how to clean up Logic Apps resources](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#clean-up-resources). --## Next steps --In this quickstart, you learned how to create a user, delete a user, issue a user an access token, and remove a user access token using the Azure Communication Services Identity connector. To learn more, check the [Azure Communication Services Identity Connector](/connectors/acsidentity/) documentation. --To see how tokens are used by other connectors, check out [how to send a chat message](../chat/logic-app.md) from Power Automate using Azure Communication Services. --To learn more about how to send an email using the Azure Communication Services Email connector, check [Send email message in Power Automate with Azure Communication Services](../email/logic-app.md). |
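The connector actions walked through above (create a user, issue a token, revoke tokens, delete the user) map one-to-one onto Identity SDK calls, which can help when moving from a no-code flow to a service. A hedged Python sketch of the same lifecycle, again assuming the `azure-communication-identity` package and a placeholder connection string:

```python
from azure.communication.identity import CommunicationIdentityClient

# Placeholder connection string - in practice, read it from a secure store.
client = CommunicationIdentityClient.from_connection_string(
    "endpoint=https://<RESOURCE_NAME>.communication.azure.com/;accesskey=<KEY>"
)

user = client.create_user()                      # "Create a user"
token = client.get_token(user, scopes=["chat"])  # "Issue a user access token"
client.revoke_tokens(user)                       # "Revoke user access tokens" (no output)
client.delete_user(user)                         # "Delete a user" (also revokes its tokens)
```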
communication-services | Quick Create Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/quick-create-identity.md | - Title: Quickstart - Quickly create Azure Communication Services identities for testing- -description: Learn how to use the Identities & Access Tokens tool in the Azure portal to use with samples and for troubleshooting. ---- Previously updated : 07/19/2021-------# Quickstart: Quickly create Azure Communication Services access tokens for testing --In the [Azure portal](https://portal.azure.com) Communication Services extension, you can generate a Communication Services identity and access token. This lets you skip creating an authentication service, which makes it easier for you to test the sample apps and simple development scenarios. This feature is intended for small-scale validation and testing and should not be used for production scenarios. For production code, refer to the [creating access tokens quickstart](../access-tokens.md). --The tool showcases the behavior of the ```Identity SDK``` in a simple user experience. Tokens and identities that are created through this tool follow the same behaviors and rules as if they were created using the ```Identity SDK```. For example, access tokens expire after 24 hours. --## Prerequisites --- An [Azure Communication Services resource](../create-communication-resource.md)--## Create the access tokens --In the [Azure portal](https://portal.azure.com), navigate to the **Identities & User Access Tokens** blade within your Communication Services resource. --Choose the scope of the access tokens. You can select none, one, or multiple. Click **Generate**. --You'll see an identity and corresponding user access token generated. You can copy these strings and use them in the [sample apps](../../samples/overview.md) and other testing scenarios. --## Next steps ---You may also want to: - |
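The portal tool described above mints an identity and a token in one step; the Identity SDK has a single-call helper that behaves the same way, including the 24-hour token expiry. A minimal sketch, assuming the `azure-communication-identity` package and a placeholder connection string:

```python
from azure.communication.identity import CommunicationIdentityClient

client = CommunicationIdentityClient.from_connection_string(
    "endpoint=https://<RESOURCE_NAME>.communication.azure.com/;accesskey=<KEY>"  # placeholder
)

# One call creates the identity and issues a token, much like the portal tool.
user, token = client.create_user_and_token(scopes=["voip", "chat"])
print("User ID:", user.properties["id"])
print("Expires on:", token.expires_on)  # tokens created this way expire after 24 hours
```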
communication-services | Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/service-principal.md | This quickstart shows you how to authorize access to the Identity and SMS SDKs f - [Learn more about Azure role-based access control](../../../../articles/role-based-access-control/index.yml) - [Learn more about Azure identity library for .NET](/dotnet/api/overview/azure/identity-readme)-- [Creating user access tokens](../../quickstarts/access-tokens.md)+- [Creating user access tokens](../../quickstarts/identity/access-tokens.md) - [Send an SMS message](../../quickstarts/sms/send.md) - [Learn more about SMS](../../concepts/sms/concepts.md) - [Quickly create an identity for testing](./quick-create-identity.md). |
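The service-principal quickstart above authorizes the Identity and SMS SDKs with Azure AD credentials rather than a connection string. A minimal sketch of that pattern, assuming the `azure-identity` and `azure-communication-identity` packages and a placeholder endpoint:

```python
from azure.identity import DefaultAzureCredential
from azure.communication.identity import CommunicationIdentityClient

# DefaultAzureCredential reads AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_CLIENT_SECRET
# from the environment (or uses a managed identity) - no connection string needed.
endpoint = "https://<RESOURCE_NAME>.communication.azure.com"  # placeholder
client = CommunicationIdentityClient(endpoint, DefaultAzureCredential())

user = client.create_user()
print("Created user with service principal auth:", user.properties["id"])
```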
communication-services | Join Rooms Call | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/join-rooms-call.md | zone_pivot_groups: acs-web-ios-android - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).-- Two or more Communication User Identities. [Create and manage access tokens](../access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).+- Two or more Communication User Identities. [Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md). - A room resource. [Create and manage rooms](get-started-rooms.md) ## Obtain user access token -You'll need to create a User Access Token for each call participant. [Learn how to create and manage user access tokens](../access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. +You'll need to create a User Access Token for each call participant. [Learn how to create and manage user access tokens](../identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. ```azurecli-interactive az communication identity token issue --scope voip --connection-string "yourConnectionString" ``` -For details, see [Use Azure CLI to Create and Manage Access Tokens](../access-tokens.md?pivots=platform-azcli). +For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli). ::: zone pivot="platform-web" [!INCLUDE [Join a room call from web calling SDK](./includes/rooms-quickstart-call-web.md)] |
communication-services | Receive Sms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/receive-sms.md | + + Title: Quickstart - Receive and Reply to SMS ++description: "In this quickstart, you'll learn how to receive an SMS message by using Azure Communication Services." ++++ Last updated : 02/09/2023++++zone_pivot_groups: acs-js-power +++# Quickstart: Receive and Reply to SMS ++Azure Communication Services SMS capabilities provide developers with options to consume SMS received events. The events are posted to Azure Event Grid, which provides out-of-the-box integrations to process them using webhooks, Azure Functions, Power Automate / Logic App connectors, and more. ++Once received, SMS messages can be processed to generate a response or simply logged to a database for future access. ++In this quickstart, we focus on processing SMS received events through Azure Functions using Event Grid triggers, and through no-code connectors for Power Automate / Logic Apps. ++The `SMSReceived` event generated when an SMS is sent to an Azure Communication Services phone number is formatted in the following way: ++```json +[{ + "id": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e", + "topic": "/subscriptions/50ad1522-5c2c-4d9a-a6c8-67c11ecb75b8/resourcegroups/acse2e/providers/microsoft.communication/communicationservices/{communication-services-resource-name}", + "subject": "/phonenumber/15555555555", + "data": { + "MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e", + "From": "15555555555", + "To": "15555555555", + "Message": "Great to connect with ACS events", + "ReceivedTimestamp": "2020-09-18T00:27:45.32Z" + }, + "eventType": "Microsoft.Communication.SMSReceived", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2020-09-18T00:27:47Z" +}] +``` ++To start generating the events, we must configure Azure Event Grid for our Azure Communication Services resource. Using Event Grid incurs an additional charge. More information on Event Grid pricing can be found on the [pricing page](https://azure.microsoft.com/pricing/details/event-grid/). ++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md). +- An SMS-enabled telephone number. [Get a phone number](../telephony/get-phone-number.md). +- Enable the Event Grid resource provider on your subscription. See [instructions](../sms/handle-sms-events.md#register-an-event-grid-resource-provider). ++++## Clean up resources ++If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources). ++## Toll-free verification ++If you have a new toll-free number and want to send a [high volume of SMS messages](../../concepts/sms/sms-faq.md#what-happens-if-i-dont-verify-my-toll-free-numbers) or send SMS messages to Canadian phone numbers, see [how to submit a toll-free verification](../../concepts/sms/sms-faq.md#how-do-i-submit-a-toll-free-verification) to learn how to verify your toll-free number. 
++## Next steps ++In this quickstart, you learned how to receive and reply to SMS messages by using Communication Services. ++> [!div class="nextstepaction"] +> [Send SMS](./send.md) ++> [!div class="nextstepaction"] +> [Phone number types](../../concepts/telephony/plan-solution.md) ++> [!div class="nextstepaction"] +> [Learn more about SMS](../../concepts/sms/concepts.md) |
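Because the quickstart above centers on handling the `SMSReceived` payload with an Azure Functions Event Grid trigger, a hedged Python sketch of such a function may help; the reply step is left as a comment, and the handler name follows the Functions convention rather than anything in the article.

```python
# Sketch of an Event Grid-triggered Azure Function that handles the
# Microsoft.Communication.SMSReceived event shown above. The field names
# match the sample JSON payload in the quickstart.
import logging
import azure.functions as func


def main(event: func.EventGridEvent):
    if event.event_type != "Microsoft.Communication.SMSReceived":
        return  # ignore other Communication Services events

    data = event.get_json()
    sender = data["From"]      # number that sent the SMS
    message = data["Message"]  # SMS body
    logging.info("SMS received from %s: %s", sender, message)

    # From here you could reply with the azure-communication-sms SmsClient,
    # or log the message to a database for future access.
```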
communication-services | Trusted Auth Sample | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/trusted-auth-sample.md | This sample can help you in the following scenarios: - As a developer, you need to enable an authentication flow for Azure Communication Services support of Teams identities, which is done by using a Microsoft 365 Azure Active Directory identity of a Teams user to fetch an Azure Communication Services token to be able to join Teams calling/chat. > [!NOTE]->If you are looking to get started with Azure Communication Services, but are still in learning / prototyping phases, check out our [quickstarts for getting started with Azure communication services users and access tokens](../quickstarts/access-tokens.md?pivots=programming-language-csharp). +>If you are looking to get started with Azure Communication Services, but are still in learning / prototyping phases, check out our [quickstarts for getting started with Azure communication services users and access tokens](../quickstarts/identity/access-tokens.md?pivots=programming-language-csharp). |
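The sample above revolves around exchanging a Teams user's Azure AD token for an Azure Communication Services token. The Identity SDK exposes that exchange directly; a minimal sketch, assuming the `azure-communication-identity` package, with the Azure AD token, client ID, and user object ID all placeholders your auth flow would supply:

```python
from azure.communication.identity import CommunicationIdentityClient

client = CommunicationIdentityClient.from_connection_string(
    "endpoint=https://<RESOURCE_NAME>.communication.azure.com/;accesskey=<KEY>"  # placeholder
)

# Exchange the Teams user's Azure AD access token (acquired via MSAL) for an ACS token.
acs_token = client.get_token_for_teams_user(
    "<AAD_ACCESS_TOKEN>",      # placeholder: Teams user's Azure AD token
    "<AAD_CLIENT_ID>",         # placeholder: your Azure AD app registration ID
    "<TEAMS_USER_OBJECT_ID>",  # placeholder: the Teams user's object ID
)
print("ACS token for the Teams user expires on:", acs_token.expires_on)
```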
communication-services | Building App Start | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/building-app-start.md | In this tutorial, you learn how to: - The [Azure App Service extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice). The extension allows deploying websites with the option to configure fully managed continuous integration and continuous delivery (CI/CD). - The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) to build your own serverless applications. For example, you can host your authentication application in Azure Functions. - An active Communication Services resource and connection string. [Learn how to create a Communication Services resource](../quickstarts/create-communication-resource.md).-- A user access token. For instructions, see the [quickstart for creating and managing access tokens](../quickstarts/access-tokens.md?pivots=programming-language-javascript) or the [tutorial for building a trusted authentication service](./trusted-service-tutorial.md).+- A user access token. For instructions, see the [quickstart for creating and managing access tokens](../quickstarts/identity/access-tokens.md?pivots=programming-language-javascript) or the [tutorial for building a trusted authentication service](./trusted-service-tutorial.md). ## Configure your development environment You're now ready to build your first Azure Communication Services web applicatio You might also want to: - [Add chat to your app](../quickstarts/chat/get-started.md)-- [Create user access tokens](../quickstarts/access-tokens.md)+- [Create user access tokens](../quickstarts/identity/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md) - [Learn about authentication](../concepts/authentication.md) |
communication-services | Events Playbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md | The goal of this document is to reduce the time it takes for Event Management Pl ## What are virtual events and event management platforms? -Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios. +Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and attendees to participate in those events within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios. ## What are the building blocks of an event management platform? For event attendees, they are presented with an experience that enables them to - Teams Client (Web or Desktop): Attendees can directly join events using a Teams Client by using a provided join link. They get access to the full Teams experience. -- Azure Communication +- Azure Communication ### 3. Host & Organizer experience Microsoft Graph enables event management platforms to empower organizers to sche 1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and which will receive notifications for it. We recommend not using a personal production account given the overhead it might incur in the form of reminders. - 1. As part of the application setup, the service account is used to login into the solution once. With this permission the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](../../active-directory/develop/access-tokens.md). and [refresh tokens](../../active-directory/develop/refresh-tokens.md). + 2. As part of the application setup, the service account is used to log in to the solution once. 
With this permission, the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](../../active-directory/develop/access-tokens.md) and [refresh tokens](../../active-directory/develop/refresh-tokens.md). - 1. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs. + 3. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Graph APIs require different scopes; learn more in the links detailed below as we introduce the required APIs. - 1. Refresh tokens can be revoked in the event of a breach or account termination + 4. Refresh tokens can be revoked in the event of a breach or account termination. >[!NOTE] >Authorization is required by both developers for testing and organizers who will be using your event platform to set up their events. 2. The organizer logs in to the Contoso platform to create an event and generate a registration URL. To enable these capabilities, developers should use: - 1. The [Create Calendar Event API](/graph/api/user-post-events?tabs=http&view=graph-rest-1.0) to POST the new event to be created. The Event object returned will contain the join URL required for the next step. Need to set the following parameter: `isonlinemeeting: true` and `onlineMeetingProvider: "teamsForBusiness"`. Set a time zone for the event, using the `Prefer` header. + 1. The [Create Calendar Event API](/graph/api/user-post-events?tabs=http&view=graph-rest-1.0&preserve-view=true) to POST the new event to be created. The Event object returned will contain the join URL required for the next step. You need to set the following parameters: `isonlinemeeting: true` and `onlineMeetingProvider: "teamsForBusiness"`. Set a time zone for the event, using the `Prefer` header. - 1. Next, use the [Create Online Meeting API](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta) to `GET` the online meeting information using the join URL generated from the step above. The `OnlineMeeting` object will contain the `meetingId` required for the registration steps. + 1. Next, use the [Create Online Meeting API](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) to `GET` the online meeting information using the join URL generated from the step above. The `OnlineMeeting` object will contain the `meetingId` required for the registration steps. 1. By using these APIs, developers are creating a calendar event to show up in the Organizer's calendar and the Teams online meeting where attendees will join. >[!NOTE] >Known issue with double calendar entries for organizers when using the Calendar and Online Meeting APIs. -3. To enable registration for an event, Contoso can use the [External Meeting Registration API](/graph/api/resources/externalmeetingregistration?view=graph-rest-beta) to POST. 
The API requires Contoso to pass in the `meetingId` of the `OnlineMeeting` created above. Registration is optional. You can set options on who can register. +3. To enable registration for an event, Contoso can use the [External Meeting Registration API](/graph/api/resources/externalmeetingregistration?view=graph-rest-beta&preserve-view=true) to POST. The API requires Contoso to pass in the `meetingId` of the `OnlineMeeting` created above. Registration is optional. You can set options on who can register. ### Register attendees with Microsoft Graph -Event management platforms can use a custom registration flow to register attendees. This flow is powered by the [External Meeting Registrant API](/graph/api/externalmeetingregistrant-post?tabs=http&view=graph-rest-beta). By using the API Contoso will receive a unique `Teams Join URL` for each attendee. This URL will be used as part of the attendee experience either through Teams or Azure Communication Services to have the attendee join the meeting. +Event management platforms can use a custom registration flow to register attendees. This flow is powered by the [External Meeting Registrant API](/graph/api/externalmeetingregistrant-post?tabs=http&view=graph-rest-beta&preserve-view=true). By using the API Contoso will receive a unique `Teams Join URL` for each attendee. This URL will be used as part of the attendee experience either through Teams or Azure Communication Services to have the attendee join the meeting. ### Communicate with your attendees using Azure Communication Services Through Azure Communication Services, developers can use SMS and Email capabilit Attendee experience can be directly embedded into an application or platform using [Azure Communication Services](../overview.md) so that your attendees never need to leave your platform. It provides low-level calling and chat SDKs which support [interoperability with Teams Events](../concepts/teams-interop.md), as well as a turn-key UI Library which can be used to reduce development time and easily embed communications. Azure Communication Services enables developers to have flexibility with the type of solution they need. Review [limitations](../concepts/join-teams-meeting.md#limitations-and-known-issues) of using Azure Communication Services for webinar scenarios. -1. To start, developers can leverage Microsoft Graph APIs to retrieve the join URL. This URL is provided uniquely per attendee during [registration](/graph/api/externalmeetingregistrant-post?tabs=http&view=graph-rest-beta). Alternatively, it can be [requested for a given meeting](/graph/api/onlinemeeting-get?tabs=http&view=graph-rest-beta). +1. To start, developers can leverage Microsoft Graph APIs to retrieve the join URL. This URL is provided uniquely per attendee during [registration](/graph/api/externalmeetingregistrant-post?tabs=http&view=graph-rest-beta&preserve-view=true). Alternatively, it can be [requested for a given meeting](/graph/api/onlinemeeting-get?tabs=http&view=graph-rest-beta&preserve-view=true). -2. Before developers dive into using [Azure Communication Services](../overview.md), they must [create a resource](../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows). +2. Before developers dive into using [Azure Communication Services](../overview.md), they must [create a resource](../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows&preserve-view=true). -3. 
Once a resource is created, developers must [generate access tokens](../quickstarts/access-tokens.md?pivots=programming-language-javascript) for attendees to access Azure Communication Services. We recommend using a [trusted service architecture](../concepts/client-and-server-architecture.md). +3. Once a resource is created, developers must [generate access tokens](../quickstarts/identity/access-tokens.md?pivots=programming-language-javascript&preserve-view=true) for attendees to access Azure Communication Services. We recommend using a [trusted service architecture](../concepts/client-and-server-architecture.md). 4. Developers can leverage [headless SDKs](../concepts/teams-interop.md) or [UI Library](https://azure.github.io/communication-ui-library/) using the join link URL to join the Teams meeting through [Teams Interoperability](../concepts/teams-interop.md). Details below: |Headless SDKs | UI Library | |-||-| Developers can leverage the [calling](../quickstarts/voice-video-calling/get-started-teams-interop.md?pivots=platform-javascript) and [chat](../quickstarts/chat/meeting-interop.md?pivots=platform-javascript) SDKs to join a Teams meeting with your custom client | Developers can choose between the [call + chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-meeting-basicexample--basic-example) or pure [call](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-basicexample--basic-example) and [chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-chat-basicexample--basic-example) composites to build their experience. Alternatively, developers can leverage [composable components](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-uicomponents--page) to build a custom Teams interop experience.| +| Developers can leverage the [calling](../quickstarts/voice-video-calling/get-started-teams-interop.md?pivots=platform-javascript&preserve-view=true) and [chat](../quickstarts/chat/meeting-interop.md?pivots=platform-javascript&preserve-view=true) SDKs to join a Teams meeting with your custom client | Developers can choose between the [call + chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-meeting-basicexample--basic-example) or pure [call](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-basicexample--basic-example) and [chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-chat-basicexample--basic-example) composites to build their experience. Alternatively, developers can leverage [composable components](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-uicomponents--page) to build a custom Teams interop experience.| >[!NOTE] |
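The Graph steps above (create a calendar event with `isOnlineMeeting: true` and `onlineMeetingProvider: "teamsForBusiness"`, setting a time zone via the `Prefer` header) can be exercised with a plain REST call. A hedged Python sketch using the `requests` package; the token acquisition and event details are placeholders, not part of the playbook:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
access_token = "<SERVICE_ACCOUNT_ACCESS_TOKEN>"  # placeholder: acquired on behalf of the organizer

event = {
    "subject": "Contoso launch event",  # placeholder details
    "start": {"dateTime": "2023-06-01T17:00:00", "timeZone": "Pacific Standard Time"},
    "end": {"dateTime": "2023-06-01T18:00:00", "timeZone": "Pacific Standard Time"},
    # The two parameters the playbook calls out:
    "isOnlineMeeting": True,
    "onlineMeetingProvider": "teamsForBusiness",
}

resp = requests.post(
    f"{GRAPH}/me/events",
    json=event,
    headers={
        "Authorization": f"Bearer {access_token}",
        # Sets the time zone for the event, per the playbook's note on the Prefer header.
        "Prefer": 'outlook.timezone="Pacific Standard Time"',
    },
)
resp.raise_for_status()

# The returned event carries the join URL needed for the Online Meeting lookup step.
print("Teams join URL:", resp.json()["onlineMeeting"]["joinUrl"])
```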
communication-services | File Sharing Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md | If you want to clean up and remove a Communication Services subscription, you ca You may also want to: - [Add chat to your app](../quickstarts/chat/get-started.md)-- [Creating user access tokens](../quickstarts/access-tokens.md)+- [Creating user access tokens](../quickstarts/identity/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md) - [Learn about authentication](../concepts/authentication.md) |
communication-services | Hmac Header Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/hmac-header-tutorial.md | To clean up and remove a Communication Services subscription, delete the resourc You might also want to: - [Add chat to your app](../quickstarts/chat/get-started.md)-- [Create user access tokens](../quickstarts/access-tokens.md)+- [Create user access tokens](../quickstarts/identity/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md) - [Learn about authentication](../concepts/authentication.md) |
communication-services | Postman Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/postman-tutorial.md | The Mobile phone, which owns the number you provided in the "to" value, should a You might also want to: - [Add chat to your app](../quickstarts/chat/get-started.md)-- [Create user access tokens](../quickstarts/access-tokens.md)+- [Create user access tokens](../quickstarts/identity/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md) - [Learn about authentication](../concepts/authentication.md) |
communication-services | Trusted Service Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/trusted-service-tutorial.md | If you want to clean up and remove a Communication Services subscription, you ca You may also want to: - [Add chat to your app](../quickstarts/chat/get-started.md)-- [Creating user access tokens](../quickstarts/access-tokens.md)+- [Creating user access tokens](../quickstarts/identity/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md) - [Learn about authentication](../concepts/authentication.md) |
communication-services | Virtual Visits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md | The app service generated by the Sample Builder is a stand-alone artifact, desig - **Core SDKs -** The underlying [Call](../quickstarts/voice-video-calling/get-started-teams-interop.md) and [Chat](../quickstarts/chat/meeting-interop.md) services can be accessed and you can build any kind of user experience. ### Identity & security-The Sample Builder's consumer experience does not authenticate the end user, but provides [Azure Communication Services user access tokens](../quickstarts/access-tokens.md) to any random visitor. That isn't realistic for most scenarios, and you will want to implement an authentication scheme. +The Sample Builder's consumer experience does not authenticate the end user, but provides [Azure Communication Services user access tokens](../quickstarts/identity/access-tokens.md) to any random visitor. That isn't realistic for most scenarios, and you will want to implement an authentication scheme. |
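Since the row above stresses putting an authentication scheme in front of token issuance, here is a minimal sketch of that trusted-service pattern, assuming Flask and the `azure-communication-identity` package; `authenticate_request` is a hypothetical stand-in for your real auth check.

```python
from flask import Flask, abort, jsonify, request
from azure.communication.identity import CommunicationIdentityClient

app = Flask(__name__)
client = CommunicationIdentityClient.from_connection_string(
    "endpoint=https://<RESOURCE_NAME>.communication.azure.com/;accesskey=<KEY>"  # placeholder
)


def authenticate_request(req) -> bool:
    # Hypothetical check - validate a session cookie, OIDC token, etc.
    return "Authorization" in req.headers


@app.route("/token", methods=["POST"])
def issue_token():
    if not authenticate_request(request):
        abort(401)  # unauthenticated visitors never get a token
    user, token = client.create_user_and_token(scopes=["voip"])
    return jsonify({"userId": user.properties["id"], "token": token.token})
```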
confidential-computing | Choose Confidential Containers Offerings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/choose-confidential-containers-offerings.md | Title: Choose container offerings for confidential computing description: How to choose the right confidential container offerings to meet your security, isolation and developer needs. -++ Last updated 11/01/2021 The diagram below will guide you through the different offerings in this portfolio. ## Links to container compute offerings -**Azure Container Instances with Confidential containers (AMD SEV-SNP)** are the first serverless offering that helps protect your container deployments with confidential computing through AMD SEV-SNP technology. Read more on the product [here](https://aka.ms/ccacipreview). +**Confidential VM worker nodes on AKS** supporting full AKS features with a node-level, VM-based Trusted Execution Environment (TEE). They also support remote guest attestation. [Get started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md) +**Unmodified containers with serverless offering** [confidential containers on Azure Container Instance (ACI)](./confidential-containers.md#vm-isolated-confidential-containers-on-azure-container-instances-acipublic-preview) supporting existing Linux containers with a remote guest attestation flow. -There are two programming and deployment models on Azure Kubernetes Service (AKS). -<!-- You can deploy containers with confidential application enclaves. This method of container deployments has the strongest security and compute isolation, with a lower Trusted Computing Base (TCB). Confidential containers based on Intel Software Guard Extensions (SGX) that run in the hardware-based Trusted Execution Environment (TEE) are available. These containers support lifting and shifting your existing container apps. Another option is to allow building custom apps with enclave awareness. --> --**Unmodified containers** support higher programming languages on Intel SGX through the Azure Partner ecosystem of OSS projects. For more information, see the [unmodified containers deployment flow and samples](./confidential-containers.md). +**Unmodified containers with Intel SGX** support higher programming languages on Intel SGX through the Azure Partner ecosystem of OSS projects. For more information, see the [unmodified containers deployment flow and samples](./confidential-containers.md). **Enclave-aware containers** use a custom Intel SGX programming model. For more information, see [the enclave-aware containers deployment flow and samples](./enclave-aware-containers.md). |
confidential-computing | Confidential Containers Enclaves | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers-enclaves.md | -This model works well for off the shelf container applications available in the market or custom apps currently running on general purpose nodes +This model works well for off the shelf container applications available in the market or custom apps currently running on general purpose nodes. To run an existing Docker container, applications on confidential computing nodes require Intel Software Guard Extensions (SGX) wrapper software to help the container execution within the bounds of a special CPU instruction set. SGX creates a direct execution path to the CPU, removing the guest operating system (OS), host OS, or hypervisor from the trust boundary. This step reduces the overall attack surface and vulnerabilities while achieving process-level isolation within a single node. |
confidential-computing | Confidential Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md | Title: Confidential containers on Azure -description: Learn about unmodified lift and shift container support to confidential containers. +description: Learn about unmodified container support with confidential containers. Previously updated : 7/15/2022 Last updated : 3/1/2023 -++ # Confidential containers on Azure -Confidential containers provide a set of features and capabilities to further secure your standard container workloads to achieve higher data security by running them in a Trusted Execution Environment (TEE). Azure offers a portfolio of capabilities through different confidential container options as discussed below. +Confidential containers provide a set of features and capabilities to further secure your standard container workloads to achieve higher data security, data privacy and runtime code integrity goals. Confidential containers run in a hardware backed Trusted Execution Environment (TEE) that provides intrinsic capabilities like data integrity, data confidentiality and code integrity. Azure offers a portfolio of capabilities through different confidential container service options as discussed below. ## Benefits-Confidential containers on Azure run within an enclave-based TEE or VM based TEE environments. Both deployment models help achieve high-isolation and memory encryption through hardware-based assurances. Confidential computing can enhance your deployment security posture in Azure cloud by protecting your memory space through encryption. +Confidential containers on Azure run within an enclave-based TEE or VM based TEE environments. Both deployment models help achieve high-isolation and memory encryption through hardware-based assurances. Confidential computing can help you with your zero trust deployment security posture in Azure cloud by protecting your memory space through encryption. Below are the qualities of confidential containers: - Allows running existing standard container images with no code changes (lift-and-shift) within a TEE-- Allows establishing a hardware root of trust through remote guest attestation-- Provides strong assurances of data confidentiality, code integrity and data integrity in a cloud environment+- Ability to extend/build new applications that have confidential computing awareness +- Allows you to remotely challenge the runtime environment for cryptographic proof that states what was initiated as reported by the secure processor +- Provides strong assurances of data confidentiality, code integrity and data integrity in a cloud environment with hardware based confidential computing offerings - Helps isolate your containers from other container groups/pods, as well as the VM node OS kernel -## VM Isolated Confidential containers on Azure Container Instances (ACI) - Private Preview -Confidential Containers on ACI platform leverages VM-based trusted execution environments (TEEs) based on AMD’s SEV-SNP technology. The TEE provides memory encryption and integrity of the utility VM’s address space as well as hardware-level isolation from other container groups, the host operating system, and the hypervisor. The Root-of-Trust (RoT), which is responsible for managing the TEE, provides support for remote attestation, including issuing an attestation report which may be used by a relying party to verify that the utility VM has been created and configured on a genuine AMD SEV-SNP CPU. 
Read more on the product [here](https://aka.ms/ccacipreview) +## VM Isolated Confidential containers on Azure Container Instances (ACI) - Public preview +[Confidential containers on ACI](../container-instances/container-instances-confidential-overview.md) enable fast and easy deployment of containers natively in Azure, with the ability to protect data and code in use thanks to AMD EPYC™ processors with confidential computing capabilities. This is because your container(s) run in a hardware-based and attested Trusted Execution Environment (TEE) without the need to adopt a specialized programming model and without infrastructure management overhead. With this launch, you get: +1. Full guest attestation, which reflects the cryptographic measurement of all hardware and software components running within your Trusted Computing Base (TCB). +2. Tooling to generate policies that will be enforced in the Trusted Execution Environment. +3. Open-source sidecar containers for secure key release and encrypted file systems. + ## Confidential containers in an Intel SGX enclave through OSS or partner software Azure Kubernetes Service (AKS) supports adding [Intel SGX confidential computing VM nodes](confidential-computing-enclaves.md) as agent pools in a cluster. These nodes allow you to run sensitive workloads within a hardware-based TEE. TEEs allow user-level code from containers to allocate private regions of memory to execute the code directly with the CPU. These private memory regions that execute directly with the CPU are called enclaves. Enclaves help protect data confidentiality, data integrity and code integrity from other processes running on the same nodes, as well as the Azure operator. The Intel SGX execution model also removes the intermediate layers of Guest OS, Host OS and Hypervisor thus reducing the attack surface area. The *hardware based per container isolated execution* model in a node allows applications to directly execute with the CPU, while keeping the special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero-trust, security planning and defense-in-depth container strategy. Learn more about this capability [here](confidential-containers-enclaves.md) :::image type="content" source="./media/confidential-nodes-aks-overview/sgx-aks-node.png" alt-text="Graphic of AKS Confidential Compute Node, showing confidential containers with code and data secured inside."::: - ## Questions? If you have questions about container offerings, please reach out to <acconaks@microsoft.com>. ## Next steps - [Deploy AKS cluster with Intel SGX Confidential VM Nodes](./confidential-enclave-nodes-aks-get-started.md)+- [Deploy Confidential container group with Azure Container Instances](../container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md) - [Microsoft Azure Attestation](../attestation/overview.md) - [Intel SGX Confidential Virtual Machines](virtual-machine-solutions-sgx.md) - [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) |
confidential-computing | Confidential Enclave Nodes Aks Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md | Title: 'Quickstart: Deploy an AKS cluster with Enclave Confidential Container Intel SGX nodes by using the Azure CLI' description: Learn how to create an Azure Kubernetes Service (AKS) cluster with enclave confidential containers and a Hello World app by using the Azure CLI. -++ Previously updated : 11/1/2021 Last updated : 3/1/2023 Features of confidential computing nodes include: This quickstart requires: -- An active Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Azure CLI version 2.0.64 or later installed and configured on your deployment machine.-- Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](../container-registry/container-registry-get-started-azure-cli.md). - A minimum of eight DCsv2/DCSv3/DCdsv3 cores available in your subscription. By default, there is no pre-assigned quota for Intel SGX VM sizes for your Azure subscriptions. You should follow [these instructions](../azure-portal/supportability/per-vm-quota-requests.md) to request VM core quota for your subscriptions. This quickstart requires: Use the following instructions to create an AKS cluster with the Intel SGX add-on enabled, add a node pool to the cluster, and verify what you created with a hello world enclave application. -### Create an AKS cluster with a system node pool +### Create an AKS cluster with a system node pool and AKS Intel SGX Addon > [!NOTE] > If you already have an AKS cluster that meets the prerequisite criteria listed earlier, [skip to the next section](#add-a-user-node-pool-with-confidential-computing-capabilities-to-the-aks-cluster) to add a confidential computing node pool. +The Intel SGX AKS Addon "confcom" exposes the Intel SGX device drivers to your containers, avoiding additional changes to your pod YAML. + First, create a resource group for the cluster by using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *eastus2* region: ```azurecli-interactive az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enabl ``` The above command will deploy a new AKS cluster with a system node pool of non-confidential computing nodes. Confidential computing Intel SGX nodes are not recommended for system node pools. -### Add an user node pool with confidential computing capabilities to the AKS cluster<a id="add-a-user-node-pool-with-confidential-computing-capabilities-to-the-aks-cluster"></a> +### Add a user node pool with confidential computing capabilities to the AKS cluster<a id="add-a-user-node-pool-with-confidential-computing-capabilities-to-the-aks-cluster"></a> Run the following command to add a user node pool of `Standard_DC4s_v3` size with three nodes to the AKS cluster. You can choose another larger-sized SKU from the [list of supported DCsv2/DCsv3 SKUs and regions](../virtual-machines/dcv3-series.md). |
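For reference, the node pool command the quickstart describes might look like the following sketch; it reuses *myResourceGroup* and *myAKSCluster* from the excerpt above, while the pool name `confcompool1` is illustrative.

```azurecli
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name confcompool1 \
    --node-vm-size Standard_DC4s_v3 \
    --node-count 3
```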
confidential-computing | Confidential Node Pool Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-node-pool-aks.md | |
confidential-computing | Confidential Nodes Aks Addon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-addon.md | Title: Azure Kubernetes Service plugin for confidential VMs description: How to use the Intel SGX device plugin and Intel SGX quote helper daemon sets for confidential VMs with Azure Kubernetes Service. --++ Last updated 11/01/2021 Each container needs to opt in to use out-of-proc quote generation by setting th An application can still use the in-proc attestation as before. However, you can't simultaneously use both in-proc and out-of-proc within an application. The out-of-proc infrastructure is available by default and consumes resources. > [!NOTE]-> If you are using a Intel SGX wrapper software (OSS/ISV) to run you unmodified containers the attestation interaction with hardware is typically handled for your higher level apps. Please refer to the attestation implementation per provider. +> If you are using an Intel SGX wrapper software (OSS/ISV) to run your unmodified containers, the attestation interaction with hardware is typically handled for your higher-level apps. Please refer to the attestation implementation per provider. ### Sample implementation |
container-apps | Azure Arc Enable Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md | A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro The following table describes the various `--configuration-settings` parameters when running the command: | Parameter | Description |- | - | - | + ||| | `Microsoft.CustomLocation.ServiceAccount` | The service account created for the custom location. It's recommended that it's set to the value `default`. | | `appsNamespace` | The namespace used to create the app definitions and revisions. It **must** match that of the extension release namespace. | | `clusterName` | The name of the Container Apps extension Kubernetes environment that will be created against this extension. | | `logProcessor.appLogs.destination` | Optional. Destination for application logs. Accepts `log-analytics` or `none`, choosing none disables platform logs. | | `logProcessor.appLogs.logAnalyticsConfig.customerId` | Required only when `logProcessor.appLogs.destination` is set to `log-analytics`. The base64-encoded Log analytics workspace ID. This parameter should be configured as a protected setting. |- | `logProcessor.appLogs.logAnalyticsConfig.sharedKey` | Required only when `logProcessor.appLogs.destination` is set to `log-analytics`. The base64-encoded Log analytics workspace shared key. This parameter should be configured as a protected setting. |4 - | `envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group` | The name of the resource group in which the Azure Kubernetes Service cluster resides. Valid and required only when the underlying cluster is Azure Kubernetes Service. | - | | | + | `logProcessor.appLogs.logAnalyticsConfig.sharedKey` | Required only when `logProcessor.appLogs.destination` is set to `log-analytics`. The base64-encoded Log analytics workspace shared key. This parameter should be configured as a protected setting. | + | `envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group` | The name of the resource group in which the Azure Kubernetes Service cluster resides. Valid and required only when the underlying cluster is Azure Kubernetes Service. | 1. Save the `id` property of the Container Apps extension for later. |
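Assembling the parameters from the table above into the extension-creation call, a hedged sketch follows; the resource, cluster, extension, and namespace names are placeholders, and the extension type `Microsoft.App.Environment` plus the pre-encoded workspace variables are assumptions for illustration.

```azurecli
az k8s-extension create \
    --resource-group myResourceGroup \
    --name containerapps-ext \
    --cluster-type connectedClusters \
    --cluster-name myArcCluster \
    --extension-type 'Microsoft.App.Environment' \
    --release-train stable \
    --auto-upgrade-minor-version true \
    --scope cluster \
    --release-namespace appplat-ns \
    --configuration-settings "Microsoft.CustomLocation.ServiceAccount=default" \
    --configuration-settings "appsNamespace=appplat-ns" \
    --configuration-settings "clusterName=myContainerAppsEnv" \
    --configuration-settings "logProcessor.appLogs.destination=log-analytics" \
    --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=$WORKSPACE_ID_ENC" \
    --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=$WORKSPACE_KEY_ENC"
```

Note how `appsNamespace` matches the release namespace, as the table requires, and how the two Log Analytics values are passed as protected settings.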
container-apps | Azure Arc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md | Optionally, you can choose to have the extension install [KEDA](https://keda.sh/ The following table describes the role of each revision created for you: | Pod | Description | Number of Instances | CPU | Memory |-|||||| +|-|-|-|-|-| | `<extensionName>-k8se-activator` | Used as part of the scaling pipeline | 2 | 100 millicpu | 500 MB | | `<extensionName>-k8se-billing` | Billing record generation - Azure Container Apps on Azure Arc enabled Kubernetes is Free of Charge during preview | 3 | 100 millicpu | 100 MB | | `<extensionName>-k8se-containerapp-controller` | The core operator pod that creates resources on the cluster and maintains the state of components. | 2 | 100 millicpu | 500 MB | The following table describes the role of each revision created for you: | `<extensionName>-k8se-keda-metrics-apiserver` | Keda Metrics Server | 1 | 1 Core | 1000 MB | | `<extensionName>-k8se-keda-operator` | Manages component updates and service endpoints for Dapr | 1 | 100 millicpu | 500 MB | | `<extensionName>-k8se-local-envoy` | A front-end proxy layer for all data-plane tcp requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1536 MB |-| `<extensionName>-k8se-log-processor` | Gathers logs from apps and other components and sends them to Log Analytics. | 2 | 200 millicpu | 500 MB | | +| `<extensionName>-k8se-log-processor` | Gathers logs from apps and other components and sends them to Log Analytics. | 2 | 200 millicpu | 500 MB | | `<extensionName>-k8se-mdm` | Metrics and Logs Agent | 2 | 500 millicpu | 500 MB | | dapr-operator | Manages component updates and service endpoints for Dapr | 1 | 100 millicpu | 500 MB | | dapr-placement-server | Used for Actors only - creates mapping tables that map actor instances to pods | 1 | 100 millicpu | 500 MB | |
container-apps | Dapr Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md | Last updated 01/25/2023 # Dapr integration with Azure Container Apps -The Distributed Application Runtime ([Dapr][dapr-concepts]) is a set of incrementally adoptable features that simplify the authoring of distributed, microservice-based applications. For example, Dapr provides capabilities for enabling application intercommunication, whether through messaging via pub/sub or reliable and secure service-to-service calls. Once Dapr is enabled for a container app, a secondary process will be created alongside your application code that will enable communication with Dapr via HTTP or gRPC. +The Distributed Application Runtime ([Dapr][dapr-concepts]) is a set of incrementally adoptable features that simplify the authoring of distributed, microservice-based applications. For example, Dapr provides capabilities for enabling application intercommunication, whether through messaging via pub/sub or reliable and secure service-to-service calls. Once Dapr is enabled for a container app, a secondary process is created alongside your application code that enables communication with Dapr via HTTP or gRPC. Dapr's APIs are built on best practice industry standards that: The table below outlines the currently supported list of Dapr sidecar configurat | Container Apps CLI | Template field | Description | | - | - | - | | `--enable-dapr` | `dapr.enabled` | Enables Dapr on the container app. |-| `--dapr-app-port` | `dapr.appPort` | The port your application is listening on which will be used by Dapr for communicating to your application | +| `--dapr-app-port` | `dapr.appPort` | The port your application is listening on, which is used by Dapr to communicate with your application | | `--dapr-app-protocol` | `dapr.appProtocol` | Tells Dapr which protocol your application is using. Valid options are `http` or `grpc`. Default is `http`. | | `--dapr-app-id` | `dapr.appId` | A unique Dapr identifier for your container app used for service discovery, state encapsulation and the pub/sub consumer ID. | | `--dapr-max-request-size` | `dapr.httpMaxRequestSize` | Set the max size of request body http and grpc servers to handle uploading of large files. Default is 4 MB. | When using an IaC template, specify the following arguments in the `properties.c -The above Dapr configuration values are considered application-scope changes. When you run a container app in multiple revision mode, changes to these settings won't create a new revision. Instead, all existing revisions will be restarted to ensure they're configured with the most up-to-date values. +The above Dapr configuration values are considered application-scope changes. When you run a container app in multiple-revision mode, changes to these settings won't create a new revision. Instead, all existing revisions are restarted to ensure they're configured with the most up-to-date values. ## Dapr components metadata: ### Component scopes -By default, all Dapr-enabled container apps within the same environment will load the full set of deployed components. To ensure components are loaded at runtime by only the appropriate container apps, application scopes should be used. In the example below, the component will only be loaded by the two Dapr-enabled container apps with Dapr application IDs `APP-ID-1` and `APP-ID-2`: +By default, all Dapr-enabled container apps within the same environment load the full set of deployed components. 
To ensure components are loaded at runtime by only the appropriate container apps, application scopes should be used. In the example below, the component is only loaded by the two Dapr-enabled container apps with Dapr application IDs `APP-ID-1` and `APP-ID-2`: > [!NOTE] > Dapr component scopes correspond to the Dapr application ID of a container app, not the container app name. scopes: ``` > [!NOTE]-> Kubernetes secrets, Local environment variables and Local file Dapr secret stores are not supported in Container Apps. As an alternative for the upstream Dapr default Kubernetes secret store, container apps provides a platform-managed approach for creating and leveraging Kubernetes secrets. +> Kubernetes secrets, Local environment variables and Local file Dapr secret stores aren't supported in Container Apps. As an alternative for the upstream Dapr default Kubernetes secret store, container apps provides a platform-managed approach for creating and leveraging Kubernetes secrets. #### Using Platform-managed Kubernetes secrets This resource defines a Dapr component called `dapr-pubsub` via ARM. +## Release cadence for Dapr ++The latest version of Dapr in Azure Container Apps will be available within six weeks after [the Dapr OSS release][dapr-release]. + ## Limitations ### Unsupported Dapr capabilities This resource defines a Dapr component called `dapr-pubsub` via ARM. - **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec. - **Declarative pub/sub subscriptions** - **Any Dapr sidecar annotations not listed above**-- **Alpha APIs and components**: Azure Container Apps does not guarantee the availability of Dapr alpha APIs and features. If available to use, they are on a self-service, opt-in basis. Alpha APIs and components are provided "as is" and "as available," and are continually evolving as they move toward stable status. Alpha APIs and components are not covered by customer support.+- **Alpha APIs and components**: Azure Container Apps doesn't guarantee the availability of Dapr alpha APIs and features. If available to use, they are on a self-service, opt-in basis. Alpha APIs and components are provided "as is" and "as available," and are continually evolving as they move toward stable status. Alpha APIs and components aren't covered by customer support. ### Known limitations -- **Actor reminders**: Require a minReplicas of 1+ to ensure reminders will always be active and fire correctly.+- **Actor reminders**: Require a minReplicas of 1+ to ensure reminders are always active and fire correctly. ## Next Steps Now that you've learned about Dapr and some of the challenges it solves: [dapr-args]: https://docs.dapr.io/reference/arguments-annotations-overview/ [dapr-component]: https://docs.dapr.io/concepts/components-concept/ [dapr-component-spec]: https://docs.dapr.io/operations/components/component-schema/+[dapr-release]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions |
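To ground the sidecar settings and component scoping discussed above, here is a hedged Azure CLI sketch; the app, environment, image, and component names are placeholders, and the YAML file is assumed to carry the component's `componentType`, `version`, `metadata`, and `scopes` fields.

```azurecli
# Enable the Dapr sidecar on a container app using the flags from the table above.
az containerapp create \
    --name my-dapr-app \
    --resource-group myResourceGroup \
    --environment myContainerAppsEnv \
    --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
    --enable-dapr \
    --dapr-app-id my-dapr-app \
    --dapr-app-port 3000 \
    --dapr-app-protocol http

# Deploy a component definition (including its scopes) to the environment.
az containerapp env dapr-component set \
    --name myContainerAppsEnv \
    --resource-group myResourceGroup \
    --dapr-component-name statestore \
    --yaml ./statestore.yaml
```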
container-apps | Microservices Dapr Azure Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md | |
cosmos-db | Continuous Backup Restore Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md | description: Azure Cosmos DB's point-in-time restore feature helps to recover da Previously updated : 08/24/2022 Last updated : 03/02/2023 By default, Azure Cosmos DB stores continuous mode backup data in locally redund In a steady state, all mutations performed on the source account (which includes databases, containers, and items) are backed up asynchronously within 100 seconds. If the Azure Storage backup media is down or unavailable, the mutations are persisted locally until the media is available. Then the mutations are flushed out to prevent any loss in fidelity of operations that can be restored. -You can choose to restore any combination of provisioned throughput containers, shared throughput database, or the entire account. The restore action restores all data and its index properties into a new account. The restore process ensures that all the data restored in an account, database, or a container is guaranteed to be consistent up to the restore time specified. The duration of restore will depend on the amount of data that needs to be restored. +You can choose to restore any combination of provisioned throughput containers, shared throughput database, or the entire account. The restore action restores all data and its index properties into a new account. The restore process ensures that all the data restored in an account, database, or a container is guaranteed to be consistent up to the restore time specified. The duration of restore will depend on the amount of data that needs to be restored. The newly restored database account's consistency setting will be the same as the source database account's consistency settings. > [!NOTE] > With the continuous backup mode, the backups are taken in every region where your Azure Cosmos DB account is available. Backups taken for each region account are Locally redundant by default and Zone redundant if your account has [availability zone](/azure/architecture/reliability/architect) feature enabled for that region. The restore action always restores data into a new account. You can choose to restore any combination of provisioned throughput containers, The following configurations aren't restored after the point-in-time recovery: -* Firewall, VNET, Data plane RBAC or private endpoint settings. -* Consistency settings. By default, the account is restored with session consistency. +* Firewall, VNET, Data plane RBAC or private endpoint settings. * Regions. * Stored procedures, triggers, UDFs. * Role-based access control assignments. These will need to be re-assigned. -You can add these configurations to the restored account after the restore is completed. +You can add these configurations to the restored account after the restore is completed. The ability to prevent public access to a restored account is described [here-to-befilled with url](). ## Restorable timestamp for live accounts |
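To make the restore flow concrete, here is a hedged Azure CLI sketch of a point-in-time restore into a new account; the account names, timestamp, and location are placeholders.

```azurecli
az cosmosdb restore \
    --resource-group myResourceGroup \
    --account-name my-source-account \
    --target-database-account-name my-restored-account \
    --restore-timestamp "2023-03-01T12:00:00+0000" \
    --location westus
```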
cosmos-db | Continuous Backup Restore Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md | Scope is a set of resources that have access. To learn more on scopes, see the [ To perform a restore, a user or a principal needs the permission to restore (that is *restore/action* permission), and permission to provision a new account (that is *write* permission). To grant these permissions, the owner can assign the `CosmosRestoreOperator` and `Cosmos DB Operator` built-in roles to a principal. -1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your subscription. +1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your subscription. The `CosmosRestoreOperator` role is available at subscription level. 1. Select **Access control (IAM)**. |
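The same two role assignments can also be scripted rather than done in the portal. A minimal sketch, assuming a placeholder assignee and subscription ID:

```azurecli
# Permission to perform the restore action.
az role assignment create \
    --role "CosmosRestoreOperator" \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/<subscription-id>"

# Permission to provision the new (restored) account.
az role assignment create \
    --role "Cosmos DB Operator" \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/<subscription-id>"
```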
cosmos-db | Migrate Continuous Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md | Use the following steps to migrate your account from periodic backup to continuo ## <a id="powershell"></a>Migrate using PowerShell -1. Install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or any version higher than 6.2.0. +1. Install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps) or any version higher than 6.2.0. 2. To use ``Continuous7Days`` mode for provisioning or migrating, you'll have to use the preview version of the ``Az.CosmosDB`` module. Use ``Install-Module -Name Az.CosmosDB -AllowPrerelease`` 3. Next, run the following steps: |
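Alongside the PowerShell path above, the migration to continuous backup can be sketched with a single Azure CLI call; the resource group and account name are placeholders.

```azurecli
az cosmosdb update \
    --resource-group myResourceGroup \
    --name my-cosmos-account \
    --backup-policy-type Continuous
```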
cosmos-db | Provision Account Continuous Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md | For PowerShell and CLI commands, the tier value is optional, if it isn't already 1. Install the latest version of Azure PowerShell - * Before provisioning the account, install any version of Azure PowerShell higher than 6.2.0. For more information about the latest version of Azure PowerShell, see [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true). + * Before provisioning the account, install any version of Azure PowerShell higher than 6.2.0. For more information about the latest version of Azure PowerShell, see [latest version of Azure PowerShell](/powershell/azure/install-az-ps). * For provisioning the ``Continuous7Days`` tier, you'll need to install the preview version of the module by running ``Install-Module -Name Az.CosmosDB -AllowPrerelease``. 1. Next connect to your Azure account and select the required subscription with the following commands: |
cosmos-db | Restore Account Continuous Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md | Use the following steps to get the restore details from Azure portal: ## <a id="restore-account-powershell"></a>Restore an account using Azure PowerShell -Before restoring the account, install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or version higher than 6.2.0. Next connect to your Azure account and select the required subscription with the following commands: +Before restoring the account, install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps) or a version higher than 6.2.0. Next, connect to your Azure account and select the required subscription with the following commands: 1. Sign into Azure using the following command: |
cost-management-billing | View Reservations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-reservations.md | When you use the PowerShell script to assign the ownership role and it runs succ - Accept wildcard characters: False ## Tenant-level access-[User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) rights are required before you can grant users or groups the Reservation Administrator and Reservation Reader roles at the tenant level. In order to get User Access Administrator rights at the tenant level, follow [Elevate access](../../role-based-access-control/elevate-access-global-admin.md) steps. -## Add a Reservation Administrator role or Reservation Reader role at the tenant level +[User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) rights are required before you can grant users or groups the Reservations Administrator and Reservations Reader roles at the tenant level. In order to get User Access Administrator rights at the tenant level, follow [Elevate access](../../role-based-access-control/elevate-access-global-admin.md) steps. ++### Add a Reservations Administrator role or Reservations Reader role at the tenant level You can assign these roles from [Azure portal](https://portal.azure.com). 1. Sign in to the Azure portal and navigate to **Reservations**.-2. At the top of the page, select **Role Assignment**. -3. To make modifications, add user as a Reservation Administrator or Reservation Reader using Access control. +1. Select a reservation that you have access to. +1. At the top of the page, select **Role Assignment**. +1. Select the **Roles** tab. +1. To make modifications, add a user as a Reservations Administrator or Reservations Reader using Access control. -## Add a Reservation Administrator role at the tenant level using Azure PowerShell script +### Add a Reservation Administrator role at the tenant level using Azure PowerShell script Use the following Azure PowerShell script to add a Reservation Administrator role at the tenant level with PowerShell. Connect-AzAccount -Tenant <TenantId> New-AzRoleAssignment -Scope "/providers/Microsoft.Capacity" -PrincipalId <ObjectId> -RoleDefinitionName "Reservations Administrator" ``` -### Parameters +#### Parameters **-ObjectId** Azure AD ObjectId of the user, group, or service principal. - Type: String New-AzRoleAssignment -Scope "/providers/Microsoft.Capacity" -PrincipalId <Object - Accept pipeline input: False - Accept wildcard characters: False -## Assign a Reservation Reader role at the tenant level using Azure PowerShell script +### Assign a Reservation Reader role at the tenant level using Azure PowerShell script Use the following Azure PowerShell script to assign the Reservation Reader role at the tenant level with PowerShell. Connect-AzAccount -Tenant <TenantId> New-AzRoleAssignment -Scope "/providers/Microsoft.Capacity" -PrincipalId <ObjectId> -RoleDefinitionName "Reservations Reader" ``` -### Parameters +#### Parameters **-ObjectId** Azure AD ObjectId of the user, group, or service principal. - Type: String New-AzRoleAssignment -Scope "/providers/Microsoft.Capacity" -PrincipalId <Object - Accept pipeline input: False - Accept wildcard characters: False - ## Next steps - [Manage Azure Reservations](manage-reserved-vm-instance.md). |
data-factory | Concepts Change Data Capture Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md | The new Change Data Capture resource in ADF allows for full fidelity change data ## Known limitations * Currently, when creating source/target mappings, each source and target is only allowed to be used once. -* Continuous, real-time streaming is coming soon. -* Allow schema drift is coming soon. * Complex types are currently unsupported. For more information on known limitations and troubleshooting assistance, please reference [this troubleshooting guide](change-data-capture-troubleshoot.md). |
data-factory | Connector Azure Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md | |
data-factory | Connector Azure Database For Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-postgresql.md | The three activities work on all Azure Database for PostgreSQL deployment option [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)] -## Create a linked service to Azure database for PostgreSQL using UI +## Create a linked service to Azure Database for PostgreSQL using UI Use the following steps to create a linked service to Azure Database for PostgreSQL in the Azure portal UI. |
data-factory | How To Change Data Capture Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource.md | Title: Capture changed data with a change data capture resource -description: This tutorial provides step-by-step instructions on how to capture changed data from ADLS Gen2 to SQL DB using a Change data capture resource. +description: This tutorial provides step-by-step instructions on how to capture changed data from ADLS Gen2 to Azure SQL DB using a Change data capture resource. -# How to capture changed data from ADLS Gen2 to SQL DB using a Change data capture resource +# How to capture changed data from ADLS Gen2 to Azure SQL DB using a Change Data Capture (CDC) resource [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)] -In this tutorial, you will use the Azure Data Factory user interface (UI) to create a new Change data capture resource that picks up changed data from an Azure Data Lake Storage (ADLS) Gen2 source to a SQL Database. The configuration pattern in this tutorial can be modified and expanded upon. +In this tutorial, you will use the Azure Data Factory user interface (UI) to create a new Change Data Capture (CDC) resource that picks up changed data from an Azure Data Lake Storage (ADLS) Gen2 source to an Azure SQL Database in real-time. The configuration pattern in this tutorial can be modified and expanded upon. In this tutorial, you follow these steps:-* Create a change data capture resource. -* Monitor change data capture activity. +* Create a Change Data Capture resource. +* Monitor Change Data Capture activity. ## Pre-requisites * **Azure subscription.** If you don't have an Azure subscription, create a free Azure account before you begin. * **Azure storage account.** You use ADLS storage as a source data store. If you don't have a storage account, see Create an Azure storage account for steps to create one.-* **Azure SQL Database.** You will use Azure SQL DB as a target data store. If you don't have a SQL DB, please create one in the Azure portal first before continuing the tutorial. +* **Azure SQL Database.** You will use Azure SQL DB as a target data store. If you don't have an Azure SQL DB, please create one in the Azure portal first before continuing the tutorial. ## Create a change data capture artifact -1. Navigate to the **Author** blade in your data factory. You will see a new top-level artifact under **Pipelines** called **Change data capture (preview)**. +1. Navigate to the **Author** blade in your data factory. You will see a new top-level artifact below **Pipelines** called **Change Data Capture (preview)**. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-2.png" alt-text="Screenshot of new top level artifact shown under Factory resources panel."::: --2. To create a new **Change data capture**, hover over **Change data capture (preview)** until you see 3 dots appear. Click on the **Change data capture actions**. + :::image type="content" source="media/adf-cdc/change-data-capture-resource-61.png" alt-text="Screenshot of new top level artifact shown under Factory resources panel." lightbox="media/adf-cdc/change-data-capture-resource-61.png"::: + +2. To create a new **Change Data Capture**, hover over **Change Data Capture (preview)** until you see 3 dots appear. Click on the **Change Data Capture (preview) Actions**. 
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-3.png" alt-text="Screenshot of Change data capture (preview) Actions after hovering on the new top-level artifact."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-62.png" alt-text="Screenshot of Change Data Capture (preview) Actions after hovering on the new top-level artifact." lightbox="media/adf-cdc/change-data-capture-resource-62.png"::: -3. Select **New change data capture (preview)**. This will open a flyout to begin the guided process. + :::image type="content" source="media/adf-cdc/change-data-capture-resource-4.png" alt-text="Screenshot of a list of change data capture actions."::: +3. Select **New CDC (preview)**. This will open a flyout to begin the guided process. + :::image type="content" source="media/adf-cdc/change-data-capture-resource-63.png" alt-text="Screenshot of a list of Change Data Capture actions." lightbox="media/adf-cdc/change-data-capture-resource-63.png"::: 4. You will then be prompted to name your CDC resource. By default, the name will be set to "adfcdc" and continue to increment up by 1. You can replace this default name with your own. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-5.png" alt-text="Screenshot of the text box to update the name of the resource."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-64.png" alt-text="Screenshot of the text box to update the name of the resource."::: 5. Use the drop-down selection list to choose your data source. For this tutorial, we will use **DelimitedText**. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-6.png" alt-text="Screenshot of the guided process flyout with source options in a drop-down selection menu."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-65.png" alt-text="Screenshot of the guided process flyout with source options in a drop-down selection menu."::: 6. You will then be prompted to select a linked service. Create a new linked service or select an existing one. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-7.png" alt-text="Screenshot of the selection box to choose or create a new linked service."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-66.png" alt-text="Screenshot of the selection box to choose or create a new linked service."::: 7. Use the **Browse** button to select your source data folder. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-8.png" alt-text="Screenshot of a folder icon to browse for a folder path."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-67.png" alt-text="Screenshot of a folder icon to browse for a folder path."::: 8. Once you've selected a folder path, click **Continue** to set your data target. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-9.png" alt-text="Screenshot of the continue button in the guided process to proceed to select data targets."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-68.png" alt-text="Screenshot of the continue button in the guided process to proceed to select data targets."::: > [!NOTE] > You can choose to add multiple source folders with the **+** button. The other sources must also use the same linked service that you've already selected. 9. Then, select a **Target type** using the drop-down selection. 
For this tutorial, we will select **Azure SQL Database**. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-10.png" alt-text="Screenshot of a drop-down selection menu of all data target types."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-69.png" alt-text="Screenshot of a drop-down selection menu of all data target types."::: 10. You will then be prompted to select a linked service. Create a new linked service or select an existing one. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-11.png" alt-text="Screenshot of the selection box to choose or create a new linked service to your data target."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-70.png" alt-text="Screenshot of the selection box to choose or create a new linked service to your data target."::: -11. Create new **Target table(s)** or select an existing **Target table(s)**. Use the checkbox to make your selection(s). The **Preview** button will allow you to view your table data. +11. Create new **Target table(s)** or select an existing **Target table(s)**. Under **Existing entities**, use the checkbox to select an existing Target table(s), or under **New entities**, select **Edit new tables** to create new Target table(s). The **Preview** button will allow you to view your table data. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-12.png" alt-text="Screenshot of the create new tables button and the selection boxes to choose tables for your target."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-71.png" alt-text="Screenshot of the existing entities to choose tables for your target."::: ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-72.png" alt-text="Screenshot of the new entities tab to create new tables for your target."::: + +> [!NOTE] +> If there are existing table(s) at the Target with matching name(s), they will be selected by default under **Existing entities**. If not, new tables with matching name(s) are created under **New entities**. Additionally, you can edit new tables with the **Edit new tables** button. 12. Click **Continue** when you have finalized your selection(s). - :::image type="content" source="media/adf-cdc/change-data-capture-resource-13.png" alt-text="Screenshot of the continue button in the guided process to proceed to the next step."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-73.png" alt-text="Screenshot of the continue button in the guided process to proceed to the next step."::: > [!NOTE]-> You can choose multiple target tables from your SQL DB. Use the check boxes to select all targets. +> You can choose multiple target tables from your Azure SQL DB. Use the check boxes to select all targets. 13. You will automatically land in a new change data capture tab, where you can configure your new resource. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-14.png" alt-text="Screenshot of the change data capture studio."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-74.png" alt-text="Screenshot of the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-74.png"::: 14. A new mapping will automatically be created for you. You can update the **Source** and **Target** selections for your mapping by using the drop-down selection lists. 
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-15.png" alt-text="Screenshot of the source to target mapping in the change data capture studio."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-75.png" alt-text="Screenshot of the source to target mapping in the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-75.png"::: ++15. Once you've selected your tables, you should see that their columns are auto mapped by default with the **Auto map** toggle on. Auto map automatically maps the columns by name in the sink, picks up new column changes when source schema evolves and flows this to the supported sink types. If you want to retain Auto map and not change any column mappings, proceed to **Step 19** directly. -15. Once you've selected your tables, you should see that there are columns mapped. Select the **Column mappings** button to view the column mappings. + :::image type="content" source="media/adf-cdc/change-data-capture-resource-76.png" alt-text="Screenshot of default Auto map toggle set to on." lightbox="media/adf-cdc/change-data-capture-resource-76.png"::: - :::image type="content" source="media/adf-cdc/change-data-capture-resource-16.png" alt-text="Screenshot of the mapping icon to view column mappings."::: +16. If you want to enable the column mapping(s), select the mapping(s) and switch the Auto map toggle off, and then click the Column mappings button to view the column mappings. -16. Here you can view your column mappings. Use the drop-down lists to edit your column mappings for **Mapping method**, **Source column**, and **Target** column. + :::image type="content" source="media/adf-cdc/change-data-capture-resource-77.png" alt-text="Screenshot of mapping selection, Auto map toggle set to off and column mapping button." lightbox="media/adf-cdc/change-data-capture-resource-77.png"::: + +> [!NOTE] +> You can switch back to the default Auto mapping anytime by switching the **Auto map** toggle on. + +17. Here you can view your column mappings. Use the drop-down lists to edit your column mappings for Mapping method, Source column, and Target column. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-17.png" alt-text="Screenshot of the column mappings."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-78.png" alt-text="Screenshot of the column mapping page to allow users to edit column mappings." lightbox="media/adf-cdc/change-data-capture-resource-78.png"::: - You can add additional column mappings using the **New mapping** button. Use the drop-down lists to select the **Mapping method**, **Source column**, and **Target** column. +You can add additional column mappings using the **New mapping** button. Use the drop-down lists to select the **Mapping method**, **Source column**, and **Target** column. Also, if you want to track the delete operation for supported sink types, you can select the **Keys** column. You can click the **Data Preview - Refresh** button to visualize how the data will look at the target. 
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-18.png" alt-text="Screenshot of the Add new mapping icon to add new column mappings."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-79.png" alt-text="Screenshot of the Add new mapping icon to add new column mappings, drop down with mapping methods, select Keys column and Data preview refresh button for allowing users to visualize data at target." lightbox="media/adf-cdc/change-data-capture-resource-79.png"::: -17. When your mapping is complete, click the back arrow to return to the main canvas. +18. When your mapping is complete, click the back arrow to return to the main CDC canvas. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-19.png" alt-text="Screenshot of the arrow icon to return to the main change data capture canvas."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-80.png" alt-text="Screenshot of back button to go back to table mapping page." lightbox="media/adf-cdc/change-data-capture-resource-80.png"::: -> [!NOTE] -> You can add additional source to target mappings in one CDC artifact. Use the edit button to select more data sources and targets. Then, click **New mapping** and use the drop-down lists to set a new source and target mapping. +19. You can add additional source to target mappings in one CDC artifact. Use the Edit button to add more data sources and targets. Then, click **New mapping** and use the drop-down lists to set a new source and target mapping. Auto map can also be set on or off for each of these mappings independently. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-20.png" alt-text="Screenshot of the edit button to add new sources."::: - - :::image type="content" source="media/adf-cdc/change-data-capture-resource-21.png" alt-text="Screenshot of the new mapping button to set a new source to target mapping."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-81.png" alt-text="Screenshot of the edit button to add new sources and new mapping button to set a new source to target mapping." lightbox="media/adf-cdc/change-data-capture-resource-81.png"::: -18. Once your mapping complete, set your frequency using the **Set Latency** button. +20. Once your mapping is complete, set your CDC latency using the **Set Latency** button. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-22.png" alt-text="Screenshot of the set frequency button at the top of the canvas."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-82.png" alt-text="Screenshot of the set frequency button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-82.png"::: -19. Select the cadence of your change data capture and click **Apply** to make the changes. By default, it will be set to 15 minutes. +21. Select the latency of your CDC and click **Apply** to make the changes. By default, it will be set to **15 minutes**. For this tutorial, we will select the **Real-time** latency. Real-time latency continuously picks up changes in your source data at intervals of less than 1 minute. -For example, if you select 30 minutes, every 30 minutes, your change data capture will process your source data and pick up any changed data since the last processed time. 
+ For other latencies, if you select 15 minutes for example, your change data capture will process your source data every 15 minutes and pick up any data changed since the last processed time. > [!NOTE] -> The option to select Real-time to enable streaming data integration is coming soon. +> Support for **streaming data integration** (EventHub & Kafka data sources) is coming soon. When available, the latency will be set to Real-time by default. -20. Once everything has been finalized, publish your changes. +22. Once everything has been finalized, click **Publish All** to publish your changes. > [!NOTE] -> If you do not publish your changes, you will not be able to start your CDC resource. The start button will be greyed out. -21. Click **Start** to start running your **Change data capture**. +23. Click **Start** to start running your **Change Data Capture**. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-25.png" alt-text="Screenshot of the start button at the top of the canvas."::: -- :::image type="content" source="media/adf-cdc/change-data-capture-resource-26.png" alt-text="Screenshot of an actively running change data capture resource."::: - + :::image type="content" source="media/adf-cdc/change-data-capture-resource-85.png" alt-text="Screenshot of the start button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-85.png"::: + ## Monitor your Change data capture 1. To monitor your change data capture, navigate to the **Monitor** blade or click the monitoring icon from the CDC designer. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-27.png" alt-text="Screenshot of the monitoring blade."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-86.png" alt-text="Screenshot of the monitoring blade."::: - :::image type="content" source="media/adf-cdc/change-data-capture-resource-28.png" alt-text="Screenshot of the monitoring button at the top of the change data capture canvas."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-87.png" alt-text="Screenshot of the monitoring button at the top of the CDC canvas." lightbox="media/adf-cdc/change-data-capture-resource-87.png"::: -2. Select **Change data capture** to view your CDC resources. +2. Select **Change Data Capture (preview)** to view your CDC resources. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-29.png" alt-text="Screenshot of the Change data capture monitoring section."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-88.png" alt-text="Screenshot of the Change Data Capture monitoring section."::: 3. Here you can see the **Source**, **Target**, **Status**, and **Last processed** time of your change data capture. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-30.png" alt-text="Screenshot of an overview of the change data capture monitoring page."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-89.png" alt-text="Screenshot of an overview of the change data capture monitoring page." lightbox="media/adf-cdc/change-data-capture-resource-89.png"::: -4. Click the name of your CDC to see more details. You can see how many rows were read and written and other diagnostic information. +4. Click the name of your CDC to see more details. 
You can see how many changes (insert/update/delete) were read and written and other diagnostic information. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-31.png" alt-text="Screenshot of the detailed monitoring of a selected change data capture."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-90.png" alt-text="Screenshot of the detailed monitoring of a selected change data capture." lightbox="media/adf-cdc/change-data-capture-resource-90.png"::: > [!NOTE] > If you have multiple mappings set up in your Change data capture, each mapping will show as a different color. Click on the bar to see specific details for each mapping or use the Diagnostics at the bottom of the screen. - :::image type="content" source="media/adf-cdc/change-data-capture-resource-32.png" alt-text="Screenshot of the detailed monitoring page of a change data capture with multiple sources to target mappings."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-91.png" alt-text="Screenshot of the detailed monitoring page of a change data capture with multiple sources to target mappings." lightbox="media/adf-cdc/change-data-capture-resource-91.png"::: - :::image type="content" source="media/adf-cdc/change-data-capture-resource-33.png" alt-text="Screenshot of a detailed breakdown of each mapping in the change data capture artifact."::: + :::image type="content" source="media/adf-cdc/change-data-capture-resource-92.png" alt-text="Screenshot of a detailed breakdown of each mapping in the change data capture artifact." lightbox="media/adf-cdc/change-data-capture-resource-92.png"::: ## Next steps |
data-factory | Sap Change Data Capture Introduction Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md | Azure Data Factory is an ETL and ELT data integration platform as a service (Paa The SAP connectors in Data Factory extract SAP source data only in batches. Each batch processes existing and new data the same. In data extraction in batch mode, changes between existing and new datasets aren't identified. This type of extraction mode isn't optimal when you have large datasets like tables that have millions or billions of records that change often. -You can keep your copy of SAP data fresh and up-to-date by frequently extracting the full dataset, but this approach is expensive and inefficient. You also can use a manual, limited workaround to extract mostly new or updated records. In a process called *watermarking*, extraction requires using a timestamp column, monotonously increasing values, and continuously tracking the highest value since the last extraction. But some tables don't have a column that you can use for watermarking. This process also doesn't identify a deleted record as a change in the dataset. +You can keep your copy of SAP data fresh and up-to-date by frequently extracting the full dataset, but this approach is expensive and inefficient. You also can use a manual, limited workaround to extract mostly new or updated records. In a process called *watermarking*, extraction requires using a timestamp column, monotonically increasing values, and continuously tracking the highest value since the last extraction. But some tables don't have a column that you can use for watermarking. This process also doesn't identify a deleted record as a change in the dataset. ## SAP CDC capabilities |
data-factory | Self Hosted Integration Runtime Auto Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-auto-update.md | You can check the last update datetime in your self-hosted integration runtime c :::image type="content" source="media/create-self-hosted-integration-runtime/shir-auto-update-2.png" alt-text="Screenshot of checking the update time"::: -You can use this [PowerShell command](/powershell/module/az.datafactory/get-azdatafactoryv2integrationruntime?view=azps-6.1.0&preserve-view=true#example-5--get-self-hosted-integration-runtime-with-detail-status) to get the auto-update version. +You can use this [PowerShell command](/powershell/module/az.datafactory/get-azdatafactoryv2integrationruntime#example-5--get-self-hosted-integration-runtime-with-detail-status) to get the auto-update version. > [!NOTE] > If you have multiple self-hosted integration runtime nodes, there is no downtime during auto-update. The auto-update happens in one node first while others are working on tasks. When the first node finishes the update, it will take over the remaining tasks while the other nodes are updating. If you only have one self-hosted integration runtime, then it has some downtime during the auto-update. |
data-factory | Transform Data Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md | Last updated 10/19/2022 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -You use data transformation activities in a Data Factory or Synapse [pipeline](concepts-pipelines-activities.md) to transform and process raw data into predictions and insights. The Script activity is one of the transformation activities that pipelines support. This article builds on the [transform data article](transform-data.md), which presents a general overview of data transformation and the supported transformation activities. +You use data transformation activities in a Data Factory or Synapse [pipeline](concepts-pipelines-activities.md) to transform and process raw data into predictions and insights. The Script activity is one of the transformation activities that pipelines support. This article builds on the [transform data article](transform-data.md), which presents a general overview of data transformation and the supported transformation activities. -Using the script activity, you can execute common operations with Data Manipulation Language (DML), and Data Definition Language (DDL). DML statements like INSERT, UPDATE, DELETE and SELECT let users insert, modify, delete and retrieve data in the database. DDL statements like CREATE, ALTER and DROP allow a database manager to create, modify, and remove database objects such as tables, indexes, and users. +Using the script activity, you can execute common operations with Data Manipulation Language (DML), and Data Definition Language (DDL). DML statements like INSERT, UPDATE, DELETE and SELECT let users insert, modify, delete and retrieve data in the database. DDL statements like CREATE, ALTER and DROP allow a database manager to create, modify, and remove database objects such as tables, indexes, and users. You can use the Script activity to invoke a SQL script in one of the following data stores in your enterprise or on an Azure virtual machine (VM): - Azure SQL Database - Azure Synapse Analytics -- SQL Server Database. If you are using SQL Server, install Self-hosted integration runtime on the same machine that hosts the database or on a separate machine that has access to the database. Self-Hosted integration runtime is a component that connects data sources on-premises/on Azure VM with cloud services in a secure and managed way. See the [Self-hosted integration runtime](create-self-hosted-integration-runtime.md) article for details. -- Oracle -- Snowflake +- SQL Server Database. If you are using SQL Server, install Self-hosted integration runtime on the same machine that hosts the database or on a separate machine that has access to the database. Self-Hosted integration runtime is a component that connects data sources on-premises/on Azure VM with cloud services in a secure and managed way. See the [Self-hosted integration runtime](create-self-hosted-integration-runtime.md) article for details. +- Oracle +- Snowflake -The script may contain either a single SQL statement or multiple SQL statements that run sequentially. You can use the Execute SQL task for the following purposes: +The script may contain either a single SQL statement or multiple SQL statements that run sequentially. You can use the Script task for the following purposes: - Truncate a table in preparation for inserting data. - Create, alter, and drop database objects such as tables and views. 
The following table describes these JSON properties: Sample output: ```json { -    "resultSetCount": 2, -    "resultSets": [ -        { -            "rowCount": 10, -            "rows":[ -                { -                    "<columnName1>": "<value1>", -                    "<columnName2>": "<value2>", -                    ... -                } -            ] -        }, -        ... -    ], -    "recordsAffected": 123, -    "outputParameters":{ -        "<parameterName1>": "<value1>", -        "<parameterName2>": "<value2>" -    }, -    "outputLogs": "<logs>", -    "outputLogsLocation": "<folder path>", -    "outputTruncated": true, + "resultSetCount": 2, + "resultSets": [ + { + "rowCount": 10, + "rows":[ + { + "<columnName1>": "<value1>", + "<columnName2>": "<value2>", + ... + } + ] + }, + ... + ], + "recordsAffected": 123, + "outputParameters":{ + "<parameterName1>": "<value1>", + "<parameterName2>": "<value2>" + }, + "outputLogs": "<logs>", + "outputLogsLocation": "<folder path>", + "outputTruncated": true, ... } ``` |
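To pair the sample output above with the activity's input shape, here is a hedged sketch of a Script activity definition; the linked service reference and table names are illustrative. It combines a `NonQuery` script (DDL/DML with no result set) with a `Query` script that returns rows.

```json
{
    "name": "SampleScriptActivity",
    "type": "Script",
    "linkedServiceName": {
        "referenceName": "AzureSqlDatabaseLinkedService",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "scripts": [
            {
                "type": "NonQuery",
                "text": "TRUNCATE TABLE dbo.StagingOrders"
            },
            {
                "type": "Query",
                "text": "SELECT COUNT(*) AS OrderCount FROM dbo.Orders"
            }
        ]
    }
}
```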
data-factory | Tutorial Managed Virtual Network Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-sql-managed-instance.md | the page. ## Creating Forwarding Rule to Endpoint -1. Login and copy script [ip_fwd.sh](https://github.com/sajitsasi/az-ip-fwd/blob/main/ip_fwd.sh) to your backend server VMs. -2. Run the script on with the following options:<br/> +1. Log in and copy the script [ip_fwd.sh](https://github.com/sajitsasi/az-ip-fwd/blob/main/ip_fwd.sh) to your backend server VMs. ++ > [!NOTE] + > This script only sets IP forwarding temporarily. To make the setting permanent, ensure that the line "net.ipv4.ip_forward=1" is uncommented in the file /etc/sysctl.conf. ++1. Run the script with the following options:<br/> **sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433**<br/> <FQDN/IP> is the host of your SQL Managed Instance. |
databox-online | Azure Stack Edge Gpu 2105 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2105-release-notes.md | The following new features are available in the Azure Stack Edge 2105 release. - Diagnostics and telemetry fixes have been made. - Proactive log collection is enhanced for compute logs. -- **Support for Az cmdlets** - Starting this release, the Az cmdlets are available (in preview) when connecting to the local Azure Resource Manager of the device or when deploying VM workloads. For more information, see [Az cmdlets](/powershell/azure/new-azureps-module-az?view=azps-5.9.0&preserve-view=true).+- **Support for Az cmdlets** - Starting this release, the Az cmdlets are available (in preview) when connecting to the local Azure Resource Manager of the device or when deploying VM workloads. For more information, see [Az cmdlets](/powershell/azure/new-azureps-module-az). - **Enable remote PowerShell session over HTTP** - Starting this release, you can enable a remote PowerShell session into a device over *http* via the local UI. For more information, see how to [Enable Remote PowerShell over http](azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#enable-device-access-via-remote-powershell-over-http) for your device. |
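Once remote PowerShell over *http* is enabled in the local UI, the client-side connection is standard WinRM. A rough sketch from a Windows client follows; the device IP is a placeholder, and the `Minishell` session configuration name is an assumption based on the typical Azure Stack Edge setup, so confirm it against the linked article.

```powershell
$deviceIp = "10.100.10.10"   # Placeholder: IP address of your device

# Trust the device for WinRM over HTTP (run from an elevated prompt).
Set-Item WSMan:\localhost\Client\TrustedHosts -Value $deviceIp -Concatenate -Force

# Open the remote session with the device administrator credentials.
$cred = Get-Credential -Message "Enter the device credentials"
Enter-PSSession -ComputerName $deviceIp -ConfigurationName "Minishell" -Credential $cred
```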
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell.md | To return a list of all the VMs that are running on your Azure Stack Edge device, run the following cmdlet: ```powershell Get-AzVM -ResourceGroupName <String> -Name <String> ``` -For more information about this cmdlet, see [Get-AzVM](/powershell/module/az.compute/get-azvm?view=azps-6.1.0&preserve-view=true). +For more information about this cmdlet, see [Get-AzVM](/powershell/module/az.compute/get-azvm). ### [AzureRM](#tab/azure-rm) To turn on a virtual machine that's running on your device, run the following cmdlet: ```powershell Start-AzVM [-Name] <String> [-ResourceGroupName] <String> ```-For more information about this cmdlet, see [Start-AzVM](/powershell/module/az.compute/start-azvm?view=azps-5.9.0&preserve-view=true). +For more information about this cmdlet, see [Start-AzVM](/powershell/module/az.compute/start-azvm). ### [AzureRM](#tab/azure-rm) To stop or shut down a virtual machine that's running on your device, run the following cmdlet: ```powershell Stop-AzVM [-Name] <String> [-StayProvisioned] [-ResourceGroupName] <String> ``` -For more information about this cmdlet, see [Stop-AzVM cmdlet](/powershell/module/az.compute/stop-azvm?view=azps-5.9.0&preserve-view=true). +For more information about this cmdlet, see [Stop-AzVM cmdlet](/powershell/module/az.compute/stop-azvm). ### [AzureRM](#tab/azure-rm) To remove a virtual machine from your device, run the following cmdlet: ```powershell Remove-AzVM [-Name] <String> [-ResourceGroupName] <String> ```-For more information about this cmdlet, see [Remove-AzVm cmdlet](/powershell/module/az.compute/remove-azvm?view=azps-5.9.0&preserve-view=true). +For more information about this cmdlet, see [Remove-AzVm cmdlet](/powershell/module/az.compute/remove-azvm). ### [AzureRM](#tab/azure-rm) |
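Taken together, the Az cmdlets above cover the full VM lifecycle on the device. A minimal end-to-end sketch, assuming you've already connected to the device's local Azure Resource Manager; the resource group and VM names are placeholders.

```powershell
$rg = "myasevm-rg"   # Placeholder: resource group on the device

# List the VMs in the resource group with their provisioning state.
Get-AzVM -ResourceGroupName $rg | Format-Table Name, ProvisioningState

# Start a VM, stop it while keeping it provisioned, then remove it.
Start-AzVM -Name "myvm" -ResourceGroupName $rg
Stop-AzVM -Name "myvm" -ResourceGroupName $rg -StayProvisioned -Force
Remove-AzVM -Name "myvm" -ResourceGroupName $rg -Force
```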
databox-online | Azure Stack Edge Gpu Manage Edge Resource Groups Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-edge-resource-groups-portal.md | Follow these steps to view the Edge resource groups for the current subscription  > [!NOTE]- > You can get the same listing by using [Get-AzResource](/powershell/module/az.resources/get-azresource?view=azps-6.1.0&preserve-view=true) in Azure PowerShell after you set up the Azure Resource Manager environment on your device. For more information, see [Connect to Azure Resource Manager](azure-stack-edge-gpu-connect-resource-manager.md). + > You can get the same listing by using [Get-AzResource](/powershell/module/az.resources/get-azresource) in Azure PowerShell after you set up the Azure Resource Manager environment on your device. For more information, see [Connect to Azure Resource Manager](azure-stack-edge-gpu-connect-resource-manager.md). ## Delete an Edge resource group |
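As the note says, the same listing is available from Azure PowerShell once the Azure Resource Manager environment is set up against the device. A minimal sketch:

```powershell
# List every resource visible to the local Azure Resource Manager,
# sorted by resource group, to see what each Edge resource group contains.
Get-AzResource |
    Sort-Object ResourceGroupName |
    Format-Table Name, ResourceType, ResourceGroupName
```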
databox | Data Box Customer Managed Encryption Key Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-customer-managed-encryption-key-portal.md | If you receive any errors related to your customer-managed key, use the followin | SsemUserErrorKeyVaultBadRequestException | Applied a customer-managed key, but key access has not been granted or has been revoked, or the key vault couldn't be accessed because a firewall is enabled. | Add the identity selected to your key vault to enable access to the customer-managed key. If the key vault has a firewall enabled, switch to a system-assigned identity and then add a customer-managed key. For more information, see how to [Enable the key](#enable-key). | | SsemUserErrorEncryptionKeyTypeNotSupported | The encryption key type isn't supported for the operation. | Enable a supported encryption type on the key - for example, RSA or RSA-HSM. For more information, see [Key types, algorithms, and operations](../key-vault/keys/about-keys-details.md). | | SsemUserErrorSoftDeleteAndPurgeProtectionNotEnabled | Key vault does not have soft delete or purge protection enabled. | Ensure that both soft delete and purge protection are enabled on the key vault. |-| SsemUserErrorInvalidKeyVaultUrl<br>(Command-line only) | An invalid key vault URI was used. | Get the correct key vault URI. To get the key vault URI, use [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault?view=azps-7.1.0&preserve-view=true) in PowerShell. | +| SsemUserErrorInvalidKeyVaultUrl<br>(Command-line only) | An invalid key vault URI was used. | Get the correct key vault URI. To get the key vault URI, use [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault) in PowerShell. | | SsemUserErrorKeyVaultUrlWithInvalidScheme | Only HTTPS is supported for passing the key vault URI. | Pass the key vault URI over HTTPS. | | SsemUserErrorKeyVaultUrlInvalidHost | The key vault URI host is not an allowed host in the geographical region. | In the public cloud, the key vault URI should end with `vault.azure.net`. In the Azure Government cloud, the key vault URI should end with `vault.usgovcloudapi.net`. | | Generic error | Could not fetch the passkey. | This error is a generic error. Contact Microsoft Support to troubleshoot the error and determine the next steps.| |
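For the `SsemUserErrorInvalidKeyVaultUrl` case, retrieving the correct URI is a one-liner in PowerShell. A sketch with a placeholder vault name:

```powershell
# Look up the key vault and print its URI. In the public cloud the URI ends
# with vault.azure.net; in Azure Government it ends with vault.usgovcloudapi.net.
$vault = Get-AzKeyVault -VaultName "my-keyvault"
$vault.VaultUri
```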
deployment-environments | How To Configure Deployment Environments User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-deployment-environments-user.md | When you assign the role at the project level, the user can perform the precedin ## Assign permissions to developers for a project -1. Select the project that you want your development team members to be able to access. -2. Select **Access control (IAM)** from the left menu. +1. In the Azure portal, go to your project. - :::image type="content" source=".\media\configure-deployment-environments-user\access-control-page.png" alt-text="Screenshot that shows the link to the access control page."::: +1. In the left menu, select **Access control (IAM)**. -3. Select **Add** > **Add role assignment**. +1. Select **Add** > **Add role assignment**. - :::image type="content" source=".\media\configure-deployment-environments-user\add-role-assignment.png" alt-text="Screenshot that shows the menu option for adding a role assignment."::: +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **[Deployment Environments User](how-to-configure-deployment-environments-user.md)**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have access to the project. | -4. On the **Add role assignment** page, on the **Role** tab, search for **deployment environments user**, select the **Deployment Environments User** built-in role, and then select **Next**. -5. On the **Members** tab, select **+ Select members**. -6. In **Select members**, select the Active Directory users or groups that you want to add, and then choose **Select**. -7. On the **Members** tab, select **Review + assign**. + :::image type="content" source="media/quickstart-create-configure-projects/add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment pane."::: The users can now view the project and all the environment types that you've enabled within it. Users who have the Deployment Environments User role can also [create environments from the Azure CLI](./quickstart-create-access-environments.md). ## Assign permissions to developers for an environment type 1. Select the project that you want your development team members to be able to access.-2. Select **Environment types**, and then select the ellipsis (**...**) beside the specific environment type. +1. Select **Environment types**, and then select the ellipsis (**...**) beside the specific environment type. :::image type="content" source=".\media\configure-deployment-environments-user\project-environment-types.png" alt-text="Screenshot that shows the environment types associated with a project."::: -3. Select **Access control (IAM)**. -- :::image type="content" source=".\media\configure-deployment-environments-user\access-control-page.png" alt-text="Screenshot that shows the link to the access control page."::: +1. Select **Access control (IAM)**. -4. Select **Add** > **Add role assignment**. +1. Select **Add** > **Add role assignment**. - :::image type="content" source=".\media\configure-deployment-environments-user\add-role-assignment.png" alt-text="Screenshot that shows the menu option for adding a role assignment."::: +1. Assign the following role. 
For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **[Deployment Environments User](how-to-configure-deployment-environments-user.md)**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have access to the project. | -5. On the **Add role assignment** page, on the **Role** tab, search for **deployment environments user**, select the **Deployment Environments User** built-in role, and then select **Next**. -6. On the **Members** tab, select **+ Select members**. -7. In **Select members**, select the Active Directory users or groups that you want to add, and then choose **Select**. -8. On the **Members** tab, select **Review + assign**. + :::image type="content" source="media/quickstart-create-configure-projects/add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment pane."::: The users can now view the project and the specific environment type that you've granted them access to. Users who have the Deployment Environments User role can also [create environments by using the Azure CLI](./quickstart-create-access-environments.md). |
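The same assignment can be scripted rather than clicked through the portal. A hedged sketch using `New-AzRoleAssignment`; the sign-in name, subscription ID, resource group, and project name are all placeholders.

```powershell
# Scope the assignment to the Deployment Environments project resource.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>" +
         "/providers/Microsoft.DevCenter/projects/<project-name>"

# Grant the built-in Deployment Environments User role to a developer.
New-AzRoleAssignment -SignInName "developer@contoso.com" `
    -RoleDefinitionName "Deployment Environments User" `
    -Scope $scope
```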
deployment-environments | How To Configure Project Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-admin.md | When you assign the role at the project level, the user can perform the precedin ## Assign permissions to dev managers for a project 1. Select the project that you want your development team members to be able to access.-2. Select **Access control (IAM)** from the left menu. +1. Select **Access control (IAM)** from the left menu. +1. Select **Add** > **Add role assignment**. - :::image type="content" source=".\media\configure-project-admin\access-control-page.png" alt-text="Screenshot that shows the link to the access control page."::: +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **DevCenter Project Admin**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have administrative access to the project. | -3. Select **Add** > **Add role assignment**. -- :::image type="content" source=".\media\configure-project-admin\add-role-assignment.png" alt-text="Screenshot that shows the menu option for adding a role assignment."::: --4. On the **Add role assignment** page, on the **Role** tab, search for **devcenter project admin**, select the **DevCenter Project Admin** built-in role, and then select **Next**. -- :::image type="content" source=".\media\configure-project-admin\built-in-role.png" alt-text="Screenshot that shows selecting the built-in DevCenter Project Admin role."::: --5. On the **Members** tab, select **+ Select members**. -- :::image type="content" source=".\media\configure-project-admin\select-role-members.png" alt-text="Screenshot that shows the link for selecting role members."::: - -1. In **Select members**, select the Active Directory users or groups that you want to add, and then choose **Select**. --7. On the **Members** tab, select **Review + assign**. + :::image type="content" source="media/configure-project-admin/add-role-assignment-admin.png" alt-text="Screenshot that shows the Add role assignment pane."::: The users can now view the project and manage all the environment types that you've enabled within it. DevCenter Project Admin users can also [create environments from the Azure CLI](./quickstart-create-access-environments.md). The users can now view the project and manage all the environment types that you :::image type="content" source=".\media\configure-project-admin\project-environment-types.png" alt-text="Screenshot that shows the environment types associated with a project."::: -3. Select **Access control (IAM)**. -- :::image type="content" source=".\media\configure-project-admin\access-control-page.png" alt-text="Screenshot that shows the link to the access control page."::: --4. Select **Add** > **Add role assignment**. -- :::image type="content" source=".\media\configure-project-admin\add-role-assignment.png" alt-text="Screenshot that shows the menu option for adding a role assignment."::: +1. In the left menu, select **Access control (IAM)**. -5. On the **Add role assignment** page, on the **Role** tab, search for **devcenter project admin**, select the **DevCenter Project Admin** built-in role, and then select **Next**. +1. Select **Add** > **Add role assignment**. 
- :::image type="content" source=".\media\configure-project-admin\built-in-role.png" alt-text="Screenshot that shows selecting the built-in DevCenter Project Admin role."::: +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **DevCenter Project Admin**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have administrative access to the environment type. | -6. On the **Members** tab, select **+ Select members**. -7. In **Select members**, select the Active Directory users or groups that you want to add, and then choose **Select**. -8. On the **Members** tab, select **Review + assign**. + :::image type="content" source="media/configure-project-admin/add-role-assignment-admin.png" alt-text="Screenshot that shows the Add role assignment pane."::: The users can now view the project and manage only the specific environment type that you've granted them access to. DevCenter Project Admin users can also [create environments by using the Azure CLI](./quickstart-create-access-environments.md). |
deployment-environments | How To Manage Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-manage-environments.md | Last updated 02/28/2023 # Manage your environment -In Azure Deployment Environments Preview, a dev infrastructure manager gives developers access to projects and the environment types associated with them. Once a developer has access, they can create deployments environments based on the pre-configured environment types. The dev infrastructure manager can also give management permissions to the creator of the environment, like Owner or Contributor. +In Azure Deployment Environments Preview, a dev infra admin gives developers access to projects and the environment types associated with them. Once a developer has access, they can create deployment environments based on the pre-configured environment types. The permissions that the creator of the environment and the rest of the team get to the environment's resources are defined in the specific environment type. As a developer, you can create and manage your environments from the developer portal or from the Azure CLI. ## Prerequisites - Access to a project that has at least one environment type.-- The [Deployment Environments User](how-to-configure-deployment-environments-user.md) role, the [DevCenter Project Admin](how-to-configure-project-admin.md) role, or a [built-in role](../role-based-access-control/built-in-roles.md) that has appropriate permissions.+- The [Deployment Environments User](how-to-configure-deployment-environments-user.md) role, the [DevCenter Project Admin](how-to-configure-project-admin.md) role, or a [built-in role](../role-based-access-control/built-in-roles.md) that has appropriate permissions to create an environment. ## Manage an environment by using the developer portal -The developer portal provides a graphical interface for creation and management tasks and provides a visual status for your environments and dev boxes. You can create, redeploy, and delete your environments as needed. +The developer portal provides a graphical interface for development teams to create new environments and manage existing ones. You can create, redeploy, and delete your environments as needed. ### Create an environment by using the developer portal The developer portal provides a graphical interface for creation and management :::image type="content" source="media/how-to-manage-environments/environment-resources-link.png" alt-text="Screenshot showing an environment tile with the Environment Resources link highlighted. "::: -1. The environment resources display in the Azure portal. +1. The environment resources are displayed in the Azure portal. :::image type="content" source="media/how-to-manage-environments/environment-resources.png" alt-text="Screenshot showing environment resources in the Azure portal."::: ### Redeploy an environment by using the developer portal -When you need to update your environment parameters, you can redeploy it. The redeployment process updates any existing resources with changed properties and creates any new resources from the catalog item in the environment resource group. +When you need to update your environment, you can redeploy it. The redeployment process updates any existing resources with changed properties or creates any new resources based on the latest configuration of the catalog item. 1. Sign in to the [developer portal](https://devportal.microsoft.com). 
When you need to update your environment parameters, you can redeploy it. The re 1. To view the redeployed resources, select **Environment Resources**. -1. The environment resources display in the Azure portal. +1. The environment resources are displayed in the Azure portal. :::image type="content" source="media/how-to-manage-environments/redeployed-resources.png" alt-text="Screenshot showing redeployed resources in the Azure portal."::: |
deployment-environments | Quickstart Create And Configure Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md | In this quickstart you assign the Owner role to the system-assigned managed iden :::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted."::: -1. In Azure role assignments, select **Add role assignment (Preview)**, and then enter or select the following information: - - In **Scope**, select **Subscription**. - - In **Subscription**, select the subscription in which to use the managed identity. - - In **Role**, select **Owner**. - - Select **Save**. +1. In Azure role assignments, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**: + + |Name |Value | + ||-| + |**Scope**|Subscription| + |**Subscription**|Select the subscription in which to use the managed identity.| + |**Role**|Owner| + ## Configure a project To configure a project, add a [project environment type](how-to-configure-project-environment-types.md): To configure a project, add a [project environment type](how-to-configure-projec 1. Select **Add** > **Add role assignment**. - :::image type="content" source="media/quickstart-create-configure-projects/project-access-control-page.png" alt-text="Screenshot that shows the Access control pane."::: --1. In **Add role assignment**, enter the following information, and then select **Save**: -- 1. On the **Role** tab, select either [DevCenter Project Admin](how-to-configure-project-admin.md) or [Deployment Environments user](how-to-configure-deployment-environments-user.md). - 1. On the **Members** tab, select either a **User, group, or service principal** or a **Managed identity** to assign access. +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **[Deployment Environments User](how-to-configure-deployment-environments-user.md)**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have access to the project. | :::image type="content" source="media/quickstart-create-configure-projects/add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment pane."::: |
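If you'd rather script the Owner assignment for the system-assigned managed identity, a sketch like the following works; the principal ID (shown on the identity blade) and the subscription ID are placeholders.

```powershell
# Principal ID of the dev center's system-assigned managed identity.
$principalId = "<managed-identity-principal-id>"

# Grant Owner on the subscription that environments will deploy into.
New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Owner" `
    -Scope "/subscriptions/<subscription-id>"
```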
dev-box | How To Configure Azure Compute Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md | Follow these steps to manually assign each role: 1. Select the **Access Control (IAM)** menu item. -1. Select **+ Add** > **Add role assignment**. +1. Select **Add** > **Add role assignment**. -1. On the Role tab, select **Reader**, and then select **Next**. --1. On the Members tab, select **+ Select Members**. --1. In Select members, search for *Windows 365*, select **Windows 365** from the list, and then select **Select**. --1. On the Members tab, select **Next**. --1. On the Review + assign tab, select **Review + assign**. +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **Reader**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Search for and select **Windows 365**. | #### Dev center Managed Identity 1. Open the gallery you want to attach to the dev center from the [Azure portal](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Compute%2Fgalleries). You can also search for Azure Compute Galleries to find your gallery. 1. Select **Access Control (IAM)** from the left menu. -1. Select **+ Add** > **Add role assignment**. --1. On the Role tab, select the **Contributor** role, and then select **Next**. --1. On the Members tab, under **Assign access to**, select **Managed Identity**, and then select **+ Select Members**. --1. In Select managed identities, search for and select the user assigned managed identity you created in "Create a Dev center Managed Identity" and then select -**Select**. --1. On the Members tab, select **Next**. +1. Select **Add** > **Add role assignment**. -1. On the Review + assign tab, select **Review + assign**. +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **Contributor**. | + | **Assign access to** | Select **Managed Identity**. | + | **Members** | Search for and select the user assigned managed identity you created in [Add a user assigned identity to dev center](#add-a-user-assigned-identity-to-dev-center). | You can use the same managed identity in multiple DevCenters and Azure Compute Galleries. Any DevCenter with the managed identity added will have the necessary permissions to the images in the Azure Compute Gallery you've added the owner role assignment to. |
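Both gallery role assignments can also be made from PowerShell. A sketch, under the assumption that the Windows 365 service principal can be looked up by display name; the gallery and identity names are placeholders.

```powershell
# Resource ID of the compute gallery (placeholder names).
$galleryId = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>" +
             "/providers/Microsoft.Compute/galleries/<gallery-name>"

# Reader for the Windows 365 first-party service principal.
$w365 = Get-AzADServicePrincipal -DisplayName "Windows 365"
New-AzRoleAssignment -ObjectId $w365.Id -RoleDefinitionName "Reader" -Scope $galleryId

# Contributor for the dev center's user-assigned managed identity.
$identity = Get-AzUserAssignedIdentity -ResourceGroupName "<rg-name>" -Name "<identity-name>"
New-AzRoleAssignment -ObjectId $identity.PrincipalId -RoleDefinitionName "Contributor" -Scope $galleryId
```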
dev-box | How To Dev Box User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-dev-box-user.md | A DevCenter Dev Box User can: 1. Select **Access Control (IAM)** from the left menu. - :::image type="content" source="./media/how-to-dev-box-user/access-control-tab.png " alt-text="Screenshot showing the Project Access control page with the Access Control link highlighted."::: - 1. Select **Add** > **Add role assignment**.- - :::image type="content" source="./media/how-to-dev-box-user/add-role-assignment.png" alt-text="Screenshot showing the Add menu with Add role assignment highlighted."::: --1. On the Add role assignment page, on the Role tab, search for *devcenter dev box user*, select the **DevCenter Dev Box User** built-in role, and then select **Next**. - - :::image type="content" source="./media/how-to-dev-box-user/dev-box-user-role.png" alt-text="Screenshot showing the search box."::: -1. On the Members tab, select **+ Select Members**. - - :::image type="content" source="./media/how-to-dev-box-user/dev-box-user-select-members.png" alt-text="Screenshot showing the Members tab with Select members highlighted."::: --1. In **Select members**, select the Active Directory Users or Groups you want to add, and then select **Select**. - - :::image type="content" source="./media/how-to-dev-box-user/select-members-search.png" alt-text="Screenshot showing the Select members pane with a user account highlighted."::: +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **DevCenter Dev Box User**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have access to the project. | -1. On the Members tab, select **Review + assign**. + :::image type="content" source="media/how-to-dev-box-user/add-role-assignment-user.png" alt-text="Screenshot that shows the Add role assignment pane."::: The user will now be able to view the project and all the pools within it. Dev box users can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal). |
dev-box | How To Manage Dev Box Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md | Each project is associated with a single dev center. When you associate a projec Microsoft Dev Box makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their team, like creating and managing dev box pools. To provide users permissions to manage projects, add them to the DevCenter Project Admin role. The tasks in this quickstart can be performed by project admins. -To learn how to add a user to the Project Admin role, see [Provide access to a dev box project](#provide-access-to-a-dev-box-project). +To learn how to add a user to the Project Admin role, refer to [Provide access to projects for project admins](how-to-project-admin.md). [!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)] Before users can create dev boxes based on the dev box pools in a project, you m 1. Select **Access Control (IAM)** from the left menu. - :::image type="content" source="./media/how-to-manage-dev-box-projects/access-control-tab.png" alt-text="Screenshot showing the Project Access control page with the Access Control link highlighted."::: - 1. Select **Add** > **Add role assignment**. - :::image type="content" source="./media/how-to-manage-dev-box-projects/add-role-assignment.png" alt-text="Screenshot showing the Add menu with Add role assignment highlighted."::: --1. On the Add role assignment page, search for *devcenter dev box user*, select the **DevCenter Dev Box User** built-in role, and then select **Next**. -- :::image type="content" source="./media/how-to-manage-dev-box-projects/dev-box-user-role.png" alt-text="Screenshot showing the Add role assignment search box highlighted."::: --1. On the Members page, select **+ Select Members**. +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **DevCenter Dev Box User**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have access to the project. | - :::image type="content" source="./media/how-to-manage-dev-box-projects/dev-box-user-select-members.png" alt-text="Screenshot showing the Members tab with Select members highlighted."::: --1. On the **Select members** pane, select the Active Directory Users or Groups you want to add, and then select **Select**. -- :::image type="content" source="./media/how-to-manage-dev-box-projects/select-members-search.png" alt-text="Screenshot showing the Select members pane with a user account highlighted."::: --1. On the Add role assignment page, select **Review + assign**. + :::image type="content" source="media/how-to-manage-dev-box-projects/add-role-assignment-user.png" alt-text="Screenshot that shows the Add role assignment pane."::: The user will now be able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal). +To assign administrative access to a project, select the DevCenter Project Admin role. For more details on how to add a user to the Project Admin role, refer to [Provide access to projects for project admins](how-to-project-admin.md). 
+ ## Next steps - [Manage dev box pools](./how-to-manage-dev-box-pools.md) |
dev-box | How To Manage Dev Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md | To make role assignments, use the following steps: 1. Select **Access Control (IAM)** from the left menu. - :::image type="content" source="./media/how-to-manage-dev-center/dev-center-access-control.png" alt-text="Screenshot showing the dev center page with the Access Control link highlighted."::: - 1. Select **Add** > **Add role assignment**. - :::image type="content" source="./media/how-to-manage-dev-center/add-role-assignment.png" alt-text="Screenshot showing the Add menu with Add role assignment highlighted."::: --1. On the Add role assignment page, choose the built-in role you want to assign, and then select **Next**. -- :::image type="content" source="./media/how-to-manage-dev-center/dev-center-built-in-roles.png" alt-text="Screenshot showing the Add role assignment search box highlighted."::: --1. On the Members page, select **+ Select Members**. -- :::image type="content" source="./media/how-to-manage-dev-center/dev-center-owner-select-members.png" alt-text="Screenshot showing the Members tab with Select members highlighted."::: --1. On the **Select members** pane, select the Active Directory Users or Groups you want to add, and then select **Select**. -- :::image type="content" source="./media/how-to-manage-dev-center/select-members-search.png" alt-text="Screenshot showing the Select members pane with a user account highlighted."::: +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **Owner**, **Contributor**, or **Reader**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have access to the dev center. | -1. On the Add role assignment page, select **Review + assign**. ## Next steps - [Provide access to projects for project admins](./how-to-project-admin.md) |
dev-box | How To Project Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-project-admin.md | Follow the instructions below to add role assignments for this role. :::image type="content" source="./media/how-to-project-admin/access-control-tab.png" alt-text="Screenshot showing the Project Access control page with the Access Control link highlighted."::: 1. Select **Add** > **Add role assignment**.- - :::image type="content" source="./media/how-to-project-admin/add-role-assignment.png" alt-text="Screenshot showing the Add menu with Add role assignment highlighted."::: --1. On the Add role assignment page, on the Role tab, search for *devcenter project admin*, select the **DevCenter Project Admin** built-in role, and then select **Next**. - :::image type="content" source="./media/how-to-project-admin/project-admin-role.png" alt-text="Screenshot showing the search box highlighted."::: -1. On the Members tab, select **+ Select Members**. - :::image type="content" source="./media/how-to-project-admin/project-admin-select-members.png" alt-text="Screenshot showing the Members tab with Select members highlighted."::: --1. In **Select members**, select the Active Directory Users or Groups you want to add, and then select **Select**. - :::image type="content" source="./media/how-to-project-admin/select-members-search.png" alt-text="Screenshot showing the Select members pane with a user account highlighted."::: +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **DevCenter Project Admin**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have administrative access to the project. | -1. On the Members tab, select **Review + assign**. + :::image type="content" source="media/how-to-project-admin/add-role-assignment-admin.png" alt-text="Screenshot that shows the Add role assignment pane."::: The user will now be able to manage the project and create dev box pools within it. |
dev-box | Overview What Is Microsoft Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md | This diagram shows the components of the Dev Box service and the relationships b :::image type="content" source="media/overview-what-is-microsoft-dev-box/dev-box-architecture.png" alt-text="Diagram showing dev box architecture."::: -Dev box service configuration begins with the creation of a dev center, which aims to represent the units of organisation per enterprise. Dev centers are logical containers to help organize your dev box resources. There's no limit on the number of dev centers you can create, but most organizations require only one. +Dev box service configuration begins with the creation of a dev center, which represents the units of organization in the enterprise. Dev centers are logical containers to help organize your dev box resources. There's no limit on the number of dev centers you can create, but most organizations require only one. -Azure Network connections enable the dev boxes to communicate with your organization's network. The network connection provides a link between the dev center and your organization's virtual networks. In the network connection, you'll define how the dev box will join your Azure Active Directory (AD). Use an Azure AD join to connect exclusively to cloud-based resources, or use a hybrid Azure AD join to connect to on-premises resources and cloud-based resources. +Azure network connections enable the dev boxes to communicate with your organization's network. The network connection provides a link between the dev center and your organization's virtual networks. In the network connection, you'll define how the dev box will join your Azure Active Directory (AD). Use an Azure AD join to connect exclusively to cloud-based resources, or use a hybrid Azure AD join to connect to on-premises resources and cloud-based resources. Dev box definitions define the configuration of the dev boxes available to your dev box users. You can use an image from the Azure Marketplace, like the *Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2* image, or you can create your own custom image, stored in an attached Azure Compute Gallery. Specify an SKU with compute and storage to complete the dev box definition. |
dev-box | Quickstart Configure Dev Box Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md | Before users can create dev boxes based on the dev box pools in a project, you m 1. Select **Access Control (IAM)** from the left menu. :::image type="content" source="./media/quickstart-configure-dev-box-service/project-permissions.png" alt-text="Screenshot showing the Project Access control page with the Access Control link highlighted.":::-+ 1. Select **Add** > **Add role assignment**. -1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). - - |Setting |Value | - ||| - |Role | DevCenter Dev Box User | - |Assign access to | User | - |Members | Your account | +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). + + | Setting | Value | + | | | + | **Role** | Select **DevCenter Dev Box User**. | + | **Assign access to** | Select **User, group, or service principal**. | + | **Members** | Select the users or groups you want to have access to the project. | ++ :::image type="content" source="media/how-to-dev-box-user/add-role-assignment-user.png" alt-text="Screenshot that shows the Add role assignment pane."::: The user will now be able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal). |
digital-twins | How To Use Power Platform Logic Apps Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-power-platform-logic-apps-connector.md | -You can integrate Azure Digital Twins into a [Microsoft Power Platform](/power-platform) or [Azure Logic Apps](../logic-apps/logic-apps-overview.md) flow, using the *Azure Digital Twins connector*. +You can integrate Azure Digital Twins into a [Microsoft Power Platform](/power-platform) or [Azure Logic Apps](../logic-apps/logic-apps-overview.md) flow, using the *Azure Digital Twins Power Platform connector*. The connector is a wrapper around the Azure Digital Twins [data plane APIs](concepts-apis-sdks.md#data-plane-apis) for twin, model and query operations, which allows the underlying service to talk to [Microsoft Power Automate](/power-automate/getting-started), [Microsoft Power Apps](/power-apps/powerapps-overview), and [Azure Logic Apps](../logic-apps/logic-apps-overview.md). The connector provides a way for users to connect their accounts and leverage a set of prebuilt actions to build their apps and workflows. -For more information about the Azure Digital Twins Power Platform connector, including a complete list of the connector's actions and their parameters, see the [Azure Digital Twins connector reference documentation](/connectors/azuredigitaltwins). +For an introduction to the connector, including a quick demo, watch the following IoT show video: ++<iframe src="https://aka.ms/docs/player?id=d6c200c2-f622-4254-b61f-d5db613bbd11" width="1080" height="530"></iframe> ++For more information about the connector, including a complete list of the connector's actions and their parameters, see the [Azure Digital Twins connector reference documentation](/connectors/azuredigitaltwins). ## Prerequisites |
dns | Dns Private Resolver Get Started Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md | description: In this quickstart, you create and test a private DNS resolver in A Previously updated : 09/27/2022 Last updated : 03/02/2023 To apply your forwarding ruleset to the second virtual network, you must create  +## Delete a virtual network link ++Later in this article, a rule is created using the private resolver inbound endpoint as a destination. This can cause a DNS resolution loop if the VNet where the resolver is provisioned is also linked to the ruleset. To fix this issue, remove the link to **myvnet**. ++1. Search for **DNS forwarding rulesets** in the Azure services list and select your ruleset (ex: **myruleset**). +2. Select **Virtual Network Links**, choose **myvnet-link**, select **Remove**, and then select **OK**. ++  + ## Configure a DNS forwarding ruleset Add or remove specific rules in your DNS forwarding ruleset as desired, such as: |
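The link removal can also be scripted, for example when applying the fix across several environments. A sketch using the Az.DnsResolver module with the placeholder names from this quickstart; verify the parameter names against your module version.

```powershell
# Remove the ruleset link to the VNet that hosts the inbound endpoint,
# so rules targeting the inbound endpoint can't loop back on themselves.
Remove-AzDnsForwardingRulesetVirtualNetworkLink `
    -ResourceGroupName "myresourcegroup" `
    -DnsForwardingRulesetName "myruleset" `
    -Name "myvnet-link"
```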
dns | Dns Private Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md | Azure DNS Private Resolver is available in the following regions: Azure DNS Private Resolver doesn't move or store customer data out of the region where the resolver is deployed. -## DNS resolver endpoints +## DNS resolver endpoints and rulesets -For more information about endpoints and rulesets, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md). +A summary of resolver endpoints and rulesets is provided in this article. For detailed information about endpoints and rulesets, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md). -### Inbound endpoints +## Inbound endpoints An inbound endpoint enables name resolution from on-premises or other private locations via an IP address that is part of your private virtual network address space. To resolve your Azure private DNS zone from on-premises, enter the IP address of the inbound endpoint into your on-premises DNS conditional forwarder. The on-premises DNS conditional forwarder must have a network connection to the virtual network. -The inbound endpoint requires a subnet in the VNet where it's provisioned. The subnet can only be delegated to **Microsoft.Network/dnsResolvers** and can't be used for other services. DNS queries received by the inbound endpoint will ingress to Azure. You can resolve names in scenarios where you have Private DNS zones, including VMs that are using auto registration, or Private Link enabled services. +The inbound endpoint requires a subnet in the VNet where it's provisioned. The subnet can only be delegated to **Microsoft.Network/dnsResolvers** and can't be used for other services. DNS queries received by the inbound endpoint ingress to Azure. You can resolve names in scenarios where you have Private DNS zones, including VMs that are using auto registration, or Private Link enabled services. -### Outbound endpoints +> [!NOTE] +> The IP address assigned to an inbound endpoint is not a static IP address that you can choose. Typically, the fifth IP address in the subnet is assigned. However, if the inbound endpoint is reprovisioned, this IP address might change. The IP address does not change unless the inbound endpoint is reprovisioned. ++## Outbound endpoints An outbound endpoint enables conditional forwarding name resolution from Azure to on-premises, other cloud providers, or external DNS servers. This endpoint requires a dedicated subnet in the VNet where it's provisioned, with no other service running in the subnet, and can only be delegated to **Microsoft.Network/dnsResolvers**. DNS queries sent to the outbound endpoint will egress from Azure. A DNS forwarding ruleset is a group of DNS forwarding rules (up to 1000) that ca ## DNS forwarding rules -A DNS forwarding rule includes one or more target DNS servers that will be used for conditional forwarding, and is represented by: +A DNS forwarding rule includes one or more target DNS servers that are used for conditional forwarding, and is represented by: - A domain name - A target IP address - A target Port and Protocol (UDP or TCP) Outbound endpoints have the following limitations: ### Other restrictions - IPv6 enabled subnets aren't supported.+- DNS private resolver does not support Azure ExpressRoute FastPath. + ## Next steps |
dns | Private Resolver Endpoints Rulesets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md | -In this article, you'll learn about components of the [Azure DNS Private Resolver](dns-private-resolver-overview.md). Inbound endpoints, outbound endpoints, and DNS forwarding rulesets are discussed. Properties and settings of these components are described, and examples are provided for how to use them. +In this article, you learn about components of the [Azure DNS Private Resolver](dns-private-resolver-overview.md). Inbound endpoints, outbound endpoints, and DNS forwarding rulesets are discussed. Properties and settings of these components are described, and examples are provided for how to use them. The architecture for Azure DNS Private Resolver is summarized in the following figure. In this example network, a DNS resolver is deployed in a hub vnet that peers with a spoke vnet. An ExpressRoute-connected on-premises network is also shown in the figure, with ## Inbound endpoints -As the name suggests, inbound endpoints will ingress to Azure. Inbound endpoints provide an IP address to forward DNS queries from on-premises and other locations outside your virtual network. DNS queries sent to the inbound endpoint are resolved using Azure DNS. Private DNS zones that are linked to the virtual network where the inbound endpoint is provisioned are resolved by the inbound endpoint. +As the name suggests, inbound endpoints ingress to Azure. Inbound endpoints provide an IP address to forward DNS queries from on-premises and other locations outside your virtual network. DNS queries sent to the inbound endpoint are resolved using Azure DNS. Private DNS zones that are linked to the virtual network where the inbound endpoint is provisioned are resolved by the inbound endpoint. The IP address associated with an inbound endpoint is always part of the private virtual network address space where the private resolver is deployed. No other resources can exist in the same subnet with the inbound endpoint. The following screenshot shows an inbound endpoint with a virtual IP address (VIP) of **10.10.0.4** inside the subnet `snet-E-inbound` provisioned within a virtual network with address space of 10.10.0.0/16.  +> [!NOTE] +> The IP address assigned to an inbound endpoint is not a static IP address that you can choose. Typically, the fifth IP address in the subnet is assigned. However, if the inbound endpoint is reprovisioned, this IP address might change. The IP address does not change unless the inbound endpoint is reprovisioned. + ## Outbound endpoints Outbound endpoints egress from Azure and can be linked to [DNS Forwarding Rulesets](#dns-forwarding-rulesets). A ruleset can't be linked to a virtual network in another region. For more infor ### Ruleset links -When you link a ruleset to a virtual network, resources within that virtual network will use the DNS forwarding rules enabled in the ruleset. The linked virtual networks are not required to peer with the virtual network where the outbound endpoint exists, but these networks can be configured as peers. This configuration is common in a hub and spoke design. In this hub and spoke scenario, the spoke vnet doesn't need to be linked to the private DNS zone in order to resolve resource records in the zone. In this case, the forwarding ruleset rule for the private zone sends queries to the hub vnet's inbound endpoint. For example: **azure.contoso.com** to **10.10.0.4**. 
+When you link a ruleset to a virtual network, resources within that virtual network will use the DNS forwarding rules enabled in the ruleset. The linked virtual networks aren't required to peer with the virtual network where the outbound endpoint exists, but these networks can be configured as peers. This configuration is common in a hub and spoke design. In this hub and spoke scenario, the spoke vnet doesn't need to be linked to the private DNS zone in order to resolve resource records in the zone. In this case, the forwarding ruleset rule for the private zone sends queries to the hub vnet's inbound endpoint. For example: **azure.contoso.com** to **10.10.0.4**. -The following screenshot shows a DNS forwarding ruleset linked to two virtual networks: a hub vnet: **myeastvnet**, and a spoke vnet: **myeastspoke**. +The following screenshot shows a DNS forwarding ruleset linked to the spoke virtual network: **myeastspoke**.  Virtual network links for DNS forwarding rulesets enable resources in other vnets to use forwarding rules when resolving DNS names. The vnet with the private resolver must also be linked from any private DNS zones for which there are ruleset rules. For example, resources in the vnet `myeastspoke` can resolve records in the private DNS zone `azure.contoso.com` if:-- The ruleset provisioned in `myeastvnet` is linked to `myeastspoke` and `myeastvnet`+- The ruleset provisioned in `myeastvnet` is linked to `myeastspoke` - A ruleset rule is configured and enabled in the linked ruleset to resolve `azure.contoso.com` using the inbound endpoint in `myeastvnet` ### Rules DNS forwarding rules (ruleset rules) have the following properties: | | | | Rule name | The name of your rule. The name must begin with a letter, and can contain only letters, numbers, underscores, and dashes. | | Domain name | The dot-terminated DNS namespace where your rule applies. The namespace must have either zero labels (for wildcard) or between 2 and 34 labels. For example, `contoso.com.` has two labels. |-| Destination IP:Port | The forwarding destination. One or more IP addresses and ports of DNS servers that will be used to resolve DNS queries in the specified namespace. | +| Destination IP:Port | The forwarding destination. One or more IP addresses and ports of DNS servers that are used to resolve DNS queries in the specified namespace. | | Rule state | The rule state: Enabled or disabled. If a rule is disabled, it's ignored. | If multiple rules are matched, the longest prefix match is used. For example, if you have the following rules: | AzurePrivate | azure.contoso.com. | 10.10.0.4:53 | Enabled | | Wildcard | . | 10.100.0.2:53 | Enabled | -A query for `secure.store.azure.contoso.com` will match the **AzurePrivate** rule for `azure.contoso.com` and also the **Contoso** rule for `contoso.com`, but the **AzurePrivate** rule takes precedence because the prefix `azure.contoso` is longer than `contoso`. +A query for `secure.store.azure.contoso.com` matches the **AzurePrivate** rule for `azure.contoso.com` and also the **Contoso** rule for `contoso.com`, but the **AzurePrivate** rule takes precedence because the prefix `azure.contoso` is longer than `contoso`. ++> [!IMPORTANT] +> If a rule is present in the ruleset that has as its destination a private resolver inbound endpoint, do not link the ruleset to the VNet where the inbound endpoint is provisioned. This configuration can cause DNS resolution loops. 
For example: In the previous scenario, no ruleset link should be added to `myeastvnet` because the inbound endpoint at `10.10.0.4` is provisioned in `myeastvnet` and a rule is present that resolves `azure.contoso.com` using the inbound endpoint. ++#### Rule processing ++- If multiple DNS servers are entered as the destination for a rule, the first IP address that is entered is used unless it doesn't respond. An exponential backoff algorithm is used to determine whether or not a destination IP address is responsive. Destination addresses that are marked as unresponsive aren't used for 30 minutes. +- Certain domains are ignored when using a wildcard rule for DNS resolution, because they are reserved for Azure services. See [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for a list of domains that are reserved. The two-label DNS names listed in this article (ex: windows.net, azure.com, azure.net, windowsazure.us) are reserved for Azure services. > [!IMPORTANT] > - You can't enter the Azure DNS IP address of 168.63.129.16 as the destination IP address for a rule. Attempting to add this IP address will output the error: **Exception while making add request for rule**. -> - Do not use the private resolver's inbound endpoint IP address as a forwarding destination for zones that are not linked to the virtual network where the private resolver is provisioned. +> - Do not use the private resolver's inbound endpoint IP address as a forwarding destination for zones that aren't linked to the virtual network where the private resolver is provisioned. ## Next steps |
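To make the longest-prefix example concrete, the following sketch creates the **AzurePrivate** rule with the Az.DnsResolver PowerShell module. The resource group and ruleset names are placeholders; the destination is the inbound endpoint IP from the example, and the trailing dot on the domain name is required.

```powershell
# Destination DNS server for the rule: the inbound endpoint at 10.10.0.4.
$target = New-AzDnsResolverTargetDnsServerObject -IPAddress "10.10.0.4" -Port 53

# Create the AzurePrivate rule for the azure.contoso.com. namespace.
New-AzDnsForwardingRulesetForwardingRule `
    -ResourceGroupName "myresourcegroup" `
    -DnsForwardingRulesetName "myruleset" `
    -Name "AzurePrivate" `
    -DomainName "azure.contoso.com." `
    -ForwardingRuleState "Enabled" `
    -TargetDnsServer $target
```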
dns | Find Unhealthy Dns Records | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/find-unhealthy-dns-records.md | The following Azure PowerShell script finds unhealthy DNS records in Azure DNS p ```azurepowershell-interactive <#- 1. Install Pre requisites Az PowerShell modules (https://learn.microsoft.com/powershell/azure/install-az-ps?view=azps-5.7.0) + 1. Install prerequisite Az PowerShell modules (https://learn.microsoft.com/powershell/azure/install-az-ps) 2. Sign in to your Azure Account using Login-AzAccount or Connect-AzAccount. 3. From an elevated PowerShell prompt, navigate to the folder where the script is saved and run the following command: .\Get-AzDNSUnhealthyRecords.ps1 -SubscriptionId <subscription id> -ZoneName <zonename> |
energy-data-services | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md | Microsoft Energy Data Services will begin billing February 15, 2023. Prices will - No upfront costs or termination fees; pay only for what you use. - No charges for storage, data transfers or compute overage during preview. +### OSDU™ Milestone Upgrade ++Azure Data Manager for Energy Preview is now compliant with the M14 OSDU™ milestone release. With this release, you can take advantage of the latest features and capabilities available in the [OSDU™ M14](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M14-Release-Notes). ### Enable Resource sharing (CORS) CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin. With this feature you can set CORS rules for each Azure Data Manager for Energy instance. When you set CORS rules for the instance, they're applied automatically across all the services and storage accounts linked with Microsoft Energy Data Services. [Learn more.](../energy-data-services/how-to-enable-CORS.md) |
event-hubs | Event Hubs Capture Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-overview.md | You can configure Capture at the event hub creation time using the [Azure portal ## How Event Hubs Capture is charged -Event Hubs Capture is metered similarly to [throughput units](event-hubs-scalability.md#throughput-units) (standard tier) or [processing units](event-hubs-scalability.md#processing-units) (in premium tier): as an hourly charge. The charge is directly proportional to the number of throughput units or processing units purchased for the namespace. As throughput units or processing units are increased and decreased, Event Hubs Capture meters increase and decrease to provide matching performance. The meters occur in tandem. For pricing details, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/). +The capture feature is included in the premium tier so there is no additional charge for that tier. For the Standard tier, the feature is charged monthly, and the charge is directly proportional to the number of throughput units or processing units purchased for the namespace. As throughput units or processing units are increased and decreased, Event Hubs Capture meters increase and decrease to provide matching performance. The meters occur in tandem. For pricing details, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/). Capture doesn't consume egress quota as it is billed separately. |
event-hubs | Event Hubs Premium Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md | For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas Event Hubs standard, premium, and dedicated tiers offer [availability zones](../availability-zones/az-overview.md#availability-zones) support with no extra cost. Using availability zones, you can run event streaming workloads in physically separate locations within each Azure region that are tolerant to local failures. > [!IMPORTANT] -> Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones). +> - Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones). +> - In certain regions, premium-tier's support for availability zones is limited even though the region supports availability zones.  ## Premium vs. dedicated tiers |
expressroute | Expressroute Troubleshooting Expressroute Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md | When your results are ready, you'll have two sets of them for the primary and se * **You see packet matches sent and received on both MSEEs**: This result indicates healthy traffic inbound to and outbound from the MSEEs on your circuit. If loss is occurring either on-premises or in Azure, it's happening downstream from the MSEEs. * **If you're testing PsPing from on-premises to Azure, received results show matches, but sent results show no matches**: This result indicates that traffic is coming in to Azure but isn't returning to on-premises. Check for return-path routing issues. For example, are you advertising the appropriate prefixes to Azure? Is a user-defined route (UDR) overriding prefixes?-* **If you're testing PsPing from Azure to on-premises, sent results show no matches, but received results show matches**: This result indicates that traffic is coming in to on-premises but isn't returning to Azure. Work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit. +* **If you're testing PsPing from Azure to on-premises, sent results show matches, but received results show no matches**: This result indicates that traffic is coming in to on-premises but isn't returning to Azure. Work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit. * **One MSEE shows no matches, but the other shows good matches**: This result indicates that one MSEE isn't receiving or passing any traffic. It might be offline (for example, BGP/ARP is down). Your test results for each MSEE device will look like the following example: For more information or help, check out the following links: [CreatePeering]: ./expressroute-howto-routing-portal-resource-manager.md [ARP]: ./expressroute-troubleshooting-arp-resource-manager.md [HA]: ./designing-for-high-availability-with-expressroute.md-[DR-Pvt]: ./designing-for-disaster-recovery-with-expressroute-privatepeering.md +[DR-Pvt]: ./designing-for-disaster-recovery-with-expressroute-privatepeering.md |
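For context, the PsPing tests interpreted above are TCP probes. A minimal sketch, assuming Sysinternals PsPing is in the current folder and using a hypothetical target address and port:

```powershell
# TCP "ping" 50 times at 1-second intervals against a host across the ExpressRoute circuit.
# Run once from on-premises toward Azure and once from Azure toward on-premises,
# while the MSEE packet counters are collecting, then compare sent/received matches.
.\psping.exe -n 50 -i 1 10.1.2.3:443
```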
external-attack-surface-management | Host Asset Filters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/host-asset-filters.md | The following filters require that the user manually enters the value with which | Filter name | Description | Value format example | Applicable operators | |-|-|-|-|-| Port State | Indicates the status of the observed port. | Open, Closed, Filtered | `Equals` `In` | +| Port State | Indicates the status of the observed port. | Open, Filtered | `Equals` `In` | | Port | Any ports detected on the asset. | 443, 80 | `Equals` `Not Equals` `In` `Not In` | | ASN | Autonomous System Number is a network identification for transporting data on the Internet between Internet routers. An ASN will have associated public IP blocks tied to it where hosts are located. | 12345 | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` | | Affected CVSS Score | Searches for assets with a CVE that matches a specific numerical score or range of scores. | Numerical (1-10) | `Equals` `Not Equals` `In` `Not In` `Greater Than or Equal To` `Less Than or Equal To` `Between` `Empty` `Not Empty` | |
external-attack-surface-management | Ip Address Asset Filters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/ip-address-asset-filters.md | The following filters require that the user manually enters the value with which | Filter name | Description | Value format | Applicable operators | |-|-||--|-| Port State | Indicates the status of the observed port. | Open, Closed, Filtered | `Equals` `In` | +| Port State | Indicates the status of the observed port. | Open, Filtered | `Equals` `In` | | Port | Any ports detected on the asset. | 443, 80 | `Equals` `Not Equals` `In` `Not In` | | ASN | Autonomous System Number is a network identification for transporting data on the Internet between Internet routers. An ASN will have associated public IP blocks tied to it where hosts are located. | 12345 | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` | | Banner | A banner is text displayed by a host that provides details such as the type and version of software running on the system or server. | We recommend using the “matches” operator to search for HTML banners by keyword (e.g. “HTTP/1.1”) | `Matches` `Does not match` `Matches in` `Does not match in` `Empty` `Not empty` |
external-attack-surface-management | Understanding Billable Assets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-billable-assets.md | Title: Understand billable assets- -description: This article describes how users will be billed for their Defender EASM resource usage, and guides them to the dashboard that displays their counts. ++description: This article describes how users are billed for their Defender EASM resource usage, and guides them to the dashboard that displays their counts. -When customers create their first Microsoft Defender External Attack Surface Management (Defender EASM) resource, they are automatically granted a 30-day free trial. Once the trial has completed, customers will automatically be charged based on their count of billable assets. The charged amount will appear on their core Azure billing, with “Defender EASM” appearing as a separate line item on their invoice. +When customers create their first Microsoft Defender External Attack Surface Management (Defender EASM) resource, they are automatically granted a 30-day free trial. Once the trial has completed, customers are automatically charged based on their count of billable assets. The charged amount appears on their core Azure billing, with “Defender EASM” appearing as a separate line item on their invoice. ## What is a billable asset? The following kinds of assets are considered billable: - Approved IP addresses -Assets are only categorized as billable if they have been placed in the Approved Inventory state. We do not charge for any other state. Additionally, duplicative host assets are NOT included in the billable asset count. +Assets are only categorized as billable if they have been placed in the Approved Inventory state. We don't charge for any other state. Additionally, duplicative host assets are NOT included in the billable asset count. For example: if www.contoso.com has resolved to 1.2.3.4 and 5.6.7.8 in the past - www.contoso.com / 5.6.7.8 -The list is then analyzed to identify duplicate entries and eliminate duplicate hosts. If a host is a subdomain of a parent host that resolves to the same IP address, we will exclude the child from the billable host count. For example, if both www.contoso.com and contoso.com resolve to 1.2.3.4, then we will exclude www.contoso.com / 1.2.3.4 from our Host Count list. +The list is then analyzed to identify duplicate entries and eliminate duplicate hosts. If a host is a subdomain of a parent host that resolves to the same IP address, we'll exclude the child from the billable host count. For example, if both www.contoso.com and contoso.com resolve to 1.2.3.4, then we'll exclude www.contoso.com / 1.2.3.4 from our Host Count list. Excluding the IP addresses that resolve to a billable resolving host, all active For an IP address to be considered active and therefore billable, it must have one of the following: -- a recently detected open port -- a recently detected SSL certificate -- recently appeared on a reputation list +- A recently detected open port +- A recently detected SSL certificate +- Recently appeared on a reputation list These values are all considered “recent” if observed within the last 30 days. For example: if server1.contoso.com has recently resolved to an IP address and i ## Viewing billable asset data -Users can view their billable assets count within their Defender EASM resource to better understand how Microsoft determines their pricing. 
This dashboard displays the total number of assets that are billable and therefore comprise your total spend. Users should expect to see counts from the last 30 days when applicable, excluding the most recent couple days that have not yet processed. +Users can view their billable assets count within their Defender EASM resource to better understand how Microsoft determines their pricing. This dashboard displays the total number of assets that are billable and therefore comprise your total spend. Users should expect to see counts from the last 30 days when applicable, excluding the most recent couple of days that haven't yet been processed. -Prospective customers accessing Defender EASM with a 30-day trial can also see these billable asset counts. Although these users are not charged until the trial has expired, they can view the billable asset dashboard to better understand how they would be billed according to the size of their attack surface. +Prospective customers accessing Defender EASM with a 30-day trial can also see these billable asset counts. Although these users aren't charged until the trial has expired, they can view the billable asset dashboard to better understand how they would be billed according to the size of their attack surface. 1. From the Defender EASM resource, select **Billable assets** from the **Manage** section of the left-hand navigation menu. - +  2. The chart displays billable asset counts over the past 30 days (if we have 30 days of data). The individual bars are segmented by asset type so users can quickly understand how their billable assets are distributed across their attack surface. Users can view the daily counts for each kind of asset by hovering their mouse over the chart. - +  3. Beneath the chart, users can view their current billable asset counts. These numbers are useful when approximating your monthly spend to best protect your organization's attack surface. - +  ## Next steps |
hdinsight | Hdinsight Hadoop Oms Log Analytics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md | If you want to disable Azure Monitor, you can do the same in this portal. ## Enable Azure Monitor using Azure PowerShell -You can enable Azure Monitor logs using the Azure PowerShell Az module [Enable-AzHDInsightAzureMonitor](/powershell/module/az.hdinsight/enable-azhdinsightazuremonitor?view=azps-6.2.1&preserve-view=true) cmdlet. +You can enable Azure Monitor logs using the Azure PowerShell Az module [Enable-AzHDInsightAzureMonitor](/powershell/module/az.hdinsight/enable-azhdinsightazuremonitor) cmdlet. ```powershell # Enter user information Get-AzHDInsightAzureMonitor ` -ClusterName $cluster ``` -To disable, use the [Disable-AzHDInsightAzureMonitor](/powershell/module/az.hdinsight/disable-azhdinsightazuremonitor?view=azps-6.2.1&preserve-view=true) cmdlet: +To disable, use the [Disable-AzHDInsightAzureMonitor](/powershell/module/az.hdinsight/disable-azhdinsightazuremonitor) cmdlet: ```powershell Disable-AzHDInsightAzureMonitor -ResourceGroupName $resourceGroup ` |
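To round out the snippets above, enabling the integration might look like the following sketch; the resource names are hypothetical, and the workspace ID and primary key come from your Log Analytics workspace:

```powershell
# Hypothetical names; get the workspace ID and primary key from your Log Analytics workspace in the Azure portal.
$resourceGroup = "myResourceGroup"
$cluster = "myCluster"

Enable-AzHDInsightAzureMonitor `
    -ResourceGroupName $resourceGroup `
    -ClusterName $cluster `
    -WorkspaceId "<workspace-id>" `
    -PrimaryKey "<workspace-primary-key>"
```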
hdinsight | Hdinsight Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md | For more information, see [HDInsight 5.1.0 version](./hdinsight-51-component-ver **Ambari CVEs** * Multiple Ambari CVEs are fixed. +> [!NOTE] +> ESP isn't supported for Kafka and HBase in this release. +> +  End of support for Azure HDInsight clusters on Spark 2.4 February 10, 2024. For more information, see [Spark versions supported in Azure HDInsight](./hdinsight-40-component-versioning.md#spark-versions-supported-in-azure-hdinsight) |
healthcare-apis | How To Create Mappings Copies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-create-mappings-copies.md | - Title: Create copies of the MedTech service device and FHIR destination mappings - Azure Health Data Services -description: This article helps users create copies of their MedTech service device and FHIR destination mappings. ----- Previously updated : 1/30/2023----# How to create copies of the MedTech service device and FHIR destination mappings --This article provides steps for creating copies of your MedTech service's device and Fast Healthcare Interoperability Resources (FHIR®) destination mappings that can be used outside of Azure. These copies can be used for editing, troubleshooting, and archiving. --> [!TIP] -> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service. --> [!NOTE] -> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for your MedTech service, include copies of your device and FHIR destination mappings to assist in the troubleshooting process. --## Device mappings copy creation process --1. Select **"MedTech service"** on the left side of your Azure Health Data Services workspace under **Services**. -- :::image type="content" source="media/iot-mappings-copies/iot-mappings-copies-select-medtech-service-button-in-workspace.png" alt-text="Screenshot of select MedTech service within the workspace." lightbox="media/iot-mappings-copies/iot-mappings-copies-select-medtech-service-button-in-workspace.png"::: --2. Select the name of the **MedTech service** that you'll be copying the device mappings from. In this example, we'll be making a copy of the device mappings from a MedTech service named **mt-azuredocsdemo**. You'll be selecting your own MedTech service as part of this process. -- :::image type="content" source="media/iot-mappings-copies/iot-mappings-copies-select-medtech-service.png" alt-text="Screenshot of select the MedTech service that you'll be making mappings copies from." lightbox="media/iot-mappings-copies/iot-mappings-copies-select-medtech-service.png"::: --3. Select the **Device mapping** button under **Settings**. -- :::image type="content" source="media/iot-mappings-copies/iot-mappings-copies-select-device-mapping.png" alt-text="Screenshot of select Device mapping button." lightbox="media/iot-mappings-copies/iot-mappings-copies-select-device-mapping.png"::: -- > [!TIP] - > This process can also be used for copying and saving the contents of the also know as the FHIR destination mappings found in **Destination** which is also under **Settings** within your MedTech service. --4. Select the contents of the device mappings (for example: press **Ctrl + a**) and do a copy operation (for example: press **Ctrl + c**). -- :::image type="content" source="media/iot-mappings-copies/iot-mappings-copies-select-device-mapping-contents.png" alt-text="Screenshot of select and copy contents of the device mappings." lightbox="media/iot-mappings-copies/iot-mappings-copies-select-device-mapping-contents.png"::: --5. 
Open an editor application like Notepad or [Microsoft Visual Studio Code](https://code.visualstudio.com/) and do a paste operation (for example: press **Ctrl + v**) and a save operation (for example: press **Ctrl + s**) to create a file copy of your MedTech service device mappings. For this example, we'll be using Notepad. -- :::image type="content" source="media/iot-mappings-copies/iot-mappings-copies-save-in-notepad.png" alt-text="Screenshot of using Notepad with the device mappings copy." lightbox="media/iot-mappings-copies/iot-mappings-copies-save-in-notepad.png"::: -- 1. Select a folder to save the file in. - 2. Select a name for your file. - 3. Leave the remaining fields at their defaults (for example: **Save as type** and **Encoding**). - 4. Select the **Save** button. --## Next steps --In this article, you learned about how to make copies of your MedTech service device and FHIR destination mappings. --To learn how to troubleshoot MedTech service errors, see --> [!div class="nextstepaction"] -> [Troubleshoot MedTech service errors](troubleshoot-errors.md) --(FHIR®) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | How To Use Mapping Debugger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md | + + Title: How to use the MedTech service Mapping debugger - Azure Health Data Services +description: This article describes how to use the MedTech service Mapping debugger. +++++ Last updated : 03/03/2023++++# How to use the Mapping debugger ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++In this article, you'll learn how to use the MedTech service Mapping debugger in the Azure portal. The Mapping debugger is a tool used for creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations for persistence in the FHIR service. This new self-service tool allows you to easily view and make inline adjustments in real time, without ever having to leave the Azure portal. ++> [!TIP] +> To learn about how the MedTech service transforms and persists device message data into the FHIR service, see [Understand the device message data transformation](understand-service.md). ++## Overview of the Mapping debugger ++1. To access the MedTech service's Mapping debugger, select **Mapping debugger** within your MedTech service on the Azure portal. For this article, we'll be using a MedTech service named **mt-azuredocsdemo**. You'll select your own MedTech service. From this screen, we can see the Mapping debugger is presenting the device and FHIR destination mappings associated with this MedTech service and has provided a **Validation** of those mappings. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-main-screen.png" alt-text="Screenshot of the Mapping debugger main screen." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-main-screen.png"::: ++2. The Mapping debugger provides convenient features to help make the management, editing, and troubleshooting of device and FHIR destination mappings easier. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-upload-and-download.png" alt-text="Screenshot of the Mapping debugger main screen with Upload and Download buttons highlighted." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-upload-and-download.png"::: ++ **Upload** - With this selection, you can upload: + - **Device mapping**: Can be edited and saved (optional) to the MedTech service. + - **FHIR destination mapping**: Can be edited and saved (optional) to the MedTech service. + - **Test device message**: Used by the validation service to produce a sample normalized measurement and FHIR Observation based on the supplied mappings. ++ **Download** - With this selection, you can download copies of: + - **Device mapping**: The device mapping currently used by your MedTech service. + - **FHIR destination mapping**: The FHIR destination mapping currently used by your MedTech service. + - **Mappings**: Both mappings currently used by your MedTech service. ++## How to troubleshoot the device and FHIR destination mappings using the Mapping debugger ++1. If there are errors with the device or FHIR destination mappings, the Mapping debugger will display the issues. 
In this example, we can see that there are error *warnings* at **Line 12** in the **Device mapping** and at **Line 20** in the **FHIR destination mapping**. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-with-errors.png" alt-text="Screenshot of the Mapping debugger with device and FHIR destination mappings warnings." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-with-errors.png"::: ++2. If you place your mouse cursor over an error warning, the Mapping debugger will provide you with more error information. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-with-error-details.png" alt-text="Screenshot of the Mapping debugger with error details for the device mappings warning." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-with-error-details.png"::: ++3. Using the suggestions provided by the Mapping debugger, we've now fixed the error warnings and are ready to select **Save** to commit our updated device and FHIR destination mappings to the MedTech service. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-save-mappings.png" alt-text="Screenshot of the Mapping debugger and the Save button." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-save-mappings.png"::: ++ > [!NOTE] + > The MedTech service only saves the mappings that have been changed/updated. For example: If you only made a change to the **device mapping**, only those changes are saved to your MedTech service and no changes would be saved to the FHIR destination mapping. This is by design, and it helps with the performance of the MedTech service. ++4. Once the device and FHIR destination mappings are successfully saved, you'll receive confirmation from **Notifications** within the Azure portal. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-successful-save.png" alt-text="Screenshot of the Mapping debugger and a successful save of the device and FHIR destination mappings." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-successful-save.png"::: ++## View a normalized message and FHIR Observation ++1. The Mapping debugger gives you the ability to view sample outputs of the normalization and FHIR transformation processes by supplying a test device message. Select **Upload** and **Test device message**. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-select-upload-and-test-device-message.png" alt-text="Screenshot of the Mapping debugger and test device message box." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-select-upload-and-test-device-message.png"::: ++2. The **Select a file** box will open. For this example, we'll select **Enter manually**. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-select-test-device-message-manual.png" alt-text="Screenshot of the Mapping debugger and Select a file box." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-select-test-device-message-manual.png"::: ++3. Copy/paste or type the test device message into the **Upload test device message** box. The **Validation** box may still be *red* if either of the mappings has an error/warning. As long as **No errors** is green, the test device message is valid. Select the **X** in the right corner to close the **Upload test device message** box. 
++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-input-test-device-message.png" alt-text="Screenshot of the Enter manually box with a validated test device message in the box." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-input-test-device-message.png"::: ++4. Once a conforming test device message is uploaded, the **View normalized message** and **View FHIR observation** buttons will become available so that you may view the sample outputs of the normalization and FHIR transformation processes. These sample outputs can be used to validate that your device and FHIR destination mappings are properly configured for processing events according to your requirements. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-normalized-and-FHIR-selections-available.png" alt-text="Screenshot of the View normalized message and View FHIR observation buttons available." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-normalized-and-FHIR-selections-available.png"::: ++5. Use the **X** in the corner to close the **Normalized message** and **FHIR observation** boxes. ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-normalized-message.png" alt-text="Screenshot of the normalized message." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-normalized-message.png"::: ++ :::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-fhir-observation.png" alt-text="Screenshot of the FHIR observation available." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-fhir-observation.png"::: ++## Next steps ++In this article, you learned how to use the Mapping debugger to edit and troubleshoot the MedTech service device and FHIR destination mappings and to view the normalized message and FHIR Observation produced from a test device message. ++To learn how to troubleshoot MedTech service deployment errors, see ++> [!div class="nextstepaction"] +> [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md) ++To learn how to troubleshoot errors using the MedTech service logs, see ++> [!div class="nextstepaction"] +> [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. + |
healthcare-apis | Understand Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md | -# Understand the MedTech service device message data transformation +# Understand the device message data transformation > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. |
iot-edge | How To Vs Code Develop Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md | description: Use Visual Studio Code to develop, build, and debug a module for Az Previously updated : 9/30/2022 Last updated : 3/2/2023 To build and deploy your module image, you need Docker to build the module image ::: zone pivot="iotedge-dev-cli" -- Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) in order to set up your local development environment to debug, run, and test your IoT Edge solution. If you haven't already, install [Python (3.6/3.7)](https://www.python.org/downloads/) and Pip3 and then install the IoT Edge Dev Tool (iotedgedev) with the following command in your terminal. +- Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) with the following command to enable you to debug, run, and test your IoT Edge solution. [Python (3.6/3.7)](https://www.python.org/downloads/) and [Pip3](https://pip.pypa.io/en/stable/installation/) are required. - ```cmd + ```bash pip3 install iotedgedev ``` > [!NOTE] >- > If you have multiple Python including pre-installed Python 2.7 (for example, on Ubuntu or macOS), make sure you are using `pip3` to install *IoT Edge Dev Tool (iotedgedev)*. + > If you have multiple Python versions, including pre-installed Python 2.7 (for example, on Ubuntu or macOS), make sure you use `pip3` to install *IoT Edge Dev Tool (iotedgedev)*. > > For more information about setting up your development machine, see [iotedgedev development setup](https://github.com/Azure/iotedgedev/blob/main/docs/environment-setup/manual-dev-machine-setup.md). Install prerequisites specific to the language you're developing in: - Install [Node.js](https://nodejs.org). Install [Yeoman](https://www.npmjs.com/package/yo) and the [Azure IoT Edge Node.js Module Generator](https://www.npmjs.com/package/generator-azure-iot-edge-module). # [Python](#tab/python) -- Install [Python](https://www.python.org/downloads/) and [Pip](https://pip.pypa.io/en/stable/installation/)-- Install [Python Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)-- Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) to debug, run, and test your IoT Edge solution. You can alternatively install the Azure IoT Edge Dev Tool using the CLI:- - ```cmd - pip3 install iotedgedev - ``` - > [!NOTE] - > - > If you have multiple Python including pre-installed Python 2.7 (for example, on Ubuntu or macOS), make sure you are using `pip3` to install *IoT Edge Dev Tool (iotedgedev)*. For more information setting up your development machine, see [iotedgedev development setup](https://github.com/Azure/iotedgedev/blob/main/docs/environment-setup/manual-dev-machine-setup.md). ++Install [Python](https://www.python.org/downloads/) and [Pip](https://pip.pypa.io/en/stable/installation/). ++Install the [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python). -To test your module on a device: +To test your module on a device, you need: - An active IoT Hub with at least one IoT Edge device. - A physical IoT Edge device or a virtual device. To create a virtual device in Azure, follow the steps in the quickstart for [Linux](quickstart-linux.md) or [Windows](quickstart.md). 
The following steps show you how to create an IoT Edge module in your preferred The [IoT Edge Dev Tool](https://github.com/Azure/iotedgedev) simplifies Azure IoT Edge development to simple commands driven by environment variables. It gets you started with IoT Edge development with the IoT Edge Dev Container and IoT Edge solution scaffolding that contains a default module and all the required configuration files. -1. Create a directory for your solution. +1. Create a directory for your solution with the filepath of your choice. Change into your `iotedgesolution` directory. ```bash mkdir c:\dev\iotedgesolution ``` -1. Use the **iotedgedev solution init** command to create a solution and set up your Azure IoT Hub. Use the following command to create an IoT Edge solution for a specified development language. +1. Use the **iotedgedev solution init** command to create a solution and set up your Azure IoT Hub in the development language of your choice. # [C\#](#tab/csharp) The *iotedgedev solution init* script prompts you to complete several steps incl * Choose or create an Azure IoT Hub * Choose or create an Azure IoT Edge device -After solution creation, there are four items within the solution: +After solution creation, these main files are in the solution: -- A **.vscode** folder contains configuration file *launch.json*.-- A **modules** folder has subfolders for each module. Within the subfolder for each module, the *module.json* file controls how modules are built and deployed.-- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost* by default. If Azure Container Registry is your registry, set an Azure Container Registry username and password. For example,+- A **.vscode** folder contains configuration file launch.json. +- A **modules** folder that has subfolders for each module. Within the subfolder for each module, the module.json file controls how modules are built and deployed. +- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost:5000* by default. If Azure Container Registry is your registry, set an Azure Container Registry username and password. Get these values from your container registry's **Settings** > **Access keys** menu in the Azure portal. The **CONTAINER_REGISTRY_SERVER** is the **Login server** of your registry. ++ For example: ```env CONTAINER_REGISTRY_SERVER="myacr.azurecr.io" After solution creation, there are four items within the solution: > [!NOTE] > The environment file is only created if you provide an image repository for the module. If you accepted the localhost defaults to test and debug locally, then you don't need to declare environment variables. -- Two module deployment files named **deployment.template.json** and **deployment.debug.template** list the modules to deploy to your device. By default, the list includes the IoT Edge system modules and two sample modules:+- Two module deployment files named **deployment.template.json** and **deployment.debug.template.json** list the modules to deploy to your device. By default, the list includes the IoT Edge system modules (edgeAgent and edgeHub) and sample modules such as: - **filtermodule** is a sample module that implements a simple filter function. - **SimulatedTemperatureSensor** module that simulates data you can use for testing. 
For more information about how deployment manifests work, see [Learn how to use deployment manifests to deploy modules and establish routes](module-composition.md). For more information on how the simulated temperature module works, see the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).+ + > [!NOTE] + > The exact modules installed may depend on your language of choice. ::: zone-end Use Visual Studio Code and the [Azure IoT Edge](https://marketplace.visualstudio 1. Select **View** > **Command Palette**. 1. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge Solution**. - :::image type="content" source="./media/how-to-develop-csharp-module/new-solution.png" alt-text="Screenshot of how to run a new IoT Edge solution."::: + :::image type="content" source="./media/how-to-develop-csharp-module/new-solution.png" alt-text="Screenshot of how to run a new IoT Edge solution." lightbox="./media/how-to-develop-csharp-module/new-solution.png"::: 1. Browse to the folder where you want to create the new solution and then select **Select folder**. 1. Enter a name for your solution. 1. Select a module template for your preferred development language to be the first module in the solution. 1. Enter a name for your module. Choose a name that's unique within your container registry.-1. Provide the name of the module's image repository. Visual Studio Code autopopulates the module name with **localhost:5000/<your module name\>**. Replace it with your own registry information. Use **localhost** if you use a local Docker registry for testing. If you use Azure Container Registry, then use sign in server from your registry's settings. The sign-in server looks like **_\<registry name\>_.azurecr.io**. Only replace the **localhost:5000** part of the string so that the final result looks like **\<*registry name*\>.azurecr.io/_\<your module name\>_**. +1. Provide the name of the module's image repository. Visual Studio Code autopopulates the module name with **localhost:5000/<your module name\>**. Replace it with your own registry information. Use **localhost** if you use a local Docker registry for testing. If you use Azure Container Registry, then use **Login server** from your registry's settings. The sign-in server looks like **_\<registry name\>_.azurecr.io**. Only replace the **localhost:5000** part of the string so that the final result looks like **\<*registry name*\>.azurecr.io/_\<your module name\>_**. - :::image type="content" source="./media/how-to-develop-csharp-module/repository.png" alt-text="Screenshot of how to provide a Docker image repository."::: + :::image type="content" source="./media/how-to-develop-csharp-module/repository.png" alt-text="Screenshot of how to provide a Docker image repository." lightbox="./media/how-to-develop-csharp-module/repository.png"::: Visual Studio Code takes the information you provided, creates an IoT Edge solution, and then loads it in a new window. There are four items within the solution: - A **.vscode** folder contains debug configurations.-- A **modules** folder has subfolders for each module. Within the folder for each module, there's a file called **module.json** that controls how modules are built and deployed. You need to modify this file to change the module deployment container registry from a localhost to a remote registry. At this point, you only have one module. 
But you can add more if needed-- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost* by default. If Azure Container Registry is your registry, set an Azure Container Registry username and password. For example,-- ```env - CONTAINER_REGISTRY_SERVER="myacr.azurecr.io" - CONTAINER_REGISTRY_USERNAME="myacr" - CONTAINER_REGISTRY_PASSWORD="<your_acr_password>" - ``` -- In production scenarios, you should use service principals to provide access to your container registry instead of the *.env* file. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry). -- > [!NOTE] - > The environment file is only created if you provide an image repository for the module. If you accepted the localhost defaults to test and debug locally, then you don't need to declare environment variables. --- Two module deployment files named **deployment.template.json** and **deployment.debug.template** list the modules to deploy to your device. By default, the list includes the IoT Edge system modules and sample modules including the **SimulatedTemperatureSensor** module that simulates data you can use for testing. For more information about how deployment manifests work, see [Learn how to use deployment manifests to deploy modules and establish routes](module-composition.md). For more information on how the simulated temperature module works, see the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).---### Set IoT Edge runtime version +- A **modules** folder has subfolders for each module. Within the folder for each module, there's a file called **module.json** that controls how modules are built and deployed. You need to modify this file to change the module deployment container registry from a localhost to a remote registry. At this point, you only have one module. But you can add more if needed. +- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost* by default. If Azure Container Registry is your registry, set an Azure Container Registry username and password. Get these values from your container registry's **Settings** > **Access keys** menu in the Azure portal. The **CONTAINER_REGISTRY_SERVER** is the **Login server** of your registry. -The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. ---1. Select **View** > **Command Palette**. -1. In the command palette, enter and run the command **Azure IoT Edge: Set default IoT Edge runtime version**. -1. Choose the runtime version that your IoT Edge devices are running from the list. -- Currently, the extension doesn't include a selection for the latest runtime versions. If you want to set the runtime version higher than 1.2, open *deployment.debug.template.json* deployment manifest file. Change the runtime version for the system runtime module images *edgeAgent* and *edgeHub*. For example, if you want to use the IoT Edge runtime version 1.4, change the following lines in the deployment manifest file: -- ```json - ... - "systemModules": { - "edgeAgent": { - ... - "image": "mcr.microsoft.com/azureiotedge-agent:1.4", - ... - "edgeHub": { - ... - "image": "mcr.microsoft.com/azureiotedge-hub:1.4", - ... - ``` +For example: -1. 
After you select a new runtime version, your deployment manifest is dynamically updated to reflect the change to the runtime module images. + ```env + CONTAINER_REGISTRY_SERVER="myacr.azurecr.io" + CONTAINER_REGISTRY_USERNAME="myacr" + CONTAINER_REGISTRY_PASSWORD="<my_acr_password>" + ``` + In production scenarios, you should use service principals to provide access to your container registry instead of the *.env* file. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry). + > [!NOTE] + > The environment file is only created if you provide an image repository for the module. If you accepted the localhost defaults to test and debug locally, then you don't need to declare environment variables. -1. In Visual Studio Code, open *deployment.debug.template.json* deployment manifest file. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) is a JSON document that describes the modules to be configured on the targeted IoT Edge device. -1. Change the runtime version for the system runtime module images *edgeAgent* and *edgeHub*. For example, if you want to use the IoT Edge runtime version 1.4, change the following lines in the deployment manifest file: +- Two module deployment files named **deployment.template.json** and **deployment.debug.template** list the modules to deploy to your device. By default, the list includes the IoT Edge system modules and sample modules including the **SimulatedTemperatureSensor** module that simulates data you can use for testing. - ```json - ... - "systemModules": { - "edgeAgent": { - ... - "image": "mcr.microsoft.com/azureiotedge-agent:1.4", - ... - "edgeHub": { - ... - "image": "mcr.microsoft.com/azureiotedge-hub:1.4", - ... - ``` + For more information about deployment manifests, see [Learn how to use deployment manifests to deploy modules and establish routes](module-composition.md). For more information about the simulated temperature module, see the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor). ::: zone-end ## Add more modules -To add more modules to your solution, change to *module* directory. +To add more modules to your solution, change to the *modules* directory and add them there. ```bash cd modules Run the command **Azure IoT Edge: Add IoT Edge Module** from the command palette ::: zone pivot="iotedge-dev-cli" +Install the modules using your language of choice. + # [C\#](#tab/csharp) 1. Install the [.NET IoT Edge C# template](https://github.com/azure/dotnet-template-azure-iot-edge-module/). Run the command **Azure IoT Edge: Add IoT Edge Module** from the command palette # [Python](#tab/python) 1. Create a new directory folder in the *modules* folder and change directory to the new folder. For example, `mkdir pythonmodule` then `cd pythonmodule`.-1. Download a ZIP of the contents of the [Cookiecutter Template for Azure IoT Edge Python Module](https://github.com/azure/cookiecutter-azure-iot-edge-module) from GitHub. -1. Extract the contents of the `{{cookiecutter.module_name}}` folder in the ZIP file then copy the files into the new module directory. -1. Update *module.json* file with correct repository. For example, if you want to use the repository defined in your environment variables, use `${CONTAINER_REGISTRY_SERVER}/cmodule`. +1. 
Get the `cookiecutter-azure-iot-edge-module` from GitHub, using one of these methods: + * From another Bash terminal instance, clone the repository to your desktop with the command: + ```URL + git clone https://github.com/Azure/cookiecutter-azure-iot-edge-module.git + ``` + * Download a ZIP and extract the contents of the [Cookiecutter Template for Azure IoT Edge Python Module](https://github.com/azure/cookiecutter-azure-iot-edge-module). +1. Copy the contents in the `{{cookiecutter.module_name}}` folder then add these files into your new module directory. In this tutorial, we call this new directory **pythonmodule**. ++ :::image type="content" source="media/how-to-vs-code-develop-module/modules-folder-structure.png" alt-text="Screenshot of the expected folder structure for your I o T Edge solution."::: ++1. Update the *module.json* file with the correct repository. For example, if you want to use the repository defined in your environment variables, use `${CONTAINER_REGISTRY_SERVER}/cmodule`, instead of `{{cookiecutter.image_repository}}`. modules/*<your module name>*/**app.js** # [Python](#tab/python) modules/*<your module name>*/**main.py** + The sample modules allow you to build the solution, push to your container registry, and deploy to a device. This process lets you start testing without modifying any code. The sample module takes input from a source (in this case, the *SimulatedTemperatureSensor* module that simulates data) and pipes it to IoT Hub. Debugging a module without a container isn't available when using *C* or *Python In the Visual Studio Code integrated terminal, change the directory to the ***<your module name>*** folder, and then run the following command to build .NET Core application. -```cmd +```bash dotnet build ``` Navigate to the Visual Studio Code Debug view by selecting the debug icon from t In the Visual Studio Code integrated terminal, change the directory to the ***<your module name>*** folder, and then run the following command to install Node packages -```cmd +```bash npm install ``` In each module folder, there are several Docker files for different container ty When you debug modules using this method, your modules are running on top of the IoT Edge runtime. The IoT Edge device and your Visual Studio Code can be on the same machine, or more typically, Visual Studio Code is on the development machine and the IoT Edge runtime and modules are running on another physical machine. In order to debug from Visual Studio Code, you must: - Set up your IoT Edge device, build your IoT Edge modules with the **.debug** Dockerfile, and then deploy to the IoT Edge device.-- Update the `launch.json` so that Visual Studio Code can attach to the process in the container on the remote machine. You can find this file in the `.vscode` folder in your workspace and updates each time you add a new module that supports debugging.+- Update `launch.json` so that Visual Studio Code can attach to the process in a container on the remote machine. You can find this file in the `.vscode` folder in your workspace, and it updates each time you add a new module that supports debugging. - Use Remote SSH debugging to attach to the container on the remote machine. ### Build and deploy your module to an IoT Edge device -In Visual Studio Code, open the *deployment.debug.template.json* deployment manifest file. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) is a JSON document that describes the modules to be configured on the targeted IoT Edge device. 
Before deployment, you need to update your Azure Container Registry credentials and your module images with the proper `createOptions` values. For more information about createOption values, see [How to configure container create options for IoT Edge modules](how-to-use-create-options.md). +In Visual Studio Code, open the *deployment.debug.template.json* deployment manifest file. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) describes the modules to be configured on the targeted IoT Edge device. Before deployment, you need to update your Azure Container Registry credentials and your module images with the proper `createOptions` values. For more information about createOption values, see [How to configure container create options for IoT Edge modules](how-to-use-create-options.md). ::: zone pivot="iotedge-dev-cli" -1. If you're using an Azure Container Registry to store your module image, add your credentials to **deployment.debug.template.json** in the *edgeAgent* settings. For example: +1. If you're using an Azure Container Registry to store your module image, add your credentials to the *edgeAgent* > *settings* > *registryCredentials* section in **deployment.debug.template.json**. Replace **myacr** with your own registry name in both places and provide your password and **Login server** address. For example: ```json "modulesContent": { In Visual Studio Code, open the *deployment.debug.template.json* deployment mani ... ``` -1. Add or replace the following stringified content to the *createOptions* value for each system and custom module listed. Change the values if necessary. +1. Add or replace the following stringified content to the *createOptions* value for each system (edgeHub and edgeAgent) and custom module (for example, tempSensor) listed. Change the values if necessary. ```json "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}" In Visual Studio Code, open the *deployment.debug.template.json* deployment mani 1. In the Visual Studio Code command palette, run the command **Azure IoT Edge: Build and Push IoT Edge solution**. 1. Select the `deployment.debug.template.json` file for your solution.-1. In the **Azure IoT Hub Devices** section of the Visual Studio Code Explorer view, right-click the IoT Edge device name for deployment and then choose **Create Deployment for Single Device**. +1. In the **Azure IoT Hub** > **Devices** section of the Visual Studio Code Explorer view, right-click the IoT Edge device name for deployment and then choose **Create Deployment for Single Device**. > [!TIP] > To confirm that the device you've chosen is an IoT Edge device, select it to expand the list of modules and verify the presence of **$edgeHub** and **$edgeAgent**. Every IoT Edge device includes these two modules. 1. Navigate to your solution's **config** folder, select the `deployment.debug.amd64.json` file, and then select **Select Edge Deployment Manifest**. -You can check your container status by running the `docker ps` command in the terminal. If your Visual Studio Code and IoT Edge runtime are running on the same machine, you can also check the status in the Visual Studio Code Docker view. +You can check your container status from your device or virtual machine by running the `docker ps` command in a terminal. You should see your container listed after running the command. 
If your Visual Studio Code and IoT Edge runtime are running on the same machine, you can also check the status in the Visual Studio Code Docker view. > [!IMPORTANT] > If you're using a private registry like Azure Container Registry (ACR) for your images, you may need to authenticate to push images. Use `docker login <ACR login server>` or `az acr login --name <ACR name>` to authenticate. You can check your container status by running the `docker ps` command in the te #### Build module Docker image -Use the module's Dockerfile to build the module Docker image. +Use the module's Dockerfile to [build](https://docs.docker.com/engine/reference/commandline/build/) the module Docker image. ```bash docker build --rm -f "<DockerFilePath>" -t <ImageNameAndTag> "<ContextPath>" docker build --rm -f "./modules/filtermodule/Dockerfile.amd64.debug" -t myacr.az #### Push module Docker image -Push your module image to the local registry or a container registry. +[Push](https://docs.docker.com/engine/reference/commandline/push/) your module image to the local registry or a container registry. `docker push <ImageName>` The Docker and Moby engines support SSH connections to containers allowing you t 1. In Visual Studio Code, set breakpoints in your custom module. 1. When a breakpoint is hit, you can inspect variables, step through code, and debug your module. - :::image type="content" source="media/how-to-vs-code-develop-module/vs-code-breakpoint.png" alt-text="Screenshot of Visual Studio Code attached to a Docker container on a remote device paused at a breakpoint."::: + :::image type="content" source="media/how-to-vs-code-develop-module/vs-code-breakpoint.png" alt-text="Screenshot of Visual Studio Code attached to a Docker container on a remote device paused at a breakpoint." lightbox="media/how-to-vs-code-develop-module/vs-code-breakpoint.png"::: > [!NOTE]-> The preceding example shows how to debug IoT Edge modules on remote containers. It added a remote Docker context and changes to the Docker privileges on the remote device. After you finish debugging your modules, set your Docker context to *default* and remove privileges from your user account. +> The preceding example shows how to debug IoT Edge modules on remote containers. The example adds a remote Docker context and changes to the Docker privileges on the remote device. After you finish debugging your modules, set your Docker context to *default* and remove privileges from your user account. See this [IoT Developer blog entry](https://devblogs.microsoft.com/iotdev/easily-build-and-debug-iot-edge-modules-on-your-remote-device-with-azure-iot-edge-for-vs-code-1-9-0/) for an example using a Raspberry Pi device. See this [IoT Developer blog entry](https://devblogs.microsoft.com/iotdev/easily After you've built your module, learn how to [deploy Azure IoT Edge modules from Visual Studio Code](how-to-deploy-modules-vscode.md). -To develop modules for your IoT Edge devices, [Understand and use Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md). +To develop modules for your IoT Edge devices, understand and use [Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md). |
iot-hub | Monitor Device Connection State | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-device-connection-state.md | Title: Monitor device status - Azure IoT Hub -description: Use Event Grid or heartbeat patterns to monitor IoT Hub device connection states. + Title: Monitor device status ++description: Use Event Grid or the device heartbeat pattern to monitor the connection states of Azure IoT Hub devices. Using Event Grid to monitor your device status comes with the following limitati If any of these limitations affect your ability to use Event Grid for device status monitoring, then you should consider building a custom device heartbeat pattern instead. -## Device heartbeat +## Device heartbeat pattern If you need to know the connection state of your devices but the limitations of Event Grid are too restricting for your solution, you can implement the *heartbeat pattern*. In the heartbeat pattern, the device sends device-to-cloud messages at least once every fixed amount of time (for example, at least once every hour). Even if a device doesn't have any data to send, it still sends an empty device-to-cloud message, usually with a property that identifies it as a heartbeat message. On the service side, the solution maintains a map with the last heartbeat received for each device. If the solution doesn't receive a heartbeat message within the expected time from the device, it assumes that there's a problem with the device. -> [!NOTE] -> If an IoT solution uses the connection state solely to determine whether to send cloud-to-device messages, and messages are not broadcast to large sets of devices, consider using the simpler *short expiry time* pattern. This pattern achieves the same result as maintaining a device connection state registry using the heartbeat pattern, while being more efficient. If you request message acknowledgements, IoT Hub can notify you about which devices are able to receive messages and which are not. - ### Device heartbeat limitations Since heartbeat messages are implemented as device-to-cloud messages, they count against your [IoT Hub message quota and throttling limits](iot-hub-devguide-quotas-throttling.md). +### Short expiry time pattern ++If an IoT solution uses the connection state solely to determine whether to send cloud-to-device messages to a device, and messages aren't broadcast to large sets of devices, consider using the *short expiry time pattern* as a simpler alternative to the heartbeat pattern. The short expiry time pattern is a way to determine whether to send cloud-to-device messages by sending messages with a short message expiration time and requesting message acknowledgments from the devices. ++For more information, see [Message expiration (time to live)](./iot-hub-devguide-messages-c2d.md#message-expiration-time-to-live). + ## Other monitoring options A more complex implementation could include the information from [Azure Monitor](../azure-monitor/index.yml) and [Azure Resource Health](../service-health/resource-health-overview.md) to identify devices that are trying to connect or communicate but failing. Azure Monitor dashboards are helpful for seeing the aggregate health of your devices, while Event Grid and heartbeat patterns make it easier to respond to individual device outages. |
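The service-side bookkeeping that the heartbeat pattern describes reduces to a map from device ID to last-seen time. A minimal, library-free PowerShell sketch (all names are illustrative, not part of any Azure SDK):

```powershell
# Minimal sketch of the heartbeat pattern's service-side map; names are illustrative.
$lastHeartbeat = @{}                         # device ID -> UTC time of the last heartbeat
$timeout = [TimeSpan]::FromHours(1)          # devices are expected to send at least hourly

function Update-Heartbeat([string]$DeviceId) {
    # Call this for every device-to-cloud message, including empty heartbeat messages.
    $lastHeartbeat[$DeviceId] = [DateTime]::UtcNow
}

function Get-StaleDevice {
    # Returns the IDs of devices whose last heartbeat is older than the timeout.
    $now = [DateTime]::UtcNow
    $lastHeartbeat.GetEnumerator() |
        Where-Object { ($now - $_.Value) -gt $timeout } |
        ForEach-Object { $_.Key }
}
```

Calling `Update-Heartbeat` on every device-to-cloud message and polling `Get-StaleDevice` on a timer yields the "assume there's a problem with the device" signal the pattern describes.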
lab-services | Approaches For Custom Image Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/approaches-for-custom-image-creation.md | Here are a few reasons why you might want to use this approach: - You can create either [generalized or specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) images to use in your labs. Otherwise, if you use a [lab's template VM](how-to-use-shared-image-gallery.md) to export an image, the image is always specialized. - You can access resources that exist within your on-premises environment. For example, you might have large installation files in your on-premises environment that are too time consuming to copy to a lab's template VM.-- You can upload images created by using other tools, such as [Microsoft Endpoint Configuration Manager](/mem/configmgr/core/understand/introduction), so that you don't have to manually set up an image by using a lab's template VM.+- You can upload images created by using other tools, such as [Microsoft Configuration Manager](/mem/configmgr/core/understand/introduction), so that you don't have to manually set up an image by using a lab's template VM. Bringing a custom image from a VHD is the most advanced approach because you must ensure that the image is set up properly so that it works within Azure. As a result, IT departments are typically responsible for creating custom images from VHDs. |
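Bringing a custom image from a VHD usually starts with uploading the VHD to an Azure storage account as a page blob. A minimal sketch; the storage account, container, and file names are placeholders, and it assumes you're already signed in with permissions on the storage account:

```bash
# Create a container to hold VHDs (assumes the storage account exists).
az storage container create --account-name mystorageaccount --name vhds

# VHDs must be uploaded as page blobs, not block blobs.
az storage blob upload \
  --account-name mystorageaccount \
  --container-name vhds \
  --file ./my-custom-image.vhd \
  --name my-custom-image.vhd \
  --type page
```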
lab-services | Class Type Adobe Creative Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-adobe-creative-cloud.md | Last updated 02/17/2023 # Set up a lab for Adobe Creative Cloud in Azure Lab Services + In this article, you learn how to set up a class that uses Adobe Creative Cloud. [Adobe Creative Cloud](https://www.adobe.com/creativecloud.html) is a collection of desktop applications and web services used for photography, design, video, web, user experience (UX), and more. Universities and K-12 schools use Creative Cloud in digital arts and media classes. Some of Creative Cloud's media processes might require more computational and visualization (GPU) power than a typical tablet, laptop, or workstation supports. With Azure Lab Services, you have the flexibility to choose from various virtual machine (VM) sizes, including GPU sizes. ## Create Cloud licensing in a lab VM |
lab-services | Class Type Autodesk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-autodesk.md | Title: Set up a lab with Autodesk using Azure Lab Services -description: Learn how to set up labs to teach engineering classes with Autodesk. -- Previously updated : 02/02/2022+ Title: Set up a lab with Autodesk ++description: Learn how to set up a lab in Azure Lab Services to teach engineering classes with Autodesk. + +++ Last updated : 03/03/2023 -# Set up labs for Autodesk +# Set up a lab to teach engineering classes with Autodesk [!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)] -This article describes how to set up Autodesk Inventor and Autodesk Revit software for engineering classes. +This article describes how to set up Autodesk Inventor and Autodesk Revit software for engineering classes in Azure Lab Services. - [Inventor computer-aided design (CAD)](https://www.autodesk.com/products/inventor/new-features) and [computer-aided manufacturing (CAM)](https://www.autodesk.com/products/inventor-cam/overview) provide 3D modeling and are used in engineering design. - [Revit](https://www.autodesk.com/products/revit/overview) is used in architecture design for 3D building information modeling (BIM). Autodesk is commonly used in both universities and K-12 schools. For example, i ## License server -You'll need to access a license server if you plan to use the Autodesk network licensing model. Read Autodesk's article on [Network License Administration](https://knowledge.autodesk.com/customer-service/network-license-administration/network-deployment/preparing-for-deployment/determining-installation-type) for more information. +You need to access a license server if you plan to use the Autodesk network licensing model. Read Autodesk's article on [Network License Administration](https://knowledge.autodesk.com/customer-service/network-license-administration/network-deployment/preparing-for-deployment/determining-installation-type) for more information. -To use network licensing with Autodesk software, [AutoDesk provides detailed steps](https://knowledge.autodesk.com/customer-service/network-license-administration/install-and-configure-network-license) to install Autodesk Network License Manager on your license server. This license server is ordinarily located in either your on-premises network or hosted on an Azure virtual machine (VM) within in Azure virtual network. +To use network licensing with Autodesk software, [AutoDesk provides detailed steps](https://knowledge.autodesk.com/customer-service/network-license-administration/install-and-configure-network-license) to install Autodesk Network License Manager on your license server. You can host the license server in your on-premises network, or on an Azure virtual machine (VM) within an Azure virtual network. -After your license server is set up, you'll need to enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plan. +After setting up your license server, you need to enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when you create the lab plan. -Autodesk-generated license files embed the MAC address of the license server. If you decide to host your license server by using an Azure VM, it's important to make sure that your license server's MAC address doesn't change. 
If the MAC address changes, you'll need to regenerate your licensing files. To prevent your MAC address from changing: +Autodesk-generated license files embed the MAC address of the license server. If you decide to host your license server by using an Azure VM, it's important to make sure that your license server's MAC address doesn't change. If the MAC address changes, you need to regenerate your licensing files. To prevent your MAC address from changing: - [Set a static private IP and MAC address](how-to-create-a-lab-with-shared-resource.md#tips) for the Azure VM that hosts your license server.-- Be sure to create both your lab plan and the license server's virtual network in the same region. Also, verify the region has sufficient VM capacity so that you don't have to move these resources to a new region later.+- Create both your lab plan and the license server's virtual network in the same region. Also, verify that the region has sufficient VM capacity to avoid having to move these resources to another region later. For more information, see [Set up a license server as a shared resource](./how-to-create-a-lab-with-shared-resource.md). > [!IMPORTANT]-> [Advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) must be enabled during the creation of your lab plan. It can not be added later. +> You must enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plan. You can't enable advanced networking for an existing lab plan. ## Lab configuration For more information, see [Set up a license server as a shared resource](./how-t ### Lab plan settings -Enable your lab plan settings as described in the following table. For more information about how to enable Azure Marketplace images, see [Specify the Azure Marketplace images available to lab creators](./specify-marketplace-images.md). +This lab uses a Windows 10 Pro Azure Marketplace image as the base VM image. You first need to enable this image in your lab plan. Lab creators can then select the image as the base image for their lab. -| Lab plan setting | Instructions | -| - | | -|Marketplace image| Enable the Windows 10 Pro or Windows 10 Pro N image, if not done already.| +Follow these steps to [make these Azure Marketplace images available to lab creators](specify-marketplace-images.md). Select one of the **Windows 10** Azure Marketplace images. ### Lab settings +1. Create a lab for your lab plan: ++ [!INCLUDE [create lab](./includes/lab-services-class-type-lab.md)] Use the following settings when creating the lab. -| Lab setting | Value and description | -| | | -| Virtual Machine Size | **Small GPU (Visualization)**. Best suited for remote visualization, streaming, gaming, and encoding with frameworks such as OpenGL and DirectX. | + | Lab setting | Value and description | + | | | + | Virtual Machine Size | **Small GPU (Visualization)**. Best suited for remote visualization, streaming, gaming, and encoding with frameworks such as OpenGL and DirectX. | + | Virtual Machine Image | Windows 10 Pro | -> [!WARNING] -> The **Small GPU (Visualization)** virtual machine size is configured to enable a high-performing graphics experience and meets [Adobe's system requirements for each application](https://helpx.adobe.com/creative-cloud/system-requirements.html). Make sure to choose **Small GPU (Visualization)** not **Small GPU (Compute)**. 
For more information about this virtual machine size, see the article on [how to set up a lab with GPUs](./how-to-setup-lab-gpu.md). +1. When you create a lab with the **Small GPU (Visualization)** size, follow these steps to [set up a lab with GPUs](./how-to-setup-lab-gpu.md). ++ > [!WARNING] + > The **Small GPU (Visualization)** virtual machine size is configured to enable a high-performing graphics experience and meets [Adobe's system requirements for each application](https://helpx.adobe.com/creative-cloud/system-requirements.html). Make sure to choose Small GPU (Visualization), not Small GPU (Compute). ## Template machine configuration [!INCLUDE [configure template vm](./includes/lab-services-class-type-template-vm.md)] -1. Start the template VM and connect to the machine. +1. Start the template VM and connect using RDP. ++1. Download and install Inventor and Revit using [instructions from AutoDesk](https://knowledge.autodesk.com/customer-service/download-install/install-software). -1. Download and install Inventor and Revit using [instructions from AutoDesk](https://knowledge.autodesk.com/customer-service/download-install/install-software). When prompted, specify the computer name of your license server. + When prompted, specify the computer name of your license server. -1. Finally, [publish the template VM](how-to-create-manage-template.md#publish-the-template-vm) to create the students' VMs. +1. Once the template VM is set up, [publish the template VM](how-to-create-manage-template.md). All lab VMs use this template as their base image. ## Cost -Let's cover an example cost estimate for this class. This estimate doesn't include the cost of running a license server. Suppose you have a class of 25 students, each of whom has 20 hours of scheduled class time. Each student also has an extra 10 quota hours for homework or assignments outside of scheduled class time. The virtual machine size we chose was **Small GPU (Visualization)**, which is 160 lab units. +This section provides a cost estimate for running this class for 25 users. There are 20 hours of scheduled class time. Also, each user gets 10 quota hours for homework or assignments outside scheduled class time. The virtual machine size we chose was **Small GPU (Visualization)**, which is 160 lab units. This estimate doesn't include the cost of running a license server. -- 25 students × (20 scheduled hours + 10 quota hours) × 160 Lab Units × USD0.01 per hour = USD1200.00+- 25 students × (20 scheduled hours + 10 quota hours) × 160 lab units > [!IMPORTANT] > The cost estimate is for example purposes only. For current pricing information, see [Azure Lab Services pricing](https://azure.microsoft.com/pricing/details/lab-services/). |
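As a worked check of the estimate above, using the illustrative rate of USD0.01 per lab unit hour that appeared in the earlier version of this example (current pricing may differ):

```bash
# 25 users x (20 scheduled + 10 quota) hours x 160 lab units x USD0.01 per unit-hour
echo "scale=2; 25 * (20 + 10) * 160 * 0.01" | bc
# Output: 1200.00
```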
lab-services | Class Type Jupyter Notebook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-jupyter-notebook.md | Last updated 02/17/2023 # Set up a lab to teach data science with Python and Jupyter Notebooks + This article outlines how to set up a [template virtual machine (VM)](./classroom-labs-concepts.md#template-virtual-machine) in Azure Lab Services with the tools for teaching students to use Jupyter Notebooks. You also learn how lab users can connect to notebooks on their virtual machines. [Jupyter Notebooks](https://jupyter-notebook.readthedocs.io/) is an open-source project that enables you to easily combine rich text and executable Python source code on a single canvas, known as a notebook. Running a notebook results in a linear record of inputs and outputs. Those outputs can include text, tables of information, scatter plots, and more. |
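A minimal sketch of preparing such a template VM by hand; it assumes Python and pip are already installed on the VM, and the package list is illustrative:

```bash
# Install Jupyter Notebook and common data science packages on the template VM.
python3 -m pip install --upgrade pip
python3 -m pip install notebook numpy pandas matplotlib

# Start the notebook server; by default it listens on http://localhost:8888.
jupyter notebook
```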
lab-services | How To Attach Detach Shared Image Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-detach-shared-image-gallery.md | Title: Attach or detach an Azure Compute Gallery in Azure Lab Services | Microsoft Docs + Title: Attach or detach a compute gallery to a lab plan + description: This article describes how to attach an Azure Compute Gallery to a lab in Azure Lab Services. ++++ Previously updated : 07/04/2022- Last updated : 03/01/2023 -# Attach or detach a compute gallery in Azure Lab Services +# Attach or detach an Azure compute gallery to a lab plan in Azure Lab Services [!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)] This article shows you how to attach or detach an Azure Compute Gallery to a lab Saving images to a compute gallery and replicating those images incurs additional cost. This cost is separate from the Azure Lab Services usage cost. For more information about Azure Compute Gallery pricing, see [Azure Compute Gallery – Billing](../virtual-machines/azure-compute-gallery.md#billing). +## Prerequisites ++- To change settings for the lab plan, your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Lab Services Contributor](/azure/role-based-access-control/built-in-roles#lab-services-contributor) role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles). ++- To attach an Azure compute gallery to a lab plan, your Azure account needs the following permissions: ++ - [Owner](/azure/role-based-access-control/built-in-roles#owner) role on the Azure compute gallery resource, if you're using an existing compute gallery + - [Owner](/azure/role-based-access-control/built-in-roles#owner) role on the resource group, if you're creating a new compute gallery + ## Scenarios Here are a couple of scenarios supported by attaching a compute gallery. A lab creator can create a template VM based on both generalized and specialized ## Create and attach a compute gallery -> [!IMPORTANT] -> Your user account must have permission to create a new Azure Compute Gallery. - 1. Open your lab plan in the [Azure portal](https://portal.azure.com).+ 1. Select **Azure compute gallery** on the menu.+ 1. Select the **Create Azure compute gallery** button. :::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/no-gallery-create-new.png" alt-text="Screenshot of the Create Azure compute gallery button."::: 1. In the **Create Azure compute gallery** window, enter a **name** for the gallery, and then select **Create**. - :::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/create-azure-compute-gallery-window.png" alt-text="Screenshot of the Create compute gallery window."::: + :::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/create-azure-compute-gallery-window.png" alt-text="Screenshot of the Create compute gallery window." lightbox="./media/how-to-attach-detach-shared-image-gallery/create-azure-compute-gallery-window.png"::: Azure Lab Services creates the compute gallery and attaches it to the lab plan. All labs created using this lab plan can now use images from the attached compute gallery. In the bottom pane, you see images in the compute gallery. There are no images in this new gallery. When you upload images to the gallery, you see them on this page. 
## Attach an existing compute gallery The following procedure shows you how to attach an existing compute gallery to a lab plan. 1. Open your lab plan in the [Azure portal](https://portal.azure.com).+ 1. Select **Azure compute gallery** on the menu.+ 1. Select the **Attach existing gallery** button. :::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/no-gallery-attach-existing.png" alt-text="Screenshot of the Attach existing gallery button."::: 1. On the **Attach an existing compute gallery** page, select your compute gallery, and then select the **Select** button. - :::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/attach-existing-compute-gallery.png" alt-text="Azure compute gallery page for lab plan when gallery has been attached."::: --> [!NOTE] -> The **Azure Lab Services** app must be assigned the **Owner** role on the compute gallery to show in the list. + :::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/attach-existing-compute-gallery.png" alt-text="Screenshot of the Azure compute gallery page for a lab plan when the gallery is attached."::: All labs created using this lab plan can now use images from the attached compute gallery. ## Enable and disable images -All images in the attached compute gallery are disabled by default. To enable selected images: +All images in the attached compute gallery are disabled by default. -1. Check images you want to enable. -1. Select **Enable image** button. -1. Select **Apply**. +To enable or disable images from a compute gallery: +1. Check the VM images in the list. -To disable selected images: +1. Select **Enable image** or **Disable image**, to enable or disable the images. -1. Check images you want to disable. -1. Select **Disable image** button. -1. Select **Apply**. +1. Select **Apply** to confirm the action. ++ :::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/enable-attached-gallery-image.png" alt-text="Screenshot that shows how to enable an image for an attached compute gallery."::: ## Detach a compute gallery To detach a compute gallery from your lab, select **Detach** on the toolbar. Confirm the detach operation. -Only one Azure Compute Gallery can be attached to a lab. To attach another compute gallery, follow the below steps: --1. Select **Change gallery** on the toolbar. -1. Confirm the change operation. -1. On the **Attach an existing compute gallery** page, select your compute gallery, and then select the **Select** button. +Only one Azure compute gallery can be attached to a lab plan. To attach another compute gallery, follow the steps to [attach an existing compute gallery](#attach-an-existing-compute-gallery). ## Next steps |
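The compute gallery itself can be created ahead of time with the Azure CLI. This is a sketch under assumed names — the resource group, gallery name, and assignee are placeholders — covering the Owner role assignment called out in the prerequisites:

```bash
# Create an Azure Compute Gallery to attach to the lab plan.
az sig create --resource-group myResourceGroup --gallery-name myComputeGallery

# Look up the gallery's resource ID.
gallery_id=$(az sig show --resource-group myResourceGroup \
  --gallery-name myComputeGallery --query id --output tsv)

# Grant Owner on the gallery, which the prerequisites require for
# attaching an existing gallery to a lab plan.
az role assignment create --assignee "user@contoso.com" \
  --role "Owner" --scope "$gallery_id"
```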
lab-services | How To Configure Auto Shutdown Lab Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-auto-shutdown-lab-plans.md | Title: Configure automatic shutdown of VMs in Azure Lab Services -description: This article describes how to configure automatic shutdown of VMs in the lab plan. + Title: Configure automatic shutdown for a lab plan ++description: Learn how to enable or disable automatic shutdown of lab VMs in Azure Lab Services by configuring the lab plan settings. Automatic shutdown happens when a user disconnects from the remote connection. ++++ Previously updated : 11/13/2021 Last updated : 03/01/2023 # Configure automatic shutdown of VMs for a lab plan ++> [!NOTE] +> If using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see how to [configure automatic shutdown of VMs for a lab account](./how-to-configure-lab-accounts.md). + You can enable several auto-shutdown cost control features to avoid extra costs when the virtual machines aren't being used. - Disconnect idle virtual machines. The **disconnect idle virtual machines** has two settings. Both settings use a Review more details about the auto-shutdown features in the [Maximize cost control with auto-shutdown settings](cost-management-guide.md#automatic-shutdown-settings-for-cost-control) section. +Azure Lab Services supports automatic shutdown for both Windows-based and Linux-based virtual machines. For Linux-based VMs, [support depends on the specific Linux distribution and version](#supported-linux-distributions-for-automatic-shutdown). + ## Enable automatic shutdown 1. In the [Azure portal](https://portal.azure.com/), navigate to the **Lab Plan** page. Review more details about the auto-shutdown features in the [Maximize cost contr To disable the setting(s), uncheck the checkbox(es) on this page. +## Supported Linux distributions for automatic shutdown ++Azure Lab Services supports automatic shutdown for many Linux distributions and versions. ++ ## Next steps To learn about how a lab owner can configure or override this setting at the lab level, see [Configure automatic shutdown of VMs for a lab](how-to-enable-shutdown-disconnect.md) |
lab-services | How To Configure Student Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-student-usage.md | Title: Configure usage settings in labs of Azure Lab Services -description: Learn how to configure the number of students for a lab, get them registered with the lab, control the number of hours they can use the VM, and more. + Title: Manage lab users ++description: Learn how to manage lab users in Azure Lab Services. Configure the number of lab users, manage user registrations, and specify the number of hours they can use their lab VM. ++++ Previously updated : 01/05/2022 Last updated : 03/02/2023 -# Add and manage lab users +# Manage lab users in Azure Lab Services -This article describes how to add student users to a lab, register them with the lab, control the number of additional hours they can use the virtual machine (VM), and more. +This article describes how to manage lab users in Azure Lab Services. Learn how to add users to a lab, manage their registration status, and how to specify the number of additional hours they can use the virtual machine (VM). -When you add users, by default, the **Restrict access** option is turned on and, unless they're in the list of users, students can't register with the lab even if they have a registration link. Only listed users can register with the lab by using the registration link you send. You can turn off **Restrict access**, which allows students to register with the lab as long as they have the registration link. +The workflow for letting lab users access a lab consists of the following steps: -This article shows how to add users to a lab. +1. Specify the list of lab users that can access the lab +1. Invite users to the lab by sending a lab registration link +1. Lab users register for the lab by using the registration link +1. Specify a lab schedule or quota hours to control when users can access their lab VM -## Add users from an Azure AD group +By default, access to a lab is restricted. Only users that are in the list of lab users can register for a lab, and get access to the lab virtual machine (VM). You can disable restricted access for a lab, which lets any user register for a lab if they have the registration link. -### Overview +You can [add users from an Azure Active Directory (Azure AD) group](#add-users-to-a-lab-from-an-azure-ad-group), or [manually add a list of users by email](#add-users-manually). If you enable Azure Lab Services integration with [Microsoft Teams](./how-to-manage-labs-within-teams.md) or [Canvas](./how-to-manage-labs-within-canvas.md), Azure Lab Services automatically grants user access to the lab and assigns a lab VM based on their membership in Microsoft or Canvas. In this case, you don't have to specify the lab user list, and users don't have to register for the lab. -You can now sync a lab user list to an existing Azure Active Directory (Azure AD) group so that you don't have to manually add or delete users. +## Prerequisites -An Azure AD group can be created within your organization's Azure Active Directory to manage access to organizational resources and cloud-based apps. To learn more, see [Azure AD groups](../active-directory/fundamentals/active-directory-manage-groups.md). If your organization uses Microsoft Office 365 or Azure services, your organization will already have admins who manage your Azure Active Directory. 
+- To manage users for the lab, your Azure account needs one of the following permissions: ++ - [Lab Creator](/azure/role-based-access-control/built-in-roles#lab-creator), [Lab Contributor](/azure/role-based-access-control/built-in-roles#lab-contributor), or [Lab Operator](/azure/role-based-access-control/built-in-roles#lab-operator) role at the lab plan or resource group level. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles). + - [Owner](/azure/role-based-access-control/built-in-roles#owner) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) at the lab plan or resource group level. ++## Add users to a lab from an Azure AD group ++You can sync a lab user list to an existing Azure AD group. When you use an Azure AD group, you don't have to manually add or delete users in the lab settings. ++You can create an Azure AD group within your organization's Azure AD to manage access to organizational resources and cloud-based apps. To learn more, see [Azure AD groups](../active-directory/fundamentals/active-directory-manage-groups.md). If your organization uses Microsoft Office 365 or Azure services, your organization already has admins who manage your Azure Active Directory. ### Sync users with Azure AD group -> [!IMPORTANT] -> Make sure the user list is empty. If there are existing users inside a lab that you added manually or through importing a CSV file, the option to sync the lab to an existing group will not appear. +When you sync a lab with an Azure AD group, Azure Lab Services pulls all users inside the Azure AD group into the lab as lab users. Only people in the Azure AD group have access to the lab. The user list automatically refreshes every 24 hours to match the latest membership of the Azure AD group. You can also manually synchronize the list of lab users at any time. ++The option to synchronize the list of lab users with an Azure AD group is only available if you haven't added users to the lab manually or through a CSV import yet. Make sure there are no users in the lab user list. ++To sync a lab with an existing Azure AD group: 1. Sign in to the [Azure Lab Services website](https://labs.azure.com/).+ 1. Select the lab you want to work with.-1. In the left pane, select **Users**. -1. Select **Sync from group**. - :::image type="content" source="./media/how-to-configure-student-usage/add-users-sync-group.png" alt-text="Add users by syncing from an Azure AD group"::: +1. In the left pane, select **Users**, and then select **Sync from group**. ++ :::image type="content" source="./media/how-to-configure-student-usage/add-users-sync-group.png" alt-text="Screenshot that shows how to add users by syncing from an Azure AD group."::: ++1. Select the Azure AD group you want to sync users with from the list of groups. ++ If you don't see any Azure AD groups in the list, this could be because of the following reasons: -1. You'll be prompted to pick an existing Azure AD group to sync your lab to. + - You're a guest user in Azure Active Directory (usually if you're outside the organization that owns the Azure AD), and you're not allowed to search for groups inside the Azure AD. In this case, you can't add an Azure AD group to the lab. + - Azure AD groups you created through Microsoft Teams don't show up in this list. You can add the Azure Lab Services app inside Microsoft Teams to create and manage labs directly from within Microsoft Teams. 
Learn more about [managing a lab's user list from within Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams). - If you don't see an Azure AD group in the list, could be because of the following reasons: +1. Select **Add** to sync the lab users with the Azure AD group. - If you are a guest user for an Azure Active Directory (usually if you're outside the organization that owns the Azure AD), and you are not able to search for groups inside the Azure AD. In this case, you can't add an Azure AD group to the lab in this case. - Azure AD groups created through Teams don't show up in this list. You can add the Azure Lab Services app inside Teams to create and manage labs directly from within it. See more information about [managing a lab's user list from within Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams). -1. Once you picked the Azure AD group to sync your lab to, select **Add**. -1. Once a lab is synced, it will pull everyone inside the Azure AD group into the lab as users, and you will see the user list updated. Only the people in this Azure AD group will have access to your lab. The user list will refresh every 24 hours to match the latest membership of the Azure AD group. You can also select the Sync button in the Users tab to manually sync to the latest changes in the Azure AD group. -1. Invite the users to your lab by clicking on the **Invite All** button, which will send an email to all users with the registration link to the lab. + Azure Lab Services automatically pulls the list of users from Azure AD, and refreshes the list every 24 hours. ++ Optionally, you can select **Sync** in the **Users** tab to manually synchronize to the latest changes in the Azure AD group. ++You can now start inviting users to your lab. Learn how to [send invitations to lab users](#send-invitations-to-users). ### Automatic management of virtual machines based on changes to the Azure AD group -Once the lab is synced to an Azure AD group, the number of virtual machines in the lab will automatically match the number of users in the group. You will no longer be able to manually update the lab capacity. When a user is added to the Azure AD group, a lab will automatically add a virtual machine for that user. When a user is deleted from the Azure AD group, a lab will automatically delete the user's virtual machine from the lab. +When you synchronize a lab with an Azure AD group, Azure Lab Services automatically manages the number of lab VMs based on the number of users in the group. You can't manually update the lab capacity in this case. ++When a user is added to the Azure AD group, Azure Lab Services automatically adds a lab VM for that user. When a user is no longer a member of the Azure AD group, the lab VM for that user is automatically deleted from the lab. -## Add users manually from email(s) or CSV file +## Add users manually -In this section, you add students manually (by email address or by uploading a CSV file). +You can add lab users manually by providing their email address in the lab configuration or by uploading a CSV file. ### Add users by email address -1. In the left pane, select **Users**. -1. Select **Add users manually**. +1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. - :::image type="content" source="./media/how-to-configure-student-usage/add-users-manually.png" alt-text="Add users manually"::: -1. 
Select **Add by email address** (default), enter the students' email addresses on separate lines or on a single line separated by semicolons. +1. Select **Users**, and then select **Add users manually**. - :::image type="content" source="./media/how-to-configure-student-usage/add-users-email-addresses.png" alt-text="Add users' email addresses"::: -1. Select **Save**. + :::image type="content" source="./media/how-to-configure-student-usage/add-users-manually.png" alt-text="Screenshot that shows how to add users manually."::: - The list displays the email addresses and statuses of the current users, whether they're registered with the lab or not. +1. Select **Add by email address**, enter the users' email addresses on separate lines or on a single line separated by semicolons. - :::image type="content" source="./media/how-to-configure-student-usage/list-of-added-users.png" alt-text="Users list"::: + :::image type="content" source="./media/how-to-configure-student-usage/add-users-email-addresses.png" alt-text="Screenshot that shows how to add users' email addresses in the Lab Services website." lightbox="./media/how-to-configure-student-usage/add-users-email-addresses.png"::: - > [!NOTE] - > After the students are registered with the lab, the list displays their names. +1. Select **Add**. ++ The list displays the email addresses and registration status of the lab users. After a user registers for the lab, the list also displays the user's name. ++ :::image type="content" source="./media/how-to-configure-student-usage/list-of-added-users.png" alt-text="Screenshot that shows the lab user list in the Lab Services website." lightbox="./media/how-to-configure-student-usage/list-of-added-users.png"::: ### Add users by uploading a CSV file You can also add users by uploading a CSV file that contains their email addresses. -A CSV text file is used to store comma-separated (CSV) tabular data (numbers and text). Instead of storing information in columns fields (such as in spreadsheets), a CSV file stores information separated by commas. Each line in a CSV file will have the same number of comma-separated "fields." You can use Excel to easily create and edit CSV files. +You use a CSV text file to store comma-separated (CSV) tabular data (numbers and text). Instead of storing information in columns fields (such as in spreadsheets), a CSV file stores information separated by commas. Each line in a CSV file has the same number of comma-separated *fields*. You can use Microsoft Excel to easily create and edit CSV files. -1. Using Microsoft Excel, create a CSV file that lists students' email addresses in one column. +1. Use Microsoft Excel or a text editor of your choice, to create a CSV file with the users' email addresses in one column. - :::image type="content" source="./media/how-to-configure-student-usage/csv-file-with-users.png" alt-text="List of users in a CSV file"::: -1. At the top of the **Users** pane, select **Add users**, and then select **Upload CSV**. -1. Select the CSV file that contains the students' email addresses, and then select **Open**. The **Add users** window displays the email address list from the CSV file. -1. Select **Save**. -1. In the **Users** pane, view the list of added students. 
+ :::image type="content" source="./media/how-to-configure-student-usage/csv-file-with-users.png" alt-text="Screenshot that shows the list of users in a CSV file."::: - :::image type="content" source="./media/how-to-configure-student-usage/list-of-added-users.png" alt-text="List of added users in the Users pane"::: +1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. -## Send invitations to users +1. Select **Users**, select **Add users**, and then select **Upload CSV**. ++1. Select the CSV file with the users' email addresses, and then select **Open**. ++ The **Add users** page shows the email address list from the CSV file. ++1. Select **Add**. -To send a registration link to new users, use one of the following methods. + The **Users** page now shows the list of lab users you uploaded. ++ :::image type="content" source="./media/how-to-configure-student-usage/list-of-added-users.png" alt-text="Screenshot that shows the list of added users in the Users page in the Lab Services website." lightbox="./media/how-to-configure-student-usage/list-of-added-users.png"::: ++## Send invitations to users If the **Restrict access** option is enabled for the lab, only listed users can use the registration link to register to the lab. This option is enabled by default. +To send a registration link to new users, use one of the methods in the following sections. + ### Invite all users -This method shows you how to send email with a registration link and an optional message to all listed students. +You can invite all users to the lab by sending an email via the Azure Lab Services website. The email contains the lab registration link, and an optional message. ++To invite all users: ++1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. -1. In the **Users** pane, select **Invite all**. +1. Select **Users**, and then select **Invite all**. -  + :::image type="content" source="./media/how-to-configure-student-usage/invite-all-button.png" alt-text="Screenshot that shows the Users page in the Azure Lab Services website, highlighting the Invite all button." lightbox="./media/how-to-configure-student-usage/invite-all-button.png"::: 1. In the **Send invitation by email** window, enter an optional message, and then select **Send**. The email automatically includes the registration link. To get and save the registration link separately, select the ellipsis (**...**) at the top of the **Users** pane, and then select **Registration link**. -  + :::image type="content" source="./media/how-to-configure-student-usage/send-email.png" alt-text="Screenshot that shows the Send registration link by email window in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/send-email.png"::: The **Invitation** column of the **Users** list displays the invitation status for each added user. The status should change to **Sending** and then to **Sent on \<date>**. ### Invite selected users -This method shows you how to invite only certain students and get a registration link that you can share with other people. +Instead of inviting all users, you can also invite specific users and get a registration link that you can share with other people. -1. In the **Users** pane, select a student or multiple students in the list. +To invite selected users: -1. In the row for the student you've selected, select the **envelope** icon or, on the toolbar, select **Invite**. +1. 
In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. -  +1. Select **Users**, and then select one or more users from the list. ++1. In the row for the user you selected, select the **envelope** icon or, on the toolbar, select **Invite**. ++ :::image type="content" source="./media/how-to-configure-student-usage/invite-selected-users.png" alt-text="Screenshot that shows how to invite selected users to a lab in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/invite-selected-users.png"::: 1. In the **Send invitation by email** window, enter an optional **message**, and then select **Send**. -  + :::image type="content" source="./media/how-to-configure-student-usage/send-invitation-to-selected-users.png" alt-text="Screenshot that shows the Send invitation email for selected users in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/send-invitation-to-selected-users.png"::: - The **Users** pane displays the status of this operation in the **Invitation** column of the table. The invitation email includes the registration link that students can use to register with the lab. + The **Users** pane displays the status of this operation in the **Invitation** column of the table. The invitation email includes the registration link that users can use to register with the lab. -## Get the registration link +### Get the registration link -In this section, you can get the registration link from the portal and send it by using your own email application. +You can get the lab registration link from the Azure Lab Services website, and send it by using your own email application. -1. In the **Users** pane, select **Registration link**. +1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. -  +1. Select **Users**, and then select **Registration link**. -1. In the **User registration** window, select **Copy**, and then select **Done**. + :::image type="content" source="./media/how-to-configure-student-usage/registration-link-button.png" alt-text="Screenshot that shows how to get the lab registration link in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/registration-link-button.png"::: -  +1. In the **User registration** window, select **Copy**, and then select **Done**. - The link is copied to the clipboard. + :::image type="content" source="./media/how-to-configure-student-usage/registration-link.png" alt-text="Screenshot that shows the User registration window in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/registration-link.png"::: -1. In your email application, paste the registration link, and then send the email to a student so that the student can register for the class. + The link is copied to the clipboard. In your email application, paste the registration link, and then send the email to a user so that they can register for the class. ## View registered users -1. Go to the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). -1. Select **Sign in**, and then enter your credentials. Azure Lab Services supports organizational accounts and Microsoft accounts. -1. On the **My labs** page, select the lab whose usage you want to track. -1. In the left pane, select **Users**, or select the **Users** tile. The **Users** pane displays a list of students who have registered with your lab. 
+To view the list of lab users that have already registered for the lab by using the lab registration link: ++1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. -  +1. Select **Users** to view the list of lab users. ++ The list shows the list of lab users with their registration status. The user status should show **Registered**, and their name should also be available after registration. ++ :::image type="content" source="./media/tutorial-track-usage/registered-users.png" alt-text="Screenshot that shows the list of registered users for a lab in the Azure Lab Services website." lightbox="./media/tutorial-track-usage/registered-users.png"::: > [!NOTE]- > If you [republish a lab](how-to-create-manage-template.md#publish-the-template-vm) or [Reset VMs](how-to-manage-vm-pool.md#reset-vms), the students will remain registered for the labs' VMs. However, the contents of the VMs will be deleted and the VMs will be recreated with the template VM's image. + > If you [republish a lab](how-to-create-manage-template.md#publish-the-template-vm) or [Reset VMs](how-to-manage-vm-pool.md#reset-vms), the users remain registered for the labs' VMs. However, the contents of the VMs will be deleted and the VMs will be recreated with the template VM's image. ## Set quotas for users -You can set an hour quota for a student one of two ways: +Quotas enable lab users to use the lab for a number of hours outside of scheduled times. For example, users might access the lab to complete their homework. Learn more about [quota hours](./classroom-labs-concepts.md#quota). -1. In the **Users** pane, select **Quota per user: \<number> hour(s)** on the toolbar. -1. In the **Quota per user** window, specify the number of hours you want to give to each student outside the scheduled class time, and then select **Save**. +You can set an hour quota for a user in one of two ways: -  +1. In the **Users** pane, select **Quota per user: \<number> hour(s)** on the toolbar. - The changed values are now displayed on the **Quota per user: \<number of hours>** button on the toolbar and in the users list, as shown here: +1. In the **Quota per user** window, specify the number of hours you want to give to each user outside the scheduled time. -  + :::image type="content" source="./media/how-to-configure-student-usage/quota-per-user.png" alt-text="Screenshot that shows the Quota per user window in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/quota-per-user.png"::: > [!IMPORTANT]- > The [scheduled running time of VMs](how-to-create-schedules.md) does not count against the quota that's allotted to a student. The quota is for the time outside of scheduled hours that a student spends on VMs. + > The [scheduled running time of VMs](how-to-create-schedules.md) does not count against the quota that's allotted to a user. The quota is for the time outside of scheduled hours that a user spends on VMs. -## Set additional quotas for specific users +1. Select **Save** to save the changes. -You can specify quotas for certain students beyond the common quotas that were set for all users in the preceding section. For example, if you, as an educator, set the quota for all students to 10 hours and set an additional quota of 5 hours for a specific student, that student gets 15 (10 + 5) hours of quota. If you change the common quota later to, say, 15, the student gets 20 (15 + 5) hours of quota. Remember that this overall quota is outside the scheduled time. 
The time that a student spends on a lab VM during the scheduled time does not count against this quota. + Notice that the user list shows the updated quota hours for all users. -To set additional quotas, do the following: +### Set additional quotas for specific users -1. In the **Users** pane, select a student from the list, and then select **Adjust quota** on the toolbar. +You can specify quotas for certain users beyond the common quotas that were set for all users in the preceding section. For example, if you, as a lab creator, set the quota for all users to 10 hours and set an additional quota of 5 hours for a specific user, that user gets 15 (10 + 5) hours of quota. If you change the common quota later to, say, 15, the user gets 20 (15 + 5) hours of quota. Remember that this overall quota is outside the scheduled time. The time that a user spends on a lab VM during the scheduled time doesn't count against this quota. -  +To set additional quotas, do the following: ++1. In the **Users** pane, select one or more users from the list, and then select **Adjust quota** on the toolbar. -1. In the **Adjust quota for \<selected user or users email address>**, enter the number of additional lab hours you want to grant to the selected student or students, and then select **Apply**. +1. In the **Adjust quota** window, enter the number of additional lab hours you want to grant to the selected users, and then select **Apply**. -  + :::image type="content" source="./media/how-to-configure-student-usage/additional-quota.png" alt-text="Screenshot that shows the Adjust quota window in the Azure Lab Services website." lightbox="./media/how-to-configure-student-usage/additional-quota.png"::: - The **Usage** column displays the updated quota for the selected students. +1. Select **Apply** to save the changes. -  + Notice that the user list shows the updated quota hours for the users you selected. -## Student accounts +## User account types -To add students to a lab, you use their email accounts. Students might have the following types of email accounts: +To add users to a lab, you use their email accounts. Users might have the following types of email accounts: -- A student email account that's provided by your university's Azure Active Directory instance.+- An organizational email account that's provided by your university's Azure Active Directory instance. - A Microsoft-domain email account, such as *outlook.com*, *hotmail.com*, *msn.com*, or *live.com*. - A non-Microsoft email account, such as one provided by Yahoo! or Google. However, these types of accounts must be linked with a Microsoft account. - A GitHub account. This account must be linked with a Microsoft account. ### Use a non-Microsoft email account -Students can use non-Microsoft email accounts to register and sign in to a lab. However, the registration requires that they first create a Microsoft account that's linked to their non-Microsoft email address. +Users can use non-Microsoft email accounts to register and sign in to a lab. However, the registration requires that they first create a Microsoft account that's linked to their non-Microsoft email address. -Many students might already have a Microsoft account that's linked to their non-Microsoft email address. For example, students already have a Microsoft account if they've used their email address with other Microsoft products or services, such as Office, Skype, OneDrive, or Windows. +Many users might already have a Microsoft account that's linked to their non-Microsoft email address. 
For example, users already have a Microsoft account if they've used their email address with other Microsoft products or services, such as Office, Skype, OneDrive, or Windows. -When students use the registration link to sign in to a classroom, they're prompted for their email address and password. Students who attempt to sign in with a non-Microsoft account that's not linked to a Microsoft account will receive the following error message: +When users use the registration link to sign in to a classroom, they're prompted for their email address and password. Users who attempt to sign in with a non-Microsoft account that's not linked to a Microsoft account receive the following error message: - -Here's a link for students to [sign up for a Microsoft account](http://signup.live.com). +Here's a link for users to [sign up for a Microsoft account](http://signup.live.com). > [!IMPORTANT]-> When students sign in to a lab, they aren't given the option to create a Microsoft account. For this reason, we recommend that you include this sign-up link, `http://signup.live.com`, in the lab registration email that you send to students who are using non-Microsoft accounts. +> When users sign in to a lab, they aren't given the option to create a Microsoft account. For this reason, we recommend that you include this sign-up link, `http://signup.live.com`, in the lab registration email that you send to users who are using non-Microsoft accounts. ### Use a GitHub account -Students can also use an existing GitHub account to register and sign in to a lab. If they already have a Microsoft account linked to their GitHub account, students can sign in and provide their password as shown in the preceding section. +Users can also use an existing GitHub account to register and sign in to a lab. If they already have a Microsoft account linked to their GitHub account, users can sign in and provide their password as shown in the preceding section. -If they haven't yet linked their GitHub account to a Microsoft account, they can do the following: +If users haven't yet linked their GitHub account to a Microsoft account, they can do the following: 1. Select the **Sign-in options** link, as shown here: -  + :::image type="content" source="./media/how-to-configure-student-usage/signin-options.png" alt-text="Screenshot that shows the Microsoft sign in window, highlighting the Sign-in options link."::: 1. In the **Sign-in options** window, select **Sign in with GitHub**. -  + :::image type="content" source="./media/how-to-configure-student-usage/signin-github.png" alt-text="Screenshot that shows the Microsoft sign-in options window, highlighting the option to sign in with GitHub."::: - At the prompt, students then create a Microsoft account that's linked to their GitHub account. The linking happens automatically when they select **Next**. They're then immediately signed in and connected to the lab. + At the prompt, users then create a Microsoft account that's linked to their GitHub account. The linking happens automatically when they select **Next**. They're then immediately signed in and connected to the lab. ## Export a list of users to a CSV file -1. Go to the **Users** pane. +To export the list of users for a lab: ++1. In the [Azure Lab Services website](https://labs.azure.com/), select the lab you want to work with. ++1. Select **Users**. + 1. On the toolbar, select the ellipsis (**...**), and then select **Export CSV**. 
-  + :::image type="content" source="./media/how-to-export-users-virtual-machines-csv/users-export-csv.png" alt-text="Screenshot that shows how to export the list of lab users to a CSV file in the Azure Lab Services website." lightbox="./media/how-to-export-users-virtual-machines-csv/users-export-csv.png"::: ## Next steps |
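For the CSV import described in this entry, the file is a single column with one email address per line. A minimal sketch with placeholder addresses:

```bash
# Create a single-column CSV of lab user email addresses to upload.
cat > lab-users.csv <<'EOF'
user1@contoso.com
user2@contoso.com
user3@contoso.com
EOF
```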
lab-services | How To Enable Shutdown Disconnect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-shutdown-disconnect.md | Title: Configure automatic shutdown of VMs for a lab in Azure Lab Services -description: Learn how to enable or disable automatic shutdown of VMs when a remote desktop connection is disconnected. + Title: Configure automatic shutdown for a lab ++description: Learn how to enable or disable automatic shutdown of lab VMs in Azure Lab Services by configuring the lab settings. Automatic shutdown happens when a user disconnects from the remote connection. ++++ Previously updated : 07/04/2022- Last updated : 03/01/2023 # Configure automatic shutdown of VMs for a lab This article shows you how you can configure [automatic shut-down](classroom-lab A lab plan administrator can configure automatic shutdown policies for the lab plan that you use create labs. For more information, see [Configure automatic shutdown of VMs for a lab plan](how-to-configure-auto-shutdown-lab-plans.md). As a lab owner, you can override the settings when creating a lab or after the lab is created. -> [!IMPORTANT] -> Prior to the [August 2022 Update](lab-services-whats-new.md), Linux labs only support automatic shut down when users disconnect and when VMs are started but users don't connect. Support also varies depending on [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions). Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) image. +Azure Lab Services supports automatic shutdown for both Windows-based and Linux-based virtual machines. For Linux-based VMs, [support depends on the specific Linux distribution and version](#supported-linux-distributions-for-automatic-shutdown). ## Configure for the lab level You can configure the auto-shutdown settings when you create a lab or after it's > [!WARNING] > If you shutdown the Linux or Windows operating system (OS) on a VM before disconnecting an RDP session to the VM, the auto-shutdown feature will not work properly. For more information, see [Guide to controlling Windows shutdown behavior](how-to-windows-shutdown.md). +## Supported Linux distributions for automatic shutdown ++Azure Lab Services supports automatic shutdown for many Linux distributions and versions. Support varies depending on whether you're using a lab plan or lab account. ++### Lab plan-based labs +++### Lab account-based labs ++If you're using lab account-based labs, Linux labs only support automatic shut down when users disconnect and when VMs are started but users don't connect. ++Support varies depending on [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions). ++Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) image. + ## Next steps - As an educator, learn about the different [shut-down policies](classroom-labs-concepts.md#automatic-shut-down) available. |
lab-services | How To Use Lab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-lab.md | |
lab-services | Quick Create Lab Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-portal.md | -Educators can create labs containing VMs for students using the Azure Lab Services portal. This quickstart shows you how to create a lab with Windows 11 Pro image. Once a lab is created, an educator [configures the template](how-to-create-manage-template.md), [adds lab users](how-to-configure-student-usage.md#add-and-manage-lab-users), and [publishes the lab](tutorial-setup-lab.md#publish-a-lab). +Educators can create labs containing VMs for students using the Azure Lab Services portal. This quickstart shows you how to create a lab with a Windows 11 Pro image. Once a lab is created, an educator [configures the template](how-to-create-manage-template.md), [adds lab users](how-to-configure-student-usage.md), and [publishes the lab](tutorial-setup-lab.md#publish-a-lab). ## Prerequisites The following steps show how to create a lab with Azure Lab Services. :::image type="content" source="./media/quick-create-lab-portal/new-lab-credentials.png" alt-text="Screenshot of the Virtual Machine credentials window for Azure Lab Services."::: - > [!IMPORTANT] - > Make a note of user name and password. They won't be shown again. -+ > [!IMPORTANT] + > Make a note of the user name and password. They won't be shown again. + 1. On the **Lab policies** page, leave the default selections and select **Next**. :::image type="content" source="./media/quick-create-lab-portal/quota-for-each-user.png" alt-text="Screenshot of the Lab policy window when creating a new Azure Lab Services lab."::: The following steps show how to create a lab with Azure Lab Services. ## Clean up resources -When no longer needed, you can delete the lab. +When no longer needed, you can delete the lab: -On the tile for the lab, select three dots (...) in the corner, and then select **Delete**. +1. On the tile for the lab, select three dots (...) in the corner, and then select **Delete**. + :::image type="content" source="./media/how-to-manage-labs/delete-button.png" alt-text="Screenshot of My labs page with More menu then Delete menu item highlighted."::: -On the **Delete lab** dialog box, select **Delete** to continue with the deletion. +1. On the **Delete lab** dialog box, select **Delete** to continue with the deletion. ## Troubleshooting |
lab-services | Quick Create Lab Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-template.md | Last updated 05/10/2022 # Quickstart: Create a lab using an ARM template -This quickstart shows you, as the educator or admin, how to use an Azure Resource Manager (ARM) template to create a lab. This quickstart shows you how to create a lab with Windows 11 Pro image. Once a lab is created, an educator [configures the template](how-to-create-manage-template.md), [adds lab users](how-to-configure-student-usage.md#add-and-manage-lab-users), and [publishes the lab](tutorial-setup-lab.md#publish-a-lab). For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md). +This quickstart shows you, as the educator or admin, how to use an Azure Resource Manager (ARM) template to create a lab with a Windows 11 Pro image. Once a lab is created, an educator [configures the template](how-to-create-manage-template.md), [adds lab users](how-to-configure-student-usage.md), and [publishes the lab](tutorial-setup-lab.md#publish-a-lab). For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md). [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] |
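To make the template route concrete, here's a trimmed, hypothetical sketch of the kind of `Microsoft.LabServices/labs` resource such a template deploys. It isn't the quickstart template itself: the image reference, VM SKU, and quota are illustrative placeholders that you'd need to confirm for your subscription and region:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": { "type": "securestring" }
  },
  "resources": [
    {
      "type": "Microsoft.LabServices/labs",
      "apiVersion": "2022-08-01",
      "name": "quickstart-lab",
      "location": "[resourceGroup().location]",
      "properties": {
        "title": "Quickstart lab",
        "securityProfile": { "openAccess": "Disabled" },
        "virtualMachineProfile": {
          "createOption": "TemplateVM",
          // Placeholder Windows 11 Pro marketplace image; verify the exact
          // publisher/offer/sku values available to your lab plan.
          "imageReference": {
            "publisher": "microsoftwindowsdesktop",
            "offer": "windows-11",
            "sku": "win11-22h2-pro",
            "version": "latest"
          },
          // Placeholder VM size and per-user quota (10 hours).
          "sku": { "name": "<lab-vm-sku>", "capacity": 2 },
          "usageQuota": "PT10H",
          "useSharedPassword": "Enabled",
          "adminUser": {
            "username": "labadmin",
            "password": "[parameters('adminPassword')]"
          }
        }
      }
    }
  ]
}
```

You could deploy a file like this with `az deployment group create --resource-group <rg> --template-file lab.json`, supplying the password securely when prompted.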
lab-services | Setup Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/setup-guide.md | After you understand the requirements for your class's lab, you're ready to set - [Send invitations to users](./tutorial-setup-lab.md#send-invitation-emails-to-users) - [Manage Lab Services user lists in Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) - For information about the types of accounts that students can use, see [Student accounts](./how-to-configure-student-usage.md#student-accounts). + For information about the types of accounts that students can use, see [Student accounts](./how-to-configure-student-usage.md#user-account-types). 1. **Set cost controls**. To set a schedule, establish quotas, and enable automatic shutdown, see the following tutorials: |
lab-services | Tutorial Setup Lab Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab-account.md | Title: Set up a lab account with Azure Lab Services | Microsoft Docs -description: Learn how to set up a lab account and add users that can create labs in the lab account. + Title: 'Tutorial: Set up a lab account with Azure Lab Services' ++description: Learn how to set up a lab account with Azure Lab Services in the Azure portal. Then, grant a user access to create labs. Previously updated : 01/06/2022++++ Last updated : 03/03/2023 -In Azure Lab Services, a lab account serves as the central account in which your organization's labs are managed. In your lab account, give permission to others to create labs, and set policies that apply to all labs under the lab account. In this tutorial, learn how to create a lab account. +In Azure Lab Services, a lab account serves as the central resource in which you manage your organization's labs. In your lab account, give permission to others to create labs, and set policies that apply to all labs under the lab account. In this tutorial, learn how to create a lab account by using the Azure portal. In this tutorial, you do the following actions: In this tutorial, you do the following actions: > - Create a lab account > - Add a user to the Lab Creator role -If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. ++## Prerequisites ++* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create a lab account The following steps illustrate how to use the Azure portal to create a lab account with Azure Lab Services. 1. Sign in to the [Azure portal](https://portal.azure.com).-2. Select **All Services** on the left menu. Select **DevOps** from **Categories**. Then, select **Lab Services**. If you select star (`*`) next to **Lab Services**, it's added to the **FAVORITES** section on the left menu. From the next time onwards, you select **Lab Services** under **FAVORITES**. -  -3. On the **Lab Services** page, select **Add** on the toolbar or select **Create lab account** button on the page. +1. Select **Create a resource** in the upper left-hand corner of the Azure portal. ++ :::image type="content" source="./media/tutorial-setup-lab-account/azure-portal-create-resource.png" alt-text="Screenshot that shows the Azure portal home page, highlighting the Create a resource button."::: ++1. Search for **lab account**. (**Lab account** can also be found under the **DevOps** category.) ++1. On the **Lab account** tile, select **Create** > **Lab account**. ++ :::image type="content" source="./media/tutorial-setup-lab-account/select-lab-accounts-service.png" alt-text="Screenshot of how to search for and create a lab account by using the Azure Marketplace."::: -  -4. On the **Basics** tab of the **Create a lab account** page, do the following actions: - 1. For **Lab account name**, enter a name. - 2. Select the **Azure subscription** in which you want to create the lab account. - 3. For **Resource group**, select an existing resource group or select **Create new**, and enter a name for the resource group. - 4. For **Location**, select a location/region in which you want to create the lab account. +1. On the **Basics** tab of the **Create a lab account** page, provide the following information: -  - 5. Select **Review + create**. 
- 6. Review the summary, and select **Create**. + | Field | Description | + | | -- | + | **Subscription** | Select the Azure subscription that you want to use to create the resource. | + | **Resource group** | Select an existing resource group or select **Create new**, and enter a name for the new resource group. | + | **Name** | Enter a unique lab account name. <br/>For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices). | + | **Region** | Select a geographic location to host your lab account. | -  -5. When the deployment is complete, expand **Next steps**, and select **Go to resource**. +1. After you're finished configuring the resource, select **Review + Create**. -  -6. Confirm that you see the **Lab Account** page. + :::image type="content" source="./media/tutorial-setup-lab-account/lab-account-basics-page.png" alt-text="Screenshot that shows the Basics tab to create a new lab account in the Azure portal."::: -  +1. Review all the configuration settings and select **Create** to start the deployment of the lab account. ++1. To view the new resource, select **Go to resource**. ++ :::image type="content" source="./media/tutorial-setup-lab-account/go-to-lab-account.png" alt-text="Screenshot that shows the resource deployment completion page in the Azure portal."::: ++1. Confirm that you see the lab account **Overview** page. ++ :::image type="content" source="./media/tutorial-setup-lab-account/lab-account-page.png" alt-text="Screenshot that shows the lab account overview page in the Azure portal."::: ++You've now successfully created a lab account by using the Azure portal. To let others create labs in the lab account, you assign them the Lab Creator role. ## Add a user to the Lab Creator role -To set up a lab in a lab account, the user must be a member of the **Lab Creator** role in the lab account. To provide educators the permission to create labs for their classes, add them to the **Lab Creator** role: For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). +To set up a lab in a lab account, you must be a member of the Lab Creator role in the lab account. To grant people the permission to create labs, add them to the Lab Creator role. ++Follow these steps to [assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). > [!NOTE]-> The account you used to create the lab account is automatically added to this role. If you are planning to use the same user account to create a lab in this tutorial, skip this step. +> Azure Lab Services automatically assigns the Lab Creator role to the Azure account you use to create the lab account. If you plan to use the same user account to create a lab in this tutorial, skip this step. -1. On the **Lab Account** page, select **Access control (IAM)** +1. On the **Lab Account** page, select **Access control (IAM)**. -1. Select **Add** > **Add role assignment**. +1. From the **Access control (IAM)** page, select **Add** > **Add role assignment**. -  + :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows the Access control (I A M) page with Add role assignment menu option highlighted."::: 1. On the **Role** tab, select the **Lab Creator** role. 
-  + :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot that shows the Add role assignment page with Role tab selected."::: -1. On the **Members** tab, select the user you want to add to the Lab Creators role +1. On the **Members** tab, select the user you want to add to the Lab Creators role. 1. On the **Review + assign** tab, select **Review + assign** to assign the role. ## Next steps -In this tutorial, you created a lab account. To learn about how to create a lab as an educator, advance to the next tutorial: +In this tutorial, you created a lab account and granted lab creation permissions to another user. To learn about how to create a lab, advance to the next tutorial: > [!div class="nextstepaction"] > [Set up a lab](tutorial-setup-lab.md) |
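If you prefer to script this role assignment rather than click through **Access control (IAM)**, a role assignment is itself an ARM resource. The sketch below is a hypothetical fragment, not part of the tutorial: `labAccountName` and `educatorPrincipalId` are assumed parameters (the latter holding the user's Azure AD object ID), and `b97fb8bc-a8b2-4522-a38b-dd33c7e65ead` is the documented ID for the Lab Creator built-in role, which you should verify against the built-in roles reference:

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2022-04-01",
  // Role assignment names must be GUIDs; derive one deterministically.
  "name": "[guid(resourceGroup().id, parameters('educatorPrincipalId'), 'LabCreator')]",
  // Scope the assignment to the lab account, not the whole resource group.
  "scope": "[format('Microsoft.LabServices/labaccounts/{0}', parameters('labAccountName'))]",
  "properties": {
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b97fb8bc-a8b2-4522-a38b-dd33c7e65ead')]",
    "principalId": "[parameters('educatorPrincipalId')]",
    "principalType": "User"
  }
}
```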
lab-services | Tutorial Setup Lab Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab-plan.md | Title: Create a lab plan with Azure Lab Services + Title: 'Tutorial: Create a lab plan with Azure Lab Services' description: Learn how to set up a lab plan with Azure Lab Services and assign lab creation permissions to a user by using the Azure portal. |
lab-services | Tutorial Setup Lab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab.md | Manually add users to the lab by providing their email address: :::image type="content" source="./media/tutorial-setup-lab/list-of-added-users.png" alt-text="Screenshot that shows the Users page, showing the list of user email addresses."::: > [!NOTE]- > After a student registers for the lab uing the registration link, the user list also displays their name. The name that's shown in the list is constructed by using the first and last names of the student's information from Azure Active Directory or their Microsoft Account. For more information about supported account types, see [Student accounts](how-to-configure-student-usage.md#student-accounts). + > After a student registers for the lab using the registration link, the user list also displays their name. The name that's shown in the list is constructed from the first and last names in the student's information in Azure Active Directory or their Microsoft Account. For more information about supported account types, see [Student accounts](how-to-configure-student-usage.md#user-account-types). ## Send invitation emails to users |
lighthouse | Cross Tenant Management Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/cross-tenant-management-experience.md | Title: Cross-tenant management experiences description: Azure Lighthouse enables and enhances cross-tenant experiences in many Azure services. Previously updated : 12/02/2022 Last updated : 03/01/2023 Most Azure tasks and services can be used with delegated resources across manage [Azure Cost Management + Billing](../../cost-management-billing/index.yml): -- From the managing tenant, CSP partners can view, manage, and analyze pre-tax consumption costs (not inclusive of purchases) for customers who are under the Azure plan. The cost will be based on retail rates and the Azure role-based access control (Azure RBAC) access that the partner has for the customer's subscription. Currently, you can view consumption costs at retail rates for each individual customer subscription based on Azure RBAC access.+- From the managing tenant, CSP partners can view, manage, and analyze pre-tax consumption costs (not inclusive of purchases) for customers who are under the Azure plan. The cost is based on retail rates and the Azure role-based access control (Azure RBAC) access that the partner has for the customer's subscription. Currently, you can view consumption costs at retail rates for each individual customer subscription based on Azure RBAC access. [Azure Key Vault](../../key-vault/general/index.yml): Most Azure tasks and services can be used with delegated resources across manage [Microsoft Defender for Cloud](../../security-center/index.yml): - Cross-tenant visibility- - Monitor compliance to security policies and ensure security coverage across all tenants' resources + - Monitor compliance with security policies and ensure security coverage across all tenants' resources - Continuous regulatory compliance monitoring across multiple tenants in a single view - Monitor, triage, and prioritize actionable security recommendations with secure score calculation - Cross-tenant security posture management Support requests: ## Current limitations -With all scenarios, please be aware of the following current limitations: +With all scenarios, be aware of the following current limitations: -- Requests handled by Azure Resource Manager can be performed using Azure Lighthouse. The operation URIs for these requests start with `https://management.azure.com`. However, requests that are handled by an instance of a resource type (such as Key Vault secrets access or storage data access) aren't supported with Azure Lighthouse. The operation URIs for these requests typically start with an address that is unique to your instance, such as `https://myaccount.blob.core.windows.net` or `https://mykeyvault.vault.azure.net/`. The latter also are typically data operations rather than management operations.+- Requests handled by Azure Resource Manager can be performed using Azure Lighthouse. The operation URIs for these requests start with `https://management.azure.com`. However, requests that are handled by an instance of a resource type (such as Key Vault secrets access or storage data access) aren't supported with Azure Lighthouse. The operation URIs for these requests typically start with an address that is unique to your instance, such as `https://myaccount.blob.core.windows.net` or `https://mykeyvault.vault.azure.net/`. The latter are also typically data operations rather than management operations. 
- Role assignments must use [Azure built-in roles](../../role-based-access-control/built-in-roles.md). All built-in roles are currently supported with Azure Lighthouse, except for Owner or any built-in roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission. The User Access Administrator role is supported only for limited use in [assigning roles to managed identities](../how-to/deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). Custom roles and [classic subscription administrator roles](../../role-based-access-control/classic-administrators.md) are not supported. For more information, see [Role support for Azure Lighthouse](tenants-users-roles.md#role-support-for-azure-lighthouse).-- Role assignments from Azure Lighthouse are not shown under Access Control (IAM) or with CLI tools such as `az role assignment list`. They are only visible in Azure Lighthouse under the Delegations section.+- For users in the managed tenant, role assignments made through Azure Lighthouse aren't shown under Access Control (IAM) or with CLI tools such as `az role assignment list`. These assignments are only visible in the Azure portal in the **Delegations** section of Azure Lighthouse, or through the Azure Lighthouse API. - While you can onboard subscriptions that use Azure Databricks, users in the managing tenant can't launch Azure Databricks workspaces on a delegated subscription.-- While you can onboard subscriptions and resource groups that have resource locks, those locks will not prevent actions from being performed by users in the managing tenant. [Deny assignments](../../role-based-access-control/deny-assignments.md) that protect system-managed resources (system-assigned deny assignments), such as those created by Azure managed applications or Azure Blueprints, do prevent users in the managing tenant from acting on those resources. However, users in the customer tenant can't create their own deny assignments.+- While you can onboard subscriptions and resource groups that have resource locks, those locks won't prevent actions from being performed by users in the managing tenant. [Deny assignments](../../role-based-access-control/deny-assignments.md) that protect system-managed resources (system-assigned deny assignments), such as those created by Azure managed applications or Azure Blueprints, do prevent users in the managing tenant from acting on those resources. However, users in the customer tenant can't create their own deny assignments. - Delegation of subscriptions across a [national cloud](../../active-directory/develop/authentication-national-cloud.md) and the Azure public cloud, or across two separate national clouds, is not supported. ## Next steps |
lighthouse | Remove Delegation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/remove-delegation.md | Title: Remove access to a delegation -description: Learn how to remove access to resources that had been delegated to a service provider for Azure Lighthouse. Previously updated : 06/22/2022+description: Learn how to remove access to resources that were delegated to a service provider for Azure Lighthouse. Last updated : 03/02/2023 Removing a delegation can be done by a user in either the customer tenant or the > [!TIP] > Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes. +> [!IMPORTANT] +> When a customer subscription has multiple delegations from the same service provider, removing one delegation could cause users to lose access granted via the other delegations. This only occurs when the same `principalId` and `roleDefinitionId` combination is included in multiple delegations and then one of the delegations is removed. To fix this, repeat the [onboarding process](onboard-customer.md) for the delegations that you aren't removing. + ## Customers Users in the customer's tenant who have a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), can remove service provider access to that subscription (or to resource groups in that subscription). To do so, the user can go to the [Service providers page](view-manage-service-providers.md#remove-service-provider-offers) of the Azure portal, find the offer on the **Service provider offers** screen, and select the trash can icon in the row for that offer. After confirming the deletion, no users in the service provider's tenant will be ## Service providers -Users in a managing tenant can remove access to delegated resources if they were granted the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) for the customer's resources. If this role was not assigned to any service provider users, the delegation can only be removed by a user in the customer's tenant. +Users in a managing tenant can remove access to delegated resources if they were granted the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) for the customer's resources. If this role isn't assigned to any service provider users, the delegation can only be removed by a user in the customer's tenant. -The example below shows an assignment granting the **Managed Services Registration Assignment Delete Role** that can be included in a parameter file during the [onboarding process](onboard-customer.md): +This example shows an assignment granting the **Managed Services Registration Assignment Delete Role** that can be included in a parameter file during the [onboarding process](onboard-customer.md): ```json "authorizations": [ |
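The `authorizations` snippet above is cut off by the excerpt. For illustration only, a single entry granting the **Managed Services Registration Assignment Delete Role** might look like the following; the `principalId` and display name are placeholders, and `91c1777a-f3dc-4fae-b103-61d183457e46` is the commonly documented ID for that built-in role, which you should confirm before onboarding:

```json
"authorizations": [
    {
        // Object ID of a user or group in the managing (service provider) tenant.
        "principalId": "00000000-0000-0000-0000-000000000000",
        "principalIdDisplayName": "Delegation administrators",
        // Managed Services Registration Assignment Delete Role (verify this ID).
        "roleDefinitionId": "91c1777a-f3dc-4fae-b103-61d183457e46"
    }
]
```

Including at least one such authorization in the onboarding parameters ensures that someone in the managing tenant can later remove the delegation without involving the customer.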