Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Configure Automatic User Provisioning Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/configure-automatic-user-provisioning-portal.md | This article describes the general steps for managing automatic user account pro ## Finding your apps in the portal + Use the Azure portal to view and manage all applications that are configured for single sign-on in a directory. Enterprise apps are apps that are deployed and used within your organization. Follow these steps to view and manage your enterprise applications: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Customize Application Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md | You can customize the default attribute-mappings according to your business need ## Editing user attribute-mappings + Follow these steps to access the **Mappings** feature of user provisioning: 1. Sign in to the [Azure portal](https://portal.azure.com). When you're editing the list of supported attributes, the following properties a - **Referenced Object Attribute** - If it's a Reference type attribute, then this menu lets you select the table and attribute in the target application that contains the value associated with the attribute. For example, if you have an attribute named "Department" whose stored value references an object in a separate "Departments" table, you would select "Departments.Name". The reference tables and the primary ID fields supported for a given application are preconfigured and can't be edited using the Azure portal. However, you can edit them using the [Microsoft Graph API](/graph/api/resources/synchronization-configure-with-custom-target-attributes). #### Provisioning a custom extension attribute to a SCIM compliant application+ The SCIM RFC defines a core user and group schema, while also allowing for extensions to the schema to meet your application's needs. To add a custom attribute to a SCIM application: 1. Sign in to the [Azure portal](https://portal.azure.com), select **Enterprise Applications**, select your application, and then select **Provisioning**. 2. Under **Mappings**, select the object (user or group) for which you'd like to add a custom attribute. |
active-directory | Define Conditional Rules For Provisioning User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md | Scoping filters are configured as part of the attribute mappings for each Azure ### Create a scoping filter + 1. Sign in to the [Azure portal](https://portal.azure.com). ::: zone pivot="app-provisioning" |
active-directory | Export Import Provisioning Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/export-import-provisioning-configuration.md | In this article, you learn how to: ### Export your provisioning configuration + To export your configuration: -1. In the [Azure portal](https://portal.azure.com/), on the left navigation panel, select **Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. In the left navigation panel, select **Azure Active Directory**. 1. In the **Azure Active Directory** pane, select **Enterprise applications** and choose your application. 1. In the left navigation pane, select **provisioning**. From the provisioning configuration page, click on **attribute mappings**, then **show advanced options**, and finally **review your schema**. The schema editor opens. 1. Click on download in the command bar at the top of the page to download your schema. You can use the Microsoft Graph API and the Microsoft Graph Explorer to export y ### Step 1: Retrieve your Provisioning App Service Principal ID (Object ID) -1. Launch the [Azure portal](https://portal.azure.com), and navigate to the Properties section of your provisioning application. For example, if you want to export your *Workday to AD User Provisioning application* mapping navigate to the Properties section of that app. +1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to the Properties section of your provisioning application. For example, if you want to export your *Workday to AD User Provisioning application* mapping navigate to the Properties section of that app. 1. In the Properties section of your provisioning app, copy the GUID value associated with the *Object ID* field. This value is also called the **ServicePrincipalId** of your App and it's used in Microsoft Graph Explorer operations. ![Workday App Service Principal ID](./media/export-import-provisioning-configuration/wd_export_01.png) |
active-directory | On Premises Powershell Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-powershell-connector.md | If you have already downloaded the provisioning agent and configured it for anot ## Configure the On-premises ECMA app + 1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator. 2. Go to **Enterprise applications** and select **New application**. 3. Search for the **On-premises ECMA app** application, give the app a name, and select **Create** to add it to your tenant. Follow these steps to confirm that the connector host has started and has identi ## Test the connection from Azure AD to the connector host+ 1. Return to the web browser window where you were configuring the application provisioning in the portal. >[!NOTE] >If the window had timed out, then you need to re-select the agent. Follow these steps to confirm that the connector host has started and has identi 5. After the connection test is successful and indicates that the supplied credentials are authorized to enable provisioning, select **Save**. ## Configure the application connection in the Azure portal+ Return to the web browser window where you were configuring the application provisioning. >[!NOTE] |
active-directory | Provision On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md | Use on-demand provisioning to provision a user or group in seconds. Among other ## How to use on-demand provisioning + 1. Sign in to the [Azure portal](https://portal.azure.com). ::: zone pivot="app-provisioning" |
active-directory | Skip Out Of Scope Deletions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md | Because this configuration is widely used with the *Workday to Active Directory ## Step 1: Retrieve your Provisioning App Service Principal ID (Object ID) -1. Launch the [Azure portal](https://portal.azure.com), and navigate to the Properties section of your provisioning application. For example, if you want to export your *Workday to AD User Provisioning application* mapping navigate to the Properties section of that app. ++1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to the Properties section of your provisioning application. For example, if you want to export your *Workday to AD User Provisioning application* mapping navigate to the Properties section of that app. 1. In the Properties section of your provisioning app, copy the GUID value associated with the *Object ID* field. This value is also called the **ServicePrincipalId** of your app and it's used in Graph Explorer operations. ![Screenshot of Workday App Service Principal ID.](./media/skip-out-of-scope-deletions/wd_export_01.png) |
active-directory | Use Scim To Provision Users And Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md | Check with your application provider, or your application provider's documentati ### Getting started + Applications that support the SCIM profile described in this article can be connected to Azure AD using the "non-gallery application" feature in the Azure AD application gallery. Once connected, Azure AD runs a synchronization process. The process runs every 40 minutes. The process queries the application's SCIM endpoint for assigned users and groups, and creates or modifies them according to the assignment details. **To connect an application that supports SCIM:** |
active-directory | App Proxy Protect Ndes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/app-proxy-protect-ndes.md | Azure AD Application Proxy is built on Azure. It gives you a massive amount of n ## Install and register the connector on the NDES server -1. Sign in to the [Azure portal](https://portal.azure.com/) as an application administrator of the directory that uses Application Proxy. For example, if the tenant domain is contoso.com, the admin should be admin@contoso.com or any other admin alias on that domain. ++1. Sign in to the [Azure portal](https://portal.azure.com) as an application administrator of the directory that uses Application Proxy. For example, if the tenant domain is contoso.com, the admin should be admin@contoso.com or any other admin alias on that domain. 1. Select your username in the upper-right corner. Verify you're signed in to a directory that uses Application Proxy. If you need to change directories, select **Switch directory** and choose a directory that uses Application Proxy. 1. In left navigation panel, select **Azure Active Directory**. 1. Under **Manage**, select **Application proxy**. |
active-directory | Application Proxy Add On Premises Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md | Public DNS records for Azure AD Application Proxy endpoints are chained CNAME re ## Install and register a connector -To use Application Proxy, install a connector on each Windows server you're using with the Application Proxy service. The connector is an agent that manages the outbound connection from the on-premises application servers to Application Proxy in Azure AD. You can install a connector on servers that also have other authentication agents installed such as Azure AD Connect. +To use Application Proxy, install a connector on each Windows server you're using with the Application Proxy service. The connector is an agent that manages the outbound connection from the on-premises application servers to Application Proxy in Azure AD. You can install a connector on servers that also have other authentication agents installed such as Azure AD Connect. To install the connector: -1. Sign in to the [Azure portal](https://portal.azure.com/) as an application administrator of the directory that uses Application Proxy. For example, if the tenant domain is `contoso.com`, the admin should be `admin@contoso.com` or any other admin alias on that domain. +1. Sign in to the [Azure portal](https://portal.azure.com) as an application administrator of the directory that uses Application Proxy. For example, if the tenant domain is `contoso.com`, the admin should be `admin@contoso.com` or any other admin alias on that domain. 1. Select your username in the upper-right corner. Verify you're signed in to a directory that uses Application Proxy. If you need to change directories, select **Switch directory** and choose a directory that uses Application Proxy. 1. In left navigation panel, select **Azure Active Directory**. 1. Under **Manage**, select **Application proxy**. To confirm the connector installed and registered correctly: Now that you've prepared your environment and installed a connector, you're ready to add on-premises applications to Azure AD. -1. Sign in as an administrator in the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator. 2. In the left navigation panel, select **Azure Active Directory**. 3. Select **Enterprise applications**, and then select **New application**. 4. Select **Add an on-premises application** button which appears about halfway down the page in the **On-premises applications** section. Alternatively, you can select **Create your own application** at the top of the page and then select **Configure Application Proxy for secure remote access to an on-premises application**. |
active-directory | Application Proxy Configure Cookie Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md | Additionally, if your back-end application has cookies that need to be available ## Set the cookie settings - Azure portal++ To set the cookie settings using the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Application Proxy Configure Custom Home Page | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md | You can set the home page URL either through the Azure portal or by using PowerS ## Change the home page in the Azure portal + To change the home page URL of your app through the Azure portal, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) as an administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator. 1. Select **Azure Active Directory**, and then **App registrations**. The list of registered apps appears. 1. Choose your app from the list. A page showing the details of the registered app appears. 1. Under **Manage**, select **Branding**. Create the home page URL, and update your app with that value. Continue using th ## Next steps - [Enable remote access to SharePoint with Azure AD Application Proxy](./application-proxy-integrate-with-sharepoint-server.md)-- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md)+- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md) |
active-directory | Application Proxy Configure Hard Coded Link Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-hard-coded-link-translation.md | If you need to support one of these two scenarios, use the same internal and ext ## Enable link translation + Getting started with link translation is as easy as clicking a button: 1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator. Now, when your users access this application, the proxy will automatically scan ## Next steps [Use custom domains with Azure AD Application Proxy](application-proxy-configure-custom-domain.md) to have the same internal and external URL -[Configure alternate access mappings for SharePoint 2013](/SharePoint/administration/configure-alternate-access-mappings) +[Configure alternate access mappings for SharePoint 2013](/SharePoint/administration/configure-alternate-access-mappings) |
active-directory | Application Proxy Configure Native Client Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-native-client-application.md | Publish your proxy application as you would any other application and assign use ## Step 2: Register your native application + You now need to register your application in Azure AD, as follows: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Application Proxy Configure Single Sign On Password Vaulting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md | You should already have published and tested your app with Application Proxy. If ## Set up password vaulting for your application + 1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator. 1. Select **Azure Active Directory** > **Enterprise applications** > **All applications**. 1. From the list, select the app that you want to set up with SSO. |
active-directory | Application Proxy Integrate With Tableau | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-tableau.md | Application Proxy supports the OAuth 2.0 Grant Flow, which is required for Table ## Publish your applications in Azure + To publish Tableau, you need to publish an application in the Azure portal. For: |
active-directory | Application Proxy Ping Access Publishing Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md | This article is for people to publish an application with this scenario for the ### Install an Application Proxy connector + If you've enabled Application Proxy and installed a connector already, you can skip this section and go to [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy). The Application Proxy connector is a Windows Server service that directs the traffic from your remote employees to your published applications. For more detailed installation instructions, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md). You'll first have to publish your application. This action involves: To publish your own on-premises application: -1. If you didn't in the last section, sign in to the [Azure portal](https://portal.azure.com) as an Application Administrator. +1. If you didn't in the previous section, sign in to the [Azure portal](https://portal.azure.com) as an Application Administrator. 1. Browse to **Enterprise applications** > **New application** > **Add an on-premises application**. The **Add your own on-premises application** page appears. ![Add your own on-premises application](./media/application-proxy-configure-single-sign-on-with-ping-access/add-your-own-on-premises-application.png) |
active-directory | Application Proxy Qlik | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-qlik.md | The remainder of this scenario assumes you've done the following: To publish QlikSense, you will need to publish two applications in Azure. ### Application #1: ++ Follow these steps to publish your app. For a more detailed walkthrough of steps 1-8, see [Publish applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md). |
active-directory | Application Proxy Secure Api Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-secure-api-access.md | To publish the SecretAPI web API through Application Proxy: 1. Build and publish the sample SecretAPI project as an ASP.NET web app on your local computer or intranet. Make sure you can access the web app locally. -1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**. Then select **Enterprise applications**. +1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Select **Azure Active Directory**, then select **Enterprise applications**. 1. At the top of the **Enterprise applications - All applications** page, select **New application**. |
active-directory | Concept Certificate Based Authentication Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-migration.md | This article explains how to migrate from running federated servers such as Acti ## Enable Staged Rollout for certificate-based authentication on your tenant + To configure Staged Rollout, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization. +1. Sign in to the [Azure portal](https://portal.azure.com) in the User Administrator role for the organization. 1. Search for and select **Azure Active Directory**. 1. From the left menu, select **Azure AD Connect**. 1. On the Azure AD Connect page, under the Staged Rollout of cloud authentication, click **Enable Staged Rollout for managed user sign-in**. |
active-directory | Concept Certificate Based Authentication Technical Deep Dive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md | If a CBA-enabled user cannot use an MFA cert (such as on a mobile device without smart c ## MFA with Single-factor certificate-based authentication + Azure AD CBA can be used as a second factor to meet MFA requirements with single-factor certificates. Some of the supported combinations are |
active-directory | How To Certificate Based Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md | You can configure CAs by using the Azure portal or PowerShell. ### Configure certification authorities using the Azure portal + To enable the certificate-based authentication and configure user bindings in the Azure portal, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. For more information, see [Understanding the certificate revocation process](./c To enable the certificate-based authentication in the Azure portal, complete the following steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) as an Authentication Policy Administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as an Authentication Policy Administrator. 1. Select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side. 1. Under **Manage**, select **Authentication methods** > **Certificate-based Authentication**. 1. Under **Enable and Target**, click **Enable**. |
active-directory | Howto Authentication Methods Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-methods-activity.md | The following roles have the required permissions: ## How it works + To access authentication method usage and insights: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Howto Authentication Passwordless Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md | Microsoft offers the following [three passwordless authentication options](conce ## Use the passwordless methods wizard -The [Azure portal](https://portal.azure.com/) now has a passwordless methods wizard that will help you to select the appropriate method for each of your audiences. If you haven't yet determined the appropriate methods, see [https://aka.ms/passwordlesswizard](https://aka.ms/passwordlesswizard), then return to this article to continue planning for your selected methods. **You need administrator rights to access this wizard.** +The [Azure portal](https://portal.azure.com) now has a passwordless methods wizard that will help you to select the appropriate method for each of your audiences. If you haven't yet determined the appropriate methods, see [https://aka.ms/passwordlesswizard](https://aka.ms/passwordlesswizard), then return to this article to continue planning for your selected methods. **You need administrator rights to access this wizard.** ## Passwordless authentication scenarios Here are the sample test cases for passwordless authentication with security key ## Manage passwordless authentication -To manage your user's passwordless authentication methods in the [Azure portal](https://portal.azure.com/), select your user account, and then select Authentication methods. +To manage your user's passwordless authentication methods in the [Azure portal](https://portal.azure.com), select your user account, and then select Authentication methods. ### Microsoft Graph APIs For more information on what authentication methods can be managed in Microsoft ### Rollback + Though passwordless authentication is a lightweight feature with minimal impact on end users, it may be necessary to roll back. Rolling back requires the administrator to sign in to the Azure portal, select the desired strong authentication methods, and change the enable option to No. This process turns off the passwordless functionality for all users. |
active-directory | Howto Authentication Passwordless Phone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md | To use passwordless authentication in Azure AD, first enable the combined regist ## Enable passwordless phone sign-in authentication methods + Azure AD lets you choose which authentication methods can be used during the sign-in process. Users then register for the methods they'd like to use. The **Microsoft Authenticator** authentication method policy manages both the traditional push MFA method and the passwordless authentication method. > [!NOTE] |
active-directory | Howto Authentication Passwordless Security Key On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md | If you encounter issues or want to share feedback about this passwordless securi ## Passwordless security key sign-in FAQ + Here are some answers to commonly asked questions about passwordless sign-in: ### Does passwordless security key sign-in work in my on-premises environment? For information about compliant security keys, see [FIDO2 security keys](concept ### What can I do if I lose my security key? -To delete an enrolled security key, sign in to the Azure portal, and then go to the **Security info** page. +To delete an enrolled security key, sign in to the [Azure portal](https://portal.azure.com), and then go to the **Security info** page. ### What can I do if I'm unable to use the FIDO security key immediately after I create a hybrid Azure AD-joined machine? |
active-directory | Howto Authentication Passwordless Security Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md | Registration features for passwordless authentication methods rely on the combin ### Enable FIDO2 security key method + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication method policy**. 1. Under the method **FIDO2 Security Key**, click **All users**, or click **Add groups** to select specific groups. *Only security groups are supported*. |
active-directory | Howto Authentication Use Email Signin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md | During preview, you currently need *Global Administrator* permissions to enable ### Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com) as a *Global Administrator*. 1. Search for and select **Azure Active Directory**. 1. From the navigation menu on the left-hand side of the Azure Active Directory window, select **Azure AD Connect > Email as alternate login ID**. |
active-directory | Howto Mfa Adfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-adfs.md | The first thing we need to do is to configure the AD FS claims. Create two claim ### Configure Azure AD Multi-Factor Authentication Trusted IPs with Federated Users + Now that the claims are in place, we can configure trusted IPs. 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Howto Mfa App Passwords | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md | In this scenario, you use the following credentials: ## Allow users to create app passwords + By default, users can't create app passwords. The app passwords feature must be enabled before users can use them. To give users the ability to create app passwords, **admin needs** to complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Howto Mfa Mfasettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md | The following Azure AD Multi-Factor Authentication settings are available: ## Account lockout (MFA Server only) + >[!NOTE] >Account lockout only affects users who sign in by using MFA Server on-premises. After you enable the **remember multi-factor authentication** feature, users can ## Next steps -To learn more, see [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md) +To learn more, see [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md) |
active-directory | Howto Mfa Nps Extension Rdg | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md | This section provides instructions for configuring RDS infrastructure to use Azu ### Acquire Azure Active Directory tenant ID + As part of the configuration of the NPS extension, you need to supply admin credentials and the Azure AD ID for your Azure AD tenant. To get the tenant ID, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of the Azure tenant. |
active-directory | Howto Mfa Nps Extension Vpn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md | If the value is set to *TRUE* or is blank, all authentication requests are subje ### Obtain the Azure Active Directory tenant ID + As part of the configuration of the NPS extension, you must supply administrator credentials and the ID of your Azure AD tenant. To get the tenant ID, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of the Azure tenant. |
active-directory | Howto Mfa Nps Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md | The Microsoft Azure Active Directory Module for Windows PowerShell is also insta ### Azure Active Directory + Everyone using the NPS extension must be synced to Azure AD using Azure AD Connect, and must be registered for MFA. When you install the extension, you need the *Tenant ID* and admin credentials for your Azure AD tenant. To get the tenant ID, complete the following steps: |
active-directory | Howto Mfa Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-reporting.md | This article shows you how to view the Azure AD sign-ins report in the Azure por ## View the Azure AD sign-ins report + The sign-ins report provides you with information about the usage of managed applications and user sign-in activities, which includes information about multi-factor authentication (MFA) usage. The MFA data gives you insights into how MFA is working in your organization. It answers questions like: - Was the sign-in challenged with MFA? |
active-directory | Howto Mfa Server Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-server-settings.md | The following MFA Server settings are available: ## One-time bypass + The one-time bypass feature allows a user to authenticate a single time without performing multi-factor authentication. The bypass is temporary and expires after a specified number of seconds. In situations where the mobile app or phone is not receiving a notification or phone call, you can allow a one-time bypass so the user can access the desired resource. To create a one-time bypass, complete the following steps: |
active-directory | Howto Mfa Userdevicesettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userdevicesettings.md | Authentication methods can also be managed using Microsoft Graph APIs. For more ## Manage user authentication options + If you're assigned the *Authentication Administrator* role, you can require users to reset their password, re-register for MFA, or revoke existing MFA sessions from their user object. To manage user settings, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com). To delete a user's app passwords, complete the following steps: This article showed you how to configure individual user settings. To configure overall Azure AD Multi-Factor Authentication service settings, see [Configure Azure AD Multi-Factor Authentication settings](howto-mfa-mfasettings.md). -If your users need help, see the [User guide for Azure AD Multi-Factor Authentication](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc). +If your users need help, see the [User guide for Azure AD Multi-Factor Authentication](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc). |
active-directory | Howto Mfa Userstates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md | All users start out *Disabled*. When you enroll users in per-user Azure AD Multi ## View the status for a user + To view and manage user states, complete the following steps to access the Azure portal page: 1. Sign in to the [Azure portal](https://portal.azure.com) as a Global administrator. |
active-directory | Howto Mfaserver Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy.md | If you aren't using the Event Confirmation feature, and your users aren't using ## Download the MFA Server + Follow these steps to download the Azure AD Multi-Factor Authentication Server from the Azure portal: > [!IMPORTANT] |
active-directory | Howto Password Ban Bad On Premises Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-operations.md | This article shows you how to enable Azure AD Password Protection for your on-pr ## Enable on-premises password protection + 1. Sign in to the [Azure portal](https://portal.azure.com) and browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Password protection**. 1. Set the option for **Enable password protection on Windows Server Active Directory** to *Yes*. |
active-directory | Howto Sspr Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-reporting.md | After deployment, many organizations want to know how or if self-service passwor ![Reporting on SSPR using the audit logs in Azure AD][Reporting] -The following questions can be answered by the reports that exist in the [Azure portal](https://portal.azure.com/): +The following questions can be answered by the reports that exist in the [Azure portal](https://portal.azure.com): > [!NOTE] > You must be [a global administrator](../roles/permissions-reference.md), and you must opt-in for this data to be gathered on behalf of your organization. To opt in, you must visit the **Reporting** tab or the audit logs at least once. Until then, data is not collected for your organization. The following questions can be answered by the reports that exist in the [Azure ## How to view password management reports in the Azure portal + In the Azure portal experience, we have improved the way that you can view password reset and password reset registration activity. Use the following steps to find the password reset and password reset registration events: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Tutorial Configure Custom Password Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-configure-custom-password-protection.md | The custom banned password list is limited to a maximum of 1000 terms. It's not ## Configure custom banned passwords + Let's enable the custom banned password list and add some entries. You can add additional entries to the custom banned password list at any time. To enable the custom banned password list and add entries to it, complete the following steps: |
active-directory | Tutorial Enable Azure Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-azure-mfa.md | To complete this tutorial, you need the following resources and privileges: ## Create a Conditional Access policy + The recommended way to enable and use Azure AD Multi-Factor Authentication is with Conditional Access policies. Conditional Access lets you create and define policies that react to sign-in events and that request additional actions before a user is granted access to an application or service. :::image type="content" alt-text="Overview diagram of how Conditional Access works to secure the sign-in process" source="media/tutorial-enable-azure-mfa/conditional-access-overview.png" lightbox="media/tutorial-enable-azure-mfa/conditional-access-overview.png"::: |
active-directory | Tutorial Enable Cloud Sync Sspr Writeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md | You can enable Azure AD connect cloud sync provisioning directly in Azure portal #### Enable password writeback in Azure portal + With password writeback enabled in Azure AD Connect cloud sync, now verify and configure Azure AD self-service password reset (SSPR) for password writeback. When you enable SSPR to use password writeback, users who change or reset their password have that updated password synchronized back to the on-premises AD DS environment as well. To verify and enable password writeback in SSPR, complete the following steps: Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential $(Get-C ``` ## Clean up resources+ If you no longer want to use the SSPR writeback functionality you have configured as part of this tutorial, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Tutorial Enable Sspr Writeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md | To enable SSPR writeback, first enable the writeback option in Azure AD Connect. ## Enable password writeback for SSPR + With password writeback enabled in Azure AD Connect, now configure Azure AD SSPR for writeback. SSPR can be configured to write back through Azure AD Connect sync agents and Azure AD Connect provisioning agents (cloud sync). When you enable SSPR to use password writeback, users who change or reset their password have that updated password synchronized back to the on-premises AD DS environment as well. To enable password writeback in SSPR, complete the following steps: |
active-directory | Tutorial Enable Sspr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr.md | To finish this tutorial, you need the following resources and privileges: ## Enable self-service password reset + Azure AD lets you enable SSPR for *None*, *Selected*, or *All* users. This granular ability lets you choose a subset of users to test the SSPR registration process and workflow. When you're comfortable with the process and the time is right to communicate the requirements with a broader set of users, you can select a group of users to enable for SSPR. Or, you can enable SSPR for everyone in the Azure AD tenant. > [!NOTE] |
active-directory | Tutorial Risk Based Sspr Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-risk-based-sspr-mfa.md | For more information about Azure AD Identity Protection, see [What is Azure AD I ## Enable MFA registration policy + Azure AD Identity Protection includes a default policy that can help get users registered for Azure AD Multi-Factor Authentication. If you use additional policies to protect sign-in events, you would need users to have already registered for MFA. When you enable this policy, it doesn't require users to perform MFA at each sign-in event. The policy only checks the registration status for a user and asks them to pre-register if needed. It's recommended to enable the MFA registration policy for users that are to be enabled for additional Azure AD Identity Protection policies. To enable this policy, complete the following steps: |
active-directory | Howto Reactivate Disabled Acs Namespaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/howto-reactivate-disabled-acs-namespaces.md | Further extensions will no longer be automatically approved. If you need additio ### To request an extension + 1. Sign in to the [Azure portal](https://portal.azure.com) and create a [new support request](https://portal.azure.com/#create/Microsoft.Support). 1. Fill in the new support request form as shown in the following example. |
active-directory | Onboard Enable Controller After Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md | -This article describes how to enable or disable the controller in Microsoft Azure and Google Cloud Platform (GCP) after onboarding is complete. +With the controller, you determine what level of access to provide to Permissions Management. ++* Enable to grant read and write access to your environment(s). You can manage permissions and remediate through Permissions Management. + +* Disable to grant read-only access to your environment(s). +++This article describes how to enable the controller in Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) after onboarding is complete. +++This article also describes how to disable the controller in Microsoft Azure and Google Cloud Platform (GCP). Once you enable the controller in AWS, you can't disable it. -This article also describes how to enable the controller in Amazon Web Services (AWS) if you disabled it during onboarding. You can only enable the controller in AWS at this time; you can't disable it. ## Enable the controller in AWS > [!NOTE]-> You can only enable the controller in AWS; you can't disable it at this time. +> You can enable the controller in AWS if you disabled it during onboarding. Once you enable the controller, you can't disable it at this time. 1. Sign in to the AWS console of the member account in a separate browser window. 1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab. This article also describes how to enable the controller in Amazon Web Services You can enable or disable the controller in Azure at the Subscription level of your Management Group(s). -1. From the Azure **Home** page, select **Management groups**. +1. 
From the Azure [**Home**](https://portal.azure.com) page, select **Management groups**. 1. Locate the group for which you want to enable or disable the controller, then select the arrow to expand the group menu and view your subscriptions. Alternatively, you can select the **Total Subscriptions** number listed for your group. 1. Select the subscription for which you want to enable or disable the controller, then click **Access control (IAM)** in the navigation menu. 1. In the **Check access** section, in the **Find** box, enter **Cloud Infrastructure Entitlement Management**. |
active-directory | Product Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-reports.md | Permissions Management offers the following reports for management associated wi - **Permissions Analytics Report** - **Summary of report**: Provides information about the violation of key security best practices. - **Applies to**: AWS, Azure, and GCP- - **Report output type**: CSV, PDF - - **Ability to collate report**: Yes + - **Report output type**: XLSX, PDF + - **Ability to collate report**: Yes (XLSX only) - **Type of report**: **Detailed** - **Use cases**: - This report lists the different key findings in the selected auth systems. The key findings include super identities, inactive identities, over provisioned active identities, storage bucket hygiene, and access key age (for AWS only). The report helps administrators to visualize the findings across the organization. |
active-directory | Ui Triggers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-triggers.md | |
active-directory | Policy Migration Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/policy-migration-mfa.md | The migration process consists of the following steps: ## Open a classic policy -1. In the [Azure portal](https://portal.azure.com), navigate to **Azure Active Directory** > **Security** > **Conditional Access**. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Navigate to **Azure Active Directory** > **Security** > **Conditional Access**. + 1. Select **Classic policies**. ![Classic policies view](./media/policy-migration-mfa/12.png) |
active-directory | Require Tou | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/require-tou.md | To complete the scenario in this quickstart, you need: ## Sign-in without terms of use + The goal of this step is to get an impression of the sign-in experience without a Conditional Access policy. -1. Sign in to the [Azure portal](https://portal.azure.com/) as your test user. +1. Sign in to the [Azure portal](https://portal.azure.com) as your test user. 1. Sign out. ## Create your terms of use |
active-directory | Custom Extension Configure Saml App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-configure-saml-app.md | The following steps are for registering a demo [XRayClaims](https://adfshelp.mic ### Add a new SAML application + Add a new, non-gallery SAML application in your tenant: -1. In the [Azure portal](https://portal.azure.com), go to **Azure Active Directory** and then **Enterprise applications**. Select **New application** and then **Create your own application**. +1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Go to **Azure Active Directory** and then **Enterprise applications**. Select **New application** and then **Create your own application**. 1. Add a name for the app. For example, **AzureADClaimsXRay**. Select the **Integrate any other application you don't find in the gallery (Non-gallery)** option and select **Create**. Before testing the user sign-in, you must assign a user or group of users to the 1. In the **Users and groups** page, select **Add user/group**. -1. Search for and select the user to sign into the app. Select the **Assign** button. +1. Search for and select the user to sign in to the app. Select the **Assign** button. ### Test the application |
active-directory | Custom Extension Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md | This how-to guide demonstrates the token issuance start event with a REST API ru ## Step 1. Create an Azure Function app + In this step, you create an HTTP trigger function API in the Azure portal. The function API is the source of extra claims for your token. Follow these steps to create an Azure Function: -1. Sign in to the [Azure portal](https://portal.azure.com/) with your administrator account. +1. Sign in to the [Azure portal](https://portal.azure.com) with your administrator account. 1. From the Azure portal menu or the **Home** page, select **Create a resource**. 1. In the **New** page, select **Compute** > **Function App**. 1. On the **Basics** page, use the function app settings as specified in the following table: In this step, you configure a custom extension, which will be used by Azure AD t # [Azure portal](#tab/azure-portal) -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Under **Azure services**, select **Azure Active Directory**. 1. Ensure your user account has the Global Administrator or Application Administrator and Authentication Extensibility Administrator role. Otherwise, learn how to [assign a role](../roles/manage-roles-portal.md). 1. From the menu, select **Enterprise applications**. To protect your Azure function, follow these steps to integrate Azure AD authent > [!NOTE] > If the Azure function app is hosted in a different Azure tenant than the tenant in which your custom extension is registered, skip to [using OpenID Connect identity provider](#51-using-openid-connect-identity-provider) step. -1. In the [Azure portal](https://portal.azure.com), navigate and select the function app you previously published. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. 
Navigate to and select the function app you previously published. 1. Select **Authentication** in the menu on the left. 1. Select **Add Identity provider**. 1. Select **Microsoft** as the identity provider. To protect your Azure function, follow these steps to integrate Azure AD authent If you configured the [Microsoft identity provider](#step-5-protect-your-azure-function), skip this step. Otherwise, if the Azure Function is hosted under a different tenant than the tenant in which your custom extension is registered, follow these steps to protect your function: -1. In the [Azure portal](https://portal.azure.com), navigate and select the function app you previously published. +1. Sign in to the [Azure portal](https://portal.azure.com), then navigate to and select the function app you previously published. 1. Select **Authentication** in the menu on the left. 1. Select **Add Identity provider**. 1. Select **OpenID Connect** as the identity provider. |
active-directory | Custom Extension Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-troubleshoot.md | In order to troubleshoot issues with your custom claims provider REST API endpoi ## Azure AD sign-in logs + You can also use [Azure AD sign-in logs](../reports-monitoring/concept-sign-ins.md) in addition to your REST API logs, and hosting environment diagnostics solutions. Using Azure AD sign-in logs, you can find errors, which may affect the users' sign-ins. The Azure AD sign-in logs provide information about the HTTP status, error code, execution duration, and number of retries that occurred when the API was called by Azure AD. Azure AD sign-in logs also integrate with [Azure Monitor](../../azure-monitor/index.yml). You can set up alerts and monitoring, visualize the data, and integrate with security information and event management (SIEM) tools. For example, you can set up notifications if the number of errors exceeds a certain threshold that you choose. Use the following table to diagnose an error code. Your REST API is protected by an Azure AD access token. You can test your API by obtaining an access token with the [application registration](custom-extension-get-started.md#22-grant-admin-consent) associated with the custom extensions. After you acquire an access token, pass it in the HTTP `Authorization` header. To obtain an access token, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure administrator account. +1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure administrator account. 1. Select **Azure Active Directory** > **App registrations**. 1. Select the *Azure Functions authentication events API* app registration [you created previously](custom-extension-get-started.md#step-2-register-a-custom-extension). 1. Copy the [application ID](custom-extension-get-started.md#22-grant-admin-consent). 
One of the most common issues is that your custom claims provider API doesn't re - Learn how to [create and register a custom claims provider](custom-extension-get-started.md) with a sample Open ID Connect application. - If you already have a custom claims provider registered, you can configure a [SAML application](custom-extension-configure-saml-app.md) to receive tokens with claims sourced from an external store.-- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article.+- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article. |
active-directory | Deploy Web App Authentication Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/deploy-web-app-authentication-pipeline.md | Save your changes and run the pipeline. ## Deploy Azure resources -Next, add a stage to the pipeline that deploys Azure resources. The pipeline uses an [inline script](/azure/devops/pipelines/scripts/powershell) to create the App Service instance. In a later step, the inline script creates an Azure AD app registration for App Service authentication. An Azure CLI bash script is used because Azure Resource Manager (and Azure pipeline tasks) can't create an app registration. +Next, add a stage to the pipeline that deploys Azure resources. The pipeline uses an [inline script](/azure/devops/pipelines/scripts/powershell) to create the App Service instance. In a later step, the inline script creates an Azure AD app registration for App Service authentication. An Azure CLI bash script is used because Azure Resource Manager (and Azure Pipelines tasks) can't create an app registration. The inline script runs in the context of the pipeline. Assign the [Application.Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) role to the app so the script can create app registrations: Select the application for the web app, *pipelinetestwebapp*, and delete it. Learn more about: - [App Service built-in authentication](/azure/app-service/overview-authentication-authorization).-- [Deploy to App Service using Azure Pipelines](/azure/app-service/deploy-azure-pipelines)+- [Deploy to App Service using Azure Pipelines](/azure/app-service/deploy-azure-pipelines) |
active-directory | Enterprise App Role Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/enterprise-app-role-management.md | You can customize the role claim in the access token that is received after an a ## Locate the enterprise application + Use the following steps to locate the enterprise application: -1. In the [Azure portal](https://portal.azure.com/), in the left pane, select **Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. In the left pane, select **Azure Active Directory**. 1. Select **Enterprise applications**, and then select **All applications**. 1. Enter the name of the existing application in the search box, and then select the application from the search results. 1. After the application is selected, copy the object ID from the overview pane. |
active-directory | Howto Add App Roles In Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-apps.md | The number of roles you add counts toward application manifest limits enforced b ### App roles UI + To create an app role by using the Azure portal's user interface: 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. |
active-directory | Howto Add Terms Of Service Privacy Statement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-terms-of-service-privacy-statement.md | When the terms of service and privacy statement are ready, you can add links to * [Using the Microsoft Graph API](#msgraph-rest-api) ### <a name="azure-portal"></a>Using the Azure portal++ Follow these steps in the Azure portal. 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> and select the correct Azure AD tenant(not B2C). |
active-directory | Howto Call A Web Api With Curl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-call-a-web-api-with-curl.md | The Microsoft identity platform requires your application to be registered befor ### Register the web API + Follow these steps to create the web API registration: 1. Sign in to the [Azure portal](https://portal.azure.com). By running the previous cURL command, the Microsoft identity platform has provid For more information about OAuth 2.0 authorization code flow and application types, see: - [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md) -- [Application types for the Microsoft identity platform](v2-app-types.md#web-apps) +- [Application types for the Microsoft identity platform](v2-app-types.md#web-apps) |
active-directory | Howto Call A Web Api With Postman | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-call-a-web-api-with-postman.md | The Microsoft identity platform requires your application to be registered befor ### Register the web API + Follow these steps to create the web API registration: -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations > New registration**. Follow these steps to create the web app registration: ::: zone pivot="api" -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. If access to multiple tenants is available, use the Directories + subscriptions filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**. |
active-directory | Howto Configure App Instance Property Locks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-app-instance-property-locks.md | The following property usage scenarios are considered as sensitive: ## Configure an app instance lock + To configure an app instance lock using the Azure portal: 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. |
active-directory | Howto Configure Publisher Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md | If your app was registered *before May 21, 2019*, your app's consent prompt show ## Set a publisher domain in the Azure portal + To set a publisher domain for your app by using the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Howto Create Service Principal Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md | You must have sufficient permissions to register an application with your Azure ## Register an application with Azure AD and create a service principal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Azure Active Directory**. 1. Select **App registrations**, then select **New registration**. |
active-directory | Howto Get List Of All Auth Library Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-get-list-of-all-auth-library-apps.md | No sign-in event that occurred *before* you configure Azure AD to send the event ## Step 2: Access sign-ins workbook in Azure portal + Once you've integrated your Azure AD sign-in and audit logs with Azure Monitor as specified in the Azure Monitor integration, access the sign-ins workbook: - 1. Sign into the Azure portal. + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**. 1. In the **Usage** section, open the **Sign-ins** workbook. |
active-directory | Howto Modify Supported Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-modify-supported-accounts.md | In the following sections, you learn how to modify your app's registration in th ## Change the application registration to support different accounts + To specify a different setting for the account types supported by an existing app registration: 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. |
active-directory | Howto Remove App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-remove-app.md | In the following sections, you learn how to: ## Remove an application authored by you or your organization + Applications that you or your organization have registered are represented by both an application object and service principal object in your tenant. For more information, see [Application objects and service principal objects](./app-objects-and-service-principals.md). > [!NOTE] |
active-directory | Howto Restore App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restore-app.md | You must have one of the following roles to restore applications. ## View your deleted applications + You can see all the applications in a soft deleted state. Only applications deleted less than 30 days ago can be restored. To view your restorable applications: |
active-directory | Howto Restrict Your App To A Set Of Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md | The option to restrict an app to a specific set of users, apps or security group ## Update the app to require user assignment + To update an application to require user assignment, you must be owner of the application under Enterprise apps, or be assigned one of **Global administrator**, **Application administrator**, or **Cloud application administrator** directory roles. -1. Sign in to the [Azure portal](https://portal.azure.com/) +1. Sign in to the [Azure portal](https://portal.azure.com) 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **Enterprise Applications** then select **All applications**. |
active-directory | Jwt Claims Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/jwt-claims-customization.md | -# Customize claims issued in the JSON web token (JWT) for enterprise applications (Preview) +# Customize claims issued in the JSON web token (JWT) for enterprise applications The Microsoft identity platform supports [single sign-on (SSO)](../manage-apps/what-is-single-sign-on.md) with most preintegrated applications in the Azure Active Directory (Azure AD) application gallery and custom applications. When a user authenticates to an application through the Microsoft identity platform using the OIDC protocol, the Microsoft identity platform sends a token to the application. The application validates and uses the token to sign the user in instead of prompting for a username and password. -These JSON Web tokens (JWT) used by OIDC and OAuth applications (preview) contain pieces of information about the user known as *claims*. A claim is information that an identity provider states about a user inside the token they issue for that user. In an [OIDC response](v2-protocols-oidc.md), claims data is typically contained in the ID Token issued by the identity provider in the form of a JWT. +These JSON Web tokens (JWT) used by OIDC and OAuth applications contain pieces of information about the user known as *claims*. A claim is information that an identity provider states about a user inside the token they issue for that user. In an [OIDC response](v2-protocols-oidc.md), claims data is typically contained in the ID Token issued by the identity provider in the form of a JWT. ## View or edit claims + To view or edit the claims issued in the JWT to the application, open the application in the Azure portal. Then select the **Single sign-on** blade in the left-hand menu and open the **Attributes & Claims** section. 
:::image type="content" source="./media/jwt-claims-customization/attributes-claims.png" alt-text="Screenshot of opening the Attributes & Claims section in the Azure portal."::: An application may need claims customization for various reasons. For example, w The following steps describe how to assign a constant value: -1. In the [Azure portal](https://portal.azure.com/), on the **Attributes & Claims** section, Select **Edit** to edit the claims. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. In the **Attributes & Claims** section, select **Edit** to edit the claims. 1. Select the required claim that you want to modify. 1. Enter the constant value without quotes in the **Source attribute** as per your organization, and then select **Save**. |
active-directory | Migrate Spa Implicit To Auth Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-spa-implicit-to-auth-code.md | The following sections describe each step in additional detail. ## Switch redirect URIs to SPA platform + If you'd like to continue using your existing app registration for your applications, use the Azure portal to update the registration's redirect URIs to the SPA platform. Doing so enables the authorization code flow with PKCE and CORS support for apps that use the registration (you still need to update your application's code to MSAL.js v2.x). Follow these steps for app registrations that are currently configured with **Web** platform redirect URIs: |
active-directory | Msal Android Single Sign On | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-single-sign-on.md | If Intune Company Portal is installed and is operating as the active broker, and #### Generate a redirect URI for a broker + You must register a redirect URI that is compatible with the broker. The redirect URI for the broker should include your app's package name and the Base64-encoded representation of your app's signature. The format of the redirect URI is: `msauth://<yourpackagename>/<base64urlencodedsignature>` |
active-directory | Msal Js Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-sso.md | Azure Active Directory (Azure AD) enables SSO by setting a session cookie when a ## SSO between browser tabs for the same app -When a user has an application open in several tabs and signs in on one of them, they can be signed into the same app open on other tabs without being prompted. To do so, you'll need to set the *cacheLocation* in MSAL.js configuration object to `localStorage` as shown in the following example: +When a user has an application open in several tabs and signs in on one of them, they can be signed into the same app open on other tabs without being prompted. To do so, you need to set the *cacheLocation* in the MSAL.js configuration object to `localStorage` as shown in the following example: ```javascript const config = { When a user authenticates, a session cookie is set on the Azure AD domain in the To improve performance and ensure that the authorization server will look for the correct account session, you can pass one of the following options in the request object of the `ssoSilent` method to obtain the token silently. 
-- Session ID `sid` (which can be retrieved from `idTokenClaims` of an `account` object)-- `login_hint` (which can be retrieved from the `account` object username property or the `upn` claim in the ID token) (if your app is authenticating users with B2C, see: [Configure B2C user-flows to emit username in ID tokens](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/FAQ.md#why-is-getaccountbyusername-returning-null-even-though-im-signed-in) )-- `account` (which can be retrieved from using one the [account methods](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/login-user.md#account-apis))+- `login_hint`, which can be retrieved from the `account` object username property or the `upn` claim in the ID token. If your app is authenticating users with B2C, see: [Configure B2C user-flows to emit username in ID tokens](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/FAQ.md#why-is-getaccountbyusername-returning-null-even-though-im-signed-in) +- Session ID, `sid`, which can be retrieved from `idTokenClaims` of an `account` object. +- `account`, which can be retrieved by using one of the [account methods](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/login-user.md#account-apis) -#### Using a session ID -To use a session ID, add `sid` as an [optional claim](active-directory-optional-claims.md) to your app's ID tokens. The `sid` claim allows an application to identify a user's Azure AD session independent of their account name or username. To learn how to add optional claims like `sid`, see [Provide optional claims to your app](active-directory-optional-claims.md). Use the session ID (SID) in silent authentication requests you make with `ssoSilent` in MSAL.js. 
+ We recommend using the `login_hint` [optional ID token claim](optional-claims-reference.md#v10-and-v20-optional-claims-set) provided to `ssoSilent` as `loginHint`, as it is the most reliable account hint for silent and interactive requests. +++#### Using a login hint ++The `login_hint` optional claim provides a hint to Azure AD about the user account attempting to sign in. To bypass the account selection prompt typically shown during interactive authentication requests, provide the `loginHint` as shown: ```javascript-const request = { - scopes: ["user.read"], - sid: sid, +const silentRequest = { + scopes: ["User.Read", "Mail.Read"], + loginHint: "user@contoso.com" }; - try { - const loginResponse = await msalInstance.ssoSilent(request); +try { + const loginResponse = await msalInstance.ssoSilent(silentRequest); } catch (err) { if (err instanceof InteractionRequiredAuthError) {- const loginResponse = await msalInstance.loginPopup(request).catch(error => { + const loginResponse = await msalInstance.loginPopup(silentRequest).catch(error => { // handle error }); } else { const request = { } ``` -#### Using a login hint +In this example, `loginHint` contains the user's email or UPN, which is used as a hint during interactive token requests. The hint can be passed between applications to facilitate silent SSO, where application A can sign in a user, read the `loginHint`, and then send the claim and the current tenant context to application B. Azure AD will attempt to pre-fill the sign-in form or bypass the account selection prompt and directly proceed with the authentication process for the specified user. ++If the information in the `login_hint` claim doesn't match any existing user, they're redirected to go through the standard sign-in experience, including account selection. 
++#### Using a session ID -To bypass the account selection prompt typically shown during interactive authentication requests (or for silent requests when you haven't configured the `sid` optional claim), provide a `loginHint`. In multi-tenant applications, also include a `domainHint`. +To use a session ID, add `sid` as an [optional claim](active-directory-optional-claims.md) to your app's ID tokens. The `sid` claim allows an application to identify a user's Azure AD session independent of their account name or username. To learn how to add optional claims like `sid`, see [Provide optional claims to your app](active-directory-optional-claims.md). Use the session ID (SID) in silent authentication requests you make with `ssoSilent` in MSAL.js. ```javascript const request = { scopes: ["user.read"],- loginHint: "preferred_username", - domainHint: "preferred_tenant_id" + sid: sid, }; -try { + try { const loginResponse = await msalInstance.ssoSilent(request); } catch (err) { if (err instanceof InteractionRequiredAuthError) { try { } ``` -Get the values for `loginHint` and `domainHint` from the user's **ID token**: --- `loginHint`: Use the ID token's `preferred_username` claim value.--- `domainHint`: Use the ID token's `tid` claim value. Required in requests made by multi-tenant applications that use the */common* authority. Optional for other applications.--For more information about login hint and domain hint, see [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). - #### Using an account object If you know the user account information, you can also retrieve the user account by using the `getAccountByUsername()` or `getAccountByHomeId()` methods: |
active-directory | Msal Net Use Brokers With Xamarin Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-use-brokers-with-xamarin-apps.md | Add `msauthv2` to the `LSApplicationQueriesSchemes` section of the *Info.plist* ### Step 7: Add a redirect URI to your app registration + When you use the broker, your redirect URI has an extra requirement. The redirect URI _must_ have the following format: ```csharp As an alternative, you can configure MSAL to fall back to the embedded browser, Here are a few tips on avoiding issues when you implement brokered authentication on Android: -- **Redirect URI** - Add a redirect URI to your application registration in the [Azure portal](https://portal.azure.com/). A missing or incorrect redirect URI is a common issue encountered by developers.+- **Redirect URI** - Add a redirect URI to your application registration in the [Azure portal](https://portal.azure.com). A missing or incorrect redirect URI is a common issue encountered by developers. - **Broker version** - Install the minimum required version of the broker apps. Either of these two apps can be used for brokered authentication on Android. - [Intune Company Portal](https://play.google.com/store/apps/details?id=com.microsoft.windowsintune.companyportal) (version 5.0.4689.0 or greater) - [Microsoft Authenticator](https://play.google.com/store/apps/details?id=com.azure.authenticator) (version 6.2001.0140 or greater). |
active-directory | Optional Claims | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/optional-claims.md | Within the JWT, these claims are emitted with the following name format: `extn. ## Configure groups optional claims + This section covers the configuration options under optional claims for changing the group attributes used in group claims from the default group objectID to attributes synced from on-premises Windows Active Directory. You can configure groups optional claims for your application through the Azure portal or application manifest. Group optional claims are only emitted in the JWT for user principals. Service principals aren't included in group optional claims emitted in the JWT. > [!IMPORTANT] |
active-directory | Quickstart Configure App Access Web Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md | By specifying a web API's scopes in your client app's registration, the client a ## Add permissions to access your web API + In the first scenario, you grant a client app access to your own web API, both of which you should have registered as part of the prerequisites. If you don't yet have both a client app and a web API registered, complete the steps in the two [Prerequisites](#prerequisites) articles. This diagram shows how the two app registrations relate to one another. In this section, you add permissions to the client app's registration. |
active-directory | Quickstart Configure App Expose Web Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md | With the web API registered, you can add scopes to the API's code so it can prov ## Add a scope + The code in a client application requests permission to perform operations defined by your web API by passing an access token along with its requests to the protected resource (the web API). Your web API then performs the requested operation only if the access token it receives contains the scopes required for the operation. First, follow these steps to create an example scope named `Employees.Read.All`: |
active-directory | Quickstart Create New Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-create-new-tenant.md | This quickstart addresses two scenarios for the type of app you want to build: To build an environment for either work and school accounts or personal Microsoft accounts (MSA), you can use an existing Azure AD tenant or create a new one. ### Use an existing Azure AD tenant + Many developers already have tenants through services or subscriptions that are tied to Azure AD tenants, such as Microsoft 365 or Azure subscriptions. To check the tenant: To begin building external facing applications that sign in social and local acc ## Next steps > [!div class="nextstepaction"]-> [Register an app](quickstart-register-app.md) +> [Register an app](quickstart-register-app.md) |
active-directory | Quickstart V2 Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript.md | See [How the sample works](#how-the-sample-works) for an illustration. ## Prerequisites + * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * [Node.js](https://nodejs.org/en/download/) * [Visual Studio Code](https://code.visualstudio.com/download) (to edit project files) |
active-directory | Saml Claims Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/saml-claims-customization.md | Transient `nameID` is also supported, but isn't available in the dropdown and ca ### Attributes + Select the desired source for the `NameIdentifier` (or `nameID`) claim. You can select from the options in the following table. | Name | Description | For more information about identifier values, see the table that lists the valid Any constant (static) value can be assigned to any claim. Use the following steps to assign a constant value: -1. In the [Azure portal](https://portal.azure.com/), in the **User Attributes & Claims** section, select **Edit** to edit the claims. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. In the **User Attributes & Claims** section, select **Edit** to edit the claims. 1. Select the required claim that you want to modify. 1. Enter the constant value without quotes in the **Source attribute** as per your organization and select **Save**. Any constant (static) value can be assigned to any claim. Use the following step You can also configure directory schema extension attributes as non-conditional/conditional attributes. Use the following steps to configure the single or multi-valued directory schema extension attribute as a claim: -1. In the [Azure portal](https://portal.azure.com/), in the **User Attributes & Claims** section, select **Edit** to edit the claims. +1. Sign in to the [Azure portal](https://portal.azure.com). ++1. In the **User Attributes & Claims** section, select **Edit** to edit the claims. 1. Select **Add new claim** or edit an existing claim. :::image type="content" source="./media/saml-claims-customization/mv-extension-1.jpg" alt-text="Screenshot of the MultiValue extension configuration section in the Azure portal."::: |
active-directory | Scenario Mobile App Registration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-app-registration.md | If you prefer to manually configure the redirect URI, you can do so through the ### Username-password authentication + If your app uses only username-password authentication, you don't need to register a redirect URI for your application. This flow does a round trip to the Microsoft identity platform. Your application won't be called back on any specific URI. However, identify your application as a public client application. To do so: |
active-directory | Scenario Spa App Registration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-app-registration.md | To register a single-page application (SPA) in the Microsoft identity platform, ## Create the app registration + For both MSAL.js 1.0- and 2.0-based applications, start by completing the following steps to create the initial app registration. 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. |
active-directory | Scenario Web App Sign User App Registration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-registration.md | You can use these links to bootstrap the creation of your web application: ## Register an app by using the Azure portal + > [!NOTE] > The portal to use is different depending on whether your application runs in the Microsoft Azure public cloud or in a national or sovereign cloud. For more information, see [National clouds](./authentication-national-cloud.md#app-registration-endpoints). - 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Search for and select **Azure Active Directory**. |
active-directory | Single Page App Tutorial 01 Register App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-tutorial-01-register-app.md | In this tutorial: ## Register the application and record identifiers + To complete registration, provide the application a name, specify the supported account types, and add a redirect URI. Once registered, the application **Overview** pane displays the identifiers needed in the application source code. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations > New registration**. To complete registration, provide the application a name, specify the supported ## Next steps > [!div class="nextstepaction"]-> [Tutorial: Prepare an application for authentication](single-page-app-tutorial-02-prepare-spa.md) +> [Tutorial: Prepare an application for authentication](single-page-app-tutorial-02-prepare-spa.md) |
active-directory | Test Automate Integration Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-automate-integration-testing.md | We recommend you securely store the test usernames and passwords as [secrets](.. ## Create test users + Create some test users in your tenant for testing. Since the test users are not actual humans, we recommend you assign complex passwords and securely store these passwords as [secrets](../../key-vault/secrets/about-secrets.md) in Azure Key Vault. -1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. 1. Go to **Users**. 1. Select **New user** and create one or more test user accounts in your directory. 1. The example test later in this article uses a single test user. [Add the test username and password as secrets](../../key-vault/secrets/quick-create-portal.md) in the key vault you created previously. Add the username as a secret named "TestUserName" and the password as a secret named "TestPassword". client_id={your_client_ID} Replace *{tenant}* with your tenant ID, *{your_client_ID}* with the client ID of your application, and *{resource_you_want_to_call}* with the identifier URI (for example, "https://graph.microsoft.com") or app ID of the API you are trying to access. ## Exclude test apps and users from your MFA policy+ Your tenant likely has a conditional access policy that [requires multifactor authentication (MFA) for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md), as recommended by Microsoft. MFA won't work with ROPC, so you'll need to exempt your test applications and test users from this requirement. To exclude user accounts: |
active-directory | Test Setup Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-setup-environment.md | You can [manually create a tenant](quickstart-create-new-tenant.md), which will ### Populate your tenant with users + For convenience, you may want to invite yourself and other members of your development team to be guest users in the tenant. This will create separate guest objects in the test tenant, but means you only have to manage one set of credentials for your corporate account and your test account. -1. From the [Azure portal](https://portal.azure.com), click on **Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. 2. Go to **Users**. 3. Click on **New guest user** and invite your work account email address. 4. Repeat for other members of the development and/or testing team for your application. You can also create test users in your test tenant. If you used one of the Microsoft 365 sample packs, you may already have some test users in your tenant. If not, you should be able to create some yourself as the tenant administrator. -1. From the [Azure portal](https://portal.azure.com), click on **Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. 2. Go to **Users**. 3. Click **New user** and create some new test users in your directory. You'll need to create an app registration to use in your test environment. This You'll need to create some test users with associated test data to use while testing your scenarios. This step might need to be performed by an admin. -1. From the [Azure portal](https://portal.azure.com), click on **Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. 2. Go to **Users**.-3. 
Select **New user** and create some new test users in your directory. ### Add the test users to a group (optional) For convenience, you can assign all these users to a group, which makes other assignment operations easier. -1. From the [Azure portal](https://portal.azure.com), click on **Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. 2. Go to **Groups**. 3. Click **New group**. 4. Select either **Security** or **Microsoft 365** for group type. |
active-directory | Tutorial V2 Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-android.md | Follow these steps to create a new project if you don't already have an Android ### Register your application with Azure AD + 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Search for and select **Azure Active Directory**. |
active-directory | Tutorial V2 Angular Auth Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-angular-auth-code.md | To continue with the tutorial and build the application yourself, move on to the ## Register the application and record identifiers + To complete registration, provide the application a name, specify the supported account types, and add a redirect URI. Once registered, the application **Overview** pane displays the identifiers needed in the application source code. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations > New registration**. |
active-directory | Tutorial V2 Aspnet Daemon Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-aspnet-daemon-web-app.md | If you don't want to use the automation, use the steps in the following sections ### Choose the Azure AD tenant + 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. To provide a recommendation, go to the [User Voice page](https://feedback.azure. Learn more about building daemon apps that use the Microsoft identity platform to access protected web APIs: > [!div class="nextstepaction"]-> [Scenario: Daemon application that calls web APIs](scenario-daemon-overview.md) +> [Scenario: Daemon application that calls web APIs](scenario-daemon-overview.md) |
active-directory | Tutorial V2 Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-ios.md | If you'd like to download a completed version of the app you build in this tutor ## Register your application + 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Search for and select **Azure Active Directory**. Learn more about building mobile apps that call protected web APIs in our multi- > [!div class="nextstepaction"] > [Scenario: Mobile application that calls web APIs](scenario-mobile-overview.md)- |
active-directory | Tutorial V2 Javascript Spa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md | In the next steps, you'll create a new folder for the JavaScript SPA and set up ## Register the application + Before you proceed with authentication, register the application on Azure AD: -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Go to **Azure Active Directory**. 1. On the left panel, under **Manage**, select **App registrations**. Then, on the top menu bar, select **New registration**. 1. For **Name**, enter a name for the application (for example, **sampleApp**). You can change the name later if necessary. |
active-directory | Tutorial V2 Windows Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-desktop.md | Create the application using the following steps: ## Register your application + You can register your application in either of two ways. ### Option 1: Express mode Use the following steps to register your application: ### Option 2: Advanced mode + To register and configure your application, follow these steps: 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. |
active-directory | Tutorial V2 Windows Uwp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-uwp.md | private async Task DisplayMessageAsync(string message) ## Register your application + Now, register your application: 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. |
active-directory | Web Api Tutorial 01 Register App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-tutorial-01-register-app.md | In this tutorial: ## Register the application and record identifiers + To complete registration, provide the application a name and specify the supported account types. Once registered, the application **Overview** page will display the identifiers needed in the application source code. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations > New registration**. Once the API is registered, you can configure its permission by defining the sco ## Next steps > [!div class="nextstepaction"]-> [Tutorial: Create an ASP.NET Core project and configure the API](web-api-tutorial-02-prepare-api.md) +> [Tutorial: Create an ASP.NET Core project and configure the API](web-api-tutorial-02-prepare-api.md) |
active-directory | Web App Tutorial 01 Register Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-tutorial-01-register-application.md | In this tutorial: ## Register the application and record identifiers + To complete registration, provide the application a name and specify the supported account types. Once registered, the application **Overview** page will display the identifiers needed in the application source code. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations > New registration**. |
active-directory | Assign Local Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md | To view and update the membership of the Global Administrator role, see: ## Manage the device administrator role + In the Azure portal, you can manage the device administrator role from **Device settings**. 1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. |
active-directory | Device Management Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md | To view or copy BitLocker keys, you need to be the owner of the device or have o ## View and filter your devices (preview) + In this preview, you can scroll infinitely, reorder columns, and select all devices. You can filter the device list by these device attributes: - Enabled state |
active-directory | Enterprise State Roaming Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-enable.md | Enterprise State Roaming provides users with a unified experience across their W ## To enable Enterprise State Roaming -1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Sign in to the [Azure portal](https://portal.azure.com). 1. Browse to **Azure Active Directory** > **Devices** > **Enterprise State Roaming**. 1. Select **Users may sync settings and app data across devices**. For more information, see [how to configure device settings](./device-management-azure-portal.md). The country/region value is set as part of the Azure AD directory creation proce Follow these steps to view a per-user device sync status report. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Browse to **Azure Active Directory** > **Users** > **All users**. 1. Select the user, and then select **Devices**. 1. Select **View devices syncing settings and app data** to show sync status. |
active-directory | Howto Vm Sign In Azure Ad Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md | There are two ways to enable Azure AD login for your Linux VM: ### Azure portal + You can enable Azure AD login for any of the [supported Linux distributions](#supported-linux-distributions-and-azure-regions) by using the Azure portal. For example, to create an Ubuntu Server 18.04 Long Term Support (LTS) VM in Azure with Azure AD login: |
active-directory | Howto Vm Sign In Azure Ad Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md | There are two ways to enable Azure AD login for your Windows VM: ### Azure portal + You can enable Azure AD login for VM images in Windows Server 2019 Datacenter or Windows 10 1809 and later. To create a Windows Server 2019 Datacenter VM in Azure with Azure AD login: |
active-directory | Clean Up Stale Guest Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-stale-guest-accounts.md | There are a few recommended patterns that are effective at monitoring and cleani Use the following instructions to learn how to enhance monitoring of inactive guest accounts at scale and create Access Reviews that follow these patterns. Consider the configuration recommendations and then make the needed changes that suit your environment. ## Monitor guest accounts at scale with inactive guest insights (Preview)++ 1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page. 2. Access the inactive guest account report by going to the "Guest access governance" card and selecting "View inactive guests". |
active-directory | Directory Delete Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md | Check the following conditions: ## Delete the organization + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is the Global Administrator for your organization. 1. Select **Azure Active Directory**. 1. On a tenant's **Overview** page, select **Manage tenants**. Product state | Data | Access to data You can put a self-service sign-up product like Microsoft Power BI or Azure RMS into a **Delete** state to be immediately deleted in the Azure portal: -1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that is a global administrator in the organization. If you're trying to delete the Contoso organization that has the initial default domain `contoso.onmicrosoft.com`, sign in with a UPN such as `admin@contoso.onmicrosoft.com`. +1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a global administrator in the organization. If you're trying to delete the Contoso organization that has the initial default domain `contoso.onmicrosoft.com`, sign in with a UPN such as `admin@contoso.onmicrosoft.com`. 1. Browse to **Azure Active Directory**. 1. Select **Licenses**, and then select **Self-service sign-up products**. You can see all the self-service sign-up products separately from the seat-based subscriptions. Choose the product that you want to permanently delete. Here's an example in Microsoft Power BI: |
active-directory | Domains Admin Takeover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md | When you complete the preceding steps, you are now the global administrator of t ### Adding the domain name to a managed organization in Azure AD + 1. Open the [Microsoft 365 admin center](https://admin.microsoft.com). 2. Select **Users** tab, and create a new user account with a name like *user\@fourthcoffeexyz.onmicrosoft.com* that does not use the custom domain name. 3. Ensure that the new user account has Global Administrator privileges for the Azure AD organization. |
active-directory | Domains Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-manage.md | A domain name is an important part of the identifier for resources in many Azure ## Set the primary domain name for your Azure AD organization + When your organization is created, the initial domain name, such as ‘contoso.onmicrosoft.com,’ is also the primary domain name. The primary domain is the default domain name for a new user when you create one. Setting a primary domain name streamlines the process for an administrator to create new users in the portal. To change the primary domain name: 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that's a Global Administrator for the organization. |
active-directory | Groups Assign Sensitivity Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md | You will also need to synchronize your sensitivity labels to Azure AD. For instr ## Assign a label to a new group in Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Browse to **Azure Active Directory** > **Groups**, and then select **New group**. 1. On the **New Group** page, select **Office 365**, and then fill out the required information for the new group and select a sensitivity label from the list. |
active-directory | Groups Bulk Download Members | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download-members.md | You can bulk download the members of a group in your organization to a comma-sep ## To bulk download group membership + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account in the organization. 1. In Azure AD, select **Groups** > **All groups**. 1. Open the group whose membership you want to download, and then select **Members**. |
active-directory | Groups Bulk Download | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download.md | You can download a list of all the groups in your organization to a comma-separa ## To download a list of groups + >[!NOTE] > The columns downloaded are pre-defined |
active-directory | Groups Bulk Import Members | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-import-members.md | The rows in a downloaded CSV template are as follows: ## To bulk import group members + 1. Sign in to the [Azure portal](https://portal.azure.com) with a User administrator account in the organization. Group owners can also bulk import members of groups they own. 1. In Azure AD, select **Groups** > **All groups**. 1. Open the group to which you're adding members and then select **Members**. |
active-directory | Groups Bulk Remove Members | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-remove-members.md | The rows in a downloaded CSV template are as follows: ## To bulk remove group members + 1. Sign in to the [Azure portal](https://portal.azure.com) with a User administrator account in the organization. Group owners can also bulk remove members of groups they own. 1. In Azure AD, select **Groups** > **All groups**. 1. Open the group from which you're removing members and then select **Members**. |
active-directory | Groups Change Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md | You can change a group's membership from static to dynamic (or vice-versa) In Az ## Change the membership type for a group + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a Global Administrator, User Administrator, or Groups Administrator in your Azure AD organization. 2. Browse to **Azure Active Directory** > **Groups**. 3. From the **All groups** list, open the group that you want to change. |
active-directory | Groups Create Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-create-rule.md | For examples of syntax, supported properties, operators, and values for a member ## To create a group membership rule + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is in the Global Administrator, Intune Administrator, or User Administrator role in the Azure AD organization. 1. Browse to **Azure Active Directory** > **Groups**. 1. Select **All groups**, and select **New group**. |
active-directory | Groups Dynamic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-tutorial.md | You're not required to assign licenses to the users for them to be members in dy ## Create a group of guest users + First, you'll create a group for your guest users who are all from a single partner company. They need special licensing, so it's often more efficient to create a group for this purpose. 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is the global administrator for your organization. |
active-directory | Groups Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md | For more information on permissions to restore a deleted group, see [Restore a d ## Set group expiration + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a Global Administrator in your Azure AD organization. 2. Browse to **Azure Active Directory** > **Groups**, then select **Expiration** to open the expiration settings. |
active-directory | Groups Naming Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md | Some administrator roles are exempted from these policies, across all group work ## Configure naming policy in Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com) with a Group Administrator account. 1. Browse to **Azure Active Directory** > **Groups**, then select **Naming policy** to open the Naming policy page. |
active-directory | Groups Quickstart Expiration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-expiration.md | If you don't have an Azure subscription, [create a free account](https://azure.m ## Turn on user creation for groups + 1. Sign in to the [Azure portal](https://portal.azure.com) with a User administrator account. 2. Select **Groups**, and then select **General**. That's it! In this quickstart, you successfully set the expiration policy for th ### To remove the expiration policy -1. Ensure that you are signed in to the [Azure portal](https://portal.azure.com) with an account that is the Global Administrator for your Azure AD organization. +1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is the Global Administrator for your Azure AD organization. 2. Select **Azure Active Directory** > **Groups** > **Expiration**. 3. Set **Enable expiration for these Microsoft 365 groups** to **None**. |
active-directory | Groups Quickstart Naming Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-naming-policy.md | If you don't have an Azure subscription, [create a free account](https://azure.m ## Configure the group naming policy in the Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com) with a User Administrator account. 1. Browse to **Azure Active Directory** > **Groups**, then select **Naming policy** to open the Naming policy page. |
active-directory | Groups Restore Deleted | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md | User | Can restore any deleted Microsoft 365 group that they own ## View and manage the deleted Microsoft 365 groups that are available to restore + 1. Sign in to the [Azure portal](https://portal.azure.com) with a User Administrator account. 2. Browse to **Azure Active Directory** > **Groups**, then select **Deleted groups** to view the deleted groups that are available to restore. |
active-directory | Groups Saasapps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-saasapps.md | Using Azure Active Directory (Azure AD), part of Microsoft Entra, with an Azure ## To assign access for a user or group to a SaaS application -1. In the [Azure portal](https://portal.azure.com). ++1. Sign in to the [Azure portal](https://portal.azure.com). 1. Browse to **Azure Active Directory** > **Enterprise applications**. 1. Select an application that you added from the Application Gallery to open it. 1. Select **Users and groups**, and then select **Add user**. |
active-directory | Groups Self Service Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md | Groups created in | Security group default behavior | Microsoft 365 group defaul ## Make a group available for user self-service + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that's been assigned the Global Administrator or Groups Administrator role for the directory. 2. Browse to **Azure Active Directory** > **Groups**, and then select **General** settings. These articles provide additional information on Azure Active Directory. * [Application Management in Azure Active Directory](../manage-apps/what-is-application-management.md) * [What is Azure Active Directory?](../fundamentals/active-directory-whatis.md) * [Integrate your on-premises identities with Azure Active Directory](../hybrid/whatis-hybrid-identity.md)--- |
active-directory | Licensing Group Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md | Use the following information and examples to gain a more advanced understanding ## Usage location + Some Microsoft services aren't available in all locations. For group license assignment, any users without a usage location specified inherit the location of the directory. If you have users in multiple locations, make sure to reflect that correctly in your user resources before adding users to groups with licenses. Before a license can be assigned to a user, the administrator should specify the **Usage location** property on the user. 1. Sign in to the [Azure portal](https://portal.azure.com) in the **User Administrator** role. |
active-directory | Licensing Groups Assign | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-assign.md | In this example, the Azure AD organization contains a security group called **HR ## Step 1: Assign the required licenses + 1. Sign in to the [Azure portal](https://portal.azure.com) with a license administrator account. To manage licenses, the account must be a License Administrator, User Administrator, or Global Administrator. 1. Browse to **Azure Active Directory** > **Licenses** to open a page where you can see and manage all licensable products in the organization. To learn more about the feature set for license assignment using groups, see the - [How to migrate individual licensed users to group-based licensing in Azure Active Directory](licensing-groups-migrate-users.md) - [How to migrate users between product licenses using group-based licensing in Azure Active Directory](licensing-groups-change-licenses.md) - [Azure Active Directory group-based licensing additional scenarios](licensing-group-advanced.md)-- [PowerShell examples for group-based licensing in Azure Active Directory](licensing-ps-examples.md)+- [PowerShell examples for group-based licensing in Azure Active Directory](licensing-ps-examples.md) |
active-directory | Licensing Groups Change Licenses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-change-licenses.md | Before you update the license assignments, it's important to verify certain assu ## Change user license assignments + On the **Update license assignments** page, if you see that some checkboxes are unavailable, it indicates services that can't be changed because they're inherited from a group license. -1. Sign in to the [Azure portal](https://portal.azure.com/) using a License administrator account in your Azure AD organization. +1. Sign in to the [Azure portal](https://portal.azure.com) using a License administrator account in your Azure AD organization. 1. Select **Azure Active Directory** > **Users**, and then open the **Profile** page for a user. 1. Select **Licenses**. 1. Select **Assignments** to edit license assignment for the user or group. The **Assignments** page is where you can resolve license assignment conflicts. Azure AD applies the new licenses and removes the old licenses simultaneously to ## Change group license assignments -1. Sign in to the [Azure portal](https://portal.azure.com/) using a License administrator account in your Azure AD organization. +1. Sign in to the [Azure portal](https://portal.azure.com) using a License administrator account in your Azure AD organization. 1. Select **Azure Active Directory** > **Groups**, and then open the **Overview** page for a group. 1. Select **Licenses**. 1. Select the **Assignments** command to edit license assignment for the user or group. |
active-directory | Linkedin Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-integration.md | You can allow users in your organization to access their LinkedIn connections wi ## Enable LinkedIn account connections in the Azure portal + You can enable LinkedIn account connections for only the users you want to have access, from your entire organization down to only selected users. -1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that's a Global Administrator for the Azure AD organization. +1. Sign in to the [Azure portal](https://portal.azure.com) with an account that's a Global Administrator for the Azure AD organization. 1. Browse to **Azure Active Directory** > **Users**. 1. On the **Users** page, select **User settings**. 1. Under **LinkedIn account connections**, allow users to connect their accounts to access their LinkedIn connections within some Microsoft apps. No data is shared until users consent to connect their accounts. This group policy affects only Office 2016 apps for a local computer. If users d * [LinkedIn help center](https://www.linkedin.com/help/linkedin) -* [View your current LinkedIn integration setting in the Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UserManagementMenuBlade/UserSettings) +* [View your current LinkedIn integration setting in the Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UserManagementMenuBlade/UserSettings) |
active-directory | Users Bulk Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-add.md | The rows in a downloaded CSV template are as follows: ## To create users in bulk + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization. 1. Browse to **Azure Active Directory** > **Users** > **Bulk create**. 1. On the **Bulk create user** page, select **Download** to receive a valid comma-separated values (CSV) file of user properties, and then add users you want to create. |
active-directory | Users Bulk Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-delete.md | The rows in a downloaded CSV template are as follows: ## To bulk delete users + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization. 1. Browse to **Azure Active Directory** > **Users** > **Bulk operations** > **Bulk delete**. 1. On the **Bulk delete user** page, select **Download** to download the latest version of the CSV template. |
active-directory | Users Bulk Download | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-download.md | Both admin and non-admin users can download user lists. ## To download a list of users + 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Navigate to **Azure Active Directory** > **Users**. 3. In Azure AD, select **Users** > **Download users**. By default, all user profiles are exported. |
active-directory | Users Bulk Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md | The rows in a downloaded CSV template are as follows: ## To bulk restore users + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User Administrator in the Azure AD organization. 1. Browse to **Azure Active Directory** > **Users** > **Deleted**. 1. On the **Deleted users** page, select **Bulk restore** to upload a valid CSV file of properties of the users to restore. |
active-directory | Users Custom Security Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-custom-security-attributes.md | To assign or remove custom security attributes for a user in your Azure AD tenan ## Assign custom security attributes to a user + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Make sure that you have defined custom security attributes. For more information, see [Add or deactivate custom security attribute definitions in Azure AD](../fundamentals/custom-security-attributes-add.md). |
active-directory | Users Restrict Guest Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md | You must be in the Global Administrator role to configure guest user access. The ## Update in the Azure portal + We've made changes to the existing Azure portal controls for guest user permissions. 1. Sign in to the [Azure portal](https://portal.azure.com) with Global Administrator permissions. |
active-directory | Add Users Administrator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md | Make sure your organization's external collaboration settings are configured suc ## Add guest users to the directory + To add B2B collaboration users to the directory, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) in the **User Administrator** role. A role with Guest Inviter privileges can also invite external users. +1. Sign in to the [Azure portal](https://portal.azure.com) in the **User Administrator** role. A role with Guest Inviter privileges can also invite external users. 1. Navigate to **Azure Active Directory** > **Users**. |
active-directory | Add Users Information Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-information-worker.md | Self-service app management requires some initial setup by a Global Administrato > You cannot add guest users to a dynamic group or to a group that is synced with on-premises Active Directory. ### Enable self-service group management for your tenant++ 1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. 2. In the navigation panel, select **Azure Active Directory**. 3. Select **Groups**. Self-service app management requires some initial setup by a Global Administrato 6. Select **Save**. ### Create a group to assign to the app and make the user an owner+ 1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator or Global Administrator. 2. In the navigation panel, select **Azure Active Directory**. 3. Select **Groups**. Self-service app management requires some initial setup by a Global Administrato 10. Under **Manage**, select **Owners** > **Add owners**. Search for the user who should manage access to the application. Select the user, and then click **Select**. ### Configure the app for self-service and assign the group to the app+ 1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator or Global Administrator. 2. In the navigation pane, select **Azure Active Directory**. 3. Under **Manage**, select **Enterprise applications** > **All applications**. |
active-directory | Allow Deny List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/allow-deny-list.md | By default, the **Allow invitations to be sent to any domain (most inclusive)** ### Add a blocklist + This is the most typical scenario, where your organization wants to work with almost any organization, but wants to prevent users from specific domains from being invited as B2B users. To add a blocklist: |
active-directory | B2b Quickstart Add Guest Users Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md | To complete the scenario in this quickstart, you need: ## Invite an external guest user + This quickstart guide provides the basic steps to invite an external user. To learn about all of the properties and settings that you can include when you invite an external user, see [How to create and delete a user](../fundamentals/how-to-create-delete-users.md). -1. Sign in to the [Azure portal](https://portal.azure.com/) using one of the roles listed in the Prerequisites. +1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the Prerequisites. 1. Navigate to **Azure Active Directory** > **Users**. Now sign in as the guest user to see the invitation. When no longer needed, delete the test guest user. -1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that's been assigned the Global administrator or User administrator role. +1. Sign in to the [Azure portal](https://portal.azure.com) with an account that's been assigned the Global administrator or User administrator role. 1. Select the **Azure Active Directory** service. 1. Under **Manage**, select **Users**. 1. Select the test user, and then select **Delete user**. |
active-directory | B2b Tutorial Require Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md | To complete the scenario in this tutorial, you need: ## Create a test guest user in Azure AD -1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator. ++1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 1. In the Azure portal, select **Azure Active Directory**. 1. In the left menu, under **Manage**, select **Users**. 1. Select **New user**, and then select **Invite external user**. To complete the scenario in this tutorial, you need: ## Test the sign-in experience before MFA setup -1. Use your test user name and password to sign in to the [Azure portal](https://portal.azure.com/). +1. Use your test user name and password to sign in to the [Azure portal](https://portal.azure.com). 1. You should be able to access the Azure portal using only your sign-in credentials. No other authentication is required. 1. Sign out. ## Create a Conditional Access policy that requires MFA -1. Sign in to the [Azure portal](https://portal.azure.com/) as a security administrator or a Conditional Access administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as a security administrator or a Conditional Access administrator. 1. In the Azure portal, select **Azure Active Directory**. 1. In the left menu, under **Manage**, select **Security**. 1. Under **Protect**, select **Conditional Access**. To complete the scenario in this tutorial, you need: ## Test your Conditional Access policy -1. Use your test user name and password to sign in to the [Azure portal](https://portal.azure.com/). +1. Use your test user name and password to sign in to the [Azure portal](https://portal.azure.com). 1. You should see a request for more authentication methods. It can take some time for the policy to take effect. 
:::image type="content" source="media/tutorial-mfa/mfa-required.PNG" alt-text="Screenshot showing the More information required message."::: To complete the scenario in this tutorial, you need: When no longer needed, remove the test user and the test Conditional Access policy. -1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 1. In the left pane, select **Azure Active Directory**. 1. Under **Manage**, select **Users**. 1. Select the test user, and then select **Delete user**. |
active-directory | Cross Cloud Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md | After each organization has completed these steps, Azure AD B2B collaboration be ## Enable the cloud in your Microsoft cloud settings + In your Microsoft cloud settings, enable the Microsoft Azure cloud you want to collaborate with. 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service. The following scenarios are supported when collaborating with an organization fr ## Next steps -See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts. +See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts. |
active-directory | Cross Tenant Access Settings B2b Collaboration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md | Use External Identities cross-tenant access settings to manage how you collabora ## Configure default settings + Default cross-tenant access settings apply to all external tenants for which you haven't created organization-specific customized settings. If you want to modify the Azure AD-provided default settings, follow these steps. 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service. When you remove an organization from your Organizational settings, the default c ## Next steps - See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.-- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)+- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md) |
active-directory | Cross Tenant Access Settings B2b Direct Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md | Learn more about using cross-tenant access settings to [manage B2B direct connec ## Configure default settings + Default cross-tenant access settings apply to all external tenants for which you haven't created organization-specific customized settings. If you want to modify the Azure AD-provided default settings, follow these steps. 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service. When you remove an organization from your Organizational settings, the default c ## Next steps -[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md) +[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md) |
active-directory | How To Browserless App Dotnet Sign In Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-browserless-app-dotnet-sign-in-overview.md | - Title: Sign in users to an ASP.NET browserless app using Device Code flow -description: Learn how to sign in users in your ASP.NET browserless app using Device Code flow. --------- Previously updated : 05/10/2023--#Customer intent: As a dev, devops, I want to learn about how to enable authentication in my ASP.NET browserless app with Azure Active Directory (Azure AD) for customers tenant ---# Sign in users to an ASP.NET browserless app using Device Code flow --In this series of articles, you learn how to sign in users to your ASP.NET browserless app. The articles guide you through the steps of building an app that authenticates users against Azure Active Directory (Azure AD) for Customers using the device code flow. --The article series is broken down into the following steps: --1. Overview (this article) -1. [Prepare your tenant](how-to-browserless-app-dotnet-sign-in-prepare-tenant.md) -1. [Sign in users](how-to-browserless-app-dotnet-sign-in-sign-in.md) --## Prerequisites --- [.NET 7 SDK](https://dotnet.microsoft.com/download/dotnet/7.0).--- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.--- Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.--## OAuth 2.0 device authorization grant flow --The Microsoft identity platform supports the [device authorization grant](https://tools.ietf.org/html/rfc8628), which allows users to sign in to input-constrained devices such as a smart TV, IoT device, or a printer. To enable this flow: --1. The device provides a verification URL to the user.
The user navigates to this URL in a browser on another device to sign in. -1. The user enters the code provided by the device, which is then verified against the code issued by the service. -1. Once the user is signed in, the device can get access tokens and refresh tokens as needed. --For more information, see [device code flow in the Microsoft identity platform](/azure/active-directory/develop/v2-oauth2-device-code). --If you want to run a sample ASP.NET browserless app to get a feel of how things work, complete the steps in [Sign in users in a sample ASP.NET browserless app](./how-to-browserless-app-dotnet-sample-sign-in.md). --## Next steps --Next, learn how to prepare your Azure AD for customers tenant. --> [!div class="nextstepaction"] -> [Prepare your Azure AD for customers tenant >](how-to-browserless-app-dotnet-sign-in-prepare-tenant.md) |
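The numbered steps above reduce to a polling loop on the client: the device repeatedly asks the token endpoint whether the user has finished signing in. A minimal sketch of that loop in JavaScript — `requestToken` is a hypothetical stand-in for the real POST to the token endpoint, not an MSAL API:

```javascript
// Illustrative sketch of the device authorization grant polling loop.
// `requestToken` is a hypothetical stand-in for the real POST of the
// device_code to the token endpoint (real code also waits `interval`
// seconds between polls, per RFC 8628).
function pollForToken(requestToken, maxAttempts = 10) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = requestToken(); // real code sends the device_code each time
    if (response.error === "authorization_pending") continue; // user not done yet
    if (response.access_token) return response.access_token;
    throw new Error(response.error); // e.g. access_denied or expired_token
  }
  throw new Error("expired_token");
}

// Simulate a user who completes sign-in on the third poll.
let polls = 0;
const stubEndpoint = () =>
  ++polls < 3 ? { error: "authorization_pending" } : { access_token: "eyJ0eXAi..." };
console.log(pollForToken(stubEndpoint)); // prints the stubbed token
```

MSAL implements this loop for you; the sketch only shows why the sign-in appears to "hang" until the user finishes entering the code in a browser.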
active-directory | How To Browserless App Dotnet Sign In Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-browserless-app-dotnet-sign-in-prepare-app.md | - Title: Sign in users in your ASP.NET browserless app using Device Code flow - Prepare app -description: Learn about how to prepare an ASP.NET browserless app that signs in users by using Device Code flow. --------- Previously updated : 05/10/2023--#Customer intent: As a dev, devops, I want to learn about how to enable authentication in my ASP.NET browserless app with Azure Active Directory (Azure AD) for customers tenant ---# Sign in users in your ASP.NET browserless app using Device Code flow - Prepare app --In this article, you create an ASP.NET browserless app project and organize all the folders and files you require. You also install the packages you need to help you with configuration and authentication. --## Prerequisites --Complete the prerequisites and steps in the [Overview](./how-to-browserless-app-dotnet-sign-in-prepare-tenant.md) before proceeding. --## Create an ASP.NET browserless app --This how-to guide uses Visual Studio Code and .NET 7.0. --1. Open the [integrated terminal](https://code.visualstudio.com/docs/editor/integrated-terminal). -1. Navigate to the folder where you want your project to live. -1. Initialize a .NET console app and navigate to its root folder: -- ```dotnetcli - dotnet new console -o MsIdBrowserlessApp - cd MsIdBrowserlessApp - ``` --## Add packages - -Install the following packages to help you handle app [configuration](/dotnet/core/extensions/configuration?source=recommendations). These packages are part of the [Microsoft.Extensions.Configuration](https://www.nuget.org/packages/Microsoft.Extensions.Configuration/) package.
--- [*Microsoft.Extensions.Configuration*](/dotnet/api/microsoft.extensions.configuration)-- [*Microsoft.Extensions.Configuration.Json*](/dotnet/api/microsoft.extensions.configuration.json): JSON configuration provider implementation for `Microsoft.Extensions.Configuration`.-- [*Microsoft.Extensions.Configuration.Binder*](/dotnet/api/microsoft.extensions.configuration.configurationbinder): Functionality to bind an object to data in configuration providers for `Microsoft.Extensions.Configuration`.--Install the following package to help with authentication. --- [*Microsoft.Identity.Web*](/entra/msal/dotnet/microsoft-identity-web/) simplifies adding authentication and authorization support to apps that integrate with the Microsoft identity platform.--- ```dotnetcli - dotnet add package Microsoft.Extensions.Configuration - dotnet add package Microsoft.Extensions.Configuration.Json - dotnet add package Microsoft.Extensions.Configuration.Binder - dotnet add package Microsoft.Identity.Web - ``` --## Configure app registration details --1. In your code editor, create an *appsettings.json* file in the root folder of the app. --1. Add the following code to the *appsettings.json* file. - - ```json - { - "AzureAd": { - "Authority": "https://<Enter_the_Tenant_Subdomain_Here>.ciamlogin.com/", - "ClientId": "<Enter_the_Application_Id_Here>" - } - } - ``` --1. Replace `Enter_the_Application_Id_Here` with the Application (client) ID of the app you registered earlier. - -1. Replace `Enter_the_Tenant_Subdomain_Here` with the Directory (tenant) subdomain. For example, if your primary domain is *contoso.onmicrosoft.com*, replace `Enter_the_Tenant_Subdomain_Here` with *contoso*. If you don't have your primary domain, learn how to [read tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). --1. 
Add the following code to the *MsIdBrowserlessApp.csproj* file to instruct your app to copy the *appsettings.json* file to the output directory when the project is compiled. -- ```xml - <Project Sdk="Microsoft.NET.Sdk"> - ... -- <ItemGroup> - <None Update="appsettings.json"> - <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> - </None> - </ItemGroup> - </Project> - ``` --## Next steps --> [!div class="nextstepaction"] -> [Sign in to your ASP.NET browserless app >](./how-to-browserless-app-dotnet-sign-in-sign-in.md) |
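The `Authority` value in the *appsettings.json* above is simply the CIAM login endpoint with the tenant subdomain substituted in. A hypothetical helper (shown in JavaScript for brevity; the function name is illustrative, not part of any SDK) makes the substitution explicit:

```javascript
// Illustrative: how the Authority value is formed from the tenant subdomain
// (e.g. "contoso" when the primary domain is contoso.onmicrosoft.com).
function buildAuthority(tenantSubdomain) {
  return `https://${tenantSubdomain}.ciamlogin.com/`;
}

console.log(buildAuthority("contoso")); // https://contoso.ciamlogin.com/
```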
active-directory | How To Browserless App Dotnet Sign In Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-browserless-app-dotnet-sign-in-prepare-tenant.md | - Title: Sign in users in your ASP.NET browserless app using Device Code flow - Prepare tenant -description: Learn about how to prepare your Azure Active Directory (Azure AD) for customers tenant to sign in users in your ASP.NET browserless application by using Device Code flow. --------- Previously updated : 05/10/2023---#Customer intent: As a dev, devops, I want to learn about how to enable authentication in my own ASP.NET browserless app with Azure Active Directory (Azure AD) for customers tenant ---# Sign in users in your ASP.NET browserless app using Device Code flow - Prepare tenant --In this article, you prepare your Azure Active Directory (Azure AD) for customers tenant for authentication. --## Register the browserless app ---## Enable public client flow ---## Grant API permissions --Since this app signs in users, add delegated permissions: ---## Create a user flow ---## Associate the browserless app with the user flow ---## Next steps --> [!div class="nextstepaction"] -> [Prepare your ASP.NET browserless app >](how-to-browserless-app-dotnet-sign-in-prepare-app.md) |
active-directory | How To Daemon Node Call Api Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-daemon-node-call-api-overview.md | - Title: Call an API in your Node.js daemon application -description: Learn how to configure your Node.js daemon application that calls an API. The API is protected by Azure Active Directory (Azure AD) for customers --------- Previously updated : 05/22/2023--#Customer intent: As a dev, devops, I want to create a Node.js daemon application that acquires an access token, then calls an API protected by Azure Active Directory (Azure AD) for customers tenant ---# Call an API in your Node.js daemon application --In this article, you learn how to acquire an access token, then call a web API in your own Node.js daemon application. You add authorization to your daemon application against your Azure Active Directory (Azure AD) for customers tenant. --We've organized the content into three separate articles so it's easy for you to follow: --- [Prepare your Azure AD for customers tenant](how-to-daemon-node-call-api-prepare-tenant.md) shows you how to register your apps and configure app permissions in the Microsoft Entra admin center.--- [Prepare your daemon application](how-to-daemon-node-call-api-prepare-app.md) shows you how to set up your Node.js app structure.--- [Acquire an access token and call API](how-to-daemon-node-call-api-call-api.md) shows you how to acquire an access token using the client credentials flow, then use the token to call a web API.--## Overview --The [OAuth 2.0 client credentials grant flow](../../develop/v2-oauth2-client-creds-grant-flow.md) permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service.
--The client credentials grant flow is commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. --The application you build uses [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) to simplify adding authorization to your node daemon application. ---## Prerequisites --- [Node.js](https://nodejs.org).--- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.--- Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>. ---If you want to run a sample Node.js daemon application to get a feel of how things work, complete the steps in [Call an API in a sample Node.js daemon application](how-to-daemon-node-sample-call-api.md). --## Next steps --Next, learn how to prepare your Azure AD for customers tenant. --> [!div class="nextstepaction"] -> [Prepare your Azure AD for customers tenant for authorization >](how-to-daemon-node-call-api-prepare-tenant.md) |
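Because the client credentials grant is app-only (no signed-in user), the token request asks for the target web API's `.default` scope rather than individual delegated scopes. A minimal sketch — `buildAppScopes` is an illustrative helper, not an MSAL API:

```javascript
// App-only (client credentials) token requests use the ".default" scope of
// the target web API's Application ID URI; delegated scopes don't apply
// because there is no user. `buildAppScopes` is illustrative, not MSAL.
function buildAppScopes(apiApplicationIdUri) {
  return [`${apiApplicationIdUri}/.default`];
}

// For a web API registered with Application ID URI api://<api-client-id>,
// this yields the scopes array you'd pass to MSAL's client-credential request.
const scopes = buildAppScopes("api://00001111-aaaa-bbbb-cccc-ddddeeeeffff");
console.log(scopes[0]); // api://00001111-aaaa-bbbb-cccc-ddddeeeeffff/.default
```

The `.default` scope tells Azure AD to issue a token containing all app permissions (app roles) that were granted to the daemon during tenant preparation.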
active-directory | How To Daemon Node Call Api Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-daemon-node-call-api-prepare-app.md | - Title: Call an API in your Node.js daemon application - prepare client app and web API -description: Learn about how to prepare your Node.js client daemon app and ASP.NET web API. The app you prepare here is what you later configure to acquire an access token, then call the web API. --------- Previously updated : 05/22/2023----# Call an API in your Node.js daemon application - prepare client app and web API --In this article, you create app projects for both the client daemon app and web API. Later, you enable the client daemon app to acquire an access token using its own identity, then call the web API. --## Prerequisite --- Install the [.NET SDK](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/install) v7 or later on your computer.--## Build ASP.NET web API --You must first create a protected web API, which the client daemon calls by presenting a valid token. To do so, complete the steps in the [Secure an ASP.NET web API](how-to-protect-web-api-dotnet-core-overview.md) article. In that article, you learn how to create and protect ASP.NET API endpoints, and run and test the API. The web API checks both app and user permissions. However, in this article, the client app acquires an access token with only app permissions. --Before you proceed, make sure you've [registered a web API app in the Microsoft Entra admin center](how-to-daemon-node-call-api-prepare-tenant.md). --## Prepare Node.js client web app --In this step, you prepare the Node.js client web app that calls the ASP.NET web API. --### Create the Node.js daemon project --Create a folder to host your Node.js daemon application, such as `ciam-call-api-node-daemon`: --1. 
In your terminal, change directory into your Node daemon app folder, such as `cd ciam-call-api-node-daemon`, then run `npm init -y`. This command creates a default `package.json` file for your Node.js project. --1. Create more folders and files to achieve the following project structure: -- ``` - ciam-call-api-node-daemon/ - ├── auth.js - ├── authConfig.js - ├── fetch.js - ├── index.js - └── package.json - ``` --## Install app dependencies --In your terminal, install the `axios`, `yargs`, and `@azure/msal-node` packages by running the following command: --```console -npm install axios yargs @azure/msal-node -``` --## Next steps --Next, learn how to acquire an access token and call the API: --> [!div class="nextstepaction"] -> [Acquire an access token and call API >](how-to-daemon-node-call-api-call-api.md) |
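In the structure above, *authConfig.js* typically exports the MSAL configuration the daemon needs. A minimal sketch, assuming placeholder registration values (the exact shape your app uses may differ; never commit a real client secret to source code):

```javascript
// authConfig.js - minimal sketch with placeholder registration values.
// A real app should read the client secret from an environment variable
// or a secret store, not from source code.
const msalConfig = {
  auth: {
    clientId: "<Enter_the_Daemon_App_Id_Here>",
    authority: "https://<Enter_the_Tenant_Subdomain_Here>.ciamlogin.com/",
    clientSecret: process.env.CLIENT_SECRET ?? "<Enter_the_Client_Secret_Here>",
  },
};

module.exports = { msalConfig };
```

*auth.js* would then pass `msalConfig` to `@azure/msal-node`'s `ConfidentialClientApplication` when acquiring tokens.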
active-directory | How To Daemon Node Call Api Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-daemon-node-call-api-prepare-tenant.md | - Title: Call an API in your Node.js daemon application - prepare your tenant -description: Learn about how to prepare your Azure Active Directory (Azure AD) tenant for customers to acquire an access token using client credentials flow in your Node.js daemon application --------- Previously updated : 05/22/2023----# Call an API in your Node.js daemon application - prepare your tenant --In this article, you prepare your Azure Active Directory (Azure AD) for customers tenant for authorization. To prepare your tenant, you do the following tasks: --- Register a web API and configure app permissions in the Microsoft Entra admin center. --- Register a client daemon application and grant it app permissions in the Microsoft Entra admin center.--- Create a client secret for your daemon application in the Microsoft Entra admin center.--After you complete the tasks, you collect: --- An *Application (client) ID* for your client daemon app and one for your web API.--- A *Client secret* for your client daemon app.--- A *Directory (tenant) ID* for your Azure AD for customers tenant.--- App permissions/roles.--If you've already registered a client daemon application and a web API in the Microsoft Entra admin center, you can skip the steps in this article, then proceed to [Prepare your daemon application and web API](how-to-daemon-node-call-api-prepare-app.md). --## Register a web API application ---## Configure app roles ---## Configure optional claims ---## Register the daemon app ---## Create a client secret ---## Grant API permissions to the daemon app ----## Next steps --Next, learn how to prepare your daemon application and web API. --> [!div class="nextstepaction"] -> [Prepare your daemon application and web API >](how-to-daemon-node-call-api-prepare-app.md) |
active-directory | Sample Browserless App Dotnet Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-browserless-app-dotnet-sign-in.md | If you choose to download the *.zip* file, extract the sample app file to a fold ```console dotnet run ```-1. When the app launches, copy the suggested URL *https://microsoft.com/devicelogin* from the terminal and visit it in a browser. Then, copy the device code from the terminal and [follow the prompts](./how-to-browserless-app-dotnet-sign-in-sign-in.md#sign-in-to-your-app) on *https://microsoft.com/devicelogin*. +1. When the app launches, copy the suggested URL *https://microsoft.com/devicelogin* from the terminal and visit it in a browser. Then, copy the device code from the terminal and [follow the prompts](./tutorial-browserless-app-dotnet-sign-in-build-app.md#sign-in-to-your-app) on *https://microsoft.com/devicelogin*. ## How it works Console.WriteLine($"You signed in as {result.Account.Username}"); Next, learn how to prepare your Azure AD for customers tenant. > [!div class="nextstepaction"]-> [Build your own ASP.NET browserless app and sign in users >](how-to-browserless-app-dotnet-sign-in-overview.md) +> [Build your own ASP.NET browserless app and sign in users >](./tutorial-browserless-app-dotnet-sign-in-prepare-tenant.md) |
active-directory | Sample Daemon Node Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-daemon-node-call-api.md | A Web API endpoint should be prepared to accept calls from both users and applic ## Next steps -- Learn how to [Acquire an access token, then call a web API in your own Node.js daemon app](how-to-daemon-node-call-api-overview.md).+- Learn how to [Acquire an access token, then call a web API in your own Node.js daemon app](tutorial-daemon-node-call-api-prepare-tenant.md). - Learn how to [Use client certificate instead of a secret for authentication in your Node.js confidential app](how-to-web-app-node-use-certificate.md). |
active-directory | Samples Ciam All | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/samples-ciam-all.md | These samples and how-to guides demonstrate how to write a browserless applicati > | Language/<br/>Platform | Code sample guide | Build and integrate guide | > | - | -- | - | > | JavaScript, Node | • [Sign in users](how-to-browserless-app-node-sample-sign-in.md) | • [Sign in users](how-to-browserless-app-node-sign-in-overview.md ) |-> | .NET | • [Sign in users](how-to-browserless-app-dotnet-sample-sign-in.md) | • [Sign in users](how-to-browserless-app-dotnet-sign-in-overview.md) | +> | .NET | • [Sign in users](how-to-browserless-app-dotnet-sample-sign-in.md) | • [Sign in users](./tutorial-browserless-app-dotnet-sign-in-prepare-tenant.md) | ### Desktop These samples and how-to guides demonstrate how to write a daemon application th > [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample guide | Build and integrate guide | > | -- | -- |-- |-> | Node.js | • [Call an API](how-to-daemon-node-sample-call-api.md) | • [Call an API](how-to-daemon-node-call-api-overview.md) | +> | Node.js | • [Call an API](how-to-daemon-node-sample-call-api.md) | • [Call an API](tutorial-daemon-node-call-api-prepare-tenant.md) | > | .NET | • [Call an API](sample-daemon-dotnet-call-api.md) | • [Call an API](tutorial-daemon-dotnet-call-api-prepare-tenant.md) | These samples and how-to guides demonstrate how to write a daemon application th > [!div class="mx-tdCol2BreakAll"] > | App type | Code sample guide | Build and integrate guide | > | - | -- | - |-> | Browserless | • [Sign in users](how-to-browserless-app-dotnet-sample-sign-in.md) | • [Sign in users](how-to-browserless-app-dotnet-sign-in-overview.md) | +> | Browserless | • [Sign in users](how-to-browserless-app-dotnet-sample-sign-in.md) | • [Sign in users](./tutorial-browserless-app-dotnet-sign-in-prepare-tenant.md) | > | Daemon | • [Call an 
API](sample-daemon-dotnet-call-api.md) | • [Call an API](tutorial-daemon-dotnet-call-api-prepare-tenant.md) | |
active-directory | Tutorial Browserless App Dotnet Sign In Build App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-browserless-app-dotnet-sign-in-build-app.md | + + Title: "Tutorial: Sign in users in your .NET browserless app" +description: Learn about how to build a .NET browserless app that signs in users by using Device Code flow. +++++++++ Last updated : 07/23/2023++#Customer intent: As a dev, devops, I want to learn about how to enable authentication in my .NET browserless app with Azure Active Directory (Azure AD) for customers tenant +++# Tutorial: Sign in users to your .NET browserless application ++In this tutorial, you build your own .NET browserless app and authenticate a user using Azure Active Directory (Azure AD) for customers. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Configure a .NET browserless app to use its app registration details. +> - Build a .NET browserless app that signs in a user and acquires a token on behalf of the user. ++## Prerequisites ++- Registration details for the browserless app you created in the [prepare tenant tutorial](./tutorial-browserless-app-dotnet-sign-in-prepare-tenant.md). You need the following details: + - The Application (client) ID of the .NET browserless app that you registered. + - The Directory (tenant) subdomain where you registered your .NET browserless app. +- [.NET 7.0](https://dotnet.microsoft.com/download/dotnet/7.0) or later. +- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor. ++## Create an ASP.NET browserless app ++1. Open your terminal and navigate to the folder where you want your project to live. +1. 
Initialize a .NET console app and navigate to its root folder: ++ ```dotnetcli + dotnet new console -o MsIdBrowserlessApp + cd MsIdBrowserlessApp + ``` ++## Install packages ++Install configuration providers that help your app read configuration data from key-value pairs in the app settings file. These configuration abstractions provide the ability to bind configuration values to instances of .NET objects. ++```dotnetcli +dotnet add package Microsoft.Extensions.Configuration +dotnet add package Microsoft.Extensions.Configuration.Json +dotnet add package Microsoft.Extensions.Configuration.Binder +``` ++Install the Microsoft Identity Web library, which simplifies adding authentication and authorization support to apps that integrate with the Microsoft identity platform. ++```dotnetcli +dotnet add package Microsoft.Identity.Web +``` ++## Create appsettings.json file and add registration configs ++1. In your code editor, create an *appsettings.json* file in the root folder of the app. +1. Add the following code to the *appsettings.json* file. + + ```json + { + "AzureAd": { + "Authority": "https://<Enter_the_Tenant_Subdomain_Here>.ciamlogin.com/", + "ClientId": "<Enter_the_Application_Id_Here>" + } + } + ``` ++ - Replace `Enter_the_Application_Id_Here` with the Application (client) ID of the app you registered earlier. + - Replace `Enter_the_Tenant_Subdomain_Here` with the Directory (tenant) subdomain. ++1. Add the following code to the *MsIdBrowserlessApp.csproj* file to instruct your app to copy the *appsettings.json* file to the output directory when the project is compiled. ++ ```xml + <Project Sdk="Microsoft.NET.Sdk"> + ... ++ <ItemGroup> + <None Update="appsettings.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + </None> + </ItemGroup> + </Project> + ``` ++## Add sign-in code ++1. In your code editor, open the *Program.cs* file. +1. 
Clear the contents of the *Program.cs* file, then add the packages and set up your configuration to read configuration values from the *appsettings.json* file. ++ ```csharp + // Import packages + using Microsoft.Extensions.Configuration; + using Microsoft.Identity.Client; ++ // Set up your configuration to read configs from appsettings.json + var configuration = new ConfigurationBuilder() + .AddJsonFile("appsettings.json"); + + var config = configuration.Build(); + var publicClientOptions = config.GetSection("AzureAd"); ++ // Placeholders for the rest of the code ++ var app = PublicClientApplicationBuilder.Create(...) + var scopes = new string[] { }; + var result = await app.AcquireTokenWithDeviceCode(...) ++ Console.WriteLine(...) + ``` ++1. The browserless app is a public client application. Create an instance of the `PublicClientApplication` class and pass in the `ClientId` and `Authority` values from the *appsettings.json* file. ++ ```csharp + var app = PublicClientApplicationBuilder.Create(publicClientOptions.GetValue<string>("ClientId")) + .WithAuthority(publicClientOptions.GetValue<string>("Authority")) + .Build(); + ``` ++1. Add the code that helps the app acquire tokens using the device code flow. Pass in the scopes you want to request and a callback function that is called when the device code is available. By default, MSAL attaches OIDC scopes to every token request. 
++ ```csharp ++ var scopes = new string[] { }; // by default, MSAL attaches OIDC scopes to every token request + var result = await app.AcquireTokenWithDeviceCode(scopes, async deviceCode => { + Console.WriteLine($"In a browser, navigate to the URL '{deviceCode.VerificationUrl}' and enter the code '{deviceCode.UserCode}'"); + await Task.FromResult(0); + }).ExecuteAsync(); ++ Console.WriteLine($"You signed in as {result.Account.Username}"); + Console.WriteLine($"{result.Account.HomeAccountId}"); + Console.WriteLine("\nRetrieved ID token:"); + result.ClaimsPrincipal.Claims.ToList() + .ForEach(c => Console.WriteLine(c)); + ``` + + The callback function displays the device code and the verification URL to the user. The user then navigates to the verification URL and enters the device code to complete the authentication process. The method then polls for a token, which is granted after the user successfully signs in with the device code. ++## Sign in to your app ++1. In your terminal, navigate to the root folder of your browserless app and run the app with the `dotnet run` command. +1. Open your browser, then navigate to `https://<Enter_the_Tenant_Subdomain_Here>.ciamlogin.com/common/oauth2/deviceauth`. Replace `Enter_the_Tenant_Subdomain_Here` with the Directory (tenant) subdomain. You should see a page similar to the following screenshot: ++ :::image type="content" source="media/how-to-browserless-dotnet-sign-in-sign-in/browserless-app-dotnet-enter-code.png" alt-text="Screenshot of the enter code prompt in a .NET browserless application using the device code flow.":::  ++1. Copy the device code from the message in the terminal and paste it in the **Enter Code** prompt to authenticate. 
After entering the code, you'll be redirected to the sign-in page as follows: ++ :::image type="content" source="media/how-to-browserless-dotnet-sign-in-sign-in/browserless-app-dotnet-sign-in-page.png" alt-text="Screenshot showing the sign-in page in a .NET browserless application.":::  ++1. At this point, you most likely don't have an account. If so, select **No account? Create one**, which starts the sign-up flow. Follow this flow to create a new account. If you already have an account, enter your credentials and sign in. +1. After completing the sign-up flow and signing in, you see a page similar to the following screenshot: ++ :::image type="content" source="media/how-to-browserless-dotnet-sign-in-sign-in/browserless-app-dotnet-signed-in-user.png" alt-text="Screenshot showing a signed-in user in a .NET browserless application.":::  ++1. Return to the terminal to see your authentication information, including the ID token claims. ++You can view the full code for this sample in the [code repo](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/tree/main/1-Authentication/4-sign-in-device-code). ++## See also ++- [Authenticate users in a sample Node.js browserless application.](./sample-browserless-app-node-sign-in.md) +- [Customize branding for your sign-in experience](./how-to-customize-branding-customers.md) |
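The ID token whose claims the tutorial prints is a JWT, so its payload can also be inspected by base64url-decoding the middle segment. A small illustrative sketch (for local debugging only; a relying party must validate the signature before trusting any claim):

```javascript
// Decode (WITHOUT validating!) the payload of a JWT to inspect its claims.
// Fine for local debugging; never skip signature validation when you rely
// on the claims for authorization decisions.
function decodeJwtPayload(jwt) {
  const payload = jwt.split(".")[1]; // a JWT is header.payload.signature
  const json = Buffer.from(payload, "base64url").toString("utf8");
  return JSON.parse(json);
}

// A toy token whose payload is {"name":"Casey"} - not a real Azure AD token.
const toy = ["e30", Buffer.from('{"name":"Casey"}').toString("base64url"), "sig"].join(".");
console.log(decodeJwtPayload(toy).name); // Casey
```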
active-directory | Tutorial Browserless App Dotnet Sign In Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-browserless-app-dotnet-sign-in-prepare-tenant.md | + + Title: "Tutorial: Register and configure .NET browserless app authentication details in a customer tenant" +description: Learn how to register and configure .NET browserless app authentication details in a customer tenant so as to sign in users using Device Code flow. +++++++++ Last updated : 07/24/2023++#Customer intent: As a dev, devops, I want to learn how to register and configure .NET browserless app authentication details in a customer tenant so as to sign in users using Device Code flow. +++# Tutorial: Register and configure .NET browserless app authentication details in a customer tenant ++In this article, you prepare your Azure Active Directory (Azure AD) for customers tenant for authentication. This tutorial is part of a series that guides you through the steps of building an app that authenticates users against Azure Active Directory (Azure AD) for Customers using the device code flow. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Register a .NET browserless app in the Microsoft Entra admin center. +> - Create a sign-in and sign-out user flow in your customer tenant. +> - Associate your browserless app with the user flow. ++## Prerequisites ++- [.NET 7.0](https://dotnet.microsoft.com/download/dotnet/7.0) or later. ++- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor. ++- Azure AD for customers tenant. If you don't already have one, [sign up for a free trial](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). 
++## Register the browserless app +++## Enable public client flow +++## Grant API permissions ++Since this app signs in users, add delegated permissions: +++## Create a user flow +++## Associate the browserless app with the user flow +++## Record your registration details ++The next step after this tutorial is to build a WPF desktop app that authenticates users. Ensure you have the following details: ++- The Application (client) ID of the .NET browserless app that you registered. +- The Directory (tenant) subdomain where you registered your .NET browserless app. If your primary domain is *contoso.onmicrosoft.com*, your Directory (tenant) subdomain is *contoso*. If you don't know your primary domain, learn how to [read tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). ++## Next steps ++> [!div class="nextstepaction"] +> [Sign in users to your .NET browserless app >](./tutorial-browserless-app-dotnet-sign-in-build-app.md) |
active-directory | Tutorial Daemon Node Call Api Build App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-node-call-api-build-app.md | + + Title: "Tutorial: Call a web API from your Node.js daemon application" +description: Learn about how to prepare your Node.js client daemon app, then configure it to acquire an access token for calling a web API. +++++++++ Last updated : 07/26/2023++++# Tutorial: Call a web API from your Node.js daemon application ++This tutorial demonstrates how to prepare your Node.js daemon client app, then configure it to acquire an access token for calling a web API. The application you build uses [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) to simplify adding authorization to your node daemon application. ++The [OAuth 2.0 client credentials grant flow](../../develop/v2-oauth2-client-creds-grant-flow.md) permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate before calling another web service. The client credentials grant flow is commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. ++In this tutorial, you'll: ++> [!div class="checklist"] +> - Create a Node.js app, then install dependencies. +> - Enable the Node.js app to acquire an access token for calling a web API. ++## Prerequisites +++- [Node.js](https://nodejs.org). +- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor. +- Registration details for the Node.js daemon app and web API you created in the [prepare tenant tutorial](tutorial-daemon-node-call-api-prepare-tenant.md). +- A protected web API that is running and ready to accept requests. 
If you haven't created one, see the [create a protected web API tutorial](how-to-protect-web-api-dotnet-core-overview.md). Ensure this web API is using the app registration details you created in the [prepare tenant tutorial](tutorial-daemon-node-call-api-prepare-tenant.md). Make sure your web API exposes the following endpoints via https: + - `GET /api/todolist` to get all todos. + - `POST /api/todolist` to add a todo. ++## Create the Node.js daemon project ++Create a folder to host your Node.js daemon application, such as `ciam-call-api-node-daemon`: ++1. In your terminal, change directory into your Node daemon app folder, such as `cd ciam-call-api-node-daemon`, then run `npm init -y`. This command creates a default `package.json` file for your Node.js project. ++1. Create more folders and files to achieve the following project structure: ++ ``` + ciam-call-api-node-daemon/ + ├── auth.js + ├── authConfig.js + ├── fetch.js + ├── index.js + └── package.json + ``` ++## Install app dependencies ++In your terminal, install the `axios`, `yargs`, and `@azure/msal-node` packages by running the following command: ++```console +npm install axios yargs @azure/msal-node +``` ++## Create MSAL configuration object ++In your code editor, open the *authConfig.js* file, then add the following code: ++```javascript +require('dotenv').config(); ++/** + * Configuration object to be passed to MSAL instance on creation. 
+ * For a full list of MSAL Node configuration parameters, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/configuration.md + */ +const msalConfig = { + auth: { + clientId: process.env.CLIENT_ID || 'Enter_the_Application_Id_Here', // 'Application (client) ID' of app registration in Azure portal - this value is a GUID + authority: process.env.AUTHORITY || 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // Replace "Enter_the_Tenant_Subdomain_Here" with your tenant subdomain + clientSecret: process.env.CLIENT_SECRET || 'Enter_the_Client_Secret_Here', // Client secret generated from the app + }, + system: { + loggerOptions: { + loggerCallback(loglevel, message, containsPii) { + console.log(message); + }, + piiLoggingEnabled: false, + logLevel: 'Info', + }, + }, +}; +const protectedResources = { + apiToDoList: { + endpoint: process.env.API_ENDPOINT || 'https://localhost:44351/api/todolist', + scopes: [process.env.SCOPES || 'api://Enter_the_Web_Api_Application_Id_Here'], + }, +}; ++module.exports = { + msalConfig, + protectedResources, +}; +``` +The `msalConfig` object contains a set of configuration options that you use to customize the behavior of your authorization flow. ++In your *authConfig.js* file, replace: ++- `Enter_the_Application_Id_Here` with the Application (client) ID of the client daemon app that you registered earlier. ++- `Enter_the_Tenant_Subdomain_Here` with the Directory (tenant) subdomain. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). ++- `Enter_the_Client_Secret_Here` with the client daemon app secret value that you copied earlier. ++- `Enter_the_Web_Api_Application_Id_Here` with the Application (client) ID of the web API app that you copied earlier. 
++Notice that the `scopes` property in the `protectedResources` variable is the resource identifier (application ID URI) of the [web API](tutorial-daemon-node-call-api-prepare-tenant.md#register-a-web-api-application) that you registered earlier. The complete scope URI looks similar to `api://Enter_the_Web_Api_Application_Id_Here/.default`. ++## Acquire an access token ++In your code editor, open the *auth.js* file, then add the following code: ++```javascript +const msal = require('@azure/msal-node'); +const { msalConfig, protectedResources } = require('./authConfig'); +/** + * With the client credentials flow, permissions need to be granted in the portal by a tenant administrator. + * The scope is always in the format '<resource-appId-uri>/.default'. For more, visit: + * https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow + */ +const tokenRequest = { + scopes: [`${protectedResources.apiToDoList.scopes}/.default`], +}; ++const apiConfig = { + uri: protectedResources.apiToDoList.endpoint, +}; ++/** + * Initialize a confidential client application. For more info, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md + */ +const cca = new msal.ConfidentialClientApplication(msalConfig); +/** + * Acquires token with client credentials. + * @param {object} tokenRequest + */ +async function getToken(tokenRequest) { + return await cca.acquireTokenByClientCredential(tokenRequest); +} ++module.exports = { + apiConfig: apiConfig, + tokenRequest: tokenRequest, + getToken: getToken, +}; +``` +In the code: ++- Prepare the `tokenRequest` and `apiConfig` objects. The `tokenRequest` contains the scope for which you request an access token. The scope looks something like `api://Enter_the_Web_Api_Application_Id_Here/.default`. The `apiConfig` object contains the endpoint to your web API. 
Learn more about [OAuth 2.0 client credentials flow](../../develop/v2-oauth2-client-creds-grant-flow.md). ++- You create a confidential client instance by passing the `msalConfig` object to the [ConfidentialClientApplication](/javascript/api/@azure/msal-node/confidentialclientapplication#constructors) class's constructor. ++ ```javascript + const cca = new msal.ConfidentialClientApplication(msalConfig); + ``` ++- You then use the [acquireTokenByClientCredential](/javascript/api/@azure/msal-node/confidentialclientapplication#@azure-msal-node-confidentialclientapplication-acquiretokenbyclientcredential) function to acquire an access token. You implement this logic in the `getToken` function: ++ ```javascript + cca.acquireTokenByClientCredential(tokenRequest); + ``` +Once you acquire an access token, you can proceed to call an API. ++## Call an API ++In your code editor, open the *fetch.js* file, then add the following code: ++```javascript +const axios = require('axios'); ++/** + * Calls the endpoint with authorization bearer token. + * @param {string} endpoint + * @param {string} accessToken + */ +async function callApi(endpoint, accessToken) { ++ const options = { + headers: { + Authorization: `Bearer ${accessToken}` + } + }; ++ console.log('request made to web API at: ' + new Date().toString()); ++ try { + const response = await axios.get(endpoint, options); + return response.data; + } catch (error) { + console.log(error) + return error; + } +}; ++module.exports = { + callApi: callApi +}; +``` +In this code, you make a call to the web API by passing the access token as a bearer token in the request `Authorization` header: ++```javascript + Authorization: `Bearer ${accessToken}` +``` ++You use the access token that you acquired earlier in [Acquire an access token](#acquire-an-access-token). ++Once the web API receives the request, it evaluates it, then determines that it's an application request. If the access token is valid, the web API returns the requested data. 
Otherwise, the API returns a `401 Unauthorized` HTTP error. ++## Finalize your daemon app ++In your code editor, open the *index.js* file, then add the following code: ++```javascript +#!/usr/bin/env node ++// read in env settings ++require('dotenv').config(); ++const yargs = require('yargs'); +const fetch = require('./fetch'); +const auth = require('./auth'); ++const options = yargs + .usage('Usage: --op <operation_name>') + .option('op', { alias: 'operation', describe: 'operation name', type: 'string', demandOption: true }) + .argv; ++async function main() { + console.log(`You have selected: ${options.op}`); ++ switch (options.op) { + case 'getToDos': + try { + const authResponse = await auth.getToken(auth.tokenRequest); + const todos = await fetch.callApi(auth.apiConfig.uri, authResponse.accessToken); + console.log(todos); // print the data returned by the web API + } catch (error) { + console.log(error); + } ++ break; + default: + console.log('Select an operation first'); + break; + } +}; ++main(); +``` ++This code is the entry point to your app. You use the [yargs](https://www.npmjs.com/package/yargs) command-line argument parsing library for Node.js apps to interactively fetch an access token, then call the API. You use the `getToken` and `callApi` functions you defined earlier: ++```javascript +const authResponse = await auth.getToken(auth.tokenRequest); +const todos = await fetch.callApi(auth.apiConfig.uri, authResponse.accessToken); +``` +## Run and test daemon app and API ++At this point, you're ready to test your client daemon app and web API: ++1. Use the steps you learned in [Secure an ASP.NET web API](how-to-protect-web-api-dotnet-core-overview.md) tutorial to start your web API. Your web API is now ready to serve client requests. If you don't run your web API on port `44351` as specified in the *authConfig.js* file, make sure you update the *authConfig.js* file to use the correct web API's port number. ++1. 
In your terminal, make sure you're in the project folder that contains your daemon Node.js app, such as `ciam-call-api-node-daemon`, then run the following command: ++ ```console + node . --op getToDos + ``` ++If your daemon app and web API run successfully, you should find the data returned by the web API endpoint in the `todos` variable printed to your console window, similar to the following output: ++```console +{ + id: 1, + owner: '3e8....-db63-43a2-a767-5d7db...', + description: 'Pick up grocery' +}, +{ + id: 2, + owner: 'c3cc....-c4ec-4531-a197-cb919ed.....', + description: 'Finish invoice report' +}, +{ + id: 3, + owner: 'a35e....-3b8a-4632-8c4f-ffb840d.....', + description: 'Water plants' +} +``` ++## Next steps ++Learn how to [Use client certificate instead of a secret for authentication in your Node.js confidential app](how-to-web-app-node-use-certificate.md). |
active-directory | Tutorial Daemon Node Call Api Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-node-call-api-prepare-tenant.md | + + Title: 'Tutorial: Prepare your customer tenant to authorize a Node.js daemon application' +description: Learn how to prepare your customer tenant to authorize your Node.js daemon application +++++++++ Last updated : 07/26/2023++++# Tutorial: Prepare your customer tenant to authorize a Node.js daemon application ++In this tutorial, you learn how to acquire an access token, then call a web API in a Node.js daemon application. You enable the client daemon app to acquire an access token using its own identity. To do so, you first register your application in your Azure Active Directory (Azure AD) for customers tenant. ++In this tutorial, you'll: ++> [!div class="checklist"] +> - Register a web API and configure app permissions in the Microsoft Entra admin center. +> - Register a client daemon application, then grant it app permissions in the Microsoft Entra admin center. +> - Create a client secret for your daemon application in the Microsoft Entra admin center. ++If you've already registered a client daemon application and a web API in the Microsoft Entra admin center, you can skip the steps in this tutorial, then proceed to [Acquire access token for calling an API](tutorial-daemon-node-call-api-build-app.md). ++## Prerequisites ++- An Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>. ++## Register a web API application +++## Configure app roles +++## Configure optional claims +++## Register the daemon app +++## Create a client secret +++## Grant API permissions to the daemon app +++## Record your app registration details ++In the next step, you prepare your daemon application. 
Make sure you have the following details: ++- The Application (client) ID of the client daemon app that you registered. +- The Directory (tenant) subdomain where you registered your daemon app. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). +- The application secret value for the daemon app you created. +- The Application (client) ID of the web API app you registered. +++## Next steps ++In the next tutorial, you prepare your daemon Node.js application. ++> [!div class="nextstepaction"] +> [Prepare your daemon application](tutorial-daemon-node-call-api-build-app.md) |
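The details recorded in this tutorial map onto the environment variables that the companion build-app tutorial's *authConfig.js* reads through `dotenv` (`CLIENT_ID`, `AUTHORITY`, `CLIENT_SECRET`, `API_ENDPOINT`, `SCOPES`). As a hedged sketch, a matching `.env` file could look like the following; the variable names come from that tutorial's code, and every value is a placeholder:

```
CLIENT_ID=Enter_the_Application_Id_Here
AUTHORITY=https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/
CLIENT_SECRET=Enter_the_Client_Secret_Here
API_ENDPOINT=https://localhost:44351/api/todolist
SCOPES=api://Enter_the_Web_Api_Application_Id_Here
```

Keep this file out of source control, since it holds the client secret.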
active-directory | Direct Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md | Next, configure federation with the IdP configured in step 1 in Azure AD. You ca ### To configure federation in the Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com) as an External Identity Provider Administrator or a Global Administrator. 2. In the left pane, select **Azure Active Directory**. 3. Select **External Identities** > **All identity providers**. |
active-directory | External Collaboration Settings Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md | For B2B collaboration with other Azure AD organizations, you should also review ## Configure settings in the portal + 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account and open the **Azure Active Directory** service. 1. Select **External Identities** > **External collaboration settings**. |
active-directory | Facebook Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md | To use a Facebook account as an [identity provider](identity-providers.md), you Now you'll set the Facebook client ID and client secret, either by entering it in the Azure portal or by using PowerShell. You can test your Facebook configuration by signing up via a user flow on an app enabled for self-service sign-up. ### To configure Facebook federation in the Azure portal++ 1. Sign in to the [Azure portal](https://portal.azure.com) as an External Identity Provider Administrator or a Global Administrator. 2. Under **Azure services**, select **Azure Active Directory**. 3. In the left menu, select **External Identities**. Now you'll set the Facebook client ID and client secret, either by entering it i You can delete your Facebook federation setup. If you do so, any users who have signed up through user flows with their Facebook accounts will no longer be able to sign in. ### To delete Facebook federation in the Azure portal: + 1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of your Azure AD tenant. 2. Under **Azure services**, select **Azure Active Directory**. 3. In the left menu, select **External Identities**. You can delete your Facebook federation setup. If you do so, any users who have - [Add self-service sign-up to an app](self-service-sign-up-user-flow.md) - [SAML/WS-Fed IdP federation](direct-federation.md)-- [Google federation](google-federation.md)+- [Google federation](google-federation.md) |
active-directory | Invite Internal Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md | You can use the Azure portal, PowerShell, or the invitation API to send a B2B in ## Use the Azure portal to send a B2B invitation + 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or User administrator account for the directory. 1. Select the **Azure Active Directory** service. 1. Select **Users**. |
active-directory | Leave The Organization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md | In these cases, you can select **Leave**, but then you'll see a message saying y ## More information for administrators + Administrators can use the **External user leave settings** to control whether external users can remove themselves from their organization. If you disallow the ability for external users to remove themselves from your organization, external users will need to contact your admin or privacy contact to be removed. > [!IMPORTANT] When a B2B collaboration user leaves an organization, the user's account is "sof If desired, a tenant administrator can permanently delete the account at any time during the soft-delete period with the following steps. This action is irrevocable. -1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. 1. Under **Manage**, select **Users**. |
active-directory | One Time Passcode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md | The email one-time passcode feature is now turned on by default for all new tena ### To enable or disable email one-time passcodes -1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD global administrator. ++1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD global administrator. 1. In the navigation pane, select **Azure Active Directory**. |
active-directory | Reset Redemption Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md | To reset a user's redemption status, you'll need one of the following roles: ## Use the Azure portal to reset redemption status -1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator or User administrator account for the directory. ++1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or User administrator account for the directory. 1. Search for and select **Azure Active Directory**. 1. Select **Users**. 1. In the list, select the user's name to open their user profile. ContentType: application/json - [Properties of an Azure AD B2B guest user](user-properties.md) - [Add Azure Active Directory B2B collaboration users by using PowerShell](customize-invitation-api.md#powershell)- |
active-directory | Self Service Sign Up Add Api Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md | description: Configure a web API to be used in a user flow. -+ Last updated 01/16/2023 To use an [API connector](api-connectors-overview.md), you first create the API ## Create an API connector -1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Sign in to the [Azure portal](https://portal.azure.com). 2. Under **Azure services**, select **Azure Active Directory**. 3. In the left menu, select **External Identities**. 4. Select **All API connectors**, and then select **New API connector**. Additionally, the claims are typically sent in all requests: Follow these steps to add an API connector to a self-service sign-up user flow. -1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Under **Azure services**, select **Azure Active Directory**. 3. In the left menu, select **External Identities**. 4. Select **User flows**, and then select the user flow you want to add the API connector to. |
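When your API receives the claims in a request, it replies with a JSON body that tells Azure AD how to proceed. As a hedged sketch (the `version`/`action` shape follows the published API connector response format; the helper name and the extra attribute are hypothetical):

```javascript
// Minimal sketch of a self-service sign-up API connector response.
// action 'Continue' lets the sign-up proceed; attribute values returned
// alongside it can overwrite the values collected from the user.
function buildContinuationResponse(overwrittenAttributes = {}) {
  return {
    version: '1.0.0',
    action: 'Continue',
    ...overwrittenAttributes,
  };
}

console.log(JSON.stringify(buildContinuationResponse({ city: 'Redmond' })));
```

Other documented actions, such as blocking the sign-up or showing a validation error, use the same envelope with a different `action` value.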
active-directory | Self Service Sign Up Add Approvals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-add-approvals.md | This article gives an example of how to integrate with an approval system. In th ## Register an application for your approval system + You need to register your approval system as an application in your Azure AD tenant so it can authenticate with Azure AD and have permission to create users. Learn more about [authentication and authorization basics for Microsoft Graph](/graph/auth/auth-concepts). 1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. To create these connectors, follow the steps in [create an API connector](self-s Now you'll add the API connectors to a self-service sign-up user flow with these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Under **Azure services**, select **Azure Active Directory**. 3. In the left menu, select **External Identities**. 4. Select **User flows**, and then select the user flow you want to enable the API connector for. |
active-directory | Self Service Sign Up Secure Api Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-secure-api-connector.md | description: Secure your custom RESTful APIs used as API connectors in self-serv -+ Last updated 02/15/2023 You can protect your API endpoint by using either HTTP basic authentication or H ## HTTP basic authentication + HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617). Basic authentication works as follows: Azure AD sends an HTTP request with the client credentials (`username` and `password`) in the `Authorization` header. The credentials are formatted as the base64-encoded string `username:password`. Your API then is responsible for checking these values to perform other authorization decisions. To configure an API Connector with HTTP basic authentication, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 2. Under **Azure services**, select **Azure AD**. 1. In the left menu, select **External Identities**. 1. Select **All API connectors**, and then select the **API Connector** you want to configure. You can then [export the certificate](../../key-vault/certificates/how-to-export To configure an API Connector with client certificate authentication, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 2. Under **Azure services**, select **Azure AD**. 1. In the left menu, select **External Identities**. 1. Select **All API connectors**, and then select the **API Connector** you want to configure. |
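Because the credentials arrive as the base64-encoded `username:password` string described above, the endpoint's check can be sketched as follows. This is a minimal Node.js illustration, not code from the article; the function and variable names are hypothetical, and a production implementation should prefer a constant-time comparison:

```javascript
// Sketch: validate the HTTP basic Authorization header that Azure AD sends
// with each API connector request.
function checkBasicAuth(authorizationHeader, expectedUser, expectedPassword) {
  if (!authorizationHeader || !authorizationHeader.startsWith('Basic ')) {
    return false;
  }
  // Decode the base64 payload back into 'username:password'
  const decoded = Buffer.from(authorizationHeader.slice(6), 'base64').toString('utf8');
  const separator = decoded.indexOf(':');
  if (separator < 0) {
    return false;
  }
  const username = decoded.slice(0, separator);
  const password = decoded.slice(separator + 1);
  return username === expectedUser && password === expectedPassword;
}
```

Splitting on the first `:` only matters because RFC 2617 allows the password itself to contain colons, while the username cannot.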
active-directory | Self Service Sign Up User Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-user-flow.md | User attributes are values collected from the user during self-service sign-up. ## Enable self-service sign-up for your tenant + Before you can add a self-service sign-up user flow to your applications, you need to enable the feature for your tenant. After it's enabled, controls become available in the user flow that let you associate the user flow with an application. > [!NOTE] Next, you'll create the user flow for self-service sign-up and add it to an appl You can choose the order in which the attributes are displayed on the sign-up page. -1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. 2. Select **External Identities**, then select **User flows**. 3. Select the self-service sign-up user flow from the list. 4. Under **Customize**, select **Page layouts**. |
active-directory | Tenant Restrictions V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md | Settings for tenant restrictions V2 are located in the Azure portal under **Cros ### To configure default tenant restrictions + 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator, Security administrator, or Conditional Access administrator account. Then open the **Azure Active Directory** service. 1. Select **External Identities** |
active-directory | Tutorial Bulk Invite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md | If you use Azure Active Directory (Azure AD) B2B collaboration to work with exte ## Invite guest users in bulk + 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a global administrator in the organization. 2. In the navigation pane, select **Azure Active Directory**. 3. Under **Manage**, select **All Users**. |
active-directory | Use Dynamic Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/use-dynamic-groups.md | A dynamic group is a dynamic configuration of security group membership for Azur [Azure AD Premium P1 or P2 licensing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) is required to create and use dynamic groups. Learn more in [Create attribute-based rules for dynamic group membership in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md). ## Creating an "all users" dynamic group++ You can create a group containing all users within a tenant using a membership rule. When users are added or removed from the tenant in the future, the group's membership is adjusted automatically. 1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is assigned the Global administrator or User administrator role in the tenant. |
active-directory | User Flow Add Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-add-custom-attributes.md | The `<extensions-app-id>` is specific to your tenant. To find this identifier, n ## Create a custom attribute + 1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Under **Azure services**, select **Azure Active Directory**. 3. In the left menu, select **External Identities**. |
active-directory | User Flow Customize Language | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-customize-language.md | By default, language customization is enabled for users signing up to ensure a c ## Customize your strings + Language customization enables you to customize any string in your user flow. 1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. Azure AD includes support for the following languages. User flow languages are p ## Next steps - [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md) -- [Define custom attributes for user flows](user-flow-add-custom-attributes.md)+- [Define custom attributes for user flows](user-flow-add-custom-attributes.md) |
active-directory | Add Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-custom-domain.md | Before you can add a custom domain name, create your domain name with a domain r ## Create your directory in Azure AD + After you get your domain name, you can create your first Azure AD directory. Sign in to the [Azure portal](https://portal.azure.com) for your directory, using an account with the **Owner** role for the subscription. Create your new directory by following the steps in [Create a new tenant for your organization](active-directory-access-create-new-tenant.md#create-a-new-tenant-for-your-organization). For more information about subscription roles, see [Azure roles](../../role-base After you create your directory, you can add your custom domain name. -1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory. +1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory. 1. Search for and select *Azure Active Directory* from any page. Then select **Custom domain names** > **Add custom domain**. After you register your custom domain name, make sure it's valid in Azure AD. Th To verify your custom domain name, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory. +1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory. 1. Search for and select *Azure Active Directory* from any page, then select **Custom domain names**. |
active-directory | Add Users Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-users-azure-active-directory.md | Add new users or delete existing users from your Azure Active Directory (Azure A ## Add a new user + You can create a new user for your organization or invite an external user from the same starting point. -1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role. +1. Sign in to the [Azure portal](https://portal.azure.com) in the User Administrator role. 1. Navigate to **Azure Active Directory** > **Users**. You can delete an existing user using Azure portal. To delete a user, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) using one of the appropriate roles listed above. +1. Sign in to the [Azure portal](https://portal.azure.com) using one of the appropriate roles listed above. 1. Go to **Azure Active Directory** > **Users**. |
active-directory | Concept Fundamentals Security Defaults | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md | Title: Providing a default level of security in Azure Active Directory -description: Azure AD Security defaults that help protect organizations from common identity attacks +description: Azure AD security defaults that help protect organizations from common identity attacks Previously updated : 03/23/2023 Last updated : 07/25/2023 -+ -- # Security defaults in Azure AD -Microsoft is making security defaults available to everyone, because managing security can be difficult. Identity-related attacks like password spray, replay, and phishing are common in today's environment. More than 99.9% of these identity-related attacks are stopped by using multifactor authentication (MFA) and blocking legacy authentication. The goal is to ensure that all organizations have at least a basic level of security enabled at no extra cost. +Security defaults make it easier to help protect your organization from identity-related attacks like password spray, replay, and phishing common in today's environments. ++Microsoft is making these preconfigured security settings available to everyone, because we know managing security can be difficult. Based on our learnings more than 99.9% of those common identity-related attacks are stopped by using multifactor authentication (MFA) and blocking legacy authentication. Our goal is to ensure that all organizations have at least a basic level of security enabled at no extra cost. -Security defaults make it easier to help protect your organization from these identity-related attacks with preconfigured security settings: +These basic controls include: - [Requiring all users to register for Azure AD Multifactor Authentication](#require-all-users-to-register-for-azure-ad-multifactor-authentication). 
- [Requiring administrators to do multifactor authentication](#require-administrators-to-do-multifactor-authentication). Security defaults make it easier to help protect your organization from these id ### Who should use Conditional Access? -- If you're an organization currently using Conditional Access policies, security defaults are probably not right for you. - If you're an organization with Azure Active Directory Premium licenses, security defaults are probably not right for you. - If your organization has complex security requirements, you should consider [Conditional Access](#conditional-access). Security defaults make it easier to help protect your organization from these id If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants at creation. -> [!NOTE] -> To help protect organizations, we're always working to improve the security of Microsoft account services. As part of this, free tenants not actively using multifactor authentication for all their users will be periodically notified for the automatic enablement of the security defaults setting. After this setting is enabled, all users in the organization will need to register for multifactor authentication. To avoid confusion, please refer to the email you received and alternatively you can [disable security defaults](#disabling-security-defaults) after it's enabled. +To help protect organizations, we're always working to improve the security of Microsoft account services. As part of this protection, customers are periodically notified for the automatic enablement of the security defaults if they: ++- Haven't enabled Conditional Access policies. +- Don't have premium licenses. +- Aren’t actively using legacy authentication clients. ++After this setting is enabled, all users in the organization will need to register for multifactor authentication. 
To avoid confusion, refer to the email you received and alternatively you can [disable security defaults](#disabling-security-defaults) after it's enabled. To enable security defaults in your directory: -1. Sign in to the [Azure portal](https://portal.azure.com) as a security administrator, Conditional Access administrator, or global administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as a Security Administrator, Conditional Access Administrator, or Global Administrator. 1. Browse to **Azure Active Directory** > **Properties**. 1. Select **Manage security defaults**. 1. Set **Security defaults** to **Enabled**. To enable security defaults in your directory: ![Screenshot of the Azure portal with the toggle to enable security defaults](./media/concept-fundamentals-security-defaults/security-defaults-azure-ad-portal.png) +### Revoking active tokens ++As part of enabling security defaults, administrators should revoke all existing tokens to require all users to register for multifactor authentication. This revocation event forces previously authenticated users to authenticate and register for multifactor authentication. This task can be accomplished using the [Revoke-AzureADUserAllRefreshToken](/powershell/module/azuread/revoke-azureaduserallrefreshtoken) PowerShell cmdlet. + ## Enforced security policies ### Require all users to register for Azure AD Multifactor Authentication Administrators have increased access to your environment. 
Because of the power t After registration with Azure AD Multifactor Authentication is finished, the following Azure AD administrator roles will be required to do extra authentication every time they sign in: -- Global administrator-- Application administrator-- Authentication administrator-- Billing administrator-- Cloud application administrator-- Conditional Access administrator-- Exchange administrator-- Helpdesk administrator-- Password administrator-- Privileged authentication administrator-- Security administrator-- SharePoint administrator-- User administrator+- Global Administrator +- Application Administrator +- Authentication Administrator +- Billing Administrator +- Cloud Application Administrator +- Conditional Access Administrator +- Exchange Administrator +- Helpdesk Administrator +- Password Administrator +- Privileged Authentication Administrator +- Privileged Role Administrator +- Security Administrator +- SharePoint Administrator +- User Administrator ### Require users to do multifactor authentication when necessary We tend to think that administrator accounts are the only accounts that need ext After these attackers gain access, they can request access to privileged information for the original account holder. They can even download the entire directory to do a phishing attack on your whole organization. -One common method to improve protection for all users is to require a stronger form of account verification, such as multifactor authentication, for everyone. After users complete registration, they'll be prompted for another authentication whenever necessary. Azure AD decides when a user will be prompted for multifactor authentication, based on factors such as location, device, role and task. This functionality protects all applications registered with Azure AD including SaaS applications. 
+One common method to improve protection for all users is to require a stronger form of account verification, such as multifactor authentication, for everyone. After users complete registration, they'll be prompted for another authentication whenever necessary. Azure AD decides when a user is prompted for multifactor authentication, based on factors such as location, device, role and task. This functionality protects all applications registered with Azure AD including SaaS applications. > [!NOTE] > In case of [B2B direct connect](../external-identities/b2b-direct-connect-overview.md) users, any multifactor authentication requirement from security defaults enabled in resource tenant will need to be satisfied, including multifactor authentication registration by the direct connect user in their home tenant. Security defaults users are required to register for and use Azure AD Multifacto ### Backup administrator accounts -Every organization should have at least two backup administrator accounts configured. We call these emergency access accounts. +Every organization should have at least two backup administrators configured. We call these emergency access accounts. These accounts may be used in scenarios where your normal administrator accounts can't be used. For example: The person with the most recent global administrator access has left the organization. Azure AD prevents the last global administrator account from being deleted, but it doesn't prevent the account from being deleted or disabled on-premises. Either situation might make the organization unable to recover the account. If your organization is a previous user of per-user based Azure AD Multifactor A ### Conditional Access -You can use Conditional Access to configure policies similar to security defaults, but with more granularity. Conditional Access policies allow selecting other authentication methods and the ability to exclude users, which aren't available in security defaults. 
If you're using Conditional Access in your environment today, security defaults won't be available to you. +You can use Conditional Access to configure policies similar to security defaults, but with more granularity. Conditional Access policies allow selecting other authentication methods and the ability to exclude users, which aren't available in security defaults. If you're using Conditional Access in your environment today, security defaults aren't available to you. ![Warning message that you can have security defaults or Conditional Access not both](./media/concept-fundamentals-security-defaults/security-defaults-conditional-access.png) Organizations that choose to implement Conditional Access policies that replace To disable security defaults in your directory: -1. Sign in to the [Azure portal](https://portal.azure.com) as a security administrator, Conditional Access administrator, or global administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as a Security Administrator, Conditional Access Administrator, or Global Administrator. 1. Browse to **Azure Active Directory** > **Properties**. 1. Select **Manage security defaults**. 1. Set **Security defaults** to **Disabled (not recommended)**. |
active-directory | Create New Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/create-new-tenant.md | If you don't have an Azure subscription, create a [free account](https://azure.m ## Create a new tenant for your organization -After you sign in to the Azure portal, you can create a new tenant for your organization. Your new tenant represents your organization and helps you to manage a specific instance of Microsoft cloud services for your internal and external users. ++After you sign in to the [Azure portal](https://portal.azure.com), you can create a new tenant for your organization. Your new tenant represents your organization and helps you to manage a specific instance of Microsoft cloud services for your internal and external users. >[!Note] >If you're unable to create Azure AD or Azure AD B2C tenant, review your user settings page to ensure that tenant creation isn't switched off. If tenant creation is switched off, ask your _Global Administrator_ to assign you a _Tenant Creator_ role. ### To create a new tenant -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. From the Azure portal menu, select **Azure Active Directory**. |
active-directory | Custom Security Attributes Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-add.md | To add or deactivate custom security attributes definitions, you must have: ## Add an attribute set + An attribute set is a collection of related attributes. All custom security attributes must be part of an attribute set. Attribute sets cannot be renamed or deleted. 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Custom Security Attributes Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md | To grant access to the appropriate people, follow these steps to assign one of t ### Assign roles at attribute set scope + The following examples show how to assign a custom security attribute role to a principal at an attribute set scope named Engineering. # [Portal](#tab/azure-portal) |
active-directory | Groups View Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/groups-view-azure-portal.md | Before you begin, you'll need to: - Create an Azure Active Directory tenant. For more information, see [Access the Azure portal and create a new tenant](active-directory-access-create-new-tenant.md). -## Sign in to the Azure portal +<a name='sign-in-to-the-azure-portal'></a> -You must sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory. +## Sign in to the [Azure portal](https://portal.azure.com) +++You must sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory. ## Create a new group |
active-directory | How To Create Delete Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-create-delete-users.md | The required role of least privilege varies based on the type of user you're add ## Create a new user -1. Sign in to the [Azure portal](https://portal.azure.com/) in the **User Administrator** role. ++1. Sign in to the [Azure portal](https://portal.azure.com) in the **User Administrator** role. 1. Navigate to **Azure Active Directory** > **Users**. The final tab captures several key details from the user creation process. Revie The overall process for inviting an external guest user is similar, except for a few details on the **Basics** tab and the email invitation process. You can't assign external users to administrative units. -1. Sign in to the [Azure portal](https://portal.azure.com/) in the **User Administrator** role. A role with Guest Inviter privileges can also invite external users. +1. Sign in to the [Azure portal](https://portal.azure.com) in the **User Administrator** role. A role with Guest Inviter privileges can also invite external users. 1. Navigate to **Azure Active Directory** > **Users**. You can delete an existing user using Azure portal. To delete a user, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) using one of the appropriate roles. +1. Sign in to the [Azure portal](https://portal.azure.com) using one of the appropriate roles. 1. Go to **Azure Active Directory** > **Users**. When a user is deleted, any licenses consumed by the user are made available for * [Learn about B2B collaboration users](../external-identities/add-users-administrator.md) * [Review the default user permissions](users-default-permissions.md)-* [Add a custom domain](add-custom-domain.md) +* [Add a custom domain](add-custom-domain.md) |
active-directory | How To Customize Branding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md | In the following examples replace the contoso.com with your own tenant name, or ## How to navigate the company branding process -1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory. ++1. Sign in to the [Azure portal](https://portal.azure.com) using a Global Administrator account for the directory. 2. Go to **Azure Active Directory** > **Company branding** > **Customize**. - If you currently have a customized sign-in experience, the **Edit** button is available. Once your default sign-in experience is created, select the **Edit** button to m To create an inclusive experience for all of your users, you can customize the sign-in experience based on browser language. -1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory. +1. Sign in to the [Azure portal](https://portal.azure.com) using a Global Administrator account for the directory. 2. Go to **Azure Active Directory** > **Company branding** > **Add browser language**. |
active-directory | How To Find Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-find-tenant.md | Azure subscriptions have a trust relationship with Azure Active Directory (Azure ## Find tenant ID through the Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory**. |
active-directory | How To Manage Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-groups.md | This article covers basic group scenarios where a single group is added to a sin Before adding groups and members, [learn about groups and membership types](concept-learn-about-groups.md) to help you decide which options to use when you create a group. ## Create a basic group and add members++ You can create a basic group and add your members at the same time using the Azure Active Directory (Azure AD) portal. Azure AD roles that can manage groups include **Groups Administrator**, **User Administrator**, **Privileged Role Administrator**, or **Global Administrator**. Review the [appropriate Azure AD roles for managing groups](../roles/delegate-by-task.md#groups) To create a basic group and add members: Members and owners can be added to and removed from existing Azure AD groups. Th Need to add multiple members at one time? Learn about the [add members in bulk](../enterprise-users/groups-bulk-import-members.md) option. -### Add members or owners of a group: +### Add members or owners of a group 1. Sign in to the [Azure portal](https://portal.azure.com). Need to add multiple members at one time? Learn about the [add members in bulk]( The **Group Overview** page updates to show the number of members who are now added to the group. -### Remove members or owners of a group: +### Remove members or owners of a group 1. Go to **Azure Active Directory** > **Groups**. Need to add multiple members at one time? Learn about the [add members in bulk]( ![Screenshot of group members with a name selected and the Remove button highlighted.](media/how-to-manage-groups/groups-remove-member.png) ## Edit group settings+ Using Azure AD, you can edit a group's name, description, or membership type. You'll need the **Groups Administrator** or **User Administrator** role to edit a group's settings. 
To edit your group settings: You can remove an existing Security group from another Security group; however, ![Screenshot of the 'Group membership' page showing both the member and the group details with 'Remove membership' option highlighted.](media/how-to-manage-groups/remove-nested-group.png) ## Delete a group+ You can delete an Azure AD group for any number of reasons, but typically it will be because you: - Chose the incorrect **Group type** option. |
active-directory | How To Manage Stay Signed In Prompt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-stay-signed-in-prompt.md | You must have the **Global Administrator** role to enable the 'Stay signed in?' ## Enable the 'Stay signed in?' prompt + The KMSI setting is managed in the **User settings** of Azure Active Directory (Azure AD). -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Go to **Azure Active Directory** > **Users** > **User settings**. 1. Set the **Show keep user signed in** toggle to **Yes**. |
active-directory | How To Manage User Profile Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-user-profile-info.md | A user's profile information and settings can be managed on an individual basis This article covers how to add user profile information, such as a profile picture and job-specific information. You can also choose to allow users to connect their LinkedIn accounts or restrict access to the Azure AD administration portal. Some settings may be managed in more than one area of Azure AD. For more information about adding new users, see [How to add or delete users in Azure Active Directory](add-users-azure-active-directory.md). ## Add or change profile information++ When new users are created, only some details are added to their user profile. If your organization needs more details, they can be added after the user is created. -1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization. +1. Sign in to the [Azure portal](https://portal.azure.com) in the User Administrator role for the organization. 1. Go to **Azure Active Directory** > **Users** and select a user. |
active-directory | License Users Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/license-users-groups.md | You can view your available service plans, including the individual licenses, ch ### To find your service plan and plan details -1. Sign in to the [Azure portal](https://portal.azure.com/) using a License administrator account in your Azure AD organization. ++1. Sign in to the [Azure portal](https://portal.azure.com) using a License administrator account in your Azure AD organization. 1. Select **Azure Active Directory**, and then select **Licenses**. |
active-directory | Properties Area | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/properties-area.md | You add your organization's privacy information in the **Properties** area of Az ### To access the Properties area and add your privacy information + 1. Sign in to the [Azure portal](https://portal.azure.com) as a tenant administrator. 2. On the left navbar, select **Azure Active Directory**, and then select **Properties**. |
active-directory | Users Assign Role Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-assign-role-azure-portal.md | There are two main steps to the role assignment process. First you'll select the ### Select the role to assign -1. Sign in to the [Azure portal](https://portal.azure.com/) using the Privileged Role Administrator role for the directory. ++1. Sign in to the [Azure portal](https://portal.azure.com) using the Privileged Role Administrator role for the directory. 1. Go to **Azure Active Directory** > **Users**. You can remove role assignments from the **Administrative roles** page for a sel - [Add guest users from another directory](../external-identities/what-is-b2b.md) -- [Explore other user management tasks](../enterprise-users/index.yml)+- [Explore other user management tasks](../enterprise-users/index.yml) |
active-directory | Users Reset Password Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-reset-password-azure-portal.md | Azure Active Directory (Azure AD) administrators can reset a user's password if ## To reset a password -1. Sign in to the [Azure portal](https://portal.azure.com/) as a user administrator, or password administrator. For more information about the available roles, see [Azure AD built-in roles](../roles/permissions-reference.md) ++1. Sign in to the [Azure portal](https://portal.azure.com) as a user administrator, or password administrator. For more information about the available roles, see [Azure AD built-in roles](../roles/permissions-reference.md) 2. Select **Azure Active Directory**, select **Users**, search for and select the user that needs the reset, and then select **Reset Password**. |
active-directory | Users Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-restore.md | You can see all the users that were deleted less than 30 days ago. These users c ### To view your restorable users -1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the organization. ++1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the organization. 2. Select **Azure Active Directory**, select **Users**, and then select **Deleted users**. |
active-directory | Check Status Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md | When a workflow is created, it's important to check its status, and run history ## Run workflow history using the Azure portal + You're able to retrieve run information of a workflow using Lifecycle Workflows. To check the runs of a workflow using the Azure portal, you would do the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com). To list task processing results for a user processing result via API using Micro ## Next steps - [Manage workflow versions](manage-workflow-tasks.md)-- [Delete Lifecycle Workflows](delete-lifecycle-workflow.md)+- [Delete Lifecycle Workflows](delete-lifecycle-workflow.md) |
active-directory | Check Workflow Execution Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-workflow-execution-scope.md | Workflow scheduling will automatically process the workflow for users meeting th ## Check execution user scope of a workflow using the Azure portal + To check the users who fall under the execution scope of a workflow, you'd follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Complete Access Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/complete-access-review.md | For more information, see [License requirements](access-reviews-overview.md#lice ## View the status of an access review++ You can track the progress of access reviews as they're completed. |
active-directory | Create Access Review Pim For Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review-pim-for-groups.md | For more information, see [License requirements](access-reviews-overview.md#lice ## Create a PIM for Groups access review ### Scope++ 1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page. 2. On the left menu, select **Access reviews**. |
active-directory | Create Access Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md | If you're reviewing access to an application, then before creating the review, s ## Create a single-stage access review ### Scope++ 1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page. 2. On the left menu, select **Access reviews**. |
active-directory | Create Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md | You can create and customize workflows for common scenarios by using templates, ## Create a lifecycle workflow by using a template in the Azure portal + If you're using the Azure portal to create a workflow, you can customize existing templates to meet your organization's needs. These templates include one for pre-hire common scenarios. To create a workflow based on a template: |
active-directory | Customize Workflow Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md | For more information on these customizable parameters, see [Common email task pa ## Customize email by using the Azure portal + When you're customizing an email sent via lifecycle workflows, you can choose to customize either a new task or an existing task. You do these customizations the same way whether the task is new or existing, but the following steps walk you through updating an existing task. To customize emails sent from tasks within workflows by using the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Customize Workflow Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md | When you create workflows by using lifecycle workflows, you can fully customize ## Customize the schedule of workflows by using the Azure portal + Workflows that you create within lifecycle workflows follow the same schedule that you define on the **Workflow settings** pane. To adjust the schedule, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). To schedule workflow settings by using the Microsoft Graph API, see [lifecycleMa ## Next steps - [Manage workflow properties](manage-workflow-properties.md)-- [Delete lifecycle workflows](delete-lifecycle-workflow.md)+- [Delete lifecycle workflows](delete-lifecycle-workflow.md) |
active-directory | Delete Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md | When a workflow is deleted, it enters a soft-delete state. During this period, y ## Delete a workflow by using the Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. On the search bar near the top of the page, enter **Identity Governance**. Then select **Identity Governance** in the results. |
active-directory | Entitlement Management Access Package Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md | Here are the high-level steps to create a new access package. ## Start new access package + **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Entitlement Management Access Package First | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md | For more information, see [License requirements](entitlement-management-overview ## Step 1: Set up users and group + A resource directory has one or more resources to share. In this step, you create a group named **Marketing resources** in the Woodgrove Bank directory that is the target resource for entitlement management. You also set up an internal requestor. **Prerequisite role:** Global administrator or User administrator |
active-directory | Entitlement Management Access Package Incompatible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md | To use entitlement management and assign users to access packages, you must have ## Configure another access package or group membership as incompatible for requesting access to an access package + **Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager Follow these steps to change the list of incompatible groups or other access packages for an existing access package: |
active-directory | Entitlement Management Logic Apps Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md | These triggers to Logic Apps are controlled in a tab within access package polic ## Create and add a Logic App workflow to a catalog for use in entitlement management + **Prerequisite roles:** Global administrator, Identity Governance administrator, Catalog owner or Resource Group Owner 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Entitlement Management Logs And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md | Azure AD stores audit events for up to 30 days in the audit log. However, you ca ## Configure Azure AD to use Azure Monitor++ Before you use the Azure Monitor workbooks, you must configure Azure AD to send a copy of its audit logs to Azure Monitor. Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure subscription. You can read more about the prerequisites and estimated costs of using Azure Monitor in [Azure AD activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md). |
active-directory | Entitlement Management Reprocess Access Package Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md | To use entitlement management and assign users to access packages, you must have ## Open an existing access package and reprocess user assignments + **Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager If you have users who are in the "Delivered" state but don't have access to resources that are a part of the access package, you'll likely need to reprocess the assignments to reassign those users to the access package's resources. Follow these steps to reprocess assignments for an existing access package: |
active-directory | Entitlement Management Reprocess Access Package Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md | To use entitlement management and assign users to access packages, you must have ## Open an existing access package and reprocess user requests + **Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager If you have a set of users whose requests are in the "Partially Delivered" or "Failed" state, you might need to reprocess some of those requests. Follow these steps to reprocess requests for an existing access package: If you have a set of users whose requests are in the "Partially Delivered" or "F ## Next steps - [View requests for an access package](entitlement-management-access-package-requests.md) - [Approve or deny access requests](entitlement-management-request-approve.md) |
active-directory | Entitlement Management Ticketed Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-ticketed-provisioning.md | After setting up custom extensibility in the catalog, administrators can create ## Register an application with secrets in Azure portal + With Azure, you're able to use [Azure Key Vault](/azure/key-vault/secrets/about-secrets) to store application secrets such as passwords. To register an application with secrets within the Azure portal, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). |
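The row above stores application secrets in Azure Key Vault. As a rough sketch of what the underlying storage call looks like on the wire — the vault and secret names below are invented placeholders, and the request shape is assumed from the Key Vault data-plane "Set Secret" REST operation:

```python
import json

def set_secret_request(vault_name: str, secret_name: str, value: str):
    """Build the PUT request Key Vault expects when storing a secret.

    Returns (method, url, body); the caller would still need to attach
    an Azure AD bearer token before sending it.
    """
    url = (f"https://{vault_name}.vault.azure.net"
           f"/secrets/{secret_name}?api-version=7.4")
    body = json.dumps({"value": value})
    return "PUT", url, body

# "contoso-vault" and the secret name are illustrative placeholders.
method, url, body = set_secret_request("contoso-vault",
                                       "app-client-secret", "s3cr3t")
```

In practice the Azure SDK (`azure-keyvault-secrets`) wraps this call; the sketch only shows the parts involved.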
active-directory | Identity Governance Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md | This article shows you how to get started using Azure Automation for Microsoft E ## Create an Azure Automation account + Azure Automation provides a cloud-hosted environment for [runbook execution](../../automation/automation-runbook-execution.md). Those runbooks can start automatically based on a schedule, or be triggered by webhooks or by Logic Apps. Using Azure Automation requires you to have an Azure subscription. |
active-directory | Manage Workflow Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md | If done via the Azure portal, the new version is created automatically. If done ## Edit the properties of a workflow using the Azure portal + To edit the properties of a workflow using the Azure portal, you do the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Manage Workflow Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md | Changing a workflow's tasks or execution conditions requires the creation of a n ## Edit the tasks of a workflow using the Azure portal + Tasks within workflows can be added, edited, reordered, and removed at will. To edit the tasks of a workflow using the Azure portal, you complete the following steps: |
active-directory | On Demand Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md | Scheduled workflows by default run every 3 hours, but can also run on-demand so ## Run a workflow on-demand in the Azure portal + Use the following steps to run a workflow on-demand. >[!NOTE] To run a workflow on-demand using API via Microsoft Graph, see: [workflow: activ ## Next steps - [Customize the schedule of workflows](customize-workflow-schedule.md) - [Delete a Lifecycle workflow](delete-lifecycle-workflow.md) |
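The row above links to the Graph `workflow: activate` API for running a lifecycle workflow on-demand. As a minimal sketch of the request that call expects — the endpoint shape is assumed from the linked reference, and the workflow ID and user object IDs below are invented placeholders:

```python
def activate_workflow_request(workflow_id: str, user_ids: list):
    """Build the POST request for the Graph 'workflow: activate' call.

    Returns (method, url, body); body lists the users ("subjects") the
    workflow should run against immediately.
    """
    url = ("https://graph.microsoft.com/v1.0/identityGovernance/"
           f"lifecycleWorkflows/workflows/{workflow_id}/activate")
    body = {"subjects": [{"id": uid} for uid in user_ids]}
    return "POST", url, body

# Placeholder IDs, for illustration only.
method, url, body = activate_workflow_request("wf-123", ["id-1", "id-2"])
```

Sending this with an authenticated Graph client triggers the same behavior as the portal's **Run on demand** button.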
active-directory | Tutorial Offboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md | The leaver scenario includes the following steps: ## Create a workflow by using the leaver template + Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination by using lifecycle workflows in the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). At any time, you can monitor the status of workflows and tasks. Three data pivot ## Next steps - [Prepare user accounts for lifecycle workflows](tutorial-prepare-azure-ad-user-accounts.md) - [Complete tasks in real time on an employee's last day of work by using lifecycle workflow APIs](/graph/tutorial-lifecycle-workflows-offboard-custom-workflow) |
active-directory | Tutorial Onboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md | The pre-hire scenario can be broken down into the following: - Verifying the workflow was successfully executed ## Create a workflow using the pre-hire template + Use the following steps to create a pre-hire workflow that generates a TAP and sends it via email to the user's manager using the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). 2. On the right, select **Azure Active Directory**. 3. Select **Identity Governance**. 4. Select **Lifecycle workflows**. 5. On the **Overview** page, select **New workflow**. :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: 6. From the templates, select **Select** under **Onboard pre-hire employee**. :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting workflow template." lightbox="media/tutorial-lifecycle-workflows/select-template.png"::: 7. Next, you configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. In this case, the workflow triggers two days before the employee's hire date. On the onboard pre-hire employee screen, add the following settings and then select **Next: Configure Scope**. :::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png"::: 8. Next, you configure the scope. The scope determines which users this workflow runs against; in this case, all users in the Sales department. On the configure scope screen, under **Rule**, add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters). :::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png"::: 9. On the following page, you may inspect the task if desired, but no additional configuration is needed. Select **Next: Review + Create** when you're finished. :::image type="content" source="media/tutorial-lifecycle-workflows/onboard-review-create.png" alt-text="Screenshot of reviewing an on-board workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-review-create.png"::: 10. On the review blade, verify the information is correct and select **Create**. :::image type="content" source="media/tutorial-lifecycle-workflows/onboard-create.png" alt-text="Screenshot of creating an onboard workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-create.png"::: ## Run the workflow Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for this tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature. |
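The pre-hire template in the row above triggers relative to `employeeHireDate` through the **Days from event** offset. A small sketch of that offset arithmetic, assuming a negative offset means "days before the event" (the sign convention and function name here are illustrative, not from the article):

```python
from datetime import datetime, timedelta

def trigger_date(hire_date: str, days_from_event: int = -2) -> str:
    """Compute the date a workflow fires for a given employeeHireDate.

    With days_from_event = -2, the workflow fires two days before the
    hire date, matching the tutorial's pre-hire configuration.
    """
    hired = datetime.fromisoformat(hire_date)
    return (hired + timedelta(days=days_from_event)).date().isoformat()

# Hire date from the tutorial's employeeHireDate example (Z suffix dropped
# because datetime.fromisoformat on older Pythons doesn't accept it).
trigger_date("2022-04-15T22:10:00")  # -> "2022-04-13"
```

The actual scheduler evaluates this condition on its 3-hour processing cycle rather than at an exact instant.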
active-directory | Tutorial Prepare Azure Ad User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md | In most cases, users are going to be provisioned to Azure AD either from an on-p ## Create users in Azure AD + We use the Graph Explorer to quickly create the two users needed to execute the Lifecycle Workflows in the tutorials. One user represents our new employee and the second represents the new employee's manager. You need to edit the POST and replace the <your tenant name here> portion with the name of your tenant. For example: $UPN_manager = "bsimon@<your tenant name here>" to $UPN_manager = "bsimon@contoso.onmicrosoft.com". Once your user(s) has been successfully created in Azure AD, you may proceed to There are some additional steps that you should be aware of when testing either the [On-boarding users to your organization using Lifecycle workflows with Azure portal](tutorial-onboard-custom-workflow-portal.md) tutorial or the [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph](tutorial-onboard-custom-workflow-graph.md) tutorial. ### Edit the users attributes using the Azure portal + Some of the attributes required for the pre-hire onboarding tutorial are exposed through the Azure portal and can be set there. These attributes are: For the tutorial, the **mail** attribute only needs to be set on the manager acc 11. Select **Save**. ### Edit employeeHireDate + The employeeHireDate attribute is new to Azure AD. It isn't exposed through the UI and must be updated using Graph. To edit this attribute, we can use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). >[!NOTE] The employeeHireDate attribute is new to Azure AD. It isn't exposed through the In order to do this, we must get the object ID for our user Melva Prince. 1. Sign in to the [Azure portal](https://portal.azure.com). 2. On the right, select **Azure Active Directory**. 3. Select **Users**. 4. Select **Melva Prince**. 5. Select the copy sign next to the **Object ID**. 6. Now navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 7. Sign in to Graph Explorer with the global administrator account for your tenant. 8. At the top, change **GET** to **PATCH** and add `https://graph.microsoft.com/v1.0/users/<id>` to the box. Replace `<id>` with the value we copied before. 9. Copy the following into the **Request body** and select **Run query**: ```Example { "employeeHireDate": "2022-04-15T22:10:00Z" } ``` :::image type="content" source="media/tutorial-lifecycle-workflows/update-1.png" alt-text="Screenshot of the PATCH employeeHireDate." lightbox="media/tutorial-lifecycle-workflows/update-1.png"::: 10. Verify the change by changing **PATCH** back to **GET** and **v1.0** to **beta**. Select **Run query**. You should see the attributes for Melva set. :::image type="content" source="media/tutorial-lifecycle-workflows/update-3.png" alt-text="Screenshot of the GET employeeHireDate." lightbox="media/tutorial-lifecycle-workflows/update-3.png"::: ### Edit the manager attribute on the employee account The manager attribute is used for email notification tasks. It's used by the lifecycle workflow to email the manager a temporary password for the new employee. Use the following steps to ensure your Azure AD users have a value for the manager attribute. 1. Still in [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 2. Make sure the top is still set to **PUT** and `https://graph.microsoft.com/v1.0/users/<id>/manager/$ref` is in the box. Change `<id>` to the ID of Melva Prince. 3. Copy the code below into the **Request body**. 4. Replace `<managerid>` in the following code with the value of Britta Simon's ID. 5. Select **Run query**: ```Example { "@odata.id": "https://graph.microsoft.com/v1.0/users/<managerid>" } ``` :::image type="content" source="media/tutorial-lifecycle-workflows/graph-add-manager.png" alt-text="Screenshot of Adding a manager in Graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-add-manager.png"::: 6. Now, we can verify that the manager has been set correctly by changing the **PUT** to **GET**. 7. Make sure `https://graph.microsoft.com/v1.0/users/<id>/manager/` is in the box. The `<id>` is still that of Melva Prince. 8. Select **Run query**. You should see Britta Simon returned in the Response. :::image type="content" source="media/tutorial-lifecycle-workflows/graph-get-manager.png" alt-text="Screenshot of getting a manager in Graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-get-manager.png"::: For more information about updating manager information for a user in Graph API, see [assign manager](/graph/api/user-post-manager?view=graph-rest-1.0&tabs=http&preserve-view=true) documentation. You can also set this attribute in the Azure Admin center. For more information, see [add or change profile information](../fundamentals/active-directory-users-profile-azure-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context). ### Enabling the Temporary Access Pass (TAP) + A Temporary Access Pass is a time-limited pass issued by an admin that satisfies strong authentication requirements. In this scenario, we use this feature of Azure AD to generate a temporary access pass for our new employee. It is then mailed to the employee's manager. |
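The Graph Explorer steps in the row above amount to two small REST calls: a PATCH that sets `employeeHireDate` and a PUT against the `manager/$ref` navigation. A sketch of how those requests are shaped — the object IDs used here are placeholder GUIDs, not values from the tutorial:

```python
import json

GRAPH = "https://graph.microsoft.com/v1.0"

def patch_hire_date(user_id: str, hire_date: str):
    """PATCH /users/{id} with the employeeHireDate body from the tutorial."""
    url = f"{GRAPH}/users/{user_id}"
    body = json.dumps({"employeeHireDate": hire_date})
    return "PATCH", url, body

def put_manager_ref(user_id: str, manager_id: str):
    """PUT /users/{id}/manager/$ref with an @odata.id reference body."""
    url = f"{GRAPH}/users/{user_id}/manager/$ref"
    body = json.dumps({"@odata.id": f"{GRAPH}/users/{manager_id}"})
    return "PUT", url, body

# Placeholder object IDs, for illustration only.
patch_hire_date("11111111-aaaa-bbbb-cccc-222222222222", "2022-04-15T22:10:00Z")
put_manager_ref("11111111-aaaa-bbbb-cccc-222222222222",
                "33333333-dddd-eeee-ffff-444444444444")
```

Graph Explorer builds and authenticates these same requests for you; the sketch just makes the method, URL, and body explicit.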
active-directory | Tutorial Scheduled Leaver Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md | The scheduled leaver scenario can be broken down into the following: - Verify that the workflow was successfully executed ## Create a workflow using the scheduled leaver template + Use the following steps to create a scheduled leaver workflow that will configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal. 1. Sign in to the [Azure portal](https://portal.azure.com). 2. On the right, select **Azure Active Directory**. 3. Select **Identity Governance**. 4. Select **Lifecycle workflows**. 5. On the **Overview** page, select **New workflow**. :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: 6. From the templates, select **Select** under **Post-offboarding of an employee**. Now that the workflow is created, it will automatically run the workflow every 3 To run a workflow on-demand for users by using the Azure portal, do the following steps: 1. On the workflow screen, select the specific workflow you want to run. 2. Select **Run on demand**. 3. On the **select users** tab, select **add users**. 4. Add a user. 5. Select **Run workflow**. ## Check tasks and workflow status At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots currently available: users, runs, and tasks. You may learn more in the how-to guide [Check the status of a workflow](check-status-workflow.md). In the course of this tutorial, we'll look at the status using the user-focused reports. 1. To begin, select the **Workflow history** tab on the left to view the user summary and associated workflow tasks and statuses. :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png" alt-text="Screenshot of the workflow history summary." lightbox="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png"::: 1. Once the **Workflow history** tab has been selected, you'll land on the workflow history page as shown. :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png" alt-text="Screenshot of the workflow history overview." lightbox="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png"::: 1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith. :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks-post-offboard.png" alt-text="Screenshot of workflow's total tasks." lightbox="media/tutorial-lifecycle-workflows/total-tasks-post-offboard.png"::: 1. To add an extra layer of granularity, you may select **Failed tasks** for the user Wade Warren to view the total number of failed tasks assigned to the user Wade Warren. :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks-post-offboard.png" alt-text="Screenshot of workflow failed tasks." lightbox="media/tutorial-lifecycle-workflows/failed-tasks-post-offboard.png"::: 1. Similarly, you may select **Unprocessed tasks** for the user Wade Warren to view the total number of unprocessed or canceled tasks assigned to the user Wade Warren. :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks-post-offboard.png" alt-text="Screenshot of workflow unprocessed tasks." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks-post-offboard.png"::: ## Enable the workflow schedule |
active-directory | How To Install Pshell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-install-pshell.md | The Windows server must have TLS 1.2 enabled before you install the Azure AD Con ## Install the Azure AD Connect provisioning agent by using PowerShell cmdlets + 1. Sign in to the server you use with enterprise admin permissions. 2. Sign in to the [Azure portal](https://portal.azure.com), and then go to **Azure Active Directory**. 3. On the menu on the left, select **Azure AD Connect**. |
active-directory | How To Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-troubleshoot.md | You can verify these items in the Azure portal and on the local server that's ru ### Azure portal agent verification + To verify that Azure detects the agent, and that the agent is healthy, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Tutorial Basic Ad Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/tutorial-basic-ad-azure.md | Now that you have your Active Directory environment, you need a test account. ## Create an Azure AD tenant + Now you need to create an Azure AD tenant so that you can synchronize your users to the cloud. To create a new Azure AD tenant, do the following: 1. Sign in to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription. |
active-directory | Tutorial Existing Forest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/tutorial-existing-forest.md | If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md [!INCLUDE [active-directory-cloud-sync-how-to-verify-installation](../../../../includes/active-directory-cloud-sync-how-to-verify-installation.md)] ## Configure Azure AD Connect cloud sync + Use the following steps to configure provisioning: 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **Azure Active Directory** 3. Select **Azure AD Connect** 4. Select **Manage cloud sync** ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png) 5. Select **New Configuration** ![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png) 6. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**. ![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png) 7. The configuration status should now be **Healthy**. ![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png) If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md You'll now verify that the users from your on-premises directory have been synchronized and now exist in your Azure AD tenant. This process may take a few hours to complete. To verify users are synchronized, do the following: 1. Sign in to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription. 2. On the left, select **Azure Active Directory** 3. Under **Manage**, select **Users**. |
active-directory | Tutorial Single Forest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/tutorial-single-forest.md | If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md ## Configure Azure AD Connect cloud sync + Use the following steps to configure and start the provisioning: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | How To Connect Health Adfs Risky Ip Workbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-adfs-risky-ip-workbook.md | Filter the report by IP address or user name to see an expanded view of sign-ins ## Accessing the workbook + To access the workbook: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | How To Connect Health Agent Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-agent-install.md | The Usage Analytics feature needs to gather and analyze data, so the Azure AD Co 1. On the **Local Security Setting** tab, verify that the AD FS service account is listed. If it's not listed, select **Add User or Group**, and add the AD FS service account to the list. Then select **OK**. 1. To enable auditing, open a Command Prompt window as administrator, and then run the following command: - `auditpol.exe /set /subcategory:{0CCE9222-69AE-11D9-BED3-505054503030} /failure:enable /success:enable` + `auditpol.exe /set /subcategory:"Application Generated" /failure:enable /success:enable` 1. Close **Local Security Policy**. |
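The corrected `auditpol.exe` command in the row above swaps the raw subcategory GUID for its friendly name. A tiny helper sketch that assembles the same command string, just to make its parts explicit (illustration only; the real command runs in an elevated Command Prompt on the AD FS server):

```python
def auditpol_command(subcategory: str) -> str:
    """Assemble the auditpol command the article uses to enable
    success and failure auditing for a given subcategory."""
    return (f'auditpol.exe /set /subcategory:"{subcategory}" '
            f'/failure:enable /success:enable')

# The subcategory name used by the updated article.
auditpol_command("Application Generated")
```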
active-directory | How To Connect Post Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-post-installation.md | Now that your users have been synchronized to the cloud, you need to assign them ### To assign an Azure AD Premium or Enterprise Mobility Suite License + 1. Sign in to the [Azure portal](https://portal.azure.com) as an admin. 2. On the left, select **Active Directory**. 3. On the **Active Directory** page, double-click the directory that has the users you want to set up. Use the Azure portal to check the status of a synchronization. ### To verify the scheduled synchronization task+ 1. Sign in to the [Azure portal](https://portal.azure.com) as an admin. 2. On the left, select **Active Directory**. 3. On the left, select **Azure AD Connect** |
active-directory | How To Connect Pta Disable Do Not Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-disable-do-not-configure.md | In this article, you learn how to disable pass-through authentication by using A ## Prerequisites + Before you begin, ensure that you have the following prerequisite. - A Windows machine with pass-through authentication agent version 1.5.1742.0 or later installed. Any earlier version might not have the requisite cmdlets for completing this operation. |
active-directory | How To Connect Pta Upgrade Preview Authentication Agents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-upgrade-preview-authentication-agents.md | This article is for customers using Azure AD Pass-through Authentication through ### Step 1: Check where your Authentication Agents are installed + Follow these steps to check where your Authentication Agents are installed: 1. Sign in to the [Azure portal](https://portal.azure.com) with the Global Administrator credentials for your tenant. |
active-directory | How To Connect Sso Quick Start | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sso-quick-start.md | Ensure that the following prerequisites are in place: ## Enable the feature + Enable Seamless SSO through [Azure AD Connect](../whatis-hybrid-identity.md). > [!NOTE] |
active-directory | How To Connect Staged Rollout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-staged-rollout.md | To roll out a specific feature (*pass-through authentication*, *password hash sy ### Enable a Staged Rollout of a specific feature on your tenant + You can roll out these options: - **Password hash sync** + **Seamless SSO** To configure Staged Rollout, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization. +1. Sign in to the [Azure portal](https://portal.azure.com) in the User Administrator role for the organization. 1. Search for and select **Azure Active Directory**. |
active-directory | Tutorial Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tutorial-federation.md | To create a certificate: ## Create an Azure AD tenant + Now, create an Azure AD tenant, so you can sync your users in Azure: -1. In the [Azure portal](https://portal.azure.com), sign in with the account that's associated with your Azure subscription. +1. Sign in to the [Azure portal](https://portal.azure.com) using the account that's associated with your Azure subscription. 1. Search for and then select **Azure Active Directory**. 1. Select **Create**. Now you'll verify that the users in your on-premises Active Directory tenant hav To verify that the users are synced: -1. In the [Azure portal](https://portal.azure.com), sign in to the account that's associated with your Azure subscription. +1. Sign in to the [Azure portal](https://portal.azure.com) using the account that's associated with your Azure subscription. 1. In the portal menu, select **Azure Active Directory**. 1. In the resource menu under **Manage**, select **Users**. 1. Verify that the new users appear in your tenant. |
active-directory | Tutorial Passthrough Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tutorial-passthrough-authentication.md | Next, create a test user account. Create this account in your on-premises Active Now, create an Azure AD tenant, so you can sync your users in Azure: -1. In the [Azure portal](https://portal.azure.com), sign in with the account that's associated with your Azure subscription. +1. Sign in to the [Azure portal](https://portal.azure.com) using the account that's associated with your Azure subscription. 1. Search for and then select **Azure Active Directory**. 1. Select **Create**. Now you'll verify that the users in your on-premises Active Directory tenant hav To verify that the users are synced: -1. In the [Azure portal](https://portal.azure.com), sign in to the account that's associated with your Azure subscription. +1. Sign in to the [Azure portal](https://portal.azure.com) using the account that's associated with your Azure subscription. 1. In the portal menu, select **Azure Active Directory**. 1. In the resource menu under **Manage**, select **Users**. 1. Verify that the new users appear in your tenant. |
active-directory | Tutorial Password Hash Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tutorial-password-hash-sync.md | Next, create a test user account. Create this account in your on-premises Active ## Create an Azure AD tenant + Now, create an Azure AD tenant, so you can sync your users in Azure: -1. In the [Azure portal](https://portal.azure.com), sign in with the account that's associated with your Azure subscription. +1. Sign in to the [Azure portal](https://portal.azure.com) using the account that's associated with your Azure subscription. 1. Search for and then select **Azure Active Directory**. 1. Select **Create**. Now you'll verify that the users in your on-premises Active Directory tenant hav To verify that the users are synced: -1. In the [Azure portal](https://portal.azure.com), sign in to the account that's associated with your Azure subscription. +1. Sign in to the [Azure portal](https://portal.azure.com) using the account that's associated with your Azure subscription. 1. In the portal menu, select **Azure Active Directory**. 1. In the resource menu under **Manage**, select **Users**. 1. Verify that the new users appear in your tenant. |
active-directory | Howto Identity Protection Configure Mfa Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md | For more information on Azure AD multifactor authentication, see [What is Azure ## Policy configuration + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **MFA registration policy**. 1. Under **Assignments** > **Users** |
active-directory | Howto Identity Protection Simulate Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md | The sign-in shows up in the Identity Protection dashboard within 2-4 hours. ## Leaked Credentials for Workload Identities + This risk detection indicates that the application's valid credentials have been leaked. This leak can occur when someone checks the credentials into a public code artifact on GitHub. Therefore, to simulate this detection, you need a GitHub account; you can [sign up for a GitHub account](https://docs.github.com/get-started/signing-up-for-github) if you don't have one already. **To simulate Leaked Credentials in GitHub for Workload Identities, perform the following steps**: |
active-directory | Access Panel Collections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/access-panel-collections.md | Your users can use the My Apps portal to view and start the cloud-based applicat > [!NOTE] > This article covers how an admin can enable and create collections. For information for the end user about how to use the My Apps portal and collections, see [Access and use collections](https://support.microsoft.com/account-billing/organize-apps-using-collections-in-the-my-apps-portal-2dae6b8a-d8b0-4a16-9a5d-71ed4d6a6c1d). ## Prerequisites To create collections on the My Apps portal, you need: ## Create a collection + To create a collection, you must have an Azure AD Premium P1 or P2 license. -1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as an admin with an Azure AD Premium P1 or P2 license. +1. Sign in to the [Azure portal](https://portal.azure.com) as an admin with an Azure AD Premium P1 or P2 license. 2. Go to **Azure Active Directory** > **Enterprise Applications**. |
active-directory | Add Application Portal Assign Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-assign-users.md | In this quickstart, you use the Azure portal to create a user account in your Az It's recommended that you use a nonproduction environment to test the steps in this quickstart. - ## Prerequisites To create a user account and assign it to an enterprise application, you need: ## Create a user account + To create a user account in your Azure AD tenant: 1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites. To assign a user account to an enterprise application: -1. In the [Azure portal](https://portal.azure.com), browse to **Azure Active Directory** and select **Enterprise applications**. +1. Sign in to the [Azure portal](https://portal.azure.com), then browse to **Azure Active Directory** and select **Enterprise applications**. 1. Search for and select the application to which you want to assign the user account. For example, the application that you created in the previous quickstart named **Azure AD SAML Toolkit 1**. 1. In the left pane, select **Users and groups**, and then select **Add user/group**. |
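The portal assignment walked through in this row can also be done through Microsoft Graph's `appRoleAssignments` endpoint. A minimal sketch follows, assuming you already hold a suitable Graph token; it only builds the request, and the GUIDs passed in are placeholders you would replace with your user's and service principal's object IDs:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"
DEFAULT_ACCESS_ROLE = "00000000-0000-0000-0000-000000000000"  # all-zero GUID = default access

def build_assignment_request(access_token: str, user_id: str,
                             sp_object_id: str,
                             app_role_id: str = DEFAULT_ACCESS_ROLE) -> urllib.request.Request:
    """Build (but don't send) the Graph call that assigns a user to an enterprise app."""
    body = {
        "principalId": user_id,      # object ID of the user being assigned
        "resourceId": sp_object_id,  # object ID of the app's service principal
        "appRoleId": app_role_id,    # a role from the app's appRoles, or default access
    }
    return urllib.request.Request(
        f"{GRAPH}/users/{user_id}/appRoleAssignments",
        data=json.dumps(body).encode(),
        method="POST",
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/json"},
    )
```

The all-zero `appRoleId` grants default access for apps that don't define custom roles; apps like the SAML Toolkit mentioned in the row may expose named roles instead.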
active-directory | Add Application Portal Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md | To configure the properties of an enterprise application, you need: ## Configure application properties + Application properties control how the application is represented and how the application is accessed. :::zone pivot="portal" |
active-directory | Add Application Portal Setup Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-sso.md | Azure AD has a gallery that contains thousands of pre-integrated applications th It is recommended that you use a non-production environment to test the steps in this article. - ## Prerequisites To configure SSO, you need: ## Enable single sign-on + To enable SSO for an application: 1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites. |
active-directory | Add Application Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal.md | In this quickstart, you use the Azure portal to add an enterprise application to It's recommended that you use a nonproduction environment to test the steps in this quickstart. - ## Prerequisites To add an enterprise application to your Azure AD tenant, you need: ## Add an enterprise application + To add an enterprise application to your tenant: 1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites. |
active-directory | Assign User Or Group Access Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md | Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. For greater control, certain types of enterprise applications can be configured to require user assignment. For more information on requiring user assignment for an app, see [Manage access to an application](what-is-access-management.md#requiring-user-assignment-for-an-app). - ## Prerequisites + To assign users to an enterprise application, you need: - An Azure AD account with an active subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). To assign a user or group account to an enterprise application: -1. In the [Azure portal](https://portal.azure.com), select **Enterprise applications**, and then search for and select the application to which you want to assign the user or group account. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **Enterprise applications**, and then search for and select the application to which you want to assign the user or group account. 1. Browse to **Azure Active Directory** > **Users and groups**, and then select **Add user/group**. :::image type="content" source="media/add-application-portal-assign-users/assign-user.png" alt-text="Assign user account to an application in your Azure AD tenant."::: |
active-directory | Cloudflare Conditional Access Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloudflare-conditional-access-policies.md | With Conditional Access, administrators enforce policies on application and user Learn more: [What is Conditional Access?](../conditional-access/overview.md) - ## Prerequisites * An Azure AD subscription Go to developers.cloudflare.com to [set up Azure AD as an IdP](https://developer ## Configure Conditional Access -1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **Azure Active Directory**. 3. Under **Manage**, select **App registrations**. 4. Select the application you created. |
active-directory | Cloudflare Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloudflare-integration.md | Integrate Cloudflare Zero Trust account with an instance of Azure AD. ## Register Cloudflare with Azure AD + Use the instructions in the following three sections to register Cloudflare with Azure AD. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 2. Under **Azure Services**, select **Azure Active Directory**. 3. In the left menu, under **Manage**, select **App registrations**. 4. Select the **+ New registration** tab. |
active-directory | Configure Admin Consent Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md | The admin consent workflow gives admins a secure way to grant access to applicat To approve requests, a reviewer must have the [permissions required](grant-admin-consent.md#prerequisites) to grant admin consent for the application requested. Simply designating them as a reviewer doesn't elevate their privileges. - ## Prerequisites To configure the admin consent workflow, you need: ## Enable the admin consent workflow + To enable the admin consent workflow and choose reviewers: 1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites. |
active-directory | Configure Permission Classifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md | Three permission classifications are supported: "Low", "Medium" (preview), and " The minimum permissions needed to do basic sign-in are `openid`, `profile`, `email`, and `offline_access`, which are all delegated permissions on the Microsoft Graph. With these permissions an app can read details of the signed-in user's profile, and can maintain this access even when the user is no longer using the app. - ## Prerequisites To configure permission classifications, you need: ## Manage permission classifications + :::zone pivot="portal" Follow these steps to classify permissions using the Azure portal: |
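Classifying delegated permissions, as this row describes for the portal, is also exposed through Graph's `delegatedPermissionClassifications` endpoint on a service principal. A hedged sketch (request is built, not sent; the token and the Microsoft Graph service principal object ID are placeholders) classifying the basic sign-in permissions named above as "Low":

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_classification_request(access_token: str, graph_sp_id: str,
                                 permission_name: str,
                                 classification: str = "low") -> urllib.request.Request:
    """Build (but don't send) the Graph call that classifies a delegated permission.

    graph_sp_id is the object ID of the Microsoft Graph service principal in
    your tenant (placeholder GUID used below)."""
    body = {"permissionName": permission_name, "classification": classification}
    return urllib.request.Request(
        f"{GRAPH}/servicePrincipals/{graph_sp_id}/delegatedPermissionClassifications",
        data=json.dumps(body).encode(),
        method="POST",
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/json"},
    )

# Classify the basic sign-in permissions named in the doc as "low":
for perm in ("openid", "profile", "email", "offline_access"):
    req = build_classification_request("<token>", "00000000-0000-0000-0000-000000000000", perm)
    print(req.get_method(), req.full_url)
```

One request per permission keeps failures isolated; the classification value is lowercase (`low`) in the API even though the portal displays "Low".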
active-directory | Configure User Consent Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md | To complete the tasks in this guide, you need the following: ## Manage group owner consent to apps + You can configure which users are allowed to consent to apps accessing their groups' or teams' data, or you can disable this for all users. # [Portal](#tab/azure-portal) |
active-directory | Configure User Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md | Before an application can access your organization's data, a user must grant the To reduce the risk of malicious applications attempting to trick users into granting them access to your organization's data, we recommend that you allow user consent only for applications that have been published by a [verified publisher](../develop/publisher-verification-overview.md). - ## Prerequisites To configure user consent, you need: ## Configure user consent settings + :::zone pivot="portal" To configure user consent settings through the Azure portal: |
active-directory | Custom Security Attributes Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/custom-security-attributes-apps.md | To assign or remove custom security attributes for an application in your Azure Learn how to work with custom attributes for applications in Azure AD. ### Assign custom security attributes to an application + :::zone pivot="portal" |
active-directory | Datawiza Sso Mfa Oracle Ebs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-sso-mfa-oracle-ebs.md | Configuration on the management console is complete. You're prompted to deploy D ### Optional: Enable Multi-Factor Authentication on Azure AD + To provide more security for sign-ins, you can enable Multi-Factor Authentication in the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. |
active-directory | Datawiza Sso Mfa To Owa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-sso-mfa-to-owa.md | time, effort, and errors. ## Optional: Enable Microsoft Entra ID Multi-Factor Authentication + To provide more sign-in security, you can enforce Microsoft Entra ID Multi-Factor Authentication. The process starts in the Azure portal. -1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. -2. Select **Azure Active Directory**. +2. Select **Azure Active Directory**. -3. Select **Manage** +3. Select **Manage** -4. Select **Properties** +4. Select **Properties** -5. Under **Tenant properties**, select **Manage security defaults** +5. Under **Tenant properties**, select **Manage security defaults** ![Screenshot shows the manage security defaults.](media/datawiza-access-proxy/manage-security-defaults.png) -6. For **Enable Security defaults**, select **Yes** +6. For **Enable Security defaults**, select **Yes** -7. Select **Save** +7. Select **Save** ## Next steps |
active-directory | Datawiza Sso Oracle Jde | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-sso-oracle-jde.md | The Oracle JDE application needs to recognize the user: using a name, the applic ## Enable Azure AD Multi-Factor Authentication + To provide more security for sign-ins, you can enforce MFA for user sign-in. See, [Tutorial: Secure user sign-in events with Azure AD MFA](../authentication/tutorial-enable-azure-mfa.md). |
active-directory | Datawiza Sso Oracle Peoplesoft | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-sso-oracle-peoplesoft.md | The Oracle PeopleSoft application needs to recognize the user. Using a name, the ## Enable Azure AD Multi-Factor Authentication + To provide more security for sign-ins, you can enforce Azure AD Multi-Factor Authentication (MFA). Learn more: [Tutorial: Secure user sign-in events with Azure AD MFA](../authentication/tutorial-enable-azure-mfa.md) |
active-directory | Delete Application Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md | To delete an enterprise application, you need: - One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. - An [enterprise application added to your tenant](add-application-portal.md) - ## Delete an enterprise application + :::zone pivot="portal" 1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites. |
active-directory | Disable User Sign In Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md | There may be situations while configuring or managing an application where you d In this article, you learn how to prevent users from signing in to an application in Azure Active Directory through both the Azure portal and PowerShell. If you're looking for how to block specific users from accessing an application, use [user or group assignment](./assign-user-or-group-access-portal.md). - ## Prerequisites To disable user sign-in, you need: ## Disable user sign-in + :::zone pivot="portal" 1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator for your directory. |
active-directory | F5 Big Ip Forms Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-forms-advanced.md | The configuration in this article is a flexible SHA implementation: manual creat ## Register F5 BIG-IP in Azure AD + BIG-IP registration is the first step for SSO between entities. The app you create from the F5 BIG-IP gallery template is the relying party, representing the SAML SP for the BIG-IP published application. 1. Sign in to the [Azure portal](https://portal.azure.com) with Application Administrator permissions. |
active-directory | F5 Big Ip Header Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md | The following instructions are an advanced configuration method, a flexible way ## Add F5 BIG-IP from the Azure AD gallery + To implement SHA, the first step is to set up a SAML federation trust between BIG-IP APM and Azure AD. The trust establishes the integration for BIG-IP to hand off preauthentication and Conditional Access to Azure AD, before granting access to the published service. Learn more: [What is Conditional Access?](../conditional-access/overview.md) |
active-directory | F5 Big Ip Headers Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md | This tutorial uses Guided Configuration v16.1 with an Easy button template. With ## Register Easy Button + Before a client or service accesses Microsoft Graph, the Microsoft identity platform must trust it. Learn more: [Quickstart: Register an application with the Microsoft identity platform](../develop/quickstart-register-app.md) Create a tenant app registration to authorize the Easy Button access to Graph. With these permissions, the BIG-IP pushes the configurations to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP. -1. Sign in to the [Azure portal](https://portal.azure.com/) with Application Administrative permissions. +1. Sign in to the [Azure portal](https://portal.azure.com) with Application Administrative permissions. 2. In the left navigation, select **Azure Active Directory**. 3. Under **Manage**, select **App registrations > New registration**. 4. Enter an application **Name**. |
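The "App registrations > New registration" steps that recur in these Easy Button rows correspond to a single Graph call, `POST /applications`. A minimal sketch, assuming an already-acquired token (placeholder below); it builds the request without sending it, and granting the Graph permissions the Easy Button needs would still be a separate step:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_app_registration_request(access_token: str,
                                   display_name: str) -> urllib.request.Request:
    """Build (but don't send) the Graph call behind 'App registrations > New registration'."""
    body = {"displayName": display_name}  # e.g. "F5 BIG-IP Easy Button", per the doc
    return urllib.request.Request(
        f"{GRAPH}/applications",
        data=json.dumps(body).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
    )

req = build_app_registration_request("<token>", "F5 BIG-IP Easy Button")
print(req.get_method(), req.full_url)
```

Sending the request returns the new application object, including the `appId` that the BIG-IP configuration later references.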
active-directory | F5 Big Ip Kerberos Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md | This article covers the advanced configuration, a flexible SHA implementing that ## Register F5 BIG-IP in Azure AD + Before BIG-IP can hand off pre-authentication to Azure AD, register it in your tenant. This process initiates SSO between both entities. The app you create from the F5 BIG-IP gallery template is the relying party that represents the SAML SP for the BIG-IP published application. 1. Sign in to the [Azure portal](https://portal.azure.com) with Application Administrator permissions. |
active-directory | F5 Big Ip Kerberos Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md | Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefi * Improved governance: See, [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) and learn more about Azure AD pre-authentication. * Enforce organizational policies. See [What is Conditional Access?](../conditional-access/overview.md). * Full SSO between Azure AD and BIG-IP published services-* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/) +* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com) To learn more about benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-integration.md). This tutorial covers the latest Guided Configuration 16.1 with an Easy Button te ## Register Easy Button + Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md). This action creates a tenant app registration to authorize Easy Button access to Graph. Through these permissions, the BIG-IP pushes the configurations to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP. -1. Sign in to the [Azure portal](https://portal.azure.com/) using an account with Application Admin permissions. +1. Sign in to the [Azure portal](https://portal.azure.com) using an account with Application Admin permissions. 2. From the left navigation pane, select the **Azure Active Directory** service. 3. Under Manage, select **App registrations > New registration**. 4. Enter a display name for your application. For example, F5 BIG-IP Easy Button. |
active-directory | F5 Big Ip Ldap Header Easybutton | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md | In this article, you can learn to secure header and LDAP-based applications usin * Improved governance: See, [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) and learn more about Azure AD pre-authentication * See also, [What is Conditional Access?](../conditional-access/overview.md) to learn about how it helps enforce organizational policies * Full single sign-on (SSO) between Azure AD and BIG-IP published services-* Manage identities and access from one control plane, the [Azure portal](https://portal.azure.com/) +* Manage identities and access from one control plane, the [Azure portal](https://portal.azure.com) To learn about more benefits, see [F5 BIG-IP and Azure AD integration](./f5-integration.md). This tutorial uses Guided Configuration 16.1 with an Easy Button template. With ## Register Easy Button + Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md). This first step creates a tenant app registration to authorize the **Easy Button** access to Graph. With these permissions, the BIG-IP can push the configurations to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP. |
active-directory | F5 Big Ip Oracle Enterprise Business Suite Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md | This tutorial uses the Guided Configuration v16.1 Easy Button template. With the ## Register the Easy Button + Before a client or service accesses Microsoft Graph, the Microsoft identity platform must trust it. Learn more: [Quickstart: Register an application with the Microsoft identity platform](../develop/quickstart-register-app.md) Create a tenant app registration to authorize the Easy Button access to Graph. The BIG-IP pushes configurations to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP. -1. Sign in to the [Azure portal](https://portal.azure.com/) with Application Administrative permissions. +1. Sign in to the [Azure portal](https://portal.azure.com) with Application Administrative permissions. 2. In the left navigation pane, select the **Azure Active Directory** service. 3. Under **Manage**, select **App registrations > New registration**. 4. Enter an application **Name**. For example, F5 BIG-IP Easy Button. |
active-directory | F5 Big Ip Oracle Jde Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-jde-easy-button.md | Integrate BIG-IP with Azure AD for many benefits: * See, [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) * See, [What is Conditional Access?](../conditional-access/overview.md) * Single sign-on (SSO) between Azure AD and BIG-IP published services-* Manage identities and access from the [Azure portal](https://portal.azure.com/) +* Manage identities and access from the [Azure portal](https://portal.azure.com) Learn more: This tutorial uses Guided Configuration 16.1 with an Easy Button template. With ## Register the Easy Button + Before a client or service accesses Microsoft Graph, the Microsoft identity platform must trust it. Learn more: [Quickstart: Register an application with the Microsoft identity platform](../develop/quickstart-register-app.md) The following instructions help you create a tenant app registration to authorize Easy Button access to Graph. With these permissions, the BIG-IP pushes the configurations to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP. -1. Sign in to the [Azure portal](https://portal.azure.com/) with Application Administrative permissions. +1. Sign in to the [Azure portal](https://portal.azure.com) with Application Administrative permissions. 2. From the left navigation pane, select the **Azure Active Directory** service. 3. Under **Manage**, select **App registrations > New registration**. 4. Enter an application **Name**. |
active-directory | F5 Big Ip Oracle Peoplesoft Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button.md | Integrate BIG-IP with Azure AD for many benefits: * See, [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) * See, [What is Conditional Access?](../conditional-access/overview.md) * Single sign-on (SSO) between Azure AD and BIG-IP published services-* Manage identities and access from the [Azure portal](https://portal.azure.com/) +* Manage identities and access from the [Azure portal](https://portal.azure.com) Learn more: With the Easy Button, admins don't go between Azure AD and a BIG-IP to enable se ## Register the Easy Button + Before a client or service accesses Microsoft Graph, the Microsoft identity platform must trust it. Learn more: [Quickstart: Register an application with the Microsoft identity platform](../develop/quickstart-register-app.md) The following instructions help you create a tenant app registration to authorize Easy Button access to Graph. With these permissions, the BIG-IP pushes the configurations to establish a trust between a SAML SP instance for a published application and Azure AD as the SAML IdP. -1. Sign in to the [Azure portal](https://portal.azure.com/) with Application Administrative permissions. +1. Sign in to the [Azure portal](https://portal.azure.com) with Application Administrative permissions. 2. From the left navigation pane, select the **Azure Active Directory** service. 3. Under **Manage**, select **App registrations > New registration**. 4. Enter an application **Name**. |
active-directory | F5 Big Ip Sap Erp Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md | In this article, learn to secure SAP ERP using Azure Active Directory (Azure AD) * [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) * [What is Conditional Access?](../conditional-access/overview.md) * Single sign-on (SSO) between Azure AD and BIG-IP published services-* Manage identities and access from the [Azure portal](https://portal.azure.com/) +* Manage identities and access from the [Azure portal](https://portal.azure.com) Learn more: This tutorial uses Guided Configuration 16.1 with an Easy Button template. With ## Register Easy Button + Before a client or service accesses Microsoft Graph, the Microsoft identity platform must trust it. See, [Quickstart: Register an application with the Microsoft identity platform](../develop/quickstart-register-app.md) Register the Easy Button client in Azure AD so that it can establish a trust between SAML SP instances of a BIG-IP published application and Azure AD as the SAML IdP. -1. Sign in to the [Azure portal](https://portal.azure.com/) with Application Administrator permissions. +1. Sign in to the [Azure portal](https://portal.azure.com) with Application Administrator permissions. 2. In the left navigation pane, select the **Azure Active Directory** service. 3. Under Manage, select **App registrations > New registration**. 4. Enter a **Name**. |
active-directory | F5 Bigip Deployment Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md | If you don't have the previous items for testing, you can deploy an AD domain en ## Azure deployment + You can deploy a BIG-IP in different topologies. This guide focuses on a network interface card (NIC) deployment. However, if your BIG-IP deployment requires multiple network interfaces for high availability, network segregation, or more than 1-GB throughput, consider using F5 pre-compiled [Azure Resource Manager (ARM) templates](https://clouddocs.f5.com/cloud/public/v1/azure/Azure_multiNIC.html). To deploy BIG-IP VE from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps): -1. Sign in to the [Azure portal](https://portal.azure.com/#home) using an account with permissions to create VMs, such as Contributor. +1. Sign in to the [Azure portal](https://portal.azure.com) using an account with permissions to create VMs, such as Contributor. 2. In the top ribbon search box, type **marketplace**. 3. Select **Enter**. 4. Type **F5** into the Marketplace filter. |
active-directory | F5 Passwordless Vpn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-passwordless-vpn.md | To improve the tutorial experience, you can learn industry-standard terminology ## Add F5 BIG-IP from the Azure AD gallery + Set up a SAML federation trust between the BIG-IP and Azure AD to allow the BIG-IP to hand off pre-authentication and [Conditional Access](../conditional-access/overview.md) to Azure AD before it grants access to the published VPN service. 1. Sign in to the [Azure portal](https://portal.azure.com) with application admin rights. |
active-directory | Grant Admin Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md | By default, granting tenant-wide admin consent to an application will allow all Granting tenant-wide admin consent may revoke any permissions that had previously been granted tenant-wide for that application. Permissions that have previously been granted by users on their own behalf won't be affected. - ## Prerequisites Granting tenant-wide admin consent requires you to sign in as a user that is authorized to consent on behalf of the organization. To grant tenant-wide admin consent, you need: ## Grant tenant-wide admin consent in Enterprise apps + You can grant tenant-wide admin consent through the **Enterprise applications** panel if the application has already been provisioned in your tenant. For example, an app could be provisioned in your tenant if at least one user has already consented to the application. For more information, see [How and why applications are added to Azure Active Directory](../develop/how-applications-are-added.md). :::zone pivot="portal" |
active-directory | Hide Application From User Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md | To hide an enterprise application using [Graph Explorer](https://developer.micro ## Hide Microsoft 365 applications from the My Apps portal + Use the following steps to hide all Microsoft 365 applications from the My Apps portal. The applications are still visible in the Office 365 portal. 1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator for your directory. |
active-directory | Manage Application Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md | In this article, you learn how to review permissions granted to applications in The steps in this article apply to all applications that were added to your Azure AD tenant via user or admin consent. For more information on consenting to applications, see [User and admin consent](user-admin-consent-overview.md). - ## Prerequisites To review permissions granted to applications, you need: Please see [Restore permissions granted to applications](restore-permissions.md) ## Review and revoke permissions + You can access the Azure portal to view the permissions granted to an app. You can revoke permissions granted by admins for your entire organization, and you can get contextual PowerShell scripts to perform other actions. To revoke an application's permissions that have been granted for the entire organization: |
active-directory | Manage Self Service Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-self-service-access.md | Using this feature, you can: - Optionally automatically assign self-service assigned users to an application role directly. - ## Prerequisites To enable self-service application access, you need: To enable self-service application access, you need: ## Enable self-service application access to allow users to find their own applications + Self-service application access is a great way to allow users to self-discover applications, and optionally allow the business group to approve access to those applications. For password single-sign on applications, you can also allow the business group to manage the credentials assigned to those users from their own My Apps portal. To enable self-service application access to an application, follow the steps below: |
active-directory | Migrate Applications From Okta | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-applications-from-okta.md | To complete the migration, repeat the configuration for all applications in the ## Migrate an OpenID Connect or OAuth 2.0 application to Azure AD + To migrate an OpenID Connect (OIDC) or OAuth 2.0 application to Azure AD, in your Azure AD tenant, configure the application for access. In this example, we convert a custom OIDC app. To complete the migration, repeat the configuration for all applications in the Okta tenant. -1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** > **Enterprise applications**. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory** > **Enterprise applications**. 2. Under **All applications**, select **New application**. 3. Select **Create your own application**. 4. On the menu that appears, name the OIDC app and then select **Register an application you're working on to integrate with Azure AD**. |
active-directory | Migrate Okta Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation.md | For this tutorial, you configure password hash synchronization and seamless SSO. ## Configure staged rollout features + Before you test defederating a domain, in Azure AD use a cloud authentication staged rollout to test defederating users. Learn more: [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md) After you enable password hash sync and seamless SSO on the Azure AD Connect server, configure a staged rollout: -1. In the [Azure portal](https://portal.azure.com/#home), select **View** or **Manage Azure Active Directory**. +1. Sign in to the [Azure portal](https://portal.azure.com), then select **View** or **Manage Azure Active Directory**. ![Screenshot of the Azure portal with welcome message.](media/migrate-okta-federation/portal.png) Users that converted to managed authentication might need access to applications Configure the enterprise application registration for Okta. -1. In the [Azure portal](https://portal.azure.com/#home), under **Manage Azure Active Directory**, select **View**. +1. Sign in to the [Azure portal](https://portal.azure.com), then under **Manage Azure Active Directory**, select **View**. 2. On the left menu, under **Manage**, select **Enterprise applications**. ![Screenshot of the left menu of the Azure portal.](media/migrate-okta-federation/enterprise-application.png) |
active-directory | Migrate Okta Sign On Policies Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-conditional-access.md | If you deployed hybrid Azure AD join, you can deploy another group policy to com ## Configure Azure AD Multi-Factor Authentication tenant settings + Before you convert to Conditional Access, confirm the base MFA tenant settings for your organization. 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Migrate Okta Sync Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning.md | After you disable Okta provisioning, the Azure AD Connect server can synchronize ## Enable cloud sync agents + After you disable Okta provisioning, the Azure AD cloud sync agent can synchronize objects. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 2. Browse to **Azure Active Directory**. 3. Select **Azure AD Connect**. 4. Select **Cloud Sync**. |
active-directory | Review Admin Consent Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md | To review and take action on admin consent requests, you need: ## Review and take action on admin consent requests + To review the admin consent requests and take action: 1. Sign in to the [Azure portal](https://portal.azure.com) as one of the registered reviewers of the admin consent workflow. |
active-directory | Tutorial Govern Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-govern-monitor.md | Using the information in this tutorial, an administrator of the application lear > * Access the sign-ins report > * Send logs to Azure Monitor - ## Prerequisites - An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Using the information in this tutorial, an administrator of the application lear ## Create an access review + The administrator wants to make sure that users or guests have appropriate access. They decide to ask users of the application to participate in an access review and recertify or attest to their need for access. When the access review is finished, they can then make changes and remove access from users who no longer need it. For more information, see [Manage user and guest user access with access reviews](../governance/manage-access-review.md). To create an access review: -1. Sign in to the [Azure portal](https://portal.azure.com/) with one of the roles listed in the prerequisites. +1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites. 1. Go to **Azure Active Directory**, and then select **Identity Governance**. 1. On the left menu, select **Access reviews**. 1. Select **New access review** to create a new access review. |
active-directory | Tutorial Manage Access Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-access-security.md | Using the information in this tutorial, an administrator learns how to: > * Communicate a term of use to users of the application > * Create a collection in the My Apps portal - ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Using the information in this tutorial, an administrator learns how to: ## Grant tenant-wide admin consent + For the application that the administrator added to their tenant, they want to set it up so that all users in the organization can use it and not have to individually request consent to use it. To avoid the need for user consent, they can grant consent for the application on behalf of all users in the organization. For more information, see [Consent and permissions overview](consent-and-permissions-overview.md). 1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites. |
active-directory | Tutorial Manage Certificates For Federated Single Sign On | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md | In this tutorial, an administrator of the application learns how to: > * Add email notification address for certificate expiration dates > * Renew certificates - ## Prerequisites - An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). The following two sections help you perform these steps. ### Create a new certificate + First, create and save a new certificate with a different expiration date: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | View Applications Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/view-applications-portal.md | In this quickstart, you learn how to use the Azure portal to search for and view It's recommended that you use a nonproduction environment to test the steps in this quickstart. - ## Prerequisites To view applications that have been registered in your Azure AD tenant, you need: To view applications that have been registered in your Azure AD tenant, you need ## View a list of applications + To view the enterprise applications registered in your tenant: 1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites. |
active-directory | How Manage User Assigned Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md | In this article, you learn how to create, list, delete, or assign a role to a us - If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - ## Create a user-assigned managed identity + To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment. 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Howto Assign Access Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-portal.md | After you've configured an Azure resource with a managed identity, you can give ## Use Azure RBAC to assign a managed identity access to another resource + >[!IMPORTANT] > The steps outlined below show how you grant access to a service using Azure RBAC. Check the specific service documentation for how to grant access; for example, check Azure Data Explorer for instructions. Some Azure services are in the process of adopting Azure RBAC on the data plane After you've enabled managed identity on an Azure resource, such as an [Azure VM - [Managed identity for Azure resources overview](overview.md) - To enable managed identity on an Azure virtual machine, see [Configure managed identities for Azure resources on a VM using the Azure portal](qs-configure-portal-windows-vm.md). - To enable managed identity on an Azure virtual machine scale set, see [Configure managed identities for Azure resources on a virtual machine scale set using the Azure portal](qs-configure-portal-windows-vmss.md).-- |
active-directory | Msi Tutorial Linux Vm Access Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/msi-tutorial-linux-vm-access-arm.md | The response contains details for the role assignment created, similar to the fo ## Get an access token using the VM's identity and use it to call Resource Manager + For the remainder of the tutorial, we will work from the VM we created earlier. To complete these steps, you need an SSH client. If you are using Windows, you can use the SSH client in the [Windows Subsystem for Linux](/windows/wsl/about). |
active-directory | Qs Configure Portal Windows Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md | Refer to the following Quickstarts to create a VM: ### Enable system-assigned managed identity on an existing VM + To enable system-assigned managed identity on a VM that was originally provisioned without it, your account needs the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role assignment. No other Azure AD directory role assignments are required. 1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with the Azure subscription that contains the VM. |
active-directory | Qs Configure Portal Windows Vmss | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md | Currently, the Azure portal does not support enabling system-assigned managed id ### Enable system-assigned managed identity on an existing virtual machine scale set + To enable the system-assigned managed identity on a virtual machine scale set that was originally provisioned without it: 1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with the Azure subscription that contains the virtual machine scale set. |
active-directory | Tutorial Linux Vm Access Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-arm.md | You learn how to: ## Grant access + When you use managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication. The Azure Resource Manager API supports Azure AD authentication. First, we need to grant this VM's identity access to a resource in Azure Resource Manager, in this case, the Resource Group in which the VM is contained. 1. Sign in to the [Azure portal](https://portal.azure.com) with your administrator account. |
active-directory | Tutorial Linux Vm Access Nonaad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md | You learn how to: ## Create a Key Vault + This section shows how to grant your VM access to a secret stored in a Key Vault. Using managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication. However, not all Azure services support Azure AD authentication. To use managed identities for Azure resources with those services, store the service credentials in Azure Key Vault, and use the VM's managed identity to access Key Vault to retrieve the credentials. First, we need to create a Key Vault and grant our VM's system-assigned managed identity access to the Key Vault. |
active-directory | Tutorial Vm Managed Identities Cosmos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md | Then read and write data as described in [these samples](../../cosmos-db/sql/sql ## Clean up steps + # [Portal](#tab/azure-portal) -1. In the [Azure portal](https://portal.azure.com), select the resource you want to delete. +1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Select the resource you want to delete. 1. Select **Delete**. |
active-directory | Tutorial Windows Vm Access Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md | This tutorial shows you how to access the Azure Resource Manager API using a Win ## Grant your VM access to a resource group in Resource Manager + Using managed identities for Azure resources, your application can get access tokens to authenticate to resources that support Azure AD authentication. The Azure Resource Manager API supports Azure AD authentication. We grant this VM's identity access to a resource in Azure Resource Manager, in this case a Resource Group. We assign the [Reader](../../role-based-access-control/built-in-roles.md#reader) role to the managed identity at the scope of the resource group. 1. Sign in to the [Azure portal](https://portal.azure.com) with your administrator account. |
active-directory | Tutorial Windows Vm Access Nonaad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md | You learn how to: ## Create a Key Vault + This section shows how to grant your VM access to a secret stored in a Key Vault. When you use managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication. However, not all Azure services support Azure AD authentication. To use managed identities for Azure resources with those services, store the service credentials in Azure Key Vault, and use the VM's managed identity to access Key Vault to retrieve the credentials. First, we need to create a Key Vault and grant our VM's system-assigned managed identity access to the Key Vault. |
active-directory | Tutorial Windows Vm Ua Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-ua-arm.md | CanDelegate: False ### Get an access token + For the remainder of the tutorial, you work from the VM you created earlier. 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Cross Tenant Synchronization Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md | By the end of this article, you'll be able to: ## Step 2: Enable user synchronization in the target tenant + ![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant** 1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator in the target tenant. |
active-directory | Azure Pim Resource Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-pim-resource-rbac.md | My audit enables you to view your personal role activity. ## Get reason, approver, and ticket number for approval events + 1. Sign in to the [Azure portal](https://portal.azure.com) with Privileged Role Administrator role permissions, and open Azure AD. 1. Select **Audit logs**. 1. Use the **Service** filter to display only audit events for the Privileged Identity Management service. On the **Audit logs** page, you can: |
active-directory | Groups Activate Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md | This article is for eligible members or owners who want to activate their group ## Activate a role + When you need to take on a group membership or ownership, you can request activation by using the **My roles** navigation option in PIM. 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Groups Discover Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md | In Azure Active Directory (Azure AD), part of Microsoft Entra, you can use Privi ## Identify groups to manage + Before you start, you need an Azure AD security group or Microsoft 365 group. To learn more about group management in Azure AD, see [Manage Azure Active Directory groups and group membership](../fundamentals/how-to-manage-groups.md). Dynamic groups and groups synchronized from an on-premises environment can't be managed in PIM for Groups. |
active-directory | Groups Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md | Role settings are defined per role per group. All assignments for the same role ## Update role settings + To open the settings for a group role: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Pim Approval Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-approval-workflow.md | With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), ## View pending requests + As a delegated approver, you'll receive an email notification when an Azure AD role request is pending your approval. You can view these pending requests in Privileged Identity Management. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Open **Azure AD Privileged Identity Management**. |
active-directory | Pim Complete Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-complete-roles-and-resource-roles-review.md | Once the review has been created, follow the steps in this article to complete t ## Complete access reviews -1. Sign in to the [Azure portal](https://portal.azure.com/). For **Azure resources**, navigate to **Privileged Identity Management** and select **Azure resources** under **Manage** from the dashboard. For **Azure AD roles**, select **Azure AD roles** from the same dashboard. ++1. Sign in to the [Azure portal](https://portal.azure.com). For **Azure resources**, navigate to **Privileged Identity Management** and select **Azure resources** under **Manage** from the dashboard. For **Azure AD roles**, select **Azure AD roles** from the same dashboard. 2. For **Azure resources**, select your resource under **Azure resources** and then select **Access reviews** from the dashboard. For **Azure AD roles**, proceed directly to the **Access reviews** on the dashboard. |
active-directory | Pim Create Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-roles-and-resource-roles-review.md | Access Reviews for **Service Principals** requires an Entra Workload Identities ## Create access reviews -1. Sign in to the [Azure portal](https://portal.azure.com/) as a user that is assigned to one of the prerequisite role(s). ++1. Sign in to the [Azure portal](https://portal.azure.com) as a user that is assigned to one of the prerequisite role(s). 2. Select **Identity Governance**. |
active-directory | Pim Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-getting-started.md | Once Privileged Identity Management is set up, you can learn your way around. ## Add a PIM tile to the dashboard + To make it easier to open Privileged Identity Management, add a PIM tile to your Azure portal dashboard. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **All services** and find the **Azure AD Privileged Identity Management** service. |
active-directory | Pim How To Activate Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md | This article is for administrators who need to activate their Azure AD role in P ## Activate a role + When you need to assume an Azure AD role, you can request activation by opening **My roles** in Privileged Identity Management. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). |
active-directory | Pim How To Add Role To User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md | Privileged Identity Management supports both built-in and custom Azure AD roles. ## Assign a role + Follow these steps to make a user eligible for an Azure AD admin role. -1. Sign in to the [Azure portal](https://portal.azure.com/) with a user that is a member of the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role. +1. Sign in to the [Azure portal](https://portal.azure.com) with a user that is a member of the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role. 1. Open **Azure AD Privileged Identity Management**. |
active-directory | Pim How To Change Default Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md | PIM role settings are also known as PIM policies. ## Open role settings + To open the settings for an Azure AD role: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Pim How To Configure Security Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md | Severity: **Low** ## Customize security alert settings + Follow these steps to configure security alerts for Azure AD roles in Privileged Identity Management: -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). |
active-directory | Pim Perform Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-perform-roles-and-resource-roles-review.md | If you're a privileged role administrator or global administrator interested in ## Approve or deny access + You can approve or deny access based on whether the user still needs access to the role. Choose **Approve** if you want them to stay in the role, or **Deny** if they do not need the access anymore. The users' assignment status will not change until the review closes and the administrator applies the results. Common scenarios in which certain denied users cannot have results applied to them may include the following: - **Reviewing members of a synced on-premises Windows AD group**: If the group is synced from an on-premises Windows AD, the group cannot be managed in Azure AD and therefore membership cannot be changed. You can approve or deny access based on whether the user still needs access to t Follow these steps to find and complete the access review: -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** and open **Privileged Identity Management**. 1. Select **Review access**. If you have any pending access reviews, they will appear in the access reviews page. |
active-directory | Pim Resource Roles Activate Your Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md | This article is for members who need to activate their Azure resource role in Pr ## Activate a role + When you need to take on an Azure resource role, you can request activation by using the **My roles** navigation option in Privileged Identity Management. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). |
active-directory | Pim Resource Roles Approval Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-approval-workflow.md | Follow the steps in this article to approve or deny requests for Azure resource ## View pending requests + As a delegated approver, you'll receive an email notification when an Azure resource role request is pending your approval. You can view these pending requests in Privileged Identity Management. -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Open **Azure AD Privileged Identity Management**. |
active-directory | Pim Resource Roles Assign Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md | For more information, see [What is Azure attribute-based access control (Azure A ## Assign a role + Follow these steps to make a user eligible for an Azure resource role. -1. Sign in to the [Azure portal](https://portal.azure.com/) with Owner or User Access Administrator role permissions. +1. Sign in to the [Azure portal](https://portal.azure.com) with Owner or User Access Administrator role permissions. 1. Open **Azure AD Privileged Identity Management**. |
active-directory | Pim Resource Roles Configure Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md | Alert | Severity | Trigger | Recommendation ## Configure security alert settings + Follow these steps to configure security alerts for Azure roles in Privileged Identity Management: -1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). |
active-directory | Pim Resource Roles Configure Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md | PIM role settings are also known as PIM policies. ## Open role settings + To open the settings for an Azure resource role: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Pim Resource Roles Discover Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-discover-resources.md | You can view and manage the management groups or subscriptions to which you have ## Discover resources -1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Sign in to the [Azure portal](https://portal.azure.com). 1. Open **Azure AD Privileged Identity Management**. |
active-directory | Pim Security Wizard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-security-wizard.md | Also, keep role assignments permanent if a user has a Microsoft account (in othe ## Open Discovery and insights (preview) -1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Sign in to the [Azure portal](https://portal.azure.com). 1. Open **Azure AD Privileged Identity Management**. |
active-directory | How To View Applied Conditional Access Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md | The Azure AD Graph PowerShell module doesn't support viewing applied Conditional ## View Conditional Access policies in Azure AD sign-in logs + The activity details of sign-in logs contain several tabs. The **Conditional Access** tab lists the Conditional Access policies applied to that sign-in event. 1. Sign in to the [Azure portal](https://portal.azure.com) using the Security Reader role. |
active-directory | Howto Access Activity Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md | The following roles provide read access to audit and sign-in logs. Always use th ## Access the activity logs in the portal + 1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. 1. Go to **Azure AD** and select **Audit logs**, **Sign-in logs**, or **Provisioning logs**. 1. Adjust the filter according to your needs. |
active-directory | Howto Analyze Activity Logs Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md | To follow along, you need: ## Navigate to the Log Analytics workspace + 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **Azure Active Directory**, and then select **Logs** from the **Monitoring** section to open your Log Analytics workspace. The workspace will open with a default query. The workbooks provide several reports related to common scenarios involving audi * [Get started with queries in Azure Monitor logs](../../azure-monitor/logs/get-started-queries.md) * [Create and manage alert groups in the Azure portal](../../azure-monitor/alerts/action-groups.md)-* [Install and use the log analytics views for Azure Active Directory](howto-install-use-log-analytics-views.md) +* [Install and use the log analytics views for Azure Active Directory](howto-install-use-log-analytics-views.md) |
active-directory | Howto Configure Prerequisites For Reporting Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md | To enable your application to access Microsoft Graph without user intervention, ### Register an Azure AD application -1. In the [Azure portal](https://portal.azure.com), go to **Azure Active Directory** > **App registrations**. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Go to **Azure Active Directory** > **App registrations**. + 1. Select **New registration**. ![Screenshot of the App registrations page, with the New registration button highlighted.](./media/howto-configure-prerequisites-for-reporting-api/new-app-registration.png) |
active-directory | Howto Integrate Activity Logs With Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md | To use this feature, you need: ## Send logs to Azure Monitor + Follow the steps below to send logs from Azure Active Directory to Azure Monitor. Looking for how to set up Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. 1. Sign in to the [Azure portal](https://portal.azure.com) as a **Security Administrator** or **Global Administrator**. If you do not see logs appearing in the selected destination after 15 minutes, s * [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) * [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md) * [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)-- |
active-directory | Howto Manage Inactive User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md | The following details relate to the `lastSignInDateTime` property. ## How to investigate a single user + If you need to view the latest sign-in activity for a user, you can view the user's sign-in details in Azure AD. You can also use the Microsoft Graph **users by name** scenario described in the [previous section](#detect-inactive-user-accounts-with-microsoft-graph). 1. Sign in to the [Azure portal](https://portal.azure.com). The last sign-in date and time shown on this tile may take up to 6 hours to upda * [Get data using the Azure Active Directory reporting API with certificates](tutorial-access-api-with-certificates.md) * [Audit API reference](/graph/api/resources/directoryaudit) * [Sign-in activity report API reference](/graph/api/resources/signin)- |
active-directory | Howto Troubleshoot Sign In Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md | You need: ## Gather sign-in details + 1. Sign in to the [Azure portal](https://portal.azure.com) using a role of least privilege access. 1. Go to **Azure AD** > **Sign-ins**. 1. Use the filters to narrow down the results If all else fails, or the issue persists despite taking the recommended course o * [Sign-ins error codes reference](./concept-sign-ins.md) * [Sign-ins report overview](concept-sign-ins.md)-* [How to use the Sign-in diagnostics](howto-use-sign-in-diagnostics.md) +* [How to use the Sign-in diagnostics](howto-use-sign-in-diagnostics.md) |
active-directory | Howto Use Azure Monitor Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md | To use Azure Workbooks for Azure AD, you need: ## How to access Azure Workbooks for Azure AD + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**. - **Workbooks**: All workbooks created in your tenant Workbooks can be created from scratch or from a template. When creating a new wo ## Next steps * [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).-* [Create custom Azure Monitor queries using Azure PowerShell](../governance/entitlement-management-logs-and-reporting.md). +* [Create custom Azure Monitor queries using Azure PowerShell](../governance/entitlement-management-logs-and-reporting.md). |
active-directory | Quickstart Access Log With Graph Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md | To complete the scenario in this quickstart, you need: ## Perform a failed sign-in + The goal of this step is to create a record of a failed sign-in in the Azure AD sign-ins log. **To complete this step:** -1. Sign in to the [Azure portal](https://portal.azure.com/) as Isabella Simonsen using an incorrect password. +1. Sign in to the [Azure portal](https://portal.azure.com) as Isabella Simonsen using an incorrect password. 2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log. For more information, see [Activity reports](reference-reports-latencies.md#activity-reports). |
active-directory | Quickstart Analyze Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-analyze-sign-in.md | To complete the scenario in this quickstart, you need: ## Perform a failed sign-in + The goal of this step is to create a record of a failed sign-in in the Azure AD sign-ins log. **To complete this step:** -1. Sign in to the [Azure portal](https://portal.azure.com/) as Isabella Simonsen using an incorrect password. +1. Sign in to the [Azure portal](https://portal.azure.com) as Isabella Simonsen using an incorrect password. 2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log. For more information, see [Activity reports](reference-reports-latencies.md#activity-reports). |
active-directory | Quickstart Azure Monitor Route Logs To Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md | To use this feature, you need: ## Archive logs to an Azure storage account + 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **Azure Active Directory** > **Monitoring** > **Audit logs**. |
active-directory | Tutorial Azure Monitor Stream Logs To Event Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md | To use this feature, you need: ## Stream logs to an event hub + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Audit logs**. |
active-directory | Tutorial Log Analytics Wizard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-log-analytics-wizard.md | Familiarize yourself with these articles: ## Configure a workspace + This procedure outlines how to configure a log analytics workspace for your audit and sign-in logs. Configuring a log analytics workspace consists of two main steps: Create a new column by combining the values of two other columns: `SigninLogs | limit 10 | extend RiskUser = strcat(RiskDetail, "-", Identity) | project RiskUser, ClientAppUsed` ----## Create an alert rule +## Create an alert rule This procedure shows how to send alerts when the breakglass account is used. **To create an alert rule:** - 1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. 2. Search for **Azure Active Directory**. This procedure shows how to create a new workbook using the quickstart template. -## Add a query to a workbook template +## Add a query to a workbook template This procedure shows how to add a query to an existing workbook template. The example is based on a query that shows the distribution of conditional access success to failures. |
active-directory | Admin Units Assign Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md | You can assign an Azure AD role with an administrative unit scope by using the A ### Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Administrative units** and then select the administrative unit that you want to assign a user role scope to. |
active-directory | Admin Units Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md | You can create a new administrative unit by using either the Azure portal, Power ### Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Administrative units**. |
active-directory | Admin Units Members Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-add.md | You can add users, groups, or devices to administrative units using the Azure po ### Add a single user, group, or device to administrative units + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory**. |
active-directory | Admin Units Members Dynamic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-dynamic.md | Follow these steps to create administrative units with dynamic membership rules ### Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory**. Body - [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md) - [Add users or groups to an administrative unit](admin-units-members-add.md) - [Azure AD administrative units: Troubleshooting and FAQ](admin-units-faq-troubleshoot.yml)- |
active-directory | Admin Units Members List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-list.md | You can list the users, groups, or devices in administrative units using the Azu ### List the administrative units for a single user, group, or device + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory**. |
active-directory | Admin Units Members Remove | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-remove.md | You can remove users, groups, or devices from administrative units individually ### Remove a single user, group, or device from administrative units + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory**. |
active-directory | Assign Roles Different Scopes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/assign-roles-different-scopes.md | This section describes how to assign roles at the tenant scope. ### Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Roles and administrators** to see the list of all available roles. Follow these instructions to assign a role at application scope using the Micros * [List Azure AD role assignments](view-assignments.md). * [Assign Azure AD roles to users](manage-roles-portal.md).-* [Assign Azure AD roles to groups](groups-assign-role.md) +* [Assign Azure AD roles to groups](groups-assign-role.md) |
active-directory | Custom Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-create.md | For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr ### Create a new custom role to grant access to manage app registrations + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Roles and administrators** > **New custom role**. |
active-directory | Custom Enterprise Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-apps.md | Granting the update permission is done in two steps: ### Create a new custom role + >[!NOTE] > Custom roles are created and managed at an organization-wide level and are available only from the organization's Overview page. |
active-directory | Groups Assign Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md | For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr ## Azure portal + Assigning an Azure AD role to a group is similar to assigning users and service principals except that only groups that are role-assignable can be used. > [!TIP] |
active-directory | Groups Create Eligible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-create-eligible.md | For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr ## Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Groups** > **All groups** > **New group**. |
active-directory | Groups Remove Assignment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-remove-assignment.md | For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr ## Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Roles and administrators** > *role name*. |
active-directory | Groups View Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-view-assignments.md | For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr ## Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Groups**. |
active-directory | List Role Assignments Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/list-role-assignments-users.md | A role can be assigned to a user directly or transitively via a group. This arti For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md). ## Azure portal++ Follow these steps to list Azure AD roles for a user using the Azure portal. Your experience will be different depending on whether you have [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) enabled. 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Manage Roles Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/manage-roles-portal.md | Follow these steps to assign Azure AD roles using the Azure portal. Your experie ### Assign a role + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Roles and administrators** to see the list of all available roles. |
active-directory | My Staff Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/my-staff-configure.md | To complete this article, you need the following resources and privileges: ## How to enable My Staff + Once you have configured administrative units, you can apply this scope to your users who access My Staff. Only users who are assigned an administrative role can access My Staff. To enable My Staff, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator, User Administrator, or Group Administrator. |
active-directory | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/prerequisites.md | To use AzureADPreview, follow these steps to make sure it is imported into the c ## Graph Explorer + To manage Azure AD roles using the [Microsoft Graph API](/graph/overview) and [Graph Explorer](/graph/graph-explorer/graph-explorer-overview), you must do the following: 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Quickstart App Registration Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/quickstart-app-registration-limits.md | For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr ### Create a custom role + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Roles and administrators** and then select **New custom role**. |
active-directory | Role Definitions List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/role-definitions-list.md | For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr ## Azure portal + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Azure Active Directory** > **Roles and administrators** to see the list of all available roles. |
active-directory | View Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/view-assignments.md | For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr ## Azure portal + This procedure describes how to list role assignments with organization-wide scope. 1. Sign in to the [Azure portal](https://portal.azure.com). |
active-directory | Bigpanda Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bigpanda-tutorial.md | To integrate Azure Active Directory with BigPanda, you need: * An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-* BigPanda single sign-on (SSO) enabled subscription. +* A BigPanda account with the Single Sign On role set to Full Access. See [Roles and Resource Permissions](https://docs.bigpanda.io/docs/roles-management#roles-and-resource-permissions) in the BigPanda documentation for more information. ## Add application and assign a test user Complete the following steps to enable Azure AD single sign-on in the Azure port `https://api.bigpanda.io/login/<INSTANCE>` > [!NOTE]- > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [BigPanda support team](mailto:support@bigpanda.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Reply URL and Sign on URL. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. -1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. +1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the file and save it on your computer. - ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate") + ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate") -1. On the **Set up BigPanda** section, copy the appropriate URL(s) based on your requirement. +1. On the **Set up BigPanda** section, copy the **Login URL**. ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata") ## Configure BigPanda SSO -To configure single sign-on on **BigPanda** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [BigPanda support team](mailto:support@bigpanda.io). They set this setting to have the SAML SSO connection set properly on both sides. --### Create BigPanda test user --In this section, a user called B.Simon is created in BigPanda. BigPanda supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in BigPanda, a new one is commonly created after authentication. +To configure single sign-on on **BigPanda** side, please follow the instructions from [BigPanda documentation](https://docs.bigpanda.io/docs/azure-ad-active-directory). ## Test SSO |
active-directory | Certify Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/certify-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. 4. On the **Basic SAML Configuration** section, perform the following steps: In the **Identifier** text box, type the URL:- `https://www.certify.com` + `https://expense.certify.com` 5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Raw)** from the given options as per your requirement and save it on your computer. |
active-directory | Netskope Cloud Exchange Administration Console Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netskope-cloud-exchange-administration-console-tutorial.md | Complete the following steps to enable Azure AD single sign-on in the Azure port `https://<Cloud_Exchange_FQDN>/login` > [!NOTE]- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Netskope Cloud Exchange Administration Console support team](mailto:support@netskope.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL based on your cloud exchange deployment. You can also contact [Netskope Cloud Exchange Administration Console support team](mailto:support@netskope.com) to get help to determine these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 1. Netskope Cloud Exchange Administration Console application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Complete the following steps to enable Azure AD single sign-on in the Azure port > [!NOTE] > Please click [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui) to know how to configure Role in Azure AD. -1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. +1. 
On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. - ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate") + ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") 1. On the **Set up Netskope Cloud Exchange Administration Console** section, copy the appropriate URL(s) based on your requirement. Complete the following steps to enable Azure AD single sign-on in the Azure port ## Configure Netskope Cloud Exchange Administration Console SSO -To configure single sign-on on **Netskope Cloud Exchange Administration Console** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Netskope Cloud Exchange Administration Console support team](mailto:support@netskope.com). They set this setting to have the SAML SSO connection set properly on both sides +To configure single sign-on on the **Netskope Cloud Exchange Administration Console** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Netskope Cloud Exchange Administration Console support team](mailto:support@netskope.com). The support team uses these values to configure the SAML SSO connection properly on both sides. ### Create Netskope Cloud Exchange Administration Console test user |
active-directory | Verifiable Credentials Configure Issuer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md | The following diagram illustrates the Microsoft Entra Verified ID architecture a ## Create the verified credential expert card in Azure + In this step, you create the verified credential expert card by using Microsoft Entra Verified ID. After you create the credential, your Azure AD tenant can issue it to users who initiate the process. -1. Using the [Azure portal](https://portal.azure.com/), search for **Verified ID** and select it. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. Search for **Verified ID** and select it. 1. After you [set up your tenant](verifiable-credentials-configure-tenant.md), the **Create credential** option should appear. Alternatively, you can select **Credentials** in the left hand menu and select **+ Add a credential**. 1. In **Create credential**, select **Custom Credential** and click **Next**: |
active-directory | Verifiable Credentials Configure Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md | After you create your key vault, Verifiable Credentials generates a set of keys ### Set access policies for the Verified ID Admin user -1. In the [Azure portal](https://portal.azure.com/), go to the key vault you use for this tutorial. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Go to the key vault you use for this tutorial. 1. Under **Settings**, select **Access policies**. After you create your key vault, Verifiable Credentials generates a set of keys To set up Verified ID, follow these steps: -1. In the [Azure portal](https://portal.azure.com/), search for *Verified ID*. Then, select **Verified ID**. +1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Search for *Verified ID*. Then, select **Verified ID**. 1. From the left menu, select **Setup**. If you ever are in need of manually resetting the permissions, the access policy Your application needs to get access tokens when it wants to call into Microsoft Entra Verified ID so it can issue or verify credentials. To get access tokens, you have to register an application and grant API permission for the Verified ID Request Service. For example, use the following steps for a web application: -1. Sign in to the [Azure portal](https://portal.azure.com/) with your administrative account. +1. Sign in to the [Azure portal](https://portal.azure.com) with your administrative account. 1. If you have access to multiple tenants, select the **Directory + subscription**. Then, search for and select your **Azure Active Directory**. |
active-directory | Workload Identity Federation Block Using Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-block-using-azure-policy.md | The Not allowed resource types built-in policy can be used to block the creation ## Create a policy assignment + To create a policy assignment for the Not allowed resource types that blocks the creation of federated identity credentials in a subscription or resource group: 1. Sign in to the [Azure portal](https://portal.azure.com). To create a policy assignment for the Not allowed resource types that blocks the ## Next steps -Learn how to [manage a federated identity credential on a user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) in Azure Active Directory (Azure AD). +Learn how to [manage a federated identity credential on a user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) in Azure Active Directory (Azure AD). |
ai-services | Intro To Spatial Analysis Public Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/intro-to-spatial-analysis-public-preview.md | Spatial Analysis can also be configured to detect if a person is wearing a prote ![Spatial Analysis classifies whether people have facemasks in an elevator](https://user-images.githubusercontent.com/11428131/137015842-ce524f52-3ac4-4e42-9067-25d19b395803.png) +## Input requirements ++Spatial Analysis works on videos that meet the following requirements: +* The video must be in RTSP, rawvideo, MP4, FLV, or MKV format. +* The video codec must be H.264, HEVC (H.265), rawvideo, VP9, or MPEG-4. + ## Get started Follow the [quickstart](spatial-analysis-container.md) to set up the Spatial Analysis container and begin analyzing video. |
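The input requirements above lend themselves to a quick preflight check before submitting video. A minimal sketch in Python; the `is_supported_input` helper and the set literals are illustrative, not part of the Spatial Analysis container:

```python
# Supported containers and codecs, as listed in the input requirements above.
SUPPORTED_FORMATS = {"rtsp", "rawvideo", "mp4", "flv", "mkv"}
SUPPORTED_CODECS = {"h264", "hevc", "rawvideo", "vp9", "mpeg4"}

def is_supported_input(container: str, codec: str) -> bool:
    """Return True if the container/codec pair meets the Spatial Analysis input requirements."""
    return (container.lower() in SUPPORTED_FORMATS
            and codec.lower() in SUPPORTED_CODECS)

print(is_supported_input("mp4", "h264"))   # True
print(is_supported_input("avi", "h264"))   # False: AVI is not a supported container
```

In practice you would obtain the container and codec names from a probe tool such as ffprobe and feed them into a check like this before pointing the container at the stream.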
ai-services | Concept Invoice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-invoice.md | monikerRange: '<=doc-intel-3.0.0' [!INCLUDE [applies to v2.1](includes/applies-to-v2-1.md)] ::: moniker-end -The Document Intelligence invoice model uses powerful Optical Character Recognition (OCR) capabilities to analyze and extract key fields and line items from sales invoices, utility bills, and purchase orders. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports both English and Spanish invoices. +The Document Intelligence invoice model uses powerful Optical Character Recognition (OCR) capabilities to analyze and extract key fields and line items from sales invoices, utility bills, and purchase orders. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports invoices in 27 languages. 
**Supported document types:** See how data, including customer information, vendor details, and line items, is | • Italian (it) | Italy (it)| | • Portuguese (pt) | Portugal (pt), Brazil (br)| | • Dutch (nl) | Netherlands (nl)|+| • Czech (cs) | Czech Republic (cz)| +| • Danish (da) | Denmark (dk)| +| • Estonian (et) | Estonia (ee)| +| • Finnish (fi) | Finland (fl)| +| • Croatian (hr) | Bosnia and Herzegovina (ba), Croatia (hr), Serbia (rs)| +| • Hungarian (hu) | Hungary (hu)| +| • Icelandic (is) | Iceland (is)| +| • Japanese (ja) | Japan (ja)| +| • Korean (ko) | Korea (kr)| +| • Lithuanian (lt) | Lithuania (lt)| +| • Latvian (lv) | Latvia (lv)| +| • Malay (ms) | Malaysia (ms)| +| • Norwegian (nb) | Norway (no)| +| • Polish (pl) | Poland (pl)| +| • Romanian (ro) | Romania (ro)| +| • Slovak (sk) | Slovakia (sv)| +| • Slovenian (sl) | Slovenia (sl)| +| • Serbian (sr-Latn) | Serbia (latn-rs)| +| • Albanian (sq) | Albania (al)| +| • Swedish (sv) | Sweden (se)| +| • Chinese (simplified) (zh-hans) | China (zh-hans-cn)| +| • Chinese (traditional) (zh-hant) | Hong Kong (zh-hant-hk), Taiwan (zh-hant-tw)| ++| Supported Currency Codes | Details | +|:-|:| +| • ARS | United States (us) | +| • AUD | Australia (au) | +| • BRL | United States (us) | +| • CAD | Canada (ca) | +| • CLP | United States (us) | +| • CNY | United States (us) | +| • COP | United States (us) | +| • CRC | United States (us) | +| • CZK | United States (us) | +| • DKK | United States (us) | +| • EUR | United States (us) | +| • GBP | United Kingdom (uk) | +| • HUF | United States (us) | +| • IDR | United States (us) | +| • INR | United States (us) | +| • ISK | United States (us) | +| • JPY | Japan (jp) | +| • KRW | United States (us) | +| • NOK | United States (us) | +| • PAB | United States (us) | +| • PEN | United States (us) | +| • PLN | United States (us) | +| • RON | United States (us) | +| • RSD | United States (us) | +| • SEK | United States (us) | +| • TWD | United States (us) | +| • USD | United States 
(us) | ## Field extraction The invoice key-value pairs and line items extracted are in the `documentResults ### Key-value pairs -The prebuilt invoice **2022-06-30** and later releases support returns key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field or telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures. +The prebuilt invoice **2022-06-30** and later releases support the optional return of key-value pairs. By default, the return of key-value pairs is disabled. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field or telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures. Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are always spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key is either customer or user (based on context). |
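Because key-value pair extraction is now off by default, the caller has to opt in when building the analyze request. A hedged sketch of assembling such a request URL in Python; the `features=keyValuePairs` query parameter, API version, and endpoint shape are assumptions, so verify them against the current Document Intelligence REST reference before relying on them:

```python
from urllib.parse import urlencode

def build_invoice_analyze_url(endpoint: str, api_version: str,
                              want_key_value_pairs: bool) -> str:
    """Build a prebuilt-invoice analyze URL, optionally opting in to key-value pairs."""
    params = {"api-version": api_version}
    if want_key_value_pairs:
        # Assumed opt-in switch; confirm the exact parameter name in the API reference.
        params["features"] = "keyValuePairs"
    return (f"{endpoint}/formrecognizer/documentModels/"
            f"prebuilt-invoice:analyze?{urlencode(params)}")

url = build_invoice_analyze_url("https://contoso.cognitiveservices.azure.com",
                                "2023-07-31", True)
print(url)
```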
ai-services | Data Formats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/custom/how-to/data-formats.md | In the abstractive document summarization scenario, each document (whether it ha ## Custom summarization conversation sample format -In the abstractive conversation summarization scenario, each conversation (whether it has a provided label or not) is expected to be provided in a plain .txt file. Each conversation turn must be provided in a single line that is formatted as Speaker + ": " + text (i.e., Speaker and text are separated by a colon followed by a space). The following is an example conversation of three turns between two speakers (Agent and Customer). --Agent: Hello, how can I help you? --Customer: How do I upgrade office? I have been getting error messages all day. --Agent: Please press the upgrade button, then sign in and follow the instructions. + In the abstractive conversation summarization scenario, each conversation (whether it has a provided label or not) is expected to be provided in a .json file, which is similar to the input format for our [pre-built conversation summarization service](https://learn.microsoft.com/rest/api/language/2023-04-01/analyze-conversation/submit-job?tabs=HTTP#textconversation). The following is an example conversation of three turns between two speakers (Agent and Customer). +```json +{ + "conversationItems": [ + { + "text": "Hello, how can I help you?", + "modality": "text", + "id": "1", + "participantId": "Agent", + "role": "Agent" + }, + { + "text": "How do I upgrade office? 
I have been getting error messages all day.", + "modality": "text", + "id": "2", + "participantId": "Customer", + "role": "Customer" + }, + { + "text": "Please press the upgrade button, then sign in and follow the instructions.", + "modality": "text", + "id": "3", + "participantId": "Agent", + "role": "Agent" + } + ], + "modality": "text", + "id": "conversation1", + "language": "en" +} +``` -## Custom summarization document and sample mapping JSON format +## Sample mapping JSON format In both document and conversation summarization scenarios, a set of documents and corresponding labels can be provided in a single JSON file that references individual document/conversation and summary files. -<! The JSON file is expected to contain the following fields: +The JSON file is expected to contain the following fields: ```json-projectFileVersion": TODO, -"stringIndexType": TODO, -"metadata": { - "projectKind": TODO, - "storageInputContainerName": TODO, - "projectName": a string project name, - "multilingual": TODO, - "description": a string project description, - "language": TODO: -}, -"assets": { - "projectKind": TODO, - "documents": a list of document-label pairs, each is defined with three fields: - [ - { - "summaryLocation": a string path to the summary txt file, - "location": a string path to the document txt file, - "language": TODO - } - ] +{ + "projectFileVersion": The version of the exported project, + "stringIndexType": Specifies the method used to interpret string offsets. For additional information see https://aka.ms/text-analytics-offsets, + "metadata": { + "projectKind": The project kind you need to import. Values for summarization are CustomAbstractiveSummarization and CustomConversationSummarization. 
Both projectKind fields must be identical., + "storageInputContainerName": The name of the storage container that contains the documents/conversations and the summaries, + "projectName": a string project name, + "multilingual": A flag denoting whether this project should allow multilingual documents or not. For Summarization this option is turned off, + "description": a string project description, + "language": The default language of the project. Possible values are "en" and "en-us" + }, + "assets": { + "projectKind": The project kind you need to import. Values for summarization are CustomAbstractiveSummarization and CustomConversationSummarization. Both projectKind fields must be identical., + "documents": a list of document-label pairs, each is defined with three fields:[ + { + "summaryLocation": a string path to the summary txt (for documents) or json (for conversations) file, + "location": a string path to the document txt (for documents) or json (for conversations) file, + "language": The language of the documents. Possible values are "en" and "en-us" + } + ] + } }-``` > +``` +## Custom document summarization mapping sample The following is an example mapping file for the abstractive document summarization scenario with three documents and corresponding labels. The following is an example mapping file for the abstractive document summarizat } ``` +## Custom conversation summarization mapping sample ++The following is an example mapping file for the abstractive conversation summarization scenario with three conversations and corresponding labels. 
++```json +{ + "projectFileVersion": "2022-10-01-preview", + "stringIndexType": "Utf16CodeUnit", + "metadata": { + "projectKind": "CustomConversationSummarization", + "storageInputContainerName": "abstractivesummarization", + "projectName": "sample_custom_summarization", + "multilingual": false, + "description": "Creating a custom summarization model", + "language": "en-us" + }, + "assets": { + "projectKind": "CustomConversationSummarization", + "documents": [ + { + "summaryLocation": "conv1_summary.txt", + "location": "conv1.json", + "language": "en-us" + }, + { + "summaryLocation": "conv2_summary.txt", + "location": "conv2.json", + "language": "en-us" + }, + { + "summaryLocation": "conv3_summary.txt", + "location": "conv3.json", + "language": "en-us" + } + ] + } +} +``` + ## Next steps [Get started with custom summarization](../../custom/quickstart.md) |
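The mapping-file constraints described above (both `projectKind` fields identical, and a summary location, document location, and language per entry) can be sanity-checked before import. A minimal sketch; `validate_mapping` is an illustrative helper, not part of any Language service SDK:

```python
import json

VALID_KINDS = {"CustomAbstractiveSummarization", "CustomConversationSummarization"}

def validate_mapping(raw: str) -> list:
    """Return a list of problems found in a summarization mapping JSON file."""
    data = json.loads(raw)
    problems = []
    meta_kind = data.get("metadata", {}).get("projectKind")
    asset_kind = data.get("assets", {}).get("projectKind")
    if meta_kind not in VALID_KINDS:
        problems.append(f"unsupported metadata.projectKind: {meta_kind!r}")
    if meta_kind != asset_kind:
        # The article notes both projectKind fields must be identical.
        problems.append("metadata.projectKind and assets.projectKind must be identical")
    for entry in data.get("assets", {}).get("documents", []):
        missing = {"summaryLocation", "location", "language"} - set(entry)
        if missing:
            problems.append(f"entry {entry.get('location')!r} missing {sorted(missing)}")
    return problems
```

Running this over a mapping file before uploading it catches the most common import failures early.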
ai-services | Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/alerts.md | Metrics Advisor supports four different types of hooks: email, Teams, webhook, a ### Email hook > [!Note]-> Metrics Advisor resource administrators need to configure the Email settings, and input **SMTP related information** into Metrics Advisor before anomaly alerts can be sent. The resource group admin or subscription admin needs to assign at least one *Azure AI Metrics Advisor Administrator* role in the Access control tab of the Metrics Advisor resource. [Learn more about e-mail settings configuration](../faq.yml#how-to-set-up-email-settings-and-enable-alerting-by-email-). +> Metrics Advisor resource administrators need to configure the Email settings, and input **SMTP related information** into Metrics Advisor before anomaly alerts can be sent. The resource group admin or subscription admin needs to assign at least one *Cognitive Services Metrics Advisor Administrator* role in the Access control tab of the Metrics Advisor resource. [Learn more about e-mail settings configuration](../faq.yml#how-to-set-up-email-settings-and-enable-alerting-by-email-). An email hook is the channel for anomaly alerts to be sent to email addresses specified in the **Email to** section. Two types of alert emails will be sent: **Data feed not available** alerts, and **Incident reports**, which contain one or multiple anomalies. |
ai-services | Enable Anomaly Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md | This section will share the practice of using an SMTP server to send email notif **Step 1.** Assign your account as the 'Cognitive Service Metrics Advisor Administrator' role -- A user with the subscription administrator or resource group administrator privileges needs to navigate to the Metrics Advisor resource that was created in the Azure portal, and select the Access control(IAM) tab.+- A user with the subscription administrator or resource group administrator privileges needs to navigate to the Metrics Advisor resource that was created in the Azure portal, and select the Access control (IAM) tab. - Select 'Add role assignments'.-- Pick a role of 'Azure AI Metrics Advisor Administrator', select your account as in the image below.+- Pick a role of 'Cognitive Services Metrics Advisor Administrator', select your account as in the image below. - Select 'Save' button, then you've been successfully added as administrator of a Metrics Advisor resource. All the above actions need to be performed by a subscription administrator or resource group administrator. It might take up to one minute for the permissions to propagate. ![Screenshot that shows how to assign admin role to a specific role](../media/tutorial/access-control.png) |
ai-services | Manage Qna Maker App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/manage-qna-maker-app.md | The following steps use the collaborator role but any of the roles can be added |--| |Owner| |Contributor|- |Azure AI QnA Maker Reader| - |Azure AI QnA Maker Editor| - |Azure AI services User| + |Cognitive Services QnA Maker Reader| + |Cognitive Services QnA Maker Editor| + |Cognitive Services User| :::image type="content" source="../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-add-role-iam.png" alt-text="QnA Maker IAM add role."::: |
ai-services | Speech Synthesis Markup Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-structure.md | Usage of the `mstts:silence` element's attributes is described in the following table. | Attribute | Description | Required or optional | | - | - | - |-| `type` | Specifies where and how to add silence. The following silence types are supported:<br/><ul><li>`Leading` – Additional silence at the beginning of the text. The value that you set is added to the natural silence before the start of text.</li><li>`Leading-exact` – Silence at the beginning of the text. The value is an absolute silence length.</li><li>`Tailing` – Additional silence at the end of text. The value that you set is added to the natural silence after the last word.</li><li>`Tailing-exact` – Silence at the end of the text. The value is an absolute silence length.</li><li>`Sentenceboundary` – Additional silence between adjacent sentences. The actual silence length for this type includes the natural silence after the last word in the previous sentence, the value you set for this type, and the natural silence before the starting word in the next sentence.</li><li>`Sentenceboundary-exact` – Silence between adjacent sentences. The value is an absolute silence length.</li><li>`Comma-exact` – Silence at the comma in half-width or full-width format. The value is an absolute silence length.</li><li>`Semicolon-exact` – Silence at the semicolon in half-width or full-width format. The value is an absolute silence length.</li><li>`Enumerationcomma-exact` – Silence at the enumeration comma in full-width format. The value is an absolute silence length.</li></ul><br/>An absolute silence type (with the `-exact` suffix) replaces any otherwise natural leading or trailing silence. Absolute silence types take precedence over the corresponding non-absolute type. 
For example, if you set both `Leading` and `Leading-exact` types, the `Leading-exact` type will take effect.| Required | +| `type` | Specifies where and how to add silence. The following silence types are supported:<br/><ul><li>`Leading` – Additional silence at the beginning of the text. The value that you set is added to the natural silence before the start of text.</li><li>`Leading-exact` – Silence at the beginning of the text. The value is an absolute silence length.</li><li>`Tailing` – Additional silence at the end of text. The value that you set is added to the natural silence after the last word.</li><li>`Tailing-exact` – Silence at the end of the text. The value is an absolute silence length.</li><li>`Sentenceboundary` – Additional silence between adjacent sentences. The actual silence length for this type includes the natural silence after the last word in the previous sentence, the value you set for this type, and the natural silence before the starting word in the next sentence.</li><li>`Sentenceboundary-exact` – Silence between adjacent sentences. The value is an absolute silence length.</li><li>`Comma-exact` – Silence at the comma in half-width or full-width format. The value is an absolute silence length.</li><li>`Semicolon-exact` – Silence at the semicolon in half-width or full-width format. The value is an absolute silence length.</li><li>`Enumerationcomma-exact` – Silence at the enumeration comma in full-width format. The value is an absolute silence length.</li></ul><br/>An absolute silence type (with the `-exact` suffix) replaces any otherwise natural leading or trailing silence. Absolute silence types take precedence over the corresponding non-absolute type. For example, if you set both `Leading` and `Leading-exact` types, the `Leading-exact` type will take effect. 
The [WordBoundary event](how-to-speech-synthesis.md#subscribe-to-synthesizer-events) takes precedence over punctuation-related silence settings including `Comma-exact`, `Semicolon-exact`, or `Enumerationcomma-exact`. When using both the `WordBoundary` event and punctuation-related silence settings, the punctuation-related silence settings won't take effect.| Required | | `Value` | The duration of a pause in seconds (such as `2s`) or milliseconds (such as `500ms`). Valid values range from 0 to 5000 milliseconds. If you set a value greater than the supported maximum, the service will use `5000ms`.| Required | ### mstts silence examples |
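The `Value` attribute's parsing and clamping behavior described above can be mirrored in a small helper, which is handy when generating SSML programmatically. A minimal sketch; `effective_silence_ms` is an illustrative function, not part of the Speech SDK:

```python
def effective_silence_ms(value: str) -> int:
    """Parse an mstts:silence Value such as '2s' or '500ms' and clamp it to 0-5000 ms,
    matching the documented service behavior for out-of-range values."""
    if value.endswith("ms"):
        ms = float(value[:-2])
    elif value.endswith("s"):
        ms = float(value[:-1]) * 1000
    else:
        raise ValueError(f"unrecognized duration: {value!r}")
    return int(min(max(ms, 0), 5000))

print(effective_silence_ms("2s"))      # 2000
print(effective_silence_ms("500ms"))   # 500
print(effective_silence_ms("8s"))      # 5000, clamped to the service maximum
```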
ai-services | Speech Synthesis Markup Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md | By default, all neural voices are fluent in their own language and English witho The `<lang xml:lang>` element is primarily intended for multilingual neural voices. You can adjust the speaking language for the multilingual neural voice at the sentence level and word level. The supported languages for multilingual voices are [provided in a table](#multilingual-voices-with-the-lang-element) following the `<lang>` syntax and attribute definitions. +The multilingual voices `en-US-JennyMultilingualV2Neural` and `en-US-RyanMultilingualNeural` auto-detect the language of the input text. However, you can still use the `<lang>` element to adjust the speaking language for these voices. + Usage of the `lang` element's attributes is described in the following table. | Attribute | Description | Required or optional | |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/language-support.md | -|Greek|`el`|Yes|Yes| +|Greek|`el`|No|No| |Gujarati|`gu`|No|No| |Haitian Creole|`ht`|Yes|Yes| |Hebrew|`he`|No|No| |
aks | Open Service Mesh About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md | OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the > | 1.24.0 or greater | 1.2.5 | > | Between 1.23.5 and 1.24.0 | 1.1.3 | > | Below 1.23.5 | 1.0.0 |+> +> Older versions of OSM may not be available for install or be actively supported if the corresponding AKS version has reached end of life. You can check the [AKS Kubernetes release calendar](./supported-kubernetes-versions.md#aks-kubernetes-release-calendar) for information on AKS version support windows. ## Capabilities and features |
aks | Open Service Mesh Binary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-binary.md | zone_pivot_groups: client-operating-system This article will discuss how to download the OSM client library to be used to operate and configure the OSM add-on for AKS, and how to configure the binary for your environment. > [!IMPORTANT]-> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM: -> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.5* of OSM. -> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM. -> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM. -+> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM. +> +> |Kubernetes version | OSM version installed | +> ||--| +> | 1.24.0 or greater | 1.2.5 | +> | Between 1.23.5 and 1.24.0 | 1.1.3 | +> | Below 1.23.5 | 1.0.0 | +> +> Older versions of OSM may not be available for install or be actively supported if the corresponding AKS version has reached end of life. You can check the [AKS Kubernetes release calendar](./supported-kubernetes-versions.md#aks-kubernetes-release-calendar) for information on AKS version support windows. ::: zone pivot="client-operating-system-linux" |
aks | Open Service Mesh Deploy Addon Az Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md | -> Based on the version of Kubernetes your cluster runs, the OSM add-on installs a different version of OSM: +> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM. >-> - If your cluster runs a Kubernetes version *1.24.0 or greater*, the OSM add-on installs OSM version *1.2.5*. -> - If your cluster runs a Kubernetes version *between 1.23.5 and 1.24.0*, the OSM add-on installs OSM version *1.1.3*. -> - If your cluster runs a Kubernetes version *below 1.23.5*, the OSM add-on installs OSM version *1.0.0*. +> |Kubernetes version | OSM version installed | +> ||--| +> | 1.24.0 or greater | 1.2.5 | +> | Between 1.23.5 and 1.24.0 | 1.1.3 | +> | Below 1.23.5 | 1.0.0 | +> +> Older versions of OSM may not be available for install or be actively supported if the corresponding AKS version has reached end of life. You can check the [AKS Kubernetes release calendar](./supported-kubernetes-versions.md#aks-kubernetes-release-calendar) for information on AKS version support windows. ## Prerequisites |
aks | Open Service Mesh Deploy Addon Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md | -> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM: -> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.5* of OSM. -> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM. -> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM. +> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM. +> +> |Kubernetes version | OSM version installed | +> ||--| +> | 1.24.0 or greater | 1.2.5 | +> | Between 1.23.5 and 1.24.0 | 1.1.3 | +> | Below 1.23.5 | 1.0.0 | +> +> Older versions of OSM may not be available for install or be actively supported if the corresponding AKS version has reached end of life. You can check the [AKS Kubernetes release calendar](./supported-kubernetes-versions.md#aks-kubernetes-release-calendar) for information on AKS version support windows. [Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language that uses declarative syntax to deploy Azure resources. You can use Bicep in place of creating [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to deploy your infrastructure-as-code Azure resources. |
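The Kubernetes-to-OSM version table repeated across these articles maps cleanly to a small lookup helper. A sketch under stated assumptions: it expects three-part `major.minor.patch` version strings, and it treats 1.23.5 as inclusive in the "Between 1.23.5 and 1.24.0" row, which is one reading of the table:

```python
def osm_addon_version(k8s_version: str) -> str:
    """Map an AKS Kubernetes version to the OSM add-on version per the table above."""
    major, minor, patch = (int(p) for p in k8s_version.split(".")[:3])
    if (major, minor, patch) >= (1, 24, 0):
        return "1.2.5"
    if (major, minor, patch) >= (1, 23, 5):
        return "1.1.3"
    return "1.0.0"

print(osm_addon_version("1.25.6"))   # 1.2.5
print(osm_addon_version("1.23.8"))   # 1.1.3
print(osm_addon_version("1.22.11"))  # 1.0.0
```

Note that, as the articles caution, an OSM version may no longer be installable once the corresponding AKS version reaches end of life, so a lookup like this only tells you what the add-on would install, not whether the cluster version is still supported.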
aks | Use Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md | A custom control plane managed identity enables access to the existing identity ### Update control plane identity on an existing cluster +> [!NOTE] +> Migrating the control plane identity from system-assigned to user-assigned doesn't cause any downtime for the control plane and agent pools. Meanwhile, control plane components will keep using the old system-assigned identity for several hours until the next token refresh. + * If you don't have a managed identity, create one using the [`az identity create`][az-identity-create] command. ```azurecli-interactive |
api-management | Api Management Howto Log Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md | Include a JSON snippet similar to the following in your Azure Resource Manager t For prerequisites, see [Configure API Management managed identity](#option-2-configure-api-management-managed-identity). -#### [PowerShell](#tab/PowerShell) +#### [REST API](#tab/PowerShell) Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) or a Bicep or ARM template to configure a logger to an event hub with system-assigned managed identity credentials. +```JSON +{ + "properties": { + "loggerType": "azureEventHub", + "description": "adding a new logger with system assigned managed identity", + "credentials": { + "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net/<EventHubName>", + "identityClientId":"SystemAssigned", + "name":"<EventHubName>" + } + } +} ++``` + #### [Bicep](#tab/bicep) Include a snippet similar to the following in your Bicep template. |
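The request body shown for the REST API tab can be generated rather than hand-edited when scripting logger creation. A minimal sketch; the helper function is illustrative, and only the payload shape comes from the snippet above:

```python
def event_hub_logger_payload(namespace: str, hub_name: str) -> dict:
    """Build the Logger Create-Or-Update body for system-assigned managed identity
    credentials, matching the JSON shape shown for the REST API tab."""
    return {
        "properties": {
            "loggerType": "azureEventHub",
            "description": "adding a new logger with system assigned managed identity",
            "credentials": {
                # For a system-assigned identity, the literal string "SystemAssigned"
                # is passed instead of a client ID.
                "endpointAddress": f"{namespace}.servicebus.windows.net/{hub_name}",
                "identityClientId": "SystemAssigned",
                "name": hub_name,
            },
        }
    }
```

The resulting dict can be serialized with `json.dumps` and sent as the body of the Logger Create Or Update call.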
app-service | Tutorial Multi Container App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md | To complete this tutorial, you need experience with [Docker Compose](https://doc ## Download the sample -For this tutorial, you use the compose file from [Docker](https://docs.docker.com/samples/wordpress/), but you'll modify it to include Azure Database for MySQL, persistent storage, and Redis. The configuration file can be found at [Azure Samples](https://github.com/Azure-Samples/multicontainerwordpress). For supported configuration options, see [Docker Compose options](configure-custom-container.md#docker-compose-options). +For this tutorial, you use the compose file from [Docker](https://docs.docker.com/samples/wordpress/), but you'll modify it to include Azure Database for MySQL, persistent storage, and Redis. The configuration file can be found at [Azure Samples](https://github.com/Azure-Samples/multicontainerwordpress). In the sample below, note that `depends_on` is an **unsupported option** and is ignored. For supported configuration options, see [Docker Compose options](configure-custom-container.md#docker-compose-options). [!code-yml[Main](../../azure-app-service-multi-container/docker-compose-wordpress.yml)] |
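As an illustrative fragment only (the service names and images below are placeholders, not the sample's actual contents), a Compose service defined like this would simply have its `depends_on` entry ignored when deployed to App Service:

```yaml
# Illustrative Compose fragment, not the full sample.
services:
  wordpress:
    image: wordpress:latest
    depends_on:
      - db   # unsupported on App Service multi-container apps; this option is ignored
    ports:
      - "8080:80"
  db:
    image: mysql:5.7
```

Because the option is ignored rather than rejected, the app still deploys, but you can't rely on `depends_on` for startup ordering between containers.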
application-gateway | Alb Controller Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-release-notes.md | Instructions for new or existing deployments of ALB Controller are found in the - [New deployment of ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#for-new-deployments) - [Upgrade existing ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#for-existing-deployments) -## Release history +## Latest Release (Recommended) +July 24, 2023 - 0.4.023961 - Improved Ingress support +## Release history July 24, 2023 - 0.4.023921 - Initial release of ALB Controller * Minimum supported Kubernetes version: v1.25 |
application-gateway | Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/diagnostics.md | Activity logging is automatically enabled for every Resource Manager resource. Y $log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup allLogs -RetentionPolicyDay 30 -RetentionPolicyEnabled $true New-AzDiagnosticSetting -Name 'AppGWForContainersLogs' -ResourceId "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/acctest5097/providers/Microsoft.ServiceNetworking/trafficControllers/myagfc" -StorageAccountId $storageAccount.Id -Log $log -Metric $metric ```+++ > [!Note] > After initially enabling diagnostic logs, it may take up to one hour before logs are available at your selected destination. |
application-gateway | Quickstart Deploy Application Gateway For Containers Alb Controller | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md | You need to complete the following tasks prior to deploying Application Gateway ```azurecli-interactive az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \- --version 0.4.023921 \ + --version 0.4.023961 \ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ``` You need to complete the following tasks prior to deploying Application Gateway ```azurecli-interactive az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \- --version 0.4.023921 \ + --version 0.4.023961 \ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ``` |
automanage | Virtual Machines Custom Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-custom-profile.md | The following ARM template will create an Automanage custom profile. Details on "location": "[parameters('location')]", "properties": { "configuration": {- "Antimalware/Enable": "true", - "Antimalware/EnableRealTimeProtection": "true", - "Antimalware/RunScheduledScan": "true", + "Antimalware/Enable": true, + "Antimalware/EnableRealTimeProtection": true, + "Antimalware/RunScheduledScan": true, "Antimalware/ScanType": "Quick", "Antimalware/ScanDay": "7", "Antimalware/ScanTimeInMinutes": "120", "AzureSecurityBaseline/Enable": true, "AzureSecurityBaseline/AssignmentType": "[parameters('azureSecurityBaselineAssignmentType')]",- "AzureSecurityCenter/Enable": true, - "Backup/Enable": "true", + "Backup/Enable": true, "Backup/PolicyName": "dailyBackupPolicy", "Backup/TimeZone": "UTC", "Backup/InstantRpRetentionRangeInDays": "2", The following ARM template will create an Automanage custom profile. Details on "LogAnalytics/Workspace": "[parameters('logAnalyticsWorkspace')]", "UpdateManagement/Enable": true, "VMInsights/Enable": true,+ "WindowsAdminCenter/Enable": true, + "GuestConfiguration/Enable": true, + "DefenderForCloud/Enable": true, "Tags/ResourceGroup": { "foo": "rg" }, |
azure-arc | Monitor Gitops Flux 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md | + + Title: Monitor GitOps (Flux v2) status and activity Last updated : 07/21/2023++description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2. +++# Monitor GitOps (Flux v2) status and activity ++We provide dashboards to help you monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2 in your Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. These JSON dashboards can be imported to Grafana to help you view and analyze your data in real time. ++## Prerequisites ++To import and use these dashboards, you need: ++- One or more existing Arc-enabled Kubernetes clusters or AKS clusters. +- The [microsoft.flux extension](extensions-release.md#flux-gitops) installed on the clusters. +- At least one [Flux configuration](tutorial-use-gitops-flux2.md) created on the clusters. ++## Monitor deployment and compliance status ++Follow these steps to import dashboards that let you monitor Flux extension deployment and status across clusters, and the compliance status of Flux configuration on those clusters. ++> [!NOTE] +> These steps describe the process for importing the dashboard to [Azure Managed Grafana](/azure/managed-grafana/overview). You can also [import this dashboard to any Grafana instance](https://grafana.com/docs/grafana/latest/dashboards/manage-dashboards/#import-a-dashboard). With this option, a service principal must be used; managed identity is not supported for data connection outside of Azure Managed Grafana. ++1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). This connection lets the dashboard access Azure Resource Graph. 
+1. [Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance. +1. Ensure that the user account that will access the dashboard has the **Reader** role on the subscriptions and/or resource groups where the clusters are located. ++ If you're using a managed identity, follow these steps to enable this access: ++ 1. In the Azure portal, navigate to the subscription that you want to add. + 1. Select **Access control (IAM)**. + 1. Select **Add role assignment**. + 1. Select the **Reader** role, then select **Next**. + 1. On the **Members** tab, select **Managed identity**, then choose **Select members**. + 1. From the **Managed identity** list, select the subscription where you created your Azure Managed Grafana Instance. Then select **Azure Managed Grafana** and the name of your Azure Managed Grafana instance. + 1. Select **Review + Assign**. ++ If you're using a service principal, grant the **Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.) ++1. Download the [GitOps Flux - Application Deployments Dashboard](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/GitOps%20Flux%20-%20Application%20Deployments%20Dashboard.json). +1. Follow the steps to [import the JSON dashboard to Grafana](/azure/managed-grafana/how-to-create-dashboard#import-a-json-dashboard). ++After you have imported the dashboard, it will display information from the clusters that you're monitoring, with several panels that provide details. For more details on an item, select the link to visit the Azure portal, where you can find more information about configurations, errors and logs. 
+++The **Flux Extension Deployment Status** table lists all clusters where the Flux extension is deployed, along with current deployment status. +++The **Flux Configuration Compliance Status** table lists all Flux configurations created on the clusters, along with their compliance status. To see status and error logs for configuration objects such as Helm releases and kustomizations, select the **Non-Compliant** link from the **ComplianceState** column. +++The **Count of Flux Extension Deployments by Status** chart shows the count of clusters, based on their provisioning state. +++The **Count of Flux Configurations by Compliance Status** chart shows the count of Flux configurations, based on their compliance status with respect to the source repository. +++## Monitor resource consumption and reconciliations ++Follow these steps to import dashboards that let you monitor Flux resource consumption, reconciliations, API requests, and reconciler status. ++1. Follow the steps to [create an Azure Monitor Workspace](/azure/azure-monitor/essentials/azure-monitor-workspace-manage). +1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). +1. Enable Prometheus metrics collection on the [AKS clusters](/azure/azure-monitor/essentials/azure-monitor-workspace-manage) and/or [Arc-enabled Kubernetes clusters](/azure/azure-monitor/essentials/prometheus-metrics-from-arc-enabled-cluster) that you want to monitor. +1. Configure Azure Monitor Agent to scrape the Azure Managed Flux metrics by creating a [configmap](/azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration): ++ ```yaml + kind: ConfigMap + apiVersion: v1 + data: + schema-version: + #string.used by agent to parse config. supported versions are {v1}. Configs with other schema versions will be rejected by the agent. 
+ v1
+ config-version:
+ #string.used by customer to keep track of this config file's version in their source control/repository (max allowed 10 chars, other chars will be truncated)
+ ver1
+ default-scrape-settings-enabled: |-
+ kubelet = true
+ coredns = false
+ cadvisor = true
+ kubeproxy = false
+ apiserver = false
+ kubestate = true
+ nodeexporter = true
+ windowsexporter = false
+ windowskubeproxy = false
+ kappiebasic = true
+ prometheuscollectorhealth = false
+ # Regex for which namespaces to scrape through pod annotation based scraping.
+ # This is none by default. Use '.*' to scrape all namespaces of annotated pods.
+ pod-annotation-based-scraping: |-
+ podannotationnamespaceregex = "flux-system"
+ default-targets-scrape-interval-settings: |-
+ kubelet = "30s"
+ coredns = "30s"
+ cadvisor = "30s"
+ kubeproxy = "30s"
+ apiserver = "30s"
+ kubestate = "30s"
+ nodeexporter = "30s"
+ windowsexporter = "30s"
+ windowskubeproxy = "30s"
+ kappiebasic = "30s"
+ prometheuscollectorhealth = "30s"
+ podannotations = "30s"
+ metadata:
+ name: ama-metrics-settings-configmap
+ namespace: kube-system
+ ```
+ 
+1. Download the [Flux Control Plane](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/Flux%20Control%20Plane.json) and [Flux Cluster Stats](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/Flux%20Cluster%20Stats.json) dashboards.
+1. Follow the steps to [import these JSON dashboards to Grafana](/azure/managed-grafana/how-to-create-dashboard#import-a-json-dashboard).
++After you have imported the dashboards, they'll display information from the clusters that you're monitoring.
++The **Flux Control Plane** dashboard shows details about status, resource consumption, reconciliations at the cluster level, and Kubernetes API requests.
+++The **Flux Cluster Stats** dashboard shows details about the number of reconcilers, along with the status and execution duration of each reconciler. 
+++## Filter dashboard data ++You can filter data in these dashboards to change the information shown. For example, you can show data for only certain subscriptions or resource groups, or limit data to a particular cluster. To do so, select the filter option from any column header. ++For example, in the **Flux Configuration Compliance Status** table on the **Application Deployments** dashboard, you can select a specific commit from the **SourceLastSyncCommit** column. By doing so, you can track the status of a configuration deployment for all of the clusters affected by that commit. ++In the **Application Deployments** dashboard, some fields in the **Flux Extension Deployment Status** and **Flux Configuration Compliance Status** panels are hidden by default (such as **SubscriptionID**, **ResourceGroupName**, and **ClusterType**). To show hidden fields, select the panel and then select **Edit**. On the **Overrides** tab, find the field you want to show, then unselect the **Hide in table** option. ++## Next steps ++- Review our tutorial on [using GitOps with Flux v2 to manage configuration and application deployment](tutorial-use-gitops-flux2.md). +- Learn about [Azure Monitor Container Insights](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json). |
azure-functions | Functions Bindings Service Bus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md | This section describes the configuration settings available for this binding, wh "maxConcurrentCalls": 16, "maxConcurrentSessions": 8, "maxMessageBatchSize": 1000,+ "minMessageBatchSize": 1, + "maxBatchWaitTime": "00:00:30", "sessionIdleTimeout": "00:01:00", "enableCrossEntityTransactions": false } The `clientRetryOptions` settings only apply to interactions with the Service Bu |**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that should be initiated per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting is used only when the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) is set to `false`. This setting only applies for functions that receive a single message at a time.| |**maxConcurrentSessions**|`8`|The maximum number of sessions that can be handled concurrently per scaled instance. This setting is used only when the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) is set to `true`. This setting only applies for functions that receive a single message at a time.| |**maxMessageBatchSize**|`1000`|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|+|**minMessageBatchSize**<sup>1</sup>|`1`|The minimum number of messages desired in a batch. The minimum applies only when the function is receiving multiple messages and must be less than `maxMessageBatchSize`. <br/> The minimum size isn't strictly guaranteed. 
A partial batch is dispatched when a full batch can't be prepared before the `maxBatchWaitTime` has elapsed.| +|**maxBatchWaitTime**<sup>1</sup>|`00:00:30`|The maximum interval that the trigger should wait to fill a batch before invoking the function. The wait time is only considered when `minMessageBatchSize` is larger than 1 and is ignored otherwise. If fewer than `minMessageBatchSize` messages were available before the wait time elapses, the function is invoked with a partial batch. The longest allowed wait time is 50% of the entity message lock duration, meaning the maximum allowed is 2 minutes and 30 seconds. Otherwise, you may get lock exceptions. <br/><br/>**NOTE:** This interval isn't a strict guarantee of the exact time at which the function is invoked. There is a small margin of error due to timer precision.| |**sessionIdleTimeout**|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the session will be closed and the function will attempt to process another session.| |**enableCrossEntityTransactions**|`false`|Whether or not to enable transactions that span multiple entities on a Service Bus namespace.| +<sup>1</sup> Using `minMessageBatchSize` and `maxBatchWaitTime` requires [v5.10.0](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus/5.10.0) of the `Microsoft.Azure.WebJobs.Extensions.ServiceBus` package, or a later version. + # [Functions 2.x+](#tab/functionsv2) ```json The `clientRetryOptions` settings only apply to interactions with the Service Bu } } ```- When you set the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) to `true`, the `sessionHandlerOptions` is honored. When you set the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) to `false`, the `messageHandlerOptions` is honored. |Property |Default | Description | |
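The interaction between `minMessageBatchSize` and `maxBatchWaitTime` described above can be sketched as a dispatch decision: invoke the function as soon as the minimum batch size is reached, or invoke it with a partial batch once the wait time elapses. This is an illustrative sketch of the documented behavior, not the extension's actual implementation:

```javascript
// Sketch of the minMessageBatchSize / maxBatchWaitTime interaction.
// Dispatch when the minimum is reached, or when the maximum wait time
// has elapsed with at least one message (a partial batch).
function shouldDispatch(messageCount, waitedSeconds, minBatchSize, maxWaitSeconds) {
  if (messageCount >= minBatchSize) return true;                        // minimum reached
  if (messageCount > 0 && waitedSeconds >= maxWaitSeconds) return true; // partial batch
  return false;                                                         // keep waiting
}

console.log(shouldDispatch(10, 5, 10, 30)); // true: minimum reached
console.log(shouldDispatch(3, 30, 10, 30)); // true: partial batch after wait
console.log(shouldDispatch(3, 5, 10, 30));  // false: keep waiting
```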
azure-maps | Routing Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md | This article provides coverage information for Azure Maps routing. Upon a search ## Routing information supported -In the [Azure Maps routing coverage tables](#azure-maps-routing-coverage-tables), the following information is available. +In the [Azure Maps routing coverage tables], the following information is available. ### Calculate Route -The Calculate Route service calculates a route between an origin and a destination, passing through waypoints if they're specified. For more information, see [Get Route Directions](/rest/api/maps/route/get-route-directions) in the REST API documentation. +The Calculate Route service calculates a route between an origin and a destination, passing through waypoints if they're specified. For more information, see [Get Route Directions] in the REST API documentation. ### Calculate Reachable Range -The Calculate Reachable Range service calculates a set of locations that can be reached from the origin point. For more information, see [Get Route Range](/rest/api/maps/route/get-route-range) in the REST API documentation. +The Calculate Reachable Range service calculates a set of locations that can be reached from the origin point. For more information, see [Get Route Range] in the REST API documentation. ### Matrix Routing -The Matrix Routing service calculates travel time and distance between all possible pairs in a list of origins and destinations. It does not provide any detailed information about the routes. You can get one-to-many, many-to-one, or many-to-many route options simply by varying the number of origins and/or destinations. For more information, see [Matrix Routing service](/rest/api/maps/route/post-route-matrix) in the REST API documentation. +The Matrix Routing service calculates travel time and distance between all possible pairs in a list of origins and destinations. 
It doesn't provide any detailed information about the routes. You can get one-to-many, many-to-one, or many-to-many route options simply by varying the number of origins and/or destinations. For more information, see [Matrix Routing service] in the REST API documentation. ### Real-time Traffic -Delivers real-time information about traffic jams, road closures, and a detailed view of the current speed and travel times across the entire road network. For more information, see [Traffic](/rest/api/maps/traffic) in the REST API documentation. +Delivers real-time information about traffic jams, road closures, and a detailed view of the current speed and travel times across the entire road network. For more information, see [Traffic service] in the REST API documentation. ### Truck routes -The Azure Maps Truck Routing API provides travel routes which take truck attributes into consideration. Truck attributes include things such as width, height, weight, turning radius and type of cargo. This is important as not all trucks can travel the same routes as other vehicles. Here are some examples: +The Azure Maps Truck Routing API provides travel routes that take truck attributes into consideration. Truck attributes include things such as width, height, weight, turning radius and type of cargo. This is important as not all trucks can travel the same routes as other vehicles. Here are some examples: - Bridges have heights and weight limits. - Tunnels often have restrictions on flammable or hazardous materials. The Azure Maps Truck Routing API provides travel routes which take truck attribu - Highways often have a separate speed limit for trucks. - Certain trucks may want to avoid roads that have steep gradients. -Azure Maps supports truck routing in the countries/regions indicated in the tables below. +Azure Maps supports truck routing in the countries/regions indicated in the following tables. <! ### Legend The following tables provide coverage information for Azure Maps routing. 
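As a hedged illustration of the one-to-many and many-to-many point above, a Matrix Routing request body lists origins and destinations as GeoJSON `MultiPoint` geometries; two origins and two destinations produce a 2x2 result matrix. The coordinates below are placeholders, and the exact schema should be confirmed against the Matrix Routing reference:

```json
{
  "origins": {
    "type": "MultiPoint",
    "coordinates": [[4.85106, 52.36006], [4.85056, 52.36187]]
  },
  "destinations": {
    "type": "MultiPoint",
    "coordinates": [[4.85003, 52.36241], [13.42937, 52.50931]]
  }
}
```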
## Next steps -For more information about Azure Maps routing, see the [Routing](/rest/api/maps/route) reference pages. +For more information about Azure Maps routing, see the [Routing service] documentation. For more coverage tables, see: -- Check out coverage for [**Geocoding**](geocoding-coverage.md).-- Check out coverage for [**Traffic**](traffic-coverage.md). -- Check out coverage for [**Render**](render-coverage.md).+- Check out coverage for [Geocoding]. +- Check out coverage for [Traffic]. +- Check out coverage for [Render]. ++[Azure Maps routing coverage tables]: #azure-maps-routing-coverage-tables +[Geocoding]: geocoding-coverage.md +[Get Route Directions]: /rest/api/maps/route/get-route-directions +[Get Route Range]: /rest/api/maps/route/get-route-range +[Matrix Routing service]: /rest/api/maps/route/post-route-matrix +[Render]: render-coverage.md +[Routing service]: /rest/api/maps/route +[Traffic service]: /rest/api/maps/traffic +[Traffic]: traffic-coverage.md |
azure-maps | Set Drawing Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md | -The Azure Maps Web SDK provides a [drawing tools module]. This module makes it easy to draw and edit shapes on the map using an input device such as a mouse or touch screen. The core class of this module is the [drawing manager]. The drawing manager provides all the capabilities needed to draw and edit shapes on the map. It can be used directly, and it's integrated with a custom toolbar UI. You can also use the built-in [drawing toolbar] class. +The Azure Maps Web SDK provides a [drawing tools module]. This module makes it easy to draw and edit shapes on the map using an input device such as a mouse or touch screen. The core class of this module is the [drawing manager]. The drawing manager provides all the capabilities needed to draw and edit shapes on the map. It can be used directly, and it's integrated with a custom toolbar UI. You can also use the built-in [DrawingToolbar class]. 
## Loading the drawing tools module in a webpage Learn more about the classes and methods used in this article: > [Drawing manager] > [!div class="nextstepaction"]-> [drawing toolbar] +> [DrawingToolbar class] [Add a drawing toolbar]: map-add-drawing-toolbar.md [azure-maps-drawing-tools]: https://www.npmjs.com/package/azure-maps-drawing-tools [Drawing manager options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Drawing%20manager%20options/Drawing%20manager%20options.html [Drawing manager options]: https://samples.azuremaps.com/drawing-tools-module/drawing-manager-options [drawing manager]: /javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager-[drawing toolbar]: /javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar +[DrawingToolbar class]: /javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar [drawing tools module]: https://www.npmjs.com/package/azure-maps-drawing-tools [Get shape data]: map-get-shape-data.md [How to use the Azure Maps map control npm package]: how-to-use-npm-package.md |
azure-maps | Supported Browsers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md | -The Azure Maps Web SDK provides a helper function called [atlas.isSupported](/javascript/api/azure-maps-control/atlas#issupported-boolean-). This function detects whether a web browser has the minimum set of WebGL features required to support loading and rendering the map control. Here's an example of how to use the function: +The Azure Maps Web SDK provides a helper function called [atlas.isSupported]. This function detects whether a web browser has the minimum set of WebGL features required to support loading and rendering the map control. Here's an example of how to use the function: ```JavaScript if (!atlas.isSupported()) { The Azure Maps Web SDK supports the following desktop browsers: - Mozilla Firefox (current and previous version) - Apple Safari (macOS X) (current and previous version) -See also [Target legacy browsers](#Target-Legacy-Browsers) later in this article. +See also [Target legacy browsers] later in this article. ## Mobile The Azure Maps Web SDK supports the following mobile browsers: - Current version of Chrome for iOS > [!TIP]-> If you're embedding a map inside a mobile application by using a WebView control, you might prefer to use the [npm package of the Azure Maps Web SDK](https://www.npmjs.com/package/azure-maps-control) instead of referencing the version of the SDK that's hosted on Azure Content Delivery Network. This approach reduces loading time because the SDK is already be on the user's device and doesn't need to be downloaded at run time. +> If you're embedding a map inside a mobile application by using a WebView control, you might prefer to use the [npm package of the Azure Maps Web SDK] instead of referencing the version of the SDK that's hosted on Azure Content Delivery Network. 
This approach reduces loading time because the SDK is already on the user's device and doesn't need to be downloaded at run time. ## Node.js The following Web SDK modules are also supported in Node.js: -- Services module ([documentation](how-to-use-services-module.md) | [npm module](https://www.npmjs.com/package/azure-maps-rest))+- Services module ([documentation] | [npm module]) ## <a name="Target-Legacy-Browsers"></a>Target legacy browsers -You might want to target older browsers that don't support WebGL or that have only limited support for it. In such cases, you can use Azure Maps services together with an open-source map control like [Leaflet](https://leafletjs.com/). +You might want to target older browsers that don't support WebGL or that have only limited support for it. In such cases, you can use Azure Maps services together with an open-source map control like [Leaflet]. The [Render Azure Maps in Leaflet] Azure Maps sample shows how to render Azure Maps Raster Tiles in the Leaflet JS map control. This sample uses the open source [Azure Maps Leaflet plugin]. For the source code for this sample, see [Render Azure Maps in Leaflet sample source code]. 
For a list of third-party map control plug-ins, see [Azure Maps community - Open Learn more about the Azure Maps Web SDK: > [!div class="nextstepaction"]-> [Map control](how-to-use-map-control.md) +> [Map control] > [!div class="nextstepaction"]-> [Services module](how-to-use-services-module.md) +> [Services module] -[Render Azure Maps in Leaflet]: https://samples.azuremaps.com/third-party-map-controls/render-azure-maps-in-leaflet -[Render Azure Maps in Leaflet sample source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Third%20Party%20Map%20Controls/Render%20Azure%20Maps%20in%20Leaflet/Render%20Azure%20Maps%20in%20Leaflet.html +[atlas.isSupported]: /javascript/api/azure-maps-control/atlas#issupported-boolean- +[Azure Maps community - Open-source projects]: open-source-projects.md#third-party-map-control-plugins [Azure Maps Leaflet plugin]: https://github.com/azure-samples/azure-maps-leaflet [Azure Maps Samples]: https://samples.azuremaps.com/?search=leaflet-[Azure Maps community - Open-source projects]: open-source-projects.md#third-party-map-control-plugins +[documentation]: how-to-use-services-module.md +[Leaflet]: https://leafletjs.com +[Map control]: how-to-use-map-control.md +[npm module]: https://www.npmjs.com/package/azure-maps-rest +[npm package of the Azure Maps Web SDK]: https://www.npmjs.com/package/azure-maps-control +[Render Azure Maps in Leaflet sample source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Third%20Party%20Map%20Controls/Render%20Azure%20Maps%20in%20Leaflet/Render%20Azure%20Maps%20in%20Leaflet.html +[Render Azure Maps in Leaflet]: https://samples.azuremaps.com/third-party-map-controls/render-azure-maps-in-leaflet +[Services module]: how-to-use-services-module.md +[Target legacy browsers]: #Target-Legacy-Browsers |
azure-maps | Supported Languages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md | Title: Localization support with Microsoft Azure Maps -description: See which regions Azure Maps supports with services such as maps, search, routing, weather, and traffic incidents. Learn how to set up the View parameter. + Title: Localization support in Microsoft Azure Maps +description: Lists the regions Azure Maps supports with services such as maps, search, routing, weather, and traffic incidents, and shows how to set up the View parameter. Last updated 01/05/2022 Azure Maps have been localized in variety languages across its services. The fol | zh-HanT-TW | Chinese (Traditional, Taiwan) | Γ£ô | Γ£ô | Γ£ô | | Γ£ô | <sup>1</sup> Neutral Ground Truth (Local) - Official languages for all regions in local scripts if available<br>-<sup>2</sup> Neutral Ground Truth (Latin) - Latin exonyms will be used if available +<sup>2</sup> Neutral Ground Truth (Latin) - Latin exonyms are used if available ## Azure Maps supported views -> [!NOTE] -> On August 1, 2019, Azure Maps was released in the following countries/regions: -> -> * Argentina -> * India -> * Morocco -> * Pakistan -> -> After August 1, 2019, the **View** parameter will define the returned map content for the new countries/regions listed above. Azure Maps **View** parameter (also referred to as "user region parameter") is a two letter ISO-3166 Country Code that will show the correct maps for that country/region specifying which set of geopolitically disputed content is returned via Azure Maps services, including borders and labels displayed on the map. 
- Make sure you set up the **View** parameter as required for the REST APIs and the SDKs, which your services are using.- ++The Azure Maps **View** parameter (also referred to as the "user region parameter") is a two-letter ISO-3166 country code. It specifies which set of geopolitically disputed content, including the borders and labels displayed on the map, is returned via Azure Maps services, so that the correct maps are shown for that country/region. + ### REST APIs Ensure that you have set up the View parameter as required. View parameter specifies which set of geopolitically disputed content is returned via Azure Maps services. Ensure that you have set up the **View** parameter as required, and you have the * Azure Maps Web SDK * Azure Maps Android SDK -By default, the View parameter is set to **Unified**, even if you haven't defined it in the request. Determine the location of your users. Then, set the **View** parameter correctly for that location. Alternatively, you can set 'View=Auto', which will return the map data based on the IP address of the request. The **View** parameter in Azure Maps must be used in compliance with applicable laws, including those laws about mapping of the country/region where maps, images, and other data and third-party content that you're authorized to access via Azure Maps is made available. +By default, the View parameter is set to **Unified**, even if you haven't defined it in the request. Determine the location of your users. Then, set the **View** parameter correctly for that location. Alternatively, you can set 'View=Auto', which returns the map data based on the IP address of the request. The **View** parameter in Azure Maps must be used in compliance with applicable laws, including those laws about mapping of the country/region where maps, images, and other data and third-party content that you're authorized to access via Azure Maps is made available. The following table provides supported views. |
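As a rough illustration of how the **View** parameter might be attached to a REST request, here is a minimal Python sketch. The helper name `build_search_url` and the Search Address endpoint shown are assumptions for the example, and the placeholder key must be replaced with a real subscription key:

```python
from urllib.parse import urlencode

def build_search_url(query, view="Auto",
                     subscription_key="{Your-Azure-Maps-Subscription-key}"):
    # `view` is either a two-letter ISO-3166 country code (for example "US")
    # or "Auto", which derives the region from the request's IP address.
    params = {
        "api-version": "1.0",
        "query": query,
        "view": view,
        "subscription-key": subscription_key,
    }
    return "https://atlas.microsoft.com/search/address/json?" + urlencode(params)

url = build_search_url("1 Microsoft Way, Redmond, WA", view="US")
```

The same `view` query parameter can be passed to the other REST services listed above; verify the exact parameter list against each service's reference page.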
azure-maps | Supported Map Styles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md | -Azure Maps supports several different built-in map styles as described below. +Azure Maps supports several different built-in map styles as described in this article. >[!important]->The procedure in this section requires an Azure Maps account in Gen 1 or Gen 2 pricing tier. For more information on pricing tiers, see [Choose the right pricing tier in Azure Maps](choose-pricing-tier.md). +>The procedure in this section requires an Azure Maps account in the Gen 1 or Gen 2 pricing tier. For more information on pricing tiers, see [Choose the right pricing tier in Azure Maps]. ## road A **road** map is a standard map that displays roads. It also displays natural a **Applicable APIs:** -* [Map image](/rest/api/maps/render/getmapimage) -* [Map tile](/rest/api/maps/render/getmaptile) +* [Map image] +* [Map tile] * Web SDK map control * Android map control * Power BI visual ## blank and blank_accessible -The **blank** and **blank_accessible** map styles provide a blank canvas for visualizing data. The **blank_accessible** style will continue to provide screen reader updates with map's location details, even though the base map isn't displayed. +The **blank** and **blank_accessible** map styles provide a blank canvas for visualizing data. The **blank_accessible** style continues to provide screen reader updates with the map's location details, even though the base map isn't displayed. > [!NOTE] > In the Web SDK, you can change the background color of the map by setting the CSS `background-color` style of the map DIV element. The **satellite** style is a combination of satellite and aerial imagery. 
**Applicable APIs:** -* [Satellite tile](/rest/api/maps/render/getmapimagerytilepreview) +* [Satellite tile] * Web SDK map control * Android map control * Power BI visual This map style is a hybrid of roads and labels overlaid on top of satellite and **Applicable APIs:** -* [Map image](/rest/api/maps/render/getmapimage) -* [Map tile](/rest/api/maps/render/getmaptile) +* [Map image] +* [Map tile] * Web SDK map control * Android map control * Power BI visual This map style is a hybrid of roads and labels overlaid on top of satellite and **Applicable APIs:** -* [Map tile](/rest/api/maps/render/getmaptile) +* [Map tile] * Web SDK map control * Android map control * Power BI visual The interactive Azure Maps map controls use vector tiles in the map styles to po | Map style | Color contrast | Screen reader | Notes | ||-||-|-| `blank` | N/A | No | A blank canvas useful for developers who want to use their own tiles as the base map, or want to view their data without any background. The screen reader will not rely on the vector tiles for descriptions. | -| `blank_accessible` | N/A | Yes | Under the hood this map style continues to load the vector tiles used to render the map, but makes that data transparent. This way the data will still be loaded, and can be used to power the screen reader. | -| `grayscale_dark` | Partial | Yes | This map style is primarily designed for business intelligence scenarios but useful for overlaying colorful layers such as weather radar imagery. | +| `blank` | N/A | No | A blank canvas useful for developers who want to use their own tiles as the base map, or want to view their data without any background. The screen reader doesn't rely on the vector tiles for descriptions. | +| `blank_accessible` | N/A | Yes | This map style continues to load the vector tiles used to render the map, but makes that data transparent. This way the data still loads, and can be used to power the screen reader. 
| +| `grayscale_dark` | Partial | Yes | Primarily designed for business intelligence scenarios. Also useful for overlaying colorful layers such as weather radar imagery. | | `grayscale_light` | Partial | Yes | This map style is primarily designed for business intelligence scenarios. |-| `high_contrast_dark` | Yes | Yes | Fully accessible map style for users in high contrast mode with a dark setting. When the map loads, high contrast settings will automatically be detected. | -| `high_contrast_light` | Yes | Yes | Fully accessible map style for users in high contrast mode with a light setting. When the map loads, high contrast settings will automatically be detected. | +| `high_contrast_dark` | Yes | Yes | Fully accessible map style for users in high contrast mode with a dark setting. When the map loads, high contrast settings are automatically detected. | +| `high_contrast_light` | Yes | Yes | Fully accessible map style for users in high contrast mode with a light setting. When the map loads, high contrast settings are automatically detected. | | `night` | Partial | Yes | This style is designed for when the user is in low light conditions and you don't want to overwhelm their senses with a bright map. |-| `road` | Partial | Yes | This is the main colorful road map style in Azure Maps. Due to the number of different colors and possible overlapping color combinations, it's nearly impossible to make it 100% accessible. That said, this map style goes through regular accessibility testing and is improved as needed to make labels clearer to read. | -| `road_shaded_relief` | Partial | Yes | This is nearly the same style the main road map style, but has an added tile layer in the background that adds shaded relief of mountains and land cover coloring when zoomed out at higher levels. | +| `road` | Partial | Yes | The main colorful road map style in Azure Maps. 
Due to the number of different colors and possible overlapping color combinations, it's nearly impossible to make it 100% accessible. That said, this map style goes through regular accessibility testing and is improved as needed to make labels clearer to read. | +| `road_shaded_relief` | Partial | Yes | Similar to the main road map style, but has an added tile layer in the background that adds shaded relief of mountains and land cover coloring when zoomed out. | | `satellite` | N/A | Yes | Purely satellite and aerial imagery, with no labels or road lines. The vector tiles are loaded behind the scenes to power the screen reader and to make for a smoother transition when switching to `satellite_with_roads`. |-| `satellite_with_roads` | No | Yes | Satellite and aerial imagery, with labels and road lines overlaid. On a global scale, there is an unlimited number of color combinations that may occur between the overlaid data and the imagery. A focus on making labels readable in most common scenarios, however, in some places the color contrast with the background imagery may make labels difficult to read. | +| `satellite_with_roads` | No | Yes | Satellite and aerial imagery, with labels and road lines overlaid. On a global scale, there's an unlimited number of color combinations that may occur between the overlaid data and the imagery. The focus is on making labels readable in the most common scenarios; however, in some places the color contrast with the background imagery may make labels difficult to read. | ## Next steps Learn about how to set a map style in Azure Maps: > [!div class="nextstepaction"]-> [Choose a map style](./choose-map-style.md) +> [Choose a map style] ++[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md +[Map image]: /rest/api/maps/render/getmapimage +[Map tile]: /rest/api/maps/render/getmaptile +[Satellite tile]: /rest/api/maps/render/getmapimagerytilepreview +[Choose a map style]: choose-map-style.md |
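The accessibility table above can be expressed as a small lookup in application code, for example to filter the styles that support a screen reader. This is an illustrative Python sketch only, not part of any Azure Maps SDK:

```python
# (color_contrast, screen_reader_support) per built-in style,
# summarized from the accessibility table above.
STYLE_SUPPORT = {
    "blank":                ("N/A",     False),
    "blank_accessible":     ("N/A",     True),
    "grayscale_dark":       ("Partial", True),
    "grayscale_light":      ("Partial", True),
    "high_contrast_dark":   ("Yes",     True),
    "high_contrast_light":  ("Yes",     True),
    "night":                ("Partial", True),
    "road":                 ("Partial", True),
    "road_shaded_relief":   ("Partial", True),
    "satellite":            ("N/A",     True),
    "satellite_with_roads": ("No",      True),
}

def styles_with_screen_reader_support():
    # Every built-in style except `blank` powers the screen reader.
    return sorted(name for name, (_, sr) in STYLE_SUPPORT.items() if sr)

accessible = styles_with_screen_reader_support()
```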
azure-maps | Supported Search Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-search-categories.md | Title: Search Categories | Microsoft Azure Maps + Title: Search Categories + description: Learn which search categories are supported in Azure Maps. View all supported category codes and the types of points of interest that each one represents. -When doing a [category search](/rest/api/maps/search/getsearchpoicategory) for points of interest, there are over a hundred supported categories. Below is a list of the category codes for supported category names. Category codes are generated for top-level categories. All sub categories share same category code. This category list is subject to change with new data releases. +When doing a [category search] for points of interest, there are over a hundred supported categories. The following list contains the category codes for supported category names. Category codes are generated for top-level categories. All subcategories share the same category code. This category list is subject to change with new data releases. 
<br/> When doing a [category search](/rest/api/maps/search/getsearchpoicategory) for p | FUEL\_FACILITIES | fuel facilities | | GEOGRAPHIC\_FEATURE | bay, cove, pan, locale, ridge, mineral/hot springs, well, reservoir, marsh/swamp/vlei, quarry, river crossing, valley, mountain peak, reef, dune, lagoon, plain/flat, rapids, cape, plateau, oasis, harbor, cave, rocks, geographic feature, promontory(-ies), islands, headland, pier, crater lake, cliff(s), hill, desert, portage, glacier(s), gully, geyser, coral reef(s), gap, gulf, jetty, ghat, hole, crater lakes, gas field, islet, crater(s), cove(s), grassland, gravel area, fracture zone, heath, gorge(s), island, headwaters, hanging valley, hills, hot spring(s), furrow, anabranch | | GOLF\_COURSE | golf course |-| GOVERNMENT\_OFFICE | order 5 area, order 8 area, order 9 area, order 2 area, order 7 area, order 3 area, supra national, order 4 area, order 6 area, government office, diplomatic facility, united states government establishment, local government office, customs house, customs post | +| GOVERNMENT\_OFFICE | order 5 area, order 8 area, order 9 area, order 2 area, order 7 area, order 3 area, supra national, order 4 area, order 6 area, government office, diplomatic facility, United States government establishment, local government office, customs house, customs post | | HEALTH\_CARE\_SERVICE | blood bank, personal service, personal care facility, ambulance unit, health care service, leprosarium, sanatorium, hospital, medical center, clinic | | HELIPAD\_HELICOPTER\_LANDING | helipad/helicopter landing | | HOLIDAY\_RENTAL | bungalow, cottage, chalet, villa, apartment, holiday rental |-| HOSPITAL\_POLYCLINIC | special, hospital of Chinese medicine, hospital for women children, general, hospital/polyclinic | +| HOSPITAL\_POLYCLINIC | special, hospital of Chinese medicine, hospital for women, children, general, hospital/polyclinic | | HOTEL\_MOTEL | cabins lodges, bed breakfast guest houses, hotel, rest camps, motel, 
resort, hostel, hotel/motel, resthouse, hammock(s), guest house | | ICE\_SKATING\_RINK | ice skating rink | | IMPORTANT\_TOURIST\_ATTRACTION | building, observatory, arch, tunnel, statue, tower, bridge, planetarium, mausoleum/grave, monument, water hole, natural attraction, important tourist attraction, promenade, pyramids, pagoda, castle, palace, hermitage, pyramid, fort, gate, country house, dam, lighthouse, grave | When doing a [category search](/rest/api/maps/search/getsearchpoicategory) for p | WEIGH\_STATION | weigh scales, weigh station | | WELFARE\_ORGANIZATION | welfare organization | | WINERY | winery |-| ZOOS\_ARBORETA\_BOTANICAL\_GARDEN | wildlife park, aquatic zoo marine park, arboreta botanical gardens, zoo, zoos, arboreta botanical garden | +| ZOOS\_ARBORETA\_BOTANICAL\_GARDEN | wildlife park, aquatic zoo marine park, arboreta botanical gardens, zoo, zoos, arboreta botanical garden | ++[category search]: /rest/api/maps/search/getsearchpoicategory |
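To show how these category codes are used in a request, here is a minimal Python sketch that builds a Get Search POI Category URL. The helper name is hypothetical, the placeholder key must be replaced, and the full parameter list should be verified against the category search reference:

```python
from urllib.parse import urlencode

def build_poi_category_url(category_code, lat, lon,
                           subscription_key="{Your-Azure-Maps-Subscription-key}"):
    # `category_code` is one of the top-level codes listed above,
    # for example GOLF_COURSE or FUEL_FACILITIES.
    params = {
        "api-version": "1.0",
        "query": category_code,
        "lat": lat,
        "lon": lon,
        "subscription-key": subscription_key,
    }
    return "https://atlas.microsoft.com/search/poi/category/json?" + urlencode(params)

url = build_poi_category_url("GOLF_COURSE", 47.6062, -122.3321)
```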
azure-maps | Tutorial Ev Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md | -The Azure Maps REST APIs can be called from languages such as Python and R to enable geospatial data analysis and machine learning scenarios. Azure Maps offers a robust set of [routing APIs](/rest/api/maps/route) that allow users to calculate routes between several data points. The calculations are based on various conditions, such as vehicle type or reachable area. +The Azure Maps REST APIs can be called from languages such as Python and R to enable geospatial data analysis and machine learning scenarios. Azure Maps offers a robust set of [routing APIs] that allow users to calculate routes between several data points. The calculations are based on various conditions, such as vehicle type or reachable area. In this tutorial, you help a driver whose electric vehicle battery is low. The driver needs to find the closest possible charging station from the vehicle's location. In this tutorial, you will: > [!div class="checklist"]-> * Create and run a Jupyter Notebook file on [Azure Notebooks](https://notebooks.azure.com) in the cloud. +> * Create and run a Jupyter Notebook file on [Azure Notebooks] in the cloud. > * Call Azure Maps REST APIs in Python. > * Search for a reachable range based on the electric vehicle's consumption model. > * Search for electric vehicle charging stations within the reachable range, or isochrone. In this tutorial, you will: To follow along with this tutorial, you need to create an Azure Notebooks project and download and run the Jupyter Notebook file. The Jupyter Notebook file contains Python code, which implements the scenario in this tutorial. To create an Azure Notebooks project and upload the Jupyter Notebook document to it, do the following steps: -1. Go to [Azure Notebooks](https://notebooks.azure.com) and sign in. 
For more information, see [Quickstart: Sign in and set a user ID](https://notebooks.azure.com). +1. Go to [Azure Notebooks] and sign in. For more information, see [Quickstart: Sign in and set a user ID]. 1. At the top of your public profile page, select **My Projects**. ![The My Projects button](./media/tutorial-ev-routing/myproject.png) To follow along with this tutorial, you need to create an Azure Notebooks projec 1. Select **Create**. -1. After your project is created, download this [Jupyter Notebook document file](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/EVrouting.ipynb) from the [Azure Maps Jupyter Notebook repository](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook). +1. After your project is created, download this [Jupyter Notebook document file] from the [Azure Maps Jupyter Notebook repository]. 1. In the projects list on the **My Projects** page, select your project, and then select **Upload** to upload the Jupyter Notebook document file. Try to understand the functionality that's implemented in the Jupyter Notebook f To run the code in Jupyter Notebook, install packages at the project level by doing the following steps: -1. Download the [*requirements.txt*](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/requirements.txt) file from the [Azure Maps Jupyter Notebook repository](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook), and then upload it to your project. +1. Download the [*requirements.txt*] file from the [Azure Maps Jupyter Notebook repository], and then upload it to your project. 1. On the project dashboard, select **Project Settings**. 1. In the **Project Settings** pane, select the **Environment** tab, and then select **Add**. 1. 
Under **Environment Setup Steps**, do the following: from IPython.display import Image, display A package delivery company has some electric vehicles in its fleet. During the day, electric vehicles need to be recharged without having to return to the warehouse. Every time the remaining charge drops to less than an hour, you search for a set of charging stations that are within a reachable range. Essentially, you search for a charging station when the battery is low on charge. And, you get the boundary information for that range of charging stations. -Because the company prefers to use routes that require a balance of economy and speed, the requested routeType is *eco*. The following script calls the [Get Route Range API](/rest/api/maps/route/getrouterange) of the Azure Maps routing service. It uses parameters for the vehicle's consumption model. The script then parses the response to create a polygon object of the geojson format, which represents the car's maximum reachable range. +Because the company prefers to use routes that require a balance of economy and speed, the requested routeType is *eco*. The following script calls the [Get Route Range API] of the Azure Maps routing service. It uses parameters for the vehicle's consumption model. The script then parses the response to create a polygon object of the geojson format, which represents the car's maximum reachable range. To determine the boundaries for the electric vehicle's reachable range, run the script in the following cell: boundsData = { After you've determined the reachable range (isochrone) for the electric vehicle, you can search for charging stations within that range. -The following script calls the Azure Maps [Post Search Inside Geometry API](/rest/api/maps/search/postsearchinsidegeometry). It searches for charging stations for electric vehicle, within the boundaries of the car's maximum reachable range. Then, the script parses the response to an array of reachable locations. 
+The following script calls the Azure Maps [Post Search Inside Geometry API]. It searches for electric vehicle charging stations within the boundaries of the car's maximum reachable range. Then, the script parses the response to an array of reachable locations. To search for electric vehicle charging stations within the reachable range, run the following script: for loc in range(len(searchPolyResponse["results"])): ## Upload the reachable range and charging points to Azure Maps Data service -On a map, you'll want to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle. To do so, upload the boundary data and charging stations data as geojson objects to Azure Maps Data service. Use the [Data Upload API](/rest/api/maps/data-v2/upload). +It's helpful to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle on a map. To do so, upload the boundary data and charging stations data as geojson objects to Azure Maps Data service. Use the [Data Upload API]. To upload the boundary and charging point data to Azure Maps Data service, run the following two cells: poiUdid = getPoiUdid["udid"] ## Render the charging stations and reachable range on a map -After you've uploaded the data to the data service, call the Azure Maps [Get Map Image service](/rest/api/maps/render/getmapimage). This service is used to render the charging points and maximum reachable boundary on the static map image by running the following script: +After you've uploaded the data to the data service, call the Azure Maps [Get Map Image service]. This service is used to render the charging points and maximum reachable boundary on the static map image by running the following script: ```python # Get boundaries for the bounding box. display(Image(poiRangeMap)) First, you want to determine all the potential charging stations within the reachable range. 
Then, you want to know which of them can be reached in a minimum amount of time. -The following script calls the Azure Maps [Matrix Routing API](/rest/api/maps/route/postroutematrix). It returns the specified vehicle location, the travel time, and the distance to each charging station. The script in the next cell parses the response to locate the closest reachable charging station with respect to time. +The following script calls the Azure Maps [Matrix Routing API]. It returns the specified vehicle location, the travel time, and the distance to each charging station. The script in the next cell parses the response to locate the closest reachable charging station with respect to time. To find the closest reachable charging station that can be reached in the least amount of time, run the script in the following cell: closestChargeLoc = ",".join(str(i) for i in minDistLoc) ## Calculate the route to the closest charging station -Now that you've found the closest charging station, you can call the [Get Route Directions API](/rest/api/maps/route/getroutedirections) to request the detailed route from the electric vehicle's current location to the charging station. +Now that you've found the closest charging station, you can call the [Get Route Directions API] to request the detailed route from the electric vehicle's current location to the charging station. To get the route to the charging station and to parse the response to create a geojson object that represents the route, run the script in the following cell: routeData = { ## Visualize the route -To help visualize the route, you first upload the route data as a geojson object to Azure Maps Data service . To do so, use the Azure Maps [Data Upload API](/rest/api/maps/data-v2/upload). Then, call the rendering service, [Get Map Image API](/rest/api/maps/render/getmapimage), to render the route on the map, and visualize it. 
+To help visualize the route, you first upload the route data as a geojson object to Azure Maps Data service. To do so, use the Azure Maps [Data Upload API]. Then, call the rendering service, [Get Map Image API], to render the route on the map, and visualize it. To get an image for the rendered route on the map, run the following script: In this tutorial, you learned how to call Azure Maps REST APIs directly and visu To explore the Azure Maps APIs that are used in this tutorial, see: -* [Get Route Range](/rest/api/maps/route/getrouterange) -* [Post Search Inside Geometry](/rest/api/maps/search/postsearchinsidegeometry) -* [Data Upload](/rest/api/maps/data-v2/upload) -* [Render - Get Map Image](/rest/api/maps/render/getmapimage) -* [Post Route Matrix](/rest/api/maps/route/postroutematrix) -* [Get Route Directions](/rest/api/maps/route/getroutedirections) -* [Azure Maps REST APIs](./consumption-model.md) +* [Get Route Range] +* [Post Search Inside Geometry] +* [Data Upload] +* [Render - Get Map Image] +* [Post Route Matrix] +* [Get Route Directions] +* [Azure Maps REST APIs] ## Clean up resources There are no resources that require cleanup. 
To learn more about Azure Notebooks, see > [!div class="nextstepaction"]-> [Azure Notebooks](https://notebooks.azure.com) +> [Azure Notebooks] [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account +[Azure Maps Jupyter Notebook repository]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook +[Azure Maps REST APIs]: ./consumption-model.md +[Azure Notebooks]: https://notebooks.azure.com +[Data Upload API]: /rest/api/maps/data-v2/upload +[Data Upload]: /rest/api/maps/data-v2/upload +[Get Map Image API]: /rest/api/maps/render/getmapimage +[Get Map Image service]: /rest/api/maps/render/getmapimage +[Get Route Directions API]: /rest/api/maps/route/getroutedirections +[Get Route Directions]: /rest/api/maps/route/getroutedirections +[Get Route Range API]: /rest/api/maps/route/getrouterange +[Get Route Range]: /rest/api/maps/route/getrouterange +[Jupyter Notebook document file]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/EVrouting.ipynb [manage authentication in Azure Maps]: how-to-manage-authentication.md+[Matrix Routing API]: /rest/api/maps/route/postroutematrix +[Post Route Matrix]: /rest/api/maps/route/postroutematrix +[Post Search Inside Geometry API]: /rest/api/maps/search/postsearchinsidegeometry +[Post Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry +[Quickstart: Sign in and set a user ID]: https://notebooks.azure.com +[Render - Get Map Image]: /rest/api/maps/render/getmapimage +[*requirements.txt*]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/requirements.txt +[routing APIs]: /rest/api/maps/route +[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account |
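The Get Route Range request at the heart of this tutorial can be sketched as a URL builder in Python. This is illustrative only: the helper name is hypothetical, the EV consumption parameter names are best-effort readings of the Route Range reference and should be verified there, and the placeholder key must be replaced:

```python
from urllib.parse import urlencode

def build_route_range_url(lat, lon, time_budget_sec,
                          current_charge_kwh, max_charge_kwh,
                          subscription_key="{Your-Azure-Maps-Subscription-key}"):
    # Requests the reachable range (isochrone) for an electric vehicle,
    # using the eco route type as in the tutorial scenario.
    params = {
        "api-version": "1.0",
        "query": f"{lat},{lon}",
        "routeType": "eco",
        "vehicleEngineType": "electric",
        "timeBudgetInSec": time_budget_sec,
        "currentChargeInkWh": current_charge_kwh,
        "maxChargeInkWh": max_charge_kwh,
        "subscription-key": subscription_key,
    }
    return "https://atlas.microsoft.com/route/range/json?" + urlencode(params)

url = build_route_range_url(47.64452, -122.13687, 2 * 3600, 18.0, 80.0)
```

A real request would also supply the vehicle's consumption model (for example a constant-speed consumption curve), as the tutorial's notebook does.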
azure-maps | Tutorial Geofence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md | This tutorial walks you through the basics of creating and using Azure Maps geof Consider the following scenario: -*A construction site manager must track equipment as it enters and leaves the perimeters of a construction area. Whenever a piece of equipment exits or enters these perimeters, an email notification is sent to the operations manager.* +*A construction site manager must track equipment as it enters and leaves the perimeters of a construction area. Whenever a piece of equipment exits or enters these perimeters, an email notification is sent to the Operations Manager.* -Azure Maps provides a number of services to support the tracking of equipment entering and exiting the construction area. In this tutorial, you will: +Azure Maps provides services to support the tracking of equipment entering and exiting the construction area. In this tutorial, you will: > [!div class="checklist"] > > * Create an Azure Maps account with a global region.-> * Upload [Geofencing GeoJSON data](geofence-geojson.md) that defines the construction site areas you want to monitor. You'll use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload geofences as polygon coordinates to your Azure Maps account. -> * Set up two [logic apps](../event-grid/handler-webhooks.md#logic-apps) that, when triggered, send email notifications to the construction site operations manager when equipment enters and exits the geofence area. -> * Use [Azure Event Grid](../event-grid/overview.md) to subscribe to enter and exit events for your Azure Maps geofence. You set up two webhook event subscriptions that call the HTTP endpoints defined in your two logic apps. The logic apps then send the appropriate email notifications of equipment moving beyond or entering the geofence. 
-> * Use [Search Geofence Get API](/rest/api/maps/spatial/getgeofence) to receive notifications when a piece of equipment exits and enters the geofence areas. +> * Upload [Geofencing GeoJSON data] that defines the construction site areas you want to monitor. You'll use the [Data Upload API] to upload geofences as polygon coordinates to your Azure Maps account. +> * Set up two [logic apps] that, when triggered, send email notifications to the construction site operations manager when equipment enters and exits the geofence area. +> * Use [Azure Event Grid] to subscribe to enter and exit events for your Azure Maps geofence. You set up two webhook event subscriptions that call the HTTP endpoints defined in your two logic apps. The logic apps then send the appropriate email notifications of equipment moving beyond or entering the geofence. +> * Use [Search Geofence Get API] to receive notifications when a piece of equipment exits and enters the geofence areas. ## Prerequisites -* This tutorial uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment. +* This tutorial uses the [Postman] application, but you can use a different API development environment. ++>[!IMPORTANT] +> +> * In the URL examples, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ## Create an Azure Maps account with a global region -The Geofence API async event requires the region property of your Azure Maps account be set to ***Global***. This isn't given as an option when creating an Azure Maps account in the Azure portal, however you do have several other options for creating a new Azure Maps account with the *global* region setting. This section lists the three methods that can be used to create an Azure Maps account with the region set to *global*. +The Geofence API async event requires the region property of your Azure Maps account be set to ***Global***. 
This setting isn't given as an option when creating an Azure Maps account in the Azure portal, however you do have several other options for creating a new Azure Maps account with the *global* region setting. This section lists the three methods that can be used to create an Azure Maps account with the region set to *global*. > [!NOTE] > The `location` property in both the ARM template and PowerShell `New-AzMapsAccount` command refer to the same property as the `Region` field in the Azure portal. ### Use an ARM template to create an Azure Maps account with a global region -You will need to [Create your Azure Maps account using an ARM template](how-to-create-template.md), making sure to set `location` to `global` in the `resources` section of the ARM template. +[Create your Azure Maps account using an ARM template], making sure to set `location` to `global` in the `resources` section of the ARM template. ### Use PowerShell to create an Azure Maps account with a global region New-AzMapsAccount -ResourceGroupName your-Resource-Group -Name name-of-maps-acco ### Use Azure CLI to create an Azure Maps account with a global region -The Azure CLI command [az maps account create](/cli/azure/maps/account?view=azure-cli-latest&preserve-view=true#az-maps-account-create) doesn't have a location property, but defaults to "global", making it useful for creating an Azure Maps account with a global region setting for use with the Geofence API async event. +The Azure CLI command [az maps account create] doesn't have a location property, but defaults to `global`, making it useful for creating an Azure Maps account with a global region setting for use with the Geofence API async event. ## Upload geofencing GeoJSON data -In this tutorial, you'll upload geofencing GeoJSON data that contains a `FeatureCollection`. The `FeatureCollection` contains two geofences that define polygonal areas within the construction site. The first geofence has no time expiration or restrictions. 
The second can only be queried against during business hours (9:00 AM-5:00 PM in the Pacific Time zone), and will no longer be valid after January 1, 2022. For more information on the GeoJSON format, see [Geofencing GeoJSON data](geofence-geojson.md). +This tutorial demonstrates how to upload geofencing GeoJSON data that contains a `FeatureCollection`. The `FeatureCollection` contains two geofences that define polygonal areas within the construction site. The first geofence has no time expiration or restrictions. The second can only be queried against during business hours (9:00 AM-5:00 PM in the Pacific Time zone), and will no longer be valid after January 1, 2022. For more information on the GeoJSON format, see [Geofencing GeoJSON data]. >[!TIP]->You can update your geofencing data at any time. For more information, see [Data Upload API](/rest/api/maps/data-v2/upload). +>You can update your geofencing data at any time. For more information, see [Data Upload API]. To upload the geofencing GeoJSON data: To upload the geofencing GeoJSON data: 4. Select the **POST** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): +5. Enter the following URL. The request should look like the following URL: ```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson To upload the geofencing GeoJSON data: 10. In the response window, select the **Headers** tab. -11. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the GeoJSON data upload. +11. Copy the value of the **Operation-Location** key, which is the `status URL`. The `status URL` is used to check the status of the GeoJSON data upload. 
```http https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0 To check the status of the GeoJSON data and retrieve its unique ID (`udid`): 4. Select the **GET** HTTP method. -5. Enter the `status URL` you copied in [Upload Geofencing GeoJSON data](#upload-geofencing-geojson-data). The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): +5. Enter the `status URL` you copied in [Upload Geofencing GeoJSON data]. The request should look like the following URL: ```HTTP https://us.atlas.microsoft.com/mapData/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} To retrieve content metadata: 4. Select the **GET** HTTP method. -5. Enter the `resource Location URL` you copied in [Check the GeoJSON data upload status](#check-the-geojson-data-upload-status). The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): +5. Enter the `resource Location URL` you copied in [Check the GeoJSON data upload status]. The request should look like the following URL: ```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} To retrieve content metadata: ## Create workflows in Azure Logic Apps -Next, we'll create two [logic app](../event-grid/handler-webhooks.md#logic-apps) endpoints that trigger an email notification. +Next, create two [logic app] endpoints that trigger an email notification. To create the logic apps: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Azure portal]. 2. In the upper-left corner of the Azure portal, select **Create a resource**. 
To create the logic apps: :::image type="content" source="./media/tutorial-geofence/logic-app-email.png" alt-text="Screenshot of create a logic app send email step."::: >[!TIP]- > You can retrieve GeoJSON response data, such as `geometryId` or `deviceId`, in your email notifications. You can configure Logic Apps to read the data sent by Event Grid. For information on how to configure Logic Apps to consume and pass event data into email notifications, see [Tutorial: Send email notifications about Azure IoT Hub events using Event Grid and Logic Apps](../event-grid/publish-iot-hub-events-to-logic-apps.md). + > You can retrieve GeoJSON response data, such as `geometryId` or `deviceId`, in your email notifications. You can configure Logic Apps to read the data sent by Event Grid. For information on how to configure Logic Apps to consume and pass event data into email notifications, see [Tutorial: Send email notifications about Azure IoT Hub events using Event Grid and Logic Apps]. 13. In the upper-left corner of **Logic App Designer**, select **Save**. To create the logic apps: ## Create Azure Maps events subscriptions -Azure Maps supports [three event types](../event-grid/event-schema-azure-maps.md). In this tutorial, we'll create subscriptions to the following two events: +Azure Maps supports [three event types]. This tutorial demonstrates how to create subscriptions to the following two events: * Geofence enter events * Geofence exit events -To create an geofence exit and enter event subscription: +Create geofence exit and enter event subscriptions: 1. In your Azure Maps account, select **Subscriptions**. To create an geofence exit and enter event subscription: ## Use Spatial Geofence Get API -Next, we'll use the [Spatial Geofence Get API](/rest/api/maps/spatial/getgeofence) to send email notifications to the operations manager when a piece of equipment enters or exits the geofences. 
+Next, we use the [Spatial Geofence Get API] to send email notifications to the Operations Manager when a piece of equipment enters or exits the geofences. Each piece of equipment has a `deviceId`. In this tutorial, you're tracking a single piece of equipment, with a unique ID of `device_1`. -The following diagram shows the five locations of the equipment over time, beginning at the *Start* location, which is somewhere outside the geofences. For the purposes of this tutorial, the *Start* location is undefined, because you won't query the device at that location. +The following diagram shows the five locations of the equipment over time, beginning at the *Start* location, which is somewhere outside the geofences. For the purposes of this tutorial, the *Start* location is undefined, because you don't query the device at that location. -When you query the [Spatial Geofence Get API](/rest/api/maps/spatial/getgeofence) with an equipment location that indicates initial geofence entry or exit, Event Grid calls the appropriate logic app endpoint to send an email notification to the operations manager. +When you query the [Spatial Geofence Get API] with an equipment location that indicates initial geofence entry or exit, Event Grid calls the appropriate logic app endpoint to send an email notification to the Operations Manager. Each of the following sections makes API requests by using the five different location coordinates of the equipment. Each of the following sections makes API requests by using the five different lo 4. Select the **GET** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). +5. Enter the following URL. 
The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.638237&lon=-122.1324831&searchBuffer=5&isAsync=True&mode=EnterAndExit Each of the following sections makes API requests by using the five different lo } ``` -In the preceding GeoJSON response, the negative distance from the main site geofence means that the equipment is inside the geofence. The positive distance from the subsite geofence means that the equipment is outside the subsite geofence. Because this is the first time this device has been located inside the main site geofence, the `isEventPublished` parameter is set to `true`. The operations manager receives an email notification that equipment has entered the geofence. +In the preceding GeoJSON response, the negative distance from the main site geofence means that the equipment is inside the geofence. The positive distance from the subsite geofence means that the equipment is outside the subsite geofence. Because this is the first time this device has been located inside the main site geofence, the `isEventPublished` parameter is set to `true`. The Operations Manager receives an email notification that equipment has entered the geofence. ### Location 2 (47.63800,-122.132531) In the preceding GeoJSON response, the negative distance from the main site geof 4. Select the **GET** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). +5. Enter the following URL. 
The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]).

```HTTP
https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63800&lon=-122.132531&searchBuffer=5&isAsync=True&mode=EnterAndExit

In the preceding GeoJSON response, the negative distance from the main site geof }
````

-In the preceding GeoJSON response, the equipment has remained in the main site geofence and hasn't entered the subsite geofence. As a result, the `isEventPublished` parameter is set to `false`, and the operations manager doesn't receive any email notifications.
+In the preceding GeoJSON response, the equipment has remained in the main site geofence and hasn't entered the subsite geofence. As a result, the `isEventPublished` parameter is set to `false`, and the Operations Manager doesn't receive any email notifications.

### Location 3 (47.63810783315048,-122.13336020708084)

In the preceding GeoJSON response, the equipment has remained in the main site g 4. Select the **GET** HTTP method.

-5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
+5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). 
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63810783315048&lon=-122.13336020708084&searchBuffer=5&isAsync=True&mode=EnterAndExit In the preceding GeoJSON response, the equipment has remained in the main site g } ```` -In the preceding GeoJSON response, the equipment has remained in the main site geofence, but has entered the subsite geofence. As a result, the `isEventPublished` parameter is set to `true`. The operations manager receives an email notification indicating that the equipment has entered a geofence. +In the preceding GeoJSON response, the equipment has remained in the main site geofence, but has entered the subsite geofence. As a result, the `isEventPublished` parameter is set to `true`. The Operations Manager receives an email notification indicating that the equipment has entered a geofence. >[!NOTE] >If the equipment had moved into the subsite after business hours, no event would be published and the operations manager wouldn't receive any notifications. In the preceding GeoJSON response, the equipment has remained in the main site g 4. Select the **GET** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). +5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). 
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.637988&userTime=2023-01-16&lon=-122.1338344&searchBuffer=5&isAsync=True&mode=EnterAndExit In the preceding GeoJSON response, the equipment has remained in the main site g } ```` -In the preceding GeoJSON response, the equipment has remained in the main site geofence, but has exited the subsite geofence. Notice, however, that the `userTime` value is after the `expiredTime` as defined in the geofence data. As a result, the `isEventPublished` parameter is set to `false`, and the operations manager doesn't receive an email notification. +In the preceding GeoJSON response, the equipment has remained in the main site geofence, but has exited the subsite geofence. Notice, however, that the `userTime` value is after the `expiredTime` as defined in the geofence data. As a result, the `isEventPublished` parameter is set to `false`, and the Operations Manager doesn't receive an email notification. ### Location 5 (47.63799, -122.134505) In the preceding GeoJSON response, the equipment has remained in the main site g 4. Select the **GET** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). +5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). 
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63799&lon=-122.134505&searchBuffer=5&isAsync=True&mode=EnterAndExit In the preceding GeoJSON response, the equipment has remained in the main site g } ```` -In the preceding GeoJSON response, the equipment has exited the main site geofence. As a result, the `isEventPublished` parameter is set to `true`, and the operations manager receives an email notification indicating that the equipment has exited a geofence. +In the preceding GeoJSON response, the equipment has exited the main site geofence. As a result, the `isEventPublished` parameter is set to `true`, and the Operations Manager receives an email notification indicating that the equipment has exited a geofence. -You can also [Send email notifications using Event Grid and Logic Apps](../event-grid/publish-iot-hub-events-to-logic-apps.md) and check [Supported Events Handlers in Event Grid](../event-grid/event-handlers.md) using Azure Maps. +You can also [Send email notifications using Event Grid and Logic Apps] and check [Supported Events Handlers in Event Grid] using Azure Maps. ## Clean up resources There are no resources that require cleanup. 
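The distance-sign convention that runs through the preceding responses can be summarized in a small helper. This is an illustrative sketch only: the response dict is a simplified stand-in for the API's GeoJSON, and `classify_geofences` is a hypothetical function, not part of any Azure SDK.

```python
# Illustrative sketch of the sign convention described in this tutorial:
# a negative distance means the device is inside a geofence, a positive
# distance means it is outside. The response shape below is simplified.

def classify_geofences(response):
    """Map each geometryId to 'inside' or 'outside' by distance sign."""
    return {
        g["geometryId"]: "inside" if g["distance"] < 0 else "outside"
        for g in response["geometries"]
    }

sample = {
    "geometries": [
        {"geometryId": "main-site", "distance": -999.0},
        {"geometryId": "subsite", "distance": 999.0},
    ],
    # True only the first time a device enters or exits a geofence
    "isEventPublished": True,
}

print(classify_geofences(sample))  # → {'main-site': 'inside', 'subsite': 'outside'}
```

When `isEventPublished` is `true`, Event Grid fires the matching enter or exit subscription, which is what triggers the logic-app email notification in this tutorial.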
## Next steps > [!div class="nextstepaction"]-> [Handle content types in Azure Logic Apps](../logic-apps/logic-apps-content-type.md) +> [Handle content types in Azure Logic Apps] ++[Geofencing GeoJSON data]: geofence-geojson.md +[Data Upload API]: /rest/api/maps/data-v2/upload +[logic apps]: ../event-grid/handler-webhooks.md#logic-apps +[Azure Event Grid]: ../event-grid/overview.md +[Search Geofence Get API]: /rest/api/maps/spatial/getgeofence +[Postman]: https://www.postman.com +[Create your Azure Maps account using an ARM template]: how-to-create-template.md +[az maps account create]: /cli/azure/maps/account?view=azure-cli-latest&preserve-view=true#az-maps-account-create +[Upload Geofencing GeoJSON data]: #upload-geofencing-geojson-data +[Check the GeoJSON data upload status]: #check-the-geojson-data-upload-status +[logic app]: ../event-grid/handler-webhooks.md#logic-apps +[Azure portal]: https://portal.azure.com +[Tutorial: Send email notifications about Azure IoT Hub events using Event Grid and Logic Apps]: ../event-grid/publish-iot-hub-events-to-logic-apps.md +[three event types]: ../event-grid/event-schema-azure-maps.md +[Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence +[Upload Geofencing GeoJSON data section]: #upload-geofencing-geojson-data +[Send email notifications using Event Grid and Logic Apps]: ../event-grid/publish-iot-hub-events-to-logic-apps.md +[Supported Events Handlers in Event Grid]: ../event-grid/event-handlers.md +[Handle content types in Azure Logic Apps]: ../logic-apps/logic-apps-content-type.md |
azure-monitor | Azure Monitor Agent Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-health.md | description: Experience to view agent health at scale and troubleshoot issues re Previously updated : 7/5/2023 Last updated : 7/25/2023 It includes agents deployed across virtual machines, scale sets and [Arc-enabled :::image type="content" source="media/azure-monitor-agent/azure-monitor-agent-health.png" lightbox="media/azure-monitor-agent/azure-monitor-agent-health.png" alt-text="Screenshot of the Azure Monitor Agent Health workbook. The screenshot highlights the various charts and drill-down scope provided out-of-box. It also shows additional tabs on top for more scoped investigations."::: -This will be available soon under Azure Monitor > Workbooks > Azure Monitor Essentials. Watch this space for an update shortly, or [reach out](mailto:obs-agent-pms@microsoft.com) if you have questions. +You can access this workbook on the portal with preview enabled, or by clicking [workbook link here](https://ms.portal.azure.com/#blade/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Monitor/ConfigurationId/community-Workbooks%2FAzure%20Monitor%20-%20Agents%2FAMA%20Health/Type/workbook/WorkbookTemplateName/AMA%20Health%20(Preview)). Try it out and [share your feedback](mailto:obs-agent-pms@microsoft.com) with us. |
azure-monitor | Java Spring Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md | to change the location for a file outside the classpath. } ``` -#### Setting up the configuration file --Open your configuration file (either `application.properties` or `application.yaml`) in the *resources* folder. Update the file with the following. --##### application.yaml --```yaml --Dapplicationinsights:- runtime-attach: - configuration: - classpath: - file: "applicationinsights-dev.json" -``` --##### application.properties --```properties --Dapplicationinsights.runtime-attach.configuration.classpath.file = "applicationinsights-dev.json"-``` - #### Self-diagnostic log file location By default, when enabling Application Insights Java programmatically, the `applicationinsights.log` file containing |
azure-monitor | Best Practices Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md | This article describes [Cost optimization](/azure/architecture/framework/cost/) ### Design checklist > [!div class="checklist"]-> - Use sampling to tune the amount of data collected. +> - Change to Workspace-based Application Insights. > - Use sampling to tune the amount of data collected. > - Limit the number of Ajax calls. > - Disable unneeded modules. |
azure-monitor | Container Insights Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md | -Container Insights offers the ability to collect Syslog events from Linux nodes in your [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) clusters. Customers can use Syslog for monitoring security and health events, typically by ingesting syslog into SIEM systems like [Microsoft Sentinel](https://azure.microsoft.com/products/microsoft-sentinel/#overview). +Container Insights offers the ability to collect Syslog events from Linux nodes in your [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) clusters. This includes the ability to collect logs from control plane components like kubelet. Customers can also use Syslog for monitoring security and health events, typically by ingesting syslog into a SIEM system like [Microsoft Sentinel](https://azure.microsoft.com/products/microsoft-sentinel/#overview). > [!IMPORTANT] > Syslog collection with Container Insights is a preview feature. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. 
The following table provides different examples of log queries that retrieve Sys | Query | Description | |: |: | | `Syslog` |All Syslogs |-| `Syslog | where SeverityLevel == "error"` |All Syslog records with severity of error | -| `Syslog | summarize AggregatedValue = count() by Computer` |Count of Syslog records by computer | -| `Syslog | summarize AggregatedValue = count() by Facility` |Count of Syslog records by facility | +| `Syslog | where SeverityLevel == "error"` | All Syslog records with severity of error | +| `Syslog | summarize AggregatedValue = count() by Computer` | Count of Syslog records by computer | +| `Syslog | summarize AggregatedValue = count() by Facility` | Count of Syslog records by facility | +| `Syslog | where ProcessName == "kubelet"` | All Syslog records from the kubelet process | +| `Syslog | where ProcessName == "kubelet" and SeverityLevel == "error"` | Syslog records from kubelet process with errors | ## Editing your Syslog collection settings |
azure-monitor | Code Optimizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/code-optimizations.md | With Code Optimizations, you can: - View real-time performance data and insights gathered from your production environment. - Make informed decisions about optimizing your code. +## Demo video ++<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/eu1P_vLTZO0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> + ## Requirements for using Code Optimizations Before you can use Code Optimizations on your application: |
azure-monitor | Azure Monitor Data Explorer Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md | description: Use Azure Monitor to perform cross-product queries between Azure Da Previously updated : 03/28/2022 Last updated : 07/25/2023 |
azure-monitor | Data Collection Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collection-troubleshoot.md | Title: Troubleshoot why data is no longer being collected in Azure Monitor description: Steps to take if data is no longer being collected in Log Analytics workspace in Azure Monitor. Previously updated : 03/31/2022 Last updated : 07/25/2023 # Troubleshoot why data is no longer being collected in Azure Monitor |
azure-monitor | Private Link Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md | Title: Configure your private link description: This article shows the steps to configure a private link. Previously updated : 1/5/2022 Last updated : 07/25/2023 # Configure your private link |
azure-monitor | Private Link Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-security.md | Title: Use Azure Private Link to connect networks to Azure Monitor description: Set up an Azure Monitor Private Link Scope to securely connect networks to Azure Monitor. Previously updated : 1/5/2022 Last updated : 07/25/2023 # Use Azure Private Link to connect networks to Azure Monitor |
azure-monitor | Vminsights Enable Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-policy.md | description: This article describes how you enable VM insights for multiple Azur - Previously updated : 12/13/2022+ Last updated : 07/09/2023 The initiatives apply to new machines you create and machines you modify, but no | Legacy: Enable Azure Monitor for VMs | Installs the Log Analytics agent and Dependency agent on virtual machines. | | Legacy: Enable Azure Monitor for virtual machine scale sets | Installs the Log Analytics agent and Dependency agent on virtual machine scale sets. | +## Support for custom images ++Azure Monitor Agent-based VM insights policy and initiative definitions have a `scopeToSupportedImages` parameter that's set to `true` by default to enable onboarding Dependency Agent on supported images only. Set this parameter to `false` to allow onboarding Dependency Agent on custom images. + ## Assign a VM insights policy initiative To assign a VM insights policy initiative to a subscription or management group from the Azure portal: |
azure-monitor | Vminsights Enable Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-resource-manager.md | If you aren't familiar how to deploy a Resource Manager template, see [Deploy te >The template needs to be deployed in the same resource group as the virtual machine or virtual machine scale set being enabled. ## Azure Monitor agent-Download the [Azure Monitor agent templates](https://aka.ms/vminsights/downloadAMADaVmiArmTemplates). You must first install the data collection rule and can then install agents to use that DCR. +Download the [Azure Monitor agent templates](https://github.com/Azure/AzureMonitorForVMs-ArmTemplates/releases/download/vmi_ama_ga/DeployDcr.zip). You must first install the data collection rule and can then install agents to use that DCR. ### Deploy data collection rule You only need to perform this step once. This will install the DCR that's used by each agent. The DCR will be created in the same resource group as the workspace with a name in the format "MSVMI-{WorkspaceName}". |
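The naming convention called out above can be expressed as a one-line helper. This is a sketch for illustration only: the function and the workspace name are invented, not part of any Azure tooling.

```python
# Sketch of the stated naming rule: the VM insights DCR is created in the
# workspace's resource group with the name "MSVMI-{WorkspaceName}".

def vminsights_dcr_name(workspace_name: str) -> str:
    """Return the default VM insights DCR name for a workspace."""
    return f"MSVMI-{workspace_name}"

print(vminsights_dcr_name("contoso-workspace"))  # → MSVMI-contoso-workspace
```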
azure-monitor | Vminsights Migrate Deprecated Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-migrate-deprecated-policies.md | + + Title: Migrate from deprecated VM insights policies +description: This article explains how to migrate from deprecated VM insights policies to their replacement policies. ++++ Last updated : 07/09/2023++++# Migrate from deprecated VM insights policies ++We're deprecating the VM insights DCR deployment policies and replacing them with new policies because of a race condition issue. The deprecated policies will continue to work on existing assignments, but will no longer be available for new assignments. If you're using deprecated policies, we recommend you migrate to the new policies as soon as possible. ++This article explains how to migrate from deprecated VM insights policies to their replacement policies. ++## Prerequisites ++- An existing user-assigned managed identity. ++## Deprecated VM insights policies ++These policies are deprecated and will be removed in 2026. 
We recommend you migrate to the replacement policies as soon as possible: ++- [[Preview]: Deploy a VMInsights Data Collection Rule and Data Collection Rule Association for Arc Machines in the Resource Group](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7c4214e9-ea57-487a-b38e-310ec09bc21d) +- [[Preview]: Deploy a VMInsights Data Collection Rule and Data Collection Rule Association for all the VMs in the Resource Group](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa0f27bdc-5b15-4810-b81d-7c4df9df1a37) +- [[Preview]: Deploy a VMInsights Data Collection Rule and Data Collection Rule Association for all the virtual machine scale sets in the Resource Group](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc7f3bf36-b807-4f18-82dc-f480ad713635) +++## New VM insights policies ++These policies replace the deprecated policies: ++- [Configure Linux Machines to be associated with a Data Collection Rule or a Data Collection Endpoint](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2ea82cdd-f2e8-4500-af75-67a2e084ca74) +- [Configure Windows Machines to be associated with a Data Collection Rule or a Data Collection Endpoint](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2feab1f514-22e3-42e3-9a1f-e1dc9199355c) ++## Migrate from deprecated VM insights policies to replacement policies ++1. [Download the Azure Monitor Agent-based VM insights data collection rule templates](https://github.com/Azure/AzureMonitorForVMs-ArmTemplates/releases/download/vmi_ama_ga/DeployDcr.zip). ++1. 
Deploy the VM insights data collection rule using an ARM template, as described in [Quickstart: Create and deploy ARM templates by using the Azure portal](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md#edit-and-deploy-the-template). ++ 1. Select **Build your own template in the editor** > **Load file** to upload the template you downloaded in the previous step. ++ - To collect only VM insights performance metrics, deploy the `ama-vmi-default-perf-dcr` data collection rule by uploading the **DeployDcr**>**PerfOnlyDcr**>**DeployDcrTemplate** file. + - To collect VM insights performance metrics and Service Map data, deploy the `ama-vmi-default-perfAndda-dcr` data collection rule by uploading the **DeployDcr**>**PerfAndMapDcr**>**DeployDcrTemplate** file. ++ 1. When the data collection rule deployment is complete, select **Go to resource** > **JSON View** and copy the data collection rule's **Resource ID**. ++1. Select one of the [new VM insights policies for Windows and Linux VMs](#new-vm-insights-policies). +1. Select **Assign**. +1. In the **Scope** field on the **Basics** tab, select your subscription and the resource group that contains the VMs you want to monitor. +1. In the **Data Collection Rule Resource Id** field on the **Parameters** tab, paste the resource ID of the data collection rule you created in the previous step. +1. On the **Remediation** tab: + 1. In the **Scope** field, select your subscription and the resource group that contains your user-assigned managed identity. + 2. Set **Type of managed identity** to **User Assigned Managed Identity** and select your user-assigned identity. ++## Next steps ++Learn how to: +- [View VM insights Map](vminsights-maps.md) to see application dependencies. +- [View Azure VM performance](vminsights-performance.md) to identify bottlenecks and overall utilization of your VM's performance. |
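For anyone scripting the assignment steps above, the shape of the parameter payload can be sketched as plain data. This is a hedged illustration: the `dcrResourceId` parameter name and every resource ID below are assumptions made for the sketch, not values verified against the policy definitions.

```python
# Illustrative only -- not an Azure SDK call. Assembles the parameters
# block a policy assignment would carry, pointing at the VM insights DCR.
# The "dcrResourceId" parameter name and all IDs are assumed placeholders.

def build_assignment_parameters(dcr_resource_id: str) -> dict:
    """Shape the parameters payload for a DCR-association policy assignment."""
    return {"dcrResourceId": {"value": dcr_resource_id}}

dcr_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/example-rg/providers/Microsoft.Insights"
    "/dataCollectionRules/MSVMI-example-workspace"
)
params = build_assignment_parameters(dcr_id)
print(params["dcrResourceId"]["value"].split("/")[-1])  # → MSVMI-example-workspace
```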
azure-netapp-files | Azure Netapp Files Network Topologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md | Azure NetApp Files volumes are designed to be contained in a special purpose sub <a name="regions-edit-network-features"></a>The option to *[edit network features for existing volumes](configure-network-features.md#edit-network-features-option-for-existing-volumes)* is supported for the following regions: -* Australia Central -* Australia Central 2 -* Australia East -* Brazil South -* Canada Central -* East Asia -* Germany North -* Japan West -* Korea Central -* North Central US -* Norway East -* South Africa North -* South India -* Sweden Central -* UAE Central -* UAE North +* Australia Central +* Australia Central 2 +* Australia East +* Australia Southeast +* Brazil South +* Canada Central +* Central India +* East Asia +* East US +* East US 2 +* France Central +* Germany North +* Germany West Central +* Japan East +* Japan West +* Korea Central +* North Central US +* North Europe +* Norway East +* Norway West +* Qatar Central +* South Africa North +* South India +* Southeast Asia +* Sweden Central +* Switzerland North +* Switzerland West +* UAE Central +* UAE North +* West Europe +* West US +* West US 2 +* West US 3 + ## Considerations |
azure-portal | Azure Portal Safelist Urls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md | login.live.com #### Azure portal framework ```+portal.azure.com *.portal.azure.com *.hosting.portal.azure.net+reactblade.portal.azure.net *.reactblade.portal.azure.net management.azure.com *.ext.azure.com+graph.windows.net *.graph.windows.net+graph.microsoft.com *.graph.microsoft.com ``` management.azure.com *.account.microsoft.com *.bmx.azure.com *.subscriptionrp.trafficmanager.net+signup.azure.com *.signup.azure.com ``` |
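The safelist changes above add apex entries (such as `portal.azure.com`) alongside existing wildcard entries (`*.portal.azure.com`). A wildcard entry matches subdomains only, not the apex host itself, which is why both forms are needed; a minimal sketch of that matching rule (hypothetical helper, not a Microsoft tool):

```python
def matches_safelist(host: str, entry: str) -> bool:
    # "*.example.com" matches any subdomain of example.com,
    # but not the apex "example.com" itself
    if entry.startswith("*."):
        return host.endswith(entry[1:])  # entry[1:] == ".example.com"
    return host == entry

print(matches_safelist("myapp.portal.azure.com", "*.portal.azure.com"))  # True
print(matches_safelist("portal.azure.com", "*.portal.azure.com"))        # False
```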
azure-resource-manager | Bicep Functions Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md | The possible uses of `list*` are shown in the following table. | Microsoft.DevTestLab/labs/schedules | [ListApplicable](/rest/api/dtl/schedules/listapplicable) | | Microsoft.DevTestLab/labs/users/serviceFabrics | [ListApplicableSchedules](/rest/api/dtl/servicefabrics/listapplicableschedules) | | Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) |-| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2022-11-15/database-accounts/list-connection-strings) | -| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2022-11-15/database-accounts/list-keys) | -| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2022-11-15/notebook-workspaces/list-connection-info) | +| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/database-accounts/list-keys?tabs=HTTP) | +| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2023-03-15-preview/notebook-workspaces/list-connection-info?tabs=HTTP) | | Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/domains/list-shared-access-keys) | |
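The `list*` functions in the table above map to POST operations on the Azure management API. For example, the Cosmos DB `listKeys` operation corresponds to a request of the following shape; this sketch only builds the URL (values are placeholders, and the API version is the one referenced in the updated link):

```python
def list_keys_url(subscription_id: str, resource_group: str, account: str,
                  api_version: str = "2021-11-15-preview") -> str:
    # listKeys is invoked as a POST against the management endpoint
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.DocumentDB/databaseAccounts/{account}"
        f"/listKeys?api-version={api_version}"
    )

print(list_keys_url("sub-id", "my-rg", "my-cosmos-account"))
```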
azure-resource-manager | Bicep Functions String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-string.md | The following example shows how to use the format function. param greeting string = 'Hello' param name string = 'User' param numberToFormat int = 8175133+param objectToFormat object = { prop: 'value' } output formatTest string = format('{0}, {1}. Formatted number: {2:N0}', greeting, name, numberToFormat)+output formatObject string = format('objectToFormat: {0}', objectToFormat) + ``` The output from the preceding example with the default values is: | Name | Type | Value | | - | - | -- |-| formatTest | String | Hello, User. Formatted number: 8,175,133 | +| formatTest | String | `Hello, User. Formatted number: 8,175,133` | +| formatObject | String | `objectToFormat: {'prop':'value'}` | ## guid The output from the preceding example with the default values is: `string(valueToConvert)` Converts the specified value to a string.+Strings are returned as-is. Other types are converted to their equivalent JSON representation. +If you need to convert a string to JSON, i.e. quote/escape it, you can use `substring(string([value]), 1, length(string([value])) - 2)`. Namespace: [sys](bicep-functions.md#namespaces-for-functions). 
param testObject object = { valueB: 'Example Text' } param testArray array = [- 'a' - 'b' - 'c' + '\'a\'' + '"b"' + '\\c\\' ] param testInt int = 5+param testString string = 'foo " \' \\' output objectOutput string = string(testObject) output arrayOutput string = string(testArray) output intOutput string = string(testInt)+output stringOutput string = string(testString) +output stringEscapedOutput string = substring(string([testString]), 1, length(string([testString])) - 2) + ``` The output from the preceding example with the default values is: | Name | Type | Value | | - | - | -- |-| objectOutput | String | {"valueA":10,"valueB":"Example Text"} | -| arrayOutput | String | ["a","b","c"] | -| intOutput | String | 5 | +| objectOutput | String | `{"valueA":10,"valueB":"Example Text"}` | +| arrayOutput | String | `["'a'","\"b\"","\\c\\"]` | +| intOutput | String | `5` | +| stringOutput | String | `foo " ' \` | +| stringEscapedOutput | String | `"foo \" ' \\"` | ## substring |
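The quote/escape trick shown above, wrapping the value in a one-element array, stringifying it, and trimming the surrounding brackets, can be illustrated outside Bicep with Python's `json` module (a sketch of the same idea, not Bicep itself):

```python
import json

def escape_as_json_string(value: str) -> str:
    # string([value]) in Bicep serializes a one-element array: ["..."]
    serialized = json.dumps([value], separators=(",", ":"))
    # substring(..., 1, length(...) - 2) trims the [ and ] characters,
    # leaving the quoted, escaped JSON string literal
    return serialized[1:-1]

print(escape_as_json_string('foo " \' \\'))  # -> "foo \" ' \\"
```

This matches the `stringEscapedOutput` value in the table above.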
azure-resource-manager | Parameter Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md | Last updated 06/26/2023 Rather than passing parameters as inline values in your script, you can use a Bicep parameters file with the `.bicepparam` file extension or a JSON parameters file that contains the parameter values. This article shows how to create parameters files. > [!NOTE]-> The Bicep parameters file is only supported in [Bicep CLI](./install.md) version 0.18.4 or newer, and [Azure CLI](/azure/install-azure-cli.md) version 2.47.0 or newer. +> The Bicep parameters file is only supported in [Bicep CLI](./install.md) version 0.18.4 or newer, and [Azure CLI](/azure/developer/azure-developer-cli/install-azd?tabs=winget-windows%2Cbrew-mac%2Cscript-linux&pivots=os-windows) version 2.47.0 or newer. A single Bicep file can have multiple Bicep parameters files associated with it. However, each Bicep parameters file is intended for one particular Bicep file. This relationship is established using the `using` statement within the Bicep parameters file. For more information, see [Bicep parameters file](#parameters-file). |
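The `using` statement mentioned above ties a Bicep parameters file to one particular Bicep file. A minimal `.bicepparam` file looks like the following (the file path and parameter names are illustrative):

```bicep
using './main.bicep'

param storagePrefix = 'stg'
param storageSKU = 'Standard_LRS'
```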
azure-resource-manager | User Defined Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md | To enable this preview, modify your project's [bicepconfig.json](./bicep-config. You can use the `type` statement to define user-defined data types. In addition, you can also use type expressions in some places to define custom types. ```bicep-Type <userDefinedDataTypeName> = <typeExpression> +type <userDefinedDataTypeName> = <typeExpression> ``` The valid type expressions include: |
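For example, the `type` statement above can constrain an object's shape and allowed values (names here are illustrative):

```bicep
type storageAccountConfig = {
  name: string
  sku: 'Standard_LRS' | 'Premium_LRS'
}

param storageConfig storageAccountConfig = {
  name: 'mystorage'
  sku: 'Standard_LRS'
}
```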
azure-resource-manager | Resource Providers And Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-providers-and-types.md | Title: Resource providers and resource types description: Describes the resource providers that support Azure Resource Manager. It describes their schemas, available API versions, and the regions that can host the resources. Previously updated : 08/05/2022 Last updated : 07/14/2023 +content_well_notification: + - AI-contribution # Azure resource providers and types -An Azure resource provider is a collection of REST operations that provide functionality for an Azure service. For example, the Key Vault service consists of a resource provider named **Microsoft.KeyVault**. The resource provider defines [REST operations](/rest/api/keyvault/) for working with vaults, secrets, keys, and certificates. +An Azure resource provider is a set of REST operations that enable functionality for a specific Azure service. For example, the Key Vault service consists of a resource provider named **Microsoft.KeyVault**. The resource provider defines [REST operations](/rest/api/keyvault/) for managing vaults, secrets, keys, and certificates. -The resource provider defines the Azure resources that are available for you to deploy to your account. The name of a resource type is in the format: **{resource-provider}/{resource-type}**. The resource type for a key vault is **Microsoft.KeyVault/vaults**. +The resource provider defines the Azure resources you can deploy to your account. A resource type's name follows the format: **{resource-provider}/{resource-type}**. The resource type for a key vault is **Microsoft.KeyVault/vaults**. In this article, you learn how to: In this article, you learn how to: * View valid locations for a resource type * View valid API versions for a resource type -You can do these steps through the Azure portal, Azure PowerShell, or Azure CLI. 
- For a list that maps resource providers to Azure services, see [Resource providers for Azure services](azure-services-resource-providers.md). ## Register resource provider -Before using a resource provider, your Azure subscription must be registered for the resource provider. Registration configures your subscription to work with the resource provider. +Before you use a resource provider, you must make sure your Azure subscription is registered for the resource provider. Registration configures your subscription to work with the resource provider. > [!IMPORTANT]-> Only register a resource provider when you're ready to use it. The registration step enables you to maintain least privileges within your subscription. A malicious user can't use resource providers that aren't registered. +> Register a resource provider only when you're ready to use it. This registration step helps maintain least privileges within your subscription. A malicious user can't use unregistered resource providers. +> +> Registering unnecessary resource providers may result in unrecognized apps appearing in your Azure Active Directory tenant. Microsoft adds the app for a resource provider when you register it. These apps are typically added by the Windows Azure Service Management API. To prevent unnecessary apps in your tenant, only register needed resource providers. Some resource providers are registered by default. For a list of resource providers registered by default, see [Resource providers for Azure services](azure-services-resource-providers.md). -Other resource providers are registered automatically when you take certain actions. When you create a resource through the portal, the resource provider is typically registered for you. When you deploy an Azure Resource Manager template or Bicep file, resource providers defined in the template are registered automatically. 
However, if a resource in the template creates supporting resources that aren't in the template, such as monitoring or security resources, you need to manually register those resource providers. +Other resource providers are registered automatically when you take certain actions. When you create a resource through the portal, the resource provider is typically registered for you. When you deploy an Azure Resource Manager template or Bicep file, resource providers defined in the template are registered automatically. Sometimes, a resource in the template requires supporting resources that aren't in the template. Common examples are monitoring or security resources. You need to register those resource providers manually. For other scenarios, you may need to manually register a resource provider. You must have permission to do the `/register/action` operation for the resource You can't unregister a resource provider when you still have resource types from that resource provider in your subscription. +Reregister a resource provider when the resource provider supports new locations that you need to use. + ## Azure portal ### Register resource provider To see all resource providers, and the registration status for your subscription 1. Sign in to the [Azure portal](https://portal.azure.com). 1. On the Azure portal menu, search for **Subscriptions**. Select it from the available options. - :::image type="content" source="./media/resource-providers-and-types/search-subscriptions.png" alt-text="search subscriptions"::: + :::image type="content" source="./media/resource-providers-and-types/search-subscriptions.png" alt-text="Screenshot of searching for subscriptions in the Azure portal."::: 1. Select the subscription you want to view. 
- :::image type="content" source="./media/resource-providers-and-types/select-subscription.png" alt-text="select subscriptions"::: + :::image type="content" source="./media/resource-providers-and-types/select-subscription.png" alt-text="Screenshot of selecting a subscription in the Azure portal."::: 1. On the left menu, under **Settings**, select **Resource providers**. - :::image type="content" source="./media/resource-providers-and-types/select-resource-providers.png" alt-text="select resource providers"::: + :::image type="content" source="./media/resource-providers-and-types/select-resource-providers.png" alt-text="Screenshot of selecting resource providers in the Azure portal."::: -6. Find the resource provider you want to register, and select **Register**. To maintain least privileges in your subscription, only register those resource providers that you're ready to use. +1. Find the resource provider you want to register, and select **Register**. To maintain least privileges in your subscription, only register those resource providers that you're ready to use. - :::image type="content" source="./media/resource-providers-and-types/register-resource-provider.png" alt-text="register resource providers"::: + :::image type="content" source="./media/resource-providers-and-types/register-resource-provider.png" alt-text="Screenshot of registering a resource provider in the Azure portal."::: -> [!IMPORTANT] -> As [noted earlier](#register-resource-provider), **don't block the creation of resources** for a resource provider that is in the **registering** state. By not blocking a resource provider in the registering state, your application can continue much sooner than waiting for all regions to complete. + > [!IMPORTANT] + > As [noted earlier](#register-resource-provider), **don't block the creation of resources** for a resource provider that is in the **registering** state. 
By not blocking a resource provider in the registering state, your application can continue much sooner than waiting for all regions to complete. ++1. **Re-register** a resource provider to use locations that have been added since the previous registration. + :::image type="content" source="./media/resource-providers-and-types/re-register-resource-provider.png" alt-text="Screenshot of reregistering a resource provider in the Azure portal."::: ### View resource provider To see information for a particular resource provider: 1. Sign in to the [Azure portal](https://portal.azure.com).-2. On the Azure portal menu, select **All services**. -3. In the **All services** box, enter **resource explorer**, and then select **Resource Explorer**. +1. On the Azure portal menu, select **All services**. +1. In the **All services** box, enter **resource explorer**, and then select **Resource Explorer**. - :::image type="content" source="./media/resource-providers-and-types/select-resource-explorer.png" alt-text="Screenshot of selecting All services in the Azure portal."::: + :::image type="content" source="./media/resource-providers-and-types/select-resource-explorer.png" alt-text="Screenshot of selecting All services in the Azure portal to access Resource Explorer."::: -4. Expand **Providers** by selecting the right arrow. +1. Expand **Providers** by selecting the right arrow. - :::image type="content" source="./media/resource-providers-and-types/select-providers.png" alt-text="Screenshot of selecting providers in the Azure Resource Explorer."::: + :::image type="content" source="./media/resource-providers-and-types/select-providers.png" alt-text="Screenshot of expanding the Providers section in the Azure Resource Explorer."::: -5. Expand a resource provider and resource type that you want to view. +1. Expand a resource provider and resource type that you want to view. 
- :::image type="content" source="./media/resource-providers-and-types/select-resource-type.png" alt-text="Screenshot of selecting a resource type in the Azure Resource Explorer."::: + :::image type="content" source="./media/resource-providers-and-types/select-resource-type.png" alt-text="Screenshot of expanding a resource provider and resource type in the Azure Resource Explorer."::: -6. Resource Manager is supported in all regions, but the resources you deploy might not be supported in all regions. Also, there may be limitations on your subscription that prevent you from using some regions that support the resource. The resource explorer displays valid locations for the resource type. +1. Resource Manager is supported in all regions, but the resources you deploy might not be supported in all regions. Also, there may be limitations on your subscription that prevent you from using some regions that support the resource. The resource explorer displays valid locations for the resource type. - :::image type="content" source="./media/resource-providers-and-types/show-locations.png" alt-text="Screenshot of showing locations for a resource type in the Azure Resource Explorer."::: + :::image type="content" source="./media/resource-providers-and-types/show-locations.png" alt-text="Screenshot of displaying valid locations for a resource type in the Azure Resource Explorer."::: -7. The API version corresponds to a version of REST API operations that are released by the resource provider. As a resource provider enables new features, it releases a new version of the REST API. The resource explorer displays valid API versions for the resource type. +1. The API version corresponds to a version of the resource provider's REST API operations. As a resource provider enables new features, it releases a new version of the REST API. The resource explorer displays valid API versions for the resource type. 
- :::image type="content" source="./media/resource-providers-and-types/show-api-versions.png" alt-text="Screenshot of showing API versions for a resource type in the Azure Resource Explorer."::: + :::image type="content" source="./media/resource-providers-and-types/show-api-versions.png" alt-text="Screenshot of displaying valid API versions for a resource type in the Azure Resource Explorer."::: ## Azure PowerShell Locations : {West Europe, East US, East US 2, West US...} > [!IMPORTANT] > As [noted earlier](#register-resource-provider), **don't block the creation of resources** for a resource provider that is in the **registering** state. By not blocking a resource provider in the registering state, your application can continue much sooner than waiting for all regions to complete. +Reregister a resource provider to use locations that have been added since the previous registration. To reregister, run the registration command again. + To see information for a particular resource provider, use: ```azurepowershell-interactive locations locations/quotas ``` -The API version corresponds to a version of REST API operations that are released by the resource provider. As a resource provider enables new features, it releases a new version of the REST API. +The API version corresponds to a version of the resource provider's REST API operations. As a resource provider enables new features, it releases a new version of the REST API. To get the available API versions for a resource type, use: To get the available API versions for a resource type, use: The command returns: ```output-2017-05-01 -2017-01-01 -2015-12-01 -2015-09-01 -2015-07-01 +2023-05-01 +2022-10-01 +2022-06-01 +2022-01-01 +2021-06-01 +2021-01-01 +... ``` Resource Manager is supported in all regions, but the resources you deploy might not be supported in all regions. Also, there may be limitations on your subscription that prevent you from using some regions that support the resource. 
locations locations/quotas ``` -The API version corresponds to a version of REST API operations that are released by the resource provider. As a resource provider enables new features, it releases a new version of the REST API. +The API version corresponds to a version of the resource provider's REST API operations. As a resource provider enables new features, it releases a new version of the REST API. To get the available API versions for a resource type, use: The command returns: ```output Result -2017-05-01 -2017-01-01 -2015-12-01 -2015-09-01 -2015-07-01 +2023-05-01 +2022-10-01 +2022-06-01 +2022-01-01 +... ``` Resource Manager is supported in all regions, but the resources you deploy might not be supported in all regions. Also, there may be limitations on your subscription that prevent you from using some regions that support the resource. West US ... ``` +## Python ++To see all resource providers in Azure, and the registration status for your subscription, use: ++```python +import os +from azure.identity import DefaultAzureCredential +from azure.mgmt.resource import ResourceManagementClient + +# Authentication +credential = DefaultAzureCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] + +# Initialize Resource Management client +resource_management_client = ResourceManagementClient(credential, subscription_id) + +# List available resource providers and select ProviderNamespace and RegistrationState +providers = resource_management_client.providers.list() + +for provider in providers: + print(f"ProviderNamespace: {provider.namespace}, RegistrationState: {provider.registration_state}") +``` ++The command returns: ++```output +ProviderNamespace: Microsoft.AlertsManagement, RegistrationState: Registered +ProviderNamespace: Microsoft.AnalysisServices, RegistrationState: Registered +ProviderNamespace: Microsoft.ApiManagement, RegistrationState: Registered +ProviderNamespace: Microsoft.Authorization, RegistrationState: Registered +ProviderNamespace: 
Microsoft.Batch, RegistrationState: Registered +... +``` ++To see all registered resource providers for your subscription, use: ++```python +# List available resource providers with RegistrationState "Registered" and select ProviderNamespace and RegistrationState +providers = resource_management_client.providers.list() +registered_providers = [provider for provider in providers if provider.registration_state == "Registered"] + +# Sort by ProviderNamespace +sorted_registered_providers = sorted(registered_providers, key=lambda x: x.namespace) + +for provider in sorted_registered_providers: + print(f"ProviderNamespace: {provider.namespace}, RegistrationState: {provider.registration_state}") +``` ++To maintain least privileges in your subscription, only register those resource providers that you're ready to use. To register a resource provider, use: ++```python +import os +from azure.identity import DefaultAzureCredential +from azure.mgmt.resource import ResourceManagementClient + +# Authentication +credential = DefaultAzureCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] + +# Initialize Resource Management client +resource_management_client = ResourceManagementClient(credential, subscription_id) + +# Register resource provider +provider_namespace = "Microsoft.Batch" +registration_result = resource_management_client.providers.register(provider_namespace) + +print(f"ProviderNamespace: {registration_result.namespace}, RegistrationState: {registration_result.registration_state}") +``` ++The command returns: ++```output +ProviderNamespace: Microsoft.Batch, RegistrationState: Registered +``` ++> [!IMPORTANT] +> As [noted earlier](#register-resource-provider), **don't block the creation of resources** for a resource provider that is in the **registering** state. By not blocking a resource provider in the registering state, your application can continue much sooner than waiting for all regions to complete. 
++Reregister a resource provider to use locations that have been added since the previous registration. To reregister, run the registration command again. ++To see information for a particular resource provider, use: ++```python +import os +from azure.identity import DefaultAzureCredential +from azure.mgmt.resource import ResourceManagementClient + +# Authentication +credential = DefaultAzureCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] + +# Initialize Resource Management client +resource_management_client = ResourceManagementClient(credential, subscription_id) + +# Get resource provider by ProviderNamespace +provider_namespace = "Microsoft.Batch" +provider = resource_management_client.providers.get(provider_namespace) + +print(f"ProviderNamespace: {provider.namespace}, RegistrationState: {provider.registration_state}\n") + +# Add resource types, locations, and API versions with new lines to separate results +for resource_type in provider.resource_types: + print(f"ResourceType: {resource_type.resource_type}\nLocations: {', '.join(resource_type.locations)}\nAPIVersions: {', '.join(resource_type.api_versions)}\n") ++``` ++The command returns: ++```output +ProviderNamespace: Microsoft.Batch, RegistrationState: Registered ++ResourceType: batchAccounts +Locations: West Europe, East US, East US 2, West US, North Central US, Brazil South, North Europe, Central US, East Asia, Japan East, Australia Southeast, Japan West, Korea South, Korea Central, Southeast Asia, South Central US, Australia East, Jio India West, South India, Central India, West India, Canada Central, Canada East, UK South, UK West, West Central US, West US 2, France Central, South Africa North, UAE North, Australia Central, Germany West Central, Switzerland North, Norway East, Brazil Southeast, West US 3, Sweden Central, Qatar Central, Poland Central, East US 2 EUAP, Central US EUAP +APIVersions: 2023-05-01, 2022-10-01, 2022-06-01, 2022-01-01, 2021-06-01, 2021-01-01, 2020-09-01, 
2020-05-01, 2020-03-01-preview, 2020-03-01, 2019-08-01, 2019-04-01, 2018-12-01, 2017-09-01, 2017-05-01, 2017-01-01, 2015-12-01, 2015-09-01, 2015-07-01, 2014-05-01-privatepreview ++... +``` ++To see the resource types for a resource provider, use: ++```python +# Get resource provider by ProviderNamespace +provider_namespace = "Microsoft.Batch" +provider = resource_management_client.providers.get(provider_namespace) + +# Get ResourceTypeName of the resource types +resource_type_names = [resource_type.resource_type for resource_type in provider.resource_types] + +for resource_type_name in resource_type_names: + print(resource_type_name) +``` ++The command returns: ++```output +batchAccounts +batchAccounts/pools +batchAccounts/detectors +batchAccounts/certificates +operations +locations +locations/quotas +locations/checkNameAvailability +locations/accountOperationResults +locations/virtualMachineSkus +locations/cloudServiceSkus +``` ++The API version corresponds to a version of the resource provider's REST API operations. As a resource provider enables new features, it releases a new version of the REST API. ++To get the available API versions for a resource type, use: ++```python +# Get resource provider by ProviderNamespace +provider_namespace = "Microsoft.Batch" +provider = resource_management_client.providers.get(provider_namespace) + +# Filter resource type by ResourceTypeName and get its ApiVersions +resource_type_name = "batchAccounts" +api_versions = [ + resource_type.api_versions + for resource_type in provider.resource_types + if resource_type.resource_type == resource_type_name +] + +for api_version in api_versions[0]: + print(api_version) ++``` ++The command returns: ++```output +2023-05-01 +2022-10-01 +2022-06-01 +2022-01-01 +... +``` ++Resource Manager is supported in all regions, but the resources you deploy might not be supported in all regions. 
Also, there may be limitations on your subscription that prevent you from using some regions that support the resource. ++To get the supported locations for a resource type, use: ++```python +# Get resource provider by ProviderNamespace +provider_namespace = "Microsoft.Batch" +provider = resource_management_client.providers.get(provider_namespace) + +# Filter resource type by ResourceTypeName and get its Locations +resource_type_name = "batchAccounts" +locations = [ + resource_type.locations + for resource_type in provider.resource_types + if resource_type.resource_type == resource_type_name +] + +for location in locations[0]: + print(location) +``` ++The command returns: ++```output +West Europe +East US +East US 2 +West US +... +``` ++ ## Next steps ++* To learn about creating Resource Manager templates, see [Authoring Azure Resource Manager templates](../templates/syntax.md). +* To view the resource provider template schemas, see [Template reference](/azure/templates/). +* For a list that maps resource providers to Azure services, see [Resource providers for Azure services](azure-services-resource-providers.md).-* To view the operations for a resource provider, see [Azure REST API](/rest/api/). +* To view the operations for a resource provider, see [Azure REST API](/rest/api/). |
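Because resource provider API versions are date-prefixed (`YYYY-MM-DD`, optionally with a `-preview` style suffix), picking the newest stable version from a list like the one above can be done with plain string comparison; a hypothetical helper, not part of the Azure SDK:

```python
def latest_stable_api_version(api_versions):
    # Date-prefixed versions (YYYY-MM-DD) compare correctly as strings;
    # skip preview and private-preview versions
    stable = [v for v in api_versions if "preview" not in v.lower()]
    return max(stable) if stable else None

versions = ["2023-05-01", "2022-10-01", "2020-03-01-preview",
            "2014-05-01-privatepreview"]
print(latest_stable_api_version(versions))  # -> 2023-05-01
```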
azure-resource-manager | Template Functions Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md | The possible uses of `list*` are shown in the following table. | Microsoft.DevTestLab/labs/schedules | [ListApplicable](/rest/api/dtl/schedules/listapplicable) | | Microsoft.DevTestLab/labs/users/serviceFabrics | [ListApplicableSchedules](/rest/api/dtl/servicefabrics/listapplicableschedules) | | Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) |-| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2022-11-15/database-accounts/list-connection-strings) | -| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2022-11-15/database-accounts/list-keys) | -| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2022-11-15/notebook-workspaces/list-connection-info) | +| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/database-accounts/list-keys?tabs=HTTP) | +| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2023-03-15-preview/notebook-workspaces/list-connection-info?tabs=HTTP) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/domains/list-shared-access-keys) | | Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/topics/list-shared-access-keys) | |
azure-resource-manager | Template Tutorial Add Outputs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-outputs.md | Now, let's look at the resource group and deployment history. 1. Depending on the steps you did, you should have at least one and perhaps several storage accounts in the resource group. 1. You should also have several successful deployments listed in the history. Select that link. - ![Select deployments](./media/template-tutorial-add-outputs/select-deployments.png) + :::image type="content" source="./media/template-tutorial-add-outputs/select-deployments.png" alt-text="Screenshot of the Azure portal showing the deployments link."::: 1. You see all of your deployments in the history. Select the deployment called **addoutputs**. - ![Show deployment history](./media/template-tutorial-add-outputs/show-history.png) + :::image type="content" source="./media/template-tutorial-add-outputs/show-history.png" alt-text="Screenshot of the Azure portal showing the deployment history."::: 1. You can review the inputs. - ![Show inputs](./media/template-tutorial-add-outputs/show-inputs.png) + :::image type="content" source="./media/template-tutorial-add-outputs/show-inputs.png" alt-text="Screenshot of the Azure portal showing the deployment inputs."::: 1. You can review the outputs. - ![Show outputs](./media/template-tutorial-add-outputs/show-outputs.png) + :::image type="content" source="./media/template-tutorial-add-outputs/show-outputs.png" alt-text="Screenshot of the Azure portal showing the deployment outputs."::: 1. You can review the template. - ![Show template](./media/template-tutorial-add-outputs/show-template.png) + :::image type="content" source="./media/template-tutorial-add-outputs/show-template.png" alt-text="Screenshot of the Azure portal showing the deployment template."::: ## Clean up resources |
azure-resource-manager | Template Tutorial Create First Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md | Okay, you're ready to start learning about templates. Here's what your Visual Studio Code environment looks like: - :::image type="content" source="./media/template-tutorial-create-first-template/resource-manager-visual-studio-code-first-template.png" alt-text="ARM template Visual Studio Code first template."::: + :::image type="content" source="./media/template-tutorial-create-first-template/resource-manager-visual-studio-code-first-template.png" alt-text="Screenshot of Visual Studio Code displaying an empty ARM template with JSON structure in the editor."::: This template doesn't deploy any resources. We're starting with a blank template so you can get familiar with the steps to deploy a template while minimizing the chance of something going wrong. The deployment command returns results. 
Look for `ProvisioningState` to see whet # [PowerShell](#tab/azure-powershell) - :::image type="content" source="./media/template-tutorial-create-first-template/resource-manager-deployment-provisioningstate.png" alt-text="PowerShell deployment provisioning state."::: + :::image type="content" source="./media/template-tutorial-create-first-template/resource-manager-deployment-provisioningstate.png" alt-text="Screenshot of PowerShell output showing the successful deployment provisioning state."::: # [Azure CLI](#tab/azure-cli) - :::image type="content" source="./media/template-tutorial-create-first-template/azure-cli-provisioning-state.png" alt-text="Azure CLI deployment provisioning state."::: + :::image type="content" source="./media/template-tutorial-create-first-template/azure-cli-provisioning-state.png" alt-text="Screenshot of Azure CLI output displaying the successful deployment provisioning state."::: You can verify the deployment by exploring the resource group from the Azure por 1. Notice in the middle of the overview, in the **Essentials** section, the page displays the deployment status next to **Deployments**. Select **1 Succeeded**. - :::image type="content" source="./media/template-tutorial-create-first-template/deployment-status.png" alt-text="See deployment status."::: + :::image type="content" source="./media/template-tutorial-create-first-template/deployment-status.png" alt-text="Screenshot of Azure portal showing the deployment status in the Essentials section of the resource group."::: 1. You see a history of deployment for the resource group. Check the box to the left of **blanktemplate** and select **blanktemplate**. 
- :::image type="content" source="./media/template-tutorial-create-first-template/select-from-deployment-history.png" alt-text="Select deployment."::: + :::image type="content" source="./media/template-tutorial-create-first-template/select-from-deployment-history.png" alt-text="Screenshot of Azure portal displaying the deployment history with the blanktemplate deployment selected."::: 1. You see a summary of the deployment. In this case, there's not a lot to see because no resources are deployed. Later in this series you might find it helpful to review the summary in the deployment history. Notice on the left you can see inputs, outputs, and the template that the deployment used. - :::image type="content" source="./media/template-tutorial-create-first-template/view-deployment-summary.png" alt-text="See deployment summary."::: + :::image type="content" source="./media/template-tutorial-create-first-template/view-deployment-summary.png" alt-text="Screenshot of Azure portal showing the deployment summary for the blanktemplate deployment."::: ## Clean up resources If you're stopping now, you might want to delete the resource group. 3. Check the box next to **myResourceGroup** and select **myResourceGroup** or your resource group name. 4. Select **Delete resource group** from the top menu. - :::image type="content" source="./media/template-tutorial-create-first-template/resource-deletion.png" alt-text="See deletion."::: + :::image type="content" source="./media/template-tutorial-create-first-template/resource-deletion.png" alt-text="Screenshot of Azure portal with the Delete resource group option highlighted in the resource group view."::: ## Next steps |
azure-resource-manager | Template Tutorial Export Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-export-template.md | This template works well for deploying storage accounts, but you might want to a - **Region**: Select an Azure location from the drop-down menu, such as **Central US**. - **Pricing Tier**: To save costs, select **Change size** to change the **SKU and size** to **first Basic (B1)**, under **Dev / Test** for less demanding workloads. - ![Resource Manager template export template portal](./media/template-tutorial-export-template/resource-manager-template-export.png) + :::image type="content" source="./media/template-tutorial-export-template/resource-manager-template-export.png" alt-text="Screenshot of the Create App Service Plan page in the Azure portal."::: 1. Select **Review and create**. 1. Select **Create**. It takes a few moments to create the resource. This template works well for deploying storage accounts, but you might want to a 1. Select **Go to resource**. - ![Go to resource](./media/template-tutorial-export-template/resource-manager-template-export-go-to-resource.png) + :::image type="content" source="./media/template-tutorial-export-template/resource-manager-template-export-go-to-resource.png" alt-text="Screenshot of the Go to resource button in the Azure portal."::: 1. From the left menu, under **Automation**, select **Export template**. - ![Resource Manager template export template](./media/template-tutorial-export-template/resource-manager-template-export-template.png) + :::image type="content" source="./media/template-tutorial-export-template/resource-manager-template-export-template.png" alt-text="Screenshot of the Export template option in the Azure portal."::: The export template feature takes the current state of a resource and generates a template to deploy it. 
Exporting a template can be a helpful way of quickly getting the JSON you need to deploy a resource. 1. Look at the `Microsoft.Web/serverfarms` definition and the parameter definition in the exported template. You don't need to copy these sections. You can just use this exported template as an example of how you want to add this resource to your template. - ![Resource Manager template export template exported template](./media/template-tutorial-export-template/resource-manager-template-exported-template.png) + :::image type="content" source="./media/template-tutorial-export-template/resource-manager-template-exported-template.png" alt-text="Screenshot of the exported template JSON code in the Azure portal."::: > [!IMPORTANT] > Typically, the exported template is more verbose than you might want when creating a template. The SKU object, for example, in the exported template has five properties. This template works, but you could just use the `name` property. You can start with the exported template and then modify it as you like to fit your requirements. |
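As the tip above suggests, you can trim an exported template down to just what you need. A small sketch (the template fragment is hypothetical) that reduces each resource's five-property SKU object to just `name`:

```python
import json

# Hypothetical exported template fragment: the exported SKU carries
# several properties, but only "name" is required to redeploy.
exported = {
    "resources": [{
        "type": "Microsoft.Web/serverfarms",
        "apiVersion": "2022-03-01",
        "name": "myPlan",
        "sku": {
            "name": "B1",
            "tier": "Basic",
            "size": "B1",
            "family": "B",
            "capacity": 1,
        },
    }]
}

def trim_sku(template: dict) -> dict:
    """Keep only the SKU 'name' property on each resource."""
    for resource in template.get("resources", []):
        sku = resource.get("sku")
        if isinstance(sku, dict) and "name" in sku:
            resource["sku"] = {"name": sku["name"]}
    return template

trimmed = trim_sku(exported)
print(json.dumps(trimmed["resources"][0]["sku"]))  # {"name": "B1"}
```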
backup | Disk Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-overview.md | Title: Overview of Azure Disk Backup description: Learn about the Azure Disk backup solution. Previously updated : 03/10/2022 Last updated : 07/21/2023 Consider Azure Disk Backup in scenarios where: - Currently Azure Disk Backup supports operational backup of managed disks and doesn't copy the backups to Backup Vault storage. Refer to the [support matrix](disk-backup-support-matrix.md) for a detailed list of supported and unsupported scenarios, and region availability. +## How does the disk backup scheduling and retention period work? ++Azure Disk Backup currently supports only the Operational Tier, which stores backups as disk snapshots in your tenant that aren't moved to the vault. The backup policy defines the schedule and retention period of your backups in the Operational Tier (when the snapshots are taken and how long they're retained). ++By using the Azure Disk backup policy, you can define the backup schedule with an Hourly frequency of 1, 2, 4, 6, 8, or 12 hours, or a Daily frequency. Although backups are scheduled according to the policy, the actual start time of a backup can differ from the scheduled time. ++The retention period of snapshots is governed by the snapshot limit for a disk. Currently, a maximum of 500 snapshots can be retained for a disk. If the limit is reached, no new snapshots can be taken, and you need to delete the older snapshots. ++The retention period for scheduled backups is therefore capped at 450 snapshots, with the remaining 50 snapshots kept aside for on-demand backups. ++For example, if the scheduling frequency for backups is set as Daily, then you can set the retention period for backups at a maximum value of 450 days.
Similarly, if the scheduling frequency for backups is set as Hourly with a 1-hour frequency, then you can set the retention for backups at a maximum value of 18 days. + ## Pricing Azure Backup uses [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md) of the managed disk. Incremental snapshots are charged per GiB of the storage occupied by the delta changes since the last snapshot. For example, if you're using a managed disk with a provisioned size of 128 GiB, with 100 GiB used, the first incremental snapshot is billed only for the used size of 100 GiB. 20 GiB of data is added on the disk before you create the second snapshot. Now, the second incremental snapshot is billed for only 20 GiB. |
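The relationship above between backup frequency and maximum retention can be sketched as a simple model based on the stated limits (500 snapshots per disk, 50 reserved for on-demand backups, leaving 450 for scheduled backups):

```python
from typing import Optional

# Model of the Operational Tier snapshot limits described above:
# 500 snapshots per disk, of which 50 are reserved for on-demand
# backups, leaving 450 for scheduled backups.
SNAPSHOT_LIMIT = 500
ON_DEMAND_RESERVE = 50
SCHEDULED_LIMIT = SNAPSHOT_LIMIT - ON_DEMAND_RESERVE  # 450

def max_retention_days(frequency_hours: Optional[int]) -> int:
    """Maximum retention in days for a backup schedule.

    frequency_hours=None models the Daily schedule (one backup per day);
    otherwise a backup is taken every `frequency_hours` hours.
    """
    backups_per_day = 1 if frequency_hours is None else 24 / frequency_hours
    return int(SCHEDULED_LIMIT // backups_per_day)

print(max_retention_days(None))  # Daily -> 450 days
print(max_retention_days(1))     # Every hour -> 18 days
```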
backup | Sql Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sql-support-matrix.md | Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 07/18/2023 Last updated : 07/25/2023 You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server manually installed) VMs are supported. **Supported regions** | Azure Backup for SQL Server databases is available in all regions, except France South (FRS), UK North (UKN), UK South 2 (UKS2), UG IOWA (UGI), and Germany (Black Forest). **Supported operating systems** | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 (all versions), Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported.-**Supported SQL Server versions** | SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported. 
+**Supported SQL Server versions** | SQL Server 2022, SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported. **Supported .NET versions** | .NET Framework 4.5.2 or later installed on the VM **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server that is manually installed) VMs are supported. Support for standalone instances is always on [availability groups](backup-sql-server-on-availability-groups.md). **Cross Region Restore** | Supported. [Learn more](restore-sql-database-azure-vm.md#cross-region-restore). |
chaos-studio | Chaos Studio Fault Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md | Currently, the Windows agent doesn't reduce memory pressure when other applicati | Prerequisites | Agent must run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. | | Urn | urn:csci:microsoft:agent:networkPacketLoss/1.0 | | Parameters (key, value) | |-| lossRate | The rate at which packets matching the destination filters will be lost, ranging from 0.0 to 1.0. | +| packetLossRate | The rate at which packets matching the destination filters will be lost, ranging from 0.0 to 1.0. | | virtualMachineScaleSetInstances | An array of instance IDs when this fault is applied to a virtual machine scale set. Required for virtual machine scale sets. | | destinationFilters | Delimited JSON array of packet filters (parameters below) that define which outbound packets to target for fault injection. Maximum of three.| | address | IP address that indicates the start of the IP range. | Currently, the Windows agent doesn't reduce memory pressure when other applicati "value": "[{\"address\":\"23.45.229.97\",\"subnetMask\":\"255.255.255.224\",\"portLow\":5000,\"portHigh\":5200}]" }, {- "key": "lossRate", + "key": "packetLossRate", "value": "0.5" }, { |
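The fault parameters above show `destinationFilters` as a JSON array serialized into a string value, which is why the example document contains escaped quotes. A small sketch of constructing that payload (filter values taken from the example above):

```python
import json

# Build the packet-loss fault parameters. "destinationFilters" is itself
# a JSON array serialized into a string, matching the escaped form shown
# in the example document.
destination_filters = [{
    "address": "23.45.229.97",
    "subnetMask": "255.255.255.224",
    "portLow": 5000,
    "portHigh": 5200,
}]

parameters = [
    {"key": "destinationFilters", "value": json.dumps(destination_filters)},
    {"key": "packetLossRate", "value": "0.5"},
]

print(parameters[0]["value"])
```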
communication-services | Azure Communication Services Azure Cognitive Services Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md | -Azure Communication Services Call Automation APIs provide developers the ability to steer and control the ACS Telephony, VoIP or WebRTC calls using real-time event triggers to perform actions based on custom business logic specific to their domain. Within the Call Automation APIs developers can use simple AI powered APIs, which can be used to play personalized greeting messages, recognize conversational voice inputs to gather information on contextual questions to drive a more self-service model with customers, use sentiment analysis to improve customer service overall. These content specific APIs are orchestrated through **Azure Cognitive Services** with support for customization of AI models without developers needing to terminate media streams on their services and streaming back to Azure for AI functionality. +Azure Communication Services Call Automation APIs provide developers the ability to steer and control the Azure Communication Services Telephony, VoIP or WebRTC calls using real-time event triggers to perform actions based on custom business logic specific to their domain. Within the Call Automation APIs developers can use simple AI powered APIs, which can be used to play personalized greeting messages, recognize conversational voice inputs to gather information on contextual questions to drive a more self-service model with customers, use sentiment analysis to improve customer service overall. These content specific APIs are orchestrated through **Azure Cognitive Services** with support for customization of AI models without developers needing to terminate media streams on their services and streaming back to Azure for AI functionality. 
All this is possible with one click, where enterprises can access a secure solution and link their models through the portal. Furthermore, developers and enterprises don't need to manage credentials. Connecting your Cognitive Services uses managed identities to access user-owned resources. Developers can use managed identities to authenticate any resource that supports Azure Active Directory authentication. With the ability to connect your Cognitive Services to Azure Communication Serv ## Azure portal experience You can also configure and bind your Communication Services and Cognitive Services through the Azure portal. -### Add a Managed Identity to the ACS Resource +### Add a Managed Identity to the Azure Communication Services Resource -1. Navigate to your ACS Resource in the Azure portal. +1. Navigate to your Azure Communication Services Resource in the Azure portal. 2. Select the Identity tab. 3. Enable system assigned identity. This action begins the creation of the identity; a pop-up notification appears notifying you that the request is being processed. You can also configure and bind your Communication Services and Cognitive Servic 9. Click "Review + assign" to assign the role to the managed identity. -### Option 2: Add role through ACS Identity tab +### Option 2: Add role through Azure Communication Services Identity tab -1. Navigate to your ACS resource in the Azure portal. +1. Navigate to your Azure Communication Services resource in the Azure portal. 2. Select Identity tab. 3. Click on "Azure role assignments". |
communication-services | Matching Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/matching-concepts.md | This document describes the registration of workers, the submission of jobs and ## Worker Registration -Before a worker can receive offers to service a job, it must be registered. -In order to register, we need to specify which queues the worker will listen on, which channels it can handle and a set of labels. +Before a worker can receive offers to service a job, it must be registered first by setting `availableForOffers` to true. Next, we need to specify which queues the worker listens on and which channels it can handle. Once registered, you receive a [RouterWorkerRegistered](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerregistered) event from Event Grid. -In the following example we register a worker to +In the following example, we register a worker to - Listen on `queue-1` and `queue-2`-- Be able to handle both the voice and chat channels. In this case, the worker could either take a single `voice` job at one time or two `chat` jobs at the same time. This is configured by specifying the total capacity of the worker and assigning a cost per job for each channel.+- Be able to handle both the voice and chat channels. In this case, the worker could either take a single `voice` job at one time or two `chat` jobs at the same time. This setting is configured by specifying the total capacity of the worker and assigning a cost per job for each channel. - Have a set of labels that describe things about the worker that could help determine if it's a match for a particular job. 
::: zone pivot="programming-language-csharp" In the following example we register a worker to ```csharp await client.CreateWorkerAsync(new CreateWorkerOptions(workerId: "worker-1", totalCapacity: 2) {- QueueIds = { ["queue1"] = new RouterQueueAssignment(), ["queue2"] = new RouterQueueAssignment() }, + AvailableForOffers = true, + QueueAssignments = { ["queue1"] = new RouterQueueAssignment(), ["queue2"] = new RouterQueueAssignment() }, ChannelConfigurations = { ["voice"] = new ChannelConfiguration(capacityCostPerJob: 2), await client.CreateWorkerAsync(new CreateWorkerOptions(workerId: "worker-1", tot ```typescript await client.createWorker("worker-1", {+ availableForOffers: true, totalCapacity: 2, queueAssignments: { queue1: {}, queue2: {} }, channelConfigurations: { await client.createWorker("worker-1", { ```python client.create_worker(worker_id = "worker-1", router_worker = RouterWorker(+ available_for_offers = True, total_capacity = 2, queue_assignments = { "queue2": RouterQueueAssignment() client.create_worker(worker_id = "worker-1", router_worker = RouterWorker( ```java client.createWorker(new CreateWorkerOptions("worker-1", 2)+ .setAvailableForOffers(true) .setQueueAssignments(Map.of( "queue1", new RouterQueueAssignment(), "queue2", new RouterQueueAssignment())) client.createWorker(new CreateWorkerOptions("worker-1", 2) ::: zone-end -> [!NOTE] -> If a worker is registered and idle for more than 7 days, it'll be automatically deregistered and you'll receive a `WorkerDeregistered` event from EventGrid. - ## Job Submission -In the following example, we'll submit a job that +In the following example, we submit a job that - Goes directly to `queue1`. - For the `chat` channel.
The distribution policy that is attached to the queue will control how many active offers there can be for a job and how long each offer is valid. [You'll receive][subscribe_events] an [OfferIssued Event][offer_issued_event] which would look like this: +Job Router tries to match this job to an available worker listening on `queue1` for the `chat` channel, with `English` set to `true` and `Skill` greater than `10`. +Once a match is made, an offer is created. The distribution policy that is attached to the queue controls how many active offers there can be for a job and how long each offer is valid. [You receive][subscribe_events] an [OfferIssued Event][offer_issued_event] that would look like this: ```json { Once a match is made, an offer is created. The distribution policy that is attac } ``` -The [OfferIssued Event][offer_issued_event] includes details about the job, worker, how long the offer is valid and the `offerId` which you'll need to accept or decline the job. +The [OfferIssued Event][offer_issued_event] includes details about the job, worker, how long the offer is valid, and the `offerId` that you need in order to accept or decline the job. > [!NOTE] > The maximum lifetime of a job is 90 days, after which it'll automatically expire. +## Worker Deregistration ++If a worker would like to stop receiving offers, it can be deregistered by setting `AvailableForOffers` to `false` when updating the worker. You then receive a [RouterWorkerDeregistered](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerderegistered) event from Event Grid. Any existing offers for the worker are revoked and you receive a [RouterWorkerOfferRevoked](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferrevoked) event for each offer.
+++```csharp +await client.UpdateWorkerAsync(new UpdateWorkerOptions(workerId: "worker-1") { AvailableForOffers = false }); +``` ++++```typescript +await client.updateWorker("worker-1", { availableForOffers: false }); +``` ++++```python +client.update_worker(worker_id = "worker-1", router_worker = RouterWorker(available_for_offers = False)) +``` ++++```java +client.updateWorker(new UpdateWorkerOptions("worker-1").setAvailableForOffers(false)); +``` +++> [!NOTE] +> If a worker is registered and idle for more than 7 days, it'll be automatically deregistered. + <!-- LINKS --> [subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md-[job_classified_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassified [offer_issued_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferissued |
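The capacity accounting described in this row (total capacity 2, a voice job costing 2 and a chat job costing 1) can be illustrated with a toy model; this is not the Job Router SDK, just a sketch of how `capacityCostPerJob` and total capacity interact:

```python
# Toy model (not the SDK) of worker capacity: a worker with total
# capacity 2 can hold one voice job (cost 2) or two chat jobs (cost 1).
class Worker:
    def __init__(self, total_capacity, channel_costs):
        self.total_capacity = total_capacity
        self.channel_costs = channel_costs
        self.active_jobs = []  # channel name per active job

    def used_capacity(self):
        return sum(self.channel_costs[c] for c in self.active_jobs)

    def can_accept(self, channel):
        cost = self.channel_costs[channel]
        return self.used_capacity() + cost <= self.total_capacity

worker = Worker(total_capacity=2, channel_costs={"voice": 2, "chat": 1})
worker.active_jobs.append("chat")
print(worker.can_accept("chat"))   # a second chat job still fits
print(worker.can_accept("voice"))  # voice needs the full capacity
```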
communication-services | Worker Capacity Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/worker-capacity-concepts.md | In this example, we configure a worker with total capacity of 100 and set the vo var worker = await client.CreateWorkerAsync( new CreateWorkerOptions(workerId: "worker1", totalCapacity: 100) {- QueueIds = { ["queue1"] = new RouterQueueAssignment() }, + QueueAssignments = { ["queue1"] = new RouterQueueAssignment() }, ChannelConfigurations = { ["voice"] = new ChannelConfiguration(capacityCostPerJob: 100), In this example, a worker is configured with total capacity of 100. Next, the v var worker = await client.CreateWorkerAsync( new CreateWorkerOptions(workerId: "worker1", totalCapacity: 100) {- QueueIds = { ["queue1"] = new RouterQueueAssignment() }, + QueueAssignments = { ["queue1"] = new RouterQueueAssignment() }, ChannelConfigurations = { ["voice"] = new ChannelConfiguration(capacityCostPerJob: 60), |
communication-services | Ui Library Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/ui-library/ui-library-overview.md | Title: UI Library overview description: Learn about the Azure Communication Services UI Library.-+ -+ Last updated 06/30/2021 |
communication-services | Ui Library Use Cases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/ui-library/ui-library-use-cases.md | Title: UI Library use cases description: Learn about the UI Library and how it can help you build communication experiences-+ -+ Last updated 06/30/2021 |
communication-services | Data Channel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/data-channel.md | -The Data Channel API enables real-time messaging during audio and video calls. With this API, you can now easily integrate chat and data exchange functionalities into the applications, providing a seamless communication experience for users. Key features include: +The Data Channel API enables real-time messaging during audio and video calls. With this API, you can now easily integrate data exchange functionalities into the applications, providing a seamless communication experience for users. Key features include: * Real-time Messaging: The Data Channel API enables users to instantly send and receive messages during an ongoing audio or video call, promoting smooth and efficient communication. In group call scenarios, messages can be sent to a single participant, a specific set of participants, or all participants within the call. This flexibility enhances communication and collaboration among users during group interactions. * Unidirectional Communication: Unlike bidirectional communication, the Data Channel API is designed for unidirectional communication. It employs distinct objects for sending and receiving messages: the DataChannelSender object for sending and the DataChannelReceiver object for receiving. This separation simplifies message management in group calls, leading to a more streamlined user experience. These are two common use cases: ### Messaging between participants in a call The Data Channel API enables the transmission of binary type messages among call participants.-With appropriate serialization in the application, it can deliver various message types, extending beyond mere chat texts. -Although other messaging libraries may offer similar functionality, the Data Channel API offers the advantage of low-latency communication. 
-Moreover, by removing the need for maintaining a separate participant list, user management is simplified. +With appropriate serialization in the application, it can deliver various message types for different purposes. +There are also other libraries or services that provide messaging functionality. +Each has its advantages and disadvantages; choose the one that suits your usage scenario. +For example, the Data Channel API offers the advantage of low-latency communication and simplifies user management, as there is no need to maintain a separate participant list. +However, the data channel feature doesn't provide message persistence and doesn't guarantee that messages won't be lost in an end-to-end manner. +If you need stateful messaging or guaranteed delivery, consider alternative solutions. ### File sharing The decoupling of sender and receiver objects simplifies message handling in gro ### Channel Every Data Channel message is associated with a specific channel identified by `channelId`. It's important to clarify that this channelId isn't related to the `id` property in the WebRTC Data Channel.-This channelId can be utilized to differentiate various application uses, such as using 100 for chat messages and 101 for image transfers. +This channelId can be utilized to differentiate various application uses, such as using 1000 for control messages and 1001 for image transfers. The channelId is assigned during the creation of a DataChannelSender object, and can be either user-specified or determined by the SDK if left unspecified. |
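As the Channel discussion above suggests, an application can route received messages by `channelId` (for example, 1000 for control messages and 1001 for image transfers). A hypothetical application-level dispatcher, independent of the Calling SDK, might look like:

```python
# Hypothetical application-level routing of received data channel
# messages by channelId; the values 1000 and 1001 follow the example
# above. This models app logic only, not the Calling SDK API.
handlers = {}

def on_channel(channel_id):
    """Register a handler for a given channelId."""
    def register(fn):
        handlers[channel_id] = fn
        return fn
    return register

@on_channel(1000)
def handle_control(payload: bytes) -> str:
    return f"control:{payload.decode()}"

@on_channel(1001)
def handle_image(payload: bytes) -> str:
    return f"image:{len(payload)} bytes"

def dispatch(channel_id: int, payload: bytes) -> str:
    handler = handlers.get(channel_id)
    return handler(payload) if handler else "unhandled"

print(dispatch(1000, b"mute"))  # control:mute
```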
communication-services | Customize Worker Scoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/customize-worker-scoring.md | var queue = await administrationClient.CreateQueueAsync( // Create workers var worker1 = await client.CreateWorkerAsync(new CreateWorkerOptions(workerId: "Worker_1", totalCapacity: 100) {- QueueIds = { [queue.Value.Id] = new RouterQueueAssignment() }, + QueueAssignments = { [queue.Value.Id] = new RouterQueueAssignment() }, ChannelConfigurations = { ["Xbox_Chat_Channel"] = new ChannelConfiguration(capacityCostPerJob: 10) }, Labels = { var worker1 = await client.CreateWorkerAsync(new CreateWorkerOptions(workerId: " var worker2 = await client.CreateWorkerAsync(new CreateWorkerOptions(workerId: "Worker_2", totalCapacity: 100) {- QueueIds = { [queue.Value.Id] = new RouterQueueAssignment() }, + QueueAssignments = { [queue.Value.Id] = new RouterQueueAssignment() }, ChannelConfigurations = { ["Xbox_Chat_Channel"] = new ChannelConfiguration(capacityCostPerJob: 10) }, Labels = { var worker2 = await client.CreateWorkerAsync(new CreateWorkerOptions(workerId: " var worker3 = await client.CreateWorkerAsync(new CreateWorkerOptions(workerId: "Worker_3", totalCapacity: 100) {- QueueIds = { [queue.Value.Id] = new RouterQueueAssignment() }, + QueueAssignments = { [queue.Value.Id] = new RouterQueueAssignment() }, ChannelConfigurations = { ["Xbox_Chat_Channel"] = new ChannelConfiguration(capacityCostPerJob: 10) }, Labels = { |
communication-services | Job Classification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/job-classification.md | client.updateJob(new UpdateJobOptions("job1") ``` ::: zone-end++> [!NOTE] +> If the job labels, queueId, channelId or worker selectors are updated, any existing offers on the job are revoked and you'll receive a [RouterWorkerOfferRevoked](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferrevoked) event for each offer from EventGrid. The job will be re-queued and you'll receive a [RouterJobQueued](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobqueued) event. Job offers may also be revoked when a worker's total capacity is reduced, or the channel configurations are updated. |
communication-services | Calling Hero Sample | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/calling-hero-sample.md | |
communication-services | Chat Hero Sample | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/chat-hero-sample.md | Title: Chat Hero Sample description: Overview of chat hero sample using Azure Communication Services to enable developers to learn more about the inner workings of the sample and learn how to modify it.-+ -+ Last updated 06/30/2021 |
communication-services | Trusted Auth Sample | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/trusted-auth-sample.md | Title: Trusted Authentication Service Hero Sample description: Overview of trusted authentication services hero sample using Azure Communication Services.-+ -+ Last updated 06/30/2021 |
communications-gateway | Interoperability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability.md | Azure Communications Gateway provides all the features of a traditional session Azure Communications Gateway also offers metrics for monitoring your deployment. -You must provide the networking connection between Azure Communications Gateway and your core networks. For Teams Phone Mobile, you must also provide a network element that can route calls into the Microsoft Phone System for call anchoring. +You must provide the networking connection between Azure Communications Gateway and your core networks. ### Compliance with Certified SBC specifications |
communications-gateway | Mobile Control Point | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/mobile-control-point.md | Title: Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile -description: Azure Communication Gateway optionally contains Mobile Control Point for anchoring Teams Phone Mobile calls in the Microsoft Could +description: Azure Communications Gateway optionally contains Mobile Control Point for anchoring Teams Phone Mobile calls in the Microsoft Cloud |
confidential-computing | Enclave Development Oss | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/enclave-development-oss.md | For example, you can use these open-source frameworks: - [The EGo SDK](#ego) - [The Intel SGX SDK](#intel-sdk) - [The Confidential Consortium Framework (CCF)](#ccf)+- [Intel® Cloud Optimization Modules for Kubeflow](#intel-kubeflow) If you're not looking to write new application code, you can wrap a containerized application using [confidential container enablers](confidential-containers.md) The [Confidential Consortium Framework](https://www.microsoft.com/research/proje In the CCF, the decentralized ledger is made up of recorded changes to a Key-Value store that is replicated across all the network nodes. Each of these nodes runs a transaction engine that can be triggered by users of the blockchain over TLS. When you trigger an endpoint, you mutate the Key-Value store. Before the encrypted change is recorded to the decentralized ledger, it must be agreed upon by more than one node. +### Intel® Cloud Optimization Modules for Kubeflow <a id="intel-kubeflow"></a> ++The [Intel® Cloud Optimization Modules for Kubeflow](https://github.com/intel/kubeflow-intel-azure/tree/main) provide an optimized machine learning Kubeflow Pipeline using XGBoost to predict the probability of a loan default. The reference architecture uses secure and confidential [Intel® Software Guard Extensions](../../articles/confidential-computing/confidential-computing-enclaves.md) virtual machines on an [Azure Kubernetes Services (AKS) cluster](../../articles/confidential-computing/confidential-containers-enclaves.md). 
It also enables the use of [Intel® optimizations for XGBoost](https://www.intel.com/content/www/us/en/developer/tools/oneapi/optimization-for-xgboost.html) and [Intel® daal4py](https://www.intel.com/content/www/us/en/developer/articles/guide/a-daal4py-introduction-and-getting-started-guide.html) to accelerate model training and inference in a full end-to-end machine learning pipeline. ++ ## Next steps - [Attesting application enclaves](attestation.md) |
container-registry | Container Registry Api Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-api-deprecation.md | Unless noted otherwise, a feature, product, SKU, SDK, utility, or tool that's su When support is removed for an API version, you can move to a later version of the API, as long as that version remains in support. +For CLI users, we recommend using the latest version of the [Azure CLI][Azure Cloud Shell] to invoke the SDK implementations. Run `az --version` to find the version. + To avoid errors due to using a deprecated API, we recommend moving to a newer version of the ACR API. You can find a list of [supported versions here.](/azure/templates/microsoft.containerregistry/allversions) You may be consuming this API via one or more SDKs. Use a newer API version by updating to a newer version of the SDK. You can find a [list of SDKs and their latest versions here.](https://azure.github.io/azure-sdk/releases/latest/index.html?search=containerregistry) For more information, see the following articles: >* [Supported API versions](/azure/templates/microsoft.containerregistry/allversions) >* [SDKs and their latest versions](https://azure.github.io/azure-sdk/releases/latest/index.html?search=containerregistry)++<!-- LINKS - External --> +[Azure Cloud Shell]: /azure/cloud-shell/quickstart |
container-registry | Container Registry Image Lock | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-lock.md | To see the current attributes of a tag, run the following [az acr repository sho ```azurecli az acr repository show \- --name myregistry --image myimage:tag \ + --name myregistry --image myrepo:tag \ --output jsonc ``` ### Lock an image by tag -To lock the *myimage:tag* image in *myregistry*, run the following [az acr repository update][az-acr-repository-update] command: +To lock the *myrepo:tag* image in *myregistry*, run the following [az acr repository update][az-acr-repository-update] command: ```azurecli az acr repository update \- --name myregistry --image myimage:tag \ + --name myregistry --image myrepo:tag \ --write-enabled false ``` ### Lock an image by manifest digest -To lock a *myimage* image identified by manifest digest (SHA-256 hash, represented as `sha256:...`), run the following command. (To find the manifest digest associated with one or more image tags, run the [az acr manifest list-metadata][az-acr-manifest-list-metadata] command.) +To lock a *myrepo* image identified by manifest digest (SHA-256 hash, represented as `sha256:...`), run the following command. (To find the manifest digest associated with one or more image tags, run the [az acr manifest list-metadata][az-acr-manifest-list-metadata] command.) 
```azurecli az acr repository update \- --name myregistry --image myimage@sha256:123456abcdefg \ + --name myregistry --image myrepo@sha256:123456abcdefg \ --write-enabled false ``` az acr repository update \ ```bash registry="myregistry"-repo="myimage" +repo="myrepo" tag="mytag" az login az acr manifest show-metadata -r $registry -n "$repo@$digest" ### Protect an image from deletion -To allow the *myimage:tag* image to be updated but not deleted, run the following command: +To allow the *myrepo:tag* image to be updated but not deleted, run the following command: ```azurecli az acr repository update \- --name myregistry --image myimage:tag \ + --name myregistry --image myrepo:tag \ --delete-enabled false --write-enabled true ``` az acr repository update \ ## Prevent read operations on an image or repository -To prevent read (pull) operations on the *myimage:tag* image, run the following command: +To prevent read (pull) operations on the *myrepo:tag* image, run the following command: ```azurecli az acr repository update \- --name myregistry --image myimage:tag \ + --name myregistry --image myrepo:tag \ --read-enabled false ``` az acr repository update \ ## Unlock an image or repository -To restore the default behavior of the *myimage:tag* image so that it can be deleted and updated, run the following command: +To restore the default behavior of the *myrepo:tag* image so that it can be deleted and updated, run the following command: ```azurecli az acr repository update \- --name myregistry --image myimage:tag \ + --name myregistry --image myrepo:tag \ --delete-enabled true --write-enabled true ``` |
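As a sketch of how these lock states compose, here is a hypothetical helper (plain Python, not an official tool) that assembles the `az acr repository update` invocations used above, with the same placeholder names `myregistry` and `myrepo:tag` the article uses:

```python
# Hypothetical helper that builds the `az acr repository update` command line
# for a given combination of lock attributes; a value of None omits that flag.
def acr_lock_command(registry, image, *, write_enabled=None, delete_enabled=None, read_enabled=None):
    parts = ["az", "acr", "repository", "update", "--name", registry, "--image", image]
    for flag, value in (("--write-enabled", write_enabled),
                        ("--delete-enabled", delete_enabled),
                        ("--read-enabled", read_enabled)):
        if value is not None:
            parts.extend([flag, str(value).lower()])  # CLI expects "true"/"false"
    return " ".join(parts)

# Lock myrepo:tag against updates:
lock_cmd = acr_lock_command("myregistry", "myrepo:tag", write_enabled=False)
# Restore the default (deletable and writable) state:
unlock_cmd = acr_lock_command("myregistry", "myrepo:tag", delete_enabled=True, write_enabled=True)
```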
cosmos-db | Cosmos Db Vs Mongodb Atlas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/cosmos-db-vs-mongodb-atlas.md | Last updated 06/03/2023 [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] -[Azure Cosmos DB for MongoDB](../introduction.md) provides a powerful fully managed MongoDB compatible database while seamlessly integrating with the Azure ecosystem. This allows developers to reap the benefits of Cosmos DB's robust features such as global distribution, 99.999% high availability SLA, and strong security measures, while retaining the ability to use their familiar MongoDB tools and applications. Developers can remain vendor agnostic, without needing to adapt to a new set of tools or drastically change their current operations. This ensures a smooth transition and operation for MongoDB developers, making Cosmos DB for MongoDB a compelling choice for a scalable, secure, and efficient database solution for their MongoDB workloads. +[Azure Cosmos DB for MongoDB](../introduction.md) provides a powerful fully managed MongoDB compatible database while seamlessly integrating with the Azure ecosystem. This allows developers to reap the benefits of Azure Cosmos DB's robust features such as global distribution, 99.999% high availability SLA, and strong security measures, while retaining the ability to use their familiar MongoDB tools and applications. Developers can remain vendor agnostic, without needing to adapt to a new set of tools or drastically change their current operations. This ensures a smooth transition and operation for MongoDB developers, making Azure Cosmos DB for MongoDB a compelling choice for a scalable, secure, and efficient database solution for their MongoDB workloads. > [!TIP] > Want to try the Azure Cosmos DB for MongoDB with no commitment? Create an Azure Cosmos DB account using [Try Azure Cosmos DB](../try-free.md) for free. |
cosmos-db | Feature Support 42 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md | Azure Cosmos DB for MongoDB supports the following database commands. | Command | Supported | | - | |-| `[change streams](change-streams.md)` | Yes | +| `change streams` | Yes | | `delete` | Yes | | `eval` | No | | `find` | Yes | |
cosmos-db | How To Java Change Feed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-java-change-feed.md | -This how-to guide walks you through a simple Java application, which uses the Azure Cosmos DB for NoSQL to insert documents into an Azure Cosmos DB container, while maintaining a materialized view of the container using Change Feed and Change Feed Processor. The Java application communicates with the Azure Cosmos DB for NoSQL using Azure Cosmos DB Java SDK v4. +Azure Cosmos DB is a fully managed NoSQL database service provided by Microsoft. It allows you to build globally distributed and highly scalable applications with ease. This how-to guide walks you through the process of creating a Java application that uses the Azure Cosmos DB for NoSQL database and implements the Change Feed Processor for real-time data processing. The Java application communicates with the Azure Cosmos DB for NoSQL using Azure Cosmos DB Java SDK v4. > [!IMPORTANT] -> This tutorial is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4. +> This tutorial is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), [Change feed processor in Azure Cosmos DB](change-feed-processor.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4.md) for more information. 
If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4. > ## Prerequisites -* The URI and key for your Azure Cosmos DB account +* Azure Cosmos DB account: you can create one in the [Azure portal](https://portal.azure.com/), or use the [Azure Cosmos DB Emulator](../local-emulator.md). -* Maven +* Java development environment: ensure you have the Java Development Kit (JDK), version 8 or later, installed on your machine. -* Java 8 +* [Azure Cosmos DB Java SDK V4](sdk-java-v4.md): provides the necessary features to interact with Azure Cosmos DB. ## Background -The Azure Cosmos DB change feed provides an event-driven interface to trigger actions in response to document insertion. This has many uses. For example in applications, which are both read and write heavy, a chief use of change feed is to create a real-time **materialized view** of a container as it is ingesting documents. The materialized view container will hold the same data but partitioned for efficient reads, making the application both read and write efficient. +The Azure Cosmos DB change feed provides an event-driven interface to trigger actions in response to document insertion, which has many uses. The work of managing change feed events is largely taken care of by the Change Feed Processor library built into the SDK. This library is powerful enough to distribute change feed events among multiple workers, if that is desired. All you have to do is provide the library a callback. -This simple example demonstrates change feed Processor library with a single worker creating and deleting documents from a materialized view. +This simple Java application demonstrates real-time data processing with Azure Cosmos DB and the Change Feed Processor. The application inserts sample documents into a "feed container" to simulate a data stream. 
The Change Feed Processor, bound to the feed container, processes incoming changes and logs the document content. The processor automatically manages leases for parallel processing. -## Setup +## Source code -If you haven't already done so, clone the app example repo: +You can clone the SDK example repo and find this example in `SampleChangeFeedProcessor.java`: ```bash-git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-app-example.git -``` --Open a terminal in the repo directory. Build the app by running --```bash -mvn clean package +git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples.git +cd azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/ ``` ## Walkthrough -1. As a first check, you should have an Azure Cosmos DB account. Open the **Azure portal** in your browser, go to your Azure Cosmos DB account, and in the left pane navigate to **Data Explorer**. -- :::image type="content" source="media/how-to-java-change-feed/cosmos_account_empty.JPG" alt-text="Azure Cosmos DB account"::: --1. Run the app in the terminal using the following command: -- ```bash - mvn exec:java -Dexec.mainClass="com.azure.cosmos.workedappexample.SampleGroceryStore" -DACCOUNT_HOST="your-account-uri" -DACCOUNT_KEY="your-account-key" -Dexec.cleanupDaemonThreads=false - ``` --1. Press enter when you see -- ```bash - Press enter to create the grocery store inventory system... - ``` -- then return to the Azure portal Data Explorer in your browser. You'll see a database **GroceryStoreDatabase** has been added with three empty containers: -- * **InventoryContainer** - The inventory record for our example grocery store, partitioned on item ```id```, which is a UUID. 
- * **InventoryContainer-pktype** - A materialized view of the inventory record, optimized for queries over item ```type``` - * **InventoryContainer-leases** - A leases container is always needed for change feed; leases track the app's progress in reading the change feed. -- :::image type="content" source="media/how-to-java-change-feed/cosmos_account_resources_lease_empty.JPG" alt-text="Empty containers"::: --1. In the terminal, you should now see a prompt -- ```bash - Press enter to start creating the materialized view... - ``` -- Press enter. Now the following block of code will execute and initialize the change feed processor on another thread: -- ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API -- [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=InitializeCFP)] -- ```"SampleHost_1"``` is the name of the Change Feed processor worker. ```changeFeedProcessorInstance.start()``` is what actually starts the Change Feed processor. -- Return to the Azure portal Data Explorer in your browser. Under the **InventoryContainer-leases** container, select **items** to see its contents. You'll see that Change Feed Processor has populated the lease container, that is, the processor has assigned the ```SampleHost_1``` worker a lease on some partitions of the **InventoryContainer**. -- :::image type="content" source="media/how-to-java-change-feed/cosmos_leases.JPG" alt-text="Leases"::: --1. Press enter again in the terminal. This will trigger 10 documents to be inserted into **InventoryContainer**. 
Each document insertion appears in the change feed as JSON; the following callback code handles these events by mirroring the JSON documents into a materialized view: -- ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API -- [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=CFPCallback)] --1. Allow the code to run 5-10 sec. Then return to the Azure portal Data Explorer and navigate to **InventoryContainer > items**. You should see that items are being inserted into the inventory container; note the partition key (```id```). -- :::image type="content" source="media/how-to-java-change-feed/cosmos_items.JPG" alt-text="Feed container"::: +1. Configure the [`ChangeFeedProcessorOptions`](/java/api/com.azure.cosmos.models.changefeedprocessoroptions) in a Java application using Azure Cosmos DB and Azure Cosmos DB Java SDK V4. The [`ChangeFeedProcessorOptions`](/java/api/com.azure.cosmos.models.changefeedprocessoroptions) provides essential settings to control the behavior of the Change Feed Processor during data processing. + [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=ChangeFeedProcessorOptions)] -1. Now, in Data Explorer navigate to **InventoryContainer-pktype > items**. This is the materialized view - the items in this container mirror **InventoryContainer** because they were inserted programmatically by change feed. Note the partition key (```type```). So this materialized view is optimized for queries filtering over ```type```, which would be inefficient on **InventoryContainer** because it's partitioned on ```id```. +2. Initialize [`ChangeFeedProcessor`](/java/api/com.azure.cosmos.changefeedprocessor) with relevant configurations, including the host name, feed container, lease container, and data handling logic. 
The [`start()`](/java/api/com.azure.cosmos.changefeedprocessor#com-azure-cosmos-changefeedprocessor-start()) method initiates the data processing, enabling concurrent and real-time processing of incoming data changes from the feed container. + [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=StartChangeFeedProcessor)] - :::image type="content" source="media/how-to-java-change-feed/cosmos_materializedview2.JPG" alt-text="Screenshot shows the Data Explorer page for an Azure Cosmos DB account with Items selected."::: +3. Specify the delegate that handles incoming data changes by using the `handleChanges()` method. The method processes the received JsonNode documents from the change feed. As a developer, you have two options for handling the JsonNode document provided to you by the change feed. One option is to operate on the document in the form of a JsonNode, which is useful especially if you don't have a single uniform data model for all documents. The second option is to transform the JsonNode to a POJO with the same structure as the JsonNode, and then operate on the POJO. + [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=Delegate)] -1. We're going to delete a document from both **InventoryContainer** and **InventoryContainer-pktype** using just a single ```upsertItem()``` call. First, take a look at Azure portal Data Explorer. We'll delete the document for which ```/type == "plums"```; it's encircled in red below +4. Build and run the Java application. The application starts the Change Feed Processor, inserts sample documents into the feed container, and processes the incoming changes. 
- :::image type="content" source="media/how-to-java-change-feed/cosmos_materializedview-emph-todelete.JPG" alt-text="Screenshot shows the Data Explorer page for an Azure Cosmos DB account with a particular item ID selected."::: +## Conclusion - Hit enter again to call the function ```deleteDocument()``` in the example code. This function, shown below, upserts a new version of the document with ```/ttl == 5```, which sets document Time-To-Live (TTL) to 5 sec. - - ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API +In this guide, you learned how to create a Java application with the [Azure Cosmos DB Java SDK V4](sdk-java-v4.md) that uses the Azure Cosmos DB for NoSQL database and the Change Feed Processor for real-time data processing. You can extend this application to handle more complex use cases and build robust, scalable, and globally distributed applications using Azure Cosmos DB. - [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=DeleteWithTTL)] +## Additional resources - The change feed ```feedPollDelay``` is set to 100 ms; therefore, change feed responds to this update almost instantly and calls ```updateInventoryTypeMaterializedView()``` shown above. That last function call will upsert the new document with TTL of 5 sec into **InventoryContainer-pktype**. +* [Azure Cosmos DB Java SDK V4](sdk-java-v4.md) +* [Additional samples on GitHub](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples) - The effect is that after about 5 seconds, the document will expire and be deleted from both containers. +## Next steps - This procedure is necessary because change feed only issues events on item insertion or update, not on item deletion. +You can now proceed to learn more about the change feed estimator in the following article: -1. Press enter one more time to close the program and clean up its resources. 
+* [Use the change feed estimator](how-to-use-change-feed-estimator.md) |
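The delegate pattern the walkthrough describes, where the processor hands each batch of changed documents to a callback, can be sketched in plain Python (illustrative only, not the Java SDK; the documents and the in-memory view are hypothetical):

```python
# Illustrative sketch of a change feed delegate: the processor invokes the
# callback with each batch of changed documents, and the callback maintains a
# simple in-memory view keyed by "type" instead of "id" (the materialized-view
# idea from the earlier version of this article).
materialized_view = {}  # type -> list of documents seen for that type

def handle_changes(docs):
    """Callback invoked with each batch of changed documents from the feed."""
    for doc in docs:
        materialized_view.setdefault(doc["type"], []).append(doc)

# Simulate two change feed batches arriving from the feed container.
handle_changes([{"id": "1", "type": "plums"}, {"id": "2", "type": "apples"}])
handle_changes([{"id": "3", "type": "plums"}])
```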
cosmos-db | Abs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/abs.md | The following example shows the results of using this function on three differen ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_NUMBER`](is-number.md) |
cosmos-db | Acos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/acos.md | The following example calculates the arccosine of the specified values using the ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`COS`](cos.md) |
cosmos-db | Array Concat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-concat.md | The following example shows how to concatenate two arrays. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [Introduction to Azure Cosmos DB](../../introduction.md) - [`ARRAY_SLICE`](array-slice.md) |
cosmos-db | Array Contains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-contains.md | The following example illustrates how to check for specific values or objects in ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ARRAY_CONCAT`](array-concat.md) |
cosmos-db | Array Length | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-length.md | The following example illustrates how to get the length of an array using the fu ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ARRAY_SLICE`](array-slice.md) |
cosmos-db | Array Slice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-slice.md | The following example shows how to get different slices of an array using the fu ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ARRAY_LENGTH`](array-length.md) |
cosmos-db | Asin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/asin.md | The following example calculates the arcsine of the specified angle using the fu ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`SIN`](sin.md) |
cosmos-db | Atan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/atan.md | The following example calculates the arctangent of the specified angle using the ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`TAN`](tan.md) - [`ATN2`](atn2.md) |
cosmos-db | Atn2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/atn2.md | The following example calculates the arctangent for the specified `x` and `y` co ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`TAN`](tan.md) - [`ATAN`](atan.md) |
cosmos-db | Average | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/average.md | In this example, the function is used to average the values of a specific field ## Next steps -- [System functions in Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`SUM`](sum.md) |
cosmos-db | Ceiling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/ceiling.md | The following example shows positive numeric, negative, and zero values evaluate ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`FLOOR`](floor.md) |
cosmos-db | Concat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/concat.md | This example uses the function to select two expressions from the item. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`CONTAINS`](contains.md) |
cosmos-db | Contains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/contains.md | The following example checks if various static substrings exist in a string. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`CONCAT`](concat.md) |
cosmos-db | Cos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/cos.md | The following example calculates the cosine of the specified angle using the fun ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`SIN`](sin.md) |
cosmos-db | Cot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/cot.md | The following example calculates the cotangent of the specified angle using the ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`TAN`](tan.md) |
cosmos-db | Count | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/count.md | In this example, the function counts the number of times the specified scalar fi ## Next steps -- [System functions in Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`AVG`](average.md) |
cosmos-db | Datetimeadd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimeadd.md | The following example adds various values (one year, one month, one day, one hou ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`DateTimeBin`](datetimebin.md) |
cosmos-db | Datetimebin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimebin.md | The following example bins the date **January 8, 2021** at **18:35 UTC** by vari ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`DateTimeAdd`](datetimeadd.md) |
cosmos-db | Datetimediff | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimediff.md | The following examples compare **February 4, 2019 16:00 UTC** and **March 5, 201 ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`DateTimeBin`](datetimebin.md) |
cosmos-db | Datetimefromparts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimefromparts.md | The following example uses various combinations of the arguments to create date ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`DateTimePart`](datetimepart.md) |
cosmos-db | Datetimepart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimepart.md | The following example returns various parts of the date and time **May 29, 2016 ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`DateTimeFromParts`](datetimefromparts.md) |
cosmos-db | Datetimetoticks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimetoticks.md | The following example measures the ticks since the date and time **May 19, 2015 ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`DateTimeToTimestamp`](datetimetotimestamp.md) |
cosmos-db | Datetimetotimestamp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimetotimestamp.md | The following example converts the date and time **May 19, 2015 12:00 UTC** to a ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`DateTimeToTicks`](datetimetoticks.md) |
cosmos-db | Degrees | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/degrees.md | The following example returns the degrees for various radian values. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`RADIANS`](radians.md) |
cosmos-db | Endswith | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/endswith.md | The following example checks if the string `abc` ends with `b` or `bC`. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`STARTSWITH`](startswith.md) |
cosmos-db | Exp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/exp.md | The following example returns the exponential value for various numeric inputs. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`LOG`](log.md) |
cosmos-db | Floor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/floor.md | The following example shows positive numeric, negative, and zero values evaluate ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [Introduction to Azure Cosmos DB](../../introduction.md) |
cosmos-db | Getcurrentdatetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrentdatetime.md | The following example shows how to get the current UTC date and time string. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`GetCurrentDateTimeStatic`](getcurrentdatetimestatic.md) |
cosmos-db | Getcurrentticks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrentticks.md | The following example returns the current time measured in ticks: ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`GetCurrentTicksStatic`](getcurrentticksstatic.md) |
cosmos-db | Getcurrenttimestamp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrenttimestamp.md | The following example shows how to get the current timestamp. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`GetCurrentTimestampStatic`](getcurrenttimestampstatic.md) |
cosmos-db | Index Of | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/index-of.md | The following example returns the index of various substrings inside the larger ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`SUBSTRING`](substring.md) |
cosmos-db | Is Array | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-array.md | The following example checks objects of various types using the function. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_OBJECT`](is-object.md) |
cosmos-db | Is Bool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-bool.md | The following example checks objects of various types using the function. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_NUMBER`](is-number.md) |
cosmos-db | Is Defined | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-defined.md | The following example checks for the presence of a property within the specified ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_NULL`](is-null.md) |
cosmos-db | Is Null | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-null.md | The following example checks objects of various types using the function. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_OBJECT`](is-object.md) |
cosmos-db | Is Number | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-number.md | The following example checks various values to see if they're a number. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_FINITE_NUMBER`](is-finite-number.md) |
cosmos-db | Is Object | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-object.md | The following example checks various values to see if they're an object. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_PRIMITIVE`](is-primitive.md) |
cosmos-db | Is Primitive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-primitive.md | The following example checks various values to see if they're a primitive. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_OBJECT`](is-object.md) |
cosmos-db | Is String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-string.md | The following example checks various values to see if they're a string. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_NUMBER`](is-number.md) |
cosmos-db | Left | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/left.md | The following example returns the left part of the string `Microsoft` for variou ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`RIGHT`](right.md) |
cosmos-db | Length | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/length.md | The following example returns the length of a static string. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`REVERSE`](reverse.md) |
cosmos-db | Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/log.md | The following example returns the logarithm value of various values. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`LOG10`](log10.md) |
cosmos-db | Log10 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/log10.md | The following example returns the logarithm value of various values. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`LOG`](log.md) |
cosmos-db | Lower | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/lower.md | The following example shows how to use the function to modify various strings. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`UPPER`](upper.md) |
cosmos-db | Ltrim | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/ltrim.md | The following example shows how to use this function with various parameters ins ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`RTRIM`](rtrim.md) |
cosmos-db | Max | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/max.md | For this example, the `MAX` function is used in a query that includes the numeri ## Next steps -- [System functions in Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`MIN`](min.md) |
cosmos-db | Min | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/min.md | For this example, the `MIN` function is used in a query that includes the numeri ## Next steps -- [System functions in Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`MAX`](max.md) |
cosmos-db | Pi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/pi.md | The following example returns the constant value of Pi. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`SQRT`](sqrt.md) |
cosmos-db | Power | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/power.md | The following example demonstrates raising a number to various powers. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`SQRT`](sqrt.md) |
cosmos-db | Radians | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/radians.md | The following example returns the radians for various degree values. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`DEGREES`](degrees.md) |
cosmos-db | Rand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/rand.md | The following example returns randomly generated numeric values. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_NUMBER`](is-number.md) |
cosmos-db | Regexmatch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/regexmatch.md | This example uses a regular expression match as a filter to return a subset of i ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_STRING`](is-string.md) |
cosmos-db | Replace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/replace.md | The following example shows how to use this function to replace static values. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`SUBSTRING`](substring.md) |
cosmos-db | Replicate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/replicate.md | The following example shows how to use this function to build a repeating string ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`REPLACE`](replace.md) |
cosmos-db | Reverse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/reverse.md | The following example shows how to use this function to reverse multiple strings ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`LENGTH`](length.md) |
cosmos-db | Right | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/right.md | The following example returns the right part of the string `Microsoft` for vario ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`LEFT`](left.md) |
cosmos-db | Round | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/round.md | The following example rounds positive and negative numbers to the nearest intege ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`POWER`](power.md) |
cosmos-db | Rtrim | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/rtrim.md | The following example shows how to use this function with various parameters ins ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`LTRIM`](ltrim.md) |
cosmos-db | Sign | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sign.md | The following example returns the sign of various numbers from -2 to 2. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ABS`](abs.md) |
cosmos-db | Sin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sin.md | The following example calculates the sine of the specified angle using the funct ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`COS`](cos.md) |
cosmos-db | Sqrt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sqrt.md | The following example returns the square roots of various numeric values. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`POWER`](power.md) |
cosmos-db | Square | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/square.md | The following example returns the squares of various numbers. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`SQRT`](sqrt.md) - [`POWER`](power.md) |
cosmos-db | St Area | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-area.md | The following example shows how to return the area of a polygon. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ST_WITHIN`](st-within.md) |
cosmos-db | St Distance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-distance.md | The example shows how to use the function as a filter to return items within a s ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ST_INTERSECTS`](st-intersects.md) |
cosmos-db | St Intersects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-intersects.md | The following example shows how to find if two polygons intersect. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ST_WITHIN`](st-within.md) |
cosmos-db | St Isvalid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-isvalid.md | The following example shows how to check the validity of multiple objects. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ST_ISVALIDDETAILED`](st-isvaliddetailed.md) |
cosmos-db | St Isvaliddetailed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-isvaliddetailed.md | The following example shows how to check the validity of multiple objects. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ST_ISVALID`](st-isvalid.md) |
cosmos-db | St Within | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-within.md | The following example shows how to find if a **Point** is within a **Polygon**. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ST_INTERSECT`](st-intersects.md) |
cosmos-db | Startswith | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/startswith.md | The following example checks if the string `abc` starts with `b` or `ab`. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`ENDSWITH`](endswith.md) |
cosmos-db | Stringequals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringequals.md | The following example checks if "abc" matches "abc" and if "abc" matches "ABC." ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`SUBSTRING`](substring.md) |
cosmos-db | Stringtoarray | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtoarray.md | The following example illustrates how this function works with various inputs. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`StringToObject`](stringtoobject.md) |
cosmos-db | Stringtoboolean | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtoboolean.md | The following example illustrates how this function works with various data type ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`StringToNumber`](stringtonumber.md) |
cosmos-db | Stringtonull | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtonull.md | The following example illustrates how this function works with various data type ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`StringToBoolean`](stringtoboolean.md) |
cosmos-db | Stringtonumber | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtonumber.md | The following example illustrates how this function works with various data type ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`StringToBoolean`](stringtoboolean.md) |
cosmos-db | Stringtoobject | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtoobject.md | The following example illustrates how this function works with various inputs. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`StringToArray`](stringtoarray.md) |
cosmos-db | Substring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/substring.md | The following example returns substrings with various lengths and starting posit ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`StringEquals`](stringequals.md) |
cosmos-db | Sum | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sum.md | The `SUM` function is used to sum the values of the `quantity` field, when it ex ## Next steps -- [System functions in Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`AVG`](average.md) |
cosmos-db | Tan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/tan.md | The following example calculates the tangent of the specified angle using the ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`COT`](cot.md) |
cosmos-db | Tickstodatetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/tickstodatetime.md | The following example converts the ticks to a date and time value. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`TimestampToDateTime`](timestamptodatetime.md) |
cosmos-db | Timestamptodatetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/timestamptodatetime.md | The following example converts the timestamp to a date and time value. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`TicksToDateTime`](tickstodatetime.md) |
cosmos-db | Tostring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/tostring.md | This example converts multiple scalar and object values to a string. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`IS_OBJECT`](is-object.md) |
cosmos-db | Trim | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/trim.md | This example illustrates various ways to trim a string expression. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`TRUNC`](trunc.md) |
cosmos-db | Trunc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/trunc.md | This example illustrates various ways to truncate a number to the closest intege ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`TRIM`](trim.md) |
cosmos-db | Upper | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/upper.md | The following example shows how to use the function to modify various strings. ## Next steps -- [System functions Azure Cosmos DB](system-functions.yml)+- [System functions](system-functions.yml) - [`LOWER`](lower.md) |
cost-management-billing | Capabilities Allocation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-allocation.md | |
cost-management-billing | Capabilities Analysis Showback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-analysis-showback.md | |
cost-management-billing | Capabilities Anomalies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-anomalies.md | |
cost-management-billing | Capabilities Budgets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-budgets.md | |
cost-management-billing | Capabilities Chargeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-chargeback.md | |
cost-management-billing | Capabilities Commitment Discounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-commitment-discounts.md | |
cost-management-billing | Capabilities Culture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-culture.md | |
cost-management-billing | Capabilities Education | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-education.md | |
cost-management-billing | Capabilities Efficiency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-efficiency.md | |
cost-management-billing | Capabilities Forecasting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-forecasting.md | |
cost-management-billing | Capabilities Frameworks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-frameworks.md | |
cost-management-billing | Capabilities Ingestion Normalization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-ingestion-normalization.md | |
cost-management-billing | Capabilities Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-onboarding.md | |
cost-management-billing | Capabilities Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-policy.md | |
cost-management-billing | Capabilities Shared Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-shared-cost.md | |
cost-management-billing | Capabilities Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-structure.md | |
cost-management-billing | Capabilities Unit Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-unit-costs.md | |
cost-management-billing | Capabilities Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-workloads.md | |
cost-management-billing | Conduct Finops Iteration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/conduct-finops-iteration.md | |
cost-management-billing | Overview Finops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/overview-finops.md | |
cost-management-billing | Direct Ea Administration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md | An Azure EA account is an organizational unit in the Azure portal. In the Azure As an EA admin, you can allow account owners in your organization to create subscriptions based on the EA Dev/Test offer. To do so, select the **Dev/Test** option in the Edit account window. After you've selected the Dev/Test option, let the account owner know so that they can create EA Dev/Test subscriptions needed for their teams of Dev/Test subscribers. The offer enables active Visual Studio subscribers to run development and testing workloads on Azure at special Dev/Test rates. It provides access to the full gallery of Dev/Test images including Windows 8.1 and Windows 10. +>[!NOTE] +> The Enterprise Dev/Test Offer isn't available for Azure Government customers. If you're an Azure Government customer, you can't enable the Dev/Test option. + ### To set up the Enterprise Dev/Test offer 1. Sign in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). As an EA admin, you can allow account owners in your organization to create subs When a user is added as an account owner, any Azure subscriptions associated with the user that are based on either the pay-as-you-go Dev/Test offer or the monthly credit offers for Visual Studio subscribers get converted to the [Enterprise Dev/Test](https://azure.microsoft.com/pricing/offers/ms-azr-0148p/) offer. Subscriptions based on other offer types, such as pay-as-you-go, associated with the account owner get converted to Microsoft Azure Enterprise offers. -Currently, the Enterprise Dev/Test Offer isn't applicable to Azure Gov customers. - ## Create a subscription Account owners can view and manage subscriptions. 
You can use subscriptions to give teams in your organization access to development environments and projects. For example: |
cost-management-billing | Direct Ea Azure Usage Charges Invoices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md | Title: View your Azure usage summary details and download reports for EA enrollm description: This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 07/07/2023 Last updated : 07/14/2023 To review and verify the charges on your invoice, you must be an Enterprise Admi ## Review usage charges -To view detailed usage for specific accounts, download the usage detail report: +To view detailed usage for specific accounts, download the usage detail report. Usage files may be large. If you prefer, you can use the exports feature to get the same data exported to an Azure Storage account. For more information, see [Export usage details to a storage account](../costs/tutorial-export-acm-data.md). As an enterprise administrator: The following table lists the terms and descriptions shown on the Usage + Charge | **Term** | **Description** | | | |-| Month | The Usage month | -| Charges against Credits | The credit applied during that specific period. | -| Service Overage | Your organization's usage charges exceed your credit balance | -| Billed Separately | The services your organization used aren't covered by the credit. | -| Azure Marketplace | Azure Marketplace purchases and usage aren't covered by your organization's credit and are billed separately | +| Month | The month when consumption and purchases were made. | +| Charges against Credits | The credit applied during the specific period. | +| Service Overage | Your organization's usage charges exceed your credit balance. | +| Billed Separately | Charges for services that aren't eligible to use available credit. 
| +| Azure Marketplace | Azure Marketplace charges that are billed separately. | | Total Charges | Charges against credits + Service Overage + Billed Separately + Azure Marketplace | | Refunded Overage credits | Sum of refunded overage amount. The following section describes it further. | -### Refunded overage credits +### Understand refunded overage credits -In the past, when a reservation refund was required, Microsoft manually reviewed closed bills - sometimes going back multiple years. The manual review sometime led to issues. To resolve the issues, the refund review process is changing to a forward-looking review that doesn't require reviewing closed bills. +This section explains how the previous refunded overage credits process worked and how the new process works. -The new review process is being deployed in phases. The current phase begins on May 1, 2023. In this phase, Microsoft is addressing only refunds that result in an overage. For example, an overage that generates a credit note. -To better understand the change, let's look at a detailed example of the old process. Assume that a reservation was bought in February 2022 with an overage credit (no Azure prepayment or Monetary Commitment was involved). You decided to return the reservation in August 2022. Refunds use the same payment method as the purchase. So, you received a credit note in August 2022 for the February 2022 billing period. However, the credit amount reflects the month of purchase. In this example, that's February 2022. The refund results in the change to the service overage and total charges. +Now, with the new process, a refund is applied as a credit to prevent those problems. 
The refund doesn't cause any change to a closed billing period. A refund is reimbursed to the same payment method that you used when you made the purchase. If the refund results from an overage, then a credit note is issued to you. If a refund goes toward Azure prepayment (also called Monetary Commitment (MC)), then the overage portion results in issuing a credit note. Azure prepayment is applied as an adjustment. -Here's how the example used to appear in the Azure portal. +> [!NOTE] +> The reservation refund applies only to purchase refunds completed in previously closed billing periods. There's no change to the behavior for refunds completed in an open billing period. When a refund is completed before the purchase is invoiced, then the refund is reimbursed as part of the purchase and noted on the invoice. ++#### Overage refund examples ++Let's look at a detailed overage refund example with the previous process. Assume that a reservation was bought in February 2022 with an overage credit (no Azure prepayment or Monetary Commitment was involved). You decided to return the reservation in August 2022. Refunds use the same payment method as the purchase. So, you received a credit note in August 2022 for the February 2022 billing period. However, the credit amount reflects the month of purchase. In this example, that's February 2022. The refund results in the change to the service overage and total charges. ++Here's how a refund example appeared in the Azure portal for the previous refund process. The following points explain the refund. :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/old-view-usage-charges.png" alt-text="Screenshot showing the old view for Usage + charges." lightbox="./media/direct-ea-azure-usage-charges-invoices/old-view-usage-charges.png" ::: -- After the reservation return in August 2022, you're entitled to $400 credit. 
You receive the credit note for the refund amount.+- After the reservation return in August 2022, you get $400 as a credit note for the refund amount. - The service overage is changed from $1947.03 to $1547.03. The total charges change from $1947.83 to $1547.83. However, the changes don't reconcile with the usage details file. In this example, that's $1947.83. Also, the invoice for February 2022 didn't reconcile. - Return line items appear for the month of return. For example, August 2022 in usage details file. Here's how the example now appears in the Azure portal. - There are no changes to the February 2022 service overage or total charges after the refund. You're able to reconcile the refund as you review the usage details file and your invoice. - Return line items continue to appear in the month of return. For example, August 2022, because there's no behavior or process change. +### Purchase refund with overage and monetary credit examples ++In the previous refund process, assume that you bought a reservation in June 2022 using overage and monetary credit. Later, you returned some reservations in July 2022 after you received your invoice. ++#### Example of the previous refund process ++Refunds use the same payment methods used for the purchase. In July 2022, your monetary credit is adjusted with the relative credit amount. In August 2022, you also receive a credit for the overage portion of the refund. The credit amount and adjustment appear in the Azure portal for June 2022. The adjustment for the month of return (June 2022) results in a change to the service overage. You can view the total charges on the **Usage + charges** page. You can see the **Credit applied towards charges** value shown on the **Credits + Commitments** page. +++- After the reservation return was completed for July 2022, you're entitled to $200 of credits. You receive the credit note for the refund amount of $100. The other $100 goes back to monetary credit under **Adjustments**.
+- The adjustment changes the service overage for June 2022. The adjustment also changes the total charges. They no longer reconcile with the invoice received for June 2022. And, it changes the credits applied for charges in June 2022. +- The return line items are shown for the return month (July 2022) in the usage details file. ++#### Example of the current refund process ++In the current refund process, the purchase month totals for overage, total charges, and **Credits applied towards Charges** don't change (for example, June 2022). Credits given for that month are shown under **Refunded overage credits**. Adjustments are shown for the refund month on the **Credits + Commitments** page. +++- After the reservation return completed for July 2022, you're entitled to a $100 credit. You receive the credit note for the refund amount. You can view it on the **Invoices** page. The same credit appears under **Refunded overage credits** on the **Usage & Charges** page. The $100 of adjustments are shown on the **Credits + Commitments** page. There's no change to how adjustments appear on the **Credits + Commitments** page. +- There are no changes to the June 2022 service overage, total charges, and Credits applied towards charges after the refund. You can reconcile your totals with the usage details file and with the invoice. +- The return line items continue to appear for the month of return (for example, July 2022) in the usage details file. There's no behavior or process change. ++ >[!IMPORTANT]-> - Refunds continue to appear for the purchase month for Azure prepayment and when there's a mix of overage and Azure prepayment. -> - New behaviour (refunds to reflect in the month of return) will be enabled for MC involved scenarios tentatively by June 2023. -> - There's no change to the process when there are: -> - Adjustment charges -> - Back-dated credits -> - Discounts. -> The preceding items result in bill regeneration. The regenerated bill shows the new refund billing process.
+> When there are adjustment charges, back-dated credits, or discounts for the account that result in an invoice getting rebilled, it resets the refund behavior. Refunds are shown in the rebilled invoice for the rebilled period. #### Common refunded overage credits questions Answer: The `Refunded Overage Credits` attribute applies to reservation and savi Question: Are `Refunded Overage credits` values included in total charges?<br> Answer: No, it's a standalone field that shows the sum of credits received for the month. +Question: Does the new behavior apply to all refunds that happened previously?<br> +Answer: No, it only applies to overage refunds that happen in the future. Refunded overage credits appear as `0` for previous months. ++Question: Why do I see some overage refunds going back to the purchase month?<br> +Answer: If the refund is a combination of overage and monetary credit, then refunds that were completed by August 1 still go back to the purchase month. ++Question: Why do I see some refunds that aren't included in *Refunded Overage credits*?<br> +Answer: If the refund happened before the purchase is invoiced, then it appears on the invoice and it reduces the purchase charge. The invoice date cut-off is the fifth day of every month (UTC 12:00 am). Any refunds that happen between the first and fifth day are considered to be on the previous month's invoice because the purchase isn't invoiced yet. + Question: How do I reconcile the amount shown in **Refunded Overage Credits**?<br> Answer: 1. In the Azure portal, navigate to **Reservation Transactions**. Answer: :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/reservation-transactions.png" alt-text="Screenshot showing the Reservation transactions page with refund amounts." lightbox="./media/direct-ea-azure-usage-charges-invoices/reservation-transactions.png" ::: 3. Navigate to **Usage + charges** and look at the value shown in **Refunded Overage Credits**.
The value is the sum of all reservation and savings plan refunds that happened in the month. :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/refunded-overage-credits.png" alt-text="Screenshot showing the refunded overage credits values." lightbox="./media/direct-ea-azure-usage-charges-invoices/refunded-overage-credits.png" :::- > [!NOTE] - > Savings plan refunds are not shown in **Reservation Transactions**. However, **Refunded Overage Credits** shows the sum of reservations and savings plans. ++Question: How do I reconcile the reservation-related credits provided as *adjustments*?<br> +Answer: +1. Go to the **Reservation Transactions** page and look in the **MC** column at the refund amount for the month you want to reconcile. + :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/reservation-transactions-refund.png" alt-text="Screenshot showing the MC refund amount for Reservation transactions." lightbox="./media/direct-ea-azure-usage-charges-invoices/reservation-transactions-refund.png" ::: +1. Navigate to the **Credits + Commitments** page and look at the value shown in **Adjustments**. It shows all refunds applied to the **MC** balance for the month. + :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/credits-commitments-refund.png" alt-text="Screenshot showing the MC refund amount Credits + Commitments." lightbox="./media/direct-ea-azure-usage-charges-invoices/credits-commitments-refund.png" ::: +> [!NOTE] +> Savings plan refunds aren't shown on the **Reservation Transactions** page. However, **Refunded Overage Credits** shows the sum of reservation and savings plan refunds. ## Download usage charges CSV file |
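The split between a credit note and a monetary-commitment adjustment described in the examples above can be sketched as simple arithmetic. The following Python snippet is illustrative only (it isn't an Azure API, and the proportional split across payment methods is an assumption for illustration); it reproduces the $200 example that splits into a $100 credit note and a $100 MC adjustment:

```python
# Illustrative arithmetic only -- not an Azure API. Sketches how a refund for a
# reservation bought with a mix of overage and monetary commitment (MC) could
# split into an overage credit note and an MC adjustment, assuming the split is
# proportional to the original payment methods (an assumption for illustration).

def split_refund(refund_total, paid_with_overage, paid_with_mc):
    """Split a refund across the payment methods used for the purchase."""
    purchase_total = paid_with_overage + paid_with_mc
    overage_credit = round(refund_total * paid_with_overage / purchase_total, 2)
    mc_adjustment = round(refund_total - overage_credit, 2)
    return overage_credit, mc_adjustment

# The $200 example above: half paid with overage, half with monetary credit.
credit_note, adjustment = split_refund(200.00, paid_with_overage=100.00, paid_with_mc=100.00)
print(credit_note, adjustment)  # 100.0 100.0
```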
data-factory | How To Change Data Capture Resource With Schema Evolution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource-with-schema-evolution.md | + + Title: Capture changed data with schema evolution using change data capture resource +description: This tutorial provides step-by-step instructions on how to capture changed data with schema evolution from Azure SQL DB to Delta sink using a change data capture resource. +++++++ Last updated : 07/21/2023+++# How to capture changed data with schema evolution from Azure SQL DB to Delta sink using a Change Data Capture (CDC) resource ++In this tutorial, you use the Azure Data Factory user interface (UI) to create a new Change Data Capture (CDC) resource that picks up changed data from an Azure SQL Database source to Delta Lake stored in Azure Data Lake Storage (ADLS) Gen2 in real-time, showcasing support for schema evolution. The configuration pattern in this tutorial can be modified and expanded upon. ++In this tutorial, you follow these steps: +* Create a Change Data Capture resource. +* Make dynamic schema changes to the source table. +* Validate schema changes at the target Delta sink. ++## Prerequisites ++* **Azure subscription.** If you don't have an Azure subscription, create a free Azure account before you begin. +* **Azure SQL Database.** You use Azure SQL DB as a source data store. If you don't have an Azure SQL DB, create one in the Azure portal first before continuing the tutorial. +* **Azure storage account.** You use Delta Lake stored in ADLS Gen2 storage as a target data store. If you don't have a storage account, see Create an Azure storage account for steps to create one. ++## Create a change data capture artifact +++1. Navigate to the **Author** blade in your data factory. You see a new top-level artifact below **Pipelines** called **Change Data Capture (preview)**.
+ + :::image type="content" source="media/adf-cdc/change-data-capture-resource-100.png" alt-text="Screenshot of new top level artifact shown under Factory resources panel." lightbox="media/adf-cdc/change-data-capture-resource-100.png"::: + +2. To create a new **Change Data Capture**, hover over **Change Data Capture (preview)** until three dots appear. Select **Change Data Capture (preview) Actions**. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-101.png" alt-text="Screenshot of Change Data Capture (preview) Actions after hovering on the new top-level artifact." lightbox="media/adf-cdc/change-data-capture-resource-101.png"::: ++3. Select **New CDC (preview)**. This opens a flyout to begin the guided process. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-102.png" alt-text="Screenshot of a list of Change Data Capture actions." lightbox="media/adf-cdc/change-data-capture-resource-102.png"::: + +4. You're prompted to name your CDC resource. By default, the name is set to "adfcdc" and increments by 1 for each new resource. You can replace this default name with your own. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-103.png" alt-text="Screenshot of the text box to update the name of the resource."::: + +5. Use the drop-down selection list to choose your data source. For this tutorial, we use **Azure SQL Database**. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-104.png" alt-text="Screenshot of the guided process flyout with source options in a drop-down selection menu."::: ++6. You're then prompted to select a linked service. Create a new linked service or select an existing one. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-105.png" alt-text="Screenshot of the selection box to choose or create a new linked service."::: ++7.
Once the linked service is selected, you're prompted to select the source table(s). Use the checkbox to select the source table(s), then select the **Incremental column** using the drop-down selection. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-106.png" alt-text="Screenshot of the selection box to choose source table(s) and selection of incremental column."::: ++> [!NOTE] +> Only table(s) with supported incremental column data types are listed here. ++> [!NOTE] +> To enable Change Data Capture (CDC) with schema evolution for an Azure SQL Database source, choose tables with a watermark column rather than tables with native SQL CDC enabled. ++8. Once you've selected your source table(s), select **Continue** to set your data target. + + :::image type="content" source="media/adf-cdc/change-data-capture-resource-107.png" alt-text="Screenshot of the continue button in the guided process to proceed to select data targets."::: ++9. Then, select a **Target type** using the drop-down selection. For this tutorial, we select **Delta**. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-108.png" alt-text="Screenshot of a drop-down selection menu of all data target types."::: ++10. You're prompted to select a linked service. Create a new linked service or select an existing one. + + :::image type="content" source="media/adf-cdc/change-data-capture-resource-109.png" alt-text="Screenshot of the selection box to choose or create a new linked service to your data target."::: ++11. Use the **Browse** button to select your target data folder.
+ + :::image type="content" source="media/adf-cdc/change-data-capture-resource-110.png" alt-text="Screenshot of a folder icon to browse for a folder path."::: ++> [!NOTE] +> You can either use the **Browse** button under **Target base path**, which auto-populates the browse path for all the new table(s) selected for the source, or use the **Browse** button outside it to individually select the folder path. ++12. Once you've selected a folder path, select the **Continue** button. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-111.png" alt-text="Screenshot of the continue button in the guided process to proceed to next step."::: ++13. You automatically land in a new change data capture tab, where you can configure your new resource. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-112.png" alt-text="Screenshot of the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-112.png"::: + +14. A new mapping is automatically created for you. You can update the **Source** and **Target** selections for your mapping by using the drop-down selection lists. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-113.png" alt-text="Screenshot of the source to target mapping in the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-113.png"::: ++15. Once you've selected your tables, you should see that their columns are auto mapped by default with the **Auto map** toggle on. Auto map automatically maps the columns by name in the sink, picks up new columns when the source schema evolves, and flows them to the supported sink types. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-114.png" alt-text="Screenshot of default Auto map toggle set to on." lightbox="media/adf-cdc/change-data-capture-resource-114.png"::: ++> [!NOTE] +> Schema evolution works only with the **Auto map** toggle set to on.
To learn how to edit column mappings or include transformations, see [Capture changed data with a change data capture resource](how-to-change-data-capture-resource.md). ++16. You can select the **Keys** link and choose the key column to be used for tracking delete operations. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-115.png" alt-text="Screenshot of Keys link to enable Keys column selection." lightbox="media/adf-cdc/change-data-capture-resource-115.png"::: ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-116.png" alt-text="Screenshot of selecting a Keys column for the selected source."::: ++17. Once your mappings are complete, set your CDC latency using the **Set Latency** button. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-117.png" alt-text="Screenshot of the set frequency button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-117.png"::: + +18. Select the latency of your CDC and select **Apply** to make the changes. By default, it's set to **15 minutes**. For this tutorial, we select the **Real-time** latency. Real-time latency continuously picks up changes in your source data at intervals of less than 1 minute. ++ For other latencies, for example if you select 15 minutes, your change data capture processes your source data every 15 minutes and picks up any data that changed since the last processed time. +++19. Once everything has been finalized, select **Publish All** to publish your changes. +++> [!NOTE] +> If you don't publish your changes, you won't be able to start your CDC resource. The start button will be grayed out. ++20. Select **Start** to start running your **Change Data Capture**. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-120.png" alt-text="Screenshot of the start button at the top of the canvas."
lightbox="media/adf-cdc/change-data-capture-resource-120.png"::: ++21. Using the monitoring page, you can see how many changes (insert/update/delete) were read and written, and other diagnostic information. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-121.png" alt-text="Screenshot of the monitoring page of a selected change data capture." lightbox="media/adf-cdc/change-data-capture-resource-121.png"::: ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-122.png" alt-text="Screenshot of the monitoring page of a selected change data capture with detailed view." lightbox="media/adf-cdc/change-data-capture-resource-122.png"::: ++22. You can validate that the change data has landed in the Delta Lake stored in Azure Data Lake Storage (ADLS) Gen2 in Delta format. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-123.png" alt-text="Screenshot of the target delta folder." lightbox="media/adf-cdc/change-data-capture-resource-123.png"::: + +23. You can validate the schema of the change data that has landed. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-124.png" alt-text="Screenshot of actual delta file." lightbox="media/adf-cdc/change-data-capture-resource-124.png"::: ++## Make dynamic schema changes at source ++1. Now you can proceed to make schema-level changes to the source tables. For this tutorial, we use the `ALTER TABLE` T-SQL command to add a new column to the source table. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-125.png" alt-text="Screenshot of Alter command in Azure Data Studio."::: ++2. You can validate that the new column has been added to the existing table. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-126.png" alt-text="Screenshot of the new table design."::: + ## Validate schema changes at target Delta ++1. Validate that the change data with schema changes has landed at the Delta sink.
For this tutorial, you can see the new column has been added to the sink. ++ :::image type="content" source="media/adf-cdc/change-data-capture-resource-128.png" alt-text="Screenshot of actual Delta file with schema change." lightbox="media/adf-cdc/change-data-capture-resource-128.png"::: ++## Next steps +- [Learn more about the change data capture resource](concepts-change-data-capture-resource.md) + |
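The schema change in this tutorial can be tried locally before running it against Azure SQL Database. The following sketch uses SQLite in place of Azure SQL Database (the `Customers` table and its columns are hypothetical names for illustration) to show a column being added with `ALTER TABLE` and then appearing in the table schema, which is the kind of change the CDC resource picks up when the **Auto map** toggle is on:

```python
import sqlite3

# Local sketch of the tutorial's schema change, using SQLite instead of
# Azure SQL Database (table and column names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Name TEXT, UpdatedAt TEXT)")

# The schema-evolution step: add a new column to the existing table.
conn.execute("ALTER TABLE Customers ADD COLUMN Email TEXT")

# Validate that the new column appears in the table schema.
columns = [row[1] for row in conn.execute("PRAGMA table_info(Customers)")]
print(columns)  # ['Id', 'Name', 'UpdatedAt', 'Email']
```

Against Azure SQL Database, the equivalent statement would be the `ALTER TABLE ... ADD` T-SQL the tutorial shows in Azure Data Studio.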
defender-for-cloud | Monitoring Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md | The following use cases explain how deployment of the Log Analytics agent works - **A pre-existing VM extension is present**: - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud may upgrade the extension version to the latest version in this process.- - To see to which workspace the existing extension is sending data to, run the test to [Validate connectivity with Microsoft Defender for Cloud](/archive/blogs/yuridiogenes/validating-connectivity-with-azure-security-center). Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection. + - To see which workspace the existing extension is sending data to, run the *TestCloudConnection.exe* tool to validate connectivity with Microsoft Defender for Cloud, as described in [Verify Log Analytics Agent connectivity](/services-hub/health/assessments-troubleshooting#verify-log-analytics-agent-connectivity). Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection. - If you have an environment where the Log Analytics agent is installed on client workstations and reporting to an existing Log Analytics workspace, review the list of [operating systems supported by Microsoft Defender for Cloud](security-center-os-coverage.md) to make sure your operating system is supported. Learn more about [working with the Log Analytics agent](working-with-log-analytics-agent.md). |
defender-for-cloud | Plan Defender For Servers Agents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-agents.md | Azure Arc helps you onboard Amazon Web Services (AWS), Google Cloud Platform (GC ### Foundational cloud security posture management -For free foundational cloud security posture management (CSPM) features, Azure Arc running on AWS or GCP machines isn't required. For full functionality, we recommend that you *do* have Azure Arc running on AWS or GCP machines. +The free foundational cloud security posture management (CSPM) features for AWS and GCP machines don't require Azure Arc. For full functionality, we recommend that you *do* have Azure Arc running on AWS or GCP machines. Azure Arc onboarding is required for on-premises machines. |
defender-for-cloud | Secret Scanning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secret-scanning.md | Secrets that don't have a known attack path, are referred to as `secrets without ## Remediate secrets with cloud security explorer -The [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer) allows you to proactively identify potential security risks within your cloud environment. By querying the [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph), the context engine of Defender for Cloud. The cloud security explorer allows your security team to prioritize any concerns, while also considering the specific context and conventions of your organization. +The [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer) enables you to proactively identify potential security risks within your cloud environment. It does so by querying the [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph), which is the context engine of Defender for Cloud. The cloud security explorer allows your security team to prioritize any concerns, while also considering the specific context and conventions of your organization. **To remediate secrets with cloud security explorer**: |
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 06/21/2023 Last updated : 07/25/2023 # Important upcoming changes to Microsoft Defender for Cloud > [!IMPORTANT] > The information on this page relates to pre-release products or features, which may be substantially modified before they are commercially released, if ever. Microsoft makes no commitments or warranties, express or implied, with respect to the information provided here.- +[Defender for Servers](#defender-for-servers) On this page, you can learn about changes that are planned for Defender for Cloud. It describes planned modifications to the product that might affect things like your secure score or workflows. > [!TIP] If you're looking for the latest release notes, you can find them in the [What's | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | July 2023 | | [General availability release of agentless container posture in Defender CSPM](#general-availability-ga-release-of-agentless-container-posture-in-defender-cspm) | July 2023 | | [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | August 2023 |+| [Update naming format of Azure Center for Internet Security standards in regulatory compliance](#update-naming-format-of-azure-center-for-internet-security-standards-in-regulatory-compliance) | August 2023 | | [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | August 2023 | | [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | September 2023 |+| [Defender for Cloud plan and strategy for the Log 
Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | August 2024 | ++### Defender for Cloud plan and strategy for the Log Analytics agent deprecation ++**Estimated date for change: August 2024** ++The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA), will be [retired in August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). As a result, features of the two Defender for Cloud plans that rely on the Log Analytics agent are impacted, and they have updated strategies: [Defender for Servers](#defender-for-servers) and [Defender for SQL Server on machines](#defender-for-sql-server-on-machines). ++#### Key strategy points ++- The Azure Monitor Agent (AMA) won't be a requirement of the Defender for Servers offering, but will remain required as part of Defender for SQL. +- Defender for Servers MMA-based features and capabilities will be deprecated in their Log Analytics version in August 2024, and delivered over alternative infrastructures, before the MMA deprecation date. +- In addition, the currently shared autoprovisioning process that provides the installation and configuration of both agents (MMA/AMA) will be adjusted accordingly. ++#### Defender for Servers ++The following table explains how each capability will be provided after the Log Analytics agent retirement: ++| **Feature** | **Support** | **Alternative** | +| | | | +| Defender for Endpoint/Defender for Cloud integration for down-level machines (Windows Server 2012 R2, 2016) | Defender for Endpoint integration that uses the legacy Defender for Endpoint sensor and the Log Analytics agent (for Windows Server 2016 and Windows Server 2012 R2 machines) won't be supported after August 2024.
| Enable the GA [unified agent](/microsoft-365/security/defender-endpoint/configure-server-endpoints#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) integration to maintain support for machines, and receive the full extended feature set. For more information, see [Enable the Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md#windows). | +| OS-level threat detection (agent-based) | OS-level threat detection based on the Log Analytics agent won't be available after August 2024. A full list of deprecated detections will be provided soon. | OS-level detections are provided by Defender for Endpoint integration and are already GA. | +| Adaptive application controls | The [current GA version](adaptive-application-controls.md) based on the Log Analytics agent will be deprecated in August 2024, along with the preview version based on the Azure Monitor agent. | The next generation of this feature is currently under evaluation; further information will be provided soon. | +| Endpoint protection discovery recommendations | The current [GA and preview recommendations](endpoint-protection-recommendations-technical.md) to install endpoint protection and fix health issues in the detected solutions will be deprecated in August 2024. | A new agentless version will be provided for discovery and configuration gaps by April 2024. As part of this upgrade, this feature will be provided as a component of Defender for Servers plan 2 and Defender for CSPM, and won't cover on-premises or Arc-connected machines. | +| Missing OS patches (system updates) | Recommendations to apply system updates based on the Log Analytics agent won't be available after August 2024. | [New recommendations](release-notes.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga), based on integration with Update Management Center, are already in GA, with no agent dependencies.
| +| OS misconfigurations (Azure Security Benchmark recommendations) | The [current GA version](apply-security-baseline.md) based on the Log Analytics agent won't be available after August 2024. The current preview version that uses the Guest Configuration agent will be deprecated as the Microsoft Defender Vulnerability Management integration becomes available. | A new version, based on integration with Premium Microsoft Defender Vulnerability Management, will be available early in 2024, as part of Defender for Servers plan 2. | +| File integrity monitoring | The [current GA version](file-integrity-monitoring-enable-log-analytics.md) based on the Log Analytics agent won't be available after August 2024. | A new version of this feature, either agent-based or agentless, will be available by April 2024. | +| The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion | The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion over the defined tables will remain supported via the AMA agent for the machines under subscriptions covered by Defender for Servers P2. Every machine is eligible for the benefit only once, even if both the Log Analytics agent and the Azure Monitor agent are installed on it. | | ++##### Log Analytics and Azure Monitor agents autoprovisioning experience ++- The MMA autoprovisioning mechanism and its related policy initiative will remain optional until August 2024. ++- In October 2023, the current shared Log Analytics agent/Azure Monitor agent autoprovisioning mechanism will be updated and applied to the Log Analytics agent only. The Azure Monitor agent-related (Public Preview) policy initiatives will be deprecated.
++- The AMA autoprovisioning mechanism will still serve current customers with the Public Preview policy initiative enabled, but they won't be eligible for support. To disable the Azure Monitor agent provisioning, manually remove the policy initiative. ++- If MMA autoprovisioning is enabled and AMA agents are already installed on the machines, MMA won't be provisioned. However, AMA will remain functional. ++To ensure the security of your servers and receive all the security updates from Defender for Servers, make sure to have [Defender for Endpoint integration](integration-defender-for-endpoint.md) and [agentless disk scanning](concept-agentless-data-collection.md) enabled on your subscriptions. This will also keep your servers up-to-date with the alternative deliverables. ++#### Defender for SQL Server on machines ++The Defender for SQL Server on machines plan relies on the Log Analytics agent (MMA) / Azure Monitor agent (AMA) to provide Vulnerability Assessment and Advanced Threat Protection to IaaS SQL Server instances. The plan supports Log Analytics agent autoprovisioning in GA, and Azure Monitor agent autoprovisioning in Public Preview. ++The following section describes the planned introduction of a new and improved SQL Server-targeted Azure Monitor agent (AMA) autoprovisioning process and the deprecation procedure of the Log Analytics agent (MMA). On-premises SQL servers using MMA will require the Azure Arc agent when migrating to the new process due to AMA requirements. Customers who use the new autoprovisioning process will benefit from a simple and seamless agent configuration, reducing onboarding errors and providing broader protection coverage. ++| Milestone | Date | More information | +| | - | | +| SQL-targeted AMA autoprovisioning Public Preview release | October 2023 | The new autoprovisioning process will only target Azure-registered SQL servers (SQL Server on Azure VM / Arc-enabled SQL Server).
The current AMA autoprovisioning process and its related policy initiative will be deprecated. It can still be used by customers, but they won't be eligible for support. | +| SQL-targeted AMA autoprovisioning GA release | December 2023 | GA release of a SQL-targeted AMA autoprovisioning process. Following the release, it will be defined as the default option for all new customers. | +| MMA deprecation | August 2024 | The current MMA autoprovisioning process and its related policy initiative will be deprecated. It can still be used by customers, but they won't be eligible for support. | ### Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled" Existing customers of Defender for Key-Vault, Defender for Azure Resource Manage - **Defender for Azure Resource Manager**: This plan will have a fixed price per subscription per month. Customers will have the option to switch to the new business model by selecting the Defender for Azure Resource Manager new per-subscription model. +Existing customers of Defender for Key-Vault, Defender for Azure Resource Manager, and Defender for DNS will keep their current business model and pricing unless they actively choose to switch to the new business model and price. ++- **Defender for Azure Resource Manager**: This plan will have a fixed price per subscription per month. Customers will have the option to switch to the new business model by selecting the Defender for Azure Resource Manager new per-subscription model. + - **Defender for Key Vault**: This plan will have a fixed price per vault per month with no overage charge. Customers will have the option to switch to the new business model by selecting the Defender for Key Vault new per-vault model. - **Defender for DNS**: Defender for Servers Plan 2 customers will gain access to Defender for DNS value as part of Defender for Servers Plan 2 at no extra cost. 
Customers that have both Defender for Servers Plan 2 and Defender for DNS will no longer be charged for Defender for DNS. Defender for DNS will no longer be available as a standalone plan. For more information on all of these plans, check out the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h) +### Update naming format of Azure Center for Internet Security standards in regulatory compliance ++**Estimated date for change: August 2023** ++The naming format of Azure CIS (Center for Internet Security) foundations benchmarks in the compliance dashboard is set for change from `[Cloud] CIS [version number]` to `CIS [Cloud] Foundations v[version number]`. Refer to the following table: ++| Current Name | New Name | +|--|--| +| Azure CIS 1.1.0 | CIS Azure Foundations v1.1.0 | +| Azure CIS 1.3.0 | CIS Azure Foundations v1.3.0 | +| Azure CIS 1.4.0 | CIS Azure Foundations v1.4.0 | +| AWS CIS 1.2.0 | CIS AWS Foundations v1.2.0 | +| AWS CIS 1.5.0 | CIS AWS Foundations v1.5.0 | +| GCP CIS 1.1.0 | CIS GCP Foundations v1.1.0 | +| GCP CIS 1.2.0 | CIS GCP Foundations v1.2.0 | ++Learn how to [improve your regulatory compliance](regulatory-compliance-dashboard.md). 
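The renaming described above follows a single mechanical pattern. A minimal sketch (the helper name `new_cis_name` is hypothetical, not part of any Microsoft API) that reproduces the mapping in the table:

```python
import re

def new_cis_name(old_name: str) -> str:
    # Convert "[Cloud] CIS [version number]" to "CIS [Cloud] Foundations v[version number]".
    m = re.fullmatch(r"(\w+) CIS ([\d.]+)", old_name)
    if not m:
        raise ValueError(f"unrecognized standard name: {old_name}")
    cloud, version = m.groups()
    return f"CIS {cloud} Foundations v{version}"

print(new_cis_name("Azure CIS 1.4.0"))  # CIS Azure Foundations v1.4.0
```

Any automation that filters compliance results by standard name would need an equivalent mapping after the change.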
+ ### Preview alerts for DNS servers to be deprecated **Estimated date for change: August 2023** The following table lists the alerts to be deprecated: | Digital currency mining activity (Preview) | DNS_CurrencyMining | | Network intrusion detection signature activation (Preview) | DNS_SuspiciousDomain | | Attempted communication with suspicious sinkholed domain (Preview) | DNS_SinkholedDomain |-| Communication with possible phishing domain (Preview) | DNS_PhishingDomain| +| Communication with possible phishing domain (Preview) | DNS_PhishingDomain| | Possible data transfer via DNS tunnel (Preview) | DNS_DataObfuscation |-| Possible data exfiltration via DNS tunnel (Preview) | DNS_DataExfiltration | -| Communication with suspicious algorithmically generated domain (Preview) | DNS_DomainGenerationAlgorithm | +| Possible data exfiltration via DNS tunnel (Preview) | DNS_DataExfiltration | +| Communication with suspicious algorithmically generated domain (Preview) | DNS_DomainGenerationAlgorithm | | Possible data download via DNS tunnel (Preview) | DNS_DataInfiltration | | Anonymity network activity (Preview) | DNS_DarkWeb | | Anonymity network activity using web proxy (Preview) | DNS_DarkWebProxy | The following table lists the alerts to be deprecated: Azure monitor offers the capability to [set a daily cap](../azure-monitor/logs/daily-cap.md) on the data that is ingested on your Log analytics workspaces. However, Defender for Cloud security events are currently not supported in those exclusions. -Starting on September 18, 2023 the Log Analytics Daily Cap will no longer exclude the below set of data types: +Starting on September 18, 2023 the Log Analytics Daily Cap will no longer exclude the following set of data types: - WindowsEvent - SecurityAlert Learn more about [workspaces with Microsoft Defender for Cloud](../azure-monitor ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).- |
defender-for-cloud | Update Regulatory Compliance Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md | Microsoft Defender for Cloud continually compares the configuration of your reso > [!TIP] > Learn more about Defender for Cloud's regulatory compliance dashboard in the [common questions](faq-regulatory-compliance.yml). -## How are regulatory compliance standards represented in Defender for Cloud? +## How are compliance standards represented in Defender for Cloud? Industry standards, regulatory standards, and benchmarks are represented in Defender for Cloud's regulatory compliance dashboard. Each standard is an initiative defined in Azure Policy. Microsoft tracks the regulatory standards themselves and automatically improves ## What regulatory compliance standards are available in Defender for Cloud? -By default, every Azure subscription has the Microsoft cloud security benchmark assigned. This is the Microsoft-authored, cloud specific guidelines for security and compliance best practices based on common compliance frameworks. [Learn more about Microsoft cloud security benchmark](/security/benchmark/azure/introduction). +By default: +- Azure subscriptions get the Microsoft cloud security benchmark assigned. This is the Microsoft-authored, cloud specific guidelines for security and compliance best practices based on common compliance frameworks. [Learn more about Microsoft cloud security benchmark](/security/benchmark/azure/introduction). +- AWS accounts get the AWS Foundational Security Best Practices assigned. This is the AWS-specific guideline for security and compliance best practices based on common compliance frameworks. +- GCP projects get the "GCP Default" standard assigned. 
-**Available regulatory standards**: --- PCI-DSS v3.2.1 **(deprecated)**-- PCI DSS v4-- SOC TSP-- SOC 2 Type 2-- ISO 27001:2013-- Azure CIS 1.1.0-- Azure CIS 1.3.0-- Azure CIS 1.4.0-- NIST SP 800-53 R4-- NIST SP 800-53 R5-- NIST SP 800 171 R2-- CMMC Level 3-- FedRAMP H-- FedRAMP M-- HIPAA/HITRUST-- SWIFT CSP CSCF v2020-- UK OFFICIAL and UK NHS-- Canada Federal PBMM-- New Zealand ISM Restricted-- New Zealand ISM Restricted v3.5-- Australian Government ISM Protected-- RMIT Malaysia--**AWS**: When users onboard, every AWS account has the AWS Foundational Security Best Practices assigned and can be viewed under Recommendations. This is the AWS-specific guideline for security and compliance best practices based on common compliance frameworks. --Users that have one Defender bundle enabled can enable other standards. --**Available AWS regulatory standards**: --- CIS 1.2.0-- CIS 1.5.0-- PCI DSS 3.2.1-- AWS Foundational Security Best Practices--To add regulatory compliance standards on AWS accounts: --1. Navigate to **Environment settings**. -1. Select the relevant account. -1. Select **Standards**. -1. Select **Add** and choose **Standard**. -1. Choose a standard from the drop-down menu. -1. Select **Save**. -- :::image type="content" source="media/update-regulatory-compliance-packages/add-aws-regulatory-compliance.png" alt-text="Screenshot of adding regulatory compliance standard to AWS account." lightbox="media/update-regulatory-compliance-packages/add-aws-regulatory-compliance.png"::: --**GCP**: When users onboard, every GCP project has the "GCP Default" standard assigned. +If a subscription, account, or project has *any* Defender plan enabled, additional standards can be applied. -Users that have one Defender bundle enabled can enable other standards. 
-**Available GCP regulatory standards**: +**Available regulatory standards**: -- CIS 1.1.0, 1.2.0-- PCI DSS 3.2.1-- NIST 800 53-- ISO 27001+| Standards for Azure subscriptions | Standards for AWS accounts | Standards for GCP projects | +| | - | | +| - PCI-DSS v3.2.1 **(deprecated)** | - CIS 1.2.0 | - CIS 1.1.0, 1.2.0 | +| - PCI DSS v4 | - CIS 1.5.0 | - PCI DSS 3.2.1 | +| - SOC TSP | - PCI DSS 3.2.1 | - NIST 800 53 | +| - SOC 2 Type 2 | - AWS Foundational Security Best Practices | - ISO 27001 | +| - ISO 27001:2013 ||| +| - Azure CIS 1.1.0 ||| +| - Azure CIS 1.3.0 ||| +| - Azure CIS 1.4.0 ||| +| - NIST SP 800-53 R4 ||| +| - NIST SP 800-53 R5 ||| +| - NIST SP 800 171 R2 ||| +| - CMMC Level 3 ||| +| - FedRAMP H ||| +| - FedRAMP M ||| +| - HIPAA/HITRUST ||| +| - SWIFT CSP CSCF v2020 ||| +| - UK OFFICIAL and UK NHS ||| +| - Canada Federal PBMM ||| +| - New Zealand ISM Restricted ||| +| - New Zealand ISM Restricted v3.5 ||| +| - Australian Government ISM Protected ||| +| - RMIT Malaysia ||| > [!TIP]-> Standards are added to the dashboard as they become available. The preceding list might not contain recently added standards. +> Standards are added to the dashboard as they become available. This table might not contain recently added standards. ## Add a regulatory standard to your dashboard To add standards to your dashboard: - The subscription must have Defender for Cloud's enhanced security features enabled - The user must have owner or policy contributor permissions -### Add a standard to your Azure resources +### Add a standard to your Azure subscriptions -1. From Defender for Cloud's menu, select **Regulatory compliance** to open the regulatory compliance dashboard. Here you can see the compliance standards currently assigned to the currently selected subscriptions. +1. From Defender for Cloud's menu, select **Regulatory compliance** to open the regulatory compliance dashboard. Here you'll see the compliance standards assigned to the currently selected subscriptions. 
1. From the top of the page, select **Manage compliance policies**. To add standards to your dashboard: :::image type="content" source="media/concept-regulatory-compliance/compliance-dashboard.png" alt-text="Screenshot showing regulatory compliance dashboard." lightbox="media/concept-regulatory-compliance/compliance-dashboard.png"::: -### Add a standard to your AWS resources +### Add a standard to your AWS accounts To add regulatory compliance standards on AWS accounts: To add regulatory compliance standards on AWS accounts: :::image type="content" source="media/update-regulatory-compliance-packages/add-aws-regulatory-compliance.png" alt-text="Screenshot of adding regulatory compliance standard to AWS account." lightbox="media/update-regulatory-compliance-packages/add-aws-regulatory-compliance.png"::: + ## Remove a standard from your dashboard You can continue to customize the regulatory compliance dashboard, to focus only on the standards that are applicable to you, by removing any of the supplied regulatory standards that aren't relevant to your organization. |
dev-box | How To Configure Azure Compute Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md | The image version must meet the following requirements: - Windows 10 Enterprise version 20H2 or later. - Windows 11 Enterprise 21H2 or later. - Generalized VM image.- - You must create the image using the following sysprep options: `/mode:vm flag: Sysprep /generalize /oobe /mode:vm`. </br> + - You must create the image using these three sysprep options: `Sysprep /generalize /oobe /mode:vm`. </br> For more information, see: [Sysprep Command-Line Options](/windows-hardware/manufacture/desktop/sysprep-command-line-options?view=windows-11#modevm&preserve-view=true). - To speed up the Dev Box creation time, you can disable the reserved storage state feature in the image by using the following command: `DISM.exe /Online /Set-ReservedStorageState /State:Disabled`. </br> For more information, see: [DISM Storage reserve command-line options](/windows-hardware/manufacture/desktop/dism-storage-reserve?view=windows-11#set-reservedstoragestate&preserve-view=true). |
dev-box | How To Configure Dev Box Hibernation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-dev-box-hibernation.md | These settings are known to be incompatible with hibernation, and aren't support ## Enable hibernation on your dev box image -The Visual Studio and Microsoft 365 images that dev Box provides in the Azure Marketplace are already configured to support hibernation. You don't need to enable hibernation on these images, they're ready to use. +The Visual Studio and Microsoft 365 images that Dev Box provides in the Azure Marketplace are already configured to support hibernation. You don't need to enable hibernation on these images; they're ready to use. If you plan to use a custom image from an Azure Compute Gallery, you need to enable hibernation capabilities as you create the new image. To enable hibernation capabilities, set the IsHibernateSupported flag to true. You must set the IsHibernateSupported flag when you create the image; existing images can't be modified. 
To enable hibernation capabilities, set the `IsHibernateSupported` flag to true: -```azurecli-interactive -az sig image-definition create / - --resource-group <resourcegroupname> --gallery-name <galleryname> --gallery-image-definition <imageName> --location <location> / - --publisher <publishername> --offer <offername> --sku <skuname> --os-type windows --os-state Generalized / - --features "IsHibernateSupported=true SecurityType=TrustedLaunch" --hyper-v-generation V2 +```azurecli +az sig image-definition create \ +--resource-group <resourcegroupname> --gallery-name <galleryname> --gallery-image-definition <imageName> --location <location> \ +--publisher <publishername> --offer <offername> --sku <skuname> --os-type windows --os-state Generalized \ +--features "IsHibernateSupported=true SecurityType=TrustedLaunch" --hyper-v-generation V2 +``` ++If you're using sysprep and a generalized VM to create a custom image, capture your image using the Azure CLI: ++```azurecli +az sig image-version create \ +--resource-group <resourcegroupname> --gallery-name <galleryname> --gallery-image-definition <imageName> \ +--gallery-image-version <versionNumber> --virtual-machine <VMResourceId> ``` For more information about creating a custom image, see [Configure a dev box by using Azure VM Image Builder](how-to-customize-devbox-azure-image-builder.md). 
You can enable hibernation on a dev box definition by using the Azure portal or ### Update an existing dev box definition by using the CLI -```azurecli-interactive -az devcenter admin devbox-definition update --dev-box-definition-name <DevBoxDefinitionName> --dev-center-name <devcentername> --resource-group <resourcegroupname> --hibernateSupport enabled +```azurecli +az devcenter admin devbox-definition update \ +--dev-box-definition-name <DevBoxDefinitionName> --dev-center-name <devcentername> --resource-group <resourcegroupname> --hibernateSupport enabled ``` ## Disable hibernation on a dev box definition You can disable hibernation on a dev box definition by using the Azure portal or ### Disable hibernation on an existing dev box definition by using the CLI -```azurecli-interactive -az devcenter admin devbox-definition update --dev-box-definition-name <DevBoxDefinitionName> --dev-center-name <devcentername> --resource-group <resourcegroupname> --hibernateSupport disabled +```azurecli +az devcenter admin devbox-definition update \ +--dev-box-definition-name <DevBoxDefinitionName> --dev-center-name <devcentername> --resource-group <resourcegroupname> --hibernateSupport disabled ``` ## Next steps |
dms | Concepts Migrate Azure Mysql Login Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-login-migration.md | + + Title: MySQL to Azure Database for MySQL Data Migration - MySQL Login Migration +description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Login Migration ++ Last updated : 07/24/2023++++++# MySQL to Azure Database for MySQL Data Migration - MySQL Login Migration ++MySQL Login Migration is a new feature that allows users to migrate user accounts and privileges, including users with no passwords. With this feature, businesses can now migrate a subset of the data in the 'mysql' system database from the source to the target for both offline and online migration scenarios. This login migration experience automates manual tasks such as the synchronization of logins with their corresponding user mappings and replicating server permissions and server roles. ++## Current implementation ++In the current implementation, users can select the **Migrate user account and privileges** checkbox on the **Select databases** tab, under the **Select Server Objects** section, when configuring the DMS migration project. ++Additionally, any corresponding databases that have related grants must also be selected for migration in the **Select Databases** section. ++The progress and overall migration summary can be viewed in the **Initial Load** tab. On the **migration summary** blade, users can click into the **'mysql'** system database to review the results of migrating server-level objects, like users and grants. ++### How Login Migration works ++As part of Login migration, we migrate a subset of the tables in the 'mysql' system database depending on the version of your source. The tables we migrate for all versions are: user, db, tables_priv, columns_priv, and procs_priv. For 8.0 sources we also migrate the following tables: role_edges, default_roles, and global_grants. 
+Users without passwords are also migrated. ++## Limitations ++* Static privileges such as "CREATE TABLESPACE", "FILE", "SHUTDOWN", and "SUPER" aren't supported by Azure Database for MySQL - Flexible Server and hence aren't supported by login migration. +* Only users configured with the mysql_native_password, caching_sha2_password, and sha256_password authentication plug-ins are migrated to the target server. Users relying on other plug-ins aren't supported. +* The account_locked field from the user table isn't migrated. If the account is locked on the source server, it isn't locked on the target server after migration. +* The proxies_priv grant table and password_history grant table aren't migrated. +* The password_expired field from the user table isn't migrated. +* Migration of the global_grants table only migrates the following grants: xa_recover_admin, role_admin. +* Migration of AAD logins isn't supported. ++## Next steps ++* [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS](tutorial-mysql-azure-mysql-offline-portal.md) |
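The version-dependent table list above can be summarized in a small sketch (the helper name `tables_to_migrate` is hypothetical and only illustrates the selection logic described in this article):

```python
# 'mysql' system tables copied for every supported source version.
COMMON_TABLES = ["user", "db", "tables_priv", "columns_priv", "procs_priv"]
# Additional tables copied only when the source is MySQL 8.0.
V8_ONLY_TABLES = ["role_edges", "default_roles", "global_grants"]

def tables_to_migrate(source_version: str) -> list[str]:
    # Return the 'mysql' system tables login migration copies for a source version.
    tables = list(COMMON_TABLES)
    if source_version.startswith("8."):
        tables += V8_ONLY_TABLES
    return tables

print(tables_to_migrate("8.0.34"))
```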
dms | Concepts Migrate Azure Mysql Replicate Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-replicate-changes.md | + + Title: MySQL to Azure Database for MySQL Data Migration - MySQL Replicate Changes +description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Replicate Changes +++ Last updated : 07/24/2023++++++# MySQL to Azure Database for MySQL Data Migration - MySQL Replicate Changes ++Running a Replicate Changes migration after our offline scenario with "Enable Transactional Consistency" enables businesses to migrate their databases to Azure while the databases remain operational. In other words, migrations can be completed with minimum downtime for critical applications, limiting the impact on service level availability and inconvenience to their end customers. ++> [!NOTE] +> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ++## Current implementation ++You must run an offline migration scenario with "Enable Transactional Consistency" to get the bin log file and position to replicate the incoming changes. The DMS portal UI shows the binary log filename and position aligned to the time the locks were obtained on the source for the consistent snapshot. You can use this value in our replicate changes migration to stream the incoming changes. +++While running the replicate changes migration, when your target is almost caught up with the source server, stop all incoming transactions to the source database and wait until all pending transactions have been applied to the target database. To confirm that the target database is up to date, run the query 'SHOW MASTER STATUS;' on the source server, then compare that position to the last committed binlog event (displayed under Migration Progress). 
When the two positions are the same, the target has caught up with all changes, and you can start the cutover. ++### How Replicate Changes works ++The current implementation is based on streaming binlog changes from the source server and applying them to the target server. Like Data-in replication, this is easier to set up and doesn't require a physical connection between the source and the target servers. ++The server can send the binlog as a stream of binary data, as documented [here](https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_replication.html). The client can specify the initial log position to start the stream with. The log position is described by the log file name, the position within that file, and optionally a GTID (Global Transaction ID) if gtid mode is enabled at the source. ++The data changes are sent as "row" events, which contain data for individual rows (prior and/or after the change depending on the operation type, which is insert, delete, or update). The row events are then applied in their raw format using a BINLOG statement: [MySQL 8.0 Reference Manual :: 13.7.8.1 BINLOG Statement](https://dev.mysql.com/doc/refman/8.0/en/binlog.html). ++## Prerequisites ++To complete the replicate changes migration successfully, ensure that the following prerequisites are in place: ++- Use the MySQL command-line tool of your choice to determine whether **log_bin** is enabled on the source server. The binlog isn't always turned on by default, so verify that it's enabled before starting the migration by running the command: **SHOW VARIABLES LIKE 'log_bin'** +- Ensure that the user has **"REPLICATION_APPLIER"** or **"BINLOG_ADMIN"** permission on the target server for applying the bin log. +- Ensure that the user has **"REPLICATION SLAVE"** permission on the target server. 
+- Ensure that the user has **"REPLICATION CLIENT"** and **"REPLICATION SLAVE"** permission on the source server for reading and applying the bin log. +- Run an offline migration scenario with "**Enable Transactional Consistency"** to get the bin log file and position. +- If you're targeting a replicate changes migration, configure the **binlog_expire_logs_seconds** parameter on the source server to ensure that binlog files aren't purged before the replica commits the changes. We recommend at least two days, to begin with. After a successful cutover, the value can be reset. ++## Limitations ++- When performing a replicate changes migration, the name of the database on the target server must be the same as the name on the source server. +- Support is limited to the ROW binlog format. +- DDL changes replication is supported only when you have selected the option for migrating entire server on DMS UI. ++## Next steps ++- [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS](tutorial-mysql-azure-mysql-offline-portal.md) |
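The cutover check described above — comparing the source's SHOW MASTER STATUS position against the last committed binlog event — reduces to comparing (file, position) pairs. A minimal sketch (the helper names are hypothetical; it assumes you've already read the two coordinates from the DMS UI and the source server):

```python
def binlog_coords(file_name: str, position: int) -> tuple[int, int]:
    # Binlog file names look like 'mysql-bin.000003'; order by the numeric
    # file suffix first, then by the byte offset within the file.
    return int(file_name.rsplit(".", 1)[1]), position

def ready_for_cutover(source_status: tuple[str, int],
                      target_last_committed: tuple[str, int]) -> bool:
    # Safe to cut over once the target's last committed binlog event has
    # reached the (file, position) reported by SHOW MASTER STATUS on the source.
    return binlog_coords(*target_last_committed) >= binlog_coords(*source_status)

print(ready_for_cutover(("mysql-bin.000003", 154), ("mysql-bin.000003", 154)))
```

Once this returns True with writes stopped on the source, the two positions match and the cutover can begin.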
dms | Concepts Migrate Azure Mysql Schema Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-schema-migration.md | + + Title: MySQL to Azure Database for MySQL Data Migration - MySQL Schema Migration +description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Schema Migration ++ Last updated : 07/23/2023++++++# MySQL to Azure Database for MySQL Data Migration - MySQL Schema Migration ++MySQL Schema Migration is a new feature that allows users to migrate the schema for objects such as databases, tables, views, triggers, events, stored procedures, and functions. This feature is useful for automating some of the work required to prepare the target database prior to starting a migration. ++## Current implementation ++In the current implementation, users can select the **server objects (views, triggers, events, routines)** that they want to migrate on the **Select databases** tab, under the **Select Server Objects** section, when configuring the DMS migration project. Additionally, they can select the databases under the **Select databases** section whose schema is to be migrated. ++To migrate the schema for table objects, navigate to the **Select tables** tab. Before the tab populates, DMS fetches the tables from the selected database(s) on the source and target, and then determines whether the table exists and contains data. If you select a table in the source database that doesn't exist on the target database, the box under **Migrate schema** is selected by default. For tables that do exist in the target database, a note indicates that the selected table already contains data and will be truncated. In addition, if the schema of a table on the target server doesn't match the schema on the source, the table will be dropped before the migration continues. 
++ :::image type="content" source="media/tutorial-mysql-to-azure-mysql-online/17-select-tables.png" alt-text="Screenshot of the Select tables tab."::: ++When you continue to the next tab, DMS validates your inputs and confirms that the tables selected match if they were selected without the schema migration input. Once the validation passes, you can begin the migration scenario. ++After you begin the migration and as the migration progresses, each table is created prior to migrating its data from the source to the target. Except for triggers and views, which are migrated after data migration is complete, other objects are created for tables prior to the data migration. ++### How Schema Migration works ++Schema migration uses MySQL's **"SHOW CREATE"** syntax to gather schema information for objects from the source. When migrating the schema for the objects from the source to the target, DMS processes the input and individually migrates the objects. DMS also wraps the collation, character set, and other relevant information that is provided by the "SHOW CREATE" query to the create query that is then processed on to the target. ++**Routines** and **Events** are migrated before any data is migrated. The schema for each individual **table** is migrated immediately prior to data movement starting for the table. **Triggers** are migrated after the data migration portion. For **views**, since MySQL validates the contents of views and they can depend on other tables, DMS first creates tables for views before the start of database data movement and then drops the temporary table and creates the view. ++When querying the source and target, if a transient error occurs, DMS **retries** the queries. However, if an error occurs that DMS can't recover from (for example, invalid syntax when performing a version upgrade migration), DMS fails and reports that error message on completion. 
If the failure occurs when creating a table, the data for that table isn't migrated, but the data and schema migration for the other selected tables is attempted. If an unrecoverable error occurs for events, routines, or when creating the temporary table for views, the migration fails prior to running the migration for the selected tables and the objects that are migrated following the data migration portion. ++Since a temporary table is created for views, if there's a failure migrating a view, the temporary table is left on the target. After the underlying issue is fixed and the migration is retried, DMS deletes that table prior to creating the view. Alternatively, if electing not to use schema migration for views in a future migration, the temporary table needs to be manually deleted prior to manually migrating the view. ++## Prerequisites ++To complete a schema migration successfully, ensure that the following prerequisites are in place. ++* "READ" privilege on the source database. +* "SELECT" privilege for the ability to select objects from the database. +* If migrating views, the user must have the "SHOW VIEW" privilege. +* If migrating triggers, the user must have the "TRIGGER" privilege. +* If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: + * For 5.7, have "SELECT" access to the "mysql.proc" table. + * For 8.0, have the "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE", "ALTER ROUTINE", or "EXECUTE" privilege granted at a scope that includes the routine. +* If migrating events, the user must have the "EVENT" privilege for the database from which the events are to be shown. ++## Limitations ++* When migrating non-table objects, DMS doesn't support renaming databases. 
+* When migrating to a target server that has log_bin enabled, log_bin_trust_function_creators should be enabled to allow for creation of routines and triggers. +* Currently there's no support for migrating the DEFINER clause for objects. All object types with definers on the source get dropped, and after the migration the default definer for tables is set to the login used to run the migration. +* Some version upgrades aren't supported if there are breaking changes in version compatibility. Refer to the MySQL docs for more information on version upgrades. +* Currently we can only migrate schema as part of data movement. If nothing is selected for data movement, no schema migration happens. If a table is selected for schema migration, it's also selected for data movement. ++## Next steps ++- Learn more about [Data-in Replication](../mysql/concepts-data-in-replication.md) ++- [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS](tutorial-mysql-azure-mysql-offline-portal.md) |
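The ordering rules in "How Schema Migration works" can be laid out as a small sketch (the function name and step strings are hypothetical; they only restate the sequence this article describes: routines, events, and placeholder tables for views first, each table's schema immediately before its data, then triggers and the real views last):

```python
def schema_migration_order(tables: list[str]) -> list[str]:
    # Objects created before any data movement starts.
    steps = ["create routines", "create events", "create placeholder tables for views"]
    # Each table's schema lands immediately before its own data is copied.
    for t in tables:
        steps += [f"create table {t}", f"copy data for {t}"]
    # Triggers and the real views are created only after data movement completes.
    steps += ["create triggers", "drop placeholder tables and create views"]
    return steps

for step in schema_migration_order(["orders", "customers"]):
    print(step)
```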
dms | Known Issues Azure Mysql Fs Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-mysql-fs-online.md | Known issues associated with migrations to Azure Database for MySQL are describe ## Incompatible SQL Mode -One or more incompatible SQL modes can cause a number of different errors. Below is an example error along with server modes that should be looked at if this error occurs.`` +One or more incompatible SQL modes can cause many different errors. Below is an example error along with server modes that should be looked at if this error occurs. -- **Error**: An error occurred while preparing the table '{table}' in database '{database}' on server '{server}' for migration during activity '{activity}'. As a result, this table will not be migrated.+- **Error**: An error occurred while preparing the table '{table}' in database '{database}' on server '{server}' for migration during activity '{activity}'. As a result, this table won't be migrated. **Limitation**: This error occurs when one of the below SQL modes is set on one server but not the other server. One or more incompatible SQL modes can cause a number of different errors. Below | NO_ZERO_DATE | NO_AUTO_CREATE_USER | | - | - |- | When the default value for a date on a table or the data is 0000-00-00 on the source, and the target server has the NO_ZERO_DATE SQL mode set, the schema and/or data migration will fail. There are two possible workarounds, the first is to change the default values of the columns to be NULL or a valid date. The second option, is to remove the NO_ZERO_DATE SQL mode from the global SQL mode variable. | When running migrations from MySQL source server 5.7 to MySQL target server 8.0 that are doing **schema migration of routines**, it will run into errors if no_auto_create_user SQL mode is set on MySQL source server 5.7. 
| + | When the default value for a date on a table or the data is 0000-00-00 on the source, and the target server has the NO_ZERO_DATE SQL mode set, the schema and/or data migration will fail. There are two possible workarounds: the first is to change the default values of the columns to be NULL or a valid date. The second option is to remove the NO_ZERO_DATE SQL mode from the global SQL mode variable. | When running migrations from MySQL source server 5.7 to MySQL target server 8.0 that are doing **schema migration of routines**, the migration runs into errors if no_auto_create_user SQL mode is set on MySQL source server 5.7. | ## Binlog Retention Issues -- **Error**: - - Binary log is not open. - - Could not find first log file name in binary log index file. +- **Error**: Fatal error reading binlog. This error may indicate that the binlog file name and/or the initial position were specified incorrectly. **Limitation**: This error occurs if the binlog retention period is too short. One or more incompatible SQL modes can cause a number of different errors. Below **Limitation**: This error occurs when there is a timeout while obtaining locks on all the tables when transactional consistency is enabled. - **Workaround**: Ensure that the selected tables are not locked or that no long running transactions are running on them. + **Workaround**: Ensure that the selected tables aren't locked and that no long-running transactions are open on them. ## Write More Than 4 MB of Data to Azure Storage - **Error**: The request body is too large and exceeds the maximum permissible limit. - **Limitation**: This error likely occurs when there are too many tables to migrate (>10k). There is a 4 MB limit for each call to the Azure Storage service. + **Limitation**: This error likely occurs when there are too many tables to migrate (>10k). There's a 4 MB limit for each call to the Azure Storage service. 
- **Workaround**: Please reach out to support by [creating a support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview?DMC=troubleshoot) and we can provide custom scripts to access our REST APIs directly. + **Workaround**: Reach out to support by [creating a support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview?DMC=troubleshoot) and we can provide custom scripts to access our REST APIs directly. ++## Duplicate key entry issue ++- **Error**: The error is often a symptom of timeouts, network issues, or target scaling. ++ **Potential error message**: A batch couldn't be written to the table '{table}' due to a SQL error raised by the target server. For context, the batch contained a subset of rows returned by the following source query. ++ **Limitation**: This error can be caused by a timeout or a broken connection to the target, resulting in duplicate primary keys. It may also be related to multiple migrations to the target running at the same time, or to test workloads running on the target during the migration. Additionally, the target may require primary keys to be unique, even though they aren't required to be so on the source. ++ **Workaround**: To resolve this issue, ensure that there are no duplicate migrations running and that the source primary keys are unique. If the error persists, reach out to support by [creating a support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview?DMC=troubleshoot) and we can provide custom scripts to access our REST APIs directly. ++## Replicated operation had mismatched rows error ++- **Error**: Online Migration Fails to Replicate Expected Number of Changes. ++ **Potential error message**: An error occurred applying records to the target server which were read from the source server's binary log. 
The changes started at binary log '{mysql-bin.log}' and position '{position}' and ended at binary log '{mysql-bin.log}' and position '{position}'. All records on the source server prior to position '{position}' in binary log '{mysql-bin.log}' have been committed to the target. ++ **Limitation**: On the source, there were insert and delete statements into a table, and the deletions were by an apparent unique index. ++ **Workaround**: We recommend migrating the table manually. ++## Table data truncated error ++- **Error**: Enum column has a null value in one or more rows and the target SQL mode is set to strict. ++ **Potential error message**: A batch couldn't be written to the table '{table}' due to a data truncation error. Please ensure that the data isn't too large for the data type of the MySQL table column. If the column type is an enum, make sure SQL Mode isn't set as TRADITIONAL, STRICT_TRANS_TABLES or STRICT_ALL_TABLES and is the same on source and target. ++ **Limitation**: The error occurs when historical data was written to the source server under a different SQL mode setting; after the setting changes, the data can't be moved. ++ **Workaround**: To resolve the issue, we recommend changing the target SQL mode to non-strict or changing all null values to be valid values. ++## Creating object failure ++- **Error**: An error occurred after view validation failed. ++ **Limitation**: The error occurs when trying to migrate a view and the table that the view is supposed to be referencing cannot be found. ++ **Workaround**: We recommend migrating views manually. ++## Unable to find table ++- **Error**: An error occurred because the referenced table can't be found. ++ **Potential error message**: The pipeline was unable to create the schema of object '{object}' for activity '{activity}' using strategy MySqlSchemaMigrationViewUsingTableStrategy because of a query execution. 
++ **Limitation**: The error can occur when the view is referring to a table that has been deleted or renamed, or when the view was created with incorrect or incomplete information. ++ **Workaround**: We recommend migrating views manually. ++## All pooled connections broken ++- **Error**: All connections on the source server were broken. ++ **Limitation**: The error occurs when all the connections that are acquired at the start of initial load are lost due to server restart, network issues, heavy traffic on the source server, or other transient problems. This error isn't recoverable. ++ **Workaround**: The migration must be restarted, and we recommend increasing the performance of the source server. Additionally, if scripts that kill long-running connections run on the source, disable them for the duration of the migration. ++## Consistent snapshot broken ++ **Limitation**: The error occurs when the customer performs DDL during the initial load of the migration instance. ++ **Workaround**: To resolve this issue, we recommend refraining from making DDL changes during the Initial Load. ++## Foreign key constraint ++- **Error**: The error occurs when the data type of the referencing column doesn't match that of the referenced foreign key column. ++ **Potential error message**: Referencing column '{pk column 1}' and referenced column '{fk column 1}' in foreign key constraint '{key}' are incompatible. ++ **Limitation**: The error can cause schema migration of a table to fail, as the PK column in table 1 may not be compatible with the FK column in table 2. ++ **Workaround**: To resolve this issue, we recommend dropping the foreign key and re-creating it after the migration process is completed. ## Next steps |
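The foreign-key workaround above can be sketched with standard MySQL DDL; the table, column, and constraint names here are hypothetical placeholders:

```sql
-- Drop the incompatible foreign key before the migration (hypothetical names).
ALTER TABLE orders DROP FOREIGN KEY fk_orders_customer;

-- After the migration completes, re-create the constraint.
ALTER TABLE orders
    ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```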
dms | Migrate Azure Mysql Consistent Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-azure-mysql-consistent-backup.md | Title: MySQL to Azure Database for MySQL Data Migration - MySQL Consistent Backup (Preview) + Title: MySQL to Azure Database for MySQL Data Migration - MySQL Consistent Backup description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Consistent Backup for transaction consistency even without making the Source server read-only -# MySQL to Azure Database for MySQL Data Migration - MySQL Consistent Backup (Preview) +# MySQL to Azure Database for MySQL Data Migration - MySQL Consistent Backup MySQL Consistent Backup is a new feature that allows users to take a Consistent Backup of a MySQL server without losing data integrity at source because of ongoing CRUD (Create, Read, Update, and Delete) operations. Transactional consistency is achieved without the need to set the source server to read-only mode through this feature. ## Current implementation -In the current implementation, users can enable the **Make Source Server Read Only** option during offline migration. This maintains the data integrity of the target database as the source is migrated by preventing Write/Delete operations on the source server during migration. When you make the source server read only as part of the migration process, the selection applies to all the source server's databases, regardless of whether they are selected for migration. +In the current implementation, users can enable the **Make Source Server Read Only** option during offline migration. Selecting this option maintains the data integrity of the target database as the source is migrated by preventing Write/Delete operations on the source server during migration. When you make the source server read only as part of the migration process, the selection applies to all the source server's databases, regardless of whether they are selected for migration. 
:::image type="content" source="media/migrate-azure-mysql-consistent-backup/dms-mysql-make-source-read-only.png" alt-text="MySQL to Azure Database for MySQL Data Migration Wizard - Read Only" lightbox="media/migrate-azure-mysql-consistent-backup/dms-mysql-make-source-read-only.png"::: The undo log makes repeatable reads possible and helps generate the snapshot tha ### How Consistent Backup works -When you initiate a migration, the service flushes all tables on the source server with a **read** lock to obtain the point-in-time snapshot. This is done because a global lock is more reliable than attempting to lock individual databases or tables. As a result, even if you are not migrating all databases in a server, they are locked as part of setting up the migration process. The migration service initiates a repeatable read and combines the current table state with contents of the undo log for the snapshot. The **snapshot** is generated after obtaining the server wide lock and spawning several connections for the migration. After the creation of all connections that will be used for the migration, the locks on the table are released. +When you initiate a migration, the service flushes all tables on the source server with a **read** lock to obtain the point-in-time snapshot. This flushing is done because a global lock is more reliable than attempting to lock individual databases or tables. As a result, even if you are not migrating all databases in a server, they are locked as part of setting up the migration process. The migration service initiates a repeatable read and combines the current table state with contents of the undo log for the snapshot. The **snapshot** is generated after obtaining the server-wide lock and spawning several connections for the migration. After the creation of all connections used for the migration, the locks on the table are released. 
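The lock-and-snapshot sequence described above can be sketched with standard MySQL statements. This is an illustrative approximation of what the service does, not its exact implementation:

```sql
-- Take a brief server-wide read lock to establish a stable point in time.
FLUSH TABLES WITH READ LOCK;

-- Each migration connection then starts a consistent snapshot.
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;

-- Record the binlog position associated with the snapshot.
SHOW MASTER STATUS;

-- Once all migration connections are established, release the lock.
UNLOCK TABLES;
```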
-The migration threads are used to perform the migration with repeatable read enabled for all transactions and the source server hides all new changes from the offline migration. Clicking on the specific database in the Azure Database Migration Service (DMS) Portal UI during the migration displays the migration status of all the tables - completed or in progress - in the migration. In case of connection issues, the status of the database changes to **Retrying** and the error information is displayed if the migration fails. +The migration threads are used to perform the migration with repeatable read enabled for all transactions and the source server hides all new changes from the offline migration. Clicking on the specific database in the Azure Database Migration Service (DMS) Portal UI during the migration displays the migration status of all the tables - completed or in progress - in the migration. If there are connection issues, the status of the database changes to **Retrying** and the error information is displayed if the migration fails. -Repeatable reads are enabled to keep the undo logs accessible during the migration, which will increase the storage required on the source because of long running connections. It is important to note that the longer a migration runs the more table changes that occur, the undo log's history of changes will be more extensive. The longer a migration, the more slowly it runs as the undo logs to retrieve the unmodified data from will be longer. This could also increase the compute requirements and load on the source server. +Repeatable reads are enabled to keep the undo logs accessible during the migration, which increases the storage required on the source because of long-running connections. The longer a migration runs and the more table changes occur, the more extensive the undo log's history of changes becomes. 
The longer a migration runs, the more slowly it proceeds, because the undo logs that unmodified data is retrieved from grow longer. This could also increase the compute requirements and load on the source server. ### The binary log -The [binary log (or binlog)](https://dev.mysql.com/doc/refman/8.0/en/binary-log.html) is an artifact that is reported to the user after the offline migration is complete. As the service spawns threads for migration during read lock, the migration service records the initial binlog position because the binlog position could change after the server is unlocked. While the migration service attempts to obtain the locks and set up the migration, the bin log position will display the status **Waiting for data movement to start...**. +The [binary log (or binlog)](https://dev.mysql.com/doc/refman/8.0/en/binary-log.html) is an artifact that is reported to the user after the offline migration is complete. As the service spawns threads for migration during read lock, the migration service records the initial binlog position because the binlog position could change after the server is unlocked. While the migration service attempts to obtain the locks and set up the migration, the binlog position displays the status **Waiting for data movement to start...**. :::image type="content" source="media/migrate-azure-mysql-consistent-backup/dms-wait-for-binlog-status.png" alt-text="MySQL to Azure Database for MySQL Data Migration Wizard - Waiting for data movement to start" lightbox="media/migrate-azure-mysql-consistent-backup/dms-wait-for-binlog-status.png"::: The binlog keeps a record of all the CRUD operations in the source server. The D This binlog position can be used in conjunction with [Data-in replication](../mysql/concepts-data-in-replication.md) or third-party tools (such as Striim or Attunity) that provide for replaying binlog changes to a different server, if required. 
-The binary log is deleted periodically, so the user must take necessary precautions if Change Data Capture (CDC) is used later to migrate the post-migration updates at the source. Configure the **binlog_expire_logs_seconds** parameter on the source server to ensure that binlogs are not purged before the replica commits the changes. If non-zero, binary logs will be purged after **binlog_expire_logs_seconds** seconds. Post successful cut-over, you can reset the value. Users will need to leverage the changes in the binlog to carry out the online migration. Users can take advantage of DMS to provide the initial seeding of the data and then stitch that together with the CDC solution of their choice to implement a minimal downtime migration. +The binary log is deleted periodically, so the user must take necessary precautions if Change Data Capture (CDC) is used later to migrate the post-migration updates at the source. Configure the **binlog_expire_logs_seconds** parameter on the source server to ensure that binlogs are not purged before the replica commits the changes. If non-zero, binary logs are purged after **binlog_expire_logs_seconds** seconds. Post successful cut-over, you can reset the value. Users need to use the changes in the binlog to carry out the online migration. Users can take advantage of DMS to provide the initial seeding of the data and then stitch that together with the CDC solution of their choice to implement a minimal downtime migration. ## Prerequisites To complete the migration successfully with Consistent Backup enabled: - Use the mysql client tool to determine whether log_bin is enabled on the source server. Binary logging isn't always turned on by default; check whether it's enabled before starting the migration. 
The mysql client tool is used to determine whether **log_bin** is enabled on the source by running the command: **SHOW VARIABLES LIKE 'log_bin';** > [!NOTE]-> With Azure Database for MySQL Single Server, which supports up to 4TB, this is not enabled by default. However, if you promote a read replica for the source server and then delete read replica, the parameter will be set to ON. +> With Azure Database for MySQL Single Server, which supports up to 4 TB, this is not enabled by default. However, if you promote a read replica for the source server and then delete the read replica, the parameter is set to ON. - Configure the **binlog_expire_logs_seconds** parameter on the source server to ensure that binlog files are not purged before the replica commits the changes. Post successful cutover, the value can be reset. |
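The two prerequisite checks above can be run from the mysql client; the three-day retention value shown is an example only, not a recommendation:

```sql
-- Confirm binary logging is enabled on the source.
SHOW VARIABLES LIKE 'log_bin';

-- Keep binlogs for, e.g., three days so they aren't purged mid-migration;
-- reset the value after a successful cutover.
SET GLOBAL binlog_expire_logs_seconds = 259200;
```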
event-grid | Create View Manage Namespaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-namespaces.md | Last updated 05/23/2023 A namespace in Azure Event Grid is a logical container for one or more topics, clients, client groups, topic spaces and permission bindings. It provides a unique namespace, allowing you to have multiple resources in the same Azure region. With an Azure Event Grid namespace you can now group related resources together and manage them as a single unit in your Azure subscription. -> [!IMPORTANT] -> The Namespace resource is currently in PREVIEW. This article shows you how to use the Azure portal to create, view and manage an Azure Event Grid namespace. |
event-grid | Mqtt Publish And Subscribe Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-cli.md | In this article, you use the Azure CLI to do the following tasks: ## Generate sample client certificate and thumbprint If you don't already have a certificate, you can create a sample certificate using the [step CLI](https://smallstep.com/docs/step-cli/installation/). Consider installing manually for Windows. After a successful installation of Step, you should open a command prompt in your user profile folder (Win+R type %USERPROFILE%). -To create root and intermediate certificates, run the following command: +After a successful installation of Step, you should open a command prompt in your user profile folder (Win+R type %USERPROFILE%). ++1. To create root and intermediate certificates, run the following command. Remember the password, which needs to be used in the next step. ```powershell step ca init --deployment-type standalone --name MqttAppSamplesCA --dns localhost --address 127.0.0.1:443 --provisioner MqttAppSamplesCAProvisioner ``` -Using the CA files generated to create certificate for the client. +2. Use the generated CA files to create a certificate for the client. Ensure that you use the correct paths for the cert and secrets files in the command. ```powershell step certificate create client1-authnID client1-authnID.pem client1-authnID.key --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --no-password --insecure --not-after 2400h ``` -To view the thumbprint, run the Step command. +3. To view the thumbprint, run the Step command. ```powershell step certificate fingerprint client1-authnID.pem |
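If the step CLI isn't available, openssl can compute the same SHA-256 digest of the DER-encoded certificate (formatted with colon separators rather than step's bare hex); this assumes openssl is installed and the file name matches the example above:

```shell
# SHA-256 fingerprint of the client certificate, equivalent to the
# digest produced by `step certificate fingerprint`.
if [ -f client1-authnID.pem ]; then
  openssl x509 -in client1-authnID.pem -noout -fingerprint -sha256
fi
```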
event-grid | Mqtt Publish And Subscribe Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-portal.md | In this article, you use the Azure portal to do the following tasks: ## Generate sample client certificate and thumbprint If you don't already have a certificate, you can create a sample certificate using the [step CLI](https://smallstep.com/docs/step-cli/installation/). Consider installing manually for Windows. -1. Once you installed Step, in Windows PowerShell, run the command to create root and intermediate certificates. +After a successful installation of Step, you should open a command prompt in your user profile folder (Win+R type %USERPROFILE%). - ```powershell - step ca init --deployment-type standalone --name MqttAppSamplesCA --dns localhost --address 127.0.0.1:443 --provisioner MqttAppSamplesCAProvisioner - ``` -2. Using the CA files generated in step 1 to create certificate for the client. +1. To create root and intermediate certificates, run the following command. Remember the password, which needs to be used in the next step. ++```powershell +step ca init --deployment-type standalone --name MqttAppSamplesCA --dns localhost --address 127.0.0.1:443 --provisioner MqttAppSamplesCAProvisioner +``` ++2. Use the generated CA files to create a certificate for the client. Ensure that you use the correct paths for the cert and secrets files in the command. ++```powershell +step certificate create client1-authnID client1-authnID.pem client1-authnID.key --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --no-password --insecure --not-after 2400h +``` - ```powershell - step certificate create client1-authnID client1-authnID.pem client1-authnID.key --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --no-password --insecure --not-after 2400h - ``` 3. To view the thumbprint, run the Step command. 
- ```powershell - step certificate fingerprint client1-authnID.pem - ``` +```powershell +step certificate fingerprint client1-authnID.pem +``` ## Create a Namespace |
event-hubs | Explore Captured Avro Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/explore-captured-avro-files.md | The Avro files produced by Event Hubs Capture have the following Avro schema: ## Azure Storage Explorer You can verify that captured files were created in the Azure Storage account using tools such as [Azure Storage Explorer][Azure Storage Explorer]. You can download files locally to work on them. -An easy way to explore Avro files is by using the [Avro Tools][Avro Tools] jar from Apache. You can also use [Apache Drill][Apache Drill] for a lightweight SQL-driven experience or [Apache Spark][Apache Spark] to perform complex distributed processing on the ingested data. -+An easy way to explore Avro files is by using the [Avro Tools][Avro Tools] jar from Apache. You can also use [Apache Spark][Apache Spark] to perform complex distributed processing on the ingested data. ## Use Apache Spark [Apache Spark][Apache Spark] is a "unified analytics engine for large-scale data processing." It supports different languages, including SQL, and can easily access Azure Blob storage. There are a few options to run Apache Spark in Azure, and each provides easy access to Azure Blob storage: Event Hubs Capture is the easiest way to get data into Azure. Using Azure Data L [Apache Avro]: https://avro.apache.org/-[Apache Drill]: https://drill.apache.org/ [Apache Spark]: https://spark.apache.org/ [support request]: https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade [Azure Storage Explorer]: https://github.com/microsoft/AzureStorageExplorer/releases Event Hubs Capture is the easiest way to get data into Azure. 
Using Azure Data L [Event Hubs overview]: ./event-hubs-about.md [HDInsight: Address files in Azure storage]: ../hdinsight/hdinsight-hadoop-use-blob-storage.md [Azure Databricks: Azure Blob Storage]:https://docs.databricks.com/spark/latest/data-sources/azure/azure-storage.html-[Apache Drill: Azure Blob Storage Plugin]:https://drill.apache.org/docs/azure-blob-storage-plugin/ [Streaming at Scale: Event Hubs Capture]:https://github.com/yorek/streaming-at-scale/tree/master/event-hubs-capture |
frontdoor | How To Add Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md | After you validate your custom domain, you can associate it to your Azure Front 1. Once the CNAME record is created and the association of the custom domain to the Azure Front Door endpoint completes, traffic starts flowing. - > [!NOTE] - > If HTTPS is enabled, certificate provisioning and propagation may take a few minutes because propagation is being done to all edge locations. + > [!NOTE] + > * If HTTPS is enabled, certificate provisioning and propagation may take a few minutes because propagation is being done to all edge locations. + > * If your domain CNAME is indirectly pointed to a Front Door endpoint, for example, using Azure Traffic Manager for multi-CDN failover, the **DNS state** column shows as **CNAME/Alias record currently not detected**. Azure Front Door can't guarantee 100% detection of the CNAME record in this case. If you've configured an Azure Front Door endpoint to Azure Traffic Manager and still see this message, it doesn't mean your setup is incorrect, and no further action is necessary on your side. ## Verify the custom domain |
governance | Azure Security Benchmarkv1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md | initiative definition. |[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. 
|Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) | |[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. 
To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |
governance | Hipaa Hitrust 9 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. 
To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. 
|deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. 
To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. 
|deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Document and implement wireless access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04b3e7f6-4841-888d-4799-cda19a0084f6) |CMA_0190 - Document and implement wireless access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0190.json) | |[Document wireless access security controls](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8f835d6a-4d13-9a9c-37dc-176cebd37fda) |CMA_C1695 - Document wireless access security controls |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1695.json) | |[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not 
configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |
hdinsight | Apache Hadoop On Premises Migration Best Practices Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-infrastructure.md | description: Learn infrastructure best practices for migrating on-premises Hadoo Previously updated : 06/29/2022 Last updated : 07/25/2023 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - infrastructure best practices |
hdinsight | Hdinsight Troubleshoot Invalidnetworksecuritygroupsecurityrules Cluster Creation Fails | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-invalidnetworksecuritygroupsecurityrules-cluster-creation-fails.md | Title: InvalidNetworkSecurityGroupSecurityRules error - Azure HDInsight description: Cluster Creation Fails with the ErrorCode InvalidNetworkSecurityGroupSecurityRules Previously updated : 06/30/2022 Last updated : 07/25/2023 # Scenario: InvalidNetworkSecurityGroupSecurityRules - cluster creation fails in Azure HDInsight |
hdinsight | Hdinsight Hadoop Stack Trace Error Messages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-stack-trace-error-messages.md | description: Index of Hadoop stack trace error messages in Azure HDInsight. Find Previously updated : 06/29/2022 Last updated : 07/25/2023 # Index of Apache Hadoop in HDInsight troubleshooting articles |
hdinsight | Gateway Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/gateway-best-practices.md | Title: Gateway deep dive and best practices for Apache Hive in Azure HDInsight description: Learn how to navigate the best practices for running Hive queries over the Azure HDInsight gateway Previously updated : 06/29/2022 Last updated : 07/25/2023 # Gateway deep dive and best practices for Apache Hive in Azure HDInsight |
hdinsight | Interactive Query Troubleshoot Error Message Hive View | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-error-message-hive-view.md | Title: Error message not shown in Apache Hive View - Azure HDInsight description: Query fails in Apache Hive View without any details on Azure HDInsight cluster. Previously updated : 06/24/2022 Last updated : 07/25/2023 # Scenario: Query error message not displayed in Apache Hive View in Azure HDInsight |
hdinsight | Troubleshoot Gateway Timeout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/troubleshoot-gateway-timeout.md | Title: Exception when running queries from Apache Ambari Hive View in Azure HDIn description: Troubleshooting steps when running Apache Hive queries through Apache Ambari Hive View in Azure HDInsight. Previously updated : 06/29/2022 Last updated : 07/25/2023 # Exception when running queries from Apache Ambari Hive View in Azure HDInsight |
healthcare-apis | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md | Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser > [!Note] > Azure Health Data Services is the evolved version of Azure API for FHIR, enabling customers to manage FHIR, DICOM, and MedTech services with integrations into other Azure services. To learn about Azure Health Data Services [click here](https://azure.microsoft.com/products/health-data-services/). +## **July 2023** +**Feature enhancement: Change to the exported file name format** +The FHIR service enables customers to export data with the $export operation. Export can be conducted at various levels, such as System, Patient, and Group of patients. The exported file names and the default storage account name have changed. +* Exported file names follow the format \<FHIR Resource Name\>-\<Number\>-\<Number\>.ndjson. The order of the files is not guaranteed to correspond to any ordering of the resources in the database. +* The default storage account name is updated to Export-\<Number\>. ++There is no change to the number of resources added to individual exported files. + ## **June 2023** **Bug Fix: Metadata endpoint URL in capability statement is relative URL** Per the FHIR specification, the metadata endpoint URL in the capability statement needs to be an absolute URL. |
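The renamed export output described above can be sketched as small helpers (illustrative only; the service assigns the numbers itself, and file order is not guaranteed to match resource order in the database):

```python
def export_file_name(resource_type, part, sequence):
    """Build a file name following the new exported-file format,
    <FHIR Resource Name>-<Number>-<Number>.ndjson (illustrative)."""
    return f"{resource_type}-{part}-{sequence}.ndjson"


def default_container_name(number):
    """Default storage account name format after the change."""
    return f"Export-{number}"


print(export_file_name("Patient", 1, 1))   # Patient-1-1.ndjson
print(default_container_name(7))           # Export-7
```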
healthcare-apis | Dicom Extended Query Tags Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-extended-query-tags-overview.md | -By default, the DICOM service supports querying on the DICOM tags specified in the [conformance statement](dicom-services-conformance-statement.md#searchable-attributes). By enabling extended query tags, the list of tags can easily be expanded based on the application's needs. +By default, the DICOM service supports querying on the DICOM tags specified in the [conformance statement](dicom-services-conformance-statement-v2.md#searchable-attributes). By enabling extended query tags, the list of tags can easily be expanded based on the application's needs. Using the APIs listed below, users can index their DICOM studies, series, and instances on both standard and private DICOM tags such that they can be specified in QIDO-RS queries. GET .../operations/{operationId} ### Tag status -The [Status](#extended-query-tag-status) of extended query tag indicates current status. When an extended query tag is first added, its status is set to `Adding`, and a long-running operation is kicked off to reindex existing DICOM instances. After the operation is completed, the tag status is updated to `Ready`. The extended query tag can now be used in [QIDO](dicom-services-conformance-statement.md#search-qido-rs). +The [Status](#extended-query-tag-status) of an extended query tag indicates its current status. When an extended query tag is first added, its status is set to `Adding`, and a long-running operation is kicked off to reindex existing DICOM instances. After the operation is completed, the tag status is updated to `Ready`. The extended query tag can now be used in [QIDO](dicom-services-conformance-statement-v2.md#search-qido-rs). 
For example, if the tag Manufacturer Model Name (0008,1090) is added and is in `Ready` status, the following queries can thereafter be used to filter stored instances by the Manufacturer Model Name. |
healthcare-apis | Dicom Service V2 Api Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-service-v2-api-changes.md | + + Title: DICOM Service API v2 Changes - Azure Health Data Services +description: This guide gives an overview of the changes in the v2 API for the DICOM service. +++++ Last updated : 7/21/2023++++# DICOM Service API v2 Changes ++This reference guide provides you with a summary of the changes in the V2 API of the DICOM service. To see the full set of capabilities in v2, see the [DICOM Conformance Statement v2](dicom-services-conformance-statement-v2.md). ++## Summary of changes in v2 ++### Store ++#### Lenient validation of optional attributes +In previous versions, a Store request would fail if any of the [required](dicom-services-conformance-statement-v2.md#store-required-attributes) or [searchable attributes](dicom-services-conformance-statement-v2.md#searchable-attributes) failed validation. Beginning with v2, the request fails only if **required attributes** fail validation. ++Failed validation of attributes not required by the API results in the file being stored with a warning in the response. Warnings result in an HTTP return code of `202 Accepted` and the response payload will contain the `WarningReason` tag (`0008, 1196`). ++A warning is given about each failing attribute per instance. When a sequence contains an attribute that fails validation, or when there are multiple issues with a single attribute, only the first failing attribute reason is noted. ++There are some notable behaviors for optional attributes that fail validation: + * Searches for the attribute that failed validation will not return the study/series/instance. + * The attributes are not returned when retrieving metadata via WADO `/metadata` endpoints. + +Retrieving a study/series/instance will always return the original binary files with the original attributes, even if those attributes failed validation. 
++If an attribute is padded with nulls, the attribute is indexed when searchable and is stored as is in dicom+json metadata. No validation warning is provided. ++### Retrieve ++#### Single frame retrieval support +Single frame retrieval is supported by adding the following `Accept` header: +* `application/octet-stream; transfer-syntax=*` ++### Search ++#### Search results may be incomplete for extended query tags with validation warnings +In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag return `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings on [searchable tags](dicom-services-conformance-statement-v2.md#searchable-attributes), subsequent searches containing these tags won't consider any DICOM SOP instance that produced a warning. This behavior may result in incomplete search results. To correct an attribute, delete the stored instance and upload the corrected data. ++#### Fewer Study, Series, and Instance attributes are returned by default +The set of attributes returned by default has been reduced to improve performance. See the detailed list in the [search response](./dicom-services-conformance-statement-v2.md#search-response) documentation. ++#### Null padded attributes can be searched for with or without padding +When an attribute was stored using null padding, it can be searched for with or without the null padding in uri encoding. Results retrieved will be for attributes stored both with and without null padding. 
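The single frame retrieval support noted above amounts to sending a WADO-RS frames request with the new `Accept` value. A minimal sketch; the `/v2` path prefix and the UIDs are placeholders, not values from this article:

```python
def single_frame_request(study_uid, series_uid, instance_uid, frame):
    """Compose the WADO-RS frames path and the Accept header that
    requests a single frame with any transfer syntax (illustrative;
    the /v2 prefix and UIDs are placeholder assumptions)."""
    path = (f"/v2/studies/{study_uid}/series/{series_uid}"
            f"/instances/{instance_uid}/frames/{frame}")
    headers = {"Accept": "application/octet-stream; transfer-syntax=*"}
    return path, headers


path, headers = single_frame_request("1.2.3", "4.5.6", "7.8.9", 1)
print(path)
print(headers["Accept"])
```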
++### Operations ++#### The `completed` status has been renamed to `succeeded` +To align with [Microsoft's REST API guidelines](https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md), the `completed` status has been renamed to `succeeded`. ++### Change Feed ++#### Change feed now accepts a time range +The Change Feed API now accepts optional `startTime` and `endTime` parameters to help scope the results. Changes within a time range can still be paginated using the existing `offset` and `limit` parameters. The offset is relative to the time window defined by `startTime` and `endTime`. For example, the fifth change feed entry starting from 7/24/2023 at 09:00 AM UTC would use the query string `?startTime=2023-07-24T09:00:00Z&offset=5`. ++For v2, it's recommended to always include a time range to improve performance. |
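The Change Feed time-range parameters described above can be combined into a request URL like this (a hypothetical sketch: the host name is a placeholder and the `/v2/changefeed` path is an assumption, not taken from this article):

```python
from urllib.parse import urlencode


def change_feed_url(base, start_time, end_time=None, offset=0, limit=100):
    """Compose a Change Feed URL with an optional time window.
    offset and limit paginate within the window defined by startTime
    and endTime; both bounds are optional here for illustration."""
    params = {"startTime": start_time, "offset": offset, "limit": limit}
    if end_time is not None:
        params["endTime"] = end_time
    return f"{base}/v2/changefeed?{urlencode(params)}"


# The fifth change feed entry starting from 7/24/2023 at 09:00 AM UTC:
print(change_feed_url("https://example-dicom.azurehealthcareapis.com",
                      "2023-07-24T09:00:00Z", offset=5))
```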
healthcare-apis | Dicom Services Conformance Statement V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md | -> API version 2 is in **Preview** and should be used only for testing. +> API version 2 is the latest API version. For a list of changes in v2 compared to v1, see [DICOM Service API v2 Changes](dicom-service-v2-api-changes.md). The Medical Imaging Server for DICOM supports a subset of the DICOMweb™ Standard. Support includes: The following parameters for each query are supported: | Key | Support Value(s) | Allowed Count | Description | | : | :- | : | :- | | `{attributeID}=` | `{value}` | 0...N | Search for attribute/value matching in query. |-| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Both, public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes are returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. | +| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Both public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes are returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. | | `limit=` | `{value}` | 0..1 | Integer value to limit the number of values returned in the response.<br/>Value can be in the range 1 >= x <= 200. Defaulted to 100. | | `offset=` | `{value}` | 0..1 | Skip `{value}` results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response is returned. 
| | `fuzzymatching=` | `true` / `false` | 0..1 | If true fuzzy matching is applied to PatientName attribute. It does a prefix word match of any name part inside PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" all match. However "ohn" doesn't match. | We support the following matching types. | Search Type | Supported Attribute | Example | | :- | : | : |-| Range Query | `StudyDate`/`PatientBirthDate` | `{attributeID}={value1}-{value2}`. For date/ time values, we support an inclusive range on the tag. This is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. | +| Range Query | `StudyDate`/`PatientBirthDate` | `{attributeID}={value1}-{value2}`. For date/ time values, we support an inclusive range on the tag. This range is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. | | Exact Match | All supported attributes | `{attributeID}={value1}` | | Fuzzy Match | `PatientName`, `ReferringPhysicianName` | Matches any component of the name that starts with the value. 
| The query API returns one of the following status codes in the response: ### Additional notes * Querying using the `TimezoneOffsetFromUTC (00080201)` isn't supported.-* The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range, will be resolved. +* The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved. * When the target resource is Study/Series, there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have a different patientName. In this case, the latest wins and you can search only on the latest data.-* Paged results are optimized to return matched _newest_ instance first, this may result in duplicate records in subsequent pages if newer data matching the query was added. +* Paged results are optimized to return matched _newest_ instance first, possibly resulting in duplicate records in subsequent pages if newer data matching the query was added. * Matching is case-insensitive and accent-insensitive for PN VR types. * Matching is case-insensitive and accent-sensitive for other string VR types. * If a single-valued data element incorrectly has multiple values, only the first value is indexed. There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/curr * `CANCELED` * `COMPLETED` -This transaction will only succeed against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. 
While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` will return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction. +This transaction only succeeds against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` will return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction. | Method | Path | Description | | : | :- | :-- | To update a Workitem currently in the `SCHEDULED` state, the `Transaction UID` a The `Content-Type` header is required, and must have the value `application/dicom+json`. -The request payload contains a dataset with the changes to be applied to the target Workitem. When modifying a sequence, the request must include all Items in the sequence, not just the Items to be modified. +The request payload contains a dataset with the changes to be applied to the target Workitem. When a sequence is modified, the request must include all Items in the sequence, not just the Items to be modified. When multiple Attributes need to be updated as a group, do this as multiple Attributes in a single request, not as multiple requests. There are many requirements related to DICOM data attributes in the context of a specific transaction. 
Attributes may be The following parameters for each query are supported: | Key | Support Value(s) | Allowed Count | Description | | : | :- | : | :- | | `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. |-| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Only top-level attributes can be specified to be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes will be returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using 'all'. | +| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Only top-level attributes can be specified to be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes will be returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using 'all'. | | `limit=` | `{value}` | 0...1 | Integer value to limit the number of values returned in the response. Value can be between the range `1 >= x <= 200`. Defaulted to `100`. | | `offset=` | `{value}` | 0...1 | Skip {value} results. If an offset is provided larger than the number of search query results, a `204 (no content)` response is returned. | | `fuzzymatching=` | `true` \| `false` | 0...1 | If true fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR). It does a prefix word match of any name part inside these attributes. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` all match. However `ohn` will **not** match. 
| We support these matching types: | Search Type | Supported Attribute | Example | | :- | : | : |-| Range Query | `ScheduledProcedureStepStartDateTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This will be mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. | +| Range Query | `ScheduledProcedureStepStartDateTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This range will be mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid; however, `{attributeID}=-` is invalid. | | Exact Match | All supported attributes | `{attributeID}={value1}` | | Fuzzy Match | `PatientName` | Matches any component of the name that starts with the value. | The query API returns one of the following status codes in the response: #### Additional Notes -The query API will not return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range, will be resolved. +The query API won't return `413 (request entity too large)`.
If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved. * Paged results are optimized to return the newest matched instances first; this may result in duplicate records in subsequent pages if newer data matching the query was added. * Matching is case insensitive and accent insensitive for PN VR types. |
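The query parameters above combine into an ordinary URL query string. The following is a minimal Python sketch of how they might be assembled; the helper name is hypothetical and not part of the service:

```python
from urllib.parse import urlencode

def build_query(attribute_filters, include_fields=(), limit=None, offset=None, fuzzy=False):
    """Assemble a QIDO-RS query string from the parameters described above."""
    params = list(attribute_filters.items())
    # includefield may be repeated, once per additional attribute to return.
    params += [("includefield", field) for field in include_fields]
    if limit is not None:
        if not 1 <= limit <= 200:
            raise ValueError("limit must be in the range 1 <= x <= 200")
        params.append(("limit", limit))
    if offset is not None:
        params.append(("offset", offset))
    if fuzzy:
        params.append(("fuzzymatching", "true"))
    return urlencode(params)

# Inclusive date/time range match; {value1}- and -{value2} are valid open-ended
# forms, but a bare "-" is not.
qs = build_query(
    {"ScheduledProcedureStepStartDateTime": "20230701-20230731"},
    include_fields=["PatientName"],
    limit=50,
)
```

The resulting string is appended after `?` on the search path, as in the curl examples elsewhere in this digest.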
healthcare-apis | Dicom Services Conformance Statement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md | +> [!NOTE] +> API version 2 is the latest API version and should be used in place of v1. See the [DICOM Conformance Statement v2](dicom-services-conformance-statement-v2.md) for details. + The Medical Imaging Server for DICOM supports a subset of the DICOMweb™ Standard. Support includes: * [Studies Service](#studies-service) The `quality` query parameter is also supported. An integer value between `1` an | `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. |-| `404 (Not Found)` | The specified DICOM resource couldn't be found, or for rendered request the instance did not contain pixel data. | +| `404 (Not Found)` | The specified DICOM resource couldn't be found, or, for a rendered request, the instance didn't contain pixel data. | | `406 (Not Acceptable)` | The specified `Accept` header isn't supported, or for rendered and transcode requests the file requested was too large. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. | The following parameters for each query are supported: | Key | Support Value(s) | Allowed Count | Description | | :-- | :-- | : | :- | | `{attributeID}=` | `{value}` | 0...N | Search for attribute/value matching in query. |-| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response.
Both, public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes are returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. | +| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Both public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes are returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. | | `limit=` | `{value}` | 0..1 | Integer value to limit the number of values returned in the response.<br/>Value can be in the range 1 <= x <= 200. Defaults to 100. | | `offset=` | `{value}` | 0..1 | Skip `{value}` results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response is returned. | | `fuzzymatching=` | `true` / `false` | 0..1 | If true, fuzzy matching is applied to the PatientName attribute. It does a prefix word match of any name part inside the PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" all match. However, "ohn" doesn't match. | The response body is empty. The status code is the only useful information retur ## Worklist Service (UPS-RS) -The DICOM service supports the Push and Pull SOPs of the [Worklist Service (UPS-RS)](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_11). This service provides access to one Worklist containing Workitems, each of which represents a Unified Procedure Step (UPS). +The DICOM service supports the Push and Pull SOPs of the [Worklist Service (UPS-RS)](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_11).
The Worklist service provides access to one Worklist containing Workitems, each of which represents a Unified Procedure Step (UPS). Throughout, the variable `{workitem}` in a URI template stands for a Workitem UID. This transaction retrieves a Workitem. It corresponds to the UPS DIMSE N-GET ope Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.5 -If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) Attribute. This is necessary to preserve this Attribute's role as an access lock. +If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) Attribute. This is necessary to preserve the attribute's role as an access lock. | Method | Path | Description | | : | :- | : | To update a Workitem currently in the `SCHEDULED` state, the `Transaction UID` a The `Content-Type` header is required, and must have the value `application/dicom+json`. -The request payload contains a dataset with the changes to be applied to the target Workitem. When modifying a sequence, the request must include all Items in the sequence, not just the Items to be modified. +The request payload contains a dataset with the changes to be applied to the target Workitem. When a sequence is modified, the request must include all Items in the sequence, not just the Items to be modified. When multiple Attributes need to be updated as a group, do this as multiple Attributes in a single request, not as multiple requests. There are many requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be The query API returns one of the following status codes in the response: #### Additional Notes -The query API will not return `413 (request entity too large)`. 
If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range, will be resolved. +The query API won't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved. * Paged results are optimized to return the newest matched instances first; this may result in duplicate records in subsequent pages if newer data matching the query was added. * Matching is case insensitive and accent insensitive for PN VR types. |
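The Transaction UID locking behavior described for Workitems above can be pictured as a small state machine. This is a toy model of the rules only, not the DICOM service's implementation:

```python
import uuid

class Workitem:
    """Toy model of the UPS Transaction UID access lock (illustrative only)."""

    def __init__(self):
        self.state = "SCHEDULED"
        self.transaction_uid = None

    def claim(self):
        """Claim ownership: set a Transaction UID and move to IN PROGRESS."""
        if self.state != "SCHEDULED":
            raise PermissionError("only SCHEDULED Workitems can be claimed")
        self.transaction_uid = str(uuid.uuid4())
        self.state = "IN PROGRESS"
        return self.transaction_uid

    def update(self, transaction_uid):
        """Once claimed, modifications must present the matching Transaction UID."""
        if self.state == "IN PROGRESS" and transaction_uid != self.transaction_uid:
            raise PermissionError("incorrect Transaction UID")
        return True
```

For example, `claim()` succeeds once against a `SCHEDULED` Workitem; after that, any `update()` without the matching UID is refused, mirroring why the retrieved Workitem must never echo the Transaction UID back.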
healthcare-apis | Dicom Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-overview.md | DICOM (Digital Imaging and Communications in Medicine) is the international stan ## DICOM service -The DICOM service is a managed service within [Azure Health Data Services](../healthcare-apis-overview.md) that ingests and persists DICOM objects at multiple thousands of images per second. It facilitates communication and transmission of imaging data with any DICOMweb™ enabled systems or applications via DICOMweb Standard APIs like [Store (STOW-RS)](dicom-services-conformance-statement.md#store-stow-rs), [Search (QIDO-RS)](dicom-services-conformance-statement.md#search-qido-rs), [Retrieve (WADO-RS)](dicom-services-conformance-statement.md#retrieve-wado-rs). It's backed by a managed Platform-as-a Service (PaaS) offering in the cloud with complete [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html) compliance that you can upload PHI data to the DICOM service and exchange it through secure networks. +The DICOM service is a managed service within [Azure Health Data Services](../healthcare-apis-overview.md) that ingests and persists DICOM objects at multiple thousands of images per second. It facilitates communication and transmission of imaging data with any DICOMweb™ enabled systems or applications via DICOMweb Standard APIs like [Store (STOW-RS)](dicom-services-conformance-statement-v2.md#store-stow-rs), [Search (QIDO-RS)](dicom-services-conformance-statement-v2.md#search-qido-rs), [Retrieve (WADO-RS)](dicom-services-conformance-statement-v2.md#retrieve-wado-rs). It's backed by a managed platform as a service (PaaS) offering in the cloud with complete [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html) compliance, so you can upload PHI data to the DICOM service and exchange it through secure networks.
- **PHI Compliant**: Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. The DICOM service implements a layered, in-depth defense and advanced threat protection for your data.-- **Extended Query Tags**: Additionally index DICOM studies, series, and instances on both standard and private DICOM tags by expanding list of tags that are already specified within [DICOM Conformance Statement](dicom-services-conformance-statement.md).+- **Extended Query Tags**: Additionally index DICOM studies, series, and instances on both standard and private DICOM tags by expanding the list of tags that are already specified within the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md). - **Change Feed**: Access ordered, guaranteed, immutable, read-only logs of all the changes that occur in the DICOM service. Client applications can read these logs at any time independently, in parallel and at their own pace. - **DICOMcast**: Via DICOMcast, the DICOM service can inject DICOM metadata into a FHIR service, or FHIR server, as an imaging study resource allowing a single source of truth for both clinical data and imaging metadata. This capability is available as an open-source project that can be self-hosted in Azure. Learn more about [deploying DICOMcast](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md). - **Region availability**: The DICOM service has a wide range of [availability across many regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir&regions=all) with multi-region failover protection, and is continuously expanding. |
healthcare-apis | Dicomweb Standard Apis C Sharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-c-sharp.md | This response should return the only frame from the red-triangle. Validate that ## Query DICOM (QIDO) > [!NOTE]-> Refer to the [DICOM Conformance Statement](dicom-services-conformance-statement.md#supported-search-parameters) for supported DICOM attributes. +> Refer to the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md#supported-search-parameters) for supported DICOM attributes. ### Search for studies |
healthcare-apis | Dicomweb Standard Apis Curl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-curl.md | In the following examples, we'll search for items using their unique identifiers This request enables searches for one or more studies by DICOM attributes. -For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md). +For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md). _Details:_ * Path: ../studies?StudyInstanceUID={study} curl --request GET "{Service URL}/v{version}/studies?StudyInstanceUID=1.2.826.0. This request enables searches for one or more series by DICOM attributes. -For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md). +For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md). _Details:_ * Path: ../series?SeriesInstanceUID={series} curl --request GET "{Service URL}/v{version}/series?SeriesInstanceUID=1.2.826.0. This request enables searches for one or more series within a single study by DICOM attributes. -For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md). +For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md). _Details:_ * Path: ../studies/{study}/series?SeriesInstanceUID={series} curl --request GET "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.498.1 This request enables searches for one or more instances by DICOM attributes. -For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md). 
+For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md). _Details:_ * Path: ../instances?SOPInstanceUID={instance} curl --request GET "{Service URL}/v{version}/instances?SOPInstanceUID=1.2.826.0. This request enables searches for one or more instances within a single study by DICOM attributes. -For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md). +For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md). _Details:_ * Path: ../studies/{study}/instances?SOPInstanceUID={instance} curl --request GET "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.498.1 This request enables searches for one or more instances within a single study and single series by DICOM attributes. -For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md) +For more information about the supported DICOM attributes, see the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md) _Details:_ * Path: ../studies/{study}/series/{series}/instances?SOPInstanceUID={instance} |
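The search paths in the curl examples above narrow from all instances down to instances within a single study and series. A small Python helper (hypothetical, shown only to illustrate how the paths compose) makes the pattern explicit:

```python
def instance_search_url(service_url, version, study=None, series=None, sop_instance_uid=None):
    """Compose QIDO-RS instance search paths like those in the curl examples."""
    parts = [service_url, f"v{version}"]
    if study is not None:
        parts += ["studies", study]
        if series is not None:
            # A series can only be scoped inside a study.
            parts += ["series", series]
    parts.append("instances")
    url = "/".join(parts)
    if sop_instance_uid is not None:
        url += f"?SOPInstanceUID={sop_instance_uid}"
    return url

# Instances within one study, filtered by SOP Instance UID (illustrative UIDs):
url = instance_search_url(
    "https://example.dicom.azurehealthcareapis.com", 1,
    study="1.2.826.0.1.3680043.8.498.1",
    sop_instance_uid="1.2.826.0.1.3680043.8.498.2",
)
```

The same helper with no `study` argument yields the unscoped `../instances` search path.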
healthcare-apis | Dicomweb Standard Apis Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-python.md | response = client.get(url, headers=headers) #, verify=False) In the following examples, we search for items using their unique identifiers. You can also search for other attributes, such as PatientName. -Refer to the [DICOM Conformance Statement](dicom-services-conformance-statement.md#supported-search-parameters) document for supported DICOM attributes. +Refer to the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md#supported-search-parameters) document for supported DICOM attributes. ### Search for studies |
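A study search of the kind referenced above can be sketched with only the Python standard library. The service URL and token below are placeholders, and the helper name is illustrative (the article's own samples use a preconfigured `client` object instead):

```python
import urllib.request
from urllib.parse import quote

def build_study_search_request(service_url, token, patient_name):
    """Build a QIDO-RS study search request; auth is a bearer token (placeholder)."""
    url = f"{service_url}/studies?PatientName={quote(patient_name)}"
    return urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/dicom+json",
    })

request = build_study_search_request(
    "https://example.dicom.azurehealthcareapis.com/v1",  # placeholder Service URL
    "<access-token>",                                    # placeholder AAD token
    "John^Doe",
)
# urllib.request.urlopen(request) would then execute the search.
```

Note the `Accept: application/dicom+json` header, which asks for the JSON representation of the matching DICOM datasets.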
healthcare-apis | Dicomweb Standard Apis With Dicom Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md | This tutorial provides an overview of how to use DICOMweb™ Standard APIs w The DICOM service supports a subset of DICOMweb™ Standard that includes: -* [Store (STOW-RS)](dicom-services-conformance-statement.md#store-stow-rs) -* [Retrieve (WADO-RS)](dicom-services-conformance-statement.md#retrieve-wado-rs) -* [Search (QIDO-RS)](dicom-services-conformance-statement.md#search-qido-rs) -* [Delete](dicom-services-conformance-statement.md#delete) +* [Store (STOW-RS)](dicom-services-conformance-statement-v2.md#store-stow-rs) +* [Retrieve (WADO-RS)](dicom-services-conformance-statement-v2.md#retrieve-wado-rs) +* [Search (QIDO-RS)](dicom-services-conformance-statement-v2.md#search-qido-rs) +* [Delete](dicom-services-conformance-statement-v2.md#delete) Additionally, the following non-standard API(s) are supported: * [Change Feed](dicom-change-feed-overview.md) * [Extended Query Tags](dicom-extended-query-tags-overview.md) -To learn more about our support of DICOM Web Standard APIs, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md) reference document. +To learn more about our support of DICOM Web Standard APIs, see the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md) reference document. ## Prerequisites Once deployment is complete, you can use the Azure portal to navigate to the new ## Overview of various methods to use with DICOM service -Because DICOM service is exposed as a REST API, you can access it using any modern development language. For language-agnostic information on working with the service, see [DICOM Services Conformance Statement](dicom-services-conformance-statement.md). +Because DICOM service is exposed as a REST API, you can access it using any modern development language. 
For language-agnostic information on working with the service, see [DICOM Services Conformance Statement](dicom-services-conformance-statement-v2.md). To see language-specific examples, refer to the examples below. You can view Postman collection examples in several languages including: |
healthcare-apis | Get Started With Analytics Dicom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-analytics-dicom.md | + + Title: Get started using DICOM data in analytics workloads - Azure Health Data Services +description: This guide demonstrates how to use Azure Data Factory and Microsoft Fabric to perform analytics on DICOM data. +++++ Last updated : 07/14/2023++++# Get Started using DICOM Data in Analytics Workloads ++This article details how to get started using DICOM data in analytics workloads with Azure Data Factory and Microsoft Fabric. ++## Prerequisites +Before getting started, ensure you've completed the following steps: ++* Deploy an instance of the [DICOM Service](deploy-dicom-services-in-azure.md). +* Create a [storage account with Azure Data Lake Storage Gen2 (ADLS Gen2) capabilities](../../storage/blobs/create-data-lake-storage-account.md) by enabling a hierarchical namespace. + * Create a container to store DICOM metadata, for example, named "dicom". +* Create an instance of [Azure Data Factory (ADF)](../../data-factory/quickstart-create-data-factory.md). + * Ensure that a [system assigned managed identity](../../data-factory/data-factory-service-identity.md) has been enabled. +* Create a [Lakehouse](/fabric/data-engineering/tutorial-build-lakehouse) in Microsoft Fabric. +* Add role assignments to the ADF system assigned managed identity for the DICOM Service and the ADLS Gen2 storage account. + * Add the **DICOM Data Reader** role to grant permission to the DICOM service. + * Add the **Storage Blob Data Contributor** role to grant permission to the ADLS Gen2 account. ++## Configure an Azure Data Factory pipeline for the DICOM service ++In this example, an Azure Data Factory [pipeline](../../data-factory/concepts-pipelines-activities.md) will be used to write DICOM attributes for instances, series, and studies into a storage account in a [Delta table](https://delta.io/) format.
++From the Azure portal, open the Azure Data Factory instance and select **Launch Studio** to begin. +++### Create linked services +Azure Data Factory pipelines read from _data sources_ and write to _data sinks_, typically other Azure services. These connections to other services are managed as _linked services_. The pipeline in this example will read data from a DICOM service and write its output to a storage account, so a linked service must be created for both. ++#### Create linked service for the DICOM service +1. In the Azure Data Factory Studio, select **Manage** from the navigation menu. Under **Connections**, select **Linked services** and then select **New**. +++2. On the New linked service panel, search for "REST". Select the **REST** tile and then **Continue**. +++3. Enter a **Name** and **Description** for the linked service. +++4. In the **Base URL** field, enter the Service URL for your DICOM service. For example, a DICOM service named `contosoclinic` in the `contosohealth` workspace will have the Service URL `https://contosohealth-contosoclinic.dicom.azurehealthcareapis.com`. ++5. For Authentication type, select **System Assigned Managed Identity**. ++6. For **AAD resource**, enter `https://dicom.healthcareapis.azure.com`. This URL is the same for all DICOM service instances. ++7. After populating the required fields, select **Test connection** to ensure the identity's roles are correctly configured. ++8. When the connection test is successful, select **Create**. ++#### Create linked service for Azure Data Lake Storage Gen2 +1. In the Azure Data Factory Studio, select **Manage** from the navigation menu. Under **Connections**, select **Linked services** and then select **New**. ++2. On the New linked service panel, search for "Azure Data Lake Storage Gen2". Select the **Azure Data Lake Storage Gen2** tile and then **Continue**. +++3. Enter a **Name** and **Description** for the linked service. +++4.
For Authentication type, select **System Assigned Managed Identity**. ++5. Enter the storage account details by entering the URL to the storage account manually or by selecting the Azure subscription and storage account from dropdowns. ++6. After populating the required fields, select **Test connection** to ensure the identity's roles are correctly configured. ++7. When the connection test is successful, select **Create**. ++### Create a pipeline for DICOM data +Azure Data Factory pipelines are a collection of _activities_ that perform a task, like copying DICOM metadata to Delta tables. This section details the creation of a pipeline that regularly synchronizes DICOM data to Delta tables as data is added to, updated in, and deleted from a DICOM service. ++1. Select **Author** from the navigation menu. In the **Factory Resources** pane, select the plus (+) to add a new resource. Select **Pipeline** and then **Template gallery** from the menu. +++2. In the Template gallery, search for "DICOM". Select the **Copy DICOM Metadata Changes to ADLS Gen2 in Delta Format** tile and then **Continue**. +++3. In the **Inputs** section, select the linked services previously created for the DICOM service and Azure Data Lake Storage Gen2 account. +++4. Select **Use this template** to create the new pipeline. ++## Scheduling a pipeline +Pipelines are scheduled by _triggers_. There are different types of triggers, including _schedule triggers_, which allow pipelines to be triggered on a wall-clock schedule, and _manual triggers_, which trigger pipelines on demand. In this example, a _tumbling window trigger_ is used to periodically run the pipeline given a starting point and a regular time interval. For more information about triggers, see the [pipeline execution and triggers article](../../data-factory/concepts-pipeline-execution-triggers.md). ++### Create a new tumbling window trigger +1. Select **Author** from the navigation menu.
Select the pipeline for the DICOM service and select **Add trigger** and **New/Edit** from the menu bar. +++2. In the **Add triggers** panel, select the **Choose trigger** dropdown and then **New**. ++3. Enter a **Name** and **Description** for the trigger. +++4. Select **Tumbling window** as the type. ++5. To configure a pipeline that runs hourly, set the recurrence to **1 Hour**. ++6. Expand the **Advanced** section and enter a **Delay** of **15 minutes**. This delay allows any pending operations at the end of an hour to complete before processing. ++7. Set the **Max concurrency** to **1** to ensure consistency across tables. ++8. Select **Ok** to continue configuring the trigger run parameters. ++### Configure trigger run parameters +Triggers not only define when to run a pipeline; they also include [parameters](../../data-factory/how-to-use-trigger-parameterization.md) that are passed to the pipeline execution. The **Copy DICOM Metadata Changes to Delta** template defines a few parameters detailed in the table below. If no value is supplied during configuration, the listed default value is used for each parameter. ++| Parameter name | Description | Default value | +| :- | :- | : | +| BatchSize | The maximum number of changes to retrieve at a time from the change feed (max 200). | `200` | +| ApiVersion | The API version for the Azure DICOM Service (min 2). | `2` | +| StartTime | The inclusive start time for DICOM changes. | `0001-01-01T00:00:00Z` | +| EndTime | The exclusive end time for DICOM changes. | `9999-12-31T23:59:59Z` | +| ContainerName | The container name for the resulting Delta tables. | `dicom` | +| InstanceTablePath | The path containing the Delta table for DICOM SOP instances within the container.| `instance` | +| SeriesTablePath | The path containing the Delta table for DICOM series within the container. | `series` | +| StudyTablePath | The path containing the Delta table for DICOM studies within the container.
| `study` | +| RetentionHours | The maximum retention in hours for data in the Delta tables. | `720` | ++1. In the **Trigger run parameters** panel, enter the **ContainerName** that matches the name of the storage container created in the prerequisites. +++2. For **StartTime**, use the system variable `@formatDateTime(trigger().outputs.windowStartTime)`. ++3. For **EndTime**, use the system variable `@formatDateTime(trigger().outputs.windowEndTime)`. ++> [!NOTE] +> Only tumbling window triggers support the system variables: +> * `@trigger().outputs.windowStartTime` and +> * `@trigger().outputs.windowEndTime` +> +> Schedule triggers use different system variables: +> * `@trigger().scheduledTime` and +> * `@trigger().startTime` +> +> Learn more about [trigger types](../../data-factory/concepts-pipeline-execution-triggers.md#trigger-type-comparison). ++4. Select **Save** to create the new trigger. Be sure to select **Publish** on the menu bar to begin your trigger running on the defined schedule. +++After the trigger is published, it can be triggered manually using the **Trigger now** option. If the start time was set to a value in the past, the pipeline will start immediately. ++## Monitoring pipeline runs +Trigger runs and their associated pipeline runs can be monitored in the **Monitor** tab. Here, users can browse when each pipeline ran, how long it took to execute, and potentially debug any problems that arose. +++## Microsoft Fabric +[Microsoft Fabric](https://www.microsoft.com/microsoft-fabric) is an all-in-one analytics solution that sits on top of [Microsoft OneLake](/fabric/onelake/onelake-overview). With the use of [Microsoft Fabric Lakehouse](/fabric/data-engineering/lakehouse-overview), data in OneLake can be managed, structured, and analyzed in a single location. Any data outside of OneLake, written to Azure Data Lake Storage Gen2, can be connected to OneLake as shortcuts to take advantage of Fabric's suite of tools. ++### Creating shortcuts +1.
Navigate to the lakehouse created in the prerequisites. In the **Explorer** view, select the triple-dot menu (...) next to the **Tables** folder. ++2. Select **New shortcut** to create a new shortcut to the storage account that contains the DICOM analytics data. +++3. Select **Azure Data Lake Storage Gen2** as the source for the shortcut. +++4. Under **Connection settings**, enter the **URL** used in the [Linked Services](#create-linked-service-for-azure-data-lake-storage-gen2) section above. +++5. Select an existing connection or create a new connection, selecting the Authentication kind you want to use. ++> [!NOTE] +> For authenticating between Azure Data Lake Storage Gen2 and Microsoft Fabric, there are a few options, including an organizational account and service principal; it is not recommended to use account keys or Shared Access Signature (SAS) tokens. ++6. Select **Next**. ++7. Enter a **Shortcut Name** that represents the data created by the Azure Data Factory pipeline. For example, for the `instance` Delta table, the shortcut name should probably be **instance**. ++8. Enter the **Sub Path** that matches the `ContainerName` parameter from the [run parameters](#configure-trigger-run-parameters) configuration and the name of the table for the shortcut. For example, use "/dicom/instance" for the Delta table with the path `instance` in the `dicom` container. ++9. Select **Create** to create the shortcut. ++10. Repeat steps 2-9 to add the remaining shortcuts to the other Delta tables in the storage account (for example, `series` and `study`). ++After the shortcuts have been created, expanding a table will show the names and types of the columns. +++### Running notebooks +Once the tables have been created in the lakehouse, they can be queried from [Microsoft Fabric notebooks](/fabric/data-engineering/how-to-use-notebook). Notebooks may be created directly from the lakehouse by selecting **Open Notebook** from the menu bar.
++On the notebook page, the contents of the lakehouse can still be viewed on the left-hand side, including the newly added tables. At the top of the page, select the language for the notebook (the language may also be configured for individual cells). The following example uses Spark SQL. ++#### Query tables using Spark SQL +In the cell editor, enter a simple Spark SQL query like a `SELECT` statement. ++```sql +SELECT * FROM instance +``` ++This query selects all of the contents of the `instance` table. When ready, select the **Run cell** button to execute the query. +++After a few seconds, the results of the query should appear in a table beneath the cell (the time may be longer if this is the first Spark query in the session, as the Spark context needs to be initialized). +++## Summary +In this article, you learned how to: +* Use Azure Data Factory templates to create a pipeline from the DICOM service to an Azure Data Lake Storage Gen2 account +* Configure a trigger to extract DICOM metadata on an hourly schedule +* Use shortcuts to connect DICOM data in a storage account to a Microsoft Fabric lakehouse +* Use notebooks to query for DICOM data in the lakehouse ++## Next steps ++Learn more about Azure Data Factory pipelines: ++* [Pipelines and activities in Azure Data Factory](../../data-factory/concepts-pipelines-activities.md) ++Learn more about using Microsoft Fabric notebooks: ++* [How to use Microsoft Fabric notebooks](/fabric/data-engineering/how-to-use-notebook) |
healthcare-apis | References For Dicom Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md | This article describes our open-source projects on GitHub that provide source co * [Azure Health Data Services Workshop](https://github.com/microsoft/azure-health-data-services-workshop): This workshop presents a series of hands-on activities to help users gain new skills working with Azure Health Data Services capabilities. The DICOM service challenge includes deployment of the service, exploration of the core API capabilities, a Postman collection to simplify exploration, and instructions for configuring a ZFP DICOM viewer. +### Using the DICOM service with the OHIF viewer ++* [Azure DICOM service with OHIF viewer](https://github.com/microsoft/dicom-ohif): The [OHIF viewer](https://ohif.org/) is an open-source, non-diagnostic DICOM viewer that uses DICOMweb APIs to find and render DICOM images. This project provides guidance and sample templates for deploying the OHIF viewer and configuring it to integrate with the DICOM service. ++### Medical imaging network demo environment +* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab and demo highlights how an organization with existing on-premises radiology infrastructure can take the first steps toward intelligently moving its data to the cloud, without disruptions to the current workflow. ++ ## Next steps For more information about using the DICOM service, see |
healthcare-apis | How To Do Custom Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-do-custom-search.md | DELETE {{FHIR_URL}}/SearchParameter/{{SearchParameter_ID}} ``` > [!Warning]-> Be careful when deleting search parameters. Changing an existing search parameter could have impacts on the expected behavior. We recommend running a reindex job immediately. +> Be careful when deleting search parameters. Deleting an existing search parameter could have impacts on the expected behavior. We recommend running a reindex job immediately. |
healthcare-apis | Selectable Search Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/selectable-search-parameters.md | + + Title: Overview of selectable search parameter functionality in Azure Health Data Services +description: This article provides an overview of the selectable search parameter functionality implemented in Azure Health Data Services ++++ Last updated : 07/24/2023+++++# Selectable search parameter +Searching for resources is fundamental to FHIR. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. When the FHIR service in Azure Health Data Services is provisioned, built-in search parameters are enabled by default. During the ingestion of data in the FHIR service, specific properties from FHIR resources are extracted and indexed with these search parameters, which enables efficient searches. ++The selectable search parameter functionality allows you to enable or disable built-in search parameters. This functionality helps you store more resources in the allocated storage space and improve performance by enabling only the search parameters you need. ++> [!IMPORTANT] +> Selectable search capability is currently in preview. +> Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. +> For more information, review [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ++This article provides a guide to using the selectable search parameter functionality. 
++## Guide on using selectable search parameters ++To perform status updates on search parameters, follow these steps:\ +Step 1: Get the status of search parameters\ +Step 2: Update the status of search parameters\ +Step 3: Execute a reindex job ++Throughout this article, we demonstrate FHIR search syntax in example API calls with the {{FHIR_URL}} placeholder to represent the FHIR server URL. ++### Step 1: Get the status of search parameters +An API endpoint ('$status') is provided to view the status of search parameters. ++There are four different statuses that are seen in the response: +* **Supported**: This status indicates that the search parameter is supported by the FHIR service, and the user has submitted requests to enable the search parameter. Note: A reindex operation must be executed to move the status from supported to enabled. +* **Enabled**: This status indicates that the search parameter is enabled for searching. This is the next step after the supported status. +* **PendingDisable**: This status indicates that the search parameter will be disabled after execution of the reindex operation. +* **Disabled**: This status indicates that the search parameter is disabled. +++To get the status across all search parameters, use the following request: +```rest +GET {{FHIR_URL}}/SearchParameter/$status +``` ++This request returns a list of all the search parameters with their individual statuses. You can scroll through the list to find the search parameter you need. ++To identify the status of an individual search parameter or a subset of search parameters, use the filters listed. 
+* **Name**: To identify the search parameter status by name, use the request: +```rest + GET {{FHIR_URL}}/SearchParameter/$status?code=<name of search parameter/ sub string> +``` +* **URL**: To identify the search parameter status by its canonical identifier, use the request: +```rest +GET {{FHIR_URL}}/SearchParameter/$status?url=<SearchParameter url> +``` +* **Resource type**: In FHIR, search parameters are enabled at the individual resource level to enable filtering and retrieving a specific subset of resources. To identify the status of all the search parameters mapped to a resource, use the request: +```rest +GET {{FHIR_URL}}/SearchParameter/$status?resourcetype=<ResourceType name> +``` ++In response to the GET request to the $status endpoint, a Parameters resource type is returned with the status of the search parameter. See the example response: +```json +{ + "resourceType" : "Parameters", + "parameter" : [ + { + "name" : "searchParameterStatus", + "part" : [ + { + "name" : "url", + "valueString" : "http://hl7.org/fhir/SearchParameter/Account-identifier" + }, + { + "name" : "status", + "valueString" : "supported" + } + ] + } + ] +} +``` +Now that you know how to get the status of search parameters, let's move to the next step: updating the status of search parameters to 'Supported' or 'Disabled'. ++### Step 2: Update the status of search parameters +Note: To update the status of search parameters, you need to have the Search Parameter Manager Azure RBAC role assigned. ++The search parameter status can be updated for a single search parameter or in bulk. +#### 1. Update a single search parameter status +To update the status of a single search parameter, use the following API request. 
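To work with the `$status` response programmatically, the returned Parameters resource can be parsed with a small helper. The following Python sketch is only an illustration (the helper name is ours, not part of any SDK); it assumes the standard FHIR Parameters shape, where each `parameter` entry carries a `part` array of name/value pairs.

```python
# Illustrative helper, not part of any official SDK: look up the status of one
# search parameter in the Parameters resource returned by
# GET {{FHIR_URL}}/SearchParameter/$status.
def get_status(parameters: dict, url: str):
    for entry in parameters.get("parameter", []):
        # Flatten the "part" array of {"name": ..., "valueString": ...} pairs.
        parts = {p.get("name"): p.get("valueString") for p in entry.get("part", [])}
        if parts.get("url") == url:
            return parts.get("status")
    return None  # no entry for that canonical URL
```

A client would typically fetch the response with an authenticated GET to the `$status` endpoint and pass the decoded JSON to this helper.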
++```rest +PUT {{FHIR_URL}}/SearchParameter/$status +{ + "resourceType": "Parameters", + "parameter": [ + { + "name": "searchParameterStatus", + "part": [ + { + "name": "url", + "valueUrl": "http://hl7.org/fhir/SearchParameter/Resource-test-id" + }, + { + "name": "status", + "valueString": "Supported" + } + ] + } + ] +} +``` ++Depending on your use case, you can choose to set the status value to either 'Supported' or 'Disabled' for a search parameter. +Note: When you send the status 'Disabled' in the request, the response returns the status as 'PendingDisable' because a reindex job needs to run to fully remove associations. ++If you receive a 400 HTTP status code in the response, it means there's no unique match for the identified search parameter. Check the search parameter ID. ++#### 2. Update search parameter status in bulk +To update the status of search parameters in bulk, the 'PUT' request should have the 'Parameters' resource list in the request body. The list needs to contain the individual search parameters that need to be updated. ++```rest +PUT {{FHIR_URL}}/SearchParameter/$status +{ + "resourceType" : "Parameters", + "parameter" : [ + { + "name" : "searchParameterStatus", + "part" : [ + { + "name" : "url", + "valueString" : "http://hl7.org/fhir/SearchParameter/Endpoint-name" + }, + { + "name" : "status", + "valueString" : "supported" + } + ] + }, + { + "name" : "searchParameterStatus", + "part" : [ + { + "name" : "url", + "valueString" : "http://hl7.org/fhir/SearchParameter/HealthcareService-name" + }, + { + "name" : "status", + "valueString" : "supported" + } + ] + }, + ... + ] +} +``` ++After you have updated the search parameter status to 'Supported' or 'Disabled', the next step is to execute a reindex job. ++### Step 3: Execute a reindex job +Until the search parameter is indexed, the 'Enabled' and 'Disabled' statuses of the search parameters aren't activated. 
Reindex job execution updates the status from 'Supported' to 'Enabled' or from 'PendingDisable' to 'Disabled'. +A reindex job can be executed against the entire FHIR service database or against specific search parameters. A reindex job can be performance intensive. [Read the guide](how-to-run-a-reindex.md) on reindex job execution in the FHIR service. ++Note: A capability statement documents a set of capabilities (behaviors) of a FHIR server and is available at the /metadata endpoint. 'Enabled' search parameters are listed in the capability statement of your FHIR service. ++## Next steps ++In this article, you've learned how to update the status of built-in search parameters in your FHIR service. To learn how to define custom search parameters, see ++>[!div class="nextstepaction"] +>[Defining custom search parameters](how-to-do-custom-search.md) ++FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. + |
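The bulk status-update body used in Step 2 can also be generated programmatically from a list of canonical URLs. This Python sketch is only an illustration (the function name is ours, not part of any SDK) of assembling the Parameters payload for the bulk `PUT {{FHIR_URL}}/SearchParameter/$status` request.

```python
# Illustrative helper, not part of any official SDK: build the FHIR Parameters
# body for a bulk search-parameter status update from (url, status) pairs.
def build_bulk_status_update(updates):
    return {
        "resourceType": "Parameters",
        "parameter": [
            {
                "name": "searchParameterStatus",
                "part": [
                    {"name": "url", "valueString": url},
                    {"name": "status", "valueString": status},
                ],
            }
            for url, status in updates
        ],
    }
```

The resulting dictionary can be serialized to JSON and sent as the body of the PUT request by whatever HTTP client you use.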
healthcare-apis | Device Messages Through Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md | To begin deployment in the Azure portal, select the **Deploy to Azure** button: - **Location**: A supported Azure region for Azure Health Data Services (the value can be the same as or different from the region your resource group is in). For a list of Azure regions where Health Data Services is available, see [Products available by regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=health-data-services). - - **Fhir Contributor Principle Id** (optional): An Azure Active Directory (Azure AD) user object ID to provide FHIR service read/write permissions. + - **Fhir Contributor Principal Id** (optional): An Azure Active Directory (Azure AD) user object ID to provide FHIR service read/write permissions. - You can use this account to give access to the FHIR service to view the FHIR Observations that are generated in this tutorial. We recommend that you use your own Azure AD user object ID so you can access the messages in the FHIR service. If you choose not to use the **Fhir Contributor Principle Id** option, clear the text box. + You can use this account to give access to the FHIR service to view the FHIR Observations that are generated in this tutorial. We recommend that you use your own Azure AD user object ID so you can access the messages in the FHIR service. If you choose not to use the **Fhir Contributor Principal Id** option, clear the text box. To learn how to get an Azure AD user object ID, see [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id). The user object ID that's used in this tutorial is only an example. If you use this option, use your own user object ID or the object ID of another person who you want to be able to access the FHIR service. |
healthcare-apis | How To Use Iotjsonpathcontent Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontent-templates.md | In this example, we're using a device message that is capturing `heartRate` data ```json {- "heartRate" : "78" + "PatientId": "patient1", + "HeartRate" : "78" } ``` The IoT hub enriches and routes the device message to the event hub before the M ```json { "Body": {- "heartRate": "78" - }, - "Properties": { - "iothub-creation-time-utc": "2023-03-13T22:46:01.87500000" + "PatientId": "patient1", + "HeartRate": 78 }, "SystemProperties": {- "iothub-connection-device-id": "device01" + "iothub-enqueuedtime": "2023-07-25T20:41:26.046Z", + "iothub-connection-device-id": "sampleDeviceId" + }, + "Properties": { + "iothub-creation-time-utc": "2023-07-25T20:41:26.046Z" } } ``` We're using this device mapping for the normalization stage: { "templateType": "IotJsonPathContent", "template": {- "typeName": "heartRate", - "typeMatchExpression": "$..[?(@Body.heartRate)]", - "patientIdExpression": "$.SystemProperties.iothub-connection-device-id", + "typeName": "HeartRate", + "typeMatchExpression": "$..[?(@Body.HeartRate)]", + "patientIdExpression": "$.Body.PatientId", "values": [ {- "required": "true", - "valueExpression": "$.Body.heartRate", - "valueName": "hr" + "required": true, + "valueExpression": "$.Body.HeartRate", + "valueName": "HeartRate" } ] } }- ] + ] } ``` The resulting normalized message will look like this after the normalization sta ```json {- "type": "heartRate", - "occurrenceTimeUtc": "2023-03-13T22:46:01.875Z", - "deviceId": "device01", + "type": "HeartRate", + "occurrenceTimeUtc": "2023-07-25T20:41:26.046Z", + "deviceId": "sampleDeviceId", + "patientId": "patient1", "properties": [ {- "name": "hr", + "name": "HeartRate", "value": "78" } ] |
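To make the normalization stage above concrete, the following Python sketch approximates what the IotJsonPathContent template does, with plain dictionary access standing in for real JSONPath evaluation. It's an illustration of the data flow only, not the MedTech service's actual implementation.

```python
# Illustrative sketch only: mimic the IotJsonPathContent normalization of the
# sample device message, using dict access in place of JSONPath evaluation.
def normalize(event):
    body = event["Body"]
    # Mirrors the typeMatchExpression $..[?(@Body.HeartRate)]: skip messages
    # that don't carry a HeartRate value.
    if "HeartRate" not in body:
        return None
    return {
        "type": "HeartRate",
        "occurrenceTimeUtc": event["Properties"]["iothub-creation-time-utc"],
        "deviceId": event["SystemProperties"]["iothub-connection-device-id"],
        "patientId": body["PatientId"],  # mirrors $.Body.PatientId
        "properties": [
            # Values are emitted as strings, as in the normalized example.
            {"name": "HeartRate", "value": str(body["HeartRate"])}
        ],
    }
```

Running it against the sample event hub message yields a normalized message with the same `deviceId`, `patientId`, and `HeartRate` value as the example output.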
healthcare-apis | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md | +## July 2023 +#### Azure Health Data Services ++#### FHIR Service +**Bug Fix: Continous retry on Import operation** +We observed an issue where $import kept on retrying when NDJSON file size is greater than 2GB. The issue is fixed, for details visit [3342](https://github.com/microsoft/fhir-server/pull/3342). ++**Bug Fix: Patient and Group level export job restart on interruption** +Patient and Group level exports on interruption would restrat from the beginning. Bug is fixed to restart the export jobs from the last sucessfully completed page of results. For more details visit [3205](https://github.com/microsoft/fhir-server/pull/3205). ++ ## June 2023 #### Azure Health Data Services With Incremental Load mode, customers can: > [!IMPORTANT] > Incremental import mode is currently in public preview > Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities.-> > For more information, review [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For details on Incremental Import, visit [Import Documentation](./../healthcare-apis/fhir/configure-import-data.md). |
key-vault | Overview Vnet Service Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md | Here's a list of trusted services that are allowed to access a key vault if the | Microsoft Purview|[Using credentials for source authentication in Microsoft Purview](../../purview/manage-credentials.md) > [!NOTE]-> You must set up the relevant Key Vault access policies to allow the corresponding services to get access to Key Vault. +> You must set up the relevant Key Vault RBAC role assignments or access policies (legacy) to allow the corresponding services to get access to Key Vault. ## Next steps |
logic-apps | Logic Apps Using Sap Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md | The SAP connector has different versions, based on [logic app type and host envi |--|-|-| | **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | | **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label, and the ISE-native version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. <br><br>**Note**: Make sure to use the ISE-native version, not the managed version. <br><br>For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and built-in connector (preview), which appears in the designer under the **Built-in** label and is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. 
For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) | +| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector (preview), which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) | ++## Connector differences ++The SAP built-in connector significantly differs from the SAP managed connector and SAP ISE-versioned connector in the following ways: ++* On-premises connections don't require the on-premises data gateway. ++ Instead, the SAP built-in connector communicates directly with your SAP server in the integrated virtual network, which avoids hops, latency, and failure points for a network gateway. Make sure that you upload or deploy the non-redistributable SAP client libraries with your logic app workflow application. For more information, see the [Prerequisites](#prerequisites) in this guide. 
++* Payload sizes up to 100 MB are supported, so you don't have to use a blob URI for large requests. ++* Specific actions are available for **Call BAPI**, **Call RFC**, and **Send IDoc**. These dedicated actions provide a better experience for stateful BAPIs, RFC transactions, and IDoc deduplication, and don't use the older SOAP Windows Communication Foundation (WCF) messaging model. ++ The **Call BAPI** action includes up to two responses in the returned JSON: the XML response from the called BAPI, and the BAPI commit or BAPI rollback response if you use auto-commit. This capability addresses the problem with the SAP managed connector where the outcome from the auto-commit is silent and observable only through logs. ++* Longer timeout of five minutes, compared to the managed connector and the ISE-versioned connector. ++ The SAP built-in connector doesn't use the shared or global connector infrastructure, which means the timeout is longer at five minutes, compared to the SAP managed connector (two minutes) and the SAP ISE-versioned connector (four minutes). Long-running requests work without you having to implement the [long-running webhook-based request action pattern](logic-apps-scenario-function-sb-trigger.md). ++* By default, the preview SAP built-in connector operations are *stateless*. However, you can [enable stateful mode (affinity) for these operations](../connectors/enable-stateful-affinity-built-in-connectors.md). ++ In stateful mode, the SAP built-in connector supports high availability and horizontal scale-out configurations. By comparison, the SAP managed connector has restrictions regarding the on-premises data gateway, which is limited to a single instance for triggers and to clusters only in failover mode for actions. For more information, see [SAP managed connector - Known issues and limitations](#known-issues-limitations). ++* Standard logic app workflows require and use the SAP NCo 3.1 client library, not the SAP NCo 3.0 version. 
For more information, see [Prerequisites](#prerequisites). ++* Standard logic app workflows provide application settings where you can specify a Personal Security Environment (PSE) and PSE password. ++ This change prevents you from uploading multiple PSE files, which isn't supported and results in SAP connection failures. By comparison, in Consumption logic app workflows, the SAP managed connector lets you specify these values through connection parameters, which allowed you to upload multiple PSE files, a scenario that isn't supported and causes SAP connection failures. ++* **Generate Schema** action ++ * You can select from multiple operation types, such as BAPI, IDoc, RFC, and tRFC, versus the same action in the SAP managed connector, which uses the **SapActionUris** parameter and a file system picker experience. ++ * You can directly provide a parameter name as a custom value. For example, you can specify the **RFC Name** parameter from the **Call RFC** action. By comparison, in the SAP managed connector, you had to provide a complex **Action URI** parameter name. ++ * By design, this action doesn't support generating multiple schemas for RFCs, BAPIs, or IDocs in a single action execution, which the SAP managed connector supports. This capability change now prevents attempts to send large amounts of content in a single call. <a name="connector-parameters"></a> -### Connector parameters +## Connector parameters Along with simple string and number inputs, the SAP connector accepts the following table parameters (`Type=ITAB` inputs): * Table direction parameters, both input and output, for older SAP releases.-* Changing parameters, which replace the table direction parameters for newer SAP releases. +* Parameter changes, which replace the table direction parameters for newer SAP releases. * Hierarchical table parameters. 
+<a name="known-issues-limitations"></a> + ## Known issues and limitations ### SAP managed connector The preview SAP built-in connector trigger named **Register SAP RFC server for t > When you use a Premium-level ISE, use the ISE-native SAP connector, not the SAP managed connector, > which doesn't natively run in an ISE. For more information, review the [ISE prerequisites](#ise-prerequisites). -* By default, the preview SAP built-in connector operations are stateless. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md). +* By default, the preview SAP built-in connector operations are *stateless*. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md). * To use either the SAP managed connector trigger named **When a message is received from SAP** or the SAP built-in trigger named **Register SAP RFC server for trigger**, complete the following tasks: For more information about SAP services and ports, review the [TCP/IP Ports of A ### SAP NCo client library prerequisites -To use the SAP connector, you'll need the SAP NCo client library named [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). The following list describes the prerequisites for the SAP NCo client library that you're using with the SAP connector: +To use the SAP connector, based on whether you have a Consumption or Standard workflow, you'll need to install the SAP Connector NCo client library for Microsoft .NET 3.0 or 3.1, respectively. 
The following list describes the prerequisites for the SAP NCo client library, based on the workflow type where you're using the SAP connector: * Version: - * SAP Connector (NCo 3.1) isn't currently supported as dual-version capability is unavailable. -- * For Consumption logic app workflows that use the on-premises data gateway, make sure that you install the latest 64-bit version, [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). The data gateway runs only on 64-bit systems. Installing the unsupported 32-bit version results in a **"bad image"** error. + * For Consumption logic app workflows that use the on-premises data gateway, make sure that you install the latest 64-bit version, [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). SAP Connector (NCo 3.1) isn't currently supported as dual-version capability is unavailable. The data gateway runs only on 64-bit systems. Installing the unsupported 32-bit version results in a **"bad image"** error. Earlier versions of SAP NCo might experience the following issues:
To check this version, follow these steps: + * For Standard logic app workflows, you can install the latest 64-bit or 32-bit version for [SAP Connector (NCo 3.1) for Microsoft .NET 3.1.2.0 compiled with .NET Framework 4.6.2](https://support.sap.com/en/product/connectors/msnet.html). However, make sure that you install the version that matches the configuration in your Standard logic app resource. To check the version used by your logic app, follow these steps: 1. In the [Azure portal](https://portal.azure.com), open your Standard logic app. See the steps for [SAP logging for Consumption logic apps in multi-tenant workfl +## Enable SAP client library (NCo) logging and tracing (Built-in connector only) ++When you have to investigate any problems with this component, you can set up custom text file-based NCo tracing, which SAP or Microsoft support might request from you. By default, this capability is disabled because enabling this trace might negatively affect performance and quickly consume the application host's storage space. ++You can control this tracing capability at the application level by using the following settings: ++1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource. ++1. On the resource menu, under **Settings**, select **Configuration** to review the application settings. ++1. On the **Configuration** page, add the following application settings: ++ * **SAP_RFC_TRACE_DIRECTORY**: The directory where to store the NCo trace files, for example, **C:\home\LogFiles\NCo**. ++ * **SAP_RFC_TRACE_LEVEL**: The NCo trace level with **Level4** as the suggested value for typical verbose logging. SAP or Microsoft support might request that you set a [different trace level](#trace-levels). ++ For more information about adding application settings, see [Edit host and app settings for Standard logic app workflows](edit-app-settings-host-settings.md#manage-app-settings). ++1. Save your changes. This step restarts the application. 
++<a name="trace-levels"></a> ++### Trace levels available ++| Value | Description | +|-|-| +| Level1 | The level for tracing remote function calls. | +| Level2 | The level for tracing remote function calls and public API method calls. | +| Level3 | The level for tracing remote function calls, public API method calls, and internal API method calls. | +| Level4 | The level for tracing remote function calls, public API method calls, internal API method calls, hex dumps for the RFC protocol, and network-related information. | +| Locking | Writes data to the trace files that shows when threads request, acquire, and release locks on objects. | +| Metadata | Traces the metadata involved in a remote function call for each call. | +| None | The level for suppressing all trace output. | +| ParameterData | Traces the container data sent and received during each remote function call. | +| Performance | Writes data to the trace files that can help with analyzing performance issues. | +| PublicAPI | Traces most methods of the public API, except for getters, setters, or related methods. | +| InternalAPI | Traces most methods of the internal API, except for getters, setters, or related methods. | +| RemoteFunctionCall | Traces remote function calls. | +| RfcData | Traces the bytes sent and received during each remote function call. | +| SessionProvider | Traces all methods of the currently used implementation of **ISessionProvider**. | +| SetValue | Writes information to the trace files regarding values set for parameters of functions, or fields of structures or tables. | ++### View the trace ++1. On Standard logic app resource menu, under **Development Tools**, select **Advanced Tools** > **Go**. ++1. On the **Kudu** toolbar, select **Debug Console** > **CMD**. ++1. Browse to the folder for the application setting named **$SAP_RFC_TRACE_DIRECTORY**. 
++ A new folder named **NCo**, or whichever folder name you used, appears for the application setting value, **C:\home\LogFiles\NCo**, that you set earlier. ++ After you open the **$SAP_RFC_TRACE_DIRECTORY** folder, you'll find a file named **dev_nco_rfc.log**, one or multiple files named **dev_nco_rfcNNNN.log**, and one or multiple files named **dev_nco_rfcNNNN.trc** where **NNNN** is a thread identifier. ++1. To view the content in a log or trace file, select the **Edit** button next to a file. ++ > [!NOTE] + > + > If you download a log or trace file that your logic app workflow opened + > and is currently in use, your download might result in an empty file. + ## Send SAP telemetry for on-premises data gateway to Azure Application Insights With the August 2021 update for the on-premises data gateway, SAP connector operations can send telemetry data from the SAP NCo client library and traces from the Microsoft SAP Adapter to [Application Insights](../azure-monitor/app/app-insights-overview.md), which is a capability in Azure Monitor. This telemetry primarily includes the following data: |
logic-apps | Sap Create Example Scenario Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/sap-create-example-scenario-workflows.md | Next, create an action to send your IDoc to SAP when the workflow's request trig <a name="send-flat-file-idocs"></a> -#### Send flat file IDocs to SAP server +#### Send flat file IDocs to SAP server (Managed connector only) -To send an IDoc using a flat file schema, you can wrap the IDoc in an XML envelope and [follow the general steps to add an SAP action to send an IDoc](#add-sap-action-send-idoc), but with the following changes: +To send an IDoc using a flat file schema when you use the SAP managed connector, you can wrap the IDoc in an XML envelope and [follow the general steps to add an SAP action to send an IDoc](#add-sap-action-send-idoc), but with the following changes. ++> [!NOTE] +> +> If you're using the SAP built-in connector, make sure that you don't wrap a flat file IDoc in an XML envelope. ### Wrap IDoc with XML envelope In the following example, the `STFC_CONNECTION` RFC module generates a request a 1. On the designer toolbar, select **Run Trigger** > **Run** to manually start your workflow. -1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to include your message content with your request. To send the request, use a tool such as [Postman](https://www.getpostman.com/apps). +1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to include your message content with your request. To send the request, use a tool such as the [Postman API client](https://www.postman.com/api-platform/api-client/). 
For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example: You've now created a workflow that can communicate with your SAP server. Now tha 1. Return to the workflow level. On the workflow menu, select **Overview**. On the toolbar, select **Run** > **Run** to manually start your workflow. -1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to your message content with your request. To send the request, use a tool such as [Postman](https://www.getpostman.com/apps). +1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to include your message content with your request. To send the request, use a tool such as the [Postman API client](https://www.postman.com/api-platform/api-client/). For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example: |
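As a scripted alternative to an API client, the webhook simulation described in the logic-apps changes above can be sketched in Python. The callback URL and the SAP action namespace below are illustrative placeholders, not values from this article; copy the real callback URL from your workflow's Request trigger and use the namespace of the SAP action you selected in the designer.

```python
import xml.etree.ElementTree as ET

# Placeholder values (assumptions, not from this article).
CALLBACK_URL = "https://example.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke"
SAP_ACTION_NAMESPACE = "http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//720/Send"

# Build a minimal XML envelope in the SAP action's namespace, as the
# article requires for the POST body.
root = ET.Element(ET.QName(SAP_ACTION_NAMESPACE, "Send"))
ET.SubElement(root, ET.QName(SAP_ACTION_NAMESPACE, "idocData"))
payload = ET.tostring(root, encoding="unicode")
print(payload)

# To send it (requires the real callback URL), POST with an XML content type:
# import urllib.request
# request = urllib.request.Request(
#     CALLBACK_URL,
#     data=payload.encode("utf-8"),
#     headers={"Content-Type": "application/xml"},
#     method="POST",
# )
# urllib.request.urlopen(request)
```

The namespace shown is only a sketch of the pattern; the exact value depends on the IDoc type, version, and action you configured.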
machine-learning | How To Manage Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md | Along with managing quotas, you can learn how to [plan and manage costs for Azur ## Special considerations -+ A quota is a credit limit, not a capacity guarantee. If you have large-scale capacity needs, [contact Azure support to increase your quota](#request-quota-increases). ++ Quotas are applied to each subscription in your account. If you have multiple subscriptions, you must request a quota increase for each subscription.+++ A quota is a *credit limit* on Azure resources, *not a capacity guarantee*. If you have large-scale capacity needs, [contact Azure support to increase your quota](#request-quota-increases). + A quota is shared across all the services in your subscriptions, including Azure Machine Learning. Calculate usage across all services when you're evaluating capacity. - Azure Machine Learning compute is an exception. It has a separate quota from the core compute quota. + > [!NOTE] + > Azure Machine Learning compute is an exception. It has a separate quota from the core compute quota. -+ Default limits vary by offer category type, such as free trial, pay-as-you-go, and virtual machine (VM) series (such as Dv2, F, and G). ++ **Default limits vary by offer category type**, such as free trial, pay-as-you-go, and virtual machine (VM) series (such as Dv2, F, and G). 
## Default resource quotas In this section, you learn about the default and maximum quota limits for the following resources: + Azure Machine Learning assets- + Azure Machine Learning computes - + Azure Machine Learning managed online endpoints + + Azure Machine Learning computes (including serverless Spark) + + Azure Machine Learning online endpoints (both managed and Kubernetes) + Azure Machine Learning pipelines + Virtual machines + Azure Container Instances In this section, you learn about the default and maximum quota limits for the fo ### Azure Machine Learning assets-The following limits on assets apply on a per-workspace basis. +The following limits on assets apply on a *per-workspace* basis. | **Resource** | **Maximum limit** | | | | The following limits on assets apply on a per-workspace basis. In addition, the maximum **run time** is 30 days and the maximum number of **metrics logged per run** is 1 million. ### Azure Machine Learning Compute-[Azure Machine Learning Compute](concept-compute-target.md#azure-machine-learning-compute-managed) has a default quota limit on both the number of cores (split by each VM Family and cumulative total cores) and the number of unique compute resources allowed per region in a subscription. This quota is separate from the VM core quota listed in the previous section as it applies only to the managed compute resources of Azure Machine Learning. +[Azure Machine Learning Compute](concept-compute-target.md#azure-machine-learning-compute-managed) has a default quota limit on both the *number of cores* and the *number of unique compute resources* that are allowed per region in a subscription. ++> [!NOTE] +> * The *quota on the number of cores* is split by each VM Family and cumulative total cores. +> * The *quota on the number of unique compute resources* per region is separate from the VM core quota, as it applies only to the managed compute resources of Azure Machine Learning. 
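As a worked illustration of the two checks described in the note above, a compute request must fit both the per-VM-family core quota and the cumulative total-core quota. The helper and all numbers below are hypothetical, not an Azure API:

```python
def fits_core_quota(requested_cores: int,
                    family_used: int, family_limit: int,
                    total_used: int, total_limit: int) -> bool:
    """Illustrative check: a request succeeds only if it fits the
    per-VM-family core quota AND the cumulative total-core quota."""
    return (family_used + requested_cores <= family_limit
            and total_used + requested_cores <= total_limit)

# Hypothetical numbers: 24 NC-series cores requested in a region where the
# NC family limit is 24 cores and the total regional limit is 300 cores.
print(fits_core_quota(24, family_used=0, family_limit=24,
                      total_used=100, total_limit=300))   # → True
# The same request fails once 12 NC-series cores are already in use,
# even though the total-core quota still has room.
print(fits_core_quota(24, family_used=12, family_limit=24,
                      total_used=100, total_limit=300))   # → False
```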
++To raise the limits for the following items, [Request a quota increase](#request-quota-increases): -[Request a quota increase](#request-quota-increases) to raise the limits for various VM family core quotas, total subscription core quotas, cluster quota and resources in this section. +* VM family core quotas. To learn more about which VM family to request a quota increase for, see [virtual machine sizes in Azure](../virtual-machines/sizes.md). For example, GPU VM families start with an "N" in their family name (such as the NCv3 series). +* Total subscription core quotas +* Cluster quota +* Other resources in this section Available resources: + **Dedicated cores per region** have a default limit of 24 to 300, depending on your subscription offer type. You can increase the number of dedicated cores per subscription for each VM family. Specialized VM families like NCv2, NCv3, or ND series start with a default of zero cores. GPUs also default to zero cores. + **Low-priority cores per region** have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families. -+ **Clusters per region** have a default limit of 200 and it can be increased up to a value of 500 per region within a given subscription.. This limit is shared between training clusters, compute instances and MIR endpoint deployments. (A compute instance is considered a single-node cluster for quota purposes.) Starting 8/30/2023, cluster quota limits will automatically be increased from 200 to 500 on your behalf when usage is approaching close to the 200 default limit, eliminating the need to file for a support ticket. ++ **Clusters per region** have a default limit of 200 and it can be increased up to a value of 500 per region within a given subscription. This limit is shared between training clusters, compute instances and managed online endpoint deployments. 
A compute instance is considered a single-node cluster for quota purposes. -> [!TIP] -> To learn more about which VM family to request a quota increase for, check out [virtual machine sizes in Azure](../virtual-machines/sizes.md). For instance GPU VM families start with an "N" in their family name (eg. NCv3 series) + > [!TIP] + > Starting 1 September 2023, Microsoft will automatically increase cluster quota limits from 200 to 500 on your behalf when usage approaches the 200 default limit. This change eliminates the need to file a support ticket to increase the quota on unique compute resources allowed per region. The following table shows more limits in the platform. Reach out to the Azure Machine Learning product team through a **technical** support ticket to request an exception. | **Resource or Action** | **Maximum limit** | | | | | Workspaces per resource group | 800 |-| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a non communication-enabled pool (that is, can't run MPI jobs) | 100 nodes but configurable up to 65,000 nodes | -| Nodes in a single Parallel Run Step **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but configurable up to 65,000 nodes if your cluster is set up to scale per above | -| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a communication-enabled pool | 300 nodes but configurable up to 4000 nodes | -| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a communication-enabled pool on an RDMA enabled VM Family | 100 nodes | -| Nodes in a single MPI **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but can be increased to 300 nodes | +| Nodes in a single Azure Machine Learning compute (AmlCompute) **cluster** set up as a non-communication-enabled pool (that is, can't run MPI jobs) | 100 nodes but configurable up to 65,000 nodes | +| Nodes in a single Parallel Run Step **run** 
on an Azure Machine Learning compute (AmlCompute) cluster | 100 nodes but configurable up to 65,000 nodes if your cluster is set up to scale as mentioned previously | +| Nodes in a single Azure Machine Learning compute (AmlCompute) **cluster** set up as a communication-enabled pool | 300 nodes but configurable up to 4000 nodes | +| Nodes in a single Azure Machine Learning compute (AmlCompute) **cluster** set up as a communication-enabled pool on an RDMA-enabled VM family | 100 nodes | +| Nodes in a single MPI **run** on an Azure Machine Learning compute (AmlCompute) cluster | 100 nodes but can be increased to 300 nodes | | Job lifetime | 21 days<sup>1</sup> | | Job lifetime on a low-priority node | 7 days<sup>2</sup> | | Parameter servers per node | 1 | The following table shows more limits in the platform. Reach out to the Azure Ma ### Azure Machine Learning managed online endpoints -Azure Machine Learning managed online endpoints have limits described in the following table. These are regional limits, meaning that you can use up to these limits per each region you are using. +Azure Machine Learning managed online endpoints have limits described in the following table. These limits are _regional_, meaning that you can use up to these limits in each region you use. | **Resource** | **Limit** | **Allows exception** | | | | | Azure Machine Learning managed online endpoints have limits described in the fol <sup>2</sup> We reserve 20% extra compute resources for performing upgrades. For example, if you request 10 instances in a deployment, you must have a quota for 12. Otherwise, you receive an error. -<sup>3</sup> The default limit for some subscriptions may be different. For example, when you request a limit increase it may show 100 instead. If you request a limit increase, be sure to calculate related limit increases you might need. 
For example, if you request a limit increase for requests per second, you might also want to compute the required connections and bandwidth limits and include these limit increases in the same request. +<sup>3</sup> The default limit for some subscriptions may be different. For example, when you request a limit increase it may show 100 instead. If you request a limit increase, be sure to calculate related limit increases you might need. For example, if you request a limit increase for requests per second, you might also want to compute the required connections and bandwidth limits and include that limit increase in the same request. To determine the current usage for an endpoint, [view the metrics](how-to-monitor-online-endpoints.md#metrics). The sum of kubernetes online endpoints and managed online endpoints under each s ### Azure Machine Learning integration with Synapse -Azure Machine Learning serverless Spark provides easy access to distributed computing capability for scaling Apache Spark jobs. This utilizes the same dedicated quota as Azure Machine Learning Compute. Quota limits can be increased by submitting a support ticket and [requesting for quota increase](#request-quota-increases) for ESv3 series under the "Machine Learning Service: Virtual Machine Quota" category. +Azure Machine Learning serverless Spark provides easy access to distributed computing capability for scaling Apache Spark jobs. Serverless Spark utilizes the same dedicated quota as Azure Machine Learning Compute. Quota limits can be increased by submitting a support ticket and [requesting a quota increase](#request-quota-increases) for the ESv3 series under the "Machine Learning Service: Virtual Machine Quota" category. To view quota usage, navigate to Machine Learning studio and select the subscription name that you would like to see usage for. Select "Quota" in the left panel. Azure Storage has a limit of 250 storage accounts per region, per subscription. 
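The 20% upgrade reserve described in footnote 2 of the managed online endpoint limits earlier can be expressed as a small helper. This is illustrative only, not an Azure API; integer arithmetic is used to avoid float rounding:

```python
def required_instance_quota(requested_instances: int, reserve_percent: int = 20) -> int:
    """Quota needed for a deployment: the requested instance count plus the
    service's upgrade reserve, rounded up."""
    scaled = requested_instances * (100 + reserve_percent)
    return -(-scaled // 100)  # ceiling division

# Requesting 10 instances requires quota for 12, matching the footnote's example.
print(required_instance_quota(10))  # → 12
# Fractional results round up: 7 instances need quota for 9 (7 * 1.2 = 8.4).
print(required_instance_quota(7))   # → 9
```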
Use workspace-level quotas to manage Azure Machine Learning compute target allocation between multiple [workspaces](concept-workspace.md) in the same subscription. -By default, all workspaces share the same quota as the subscription-level quota for VM families. However, you can set a maximum quota for individual VM families on workspaces in a subscription. This lets you share capacity and avoid resource contention issues. +By default, all workspaces share the same quota as the subscription-level quota for VM families. However, you can set a maximum quota for individual VM families on workspaces in a subscription. Quotas for individual VM families let you share capacity and avoid resource contention issues. 1. Go to any workspace in your subscription. 1. In the left pane, select **Usages + quotas**. |
machine-learning | Concept Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-connections.md | Prompt flow provides various prebuilt connections, including Azure Open AI, Open | [Azure Content Safety](https://aka.ms/acs-doc) | Content Safety (Text) or Python | | [Cognitive Search](https://azure.microsoft.com/products/search) | Vector DB Lookup or Python | | [Serp](https://serpapi.com/) | Serp API or Python |-| Custom | Python | +| [Custom](./tools-reference/python-tool.md#how-to-consume-custom-connection-in-python-tool) | Python | By leveraging connections in Prompt flow, users can easily establish and manage connections to external APIs and data sources, facilitating efficient data exchange and interaction within their AI applications. ## Next steps -- [Get started with Prompt flow](get-started-prompt-flow.md)+- [Get started with Prompt flow](get-started-prompt-flow.md) +- [Consume custom connection in Python Tool](./tools-reference/python-tool.md#how-to-consume-custom-connection-in-python-tool) |
machine-learning | Reference Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md | More information about how to use ARM template can be found from [ARM template d | Date | Version |Version description | ||||+|July 18, 2023 | 1.1.29| Add new identity operator errors. Bug fixes. | |June 4, 2023 | 1.1.28 | Improve auto-scaler to handle multiple node pool. Bug fixes. | | Apr 18 , 2023| 1.1.26 | Bug fixes and vulnerabilities fix. | | Mar 27, 2023| 1.1.25 | Add Azure machine learning job throttle. Fast fail for training job when SSH setup failed. Reduce Prometheus scrape interval to 30s. Improve error messages for inference. Fix vulnerable image. | |
mysql | How To Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-upgrade.md | This article describes how you can upgrade your MySQL major version in-place in This feature enables customers to perform in-place upgrades of their MySQL 5.7 servers to MySQL 8.0 without any data movement or the need to make any application connection string changes. >[!Important]-> - Major version upgrade for Azure Database for MySQL - Flexible Server is available in public preview. > - Major version upgrade is currently unavailable for version 5.7 servers based on the Burstable SKU. > - Duration of downtime varies based on the size of the database instance and the number of tables it contains.+> - When initiating a major version upgrade for Azure MySQL via REST API or SDK, avoid modifying other properties of the service in the same request. Simultaneous changes aren't permitted and might lead to unintended results or request failure. Make property modifications in separate operations after the upgrade completes. > - Upgrading the major MySQL version is irreversible. Your deployment might fail if validation identifies that the server is configured with any features that are [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) or [deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations). You can make necessary configuration changes on the server and try the upgrade again. ## Prerequisites |
mysql | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md | This article summarizes new releases and features in Azure Database for MySQL - - **Autoscale IOPS in Azure Database for MySQL - Flexible Server (General Availability)** -You can now scale IOPS on demand without having to pre-provision a certain amount of IOPS. With this feature, you can now enjoy worry free IO management in Azure Database for MySQL - Flexible Server because the server scales IOPs up or down automatically depending on workload needs. With this feature, you pay only for the IO you use and no longer need to provision and pay for resources they aren't fully using, saving time and money. Autoscale IOPS eliminates the administration required to provide the best performance for Azure Database for MySQL customers at the least cost. [Learn more](./concepts-service-tiers-storage.md#autoscale-iops) + You can now scale IOPS on demand without having to pre-provision a certain amount of IOPS. With this feature, you can enjoy worry-free IO management in Azure Database for MySQL - Flexible Server because the server scales IOPS up or down automatically depending on workload needs. You pay only for the IO you use and no longer need to provision and pay for resources you aren't fully using, saving time and money. The autoscale IOPS feature eliminates the administration required to provide the best performance for Azure Database for MySQL customers at the lowest cost. [Learn more](./concepts-service-tiers-storage.md#autoscale-iops) ## June 2023 |
network-watcher | Diagnose Network Security Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-network-security-rules.md | In this section, you create a virtual network with two subnets and an Azure Bast # [**Portal**](#tab/portal) -1. In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** in the search results. +1. In the search box at the top of the portal, enter ***virtual networks***. Select **Virtual networks** in the search results. - :::image type="content" source="./media/diagnose-network-security-rules/portal-search.png" alt-text="Screenshot shows how to search for virtual networks in the Azure portal." lightbox="./media/diagnose-network-security-rules/portal-search.png"::: + :::image type="content" source="./media/diagnose-network-security-rules/virtual-networks-portal-search.png" alt-text="Screenshot shows how to search for virtual networks in the Azure portal." lightbox="./media/diagnose-network-security-rules/virtual-networks-portal-search.png"::: 1. Select **+ Create**. In **Create virtual network**, enter or select the following values in the **Basics** tab: In this section, you create a virtual network with two subnets and an Azure Bast | | | | **Project Details** | | | Subscription | Select your Azure subscription. |- | Resource Group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. | + | Resource Group | Select **Create new**. </br> Enter ***myResourceGroup*** in **Name**. </br> Select **OK**. | | **Instance details** | |- | Virtual network name | Enter *myVNet*. | + | Virtual network name | Enter ***myVNet***. | | Region | Select **(US) East US**. | 1. Select the **Security** tab, or select the **Next** button at the bottom of the page. In this section, you create a virtual network with two subnets and an Azure Bast | Setting | Value | | | | | **Subnet details** | |- | Name | Enter *mySubnet*. 
| + | Name | Enter ***mySubnet***. | | **Security** | |- | Network security group | Select **Create new**. </br> Enter *mySubnet-nsg* in **Name**. </br> Select **OK**. | + | Network security group | Select **Create new**. </br> Enter ***mySubnet-nsg*** in **Name**. </br> Select **OK**. | 1. Select **Review + create**. In this section, you create a virtual network with two subnets and an Azure Bast +> [!IMPORTANT] +> Hourly pricing starts from the moment the Bastion host is deployed, regardless of outbound data usage. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion/). We recommend that you delete this resource once you've finished using it. + ## Create a virtual machine In this section, you create a virtual machine and a network security group applied to its network interface. # [**Portal**](#tab/portal) -1. In the search box at the top of the portal, enter *virtual machines*. Select **Virtual machines** in the search results. +1. In the search box at the top of the portal, enter ***virtual machines***. Select **Virtual machines** in the search results. 1. Select **+ Create** and then select **Azure virtual machine**. In this section, you create a virtual machine and a network security group appli | Subscription | Select your Azure subscription. | | Resource Group | Select **myResourceGroup**. | | **Instance details** | |- | Virtual machine name | Enter *myVM*. | + | Virtual machine name | Enter ***myVM***. | | Region | Select **(US) East US**. | | Availability Options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. | In this section, you create a virtual machine and a network security group appli 1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. -1. In the Networking tab, enter or select the following values: +1. 
In the Networking tab, select the following values: | Setting | Value | | | | In this section, you create a virtual machine and a network security group appli 1. Create a default network security group using [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). ```azurepowershell-interactive- # Create a network security group + # Create a default network security group. New-AzNetworkSecurityGroup -Name 'myVM-nsg' -ResourceGroupName 'myResourceGroup' -Location eastus ``` 1. Create a virtual machine using [New-AzVM](/powershell/module/az.compute/new-azvm). When prompted, enter a username and password. ```azurepowershell-interactive- # Create a virtual machine. + # Create a virtual machine using the latest Windows Server 2022 image. New-AzVm -ResourceGroupName 'myResourceGroup' -Name 'myVM' -Location 'eastus' -VirtualNetworkName 'myVNet' -SubnetName 'mySubnet' -SecurityGroupName 'myVM-nsg' -ImageName 'MicrosoftWindowsServer:WindowsServer:2022-Datacenter-azure-edition:latest' ``` In this section, you create a virtual machine and a network security group appli 1. Create a default network security group using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create). ```azurecli-interactive- # Create a network security group for the network interface of the virtual machine. + # Create a default network security group. az network nsg create --name 'myVM-nsg' --resource-group 'myResourceGroup' --location 'eastus' ``` 1. Create a virtual machine using [az vm create](/cli/azure/vm#az-vm-create). When prompted, enter a username and password. ```azurecli-interactive- # Create a virtual machine. + # Create a virtual machine using the latest Windows Server 2022 image. 
az vm create --resource-group 'myResourceGroup' --name 'myVM' --location 'eastus' --vnet-name 'myVNet' --subnet 'mySubnet' --public-ip-address '' --nsg 'myVM-nsg' --image 'Win2022AzureEditionCore' ``` In this section, you add a security rule to the network security group associate # [**Portal**](#tab/portal) -1. In the search box at the top of the portal, enter *network security groups*. Select **Network security groups** in the search results. +1. In the search box at the top of the portal, enter ***network security groups***. Select **Network security groups** in the search results. 1. From the list of network security groups, select **myVM-nsg**. In this section, you add a security rule to the network security group associate | Destination port ranges | Enter *. | | Protocol | Select **Any**. | | Action | Select **Deny**. |- | Priority | Enter *1000*. | - | Name | Enter *DenyVnetInBound*. | + | Priority | Enter ***1000***. | + | Name | Enter ***DenyVnetInBound***. | 1. Select **Add**. + :::image type="content" source="./media/diagnose-network-security-rules/add-inbound-security-rule.png" alt-text="Screenshot shows how to add an inbound security rule to the network security group in the Azure portal."::: + # [**PowerShell**](#tab/powershell) Use [Add-AzNetworkSecurityRuleConfig](/powershell/module/az.network/add-aznetworksecurityruleconfig) to create a security rule that denies traffic from the virtual network. Then use [Set-AzNetworkSecurityGroup](/powershell/module/az.network/set-aznetworksecuritygroup) to update the network security group with the new security rule. Use [Add-AzNetworkSecurityRuleConfig](/powershell/module/az.network/add-aznetwor ```azurepowershell-interactive # Place the network security group configuration into a variable. $networkSecurityGroup = Get-AzNetworkSecurityGroup -Name 'myVM-nsg' -ResourceGroupName 'myResourceGroup'-# Create a security rule. +# Create a security rule that denies inbound traffic from the virtual network service tag. 
Add-AzNetworkSecurityRuleConfig -Name 'DenyVnetInBound' -NetworkSecurityGroup $networkSecurityGroup ` -Access 'Deny' -Protocol '*' -Direction 'Inbound' -Priority '1000' ` -SourceAddressPrefix 'virtualNetwork' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' Set-AzNetworkSecurityGroup -NetworkSecurityGroup $networkSecurityGroup Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to add to the network security group a security rule that denies traffic from the virtual network. ```azurecli-interactive-# Add a security rule to the network security group. +# Add to the network security group a security rule that denies inbound traffic from the virtual network service tag. az network nsg rule create --name 'DenyVnetInBound' --resource-group 'myResourceGroup' --nsg-name 'myVM-nsg' --priority '1000' \ --access 'Deny' --protocol '*' --direction 'Inbound' --source-address-prefixes 'virtualNetwork' --source-port-ranges '*' \ --destination-address-prefixes '*' --destination-port-ranges '*' Use NSG diagnostics to check the security rules applied to the traffic originate | Setting | Value | | - | |- | Subscription | Select the Azure subscription that has the virtual machine that you want to test the connection with. | - | Resource group | Select the resource group that has the virtual machine that you want to test the connection with. | - | Supported resource type | Select **Virtual machine**. | - | Resource | Select the virtual machine that you want to test the connection with. | + | **Target resource** | | + | Target resource type | Select **Virtual machine**. | + | Virtual machine | Select the **myVM** virtual machine. | + | **Traffic details** | | | Protocol | Select **TCP**. Other available options are **Any**, **UDP**, and **ICMP**. | | Direction | Select **Inbound**. The other available option is **Outbound**. | | Source type | Select **IPv4 address/CIDR**. The other available option is **Service Tag**. 
|- | IPv4 address/CIDR | Enter *10.0.1.0/26*, which is the IP address range of the Bastion subnet. Acceptable values are: single IP address, multiple IP addresses, single IP prefix, multiple IP prefixes. | - | Destination IP address | Enter *10.0.0.4*, which is the IP address of **myVM**. | + | IPv4 address/CIDR | Enter ***10.0.1.0/26***, which is the IP address range of the Bastion subnet. Acceptable values are: single IP address, multiple IP addresses, single IP prefix, multiple IP prefixes. | + | Destination IP address | Leave the default of **10.0.0.4**, which is the IP address of **myVM**. | | Destination port | Enter * to include all ports. | :::image type="content" source="./media/diagnose-network-security-rules/nsg-diagnostics-vm-values.png" alt-text="Screenshot showing required values for NSG diagnostics to test inbound connections to a virtual machine in the Azure portal." lightbox="./media/diagnose-network-security-rules/nsg-diagnostics-vm-values.png"::: -1. Select **Check** to run the test. Once NSG diagnostics completes checking all security rules, it displays the result. +1. Select **Run NSG diagnostics** to run the test. Once NSG diagnostics completes checking all security rules, it displays the result. :::image type="content" source="./media/diagnose-network-security-rules/nsg-diagnostics-vm-test-result-denied.png" alt-text="Screenshot showing the result of inbound connections to the virtual machine as Denied." lightbox="./media/diagnose-network-security-rules/nsg-diagnostics-vm-test-result-denied.png"::: Use NSG diagnostics to check the security rules applied to the traffic originate - **mySubnet-nsg**: this network security group is applied at the subnet level (subnet of the virtual machine). The rule allows inbound TCP traffic from the Bastion subnet to the virtual machine. - **myVM-nsg**: this network security group is applied at the network interface (NIC) level. 
The rule denies inbound TCP traffic from the Bastion subnet to the virtual machine. -1. Select **myVM-nsg** to see details about the security rules that this network security group has and which rule denied the traffic. +1. Select **View details** of **myVM-nsg** to see details about the security rules that this network security group has and which rule is denying the traffic. :::image type="content" source="./media/diagnose-network-security-rules/nsg-diagnostics-vm-test-result-denied-details.png" alt-text="Screenshot showing the details of the network security group that denied the traffic to the virtual machine." lightbox="./media/diagnose-network-security-rules/nsg-diagnostics-vm-test-result-denied-details.png"::: - In **myVM-nsg** network security group, the security rule **DenyVnetInBound** denies any traffic coming from the address space of **VirtualNetwork** service tag to the virtual machine. The Bastion host uses IP addresses from **10.0.1.0/26**, which are included **VirtualNetwork** service tag, to connect to the virtual machine. Therefore, the connection from the Bastion host is denied by the **DenyVnetInBound** security rule. + In **myVM-nsg** network security group, the security rule **DenyVnetInBound** denies any traffic coming from the address space of **VirtualNetwork** service tag to the virtual machine. The Bastion host uses IP addresses from the **10.0.1.0/26** address range, which is included in the **VirtualNetwork** service tag, to connect to the virtual machine. Therefore, the connection from the Bastion host is denied by the **DenyVnetInBound** security rule. # [**PowerShell**](#tab/powershell) In **myVM-nsg** network security group, the security rule **DenyVnetInBound** de -- ## Add a security rule to allow traffic from the Bastion subnet -To connect to **myVM** using Azure Bastion, traffic from the Bastion subnet must be allowed by the network security group. 
To allow traffic from **10.0.1.0/26**, add a security rule with a higher priority (lower priority number) than **DenyVnetInBound** rule. +To connect to **myVM** using Azure Bastion, traffic from the Bastion subnet must be allowed by the network security group. To allow traffic from **10.0.1.0/26**, add a security rule with a higher priority (lower priority number) than **DenyVnetInBound** rule or edit the **DenyVnetInBound** rule to allow traffic from the Bastion subnet. # [**Portal**](#tab/portal) You can add the security rule to the network security group from the Network Wat | Setting | Value | | | | | Source | Select **IP Addresses**. |- | Source IP addresses/CIDR ranges | Enter *10.0.1.0/26*, which is the IP address range of the Bastion subnet. | + | Source IP addresses/CIDR ranges | Enter ***10.0.1.0/26***, which is the IP address range of the Bastion subnet. | | Source port ranges | Enter *. | | Destination | Select **Any**. | | Service | Select **Custom**. | | Destination port ranges | Enter *. | | Protocol | Select **Any**. | | Action | Select **Allow**. |- | Priority | Enter *900*, which is higher priority than **1000** used for **DenyVnetInBound** rule. | - | Name | Enter *AllowBastionConnections*. | + | Priority | Enter ***900***, which is higher priority than **1000** used for **DenyVnetInBound** rule. | + | Name | Enter ***AllowBastionConnections***. | :::image type="content" source="./media/diagnose-network-security-rules/nsg-diagnostics-add-security-rule.png" alt-text="Screenshot showing how to add a new security rule to the network security group to allow the traffic to the virtual machine from the Bastion subnet." lightbox="./media/diagnose-network-security-rules/nsg-diagnostics-add-security-rule.png"::: You can add the security rule to the network security group from the Network Wat +## Clean up resources ++When no longer needed, delete the resource group and all of the resources it contains: ++# [**Portal**](#tab/portal) ++1. 
In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results. ++1. Select **Delete resource group**. ++1. In **Delete a resource group**, enter ***myResourceGroup***, and then select **Delete**. ++1. Select **Delete** to confirm the deletion of the resource group and all its resources. ++# [**PowerShell**](#tab/powershell) ++Use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to delete the resource group and all of the resources it contains. ++```azurepowershell-interactive +# Delete the resource group and all the resources it contains. +Remove-AzResourceGroup -Name 'myResourceGroup' -Force +``` ++# [**Azure CLI**](#tab/cli) ++Use [az group delete](/cli/azure/group#az-group-delete) to remove the resource group and all of the resources it contains. ++```azurecli-interactive +# Delete the resource group and all the resources it contains. +az group delete --name myResourceGroup --yes --no-wait +``` +++ ## Next steps - To learn about other Network Watcher tools, see [Azure Network Watcher overview](network-watcher-monitoring-overview.md). - To learn how to troubleshoot virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md). |
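The rule-priority behavior this entry relies on (a lower priority number wins, and the first matching rule decides the outcome) can be sketched in a few lines of Python. The rule set below is a hypothetical mirror of the **DenyVnetInBound** and **AllowBastionConnections** rules, with **10.0.0.0/8** standing in for the **VirtualNetwork** service tag; it is an illustration of first-match evaluation, not Azure's actual engine:

```python
import ipaddress

def evaluate(rules, source_ip):
    """Return (name, action) of the highest-priority (lowest number) matching rule."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["source"]):
            return rule["name"], rule["action"]
    return None, "Deny"  # implicit deny when no rule matches

rules = [
    {"name": "DenyVnetInBound", "priority": 1000, "source": "10.0.0.0/8", "action": "Deny"},
    {"name": "AllowBastionConnections", "priority": 900, "source": "10.0.1.0/26", "action": "Allow"},
]

# Traffic from the Bastion subnet now hits the priority-900 Allow rule first.
print(evaluate(rules, "10.0.1.5"))   # from the Bastion subnet
print(evaluate(rules, "10.0.0.7"))   # other virtual network traffic
```

This is why the article adds **AllowBastionConnections** at priority *900*: it is evaluated before the priority-1000 deny rule.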
network-watcher | Network Watcher Nsg Flow Logging Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-portal.md | The comma-separated information for **flowTuples** is as follows: ## Clean up resources -When no longer needed, delete **myResourceGroup** resource group and all of the resources it contains and **myVM-nsg-myResourceGroup-flowlog** flow log: --**Delete the resource group**: +When no longer needed, delete **myResourceGroup** resource group and all of the resources it contains: 1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results. When no longer needed, delete **myResourceGroup** resource group and all of the 1. Select **Delete** to confirm the deletion of the resource group and all its resources. -**Delete the flow log**: --1. In the search box at the top of the portal, enter ***network watcher***. Select **Network Watcher** from the search results. --1. Under **Logs**, select **Flow logs**. --1. In **Network Watcher | Flow logs**, select the checkbox of the flow log. --1. Select **Delete**. +> [!NOTE] +> The **myVM-nsg-myResourceGroup-flowlog** flow log is in the **NetworkWatcherRG** resource group, but it'll be deleted after deleting the **myVM-nsg** network security group (by deleting the **myResourceGroup** resource group). ## Next steps |
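The comma-separated **flowTuples** format this entry mentions lends itself to a small parser. The sketch below assumes the version 2 tuple layout (timestamp, addresses, ports, protocol, direction, decision, flow state, then traffic counters); the dictionary keys are illustrative names, not the official schema:

```python
def parse_flow_tuple(tuple_str):
    """Parse one version-2 NSG flow log tuple into a dict (sketch)."""
    keys = [
        "timestamp", "source_ip", "dest_ip", "source_port", "dest_port",
        "protocol",      # T = TCP, U = UDP
        "direction",     # I = inbound, O = outbound
        "decision",      # A = allowed, D = denied
        "flow_state",    # B = begin, C = continuing, E = end (version 2 only)
        "packets_src_to_dest", "bytes_src_to_dest",
        "packets_dest_to_src", "bytes_dest_to_src",
    ]
    # Begin records may carry empty counter fields; they stay as empty strings.
    return dict(zip(keys, tuple_str.split(",")))

flow = parse_flow_tuple("1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A,B,,,,")
print(flow["direction"], flow["decision"])
```

A parser like this is handy when sifting denied (`D`) flows out of a downloaded flow log.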
postgresql | Concepts Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md | description: This article describes the scheduled maintenance feature in Azure D --++ Previously updated : 11/30/2021 Last updated : 7/25/2023 # Scheduled maintenance in Azure Database for PostgreSQL – Flexible Server Azure Database for PostgreSQL - Flexible Server performs periodic maintenance to ## Selecting a maintenance window -You can schedule maintenance during a specific day of the week and a time window within that day. Or you can let the system pick a day and a time window time for you automatically. Either way, the system will alert you five days before running any maintenance. The system will also let you know when maintenance is started, and when it is successfully completed. +You can schedule maintenance during a specific day of the week and a time window within that day. Or you can let the system pick a day and a time window for you automatically. Either way, the system alerts you five days before running any maintenance. The system will also let you know when maintenance is started, and when it's successfully completed. Notifications about upcoming scheduled maintenance can be: When specifying preferences for the maintenance schedule, you can pick a day of > > However, in case of a critical emergency update such as a severe vulnerability, the notification window could be shorter than five days or be omitted. The critical update may be applied to your server even if a successful scheduled maintenance was performed in the last 30 days. -You can update scheduling settings at any time. If there is maintenance scheduled for your flexible server and you update scheduling preferences, the current rollout will proceed as scheduled and the scheduling settings change will become effective upon its successful completion for the next scheduled maintenance.
+You can update scheduling settings at any time. If there's maintenance scheduled for your flexible server and you update scheduling preferences, the current rollout proceeds as scheduled and the scheduling settings change will become effective upon its successful completion for the next scheduled maintenance. You can define system-managed schedule or custom schedule for each flexible server in your Azure subscription. * With custom schedule, you can specify your maintenance window for the server by choosing the day of the week and a one-hour time window. -* With system-managed schedule, the system will pick any one-hour window between 11pm and 7am in your server's region time. +* With system-managed schedule, the system picks any one-hour window between 11pm and 7am in your server's region time. -As part of rolling out changes, we apply the updates to the servers configured with system-managed schedule first followed by servers with custom schedule after a minimum gap of 7-days within a given region. If you intend to receive early updates on fleet of development and test environment servers, we recommend you configure system-managed schedule for servers used in development and test environment. This will allow you to receive the latest update first in your Dev/Test environment for testing and evaluation for validation. If you encounter any behavior or breaking changes, you will have time to address them before the same update is rolled out to production servers with custom-managed schedule. The update starts to roll out on custom-schedule flexible servers after 7 days and is applied to your server at the defined maintenance window. At this time, there is no option to defer the update after the notification has been sent. Custom-schedule is recommended for production environments only. 
+As part of rolling out changes, we apply the updates to the servers configured with system-managed schedule first followed by servers with custom schedule after a minimum gap of 7-days within a given region. If you intend to receive early updates on a fleet of development and test environment servers, we recommend you configure system-managed schedule for servers used in development and test environments. This allows you to receive the latest update first in your Dev/Test environment for testing and evaluation for validation. If you encounter any behavior or breaking changes, you'll have time to address them before the same update is rolled out to production servers with custom-managed schedule. The update starts to roll out on custom-schedule flexible servers after seven days and is applied to your server at the defined maintenance window. At this time, there's no option to defer the update after the notification has been sent. Custom-schedule is recommended for production environments only. -In rare cases, maintenance event can be canceled by the system or may fail to complete successfully. If the update fails, the update will be reverted, and the previous version of the binaries is restored. In such failed update scenarios, you may still experience restart of the server during the maintenance window. If the update is canceled or failed, the system will create a notification about canceled or failed maintenance event respectively notifying you. The next attempt to perform maintenance will be scheduled as per your current scheduling settings and you will receive notification about it five days in advance. +In rare cases, a maintenance event can be canceled by the system or may fail to complete successfully. If the update fails, the update is reverted, and the previous version of the binaries is restored. In such failed update scenarios, you may still experience restart of the server during the maintenance window.
If the update is canceled or failed, the system creates a notification about the canceled or failed maintenance event, respectively, to notify you. The next attempt to perform maintenance will be scheduled as per your current scheduling settings and you'll receive a notification about it five days in advance. ## Next steps |
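Per this entry, the system-managed schedule picks an arbitrary one-hour window between 11 PM and 7 AM in the server region's time. A minimal Python sketch of that selection constraint (illustrative only; the service's actual scheduler is not public):

```python
import random

# Valid start hours for a one-hour window between 11 PM and 7 AM:
# 23:00, then 00:00 through 06:00 (the last window ends at 07:00).
VALID_START_HOURS = [23, 0, 1, 2, 3, 4, 5, 6]

def pick_system_managed_window(rng=random):
    """Return (start_hour, end_hour) for a system-managed maintenance window."""
    start = rng.choice(VALID_START_HOURS)
    return start, (start + 1) % 24

start, end = pick_system_managed_window()
print(f"Maintenance window: {start:02d}:00-{end:02d}:00")
```

With a custom schedule you would instead supply the day of the week and the one-hour start time yourself.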
postgresql | Concepts Pgbouncer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md | |
postgresql | Concepts Query Store Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-best-practices.md | |
postgresql | Concepts Query Store Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-scenarios.md | |
postgresql | Concepts Query Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md | |
postgresql | How To Configure Sign In Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md | Azure AD is a multitenant application. It requires outbound connectivity to perf - **Private access (virtual network integration)**: - You need an outbound network security group (NSG) rule to allow virtual network traffic to only reach the `AzureActiveDirectory` service tag.+ - If you're using a route table, you need to create a rule with destination service tag `AzureActiveDirectory` and next hop `Internet`. - Optionally, if you're using a proxy, you can add a new firewall rule to allow HTTP/S traffic to reach only the `AzureActiveDirectory` service tag. To set the Azure AD admin during server provisioning, follow these steps: |
postgresql | How To Cost Optimization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-cost-optimization.md | Last updated 4/13/2023 # How to optimize costs in Azure Database for Postgres Flexible Server + Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on the [PostgreSQL Community Edition](https://www.postgresql.org/). It's a fully managed database as a service offering that can handle mission-critical workloads with predictable performance and dynamic scalability. This article provides a list of recommendations for optimizing Azure Postgres Flexible Server cost. The list includes design considerations, a configuration checklist, and recommended database settings to help you optimize your workload. |
private-5g-core | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/whats-new.md | To help you stay up to date with the latest developments, this article covers: This page is updated regularly with the latest developments in Azure Private 5G Core. +## July 2023 ++### 2023-06-01 API ++**Type:** New release ++**Date available:** July 19, 2023 ++The 2023-06-01 ARM API release introduces the ability to configure several upcoming Azure Private 5G Core features. From July 19th, 2023-06-01 is the default API version for Azure Private 5G Core deployments. + +If you use the Azure portal to manage your deployment and all your resources were created using the 2022-04-01-preview API or 2022-11-01, you don't need to do anything. Your portal will use the new API. + +ARM API users with existing resources can continue to use the 2022-04-01-preview API or 2022-11-01 without updating their templates. +ARM API users can migrate to the 2023-06-01 API with their current resources with no ARM template changes (other than specifying the newer API version). + +Note: ARM API users who have done a PUT using the 2023-06-01 API and have enabled configuration only accessible in the up-level API cannot go back to using the 2022-11-01 API for PUTs. If they do, then the up-level config will be deleted. + ## June 2023 ### Packet core 2306 |
route-server | Next Hop Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/next-hop-ip.md | Title: Next Hop IP Support+ description: Learn how to use Next Hop IP feature in Azure Route Server to peer with network virtual appliances (NVAs) behind an internal load balancer. - Previously updated : 07/26/2022+ Last updated : 07/25/2023 # Next Hop IP support -With the support for Next Hop IP in [Azure Route Server](overview.md), you can peer with network virtual appliances (NVAs) that are deployed behind an Azure internal load balancer. The internal load balancer lets you set up active-passive connectivity scenarios and leverage load balancing to improve connectivity performance. +With the support for Next Hop IP in Azure Route Server, you can peer with network virtual appliances (NVAs) that are deployed behind an Azure internal load balancer. The internal load balancer lets you set up active-passive connectivity scenarios and leverage load balancing to improve connectivity performance. :::image type="content" source="./media/next-hop-ip/route-server-next-hop.png" alt-text="Diagram of two NVAs behind a load balancer and a Route Server."::: ## Active-passive NVA connectivity -You can deploy a set of active-passive NVAs behind an internal load balancer to ensure symmetrical routing to and from the NVA. With the support for Next Hop IP, you can define the next hop for both the active and passive NVAs as the IP address of the internal load balancer and set up the load balancer to direct traffic towards the Active NVA instance. +You can deploy a set of active-passive NVAs behind an internal load balancer to ensure symmetrical routing to and from the NVA. With the support for Next hop IP, you can define the next hop for both the active and passive NVAs as the IP address of the internal load balancer and set up the load balancer to direct traffic towards the Active NVA instance. 
## Active-active NVA connectivity -You can deploy a set of active-active NVAs behind an internal load balancer to optimize connectivity performance. With the support for Next Hop IP, you can define the next hop for both NVA instances as the IP address of the internal load balancer. Traffic that reaches the load balancer will be sent to both NVA instances. +You can deploy a set of active-active NVAs behind an internal load balancer to optimize connectivity performance. With the support for Next hop IP, you can define the next hop for both NVA instances as the IP address of the internal load balancer. Traffic that reaches the load balancer is sent to both NVA instances. > [!NOTE] > * Active-active NVA connectivity may result in asymmetric routing. ## Next hop IP configuration -Next Hop IPs are set up in the BGP configuration of the target NVAs. The Next Hop IP isn't part of the Azure Route Server configuration. +Next hop IPs are set up in the BGP configuration of the target NVAs. The Next hop IP isn't part of the Azure Route Server configuration. ## Next steps |
sap | Provider Hana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-hana.md | To [enable TLS 1.2 higher](enable-tls-azure-monitor-sap-solutions.md) for the SA - An Azure subscription. - An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](quickstart-portal.md) or the [quickstart for PowerShell](quickstart-powershell.md). -## Configure Azure Monitor for SAP solutions +## Configure SAP HANA provider 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Azure Monitors for SAP solutions** in the search bar. To [enable TLS 1.2 higher](enable-tls-azure-monitor-sap-solutions.md) for the SA ![Diagram that shows the Azure Monitor for SAP solutions resource creation page in the Azure portal, showing all required form fields.](./media/provider-hana/azure-monitor-providers-hana-setup.png) 1. Optionally, select **Enable secure communication** and choose the certificate type from the dropdown menu. 1. For **IP address**, enter the IP address or hostname of the server that runs the SAP HANA instance that you want to monitor. If you're using a hostname, make sure there's connectivity within the virtual network.- 1. For **Database tenant**, enter the HANA database that you want to connect to. We recommend that you use **SYSTEMDB** because tenant databases don't have all monitoring views. For legacy single-container HANA 1.0 instances, leave this field blank. + 1. For **Database tenant**, enter the HANA database that you want to connect to. We recommend that you use **SYSTEMDB** because tenant databases don't have all monitoring views. 1. For **Instance number**, enter the instance number of the database (0-99). The SQL port is automatically determined based on the instance number. 1. For **Database username**, enter the dedicated SAP HANA database user. 
This user needs the **MONITORING** or **BACKUP CATALOG READ** role assignment. For nonproduction SAP HANA instances, use **SYSTEM** instead. 1. For **Database password**, enter the password for the database username. You can either enter the password directly or use a secret inside Azure Key Vault. 1. Save your changes to the Azure Monitor for SAP solutions resource. +> [!Note] +> Azure Monitor for SAP solutions supports HANA 2.0 SP6 and later versions. Legacy HANA 1.0 is not supported. + ## Next steps > [!div class="nextstepaction"] |
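This entry notes that the SQL port is determined automatically from the instance number (0-99). For SAP HANA, the SYSTEMDB SQL port conventionally follows the pattern `3<NN>13`; the helper below is a sketch of that derivation (the function name is illustrative, and tenant databases use different port offsets, so verify against your own system):

```python
def systemdb_sql_port(instance_number):
    """SQL port of SYSTEMDB for a HANA instance number (0-99): 3<NN>13."""
    if not 0 <= instance_number <= 99:
        raise ValueError("instance number must be 0-99")
    return int(f"3{instance_number:02d}13")

print(systemdb_sql_port(0))   # instance 00
print(systemdb_sql_port(10))  # instance 10
```

This is the kind of port the provider computes when you enter the instance number in the portal form.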
search | Semantic How To Query Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md | There are two main activities to perform: ## Prerequisites -+ A search service on Standard tier (S1, S2, S3) or Storage Optimized tier (L1, L2), in these regions: Australia East, East US, East US 2, North Central US, South Central US, West US, West US 2, North Europe, UK South, West Europe. ++ A search service on Standard tier (S1, S2, S3) or Storage Optimized tier (L1, L2), subject to [region availability](https://azure.microsoft.com/global-infrastructure/services/?products=search). - If you have an existing S1 or greater service in one of these regions, you can enable semantic search without having to create a new service. + If you have an existing S1 or greater service in a supported region, you can enable semantic search without having to create a new service. + Semantic search [enabled on your search service](semantic-search-overview.md#enable-semantic-search). If you happen to include an invalid field, there's no error, but those fields wo ## 3 - Avoid features that bypass relevance scoring -Several query capabilities in Cognitive Search don't undergo relevance scoring, and some bypass the full text search engine altogether. If your query logic includes the following features, you won't get relevance scores or semantic ranking on your results: +Several query capabilities in Cognitive Search bypass relevance scoring. If your query logic includes the following features, you won't get relevance scores or semantic ranking on your results: -+ Filters, fuzzy search queries, and regular expressions iterate over untokenized text, scanning for verbatim matches in the content. Search scores for all of the above query forms are a uniform 1.0, and won't provide meaningful input for semantic ranking. 
++ Filters, fuzzy search queries, and regular expressions iterate over untokenized text, scanning for verbatim matches in the content. Search scores for all of the above query forms are a uniform 1.0, and won't provide meaningful input for semantic ranking because there's no way to select the top 50 matches. -+ Sorting (orderBy clauses) on specific fields will also override search scores and semantic score. Given that semantic score is used to order results, including explicit sort logic will cause an HTTP 400 error to be returned. ++ Sorting (orderBy clauses) on specific fields will also override search scores and semantic score. Given that semantic score is used to order results, including explicit sort logic will cause an HTTP 400 error to be returned if you run a semantic query over ordered results. ## 4 - Set up the query The response for the above example query returns the following match as the top "Description": "New Luxury Hotel. Be the first to stay. Bay views from every room, location near the pier, rooftop pool, waterfront dining & more.", "Category": "Luxury" },+ ... +] ``` > [!NOTE] |
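The query setup this entry describes can be scripted. The sketch below only builds the JSON body for a semantic query; the field names follow the REST API shape the article discusses, but the configuration name is an assumption, and you would still need to POST the body to your own service URL with an `api-key` header:

```python
import json

def build_semantic_query(search_text, config="default"):
    """Build the request body for a semantic query (sketch)."""
    if not search_text or search_text == "*":
        # An empty search string ("search=*") isn't billed as a semantic query.
        raise ValueError("semantic queries need a non-empty search string")
    return {
        "search": search_text,
        "queryType": "semantic",
        "semanticConfiguration": config,
        "queryLanguage": "en-us",
    }

body = build_semantic_query("pet friendly hotels in New York")
print(json.dumps(body))
```

Note the guard clause: it mirrors the billing rule that an empty search string is never charged as semantic.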
search | Semantic Ranking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-ranking.md | Each document is now represented by a single long string. The string is composed of tokens, not characters or words. The maximum token count is 128 unique tokens. For estimation purposes, you can assume that 128 tokens are roughly equivalent to a string that is 128 words in length. > [!NOTE]-> Tokenization is determined in part by the analyzer assignment on searchable fields. If you are using specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from searchFields. For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer). +> Tokenization is determined in part by the analyzer assignment on searchable fields. If you are using specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from "searchFields". For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer). ## Extraction |
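The 128-token cap this entry describes means long fields are effectively truncated before semantic ranking. A rough way to estimate what survives, using whitespace-separated words as a crude stand-in for real analyzer tokens (actual tokenization depends on the field's analyzer, as the note says):

```python
def estimate_ranked_text(text, max_tokens=128):
    """Keep roughly the first max_tokens words; a crude proxy for tokenization."""
    words = text.split()
    return " ".join(words[:max_tokens]), len(words)

doc = "word " * 300  # a 300-word document
kept, total = estimate_ranked_text(doc)
print(f"{total} words, {len(kept.split())} kept for ranking")
```

For a faithful view of what the analyzer actually emits, use the Test Analyzer REST API mentioned in the note instead of this approximation.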
search | Semantic Search Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md | -Currently in Azure Cognitive Search, "semantic search" is a collection of query-related capabilities that bring semantic relevance and language understanding to textual search results. This article is a high-level introduction to semantic search. The embedded video describes the feature, and the section at the end covers availability and pricing. +Currently in Azure Cognitive Search, "semantic search" is a collection of query-related capabilities that bring semantic relevance and language understanding to textual search results. This article is a high-level introduction to semantic search. The [embedded video](#how-semantic-ranking-works) describes the technology, and the section at the end covers availability and pricing. Semantic search is a premium feature that's billed by usage. We recommend this article for background, but if you'd rather get started, follow these steps: Although semantic search and vector search are closely related, this particular | Feature | Description | ||-| | [Semantic re-ranking](semantic-ranking.md) | Uses the context or semantic meaning of a query to compute a new relevance score over existing results. |-| [Semantic captions and highlights](semantic-how-to-query-request.md) | Extracts sentences and phrases from a document that best summarize the content, with highlights over key passages for easy scanning. Captions that summarize a result are useful when individual content fields are too dense for the results page. Highlighted text elevates the most relevant terms and phrases so that users can quickly determine why a match was considered relevant. | +| [Semantic captions and highlights](semantic-how-to-query-request.md) | Extracts verbatim sentences and phrases from a document that best summarize the content, with highlights over key passages for easy scanning. 
Captions that summarize a result are useful when individual content fields are too dense for the search results page. Highlighted text elevates the most relevant terms and phrases so that users can quickly determine why a match was considered relevant. | | [Semantic answers](semantic-answers.md) | An optional and extra substructure returned from a semantic query. It provides a direct answer to a query that looks like a question. It requires that a document has text with the characteristics of an answer. | ## How semantic ranking works *Semantic ranking* looks for context and relatedness among terms, elevating matches that make more sense given the query. Language understanding finds summarizations or *captions* and *answers* within your content and includes them in the response, which can then be rendered on a search results page for a more productive search experience. -State-of-the-art pretrained models are used for summarization and ranking. To maintain the fast performance that users expect from search, semantic summarization and ranking are applied to just the top 50 results, as scored by the [default scoring algorithm](index-similarity-and-scoring.md). Using those results as the document corpus, semantic ranking re-scores those results based on the semantic strength of the match. +State-of-the-art pretrained models are used for summarization and ranking. To maintain the fast performance that users expect from search, semantic summarization and ranking are applied to just the top 50 results, as scored by the [default scoring algorithm](index-similarity-and-scoring.md). Using those results as the corpus, semantic ranking re-scores those results based on the semantic strength of the match. The underlying technology is from Bing and Microsoft Research, and integrated into the Cognitive Search infrastructure as an add-on feature. 
For more information about the research and AI investments backing semantic search, see [How AI from Bing is powering Azure Cognitive Search (Microsoft Research Blog)](https://www.microsoft.com/research/blog/the-science-behind-semantic-search-how-ai-from-bing-is-powering-azure-cognitive-search/). Semantic search and spell check are available on services that meet the criteria <sup>2</sup> Due to the provisioning mechanisms and lifespan of shared (free) search services, a few services happen to have spell check on the free tier. However, spell check availability on free tier services isn't guaranteed and shouldn't be expected. -Charges for semantic search are levied when query requests include "queryType=semantic" and the search string isn't empty (for example, "search=pet friendly hotels in New York". If your search string is empty ("search=*"), you won't be charged, even if the queryType is set to "semantic". +Charges for semantic search are levied when query requests include "queryType=semantic" and the search string isn't empty (for example, "search=pet friendly hotels in New York"). If your search string is empty ("search=*"), you won't be charged, even if the queryType is set to "semantic". ## Enable semantic search |
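The two-stage flow this entry describes (default scoring first, then semantic re-scoring of only the top 50) can be sketched as follows; `semantic_score` here is a hypothetical stand-in for the pretrained model:

```python
def rerank(results, semantic_score, top_n=50):
    """Re-score only the top_n default-ranked results (sketch)."""
    default_order = sorted(results, key=lambda r: r["score"], reverse=True)
    head, tail = default_order[:top_n], default_order[top_n:]
    # Only the head is re-ordered by the (hypothetical) semantic model;
    # results beyond top_n keep their default order.
    return sorted(head, key=semantic_score, reverse=True) + tail

# 60 documents; the default score decreases with id.
results = [{"id": i, "score": 1.0 / (i + 1)} for i in range(60)]
out = rerank(results, semantic_score=lambda r: r["id"])
print(out[0]["id"], out[-1]["id"])
```

Limiting re-scoring to the top 50 is what keeps query latency close to that of the default ranking.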
security | Key Management Choose | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/key-management-choose.md | It also refers to these various key management use cases: - _Encryption at rest_ is typically enabled for Azure IaaS, PaaS, and SaaS models. Applications such as Microsoft 365; Microsoft Purview Information Protection; platform services in which the cloud is used for storage, analytics, and service bus functionality; and infrastructure services in which operating systems and applications are hosted and deployed in the cloud use encryption at rest. _Customer managed keys for encryption at rest_ is used with Azure Storage and Azure AD. For highest security, keys should be HSM-backed, 3k or 4k RSA keys. For more information about encryption at rest, see [Azure Data Encryption at Rest](encryption-atrest.md). - _SSL/TLS Offload_ is supported on Azure Managed HSM and Azure Dedicated HSM. Customers have improved high availability, security, and best price point on Azure Managed HSM for F5 and Nginx. - _Lift and shift_ refer to scenarios where a PKCS11 application on-premises is migrated to Azure Virtual Machines and running software such as Oracle TDE in Azure Virtual Machines. Lift and shift requiring payment processing is supported by Azure Payment HSM. All other scenarios are supported by Azure Dedicated HSM. Legacy APIs and libraries such as PKCS11, JCA/JCE, and CNG/KSP are only supported by Azure Dedicated HSM.-- _Payment transactions/processing_ includes allowing card and mobile payment authorization and 3D-Secure authentication; PIN generation, management, and validation; payment credential issuing for cards, wearables, and connected devices; securing keys and authentication data; and sensitive data protection for point-to-point encryption, security tokenization, and EMV payment tokenization. This also includes certifications such as PCI DSS, PCI 3DS, and PCI PIN. 
These are only supported by Azure Payment HSM.+- _Payment PIN processing_ includes allowing card and mobile payment authorization and 3D-Secure authentication; PIN generation, management, and validation; payment credential issuing for cards, wearables, and connected devices; securing keys and authentication data; and sensitive data protection for point-to-point encryption, security tokenization, and EMV payment tokenization. This also includes certifications such as PCI DSS, PCI 3DS, and PCI PIN. These are supported by Azure Payment HSM. The flowchart result is a starting point to identify the solution that best matches your needs. Use the table to compare all the solutions side by side. Begin from top to botto | | **AKV Standard** | **AKV Premium** | **Azure Managed HSM** | **Azure Dedicated HSM** | **Azure Payment HSM** | | | | | | | |-| What level of **compliance** do you need? | FIPS 140-2 level 1 | FIPS 140-2 level 2 | FIPS 140-2 level 3 | FIPS 140-2 level 3 | FIPS 140-2 level 3, PCI HSM v3 | +| What level of **compliance** do you need? | FIPS 140-2 level 1 | FIPS 140-2 level 2, PCI DSS | FIPS 140-2 level 3, PCI DSS, PCI 3DS | FIPS 140-2 level 3, HIPAA, PCI DSS, PCI 3DS, eIDAS CC EAL4+, GSMA | FIPS 140-2 level 3, PCI PTS HSM v3, PCI DSS, PCI 3DS, PCI PIN | | Do you need **key sovereignty**? | No | No | Yes | Yes | Yes | | What kind of **tenancy** are you looking for? | Multi Tenant | Multi Tenant | Single Tenant | Single Tenant | Single Tenant |-| What are your **use cases**? | Encryption at Rest, CMK, custom | Encryption at Rest, CMK, custom | Encryption at Rest, TLS Offload, CMK, custom | PKCS11, TLS Offload, code/document signing, custom | Payment transactions and processes, custom | +| What are your **use cases**? | Encryption at Rest, CMK, custom | Encryption at Rest, CMK, custom | Encryption at Rest, TLS Offload, CMK, custom | PKCS11, TLS Offload, code/document signing, custom | Payment PIN processing, custom | | Do you want **HSM hardware protection**?
| No | Yes | Yes | Yes | Yes | | What is your **budget**? | $ | $$ | $$$ | $$$$ | $$$$ | | Who takes responsibility for **patching and maintenance**? | Microsoft | Microsoft | Microsoft | Customer | Customer | Here is a list of the key management solutions we commonly see being utilized ba | **Industry** | **Suggested Azure solution** | **Considerations for suggested solutions** | | | | | | I am a financial service customer with strict security compliance requirements. | Azure Managed HSM | Azure Managed HSM provides FIPS 140-2 Level 3 compliance. It provides HSM-backed keys and gives customers key sovereignty and single tenancy. |-| I am a customer looking for PCI compliance and support for payment and credit card processing services. | Azure Payment HSM | Azure Payment HSM provides FIPS 140-2 Level 3 and PCI HSM v3 compliance. It provides key sovereignty and single tenancy, common internal compliance requirements around payment processing. Azure Payment HSM provides full payment transaction and processing support. | +| I am a customer looking for PCI compliance and support for payment and credit card processing services. | Azure Payment HSM | Azure Payment HSM provides FIPS 140-2 Level 3 and PCI HSM v3 compliance. It provides key sovereignty and single tenancy, common internal compliance requirements around payment processing. Azure Payment HSM provides full payment transaction and PIN processing support. | | I am an early-stage startup customer looking to prototype a cloud-native application. | Azure Key Vault Standard | Azure Key Vault Standard provides software-backed keys at an economical price. | | I am a startup customer looking to produce a cloud-native application. | Azure Key Vault Premium, Azure Managed HSM | Both Azure Key Vault Premium and Azure Managed HSM provide HSM-backed keys\* and are the best solutions for building cloud native applications. | | I am an IaaS customer wanting to move my application to use Azure VM/HSMs. 
| Azure Dedicated HSM | Azure Dedicated HSM supports SQL IaaS customers. It is the only solution that supports PKCS11 and custom non-cloud native applications. | Here is a list of the key management solutions we commonly see being utilized ba **Azure Dedicated HSM**: A FIPS 140-2 Level 3 validated single-tenant bare metal HSM offering that lets customers lease a general-purpose HSM appliance that resides in Microsoft datacenters. The customer has complete ownership over the HSM device and is responsible for patching and updating the firmware when required. Microsoft has no permissions on the device or access to the key material, and Azure Dedicated HSM is not integrated with any Azure PaaS offerings. Customers can interact with the HSM using the PKCS#11, JCE/JCA, and KSP/CNG APIs. This offering is most useful for legacy lift-and-shift workloads, PKI, SSL Offloading and Keyless TLS (supported integrations include F5, Nginx, Apache, Palo Alto, IBM GW and more), OpenSSL applications, Oracle TDE, and Azure SQL TDE IaaS. For more information, see [What is Azure Dedicated HSM?](../../dedicated-hsm/overview.md) -**Azure Payment HSM**: A FIPS 140-2 Level 3, PCI HSM v3, validated single-tenant bare metal HSM offering that lets customers lease a payment HSM appliance in Microsoft datacenters for payments operations, including payment processing, payment credential issuing, securing keys and authentication data, and sensitive data protection. The service is PCI DSS, PCI 3DS, and PCI PIN compliant. Azure Payment HSM offers single-tenant HSMs for customers to have complete administrative control and exclusive access to the HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released, to ensure complete privacy and security is maintained. For more information, see [About Azure Payment HSM](../../payment-hsm/overview.md). 
+**Azure Payment HSM**: A FIPS 140-2 Level 3, PCI HSM v3, validated single-tenant bare metal HSM offering that lets customers lease a payment HSM appliance in Microsoft datacenters for payments operations, including payment PIN processing, payment credential issuing, securing keys and authentication data, and sensitive data protection. The service is PCI DSS, PCI 3DS, and PCI PIN compliant. Azure Payment HSM offers single-tenant HSMs for customers to have complete administrative control and exclusive access to the HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released, to ensure complete privacy and security is maintained. For more information, see [About Azure Payment HSM](../../payment-hsm/overview.md). > [!NOTE] > \* Azure Key Vault Premium allows the creation of both software-protected and HSM protected keys. If using Azure Key Vault Premium, check to ensure that the key created is HSM protected. |
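The comparison table reads as a top-down decision: payment PIN workloads force Payment HSM, PKCS11/legacy needs force Dedicated HSM, key sovereignty forces Managed HSM, and so on. As an illustration only, that selection logic can be sketched in a few lines; `pick_key_management_solution` and its flag names are hypothetical helpers, not part of any Azure SDK:

```python
def pick_key_management_solution(
    needs_payment_pin=False,      # PCI PIN / payment PIN processing workloads
    needs_pkcs11=False,           # PKCS#11 / legacy lift-and-shift applications
    needs_key_sovereignty=False,  # single tenant, customer-controlled keys
    needs_hsm_hardware=False,     # keys must be HSM-protected
):
    """Return the offering suggested by the comparison table, checked top to bottom."""
    if needs_payment_pin:
        return "Azure Payment HSM"        # single tenant, FIPS 140-2 L3, PCI PIN
    if needs_pkcs11:
        return "Azure Dedicated HSM"      # single tenant, customer patches/maintains
    if needs_key_sovereignty:
        return "Azure Managed HSM"        # single tenant, Microsoft patches/maintains
    if needs_hsm_hardware:
        return "Azure Key Vault Premium"  # multi-tenant, HSM-backed keys
    return "Azure Key Vault Standard"     # multi-tenant, software-protected keys
```

The ordering matters: the more restrictive requirements (payment PIN, PKCS11) each imply a single-tenant product, so they are tested before the cheaper multi-tenant options.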
site-recovery | Vmware Physical Mobility Service Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md | During a push installation of the Mobility service, the following steps are perf 1. As part of the agent installation, the Volume Shadow Copy Service (VSS) provider for Azure Site Recovery is installed. The VSS provider is used to generate application-consistent recovery points. - If the VSS provider installation fails, the agent installation will fail. To avoid failure of the agent installation, use [version 9.23](https://support.microsoft.com/help/4494485/update-rollup-35-for-azure-site-recovery) or higher to generate crash-consistent recovery points and do a manual install of the VSS provider. ++## Install the Mobility service using UI (Modernized) ++>[!NOTE] +> This section is applicable to Azure Site Recovery - Modernized. [Here are the installation instructions for Classic](#install-the-mobility-service-using-ui-classic). ++### Prerequisites ++Locate the installer files for the server's operating system using the following steps: +- On the appliance, go to the folder *E:\Software\Agents*. +- Copy the installer corresponding to the source machine's operating system and place it on your source machine in a local folder, such as *C:\Program Files (x86)\Microsoft Azure Site Recovery*. ++**Use the following steps to install the mobility service:** ++1. Copy the installation file to the location *C:\Program Files (x86)\Microsoft Azure Site Recovery*, and run it. This will launch the installer UI: ++2. Provide the install location in the UI. This should be *C:\Program Files (x86)\Microsoft Azure Site Recovery*. ++3. Click **Install**. This will start the installation of Mobility Service. Wait until the installation completes. 
++ ![Image showing Install UI option for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/mobility-service-install.png) ++ ![Image showing Installation progress for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/installation-progress.png) ++4. Once the installation is done, you will need to register the source machine with the appliance of your choice. To do so, copy the string present in the field **Machine Details**. ++ This field includes information that is unique to the source machine. This information is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file). Learn more about [credential-less discovery](#credential-less-discovery-in-modernized-architecture). ++ ![Screenshot showing source machine string.](./media/vmware-physical-mobility-service-overview-modernized/source-machine-string.png) ++5. [Generate the configuration file](#generate-mobility-service-configuration-file) using the unique source machine identifier. Once done, provide the path of the **Mobility Service configuration file** in the Unified Agent configurator. +6. Click **Register**. ++ This will successfully register your source machine with your appliance. ++## Install the Mobility service using command prompt (Modernized) ++>[!NOTE] +> This section is applicable to Azure Site Recovery - Modernized. [Here are the installation instructions for Classic](#install-the-mobility-service-using-command-prompt-classic). ++### Windows machine +1. Open a command prompt and navigate to the folder where the installer file has been placed. ++ ```cmd + cd C:\Program Files (x86)\Microsoft Azure Site Recovery + ``` +2. Run the following command to extract the installer file: + ```cmd + .\Microsoft-ASR_UA*Windows*release.exe /q /x:"C:\Program Files (x86)\Microsoft Azure Site Recovery" + ``` +3. 
To proceed with the installation, run the following command: ++ ```cmd ++ .\UnifiedAgentInstaller.exe /Platform vmware /Silent /Role MS /CSType CSPrime /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery" + ``` + Once the installation is complete, copy the string that is generated alongside the parameter *Agent Config Input*. This string is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file). ++ ![Sample string for downloading the configuration file](./media/vmware-physical-mobility-service-overview-modernized/configuration-string.png) ++4. After successfully installing, register the source machine with the appliance using the following command: ++ ```cmd + "C:\Program Files (x86)\Microsoft Azure Site Recovery\agent\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime /CredentialLessDiscovery true + ``` ++#### Installation settings ++Setting | Details + | +Syntax | `.\UnifiedAgentInstaller.exe /Platform vmware /Role MS /CSType CSPrime /InstallLocation <Install Location>` +`/Role` | Mandatory installation parameter. Specifies that the Mobility service (MS) will be installed. +`/InstallLocation`| Optional. Specifies the Mobility service installation location (any folder). +`/Platform` | Mandatory. Specifies the platform on which the Mobility service is installed: <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs.<br/><br/> If you're treating Azure VMs as physical machines, specify **VMware**. +`/Silent`| Optional. Specifies whether to run the installer in silent mode. +`/CSType`| Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy) ++#### Registration settings ++Setting | Details + | +Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime /CredentialLessDiscovery true` +`/SourceConfigFilePath` | Mandatory. 
Full file path of the Mobility Service configuration file. Use any valid folder. +`/CSType` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy). +`/CredentialLessDiscovery` | Optional. Specifies whether credential-less discovery will be performed or not. +++### Linux machine ++1. From a terminal session, copy the installer to a local folder such as **/tmp** on the server that you want to protect. Then run the following command: ++ ```bash + cd /tmp ; + tar -xvf Microsoft-ASR_UA_version_LinuxVersion_GA_date_release.tar.gz + ``` ++2. To install, use the following command: + ```bash + sudo ./install -q -r MS -v VmWare -c CSPrime + ``` ++ Once the installation is complete, copy the string that is generated alongside the parameter *Agent Config Input*. This string is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file). ++3. After successfully installing, register the source machine with the appliance using the following command: ++ ```bash + <InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q + ``` +#### Installation settings ++ Setting | Details + | + Syntax | `./install -q -r MS -v VmWare -c CSPrime` + `-r` | Mandatory. Installation parameter. Specifies that the Mobility service (MS) should be installed. + `-d` | Optional. Specifies the Mobility service installation location: `/usr/local/ASR`. + `-v` | Mandatory. Specifies the platform on which Mobility service is installed. <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs. + `-q` | Optional. Specifies whether to run the installer in silent mode. + `-c` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy). ++#### Registration settings ++ Setting | Details + | + Syntax | `<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q -D true` + `-S` | Mandatory. 
Full file path of the Mobility Service configuration file. Use any valid folder. + `-c` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy). + `-q` | Optional. Specifies whether to run the installer in silent mode. + `-D` | Optional. Specifies whether credential-less discovery will be performed or not. ++## Credential-less discovery in modernized architecture ++If you can't provide both the machine credentials and the vCenter server or vSphere ESXi host credentials, opt for credential-less discovery. With credential-less discovery, you install the Mobility service manually on the source machine and select the credential-less discovery check box during installation, so that no credentials are required when replication is enabled. ++![Screenshot showing credential-less-discovery-check-box.](./media/vmware-physical-mobility-service-overview-modernized/credential-less-discovery.png) ++## Generate Mobility Service configuration file ++ Use the following steps to generate the Mobility Service configuration file: ++ 1. Navigate to the appliance with which you want to register your source machine. Open the Microsoft Azure Appliance Configuration Manager and navigate to the section **Mobility service configuration details**. + 2. Paste the Machine Details string that you copied from the Mobility Service installer into the input field. + 3. Click **Download configuration file**. 
+++ ## Install the Mobility service using UI (Classic) >[!NOTE] During a push installation of the Mobility service, the following steps are perf >[!IMPORTANT] > Don't use the UI installation method if you're replicating an Azure Infrastructure as a Service (IaaS) VM from one Azure region to another. Use the [command prompt](#install-the-mobility-service-using-command-prompt-classic) installation. -1. Copy the installation file to the machine, and run it. +1. Open a command prompt and navigate to the folder where the installer file has been placed. Extract the installer: + ```cmd + Microsoft-ASR_UA*Windows*release.exe /q /x:"C:\Program Files (x86)\Microsoft Azure Site Recovery" + ``` +1. Run the following command to launch the installation wizard for the agent. + ```cmd + UnifiedAgentInstaller.exe /CSType CSLegacy + ``` 1. In **Installation Option**, select **Install mobility service**. 1. Choose the installation location and select **Install**. As a **prerequisite to update or protect Ubuntu 14.04 machines** from 9.42 versi 1. C:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository -## Install the Mobility service using UI (Modernized) -->[!NOTE] -> This section is applicable to Azure Site Recovery - Modernized. [Here are the installation instructions for Classic](#install-the-mobility-service-using-ui-classic). --### Prerequisites --Locate the installer files for the server's operating system using the following steps: -- On the appliance, go to the folder *E:\Software\Agents*.-- Copy the installer corresponding to the source machine's operating system and place it on your source machine in a local folder, such as *C:\Program Files (x86)\Microsoft Azure Site Recovery*.--**Use the following steps to install the mobility service:** --1. Open command prompt and navigate to the folder where the installer file has been placed. -- ```cmd - cd C:\Program Files (x86)\Microsoft Azure Site Recovery* - ``` --2. 
Run the below command to extract the installer file: -- ```cmd - .\Microsoft-ASR_UA*Windows*release.exe /q /x:"C:\Program Files (x86)\Microsoft Azure Site Recovery" - ``` --3. Run the following command to proceed with the installation. This will launch the installer UI: -- ```cmd - .\UnifiedAgentInstaller.exe /Platform vmware /Role MS /CSType CSPrime /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery" - ``` -- >[!NOTE] - >The install location mentioned in the UI is the same as what was passed in the command. --4. Click **Install**. -- This will start the installation for Mobility Service. -- Wait till the installation has been completed. Once done, you will reach the registration step, you can register the source machine with the appliance of your choice. -- ![Image showing Install UI option for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/mobility-service-install.png) -- ![Image showing Installation progress for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/installation-progress.png) --5. Copy the string present in the field **Machine Details**. -- This field includes information unique to the source machine. This information is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file). Learn more about [credential less discovery](#credential-less-discovery-in-modernized-architecture). -- ![Screenshot showing source machine string.](./media/vmware-physical-mobility-service-overview-modernized/source-machine-string.png) --6. Provide the path of **Mobility Service configuration file** in the Unified Agent configurator. -7. Click **Register**. -- This will successfully register your source machine with your appliance. --## Install the Mobility service using command prompt (Modernized) -->[!NOTE] -> This section is applicable to Azure Site Recovery - Modernized. 
[Here are the installation instructions for Classic](#install-the-mobility-service-using-command-prompt-classic). --### Windows machine -1. Open command prompt and navigate to the folder where the installer file has been placed. -- ```cmd - cd C:\Program Files (x86)\Microsoft Azure Site Recovery - ``` -2. Run the following command to extract the installer file: - ```cmd - .\Microsoft-ASR_UA*Windows*release.exe /q /x:C:\Program Files (x86)\Microsoft Azure Site Recovery - ``` -3. To proceed with the installation, run the following command: -- ```cmd -- .\UnifiedAgentInstaller.exe /Platform vmware /Silent /Role MS /CSType CSPrime /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery" - ``` - Once the installation is complete, copy the string that is generated alongside the parameter *Agent Config Input*. This string is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file). -- ![sample string for downloading configuration flle ](./media/vmware-physical-mobility-service-overview-modernized/configuration-string.png) --4. After successfully installing, register the source machine with the above appliance using the following command: -- ```cmd - "C:\Program Files (x86)\Microsoft Azure Site Recovery\agent\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime /CredentialLessDiscovery true - ``` --#### Installation settings --Setting | Details - | -Syntax | `.\UnifiedAgentInstaller.exe /Platform vmware /Role MS /CSType CSPrime /InstallLocation <Install Location>` -`/Role` | Mandatory installation parameter. Specifies whether the Mobility service (MS) will be installed. -`/InstallLocation`| Optional. Specifies the Mobility service installation location (any folder). -`/Platform` | Mandatory. Specifies the platform on which the Mobility service is installed: <br/> **VMware** for VMware VMs/physical servers. 
<br/> **Azure** for Azure VMs.<br/><br/> If you're treating Azure VMs as physical machines, specify **VMware**. -`/Silent`| Optional. Specifies whether to run the installer in silent mode. -`/CSType`| Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy) --#### Registration settings --Setting | Details - | -Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime /CredentialLessDiscovery true` -`/SourceConfigFilePath` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder. -`/CSType` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy). -`/CredentialLessDiscovery` | Optional. Specifies whether credential-less discovery will be performed or not. ---### Linux machine --1. From a terminal session, copy the installer to a local folder such as **/tmp** on the server that you want to protect. Then run the below command: -- ```bash - cd /tmp ; - tar -xvf Microsoft-ASR_UA_version_LinuxVersion_GA_date_release.tar.gz - ``` --2. To install, use the below command: - ```bash - sudo ./install -q -r MS -v VmWare -c CSPrime - ``` -- Once the installation is complete, copy the string that is generated alongside the parameter *Agent Config Input*. This string is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file). --3. After successfully installing, register the source machine with the above appliance using the following command: -- ```bash - <InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q - ``` -#### Installation settings -- Setting | Details - | - Syntax | `./install -q -r MS -v VmWare -c CSPrime` - `-r` | Mandatory. Installation parameter. Specifies whether the Mobility service (MS) should be installed. - `-d` | Optional. Specifies the Mobility service installation location: `/usr/local/ASR`. - `-v` | Mandatory. 
Specifies the platform on which Mobility service is installed. <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs. - `-q` | Optional. Specifies whether to run the installer in silent mode. - `-c` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy). --#### Registration settings -- Setting | Details - | - Syntax | `<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q -D true` - `-S` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder. - `-c` | Mandatory. Used to define modernized and legacy architecture. (CSPrime or CSLegacy). - `-q` | Optional. Specifies whether to run the installer in silent mode. - `-D` | Optional. Specifies whether credential-less discovery will be performed or not. --## Credential-less discovery in modernized architecture --When providing both the machine credentials and the vCenter server or vSphere ESXi host credentials is not possible, then you should opt for credential-less discovery. When performing credential-less discovery, mobility service is installed manually on the source machine and during the installation, the check box for credential-less discovery should be set to true, so that when replication is enabled, no credentials will be required. --![Screenshot showing credential-less-discovery-check-box.](./media/vmware-physical-mobility-service-overview-modernized/credential-less-discovery.png) --## Generate Mobility Service configuration file -- Use the following steps to generate mobility service configuration file: -- 1. Navigate to the appliance with which you want to register your source machine. Open the Microsoft Azure Appliance Configuration Manager and navigate to the section **Mobility service configuration details**. - 2. Paste the Machine Details string that you copied from Mobility Service and paste it in the input field here. - 3. Click **Download configuration file**. 
-- ![Image showing download configuration file option for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/download-configuration-file.png) --This downloads the Mobility Service configuration file. Copy the downloaded file to a local folder in your source machine. You can place it in the same folder as the Mobility Service installer. --See information about [upgrading the mobility services](upgrade-mobility-service-modernized.md). ## Next steps |
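If you automate agent rollout across many source machines, the modernized (CSPrime) install and registration commands documented above can be assembled programmatically before being executed. This is only a sketch that builds the documented command strings; the `build_install_cmd` and `build_register_cmd` helper names are hypothetical, and the paths are the example locations from the article:

```python
def build_install_cmd(install_location, silent=True):
    """Assemble the modernized (CSPrime) Windows install command line."""
    parts = [r".\UnifiedAgentInstaller.exe", "/Platform", "vmware"]
    if silent:
        parts.append("/Silent")
    parts += ["/Role", "MS", "/CSType", "CSPrime",
              # Quote the location: the default path contains spaces.
              "/InstallLocation", f'"{install_location}"']
    return " ".join(parts)

def build_register_cmd(install_location, config_path, credential_less=True):
    """Assemble the UnifiedAgentConfigurator.exe registration command line."""
    cmd = (f'"{install_location}\\agent\\UnifiedAgentConfigurator.exe" '
           f'/SourceConfigFilePath "{config_path}" /CSType CSPrime')
    if credential_less:
        cmd += " /CredentialLessDiscovery true"
    return cmd
```

With the default install path and `config.json`, each helper reproduces the exact command line shown in the installation and registration settings tables.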
storage | Blob Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md | Each inventory run for a rule generates the following files: ## Pricing and billing -Pricing for inventory is based on the number of blobs and containers that are scanned during the billing period. As an example, suppose an account contains one million blobs, and blob inventory is set to run once per week. After four weeks, four million blob entries will have been scanned. +Pricing for inventory is based on the number of blobs and containers that are scanned during the billing period. The [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) page shows the price per one million objects scanned. For example, if the price to scan one million objects is $0.003, your account contains three million objects, and you produce four reports in a month, then your bill would be 4 * 3 * $0.003 = $0.036. After inventory files are created, additional standard data storage and operations charges will be incurred for storing, reading, and writing the inventory-generated files in the account. |
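The worked pricing example above reduces to one formula: cost = runs × (objects ÷ 1,000,000) × price per million objects scanned. A quick sketch of that calculation follows; the $0.003 rate is the article's illustrative figure, not a quoted price, so check the pricing page for actual rates:

```python
def inventory_scan_cost(objects_in_account, runs_per_month, price_per_million_scanned):
    """Blob inventory billing: each run scans every object in the account,
    so the monthly charge scales with objects * runs."""
    millions_scanned = objects_in_account / 1_000_000 * runs_per_month
    return millions_scanned * price_per_million_scanned

# Article example: 3 million objects, 4 runs per month, $0.003 per million scanned.
monthly_cost = inventory_scan_cost(3_000_000, 4, 0.003)  # 4 * 3 * $0.003
```

Storage and read/write operations on the generated inventory files are billed separately at standard rates and aren't captured by this formula.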
storage | Storage Blob Static Website | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website.md | |
storage | Container Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md | description: An overview of Azure Container Storage Preview, a service built nat Previously updated : 07/03/2023 Last updated : 07/25/2023 -To sign up for Azure Container Storage Preview, complete the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp). To get started using Azure Container Storage, see [Install Azure Container Storage for use with AKS](container-storage-aks-quickstart.md). +To sign up for Azure Container Storage Preview, complete the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp). To get started using Azure Container Storage, see [Install Azure Container Storage for use with AKS](container-storage-aks-quickstart.md) or watch the video. ++ :::column::: + <iframe width="560" height="315" src="https://www.youtube.com/embed/I_2nCQ1FKTU" title="Get started with Azure Container Storage" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> + :::column-end::: + :::column::: + This video provides an introduction to Azure Container Storage, an end-to-end storage management and orchestration service for stateful applications. See how simple it is to create and manage volumes for production-scale stateful container applications. Learn how to optimize the performance of stateful workloads on Azure Kubernetes Service (AKS) to effectively scale across storage services while providing a cost-effective container-native experience. + :::column-end::: ## Supported storage types |
storage | Azure Files Case Study | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/azure-files-case-study.md | description: Case studies describing how Microsoft customers use Azure Files and Previously updated : 03/10/2023 Last updated : 07/24/2023 +## Azure Files AI model training use case ++To interpret and contextualize seafloor health, a team of marine environmental scientists and analysts stored an extensive collection of images in Azure Files to use for building and training a crucial AI model. Now, the team seamlessly updates seafloor data and makes it accessible to clients in near real-time. [Check out the full story here](https://customers.microsoft.com/story/1653678788412771617-inspireenvironmental-sustainability-azure-en-us). + ## Azure Files NFS for SAP use case A global insurance company runs one of the largest SAP deployments in Europe, which it historically managed on its own private cloud. As the company continued to grow, its on-premises hardware resources became increasingly scarce. To improve scalability and performance, the company moved its SAP environment to Azure, using Azure Virtual Machines and Azure Disk Storage. It used Azure Files to provide NFS file storage for its Linux-based SAP servers, eliminating the burden and costs of managing on-premises NFS file servers. With Azure Files, the company is also able to easily operate business-critical SAP transport directories. [Check out the full story here](https://customers.microsoft.com/story/1557513300029587275-munichre-insurance-sap-on-azure). |
storage | Storage Files Smb Multichannel Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-smb-multichannel-performance.md | -SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share. Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account kind). On the service side, SMB Multichannel is disabled by default in Azure Files, but there's no additional cost for enabling it. +SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share. Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account kind) for Windows clients. On the service side, SMB Multichannel is disabled by default in Azure Files, but there's no additional cost for enabling it. ## Applies to | File share type | SMB | NFS | This feature provides greater performance benefits to multi-threaded application ## Limitations SMB Multichannel for Azure file shares currently has the following restrictions: - Only supported on Windows clients that are using SMB 3.1.1. Ensure SMB client operating systems are patched to recommended levels.+- Not currently supported or recommended for Linux clients. - Maximum number of channels is four, for details see [here](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json#cause-4-number-of-smb-channels-exceeds-four). ## Configuration Get-SmbClientConfiguration | Select-Object -Property EnableMultichannel On your Azure storage account, you'll need to enable SMB Multichannel. See [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel). ### Disable SMB Multichannel-In most scenarios, particularly multi-threaded workloads, clients should see improved performance with SMB Multichannel. 
However, some specific scenarios such as single-threaded workloads or for testing purposes, you might want to disable SMB Multichannel. See [Performance comparison](#performance-comparison) for more details. +In most scenarios, particularly multi-threaded workloads, clients should see improved performance with SMB Multichannel. However, for some specific scenarios such as single-threaded workloads or for testing purposes, you might want to disable SMB Multichannel. See [Performance comparison](#performance-comparison) for more details. ## Verify SMB Multichannel is configured correctly In most scenarios, particularly multi-threaded workloads, clients should see imp A copy tool such as robocopy /MT, or a performance tool such as Diskspd that reads/writes files, can generate load. 1. Open PowerShell as an admin and use the following command: `Get-SmbMultichannelConnection | fl`-1. Look for **MaxChannels** and **CurrentChannels** properties +1. Look for **MaxChannels** and **CurrentChannels** properties. :::image type="content" source="media/storage-files-smb-multichannel-performance/files-smb-multi-channel-connection.PNG" alt-text="Screenshot of Get-SMBMultichannelConnection results." lightbox="media/storage-files-smb-multichannel-performance/files-smb-multi-channel-connection.PNG"::: |
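If you want to check **MaxChannels** and **CurrentChannels** from a script rather than by eye, the `Format-List` (`fl`) output of `Get-SmbMultichannelConnection` is a simple `Property : Value` listing that is easy to parse. A minimal sketch follows; the sample text is illustrative only (real output has many more properties), and you'd feed in text captured from PowerShell:

```python
def parse_fl_output(text):
    """Parse PowerShell Format-List ('fl') output into a property -> value dict."""
    props = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split on the first colon only
            props[key.strip()] = value.strip()
    return props

# Illustrative snippet of 'Get-SmbMultichannelConnection | fl' output.
sample = """
ServerName      : myaccount.file.core.windows.net
CurrentChannels : 4
MaxChannels     : 4
"""
props = parse_fl_output(sample)
# Azure Files caps SMB Multichannel at four channels per session.
assert int(props["CurrentChannels"]) <= 4
```

Splitting on the first colon only matters because property values (for example, UNC paths) may themselves contain colons.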
stream-analytics | Run Job In Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/run-job-in-virtual-network.md | Virtual network (VNet) support enables you to lock down access to Azure Stream A - [Service tags](../virtual-network/service-tags-overview.md), which allow or deny traffic to Azure Stream Analytics. ## Availability -Currently, this capability is only available in select regions: **West US**, **Central Canada**, **East US**, **Central US**, **West Europe**, and **North Europe**. +Currently, this capability is only available in select regions: **West US**, **Central Canada**, **East US**, **East US 2**, **Central US**, **West Europe**, and **North Europe**. If you're interested in enabling VNet integration in your region, **fill out this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRzFwASREnlZFvs9gztPNuTdUMU5INk5VT05ETkRBTTdSMk9BQ0w3OEZDQi4u)**. ## Requirements for VNet integration support |
synapse-analytics | Develop Storage Files Storage Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md | Title: Control storage account access for serverless SQL pool description: Describes how serverless SQL pool accesses Azure Storage and how you can control storage access for serverless SQL pool in Azure Synapse Analytics.- --- Previously updated : 06/11/2020 -+ Last updated : 07/24/2023+++ # Control storage account access for serverless SQL pool in Azure Synapse Analytics A serverless SQL pool query reads files directly from Azure Storage. Permissions to access the files on Azure storage are controlled at two levels:-- **Storage level** - User should have permission to access underlying storage files. Your storage administrator should allow Azure AD principal to read/write files, or generate SAS key that will be used to access storage. +- **Storage level** - User should have permission to access underlying storage files. Your storage administrator should allow Azure AD principal to read/write files, or generate shared access signature (SAS) key that will be used to access storage. - **SQL service level** - User should have granted permission to read data using [external table](develop-tables-external-tables.md) or to execute the `OPENROWSET` function. Read more about [the required permissions in this section](develop-storage-files-overview.md#permissions). This article describes the types of credentials you can use and how credential lookup is enacted for SQL and Azure AD users. This article describes the types of credentials you can use and how credential l ## Storage permissions A serverless SQL pool in Synapse Analytics workspace can read the content of files stored in Azure Data Lake storage. You need to configure permissions on storage to enable a user who executes a SQL query to read the files. 
There are three methods for enabling the access to the files:-- **[Role based access control (RBAC)](../../role-based-access-control/overview.md)** enables you to assign a role to some Azure AD user in the tenant where your storage is placed. A reader must have `Storage Blob Data Reader`, `Storage Blob Data Contributor`, or `Storage Blob Data Owner` RBAC role on storage account. A user who writes data in the Azure storage must have `Storage Blob Data Contributor` or `Storage Blob Data Owner` role. Note that `Storage Owner` role does not imply that a user is also `Storage Data Owner`.+- **[Role based access control (RBAC)](../../role-based-access-control/overview.md)** enables you to assign a role to some Azure AD user in the tenant where your storage is placed. A reader must be a member of the Storage Blob Data Reader, Storage Blob Data Contributor, or Storage Blob Data Owner role on the storage account. A user who writes data in the Azure storage must be a member of the Storage Blob Data Contributor or Storage Blob Data Owner role. The Storage Owner role does not imply that a user is also Storage Data Owner. - **Access Control Lists (ACL)** enable you to define a fine grained [Read(R), Write(W), and Execute(X) permissions](../../storage/blobs/data-lake-storage-access-control.md#levels-of-permission) on the files and directories in Azure storage. ACL can be assigned to Azure AD users. If readers want to read a file on a path in Azure Storage, they must have Execute(X) ACL on every folder in the file path, and Read(R) ACL on the file. [Learn more how to set ACL permissions in storage layer](../../storage/blobs/data-lake-storage-access-control.md#how-to-set-acls).-- **Shared access signature (SAS)** enables a reader to access the files on the Azure Data Lake storage using the time-limited token. The reader doesn't even need to be authenticated as Azure AD user. SAS token contains the permissions granted to the reader as well as the period when the token is valid. 
SAS token is good choice for time-constrained access to any user that doesn't even need to be in the same Azure AD tenant. SAS token can be defined on the storage account or on specific directories. Learn more about [granting limited access to Azure Storage resources using shared access signatures](../../storage/common/storage-sas-overview.md).+- **Shared access signature (SAS)** enables a reader to access the files on the Azure Data Lake storage using the time-limited token. The reader doesn't even need to be authenticated as Azure AD user. SAS token contains the permissions granted to the reader as well as the period when the token is valid. SAS token is good choice for time-constrained access to any user that doesn't even need to be in the same Azure AD tenant. SAS token can be defined on the storage account or on specific directories. Learn more about [granting limited access to Azure Storage resources using shared access signatures](../../storage/common/storage-sas-overview.md). As an alternative, you can make your files publicly available by allowing anonymous access. This approach should NOT be used if you have non-public data. ## Supported storage authorization types -A user that has logged into a serverless SQL pool must be authorized to access and query the files in Azure Storage if the files aren't publicly available. You can use four authorization types to access non-public storage - [User Identity](?tabs=user-identity), [Shared access signature](?tabs=shared-access-signature), [Service Principal](?tab/service-principal) and [Managed Identity](?tabs=managed-identity). +A user that has logged into a serverless SQL pool must be authorized to access and query the files in Azure Storage if the files aren't publicly available. 
You can use four authorization types to access non-public storage: [user identity](?tabs=user-identity), [shared access signature](?tabs=shared-access-signature), [service principal](?tabs=service-principal), and [managed identity](?tabs=managed-identity). > [!NOTE] > **Azure AD pass-through** is the default behavior when you create a workspace. -### [User Identity](#tab/user-identity) -**User Identity**, also known as "Azure AD pass-through", is an authorization type where the identity of the Azure AD user that logged into -serverless SQL pool is used to authorize data access. Before accessing the data, the Azure Storage administrator must grant permissions to the Azure AD user. As indicated in the table below, it's not supported for the SQL user type. +**User identity**, also known as "Azure AD pass-through", is an authorization type where the identity of the Azure AD user that logged into serverless SQL pool is used to authorize data access. Before accessing the data, the Azure Storage administrator must grant permissions to the Azure AD user. As indicated in the [Supported authorization types for database users table](#supported-authorization-types-for-databases-users), it's not supported for the SQL user type. > [!IMPORTANT]-> AAD authentication token might be cached by the client applications. For example PowerBI caches AAD token and reuses the same token for an hour. The long running queries might fail if the token expires in the middle of the query execution. 
If you are experiencing query failures caused by the AAD access token that expires in the middle of the query, consider switching to [Service Principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types), [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types) or [Shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types). +> An Azure Active Directory authentication token might be cached by the client applications. For example, Power BI caches Azure Active Directory tokens and reuses the same token for an hour. Long-running queries might fail if the token expires in the middle of the query execution. If you are experiencing query failures caused by the Azure Active Directory access token that expires in the middle of the query, consider switching to a [service principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types), [managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types) or [shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types). -You need to have a Storage Blob Data Owner/Contributor/Reader role to use your identity to access the data. As an alternative, you can specify fine-grained ACL rules to access files and folders. Even if you are an Owner of a Storage Account, you still need to add yourself into one of the Storage Blob Data roles. +You need to be a member of the Storage Blob Data Owner, Storage Blob Data Contributor, or Storage Blob Data Reader role to use your identity to access the data. As an alternative, you can specify fine-grained ACL rules to access files and folders. 
Even if you are an Owner of a Storage Account, you still need to add yourself into one of the Storage Blob Data roles. To learn more about access control in Azure Data Lake Store Gen2, review the [Access control in Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-access-control.md) article. ### [Shared access signature](#tab/shared-access-signature) -**Shared access signature (SAS)** provides delegated access to resources in a storage account. With SAS, a customer can grant clients access to resources in a storage account without sharing account keys. SAS gives you granular control -over the type of access you grant to clients who have an SAS, including validity interval, granted permissions, acceptable IP address range, and the acceptable protocol (https/http). +**Shared access signature (SAS)** provides delegated access to resources in a storage account. With SAS, a customer can grant clients access to resources in a storage account without sharing account keys. SAS gives you granular control over the type of access you grant to clients who have an SAS, including validity interval, granted permissions, acceptable IP address range, and the acceptable protocol (https/http). -You can get an SAS token by navigating to the **Azure portal -> Storage Account -> Shared access signature -> Configure permissions -> Generate SAS and connection string.** +You can get an SAS token by navigating to the **Azure portal -> Storage Account -> Shared access signature -> Configure permissions -> Generate SAS and connection string**. > [!IMPORTANT]-> When an SAS token is generated, it includes a question mark ('?') at the beginning of the token. To use the token in serverless SQL pool, you must remove the question mark ('?') when creating a credential. For example: +> When a shared access signature (SAS) token is generated, it includes a question mark (`?`) at the beginning of the token. 
To use the token in serverless SQL pool, you must remove the question mark (`?`) when creating a credential. For example: >-> SAS token: ?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-18T20:42:12Z&st=2019-04-18T12:42:12Z&spr=https&sig=lQHczNvrk1KoYLCpFdSsMANd0ef9BrIPBNJ3VYEIq78%3D +> SAS token: `?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-18T20:42:12Z&st=2019-04-18T12:42:12Z&spr=https&sig=lQHcEIq78%3D` To enable access using an SAS token, you need to create a database-scoped or server-scoped credential - > [!IMPORTANT]-> You cannnot access private storage accounts with the SAS token. Consider switching to [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types) or [Azure AD pass-through](develop-storage-files-storage-access-control.md?tabs=user-identity#supported-storage-authorization-types) authentication to access protected storage. +> You cannot access private storage accounts with the SAS token. Consider switching to [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types) or [Azure AD pass-through](develop-storage-files-storage-access-control.md?tabs=user-identity#supported-storage-authorization-types) authentication to access protected storage. +### [Service principal](#tab/service-principal) -### [Service Principal](#tab/service-principal) -**Service Principal** is the local representation of a global application object in a particular Azure AD tenant. This authentication method is appropriate in case when storage access is to be authorized for a user app, service or automation tool. +A **service principal** is the local representation of a global application object in a particular Azure Active Directory tenant. This authentication method is appropriate in cases where storage access is to be authorized for a user application, service, or automation tool. 
For more information on service principals in Azure Active Directory, see [Application and service principal objects in Azure Active Directory](/azure/active-directory/develop/app-objects-and-service-principals). -The application needs to be registered in Azure Active Directory. For registration process you can consult [Quickstart: Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md). Once the application is registered, its Service Principal can be used for authorization. +The application needs to be registered in Azure Active Directory. For more information on the registration process, follow [Quickstart: Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md). Once the application is registered, its service principal can be used for authorization. -Service Principal should be assigned Storage Blob Data Owner/Contributor/Reader role in order for the application to access the data. Even if Service Principal is Owner of a Storage Account, it still needs to be granted an appropriate Storage Blob Data role. As an alternative way of granting access to storage files and folders, fine-grained ACL rules for Service Principal can be defined. -To learn more about access control in Azure Data Lake Store Gen2, review the [Access control in Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-access-control.md) article. +The service principal should be assigned the Storage Blob Data Owner, Storage Blob Data Contributor, or Storage Blob Data Reader role in order for the application to access the data. Even if the service principal is the Owner of a storage account, it still needs to be granted an appropriate Storage Blob Data role. As an alternative way of granting access to storage files and folders, fine-grained ACL rules for the service principal can be defined. 
-### [Managed Identity](#tab/managed-identity) +To learn more about access control in Azure Data Lake Store Gen2, review [Access control in Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-access-control.md). ++### [Managed service identity](#tab/managed-identity) -**Managed Identity** is also known as MSI. It's a feature of Azure Active Directory (Azure AD) that provides Azure services for serverless SQL pool. Also, it deploys an automatically managed identity in Azure AD. This identity can be used to authorize the request for data access in Azure Storage. +**Managed service identity** or managed identity is also known as an MSI. An MSI is a feature of Azure Active Directory that provides an automatically managed identity to an Azure service, in this case, your serverless SQL pool. The MSI is created automatically in Azure AD. This identity can be used to authorize the request for data access in Azure Storage. -Before accessing the data, the Azure Storage administrator must grant permissions to Managed Identity for accessing the data. Granting permissions to Managed Identity is done the same way as granting permission to any other Azure AD user. +Before accessing the data, the Azure Storage administrator must grant permissions to the managed service identity for accessing data. Granting permissions to MSI is done the same way as granting permission to any other Azure AD user. ### [Anonymous access](#tab/public-access) You can access publicly available files placed on Azure storage accounts that [a #### Cross-tenant scenarios In cases when Azure Storage is in a different tenant from the Synapse serverless SQL pool, authorization via **Service Principal** is the recommended method. 
**SAS** authorization is also possible, while **Managed Identity** is not supported.++| Authorization Type | *Firewall protected storage* | *non-Firewall protected storage* | +| -- | -- | -- | +| [SAS](?tabs=shared-access-signature#supported-storage-authorization-types)| Supported | Supported| +| [Service Principal](?tabs=service-principal#supported-storage-authorization-types)| Not Supported | Supported| + > [!NOTE]-> In case when Azure Storage is protected with a firewall **Service Principal** will not be supported. +> If Azure Storage is protected by an [Azure Storage firewall](/azure/storage/common/storage-network-security), **Service Principal** will not be supported. ### Supported authorization types for databases users -In the table below you can find the available authorization types for different login methods into Synapse Serverless SQL endpoint: +The following table provides available Azure Storage authorization types for different sign-in methods into an Azure Synapse Analytics serverless SQL endpoint: -| Authorization type | *SQL user* | *Azure AD user* | *Service Principal* | +| Authorization type | *SQL user* | *Azure AD user* | *Service principal* | | - | - | -- | -- | | [User Identity](?tabs=user-identity#supported-storage-authorization-types) | Not Supported | Supported | Supported| | [SAS](?tabs=shared-access-signature#supported-storage-authorization-types) | Supported | Supported | Supported|-| [Service Principal](?tabs=service-principal#supported-storage-authorization-types) | Supported | Supported | Supported| +| [Service principal](?tabs=service-principal#supported-storage-authorization-types) | Supported | Supported | Supported| | [Managed Identity](?tabs=managed-identity#supported-storage-authorization-types) | Supported | Supported | Supported| ### Supported storages and authorization types -You can use the following combinations of authorization and Azure Storage types: +You can use the following combinations of authorization types and 
Azure Storage types: | Authorization type | Blob Storage | ADLS Gen1 | ADLS Gen2 | | - | | -- | -- | | [SAS](?tabs=shared-access-signature#supported-storage-authorization-types) | Supported | Not supported | Supported |-| [Service Principal](?tabs=managed-identity#supported-storage-authorization-types) | Supported | Supported | Supported | +| [Service principal](?tabs=managed-identity#supported-storage-authorization-types) | Supported | Supported | Supported | | [Managed Identity](?tabs=managed-identity#supported-storage-authorization-types) | Supported | Supported | Supported | | [User Identity](?tabs=user-identity#supported-storage-authorization-types) | Supported | Supported | Supported | -## Firewall protected storage +### Cross-tenant scenarios ++In cases when Azure Storage is in a different tenant from the Azure Synapse Analytics serverless SQL pool, authorization via **service principal** is the recommended method. **Shared access signature** authorization is also possible. **Managed service identity** is not supported. -You can configure storage accounts to allow access to specific serverless SQL pool by creating a [resource instance rule](../../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-from-azure-resource-instances). -When accessing storage that is protected with the firewall, you can use **User Identity** or **Managed Identity**. +| Authorization Type | *Firewall protected storage* | *non-Firewall protected storage* | +| -- | -- | -- | +| [SAS](?tabs=shared-access-signature#supported-storage-authorization-types)| Supported | Supported| +| [Service principal](?tabs=service-principal#supported-storage-authorization-types)| Not Supported | Supported| > [!NOTE]-> The firewall feature on Storage is in public preview and is available in all public cloud regions. 
+> If Azure Storage is protected by an [Azure Storage firewall](/azure/storage/common/storage-network-security) and is in another tenant, **service principal** will not be supported. Instead, use a shared access signature (SAS). +## Firewall protected storage ++You can configure storage accounts to allow access to a specific serverless SQL pool by creating a [resource instance rule](/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-from-azure-resource-instances). When accessing storage that is protected with the firewall, use **User Identity** or **Managed Identity**. ++> [!NOTE] +> The firewall feature on Azure Storage is in public preview and is available in all public cloud regions. -In the table below you can find the available authorization types for different login methods into Synapse Serverless SQL endpoint: +The following table provides available firewall-protected Azure Storage authorization types for different sign-in methods into an Azure Synapse Analytics serverless SQL endpoint: -| Authorization type | *SQL user* | *Azure AD user* | *Service Principal* | +| Authorization type | *SQL user* | *Azure AD user* | *Service principal* | | - | - | -- | -- | | [User Identity](?tabs=user-identity#supported-storage-authorization-types) | Not Supported | Supported | Supported| | [SAS](?tabs=shared-access-signature#supported-storage-authorization-types) | Not Supported | Not Supported | Not Supported|-| [Service Principal](?tabs=service-principal#supported-storage-authorization-types) | Not Supported | Not Supported | Not Supported| +| [Service principal](?tabs=service-principal#supported-storage-authorization-types) | Not Supported | Not Supported | Not Supported| | [Managed Identity](?tabs=managed-identity#supported-storage-authorization-types) | Supported | Supported | Supported| -### [User Identity](#tab/user-identity) +### [User identity](#tab/user-identity) ++To access storage that is protected with the firewall via a user 
identity, you can use the Azure portal or the Az.Storage PowerShell module. -To access storage that is protected with the firewall via User Identity, you can use Azure portal UI or PowerShell module Az.Storage. -### Configuration via Azure portal +### Azure Storage firewall configuration via Azure portal 1. Search for your Storage Account in Azure portal.-1. Go to Networking under section Settings. -1. In Section "Resource instances" add an exception for your Synapse workspace. -1. Select Microsoft.Synapse/workspaces as a Resource type. -1. Select name of your workspace as an Instance name. -1. Click Save. +1. In the main navigation menu, go to **Networking** under **Settings**. +1. In the section **Resource instances**, add an exception for your Azure Synapse workspace. +1. Select `Microsoft.Synapse/workspaces` as a **Resource type**. +1. Select the name of your workspace as an **Instance name**. +1. Select **Save**. ++### Azure Storage firewall configuration via PowerShell -### Configuration via PowerShell +Follow these steps to configure your storage account and add an exception for the Azure Synapse workspace. -Follow these steps to configure your storage account firewall and add an exception for Synapse workspace. +1. Open PowerShell or [install PowerShell](/powershell/scripting/install/installing-powershell-core-on-windows). +1. Install the latest versions of the Az.Storage module and Az.Synapse module, for example in the following script: -1. Open PowerShell or [install PowerShell](/powershell/scripting/install/installing-powershell-core-on-windows) -2. Install the Az.Storage 3.4.0 module and Az.Synapse 0.7.0: ```powershell Install-Module -Name Az.Storage -RequiredVersion 3.4.0 Install-Module -Name Az.Synapse -RequiredVersion 0.7.0 ```+ > [!IMPORTANT]- > Make sure that you use **version 3.4.0**. You can check your Az.Storage version by running this command: + > Make sure that you use at least **version 3.4.0**. 
You can check your Az.Storage version by running this command: + > > ```powershell - > Get-Module -ListAvailable -Name Az.Storage | select Version + > Get-Module -ListAvailable -Name Az.Storage | Select Version > ```- > -3. Connect to your Azure Tenant: +1. Connect to your Azure Tenant: + ```powershell Connect-AzAccount ```-4. Define variables in PowerShell: - - Resource group name - you can find this in Azure portal in overview of Storage account. - - Account Name - name of storage account that is protected by firewall rules. - - Tenant ID - you can find this in Azure portal in Azure Active Directory in tenant information. - - Workspace Name - Name of the Synapse workspace. +1. Define variables in PowerShell: ++ - Resource group name - you can find this in Azure portal in the **Overview** of your storage account. + - Account Name - name of the storage account that is protected by firewall rules. + - Tenant ID - you can find this in [Azure portal in Azure Active Directory](/azure/active-directory/fundamentals/how-to-find-tenant), under **Properties**, in **Tenant properties**. + - Workspace Name - Name of the Azure Synapse workspace. + ```powershell $resourceGroupName = "<resource group name>" $accountName = "<storage account name>" $tenantId = "<tenant id>"- $workspaceName = "<synapse workspace name>" + $workspaceName = "<Azure Synapse workspace name>" $workspace = Get-AzSynapseWorkspace -Name $workspaceName $resourceId = $workspace.Id $index = $resourceId.IndexOf("/resourceGroups/", 0) # Replace G with g - /resourceGroups/ to /resourcegroups/- $resourceId = $resourceId.Substring(0,$index) + "/resourcegroups/" + $resourceId.Substring($index + "/resourceGroups/".Length) + $resourceId = $resourceId.Substring(0,$index) + "/resourcegroups/" ` + + $resourceId.Substring($index + "/resourceGroups/".Length) + $resourceId ```+ > [!IMPORTANT]- > Make sure that resource id matches this template in the print of the resourceId variable. 
- > - > It's important to write **resourcegroups** in lower case. - > Example of one resource id: - > ``` - > /subscriptions/{subscription-id}/resourcegroups/{resource-group}/providers/Microsoft.Synapse/workspaces/{name-of-workspace} - > ``` - > -5. Add Storage Network rule: + > The value of the `$resourceid` returned by the PowerShell script should match this template: + > `/subscriptions/{subscription-id}/resourcegroups/{resource-group}/providers/Microsoft.Synapse/workspaces/{name-of-workspace}` + > It's important to write **resourcegroups** in lower case. ++1. Add an Azure storage account network rule: + ```powershell- Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $accountName -TenantId $tenantId -ResourceId $resourceId + $parameters = @{ + ResourceGroupName = $resourceGroupName + Name = $accountName + TenantId = $tenantId + ResourceId = $resourceId + } + + Add-AzStorageAccountNetworkRule @parameters ```-6. Verify that rule was applied in your storage account: ++1. Verify that storage account network rule was applied in your storage account firewall. The following PowerShell script compares the `$resourceid` variable from previous steps to the output of the storage account network rule. + ```powershell- $rule = Get-AzStorageAccountNetworkRuleSet -ResourceGroupName $resourceGroupName -Name $accountName + $parameters = @{ + ResourceGroupName = $resourceGroupName + Name = $accountName + } ++ $rule = Get-AzStorageAccountNetworkRuleSet @parameters $rule.ResourceAccessRules | ForEach-Object { if ($_.ResourceId -cmatch "\/subscriptions\/(\w\-*)+\/resourcegroups\/(.)+") { Write-Host "Storage account network rule is successfully configured." -ForegroundColor Green Follow these steps to configure your storage account firewall and add an excepti Shared access signatures cannot be used to access firewall-protected storage. 
-### [Service Principal](#tab/service-principal) +### [Service principal](#tab/service-principal) -Service Principal cannot be used to access firewall-protected storage. Use Managed Identity instead. +Service principal cannot be used to access firewall-protected storage. Use a managed service identity instead. -### [Managed Identity](#tab/managed-identity) +### [Managed service identity](#tab/managed-identity) -You need to [Allow trusted Microsoft services... setting](../../storage/common/storage-network-security.md#trusted-microsoft-services) and explicitly [assign an Azure role](../../storage/blobs/authorize-access-azure-active-directory.md#assign-azure-roles-for-access-rights) to the [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for that resource instance. -In this case, the scope of access for the instance corresponds to the Azure role assigned to the managed identity. +You need to enable the [Allow trusted Microsoft services setting](/azure/storage/common/storage-network-security#trusted-microsoft-services) and [assign an Azure role](../../storage/blobs/authorize-access-azure-active-directory.md#assign-azure-roles-for-access-rights) to the [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for that resource instance. ++In this case, the scope of access for the resource instance corresponds to the Azure role assigned to the managed identity. ### [Anonymous access](#tab/public-access) You cannot access firewall-protected storage using anonymous access. ## Credentials -To query a file located in Azure Storage, your serverless SQL pool end point needs a credential that contains the authentication information. Two types of credentials are used: -- Server-level CREDENTIAL is used for ad-hoc queries executed using `OPENROWSET` function. Credential name must match the storage URL.-- DATABASE SCOPED CREDENTIAL is used for external tables. 
External table references `DATA SOURCE` with the credential that should be used to access storage.+To query a file located in Azure Storage, your serverless SQL pool endpoint needs a credential that contains the authentication information. Two types of credentials are used: -To allow a user to create or drop a server-level credential, admin can GRANT ALTER ANY CREDENTIAL permission to the user: +- A server-level credential is used for ad-hoc queries executed using `OPENROWSET` function. The credential *name* must match the storage URL. +- A database-scoped credential is used for external tables. External table references `DATA SOURCE` with the credential that should be used to access storage. -```sql -GRANT ALTER ANY CREDENTIAL TO [user_name]; -``` -To allow a user to create or drop a database scoped credential, admin can GRANT CONTROL permission on the database to the user: +### Grant permissions to manage credentials -```sql -GRANT CONTROL ON DATABASE::[database_name] TO [user_name]; -``` +To grant the ability to manage credentials: ++- To allow a user to create or drop a server-level credential, an administrator must grant the `ALTER ANY CREDENTIAL` permission to the user. For example: + ```sql + GRANT ALTER ANY CREDENTIAL TO [user_name]; + ``` + +- To allow a user to create or drop a database scoped credential, an administrator must grant the `CONTROL` permission on the database to the user. For example: -Database users who access external storage must have permission to use credentials. + ```sql + GRANT CONTROL ON DATABASE::[database_name] TO [user_name]; + ``` ### Grant permissions to use credential -To use the credential, a user must have `REFERENCES` permission on a specific credential. +Database users who access external storage must have permission to use credentials. To use the credential, a user must have the `REFERENCES` permission on a specific credential. 
-To grant a `REFERENCES` permission ON a server-level credential for a specific_user, execute: +To grant the `REFERENCES` permission on a server-level credential for a user, use the following T-SQL query: ```sql-GRANT REFERENCES ON CREDENTIAL::[server-level_credential] TO [specific_user]; +GRANT REFERENCES ON CREDENTIAL::[server-level_credential] TO [user]; ``` -To grant a `REFERENCES` permission ON a DATABASE SCOPED CREDENTIAL for a specific_user, execute: +To grant a `REFERENCES` permission on a database-scoped credential for a user, use the following T-SQL query: ```sql-GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[database-scoped_credential] TO [specific_user]; +GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[database-scoped_credential] TO [user]; ``` ## Server-level credential -Server-level credentials are used when SQL login calls `OPENROWSET` function without `DATA_SOURCE` to read files on some storage account. The name of server-level credential **must** match the base URL of Azure storage (optionally followed by a container name). A credential is added by running [CREATE CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true). You'll need to provide a CREDENTIAL NAME argument. +Server-level credentials are used when a SQL login calls `OPENROWSET` function without a `DATA_SOURCE` to read files on a storage account. ++The name of server-level credential **must** match the base URL of Azure storage, optionally followed by a container name. A credential is added by running [CREATE CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true). You must provide the `CREDENTIAL NAME` argument. > [!NOTE] > The `FOR CRYPTOGRAPHIC PROVIDER` argument is not supported. -Server-level CREDENTIAL name must match the full path to the storage account (and optionally container) in the following format: `<prefix>://<storage_account_path>[/<container_name>]`. 
Storage account paths are described in the following table: +Server-level CREDENTIAL name must match the following format: `<prefix>://<storage_account_path>[/<container_name>]`. Storage account paths are described in the following table: | External Data Source | Prefix | Storage account path | | -- | | |-| Azure Blob Storage | https | <storage_account>.blob.core.windows.net | -| Azure Data Lake Storage Gen1 | https | <storage_account>.azuredatalakestore.net/webhdfs/v1 | -| Azure Data Lake Storage Gen2 | https | <storage_account>.dfs.core.windows.net | +| Azure Blob Storage | `https` | `<storage_account>.blob.core.windows.net` | +| Azure Data Lake Storage Gen1 | `https` | `<storage_account>.azuredatalakestore.net/webhdfs/v1` | +| Azure Data Lake Storage Gen2 | `https` | `<storage_account>.dfs.core.windows.net` | -Server-level credentials enable access to Azure storage using the following authentication types: +Server-level credentials are then able to access Azure storage using the following authentication types: -### [User Identity](#tab/user-identity) +### [User identity](#tab/user-identity) -Azure AD users can access any file on Azure storage if they have `Storage Blob Data Owner`, `Storage Blob Data Contributor`, or `Storage Blob Data Reader` role. Azure AD users don't need credentials to access storage. +Azure Active Directory users can access any file on Azure storage if they are members of the Storage Blob Data Owner, Storage Blob Data Contributor, or Storage Blob Data Reader role. Azure AD users don't need credentials to access storage. -SQL users can't use Azure AD authentication to access storage. +SQL authenticated users can't use Azure AD authentication to access storage. They can access storage through a database credential using Managed Identity, SAS Key, Service Principal or if there is public access to the storage. 
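The naming rule above is mechanical enough to script. The following Python sketch (a hypothetical helper, not part of any Azure SDK) builds the expected server-level credential name from the prefix, storage account path, and optional container listed in the table:

```python
def server_level_credential_name(prefix, storage_account_path, container=None):
    """Build a credential name in the documented format:
    <prefix>://<storage_account_path>[/<container_name>]
    """
    name = f"{prefix}://{storage_account_path}"
    if container:
        name += f"/{container}"
    return name

# Data Lake Storage Gen2 account, with and without a container name
print(server_level_credential_name("https", "myaccount.dfs.core.windows.net", "mycontainer"))
# https://myaccount.dfs.core.windows.net/mycontainer
print(server_level_credential_name("https", "myaccount.blob.core.windows.net"))
# https://myaccount.blob.core.windows.net
```

The same format applies to Data Lake Storage Gen1 paths, where the storage account path already carries the `/webhdfs/v1` suffix.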
### [Shared access signature](#tab/shared-access-signature) -The following script creates a server-level credential that can be used by `OPENROWSET` function to access any file on Azure storage using SAS token. Create this credential to enable SQL principal that executes `OPENROWSET` function to read files protected -with SAS key on the Azure storage that matches URL in credential name. +The following script creates a server-level credential that can be used by the `OPENROWSET` function to access any file on Azure storage using SAS token. Create this credential to enable a SQL principal to use the `OPENROWSET` function to read files protected with a SAS key on the Azure storage. The credential name must match the URL. -Exchange <*mystorageaccountname*> with your actual storage account name, and <*mystorageaccountcontainername*> with the actual container name: +In the following sample query, replace `<mystorageaccountname>` with your actual storage account name, and `<mystorageaccountcontainername>` with the actual container name: ```sql CREATE CREDENTIAL [https://<mystorageaccountname>.dfs.core.windows.net/<mystorageaccountcontainername>] WITH IDENTITY='SHARED ACCESS SIGNATURE'-, SECRET = 'sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-18T20:42:12Z&st=2019-04-18T12:42:12Z&spr=https&sig=lQHczNvrk1KoYLCpFdSsMANd0ef9BrIPBNJ3VYEIq78%3D'; +, SECRET = 'sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-18T20:42:12Z&st=2019-04-18T12:42:12Z&spr=https&sig=lQHczNvrk1BrIPBNJ3VYEIq78%3D'; GO ``` Optionally, you can use just the base URL of the storage account, without container name. -### [Service Principal](#tab/service-principal) +### [Service principal](#tab/service-principal) -The following script creates a server-level credential that can be used to access files in a storage using Service Principal for authentication and authorization. **AppID** can be found by visiting App registrations in Azure portal and selecting the App requesting storage access. 
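Because the SAS secret is a plain URL query string, it can be inspected before being stored in a credential. A small Python sketch (illustrative only; `se`, `st`, and `sp` are the standard SAS parameter names visible in the sample above, and the signature value here is a placeholder):

```python
from urllib.parse import parse_qs

def parse_sas_token(token):
    """Split a SAS token query string into single-valued fields."""
    return {key: values[0] for key, values in parse_qs(token).items()}

sas = ("sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup"
       "&se=2019-04-18T20:42:12Z&st=2019-04-18T12:42:12Z&spr=https&sig=PLACEHOLDER")
fields = parse_sas_token(sas)
print(fields["se"])  # expiry time: 2019-04-18T20:42:12Z
print(fields["sp"])  # granted permissions: rwdlacup
```

Checking the `se` (expiry) field up front avoids creating a credential whose token has already lapsed.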
**Secret** is obtained during the App registration. **AuthorityUrl** is URL of AAD Oauth2.0 authority. +The following script creates a server-level credential that can be used to access files in storage using a service principal for authentication and authorization. **AppID** can be found by visiting App registrations in the Azure portal and selecting the App requesting storage access. **Secret** is obtained during the App registration. **AuthorityUrl** is the URL of the Azure Active Directory OAuth 2.0 authority. ```sql CREATE CREDENTIAL [https://<storage_account>.dfs.core.windows.net/<container>] WITH IDENTITY = '<AppID>@<AuthorityUrl>' ### [Managed Identity](#tab/managed-identity) -The following script creates a server-level credential that can be used by `OPENROWSET` function to access any file on Azure storage using workspace-managed identity. +The following script creates a server-level credential that can be used by the `OPENROWSET` function to access any file on Azure storage using the Azure Synapse workspace managed identity, a special type of managed service identity. ```sql CREATE CREDENTIAL [https://<storage_account>.dfs.core.windows.net/<container>] Server-level credential isn't required to allow access to publicly available files. ## Database-scoped credential -Database-scoped credentials are used when any principal calls `OPENROWSET` function with `DATA_SOURCE` or selects data from [external table](develop-tables-external-tables.md) that don't access public files. The database scoped credential doesn't need to match the name of storage account. It will be explicitly used in DATA SOURCE that defines the location of storage. +Database-scoped credentials are used when any principal calls the `OPENROWSET` function with `DATA_SOURCE` or selects data from an [external table](develop-tables-external-tables.md) that doesn't access public files. 
The database-scoped credential doesn't need to match the name of the storage account; it's referenced in the DATA SOURCE that defines the location of the storage. Database-scoped credentials enable access to Azure storage using the following authentication types: ### [Azure AD Identity](#tab/user-identity) -Azure AD users can access any file on Azure storage if they have at least `Storage Blob Data Owner`, `Storage Blob Data Contributor`, or `Storage Blob Data Reader` role. Azure AD users don't need credentials to access storage. +Azure AD users can access any file on Azure storage if they are members of the Storage Blob Data Owner, Storage Blob Data Contributor, or Storage Blob Data Reader roles. Azure AD users don't need credentials to access storage. ```sql CREATE EXTERNAL DATA SOURCE mysample WITH ( LOCATION = 'https://<storage_account>.dfs.core.windows.net/<containe ) ``` -SQL users can't use Azure AD authentication to access storage. +SQL authenticated users can't use Azure AD authentication to access storage. They can access storage through a database credential using a managed identity, SAS key, or service principal, or if there's public access to the storage. + ### [Shared access signature](#tab/shared-access-signature) -The following script creates a credential that is used to access files on storage using SAS token specified in the credential. The script will create a sample external data source that uses this SAS token to access storage. +The following script creates a credential that is used to access files on storage using the SAS token specified in the credential. The script creates a sample external data source that uses this SAS token to access storage. 
```sql -- Optional: Create MASTER KEY if not exists in database: The following script creates a credential that is used to access files on storag GO CREATE DATABASE SCOPED CREDENTIAL [SasToken] WITH IDENTITY = 'SHARED ACCESS SIGNATURE',- SECRET = 'sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-18T20:42:12Z&st=2019-04-18T12:42:12Z&spr=https&sig=lQHczNvrk1KoYLCpFdSsMANd0ef9BrIPBNJ3VYEIq78%3D'; + SECRET = 'sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-18T20:42:12Z&st=2019-04-18T12:42:12Z&spr=https&sig=lQHczNvrk1KEIq78%3D'; GO CREATE EXTERNAL DATA SOURCE mysample WITH ( LOCATION = 'https://<storage_account>.dfs.core.windows.net/<container>/<path>', WITH ( LOCATION = 'https://<storage_account>.dfs.core.windows.net/<containe ) ``` --### [Service Principal](#tab/service-principal) -The following script creates a database-scoped credential that can be used to access files in a storage using Service Principal for authentication and authorization. **AppID** can be found by visiting App registrations in Azure portal and selecting the App requesting storage access. **Secret** is obtained during the App registration. **AuthorityUrl** is URL of AAD Oauth2.0 authority. +### [Service principal](#tab/service-principal) +The following script creates a database-scoped credential that can be used to access files in a storage using service principal for authentication and authorization. **AppID** can be found by visiting App registrations in Azure portal and selecting the App requesting storage access. **Secret** is obtained during the App registration. **AuthorityUrl** is URL of Azure Active Directory Oauth2.0 authority. ```sql -- Optional: Create MASTER KEY if not exists in database: WITH ( LOCATION = 'https://<storage_account>.dfs.core.windows.net/<containe ### [Managed Identity](#tab/managed-identity) -The following script creates a database-scoped credential that can be used to impersonate current Azure AD user as Managed Identity of service. 
The script will create a sample external data source that uses workspace identity to access storage. +The following script creates a database-scoped credential that can be used to impersonate current Azure AD user as Managed Identity of service. The script creates a sample external data source that uses workspace identity to access storage. ```sql -- Optional: Create MASTER KEY if not exists in database: The database scoped credential doesn't need to match the name of storage account ### [Public access](#tab/public-access) -Database scoped credential isn't required to allow access to publicly available files. Create [data source without credential](develop-tables-external-tables.md?tabs=sql-ondemand#example-for-create-external-data-source) to access publicly available files on Azure storage. +Database scoped credential isn't required to allow access to publicly available files. Create a [data source without credential](develop-tables-external-tables.md?tabs=sql-ondemand#example-for-create-external-data-source) to access publicly available files on Azure storage. ```sql CREATE EXTERNAL DATA SOURCE mysample WITH ( LOCATION = 'https://<storage_account>.blob.core.windows.net/<container>/<path>' ) ```+ Database scoped credentials are used in external data sources to specify what authentication method will be used to access this storage: WITH ( LOCATION = 'https://<storage_account>.dfs.core.windows.net/<containe ## Examples -### **Access a publicly available data source** +### Access a publicly available data source Use the following script to create a table that accesses publicly available data source. SELECT TOP 10 * FROM OPENROWSET(BULK 'parquet/user-data/*.parquet', GO ``` -### **Access a data source using credentials** +### Access a data source using credentials Modify the following script to create an external table that accesses Azure storage using SAS token, Azure AD identity of user, or managed identity of workspace. 
```sql -- Create master key in databases with some password (one-off per database)-CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Y*********0' +CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>' GO Create databases scoped credential that use Managed Identity, SAS token or Service Principal. User needs to create only database-scoped credentials that should be used to access data source:+-- Create a database-scoped credential that uses Managed Identity, a SAS token, or a service principal. You only need to create the database-scoped credentials used to access the data source: CREATE DATABASE SCOPED CREDENTIAL WorkspaceIdentity WITH IDENTITY = 'Managed Identity' GO ## Next steps -The articles listed below will help you learn how query different folder types, file types, and create and use views: +These articles help you learn how to query different folder types and file types, and how to create and use views: - [Query single CSV file](query-single-csv-file.md) - [Query folders and multiple CSV files](query-folders-multiple-csv-files.md) The articles listed below will help you learn how query different folder types, - [Query Parquet files](query-parquet-files.md) - [Create and use views](create-use-views.md) - [Query JSON files](query-json-files.md)-- [Query Parquet nested types](query-parquet-nested-types.md)+- [Query Parquet nested types](query-parquet-nested-types.md) |
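The master key in the script above must be protected by a strong password. A minimal Python sketch for generating one (an illustrative generator, not an Azure tool; the symbol set is an assumption chosen to satisfy common SQL Server complexity rules):

```python
import secrets
import string

SYMBOLS = "!#$%^*-_"

def strong_password(length=24):
    """Generate a random password containing upper, lower, digit, and symbol characters."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in SYMBOLS for c in candidate)):
            return candidate

print(strong_password())
```

`secrets` (rather than `random`) is used because the password protects key material.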
virtual-desktop | Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md | To use a smart card to authenticate to Azure AD, you must first [configure AD FS If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host when launching a connection. The following list describes which types of authentication each Azure Virtual Desktop client currently supports. -- The Windows Desktop client and Azure Virtual Desktop Store app both support the following authentication methods:- - Username and password - - Smart card - - [Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust) - - [Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs) - - [Azure AD authentication](configure-single-sign-on.md) -- The Remote Desktop app supports the following authentication method: - - Username and password -- The web client supports the following authentication method:- - Username and password -- The Android client supports the following authentication method:- - Username and password -- The iOS client supports the following authentication method:- - Username and password -- The macOS client supports the following authentication method:- - Username and password - - Smart card: support for smart card-based sign in using smart card redirection at the Winlogon prompt when NLA is not negotiated. 
++|Client |Supported authentication type(s) | +||| +|Windows Desktop client | Username and password <br>Smart card <br>[Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust) <br>[Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs) <br>[Azure AD authentication](configure-single-sign-on.md) | +|Azure Virtual Desktop Store app | Username and password <br>Smart card <br>[Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust) <br>[Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs) <br>[Azure AD authentication](configure-single-sign-on.md) | +|Remote Desktop app | Username and password | +|Web client | Username and password | +|Android client | Username and password | +|iOS client | Username and password | +|macOS client | Username and password <br>Smart card: support for smart card-based sign in using smart card redirection at the Winlogon prompt when NLA is not negotiated. | >[!IMPORTANT] >In order for authentication to work properly, your local machine must also be able to access the [required URLs for Remote Desktop clients](safe-url-list.md#remote-desktop-clients). |
virtual-desktop | Private Link Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md | In order to use Private Link with Azure Virtual Desktop, you need the following ## Enable the feature -To use of Private Link with Azure Virtual Desktop, first you need to re-register the *Microsoft.DesktopVirtualization* resource provider and register the *Azure Virtual Desktop Private Link* feature on your Azure subscription. +To use Private Link with Azure Virtual Desktop, first you need to re-register the *Microsoft.DesktopVirtualization* resource provider and register the *Azure Virtual Desktop Private Link* feature on your Azure subscription. > [!IMPORTANT] > You need to re-register the resource provider and register the feature for each subscription you want to use Private Link with Azure Virtual Desktop. |
virtual-desktop | Whats New Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md | description: Learn about recent changes to the Remote Desktop client for Windows Previously updated : 07/21/2023 Last updated : 07/25/2023 # What's new in the Remote Desktop client for Windows The following table lists the current versions available for the public and Insi | Release | Latest version | Download | ||-|-|-| Public | 1.2.4419 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) | +| Public | 1.2.4485 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) | | Insider | 1.2.4487 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) | ## Updates for version 1.2.4487 (Insider) The following table lists the current versions available for the public and Insi Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) -In this release, we've made the following changes: +In this release, we've made the following changes: -- Narrator now describes the toggle button in the display settings side panel as "toggle button" instead of "button."-- Control types for text now correctly reflect that they're "text" and not "custom." -- Updated File and URI Launch Dialog Error Handling to be more specific and user-friendly. 
-- Fixed an issue where Narrator didn't read the error message that appears after the user selects **Detect**.-- The client now displays an error message after unsuccessfully checking for updates instead of incorrectly displaying a message that says the client is up to date.-- Added a new RDP file property called "allowed security protocols." This property restricts the list of security protocols the client can negotiate.-- Fixed an issue where, in Azure Arc, Connection Information dialog gave inconsistent information about identity verification. -- Added heading-level description to subscribe with URL.-- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Fixed an issue where the client doesn't auto-reconnect when the Gateway WebSocket connection shuts down normally. +## Updates for version 1.2.4485 ++*Date published: July 25, 2023* ++Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) ++In this release, we've made the following changes: ++- Added a new RDP file property called "allowed security protocols." This property restricts the list of security protocols the client can negotiate. +- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. +- Accessibility improvements ++ - Narrator now describes the toggle button in the display settings side panel as *toggle button* instead of *button*. + - Control types for text now correctly say that they're *text* and not *custom*. + - Fixed an issue where Narrator didn't read the error message that appears after the user selects **Delete**. + - Added heading-level description to subscribe with URL. ++- Dialog improvements ++ - Updated File and URI Launch Dialog Error Handling to be more specific and user-friendly. 
+ - The client now displays an error message after unsuccessfully checking for updates instead of incorrectly notifying the user that the client is up to date. + - Fixed an issue where, after having been automatically reconnected to the remote session, the Connection Information dialog gave inconsistent information about identity verification. + ## Updates for version 1.2.4419 *Date published: July 6, 2023* -Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) +Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW15LC7), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW15W7D), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW15LC6) In this release, we've made the following changes: In this release, we've made the following changes: ## Updates for version 1.2.4337 -*Date published: June 13, 2023* --Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1697H), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW15Tzb), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW15W7E) +*Date published: June 13, 2023* In this release, we've made the following changes: |
virtual-machines | Dsc Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-windows.md | The PowerShell DSC Extension for Windows is published and supported by Microsoft. The DSC Extension supports the following operating systems: -Windows Server 2019, Windows Server 2016, Windows Server 2012R2, Windows Server 2012, Windows Server 2008 R2 SP1, Windows Client 7/8.1/10 +Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012R2, Windows Server 2012, Windows Server 2008 R2 SP1, Windows Client 7/8.1/10 ### Internet connectivity |
virtual-machines | Virtual Machine Scale Sets Maintenance Control Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-template.md | For more information, see [configurationAssignments](/azure/templates/microsoft. ```json { - "type": "Microsoft.Maintenance/configurationAssignments", - "apiVersion": "2021-09-01-preview", - "name": "string", - "location": "string", - "properties": { - "maintenanceConfigurationId": "string", - "resourceId": "string" - } +"type": "Microsoft.Maintenance/configurationAssignments", +"apiVersion": "2021-09-01-preview", +"name": "[variables('maintenanceConfigurationAssignmentName')]", +"location": "string (e.g. westeurope)", +"scope": "Resource Id of the resource that is being assigned to the Maintenance Configuration (e.g. VMSS Id)", +"properties": { + "maintenanceConfigurationId": "Resource Id of the Maintenance Configuration", + "resourceId": "Resource Id of the resource that is being assigned to the Maintenance Configuration (e.g. VMSS Id)" +} } ``` |
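A `configurationAssignments` fragment like the one above can be sanity-checked before deployment by parsing it and asserting the assignment fields are present. A Python sketch (the JSON here is a hypothetical, well-formed version of the fragment; the resource IDs are placeholders, not real values):

```python
import json

# Hypothetical, well-formed configurationAssignments fragment with placeholder IDs
fragment = json.loads("""
{
  "type": "Microsoft.Maintenance/configurationAssignments",
  "apiVersion": "2021-09-01-preview",
  "name": "[variables('maintenanceConfigurationAssignmentName')]",
  "location": "westeurope",
  "scope": "<resource-id-of-the-assigned-resource>",
  "properties": {
    "maintenanceConfigurationId": "<maintenance-configuration-resource-id>",
    "resourceId": "<resource-id-of-the-assigned-resource>"
  }
}
""")

# The assignment must name both the maintenance configuration and the target resource.
assert fragment["type"] == "Microsoft.Maintenance/configurationAssignments"
assert {"maintenanceConfigurationId", "resourceId"} <= fragment["properties"].keys()
print("fragment OK")
```

Note that `scope` and `properties.resourceId` both refer to the resource (for example, a Virtual Machine Scale Set) being assigned to the maintenance configuration.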
virtual-machines | Vm Boot Optimization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-boot-optimization.md | + + Title: VM Boot Optimization for Azure Compute Gallery Images with Azure VM Image Builder +description: Optimize VM Boot and Provisioning time with Azure VM Image Builder ++ Last updated : 06/07/2023 ++++ ++ ++# VM optimization for gallery images with Azure VM Image Builder ++ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Virtual Machine Scale Sets ++In this article, you learn how to use Azure VM Image Builder to optimize your ACG (Azure Compute Gallery) Images or Managed Images or VHDs to improve the create time for your VMs. ++## Azure VM Optimization +Azure VM optimization improves virtual machine creation time by updating the gallery image to optimize the image for a faster boot time. ++## Image types supported ++Optimization for the following images is supported: ++| Features | Details | +||| +|OS Type| Linux, Windows | +| Partition | MBR/GPT | +| Hyper-V | Gen1/Gen2 | +| OS State | Generalized | ++The following types of images aren't supported: ++* Images with size greater than 2 TB +* ARM64 images +* Specialized images +++## Optimization in Azure VM Image Builder ++Optimization can be enabled while creating a VM image using the CLI. ++Customers can create an Azure VM Image Builder template using CLI. It contains details regarding source, type of customization, and distribution. ++In your template, you will need to enable the additional fields for VM optimization. For more information on how to enable the VM optimization fields for your image builder template, see the [Optimize property](../virtual-machines/linux/image-builder-json.md#properties-optimize). ++> [!NOTE] +> To enable VM optimization benefits, you must be using Azure Image Builder API Version `2022-07-01` or later. 
++ ++ ++## FAQs ++ ++### Can VM optimization be used without Azure VM Image Builder customization? ++ ++Yes, customers can opt for VM optimization only, without using the Azure VM Image Builder customization feature. Simply enable the optimization flag and leave the customization field empty. ++ ++### Can an existing ACG image version be optimized? ++No, this optimization feature won't update an existing ACG image version. However, optimization can be enabled while creating a new version of an existing image. ++ ++### How much time does it take to generate an optimized image? ++ ++ The following latencies have been observed at various percentiles: ++| OS | Size | P50 | P95 | Average | +| | | | | | +| Linux | 30 GB VHD | 20 mins | 21 mins | 20 mins | +| Windows | 127 GB VHD | 34 mins | 35 mins | 33 mins | ++ ++These are end-to-end durations. Image generation time varies based on factors such as OS type, VHD size, and OS state. ++ ++### Is the OS image copied out of the customer subscription for optimization? ++Yes, the OS VHD is copied from the customer subscription to an Azure subscription in the same geographic location for optimization. Once optimization is finished or timed out, Azure internally deletes all copied OS VHDs. ++### What are the performance improvements observed for VM boot optimization? ++Enabling the VM boot optimization feature may not always result in a noticeable performance improvement, because the outcome depends on several factors, such as whether the source image is already optimized, the OS type, and the customization applied. However, to ensure the best VM boot performance, it's recommended to enable this feature. ++ ++## Next steps +Learn more about [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md). |
virtual-machines | Oracle Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-overview.md | -In this article, you learn about running Oracle solutions using the Azure infrastructure. +In this article, you learn about running Oracle solutions using the Azure infrastructure. ++> [!Important] +> Oracle RAC and Oracle RAC OneNode are not supported in Azure Bare Metal Infrastructure. ## Oracle databases on Azure infrastructure Oracle supports running its Database 12.1 and higher Standard and Enterprise editions in Azure on VM images based on Oracle Linux. You can run Oracle databases on Azure infrastructure using Oracle Database on Oracle Linux images available in the Azure Marketplace. Different [backup strategies](oracle-database-backup-strategies.md) are availabl - Using [Azure backup](oracle-database-backup-azure-backup.md) - Using [Oracle RMAN Streaming data](oracle-rman-streaming-backup.md) backup ## Deploy Oracle applications on Azure-Use Terraform templates to set up Azure infrastructure and install Oracle applications. For more information, see [Terraform on Azure](https://learn.microsoft.com/azure/developer/terraform/?branch=main&branchFallbackFrom=pr-en-us-234143). +Use Terraform templates to set up Azure infrastructure and install Oracle applications. For more information, see [Terraform on Azure](/azure/developer/terraform). Oracle has certified the following applications to run in Azure when connecting to an Oracle database by using the Azure with Oracle Cloud interconnect solution: - E-Business Suite According to Oracle Support, JD Edwards EnterpriseOne versions 9.2 and above are ## Licensing Deployment of Oracle solutions in Azure is based on a bring-your-own-license model. This model assumes that you have licenses to use Oracle software and that you have a current support agreement in place with Oracle. Microsoft Azure is an authorized cloud environment for running Oracle Database. 
The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf). -Oracle databases generally require higher memory and I/O. For this reason, we recommend [Memory Optimized VMs](https://learn.microsoft.com/azure/virtual-machines/sizes-memory) for these workloads. To optimize your workloads further, we recommend [Constrained Core vCPUs](https://learn.microsoft.com/azure/virtual-machines/constrained-vcpu?branch=main) for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count. +Oracle databases generally require higher memory and I/O. For this reason, we recommend [Memory Optimized VMs](/azure/virtual-machines/sizes-memory) for these workloads. To optimize your workloads further, we recommend [Constrained Core vCPUs](/azure/virtual-machines/constrained-vcpu) for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count. When you migrate Oracle software and workloads from on-premises to Microsoft Azure, Oracle provides license mobility as stated in [Oracle and Microsoft Strategic Partnership FAQ](https://www.oracle.com/cloud/azure/interconnect/faq/). ## Next steps You now have an overview of current Oracle databases and solutions based on VM images in Microsoft Azure. Your next step is to deploy your first Oracle database on Azure. |
virtual-network | Ipv6 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md | The current IPv6 for Azure Virtual Network release has the following limitations: - While it's possible to create NSG rules for IPv4 and IPv6 within the same NSG, it isn't currently possible to combine an IPv4 subnet with an IPv6 subnet in the same rule when specifying IP prefixes. +- When using a dual-stack configuration with a load balancer, health probes for IPv6 don't function unless a network security group is active. + - ICMPv6 isn't currently supported in Network Security Groups. - Azure Virtual WAN currently supports IPv4 traffic only. |
virtual-network | Tutorial Filter Network Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic.md | -Network security groups contain security rules that filter network traffic by IP address, port, and protocol. When a network security group is associated with a subnet, security rules are applied to resources deployed in that subnet. +Network security groups contain security rules that filter network traffic by IP address, port, and protocol. When a network security group is associated with a subnet, security rules are applied to resources deployed in that subnet. + In this tutorial, you learn how to: > * Create application security groups > * Create a virtual network and associate a network security group to a subnet > * Deploy virtual machines and associate their network interfaces to the application security groups-> * Test traffic filters --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites -- An Azure subscription+- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com). -## Create a virtual network --1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Virtual network**, or search for *Virtual Network* in the portal search box. --1. Select **Create**. --1. On the **Basics** tab of **Create virtual network**, enter or select this information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **Create new**. </br> Enter *myResourceGroup*. </br> Select **OK**. | - | **Instance details** | | - | Name | Enter *myVNet*. | - | Region | Select **East US**. | --1. 
Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. --1. Select **Create**. ## Create application security groups An [application security group (ASG)](application-security-groups.md) enables you to group together servers with similar functions, such as web servers. -1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Application security group**, or search for *Application security group* in the portal search box. +1. In the search box at the top of the portal, enter **Application security group**. Select **Application security groups** in the search results. -2. Select **Create**. +1. Select **+ Create**. -3. On the **Basics** tab of **Create an application security group**, enter or select this information: +1. On the **Basics** tab of **Create an application security group**, enter or select this information: | Setting | Value | | - | -- | |**Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **myResourceGroup**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Name | Enter *myAsgWebServers*. | - | Region | Select **(US) East US**. | + | Name | Enter **asg-web**. | + | Region | Select **East US 2**. | -4. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. +1. Select **Review + create**. -5. Select **Create**. +1. Select **Create**. -6. Repeat the previous steps, specifying the following values: +1. Repeat the previous steps, specifying the following values: | Setting | Value | | - | -- | |**Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **myResourceGroup**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Name | Enter *myAsgMgmtServers*. | - | Region | Select **(US) East US**. | + | Name | Enter **asg-mgmt**. | + | Region | Select **East US 2**. | -8. 
Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. +1. Select **Review + create**. -9. Select **Create**. +1. Select **Create**. ## Create a network security group A [network security group (NSG)](network-security-groups-overview.md) secures network traffic in your virtual network. -1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Network security group**, or use the portal search box to search for **Network security group** (not *Network security group (classic)*). +1. In the search box at the top of the portal, enter **Network security group**. Select **Network security groups** in the search results. -1. Select **Create**. + > [!NOTE] + > In the search results for **Network security groups**, you may see **Network security groups (classic)**. Select **Network security groups**. ++1. Select **+ Create**. 1. On the **Basics** tab of **Create network security group**, enter or select this information: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **myResourceGroup**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Name | Enter *myNSG*. | - | Location | Select **(US) East US**. | + | Name | Enter **nsg-1**. | + | Location | Select **East US 2**. | -5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. +1. Select **Review + create**. -6. Select **Create**. +1. Select **Create**. ## Associate network security group to subnet -In this section, you'll associate the network security group with the subnet of the virtual network you created earlier. +In this section, you associate the network security group with the subnet of the virtual network you created earlier. ++1. In the search box at the top of the portal, enter **Network security group**. 
Select **Network security groups** in the search results. -1. Search for *myNsg* in the portal search box. +1. Select **nsg-1**. -2. Select **Subnets** from the **Settings** section of **myNSG**. +1. Select **Subnets** from the **Settings** section of **nsg-1**. -3. In the **Subnets** page, select **+ Associate**: +1. In the **Subnets** page, select **+ Associate**: :::image type="content" source="./media/tutorial-filter-network-traffic/associate-nsg-subnet.png" alt-text="Screenshot of Associate a network security group to a subnet." border="true"::: -3. Under **Associate subnet**, select **myVNet** for **Virtual network**. +1. Under **Associate subnet**, select **vnet-1 (test-rg)** for **Virtual network**. -4. Select **default** for **Subnet**, and then select **OK**. +1. Select **subnet-1** for **Subnet**, and then select **OK**. ## Create security rules +1. Select **Inbound security rules** from the **Settings** section of **nsg-1**. -1. Select **Inbound security rules** from the **Settings** section of **myNSG**. --1. In **Inbound security rules** page, select **+ Add**: -- :::image type="content" source="./media/tutorial-filter-network-traffic/add-inbound-rule.png" alt-text="Screenshot of Inbound security rules in a network security group." border="true"::: +1. In **Inbound security rules** page, select **+ Add**. -1. Create a security rule that allows ports 80 and 443 to the **myAsgWebServers** application security group. In **Add inbound security rule** page, enter or select this information: +1. Create a security rule that allows ports 80 and 443 to the **asg-web** application security group. In **Add inbound security rule** page, enter or select the following information: | Setting | Value | | - | -- | | Source | Leave the default of **Any**. | | Source port ranges | Leave the default of **(*)**. | | Destination | Select **Application security group**. |- | Destination application security groups | Select **myAsgWebServers**. 
| + | Destination application security groups | Select **asg-web**. | | Service | Leave the default of **Custom**. |- | Destination port ranges | Enter *80,443*. | + | Destination port ranges | Enter **80,443**. | | Protocol | Select **TCP**. | | Action | Leave the default of **Allow**. | | Priority | Leave the default of **100**. |- | Name | Enter *Allow-Web-All*. | -- :::image type="content" source="./media/tutorial-filter-network-traffic/inbound-security-rule-inline.png" alt-text="Screenshot of Add inbound security rule in a network security group." lightbox="./media/tutorial-filter-network-traffic/inbound-security-rule-expanded.png"::: + | Name | Enter **allow-web-all**. | 1. Select **Add**. -1. Complete steps 3-4 again using this information: +1. Complete the previous steps with the following information: | Setting | Value | | - | -- | | Source | Leave the default of **Any**. | | Source port ranges | Leave the default of **(*)**. | | Destination | Select **Application security group**. |- | Destination application security group | Select **myAsgMgmtServers**. | - | Service | Leave the default of **Custom**. | - | Destination port ranges | Enter *3389*. | - | Protocol | Select **Any**. | + | Destination application security group | Select **asg-mgmt**. | + | Service | Select **RDP**. | | Action | Leave the default of **Allow**. | | Priority | Leave the default of **110**. |- | Name | Enter *Allow-RDP-All*. | + | Name | Enter **allow-rdp-all**. | 1. Select **Add**. > [!CAUTION]- > In this article, RDP (port 3389) is exposed to the internet for the VM that is assigned to the **myAsgMgmtServers** application security group. + > In this article, RDP (port 3389) is exposed to the internet for the VM that is assigned to the **asg-mgmt** application security group. 
> > For production environments, instead of exposing port 3389 to the internet, it's recommended that you connect to Azure resources that you want to manage using a VPN, private network connection, or Azure Bastion. > > For more information on Azure Bastion, see [What is Azure Bastion?](../bastion/bastion-overview.md). -Once you've completed steps 1-3, review the rules you created. Your list should look like the list in the following example: -- ## Create virtual machines Create two virtual machines (VMs) in the virtual network. -### Create the first virtual machine +1. In the portal, search for and select **Virtual machines**. -1. From the Azure portal menu, select **+ Create a resource** > **Compute** > **Virtual machine**, or search for *Virtual machine* in the portal search box. +1. In **Virtual machines**, select **+ Create**, then **Azure virtual machine**. -2. In **Create a virtual machine**, enter or select this information in the **Basics** tab: +1. In **Create a virtual machine**, enter or select this information in the **Basics** tab: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **myResourceGroup**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Virtual machine name | Enter *myVMWeb*. | - | Region | Select **(US) East US**. | + | Virtual machine name | Enter **vm-1**. | + | Region | Select **(US) East US 2**. | | Availability options | Leave the default of **No infrastructure redundancy required**. |- | Security type | Leave the default of **Standard**. | - | Image | Select **Windows Server 2019 Datacenter - Gen2**. | + | Security type | Select **Standard**. | + | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. | | Azure Spot instance | Leave the default of unchecked. |- | Size | Select **Standard_D2s_V3**. | + | Size | Select a size. | | **Administrator account** | | | Username | Enter a username. | | Password | Enter a password. 
| Create two virtual machines (VMs) in the virtual network. | **Inbound port rules** | | | Select inbound ports | Select **None**. | -3. Select the **Networking** tab. +1. Select **Next: Disks** then **Next: Networking**. -4. In the **Networking** tab, enter or select the following information: +1. In the **Networking** tab, enter or select the following information: | Setting | Value | | - | -- | | **Network interface** | |- | Virtual network | Select **myVNet**. | - | Subnet | Select **default (10.0.0.0/24)**. | + | Virtual network | Select **vnet-1**. | + | Subnet | Select **subnet-1 (10.0.0.0/24)**. | | Public IP | Leave the default of a new public IP. | | NIC network security group | Select **None**. | -5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. --6. Select **Create**. The VM may take a few minutes to deploy. --### Create the second virtual machine +1. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. -Complete steps 1-6 again, but in step 2, enter *myVMMgmt* for Virtual machine name. +1. Select **Create**. The VM may take a few minutes to deploy. -Wait for the VMs to complete deployment before advancing to the next section. +1. Repeat the previous steps to create a second virtual machine named **vm-2**. ## Associate network interfaces to an ASG When you created the VMs, Azure created a network interface for each VM, and att Add the network interface of each VM to one of the application security groups you created previously: -1. Search for *myVMWeb* in the portal search box. +1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **Networking** from the **Settings** section of **myVMWeb** VM. +1. Select **vm-1**. -3. Select the **Application security groups** tab, then select **Configure the application security groups**. +1. 
Select **Networking** from the **Settings** section of **vm-1**. - :::image type="content" source="./media/tutorial-filter-network-traffic/configure-app-sec-groups.png" alt-text="Screenshot of Configure application security groups." border="true"::: +1. Select the **Application security groups** tab, then select **Configure the application security groups**. -4. In **Configure the application security groups**, select **myAsgWebServers**. Select **Save**. + :::image type="content" source="./media/tutorial-filter-network-traffic/configure-app-sec-groups.png" alt-text="Screenshot of Configure application security groups." border="true"::: - :::image type="content" source="./media/tutorial-filter-network-traffic/select-application-security-groups-inline.png" alt-text="Screenshot showing how to associate application security groups to a network interface." border="true" lightbox="./media/tutorial-filter-network-traffic/select-application-security-groups-expanded.png"::: +1. In **Configure the application security groups**, select **asg-web** in the **Application security groups** pull-down menu, then select **Save**. -5. Complete steps 1 and 2 again, searching for the *myVMMgmt* virtual machine and selecting the **myAsgMgmtServers** ASG. +1. Repeat the previous steps for **vm-2**, selecting **asg-mgmt** in the **Application security groups** pull-down menu. ## Test traffic filters -1. Search for *myVMMgmt* in the portal search box. +1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **vm-2**. -1. On the **Overview** page, select the **Connect** button and then select **RDP**. +1. On the **Overview** page, select the **Connect** button and then select **Native RDP**. 1. Select **Download RDP file**. Add the network interface of each VM to one of the application security groups y 5. You may receive a certificate warning during the connection process. 
If you receive the warning, select **Yes** or **Continue** to continue with the connection. - The connection succeeds, because inbound traffic from the internet to the **myAsgMgmtServers** application security group is allowed through port 3389. + The connection succeeds, because inbound traffic from the internet to the **asg-mgmt** application security group is allowed through port 3389. - The network interface for **myVMMgmt** is associated with the **myAsgMgmtServers** application security group and allows the connection. + The network interface for **vm-2** is associated with the **asg-mgmt** application security group and allows the connection. -6. Open a PowerShell session on **myVMMgmt**. Connect to **myVMWeb** using the following: +6. Open a PowerShell session on **vm-2**. Connect to **vm-1** using the following: ```powershell- mstsc /v:myVmWeb + mstsc /v:vm-1 ``` - The RDP connection from **myVMMgmt** to **myVMWeb** succeeds because virtual machines in the same network can communicate with each other over any port by default. + The RDP connection from **vm-2** to **vm-1** succeeds because virtual machines in the same network can communicate with each other over any port by default. - You can't create an RDP connection to the **myVMWeb** virtual machine from the internet. The security rule for the **myAsgWebServers** prevents connections to port 3389 inbound from the internet. Inbound traffic from the Internet is denied to all resources by default. + You can't create an RDP connection to the **vm-1** virtual machine from the internet. The security rule for the **asg-web** application security group prevents connections to port 3389 inbound from the internet. Inbound traffic from the internet is denied to all resources by default. -7. 
To install Microsoft IIS on the **vm-1** virtual machine, enter the following command from a PowerShell session on the **vm-1** virtual machine: ```powershell Install-WindowsFeature -name Web-Server -IncludeManagementTools ``` -8. After the IIS installation is complete, disconnect from the **vm-1** virtual machine, which leaves you in the **vm-2** virtual machine remote desktop connection. +8. After the IIS installation is complete, disconnect from the **vm-1** virtual machine, which leaves you in the **vm-2** virtual machine remote desktop connection. -9. Disconnect from the **vm-2** VM. +9. Disconnect from the **vm-2** VM. -10. Search for *myVMWeb* in the portal search box. +10. Search for **vm-1** in the portal search box. -11. On the **Overview** page of **myVMWeb**, note the **Public IP address** for your VM. The address shown in the following example is 23.96.39.113, but your address is different: +11. On the **Overview** page of **vm-1**, note the **Public IP address** for your VM. The address shown in the following example is 20.230.55.178, but your address is different: :::image type="content" source="./media/tutorial-filter-network-traffic/public-ip-address.png" alt-text="Screenshot of Public IP address of a virtual machine in the Overview page." border="true"::: -11. To confirm that you can access the **myVMWeb** web server from the internet, open an internet browser on your computer and browse to `http://<public-ip-address-from-previous-step>`. --You see the IIS default page, because inbound traffic from the internet to the **myAsgWebServers** application security group is allowed through port 80. --The network interface attached for **myVMWeb** is associated with the **myAsgWebServers** application security group and allows the connection. +11. To confirm that you can access the **vm-1** web server from the internet, open an internet browser on your computer and browse to `http://<public-ip-address-from-previous-step>`. 
-## Clean up resources +You see the IIS default page, because inbound traffic from the internet to the **asg-web** application security group is allowed through port 80. -When no longer needed, delete the resource group and all of the resources it contains: +The network interface attached for **vm-1** is associated with the **asg-web** application security group and allows the connection. -1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it. -2. Select **Delete resource group**. -3. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**. ## Next steps |
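The filtering behavior the tutorial above demonstrates (inbound rules evaluated in priority order, with internet traffic denied by default) can be sketched as a small simulation. This is a simplified, hypothetical model for illustration only; real NSG rules also match on source and destination prefixes, protocol, and application security groups. The rule names mirror the tutorial, everything else is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int   # lower number = evaluated first
    ports: set      # destination ports the rule matches
    action: str     # "Allow" or "Deny"

# Hypothetical model of the two inbound rules created in the tutorial.
rules = [
    Rule("allow-web-all", 100, {80, 443}, "Allow"),
    Rule("allow-rdp-all", 110, {3389}, "Allow"),
]

def evaluate(port: int) -> str:
    """Return the action for inbound internet traffic on a port: rules
    are checked in priority order, and traffic matching no rule is
    denied by default."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if port in rule.ports:
            return rule.action
    return "Deny"  # default inbound deny from the internet

print(evaluate(443))   # → Allow
print(evaluate(22))    # → Deny
```

This mirrors why the HTTP and RDP connections in the tutorial succeed while any other inbound port from the internet is blocked.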
web-application-firewall | Waf Front Door Rate Limit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit.md | You also must specify at least one *match condition*, which tells Front Door whe If you need to apply a rate limit rule to all of your requests, consider using a match condition like the following example: The match condition above identifies all requests with a `Host` header of length greater than 0. Because all valid HTTP requests for Front Door contain a `Host` header, this match condition has the effect of matching all HTTP requests. Requests from the same client often arrive at the same Front Door server. In that case, you see requests are blocked as soon as the rate limit is reached for each of the client IP addresses. -However, it's possible that requests from the same client might arrive at a different Front Door server that hasn't refreshed the rate limit counter yet. For example, the client might open a new TCP connection for each request. If the threshold is low enough, the first request to the new Front Door server could pass the rate limit check. So, for a low threshold (for example, less than about 100 requests per minute), you might see some requests above the threshold get through. Larger time window sizes (for example, 5 minutes over 1 minute) with larger thresholds are typically more effective than the shorter time window sizes with lower thresholds. +However, it's possible that requests from the same client might arrive at a different Front Door server that hasn't refreshed the rate limit counters yet. For example, the client might open a new TCP connection for each request. If the threshold is low enough, the first request to the new Front Door server could pass the rate limit check. 
So, for a low threshold (for example, less than about 200 requests per minute), you may see some requests above the threshold get through. ++A few considerations to keep in mind when determining threshold values and time windows for rate limiting: +- Larger window sizes with smaller thresholds are most effective in defending against DDoS attacks. +- Larger time window sizes (for example, 5 minutes instead of 1 minute) with larger threshold values (for example, 200 instead of 100) tend to enforce rates closer to the configured threshold than shorter time windows with lower threshold values. ## Next steps |
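The per-server counter behavior described above can be illustrated with a toy fixed-window simulation. This is a hypothetical model, not how Front Door is implemented; it only shows why independent, not-yet-synchronized counters let a burst slip past a low threshold when a client's requests land on several servers.

```python
from collections import defaultdict

class ServerRateLimiter:
    """Illustrative fixed-window counter, one instance per server.
    Counters are per-server and not instantly shared, which is why a
    client spread across servers can briefly exceed a low threshold."""
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.counts = defaultdict(int)  # client IP -> requests this window

    def allow(self, client_ip: str) -> bool:
        self.counts[client_ip] += 1
        return self.counts[client_ip] <= self.threshold

# Two servers, threshold of 3 per window; 5 requests alternate between them.
servers = [ServerRateLimiter(3), ServerRateLimiter(3)]
allowed = sum(servers[i % 2].allow("203.0.113.7") for i in range(5))
print(allowed)  # → 5: neither per-server counter exceeds the threshold
```

With a single shared counter and the same threshold, two of those five requests would be blocked, which is the intuition behind preferring larger windows and larger thresholds for accurate enforcement.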