Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Concept Authentication Phone Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md | To work properly, phone numbers must be in the format *+CountryCode PhoneNumber* > [!NOTE] > There needs to be a space between the country/region code and the phone number. >-> Password reset doesn't support phone extensions. Even in the *+1 4251234567X12345* format, extensions are removed before the call is placed. +> Password reset and Azure AD Multi-Factor Authentication don't support phone extensions. Even in the *+1 4251234567X12345* format, extensions are removed before the call is placed. ## Mobile phone verification |
active-directory | Concept Certificate Based Authentication Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-migration.md | To configure Staged Rollout, follow these steps: For more information, see [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md). +>[!NOTE] +> When Staged Rollout is enabled for a user, the user is considered a managed user and all authentication happens at Azure AD. For a federated tenant, if CBA is enabled on Staged Rollout, password authentication works only if PHS is also enabled; otherwise, password authentication fails. + ## Use Azure AD Connect to update certificateUserIds attribute An AD FS admin can use **Synchronization Rules Editor** to create rules to sync the values of attributes from AD FS to Azure AD user objects. For more information, see [Sync rules for certificateUserIds](concept-certificate-based-authentication-certificateuserids.md#update-certificate-user-ids-using-azure-ad-connect). |
active-directory | Concept Certificate Based Authentication Technical Deep Dive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md | Now we'll walk through each step: 1. Azure AD completes the sign-in process by sending a primary refresh token back to indicate successful sign-in. 1. If the user sign-in is successful, the user can access the application. -## Single-factor certificate-based authentication +## MFA with Single-factor certificate-based authentication -Azure AD CBA supports second factors to meet MFA requirements with single-factor certificates. Users can use either passwordless sign-in or FIDO2 security keys as second factors when the first factor is single-factor CBA. Users need to register passwordless sign-in or FIDO2 in advance to signing in with Azure AD CBA. +Azure AD CBA supports second factors to meet MFA requirements with single-factor certificates. Users can use either passwordless sign-in or FIDO2 security keys as second factors when the first factor is single-factor CBA. Users need to have another way to get MFA and register passwordless sign-in or FIDO2 in advance of signing in with Azure AD CBA. ++>[!IMPORTANT] +>A user is considered MFA capable when the user is in scope for the certificate-based authentication method. This means the user won't be able to use proof-up as part of their authentication to register other available methods. For more information, see [Azure AD MFA](../authentication/concept-mfa-howitworks.md). **Steps to set up passwordless phone signin(PSI) with CBA** |
active-directory | How To Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-prerequisites.md | You need the following to use Azure AD Connect cloud sync: - On-premises firewall configurations. ## Group Managed Service Accounts-A group Managed Service Account is a managed domain account that provides automatic password management, simplified service principal name (SPN) management,the ability to delegate the management to other administrators, and also extends this functionality over multiple servers. Azure AD Connect Cloud Sync supports and uses a gMSA for running the agent. You will be prompted for administrative credentials during setup, in order to create this account. The account will appear as (domain\provAgentgMSA$). For more information on a gMSA, see [Group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) +A group Managed Service Account is a managed domain account that provides automatic password management, simplified service principal name (SPN) management, the ability to delegate the management to other administrators, and also extends this functionality over multiple servers. Azure AD Connect Cloud Sync supports and uses a gMSA for running the agent. You will be prompted for administrative credentials during setup, in order to create this account. The account will appear as (domain\provAgentgMSA$). For more information on a gMSA, see [group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) ### Prerequisites for gMSA: 1. The Active Directory schema in the gMSA domain's forest needs to be updated to Windows Server 2012 or later. If you are creating a custom gMSA account, you need to ensure that the account h |Allow |gMSA Account |Read all properties |Descendant Contact objects| |Allow |gMSA Account |Create/delete User objects|This object and all descendant objects| -For steps on how to upgrade an existing agent to use a gMSA account see [Group Managed Service Accounts](how-to-install.md#group-managed-service-accounts). --#### Create gMSA account with PowerShell -You can use the following PowerShell script to create a custom gMSA account. Then you can use the [cloud sync gMSA cmdlets](how-to-gmsa-cmdlets.md) to apply more granular permissions. --```powershell -# Filename: 1_SetupgMSA.ps1 -# Description: Creates and installs a custom gMSA account for use with Azure AD Connect cloud sync. -# -# DISCLAIMER: -# Copyright (c) Microsoft Corporation. All rights reserved. This -# script is made available to you without any express, implied or -# statutory warranty, not even the implied warranty of -# merchantability or fitness for a particular purpose, or the -# warranty of title or non-infringement. The entire risk of the -# use or the results from the use of this script remains with you. 
-# -# -# -# -# Declare variables -$Name = 'provAPP1gMSA' -$Description = "Azure AD Cloud Sync service account for APP1 server" -$Server = "APP1.contoso.com" -$Principal = Get-ADGroup 'Domain Computers' --# Create service account in Active Directory -New-ADServiceAccount -Name $Name ` --Description $Description `--DNSHostName $Server `--ManagedPasswordIntervalInDays 30 `--PrincipalsAllowedToRetrieveManagedPassword $Principal `--Enabled $True `--PassThru--# Install the new service account on Azure AD Cloud Sync server -Install-ADServiceAccount -Identity $Name -``` --For additional information on the cmdlets above, see [Getting Started with Group Managed Service Accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj128431(v=ws.11)?redirectedfrom=MSDN). +For steps on how to upgrade an existing agent to use a gMSA account see [group Managed Service Accounts](how-to-install.md#group-managed-service-accounts). ++For more information on how to prepare your Active Directory for group Managed Service Account, see [group Managed Service Accounts Overview](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview). ### In the Azure Active Directory admin center Run the [IdFix tool](/office365/enterprise/prepare-directory-attributes-for-sync 2. The PowerShell execution policy on the local server must be set to Undefined or RemoteSigned. -3. If there's a firewall between your servers and Azure AD, configure see [Firewall and proxy requirements](#firewall-and-proxy-requirements) below. +3. If there's a firewall between your servers and Azure AD, see [Firewall and proxy requirements](#firewall-and-proxy-requirements) below. >[!NOTE] > Installing the cloud provisioning agent on Windows Server Core is not supported. |
active-directory | Console Quickstart Portal Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-quickstart-portal-nodejs.md | -> -> [!div renderon="portal" class="sxs-lookup"] +> > In this quickstart, you download and run a code sample that demonstrates how a Node.js console application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity. > > This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [client credentials grant](v2-oauth2-client-creds-grant-flow.md). |
active-directory | Daemon Quickstart Portal Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-python.md | +> # Quickstart: Acquire a token and call Microsoft Graph API from a Python console app using app's identity > -> [!div renderon="portal" class="sxs-lookup"] > In this quickstart, you download and run a code sample that demonstrates how a Python application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity. > > ## Prerequisites |
active-directory | Web App Quickstart Portal Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-java.md | -> > [!div renderon="portal" id="display-on-portal" class="sxs-lookup"] +> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"] > # Quickstart: Add sign-in with Microsoft to a Java web app > > In this quickstart, you download and run a code sample that demonstrates how a Java web application can sign in users and call the Microsoft Graph API. Users from any Azure Active Directory (Azure AD) organization can sign in to the application.-> > [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md?tabs=java) +> > [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md?tabs=java) |
active-directory | Datawiza Azure Ad Sso Mfa Oracle Ebs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-mfa-oracle-ebs.md | Title: Configure Datawiza for Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle EBS -description: Learn to enable Azure AD MFA and SSO for an Oracle E-Business Suite application via Datawiza +description: Learn how to enable Azure AD Multi-Factor Authentication and SSO for an Oracle E-Business Suite application via Datawiza. -# Configure Datawiza for Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle EBS +# Configure Datawiza for Azure AD Multi-Factor Authentication and single sign-on to Oracle EBS -In this tutorial, learn how to enable Azure Active Directory Multi-Factor Authentication (MFA) and single sign-on (SSO) for an Oracle E-Business Suite (Oracle EBS) application via Datawiza. +In this article, learn how to enable Azure Active Directory (Azure AD) Multi-Factor Authentication and single sign-on (SSO) for an Oracle E-Business Suite (Oracle EBS) application via Datawiza. -The benefits of integrating applications with Azure Active Directory (Azure AD) via Datawiza: +Here are some benefits of integrating applications with Azure AD via Datawiza: -* [Embrace proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) - a security model that adapts to modern environments and embraces hybrid workplace, while it protects people, devices, apps, and data -* [Azure Active Directory single sign-on](https://azure.microsoft.com/solutions/active-directory-sso/#overview) - secure and seamless access for users and apps, from any location, using a device -* [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) - users are prompted during sign-in for forms of identification, such as a code on their cellphone or a fingerprint scan -* [What is Conditional Access?](../conditional-access/overview.md) - policies are if-then statements, if a user wants to access a resource, then they must complete an action -* [Easy authentication and authorization in Azure AD with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/) - use web applications such as: Oracle JDE, Oracle E-Business Suite, Oracle Sibel, and home-grown apps -* Use the [Datawiza Cloud Management Console](https://console.datawiza.com) (DCMC) - manage access to applications in public clouds and on-premises +* A [Zero Trust](https://www.microsoft.com/security/business/zero-trust) security model adapts to modern environments and embraces a hybrid workplace while it helps protect people, devices, apps, and data. +* [Single sign-on](https://azure.microsoft.com/solutions/active-directory-sso/#overview) provides secure and seamless access for device users and apps from any location. +* [Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) prompts users during sign-in for forms of identification, such as a code on their device or a fingerprint scan. +* [Conditional Access](../conditional-access/overview.md) provides policies as if/then statements. If a user wants to access a resource, then they must complete an action. 
+* [Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/) provides authentication and authorization in Azure AD with no code. Use web applications such as Oracle JDE, Oracle EBS, Oracle Siebel, and home-grown apps. +* Use the [Datawiza Cloud Management Console](https://console.datawiza.com) (DCMC) to manage access to applications in public clouds and on-premises. -## Scenario description --This document focuses on modern identity providers (IdPs) integrating with the legacy Oracle EBS application. Oracle EBS requires a set of Oracle EBS service account credentials and an Oracle EBS database container (DBC) file. +This article focuses on modern identity providers (IdPs) integrating with the legacy Oracle EBS application. The application requires a set of Oracle EBS service account credentials and an Oracle EBS database container (DBC) file. ## Architecture -The solution contains the following components: --* **Azure AD** Microsoft's cloud-based identity and access management service, which helps users sign in and access external and internal resources. -* **Oracle EBS** the legacy application to be protected by Azure AD. -* **Datawiza Access Proxy (DAP)**: A super lightweight container-based reverse-proxy implements OIDC/OAuth or SAML for user sign-on flow and transparently passes identity to applications through HTTP headers. -* **Datawiza Cloud Management Console (DCMC)**: A centralized management console that manages DAP. DCMC provides UI and RESTful APIs for administrators to manage the configurations of DAP and its granular access control policies. +The solution has the following components: -### Prerequisites +* **Azure AD**: Microsoft's cloud-based identity and access management service, which helps users sign in and access external and internal resources. +* **Oracle EBS**: The legacy application that Azure AD will help protect. +* **Datawiza Access Proxy (DAP)**: A lightweight container-based reverse proxy that implements OIDC/OAuth or SAML for user sign-on flow. It transparently passes identity to applications through HTTP headers. +* **DCMC**: A centralized management console that manages DAP. The console provides UI and RESTful APIs for administrators to manage the configurations of DAP and its granular access control policies. -Ensure the following prerequisites are met. +## Prerequisites -* An Azure subscription. - * If you don't have on, you can get an [Azure free account](https://azure.microsoft.com/free/) -* An Azure AD tenant linked to the Azure subscription -* An account with Azure AD Application Admin permissions - * See, [Azure AD built-in roles](../roles/permissions-reference.md) -* Docker and Docker Compose are required to run DAP - * See, [Get Docker](https://docs.docker.com/get-docker/) and [Overview, Docker Compose](https://docs.docker.com/compose/install/) -* User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to your on-premises directory - * See, [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md) +To complete the steps in this article, you need: -* An Oracle EBS environment +* An Azure subscription. If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/). +* An Azure AD tenant linked to the Azure subscription. +* An account with Azure AD Application Administrator permissions. 
For more information, see [Azure AD built-in roles](../roles/permissions-reference.md). +* Docker and Docker Compose, to run DAP. For more information, see [Get Docker](https://docs.docker.com/get-docker/) and [Docker Compose Overview](https://docs.docker.com/compose/install/). +* User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to your on-premises directory. For more information, see [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md). +* An Oracle EBS environment. ## Configure the Oracle EBS environment for SSO and create the DBC file To enable SSO in the Oracle EBS environment: -1. Sign in to the Oracle EBS Management console as an Administrator. -2. Scroll down the Navigator panel and expand **User Management**. +1. Sign in to the Oracle EBS management console as an administrator. +2. Scroll down the navigation pane, expand **User Management**, and then select **Users**. - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/navigator-user-management.png#lightbox) + [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/navigator-user-management.png#lightbox) -3. Add a user account. +3. Add a user account. Select **Create User** > **User Account**. - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/user-account.png#lightbox) + [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/user-account.png#lightbox) 4. For **User Name**, enter **DWSSOUSER**. 5. For **Password**, enter a password. 6. For **Description**, enter **DW User account for SSO**. 7. For **Password Expiration**, select **None**.-8. Assign the **Apps Schema Connect** role to the user. +8. Assign **Apps Schema Connect Role** to the user. - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/assign-role.png#lightbox) + [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/assign-role.png#lightbox) ## Register DAP with Oracle EBS -In the Oracle EBS Linux environment, generate a new DBC file for DAP. You need the apps user credentials, and the default DBC file (under $FND_SECURE) used by the Apps Tier. +In the Oracle EBS Linux environment, generate a new DBC file for DAP. You need the app's user credentials and the default DBC file (under `$FND_SECURE`) that the application tier uses. -1. Configure the environment for Oracle EBS using a command similar to: `./u01/install/APPS/EBSapps.env run` -2. Use the AdminDesktop utility to generate the new DBC file. Specify the name of a new Desktop Node for this DBC file: +1. Configure the environment for Oracle EBS by using a command similar to `./u01/install/APPS/EBSapps.env run`. +2. Use the AdminDesktop utility to generate the new DBC file. Specify the name of a new desktop node for this DBC file: ->>`java oracle.apps.fnd.security.AdminDesktop apps/apps CREATE NODE_NAME=\<ebs domain name> DBC=/u01/install/APPS/fs1/inst/apps/EBSDB_apps/appl/fnd/12.0.0/secure/EBSDB.dbc` + `java oracle.apps.fnd.security.AdminDesktop apps/apps CREATE NODE_NAME=\<ebs domain name> DBC=/u01/install/APPS/fs1/inst/apps/EBSDB_apps/appl/fnd/12.0.0/secure/EBSDB.dbc` -3. This action generates a file called `ebsdb_\<ebs domain name>.dbc` in the location where you ran the previous command. -4. Copy the DBC file content to a notebook. You will use the content later. + This action generates a file called `ebsdb_\<ebs domain name>.dbc` in the location where you ran the command. +3. Copy the DBC file's content to a notebook. You'll use the content later. ## Enable Oracle EBS for SSO -1. 
To integrate JDE with Azure AD, sign in to [Datawiza Cloud Management Console (DCMC)](https://console.datawiza.com/). -2. The Welcome page appears. -3. Select the orange Getting started button. +1. To integrate JDE with Azure AD, sign in to the [Datawiza Cloud Management Console](https://console.datawiza.com/). -  + The welcome page appears. +1. Select the orange **Getting started** button. -4. Enter a **Name**. -5. Enter a **Description**. -6. Select **Next**. +  - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/deployment-name.png#lightbox) +1. For **Name**, enter a name for the deployment. -7. On **Add Application**, for **Platform** select **Oracle E-Business Suite**. -8. For **App Name**, enter the app name. -9. For **Public Domain** enter the external-facing URL of the application, for example `https://ebs-external.example.com`. You can use localhost DNS for testing. -10. For **Listen Port**, select the port that DAP listens on. You can use the port in Public Domain if you aren't deploying the DAP behind a load balancer. -11. For **Upstream Servers**, enter the URL and port combination of the Oracle EBS implementation being protected. -12. For **EBS Service Account**, enter the username from Service Account (DWSSOUSER). -13. For **EBS Account Password**, enter the password for the Service Account. -14. For **EBS User Mapping**, the product decides the attribute to be mapped to Oracle EBS username for authentication. -15. For **EBS DBC Content**, use the content you copied. -16. Select **Next**. + [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/deployment-name.png#lightbox) +1. For **Description**, enter a description of the deployment. +1. Select **Next**. - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/add-application.png#lightbox) +1. On **Add Application**, for **Platform**, select **Oracle E-Business Suite**. +1. For **App Name**, enter the app name. +1. For **Public Domain**, enter the external-facing URL of the application. For example, enter `https://ebs-external.example.com`. You can use localhost DNS for testing. +1. For **Listen Port**, select the port that DAP listens on. You can use the port in **Public Domain** if you aren't deploying the DAP behind a load balancer. +1. For **Upstream Servers**, enter the URL and port combination of the Oracle EBS implementation that you want to protect. +1. For **EBS Service Account**, enter the username from the service account (**DWSSOUSER**). +1. For **EBS Account Password**, enter the password for the service account. +1. For **EBS User Mapping**, the product decides the attribute to be mapped to the Oracle EBS username for authentication. +1. For **EBS DBC Content**, use the content that you copied. +1. Select **Next**. ++[](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/add-application.png#lightbox) ### IdP configuration -Use the DCMC one-click integration to help you complete Azure AD configuration. With this feature, you can reduce management costs and configuration errors are less likely. +Use the DCMC one-click integration to help you complete Azure AD configuration. With this feature, you can reduce management costs and the likelihood of configuration errors. - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/configure-idp.png#lightbox) +[](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/configure-idp.png#lightbox) ### Docker Compose file -Configuration on the management console is complete. You are prompted to deploy Datawiza Access Proxy (DAP) with your application. Make a note the deployment Docker Compose file. 
The file includes the image of the DAP, PROVISIONING_KEY, and PROVISIONING_SECRET. DAP uses this information to pull the latest configuration and policies from DCMC. +Configuration on the management console is complete. You're prompted to deploy DAP with your application. Make a note of the deployment Docker Compose file. The file includes the DAP image, `PROVISIONING_KEY`, and `PROVISIONING_SECRET`. DAP uses this information to pull the latest configuration and policies from DCMC. -  + ### SSL configuration -1. For certificate configuration, select the **Advanced** tab on your application page. +1. For certificate configuration, select the **Advanced** tab on your application page. Then select **SSL** > **Edit**. ++ [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/advanced-tab.png#lightbox) - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/advanced-tab.png#lightbox) +2. Turn on the **Enable SSL** toggle. +3. For **Cert Type**, select a certificate type. -2. Enable SSL. -3. Select a **Cert Type**. + [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/cert-type.png#lightbox) - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/cert-type.png#lightbox) + There's a self-signed certificate for localhost. To use that certificate for testing, select **Self Signed**. -4. There's a self-signed certificate for localhost, which you can use for testing. + [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/self-signed-cert-type.png#lightbox) - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/self-signed-cert-type.png#lightbox) + Optionally, you can upload a certificate from a file. For **Cert Type**, select **Upload**. Then, for **Select Option**, select **File Based**. -5. (Optional) You can upload a certificate from a file. + [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/file-based-cert-option.png#lightbox) - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/file-based-cert-option.png#lightbox) +4. Select **Save**. -6. Select **Save**. +### Optional: Enable Multi-Factor Authentication on Azure AD -### Optional: Enable MFA on Azure AD +To provide more security for sign-ins, you can enable Multi-Factor Authentication in the Azure portal: -To provide more security for sign-ins, you can enforce MFA for user sign-in by enabling MFA on the Azure portal. +1. Sign in to the Azure portal as a Global Administrator. +2. Select **Azure Active Directory** > **Manage** > **Properties**. +3. Under **Properties**, select **Manage security defaults**. -1. Sign in to the Azure portal as a Global Administrator. -2. Select **Azure Active Directory** > **Manage** > **Properties**. -3. Under **Properties**, select **Manage security defaults**. - - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/manage-security-defaults.png#lightbox) + [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/manage-security-defaults.png#lightbox) 4. Under **Enable security defaults**, select **Yes**.-5. Select **Save**. - [  ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/enable-security-defaults.png#lightbox) + [](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/enable-security-defaults.png#lightbox) ++5. Select **Save**. 
-## Next steps +## Next steps -- Video: [Enable SSO and MFA for Oracle JD Edwards with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90)+- [Video: Enable SSO and MFA for Oracle JD Edwards with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90) - [Tutorial: Configure Secure Hybrid Access with Azure AD and Datawiza](./datawiza-with-azure-ad.md) - [Tutorial: Configure Azure AD B2C with Datawiza to provide secure hybrid access](../../active-directory-b2c/partner-datawiza.md)-- Go to docs.datawiza.com for Datawiza [User Guides](https://docs.datawiza.com/)+- [Datawiza user guides](https://docs.datawiza.com/) |
active-directory | How To Assign Managed Identity Via Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md | Title: Use Azure Policy to assign managed identities (preview) description: Documentation for the Azure Policy that can be used to assign managed identities to Azure resources. --++ editor: barclayn Last updated 05/23/2022-+ |
active-directory | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/known-issues.md | For more information, see [Transfer an Azure subscription to a different Azure A In rare cases, you may see error messages indicating errors related to assignment of managed identities with Azure resources. Some of the example error messages are as follows: - Azure resource 'azure-resource-id' does not have access to identity 'managed-identity-id'. - No managed service identities are associated with resource 'azure-resource-id'-- Managed service identities referenced with URL 'https://control-....virtualMachineScaleSets/<vmss_name>/credentials/v2/systemassigned' are not valid. Ensure all assigned identities associated with the resource are valid. **Workaround** In these rare cases the best next steps are 2. For User Assigned Managed Identity, reassign the identity to the Azure resource. 3. For System Assigned Managed Identity, disable the identity and enable it again. +>[!NOTE] +>To assign or unassign managed identities, see the following links: ++- [Documentation for VM](qs-configure-portal-windows-vm.md) +- [Documentation for VMSS](qs-configure-portal-windows-vmss.md) + ## Next steps You can review our article listing the [services that support managed identities](services-support-managed-identities.md) and our [frequently asked questions](managed-identities-faq.md) |
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | For anyone that has used Azure AD to [provision identities into a SaaS applicati ## License requirements -Using this feature requires Azure AD Premium P1 licenses. Each user who is synchronized with cross-tenant synchronization must have a P1 license in their home/source tenant. To find the right license for your requirements, see [Compare generally available features of Azure AD](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). +In the source tenant: Using this feature requires Azure AD Premium P1 licenses. Each user who is synchronized with cross-tenant synchronization must have a P1 license in their home/source tenant. To find the right license for your requirements, see [Compare generally available features of Azure AD](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). ++In the target tenant: Cross-tenant sync relies on the Azure AD External Identities billing model. To understand the external identities licensing model, see [MAU billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md) ## Frequently asked questions |
app-service | Deploy Staging Slots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md | The app must be running in the **Standard**, **Premium**, or **Isolated** tier i 6. Select the app URL on the slot's resource page. The deployment slot has its own host name and is also a live app. To limit public access to the deployment slot, see [Azure App Service IP restrictions](app-service-ip-restrictions.md). -The new deployment slot has no content, even if you clone the settings from a different slot. For example, you can [publish to this slot with Git](./deploy-local-git.md). You can deploy to the slot from a different repository branch or a different repository. +The new deployment slot has no content, even if you clone the settings from a different slot. For example, you can [publish to this slot with Git](./deploy-local-git.md). You can deploy to the slot from a different repository branch or a different repository. Getting the publish profile [from Azure App Service](/visualstudio/azure/how-to-get-publish-profile-from-azure-app-service) provides the information required to deploy to the slot. Visual Studio can import the profile to deploy content to the slot. The slot's URL will be of the format `http://sitename-slotname.azurewebsites.net`. To keep the URL length within necessary DNS limits, the site name will be truncated at 40 characters, the slot name will be truncated at 19 characters, and an additional 4 random characters will be appended to ensure the resulting domain name is unique. By default, new slots are given a routing rule of `0%`, shown in grey. When you ## Delete a slot -Search for and select your app. Select **Deployment slots** > *\<slot to delete>* > **Overview**. The app type is shown as **App Service (Slot)** to remind you that you're viewing a deployment slot. Select **Delete** on the command bar. +Search for and select your app. Select **Deployment slots** > *\<slot to delete>* > **Overview**. The app type is shown as **App Service (Slot)** to remind you that you're viewing a deployment slot. Before deleting a slot, make sure to stop the slot and set the traffic in the slot to zero. Select **Delete** on the command bar.  |
app-service | Language Support Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md | + + Title: Language Support Policy +description: App Service language runtime support policies +++ Last updated : 01/23/2023+++++# App Service language runtime support policy ++This document describes the App Service language runtime support policy for updating existing stacks and retiring process for upcoming end-of-life stacks. This policy is to clarify existing practices and doesn't represent a change to customer commitments. ++## Updates to existing stacks +App Service will update existing stacks after they become available from each community. App Service will update major versions of stacks but can't guarantee any specific patch versions. Patch versions are controlled by the platform, and it is not possible for App Service to pin a specific patch version. For example, Python 3.10 will be updated by App Service, but a specific Python 3.10.x version won't be guaranteed. If you need a specific patch version, use a [custom container](quickstart-custom-container.md). ++## Retirements +App Service follows community support timelines for the lifecycle of the runtime. Once community support for a given language reaches end-of-life, your applications will continue to run unchanged. However, App Service cannot provide security patches or related customer support for that runtime version past its end-of-life date. If your application has any issues past the end-of-life date for that version, you should move up to a supported version to receive the latest security patches and features. ++> [!IMPORTANT] +> You're encouraged to upgrade the language version of your affected apps to a supported version. If you're running apps using an unsupported language version, you'll be required to upgrade before receiving support for your app. +> ++## Notifications +End-of-life dates for runtime versions are determined independently by their respective stacks and are outside the control of App Service. App Service will send reminder notifications to subscription owners for upcoming end-of-life runtime versions 12 months prior to the end-of-life date. ++Those who receive notifications include account administrators, service administrators, and co-administrators. Contributors, readers, or other roles won't directly receive notifications, unless they opt-in to receive notification emails, using [Service Health Alerts](/service-health/alerts-activity-log-service-notifications-portal.md). 
++## Language runtime version support timelines +To learn more about specific language support policy timelines, visit the following resources: ++- [ASP.NET](https://aka.ms/aspnetrelease) +- [.NET](https://aka.ms/dotnetrelease) +- [Node](https://aka.ms/noderelease) +- [Java](https://aka.ms/javarelease) +- [Python](https://aka.ms/pythonrelease) +- [PHP](https://aka.ms/phprelease) +- [Go](https://aka.ms/gorelease) ++++## Configure language versions +To learn more about how to update your App Service application language versions, see the following resources: ++- [.NET](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/dot_net_core.md#how-to-update-your-app-to-target-a-different-version-of-net-or-net-core) +- [Node](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md#node-on-linux-app-service) +- [Java](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/java_support.md#java-on-app-service) +- [Python](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/python_support.md#how-to-update-your-app-to-target-a-different-version-of-python) +- [PHP](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#how-to-update-your-app-to-target-a-different-version-of-php) + |
app-service | Manage Custom Dns Buy Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md | Last updated 01/31/2023 -# By an App Service domain and configure an app with it +# Buy an App Service domain and configure an app with it App Service domains are custom domains that are managed directly in Azure. They make it easy to manage custom domains for [Azure App Service](overview.md). This article shows you how to buy an App Service domain and configure an App Service app with it. |
azure-app-configuration | Powershell Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/powershell-samples.md | Title: PowerShell samples description: Learn about the Azure PowerShell sample scripts available for App Configuration. Previously updated : 12/14/2022 Last updated : 01/19/2023 The following table includes links to PowerShell scripts built using the [Az.App | Script | Description | |-|-| |**Create store**||-| [Create a configuration store with the specified parameters](/powershell/module/az.appconfiguration/New-AzAppConfigurationStore) | Creates an Azure App Configuration store with some specified parameters. | +| [Create a configuration store with the specified parameters](scripts/powershell-create-service.md) | Creates an Azure App Configuration store with some specified parameters. | |**Delete store**||-| [Delete a configuration store](/powershell/module/az.appconfiguration/Remove-AzAppConfigurationStore) | Deletes an Azure App Configuration store. | +| [Delete a configuration store](scripts/powershell-delete-service.md) | Deletes an Azure App Configuration store. | | [Purge a deleted configuration store](/powershell/module/az.appconfiguration/Clear-AzAppConfigurationDeletedStore) | Purges a deleted Azure App Configuration store, permanently removing all data. | |**Get and list stores**|| | [Get a deleted configuration store](/powershell/module/az.appconfiguration/Get-AzAppConfigurationDeletedStore) | Gets a deleted Azure App Configuration store. | |
azure-app-configuration | Cli Create Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-create-service.md | -# Create an Azure App Configuration Store +# Create an Azure App Configuration store with the Azure CLI -This sample script creates a new instance of Azure App Configuration in a new resource group. +This sample script creates a new instance of Azure App Configuration using the Azure CLI in a new resource group. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] +- This tutorial requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script appConfigConnectionString=$(az appconfig credential list \ echo "$appConfigConnectionString" ``` -Make a note of the actual name generated for the new resource group. You will use that resource group name when you want to delete all group resources. +Make a note of the actual name generated for the new resource group. You'll use that resource group name when you want to delete all group resources. [!INCLUDE [cli-script-cleanup](../../../includes/cli-script-clean-up.md)] This script uses the following commands to create a new resource group and an Ap For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure). -Additional App Configuration CLI script samples can be found in the [Azure App Configuration CLI samples](../cli-samples.md). +More App Configuration CLI script samples can be found in the [Azure App Configuration CLI samples](../cli-samples.md). |
azure-app-configuration | Cli Delete Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-delete-service.md | -# Delete an Azure App Configuration store +# Delete an Azure App Configuration store with the Azure CLI -This sample script deletes an instance of Azure App Configuration. +This sample script deletes an instance of Azure App Configuration using the Azure CLI. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] |
azure-app-configuration | Powershell Create Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-create-service.md | + + Title: PowerShell script sample - Create an Azure App Configuration store ++description: Create an Azure App Configuration store using a sample PowerShell script. See reference article links to commands used in the script. ++++ Last updated : 02/12/2023+++++# Create an Azure App Configuration store with PowerShell ++This sample script creates a new instance of Azure App Configuration in a new resource group using PowerShell. +++To execute the sample scripts, you need a functional setup of [Azure PowerShell](/powershell/azure/). ++Open a PowerShell window with admin rights and run `Install-Module -Name Az` to install Azure PowerShell ++## Sample script ++```powershell +# Create a resource group +New-AzResourceGroup -Name <resource-group-name> -Location <location> ++# Create an App Configuration store +New-AzAppConfigurationStore -Name <store-name> -ResourceGroupName <resource-group-name> -Location <location> -Sku <sku> ++# Get the App Configuration connection string +Get-AzAppConfigurationStoreKey -Name <store-name> -ResourceGroupName <resource-group-name> +``` ++## Clean up resources ++Clean up the resources you deployed by deleting the resource group. ++```powershell +Remove-AzResourceGroup -Name <resource-group-name> +``` ++## Script explanation ++This script uses the following commands to create a new resource group and an App Configuration store. Each command in the table links to command specific documentation. ++| Command | Notes | +||| +| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. | +| [New-AzAppConfigurationStore](/powershell/module/az.appconfiguration/new-azappconfigurationstore) | Creates an App Configuration store resource. | +| [Get-AzAppConfigurationStoreKey](/powershell/module/az.appconfiguration/get-azappconfigurationstorekey) | Lists access keys for an App Configuration store. | ++## Next steps ++For more information about Azure PowerShell, check out the [Azure PowerShell documentation](/powershell/azure/). ++More App Configuration script samples for PowerShell can be found in the [Azure App Configuration PowerShell samples](../powershell-samples.md). |
azure-app-configuration | Powershell Delete Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-delete-service.md | + + Title: PowerShell script sample - Delete an Azure App Configuration store ++description: Delete an Azure App Configuration store using a sample PowerShell script. See reference article links to commands used in the script. ++++ Last updated : 02/02/2023+++++# Delete an Azure App Configuration store with PowerShell ++This sample script deletes an instance of Azure App Configuration using PowerShell. +++To execute this sample script, you need a functional setup of [Azure PowerShell](/powershell/azure/). ++Open a PowerShell window with admin rights and run `Install-Module -Name Az` to install Azure PowerShell ++## Sample script ++```powershell +# Delete an App Configuration store +Remove-AzAppConfigurationStore -Name <store-name> -ResourceGroupName <resource-group-name> +``` ++## Script explanation ++This script uses the following command to delete an App Configuration store. Each command in the table links to command specific documentation. ++| Command | Notes | +||| +| [Remove-AzAppConfigurationStore](/powershell/module/az.appconfiguration/Remove-AzAppConfigurationStore) | Deletes an App Configuration store. | ++## Next steps ++For more information about Azure PowerShell, check out the [Azure PowerShell documentation](/powershell/azure/). ++More App Configuration script samples for PowerShell can be found in the [Azure App Configuration PowerShell samples](../powershell-samples.md). |
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | For complete release version information, see [Version log](version-log.md#janua New for this release: - Arc data - - Kafka separate mode - Description of this change and all customer and developer impacts are enumerated in the linked feature. + - Kafka separate mode - Arc-SQL MI - Time series functions are available. |
azure-arc | Conceptual Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md | Title: "GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes" description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 11/29/2022 Last updated : 02/02/2023 The `microsoft.flux` extension installs by default the [Flux controllers](https: * `helmreleases.helm.toolkit.fluxcd.io` * `fluxconfigs.clusterconfig.azure.com` -* [FluxConfig CRD](https://github.com/Azure/ClusterConfigurationAgent/blob/master/charts/azure-k8s-flux/templates/clusterconfig.azure.com_fluxconfigs.yaml): Custom Resource Definition for `fluxconfigs.clusterconfig.azure.com` custom resources that define `FluxConfig` Kubernetes objects. +* FluxConfig CRD: Custom Resource Definition for `fluxconfigs.clusterconfig.azure.com` custom resources that define `FluxConfig` Kubernetes objects. * fluxconfig-agent: Responsible for watching Azure for new or updated `fluxConfigurations` resources, and for starting the associated Flux configuration in the cluster. Also, is responsible for pushing Flux status changes in the cluster back to Azure for each `fluxConfigurations` resource. * fluxconfig-controller: Watches the `fluxconfigs.clusterconfig.azure.com` custom resources and responds to changes with new or updated configuration of GitOps machinery in the cluster. |
azure-arc | Network Requirements Consolidated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md | Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 01/30/2023 Last updated : 02/01/2023 This article lists the endpoints, ports, and protocols required for Azure Arc-en ## Azure Arc-enabled Kubernetes endpoints -Connectivity to the Arc Kubernetes-based endpoints is required for all Kubernetes based Arc offerings, including: +Connectivity to the Arc Kubernetes-based endpoints is required for all Kubernetes-based Arc offerings, including: - Azure Arc-enabled Kubernetes - Azure Arc-enabled App services For an example, see [Quickstart: Connect an existing Kubernetes cluster to Azure ## Azure Arc-enabled data services -This section describes additional requirements specific to Azure Arc-enabled data services, in addition to the Arc-enabled Kubernetes endpoints listed above. +This section describes requirements specific to Azure Arc-enabled data services, in addition to the Arc-enabled Kubernetes endpoints listed above. [!INCLUDE [network-requirements](dat)] For examples, see [Connected Machine agent network requirements](servers/network ## Azure Arc resource bridge (preview) -This section describes additional networking requirements specific to deploying Azure Arc resource bridge (preview) in your enterprise. These additional requirements also apply to Azure Arc-enabled VMware vSphere (preview) and Azure Arc-enabled System Center Virtual Machine Manager (preview). +This section describes additional networking requirements specific to deploying Azure Arc resource bridge (preview) in your enterprise. These requirements also apply to Azure Arc-enabled VMware vSphere (preview) and Azure Arc-enabled System Center Virtual Machine Manager (preview). [!INCLUDE [network-requirements](resource-bridge/includes/network-requirements.md)] ## Azure Arc-enabled System Center Virtual Machine Manager (preview) -Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) requires the connectivity described below: +Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) also requires: | **Service** | **Port** | **URL** | **Direction** | **Notes**| | | | | | | | SCVMM management Server | 443 | URL of the SCVMM management server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. | - For more information, see [Overview of Arc-enabled System Center Virtual Machine Manager (preview)](system-center-virtual-machine-manager/overview.md).+ ## Azure Arc-enabled VMware vSphere (preview) -Azure Arc-enabled VMware vSphere requires the connectivity described below: +Azure Arc-enabled VMware vSphere also requires: | **Service** | **Port** | **URL** | **Direction** | **Notes**| | | | | | | | vCenter Server | 443 | URL of the vCenter server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the vCenter server to communicate with the Appliance VM and the control plane.| -For more information, see [Support matrix for Azure Arc-enabled VMware vSphere (preview)](vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md). 
+For more information, see [Support matrix for Azure Arc-enabled VMware vSphere (preview)](vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md). ++## Additional endpoints ++Depending on your scenario, you may need connectivity to other URLs, such as those used by the Azure portal, management tools, or other Azure services. In particular, review these lists to ensure that you allow connectivity to any necessary endpoints: ++- [Azure portal URLs](../azure-portal/azure-portal-safelist-urls.md) +- [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints) |
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | The following Azure built-in roles are required for different aspects of managin ## Azure subscription and service limits -Azure Arc-enabled servers support up to 5,000 machine instances in a resource group. +There are no limits to the number of Azure Arc-enabled servers you can register in any single resource group, subscription or tenant. -Before configuring your machines with Azure Arc-enabled servers, review the Azure Resource Manager [subscription limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits) and [resource group limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits) to plan for the number of machines to be connected. +Each Azure Arc-enabled server is associated with an Azure Active Directory object and will count against your directory quota. See [Azure AD service limits and restrictions](../../active-directory/enterprise-users/directory-service-limits-restrictions.md) for information about the maximum number of objects you can have in an Azure AD directory. ## Azure resource providers |
azure-functions | Functions Deployment Slots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md | Some configuration settings are slot-specific. The following lists detail which - Always On - Diagnostic settings - Cross-origin resource sharing (CORS)+- Private endpoints **Non slot-specific settings**: |
azure-functions | Functions Networking Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md | You can host function apps in a couple of ways: Use the following resources to quickly get started with Azure Functions networking scenarios. These resources are referenced throughout the article. -* ARM templates: +* ARM, Bicep, and Terraform templates: + * [Private HTTP Triggered Function App](https://github.com/Azure-Samples/function-app-with-private-http-endpoint) + * [Private Event Hubs Triggered Function App](https://github.com/Azure-Samples/function-app-with-private-eventhub) +* ARM templates only: * [Function App with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints). * [Azure Function App with Virtual Network Integration](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-vnet-integration). * Tutorials: |
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | The following features have known limitations in Azure Government: - Limitations with multi-factor authentication: - Trusted IPs isn't supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multi-factor authentication should and shouldn't be required based off the user's current IP address. +### [Azure Active Directory B2C](../active-directory-b2c/index.yml) ++Azure Active Directory B2C is **not available** in Azure Government. + ### [Microsoft Authentication Library (MSAL)](../active-directory/develop/msal-overview.md) The Microsoft Authentication Library (MSAL) enables developers to acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs. For feature variations and limitations, see [National clouds and MSAL](../active-directory/develop/msal-national-cloud.md). |
azure-government | Documentation Government Overview Jps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md | Microsoft enables you to protect your data throughout its entire lifecycle: at r Technologies like [Intel Software Guard Extensions](https://software.intel.com/sgx) (Intel SGX), or [AMD Secure Encrypted Virtualization](https://www.amd.com/en/processors/amd-secure-encrypted-virtualization) (SEV-SNP) are recent CPU improvements supporting confidential computing implementations. These technologies are designed as virtualization extensions and provide feature sets including memory encryption and integrity, CPU-state confidentiality and integrity, and attestation. For more information, see [Azure confidential computing](../confidential-computing/index.yml) documentation. +## Multi-factor authentication (MFA) ++The CJIS Security Policy v5.9.2 revised multi-factor authentication (MFA) requirements for CJI protection. MFA requires the use of two or more different factors defined as follows: ++- Something you know, for example, username/password or personal identification number (PIN) +- Something you have, for example, a hard token such as a cryptographic key stored on or a one-time password (OTP) transmitted to a specialized hardware device +- Something you are, for example, biometric information ++According to the CJIS Security Policy, identification and authentication of organizational users requires MFA to privileged and non-privileged accounts as part of CJI access control requirements. MFA is required at Authenticator Assurance Level 2 (AAL2), as described in the National Institute of Standards and Technology (NIST) [SP 800-63](https://pages.nist.gov/800-63-3/sp800-63-3.html) *Digital Identity Guidelines*. Authenticators and verifiers operated at AAL2 shall be validated to meet the requirements of FIPS 140 Level 1. ++The [Microsoft Authenticator app](../active-directory/authentication/concept-authentication-authenticator-app.md) provides an extra level of security to your Azure Active Directory (Azure AD) account. It's available on mobile phones running Android and iOS. With the Microsoft Authenticator app, you can provide secondary verification for MFA scenarios to meet your CJIS Security Policy MFA requirements. As mentioned previously, CJIS Security Policy requires that solutions for hard tokens use cryptographic modules validated at FIPS 140 Level 1. The Microsoft Authenticator app meets FIPS 140 Level 1 validation requirements for all Azure AD authentications, as explained in [Authentication methods in Azure Active Directory - Microsoft Authenticator app](../active-directory/authentication/concept-authentication-authenticator-app.md#fips-140-compliant-for-azure-ad-authentication). FIPS 140 compliance for Microsoft Authenticator is currently in place for iOS and in progress for Android. ++Moreover, Azure can help you meet and **exceed** your CJIS Security Policy MFA requirements by supporting the highest Authenticator Assurance Level 3 (AAL3). According to [NIST SP 800-63B Section 4.3](https://pages.nist.gov/800-63-3/sp800-63b.html#sec4), multi-factor **authenticators** used at AAL3 shall rely on hardware cryptographic modules validated at FIPS 140 Level 2 overall with at least FIPS 140 Level 3 for physical security, which exceeds the CJIS Security Policy MFA requirements. **Verifiers** at AAL3 shall be validated at FIPS 140 Level 1 or higher. 
++Azure Active Directory (Azure AD) supports both authenticator and verifier NIST SP 800-63B AAL3 requirements: ++- **Authenticator requirements:** FIDO2 security keys, smartcards, and Windows Hello for Business can help you meet AAL3 requirements, including the underlying FIPS 140 validation requirements. Azure AD support for NIST SP 800-63B AAL3 **exceeds** the CJIS Security Policy MFA requirements. +- **Verifier requirements:** Azure AD uses the [Windows FIPS 140 Level 1](/windows/security/threat-protection/fips-140-validation) overall validated cryptographic module for all its authentication-related cryptographic operations. It is therefore a FIPS 140-compliant verifier. ++For more information, see [Azure NIST SP 800-63 documentation](/azure/compliance/offerings/offering-nist-800-63). + ## Restrictions on insider access Insider threat is characterized as the potential for back-door connections and cloud service provider (CSP) privileged administrator access to your systems and data. For more information on how Microsoft restricts insider access to your data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access). |
azure-maps | Drawing Package Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md | -This guide shows you how to prepare your Drawing Package for the [Azure Maps Conversion service](/rest/api/maps/v2/conversion) using specific CAD commands to correctly prepare your DWG files and manifest file for the Conversion service. +This guide shows you how to prepare your Drawing Package for the [Azure Maps Conversion service] using specific CAD commands to correctly prepare your DWG files and manifest file for the Conversion service. To start with, make sure your Drawing Package is in .zip format, and contains the following files: * One or more drawing files in DWG format. * A Manifest file describing DWG files and facility metadata. -If you don't have your own package to reference along with this guide, you may download the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +If you don't have your own package to reference along with this guide, you may download the [sample drawing package]. You may choose any CAD software to open and prepare your facility drawing files. However, this guide is created using Autodesk's AutoCAD® software. Any commands referenced in this guide are meant to be executed using Autodesk's AutoCAD® software. >[!TIP]->For more information about drawing package requirements that aren't covered in this guide, see [Drawing Package Requirements](drawing-requirements.md). +>For more information about drawing package requirements that aren't covered in this guide, see [Drawing Package Requirements]. ## Glossary of terms For easy reference, here are some terms and definitions that are important as you read this guide. -| Term | Definition | -|:-|:| -| Layer | An AutoCAD DWG layer from the drawing file.| -| Entity | An AutoCAD DWG entity from the drawing file. | -| Level |An area of a building at a set elevation. For example, the floor of a building.  | +| Term | Definition | +|:--|:| +| Layer | An AutoCAD DWG layer from the drawing file. | +| Entity | An AutoCAD DWG entity from the drawing file. | +| Level | An area of a building at a set elevation. For example, the floor of a building. | | Feature | An object that combines a geometry with more metadata information. | | Feature classes | A common blueprint for features. For example, a *unit* is a feature class, and an *office* is a feature. | You may choose any CAD software to open and prepare your facility drawing files. ### Bind External References -Each floor of a facility must be provided as one DWG file. If there are no external references, then nothing more needs to be done. However, if there are any external references, they must be bound to a single drawing. To bind an external reference, you may use the `XREF` command. After binding, each external reference drawing will be added as a block reference. If you need to make changes to any of these layers, remember to explode the block references by using the `XPLODE` command. +Each floor of a facility must be provided as one DWG file. If there are no external references, then nothing more needs to be done. However, if there are any external references, they must be bound to a single drawing. To bind an external reference, you may use the `XREF` command. Each external reference drawing will be added as a block reference after binding. If you need to make changes to any of these layers, remember to explode the block references by using the `XPLODE` command. 
### Unit of measurement The drawings can be created using any unit of measurement. However, all drawings The following image shows the Drawing Units window within Autodesk's AutoCAD® software that you can use to verify the unit of measurement. ### Alignment -Each floor of a facility is provided as an individual DWG file. As a result, it's possible that the floors are not perfectly aligned when stacked on top of each other. Azure Maps Conversion service requires that all drawings be aligned with the physical space. To verify alignment, use a reference point that can span across floors, such as an elevator or column that spans multiple floors. you can view all the floors by opening a new drawing, and then use the `XATTACH` command to load all floor drawings. If you need to fix any alignment issues, you can use the reference points and the `MOVE` command to realign the floors that require it. +Each floor of a facility is provided as an individual DWG file. As a result, it's possible that the floors aren't perfectly aligned when stacked on top of each other. Azure Maps Conversion service requires that all drawings be aligned with the physical space. To verify alignment, use a reference point that can span across floors, such as an elevator or column that spans multiple floors. You can view all the floors by opening a new drawing, and then use the `XATTACH` command to load all floor drawings. If you need to fix any alignment issues, you can use the reference points and the `MOVE` command to realign the floors that require it. ### Layers Ensure that each layer of a drawing contains entities of one feature class. If a Furthermore, each layer has a list of supported entity types and any other types are ignored. For example, if the Unit Label layer only supports single-line text, a multiline text or Polyline on the same layer is ignored. -For a better understanding of layers and feature classes, see [Drawing Package Requirements](drawing-requirements.md). +For a better understanding of layers and feature classes, see [Drawing Package Requirements]. ### Exterior layer A single level feature is created from each exterior layer or layers. This level The following image is taken from the sample package, and shows the exterior layer of the facility in red. The unit layer is turned off to help with visualization. ### Unit layer Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. A closed entity type such as Polygon, closed Polyline, Circle, or closed Ellipse is required to represent each unit. So, walls and doors alone won't create a unit because there isn’t an entity that represents the unit. -The following image is taken from the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples) and shows the unit label layer and unit layer in red. All other layers are turned off to help with visualization. Also, one unit is selected to help show that each unit is a closed Polyline. +The following image is taken from the [sample drawing package] and shows the unit label layer and unit layer in red. All other layers are turned off to help with visualization. Also, one unit is selected to help show that each unit is a closed Polyline. ### Unit label layer If you'd like to add a name property to a unit, you'll need to add a separate la Doors are optional. However, doors may be used if you'd like to specify the entry point(s) for a unit. Doors can be drawn in any way, as long as they use an entity type supported by the door layer. 
The door must overlap the boundary of a unit and the overlapping edge of the unit is then treated as an opening to the unit. -The following image is taken from the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples) and shows a unit with a door (in red) drawn on the unit boundary. +The following image is taken from the [sample drawing package] and shows a unit with a door (in red) drawn on the unit boundary. ### Wall layer -The wall layer is meant to represent the physical extents of a facility such as walls and columns. The Azure Maps Conversion service perceives walls as physical structures that are an obstruction to routing. With that in mind, a wall should be thought as a physical structure that one can see, but not walk though. Anything that can’t be seen won't captured in this layer. If a wall has inner walls or columns inside, then only the exterior should be captured. +The wall layer is meant to represent the physical extents of a facility such as walls and columns. The Azure Maps Conversion service perceives walls as physical structures that are an obstruction to routing. With that in mind, a wall should be thought of as a physical structure that one can see, but not walk through. Anything that can’t be seen won't be captured in this layer. If a wall has inner walls or columns inside, then only the exterior should be captured. ## Step 3: Prepare the manifest The Drawing package Manifest is a JSON file. The Manifest tells the Azure Maps Conversion service how to read the facility DWG files and metadata. Some examples of this information could be the specific information each DWG layer contains, or the geographical location of the facility. -To achieve a successful conversion, all “required” properties must be defined. A sample manifest file can be found inside the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). This guide does not cover properties supported by the manifest. For more information about manifest properties, see [Manifest File Properties](drawing-requirements.md#manifest-file-requirements). +To achieve a successful conversion, all “required” properties must be defined. A sample manifest file can be found inside the [sample drawing package]. This guide doesn't cover properties supported by the manifest. For more information about manifest properties, see [Manifest File Properties]. ### Building levels The building level specifies which DWG file to use for which level. A level must have a level name and ordinal that describes the vertical order of each level. Every facility must have an ordinal 0, which is the ground floor of a facility. An ordinal 0 must be provided even if the drawings occupy a few floors of a facility. For example, floors 15-17 can be defined as ordinal 0-2, respectively. -The following example is taken from the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). The facility has three levels: basement, ground, and level 2. The filename contains the full file name and path of the file relative to the manifest file within the .zip Drawing package. +The following example is taken from the [sample drawing package]. The facility has three levels: basement, ground, and level 2. The filename contains the full file name and path of the file relative to the manifest file within the .zip Drawing package. ```json     "buildingLevels": { The following example shows the `dwgLayers` object in the manifest. 
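As a hedged illustration only (not the actual content of the sample drawing package), a `dwgLayers` mapping might look like the following sketch. The property names mirror the feature classes described earlier in this guide and the DWG layer names on the right are hypothetical; confirm the exact schema in [Drawing Package Requirements].

```json
"dwgLayers": {
    "exterior": ["OUTLINE"],
    "unit": ["UNITS"],
    "wall": ["WALLS"],
    "door": ["DOORS"],
    "unitLabel": ["UNITLABELS"]
}
```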
The following image shows the layers from the corresponding DWG drawing viewed in Autodesk's AutoCAD® software. ### unitProperties The `unitProperties` object allows you to define other properties for a unit that can't be defined in the DWG file. Examples could be directory information of a unit or the category type of a unit. A unit property is associated with a unit by having the `unitName` object match the label in the `unitLabel` layer. -The following image is taken from the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). It displays the unit label that's associated to the unit property in the manifest. +The following image is taken from the [sample drawing package]. It displays the unit label that's associated with the unit property in the manifest. The following snippet shows the unit property object that is associated with the unit. The following snippet shows the unit property object that is associated with the ## Step 4: Prepare the Drawing Package -You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files will need to be zipped into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the zipped package. All other files can be in any directory of the zipped package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files will need to be zipped into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the zipped package. All other files can be in any directory of the zipped package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample drawing package]. ## Next steps > [!div class="nextstepaction"]-> [Tutorial: Creating a Creator indoor map](tutorial-creator-indoor-maps.md) +> [Tutorial: Creating a Creator indoor map] ++[Azure Maps Conversion service]: /rest/api/maps/v2/conversion +[sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples +[Manifest File Properties]: drawing-requirements.md#manifest-file-requirements +[Drawing Package Requirements]: drawing-requirements.md +[Tutorial: Creating a Creator indoor map]: tutorial-creator-indoor-maps.md |
azure-monitor | Azure Monitor Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md | N/A ## Update +> [!NOTE] +> The recommendation is to enable [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md). It may take **up to 5 weeks** after a new extension version is released for Automatic Extension Upgrade to update installed extensions to the released (latest) version across all regions. Upgrades are issued in batches, so you may see some of your virtual machines, scale sets, or Arc-enabled servers get upgraded before others. If you need to upgrade an extension immediately, use the manual instructions below. + #### [Portal](#tab/azure-portal) To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described. |
azure-monitor | Proactive Arm Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-arm-config.md | Learn more about automatically detecting: - [Failure anomalies](./proactive-failure-diagnostics.md) - [Memory Leaks](./proactive-potential-memory-leak.md)-- [Performance anomalies](./proactive-performance-diagnostics.md)+- [Performance anomalies](./smart-detection-performance.md) |
azure-monitor | Proactive Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md | Select a detection to view its details. Smart detection detects and notifies about various issues, such as: * [Smart detection - Failure Anomalies](./proactive-failure-diagnostics.md). We use machine learning to set the expected rate of failed requests for your app, correlating with load, and other factors. Notifies if the failure rate goes outside the expected envelope.-* [Smart detection - Performance Anomalies](./proactive-performance-diagnostics.md). Notifies if response time of an operation or dependency duration is slowing down, compared to historical baseline. It also notifies if we identify an anomalous pattern in response time, or page load time. +* [Smart detection - Performance Anomalies](./smart-detection-performance.md). Notifies if response time of an operation or dependency duration is slowing down, compared to historical baseline. It also notifies if we identify an anomalous pattern in response time, or page load time. * General degradations and issues, like [Trace degradation](./proactive-trace-severity.md), [Memory leak](./proactive-potential-memory-leak.md), [Abnormal rise in Exception volume](./proactive-exception-volume.md) and [Security anti-patterns](./proactive-application-security-detection-pack.md). (The help links in each notification take you to the relevant articles.) |
azure-monitor | Proactive Email Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-email-notification.md | Learn more about Smart Detection: - [Failure anomalies](./proactive-failure-diagnostics.md) - [Memory Leaks](./proactive-potential-memory-leak.md)-- [Performance anomalies](./proactive-performance-diagnostics.md)+- [Performance anomalies](./smart-detection-performance.md) |
azure-monitor | Proactive Failure Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md | Smart Detection of Failure Anomalies complements other similar but distinct feat * [metric alerts](./alerts-log.md) are set by you and can monitor a wide range of metrics such as CPU occupancy, request rates, page load times, and so on. You can use them to warn you, for example, if you need to add more resources. By contrast, Smart Detection of Failure Anomalies covers a small range of critical metrics (currently only failed request rate), designed to notify you in a near real-time manner once your web app's failed request rate increases compared to the web app's normal behavior. Unlike metric alerts, Smart Detection automatically sets and updates thresholds in response to changes in behavior. Smart Detection also starts the diagnostic work for you, saving you time in resolving issues. -* [Smart Detection of performance anomalies](proactive-performance-diagnostics.md) also uses machine intelligence to discover unusual patterns in your metrics, and no configuration by you is required. But unlike Smart Detection of Failure Anomalies, the purpose of Smart Detection of performance anomalies is to find segments of your usage manifold that might be badly served - for example, by specific pages on a specific type of browser. The analysis is performed daily, and if any result is found, it's likely to be much less urgent than an alert. By contrast, the analysis for Failure Anomalies is performed continuously on incoming application data, and you will be notified within minutes if server failure rates are greater than expected. +* [Smart Detection of performance anomalies](smart-detection-performance.md) also uses machine intelligence to discover unusual patterns in your metrics, and no configuration by you is required. But unlike Smart Detection of Failure Anomalies, the purpose of Smart Detection of performance anomalies is to find segments of your usage manifold that might be badly served - for example, by specific pages on a specific type of browser. The analysis is performed daily, and if any result is found, it's likely to be much less urgent than an alert. By contrast, the analysis for Failure Anomalies is performed continuously on incoming application data, and you will be notified within minutes if server failure rates are greater than expected. ## If you receive a Smart Detection alert *Why have I received this alert?* |
azure-monitor | Smart Detection Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/smart-detection-performance.md | + + Title: Smart detection - performance anomalies | Microsoft Docs +description: Smart detection analyzes your app telemetry and warns you of potential problems. This feature needs no setup. + Last updated : 05/04/2017+++# Smart detection - Performance Anomalies ++>[!NOTE] +>You can migrate your Application Insight resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections. +> +> For more information on the migration process, see [Smart Detection Alerts migration](./alerts-smart-detections-migration.md). ++[Application Insights](../app/app-insights-overview.md) automatically analyzes the performance of your web application, and can warn you about potential problems. ++This feature requires no special setup, other than configuring your app for Application Insights for your [supported language](../app/app-insights-overview.md#supported-languages). It's active when your app generates enough telemetry. ++## When would I get a smart detection notification? ++Application Insights has detected that the performance of your application has degraded in one of these ways: ++* **Response time degradation** - Your app has started responding to requests more slowly than it used to. The change might have been rapid, for example because there was a regression in your latest deployment. Or it might have been gradual, maybe caused by a memory leak. +* **Dependency duration degradation** - Your app makes calls to a REST API, database, or other dependency. The dependency is responding more slowly than it used to. +* **Slow performance pattern** - Your app appears to have a performance issue that is affecting only some requests. For example, pages are loading more slowly on one type of browser than others; or requests are being served more slowly from one particular server. Currently, our algorithms look at page load times, request response times, and dependency response times. ++To establish a baseline of normal performance, smart detection requires at least eight days of sufficient telemetry volume. After your application has been running for that period, significant anomalies will result in a notification. +++## Does my app definitely have a problem? ++No, a notification doesn't mean that your app definitely has a problem. It's simply a suggestion about something you might want to look at more closely. ++## How do I fix it? ++The notifications include diagnostic information. Here's an example: +++ ++1. **Triage**. The notification shows you how many users or how many operations are affected. This information can help you assign a priority to the problem. +2. **Scope**. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or locations? This information can be obtained from the notification. +3. **Diagnose**. Often, the diagnostic information in the notification will suggest the nature of the problem. For example, if response time slows down when request rate is high, it may indicate that your server or dependencies are beyond their capacity. 
++ Otherwise, open the Performance pane in Application Insights. You'll find there [Profiler](../profiler/profiler.md) data. If exceptions are thrown, you can also try the [snapshot debugger](../snapshot-debugger/snapshot-debugger.md). ++## Configure Email Notifications ++Smart detection notifications are enabled by default. They are sent to users that have [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) and [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) access to the subscription in which the Application Insights resource resides. To change the default notification, either click **Configure** in the email notification, or open **Smart detection settings** in Application Insights. + +  + + * You can disable the default notification, and replace it with a specified list of emails. ++Emails about smart detection performance anomalies are limited to one email per day per Application Insights resource. The email will be sent only if there is at least one new issue that was detected on that day. You won't get repeats of any message. ++## Frequently asked questions ++* *So, Microsoft staff look at my data?* + * No. The service is entirely automatic. Only you get the notifications. Your data is [private](../app/data-retention-privacy.md). +* *Do you analyze all the data collected by Application Insights?* + * Currently, we analyze request response time, dependency response time, and page load time. Analysis of other metrics is on our backlog looking forward. ++* What types of application does this detection work for? + * These degradations are detected in any application that generates the appropriate telemetry. If you installed Application Insights in your web app, then requests and dependencies are automatically tracked. But in backend services or other apps, if you inserted calls to [TrackRequest()](../app/api-custom-events-metrics.md#trackrequest) or [TrackDependency](../app/api-custom-events-metrics.md#trackdependency), then smart detection will work in the same way. ++* *Can I create my own anomaly detection rules or customize existing rules?* ++ * Not yet, but you can: + * [Set up alerts](./alerts-log.md) that tell you when a metric crosses a threshold. + * [Export telemetry](../app/export-telemetry.md) to a [database](../../stream-analytics/app-insights-export-sql-stream-analytics.md) or [to Power BI](../logs/log-powerbi.md), where you can analyze it yourself. +* *How often is the analysis done?* ++ * We run the analysis daily on the telemetry from the previous day (full day in UTC timezone). +* *Does this replace [metric alerts](./alerts-log.md)?* + * No. We don't commit to detecting every behavior that you might consider abnormal. +++* *If I don't do anything in response to a notification, will I get a reminder?* + * No, you get a message about each issue only once. If the issue persists, it will be updated in the smart detection feed pane. +* *I lost the email. Where can I find the notifications in the portal?* + * In the Application Insights overview of your app, click the **Smart detection** tile. There you'll find all notifications up to 90 days back. ++## How can I improve performance? +Slow and failed responses are one of the biggest frustrations for web site users, as you know from your own experience. So, it's important to address the issues. ++### Triage +First, does it matter? 
If a page is always slow to load, but only 1% of your site's users ever have to look at it, maybe you have more important things to think about. However, if only 1% of users open it, but it throws exceptions every time, that might be worth investigating. ++Use the impact statement, such as affected users or % of traffic, as a general guide. Be aware that it may not be telling the whole story. Gather other evidence to confirm. ++Consider the parameters of the issue. If it's geography-dependent, set up [availability tests](../app/monitor-web-app-availability.md) including that region: there might be network issues in that area. ++### Diagnose slow page loads +Where is the problem? Is the server slow to respond, is the page too long, or does the browser need too much work to display it? ++Open the Browsers metric pane. The segmented display of browser page load time shows where the time is going. ++* If **Send Request Time** is high, either the server is responding slowly, or the request is a post with large amount of data. Look at the [performance metrics](../app/performance-counters.md) to investigate response times. +* Set up [dependency tracking](../app/asp-net-dependencies.md) to see whether the slowness is because of external services or your database. +* If **Receiving Response** is predominant, your page and its dependent parts - JavaScript, CSS, images, and so on (but not asynchronously loaded data) are long. Set up an [availability test](../app/monitor-web-app-availability.md), and be sure to set the option to load dependent parts. When you get some results, open the detail of a result and expand it to see the load times of different files. +* High **Client Processing time** suggests scripts are running slowly. If the reason isn't obvious, consider adding some timing code and send the times in trackMetric calls. ++### Improve slow pages +There's a web full of advice on improving your server responses and page load times, so we won't try to repeat it all here. Here are a few tips that you probably already know about, just to get you thinking: ++* Slow loading because of large files: Load the scripts and other parts asynchronously. Use script bundling. Break the main page into widgets that load their data separately. Don't send plain old HTML for long tables: use a script to request the data as JSON or other compact format, then fill the table in place. There are great frameworks to help with such tasks. (They also include large scripts, of course.) +* Slow server dependencies: Consider the geographical locations of your components. For example, if you're using Azure, make sure the web server and the database are in the same region. Do queries retrieve more information than they need? Would caching or batching help? +* Capacity issues: Look at the server metrics of response times and request counts. If response times peak disproportionately with peaks in request counts, it's likely that your servers are stretched. +++## Server Response Time Degradation ++The response time degradation notification tells you: ++* The response time compared to normal response time for this operation. +* How many users are affected. +* Average response time and 90th percentile response time for this operation on the day of the detection and seven days before. +* Count of this operation requests on the day of the detection and seven days before. +* Correlation between degradation in this operation and degradations in related dependencies. +* Links to help you diagnose the problem. 
+ * Profiler traces can help you view where operation time is spent. The link is available if Profiler trace examples exist for this operation. + * Performance reports in Metric Explorer, where you can slice and dice time range/filters for this operation. + * Search for this call to view specific call properties. + * Failure reports - If count > 1, it means that there were failures in this operation that might have contributed to performance degradation. ++## Dependency Duration Degradation ++Modern applications often adopt a microservices design approach, which in many cases relies heavily on external services. For example, your application might rely on some data platform, or on a critical services provider such as cognitive services. ++Example of dependency degradation notification: ++ ++Notice that it tells you: ++* The duration compared to normal response time for this operation +* How many users are affected +* Average duration and 90th percentile duration for this dependency on the day of the detection and seven days before +* Number of dependency calls on the day of the detection and seven days before +* Links to help you diagnose the problem + * Performance reports in Metric Explorer for this dependency + * Search for this dependency's calls to view call properties + * Failure reports - If count > 1, it means that there were failed dependency calls during the detection period that might have contributed to duration degradation. + * Open Analytics with queries that calculate this dependency duration and count ++## Smart detection of slow performing patterns ++Application Insights finds performance issues that might only affect some portion of your users, or only affect users in some cases. For example, if a page loads slower on a specific browser type compared to others, or if a particular server handles requests more slowly than other servers. It can also discover problems that are associated with combinations of properties, such as slow page loads in one geographical area for clients using a particular operating system. ++Anomalies like these are hard to detect just by inspecting the data, but are more common than you might think. Often they only surface when your customers complain. By that time, it's too late: the affected users are already switching to your competitors! ++Currently, our algorithms look at page load times, request response times at the server, and dependency response times. ++You don't have to set any thresholds or configure rules. Machine learning and data mining algorithms are used to detect abnormal patterns. ++ ++* **When** shows the time the issue was detected. +* **What** describes the problem that was detected, and the characteristics of the set of events that we found, which displayed the problem behavior. +* The table compares the poorly performing set with the average behavior of all other events. ++Click the links to open Metric Explorer to view reports, filtered by the time and properties of the slow-performing set. ++Modify the time range and filters to explore the telemetry. ++## Next steps +These diagnostic tools help you inspect the telemetry from your app: ++* [Profiler](../profiler/profiler.md) +* [snapshot debugger](../snapshot-debugger/snapshot-debugger.md) +* [Analytics](../logs/log-analytics-tutorial.md) +* [Analytics smart diagnostics](../logs/log-query-overview.md) ++Smart detection is automatic. But maybe you'd like to set up some more alerts? 
++* [Manually configured metric alerts](./alerts-log.md) +* [Availability web tests](../app/monitor-web-app-availability.md) |
azure-monitor | Prometheus Metrics Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md | The output will be similar to the following: - If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, then please register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider following this [documentation](/azure-resource-manager/management/resource-providers-and-types#register-resource-provider.md#register-resource-provider). - The Azure Monitor workspace and Azure Managed Grafana workspace must already be created. - The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.+- Users with 'User Access Administrator' role in the subscription of the AKS cluster can be able to enable 'Monitoring Data Reader' role directly by deploying the template. ### Retrieve required values for Grafana resource From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**. - Copy the value of the `principalId` field for the `SystemAssigned` identity. --```json -"identity": { - "principalId": "00000000-0000-0000-0000-000000000000", - "tenantId": "00000000-0000-0000-0000-000000000000", - "type": "SystemAssigned" - }, -``` - If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace. ```json If you're using an existing Azure Managed Grafana instance that already has been } ``` -### Assign role to system identity -The Azure Managed Grafana resource requires the `Monitoring Data Reader` role to read data from the Azure Monitor Workspace. --1. From the **Access control (IAM)** page for the Azure Managed Grafana instance in the Azure portal, select **Add** and then **Add role assignment**. -2. Select `Monitoring Data Reader`. -3. Select **Managed identity** and then **Select members**. -4. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource. -5. Click **Select** and then **Review+assign**. - ### Download and edit template and parameter file 1. Download the template at [https://aka.ms/azureprometheus-enable-arm-template](https://aka.ms/azureprometheus-enable-arm-template) and save it as **existingClusterOnboarding.json**. The Azure Managed Grafana resource requires the `Monitoring Data Reader` role t } ```` -In this json, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON, and they're added here to the ARM template. If you have no existing Grafana integrations, then don't include these entries for `full_resource_id_1` and `full_resource_id_2`. +In this json, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON, and they're added here to the ARM template. If you have no existing Grafana integrations, then don't include these entries for `full_resource_id_1` and `full_resource_id_2`. ++The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor Workspace resource ID provided in the parameters file. 
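As a hedged illustration (assuming the standard ARM deployment parameters schema; the resource ID below is a placeholder, not a value from a real environment), the matching entry in the parameters file might look like the following sketch. Use the resource ID shown in the **JSON view** of your own Azure Monitor workspace.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "azureMonitorWorkspaceResourceId": {
      "value": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>"
    }
  }
}
```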
++## [Bicep](#tab/bicep) ++### Prerequisites ++- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`. +- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created. +- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace. +- Users with 'User Access Administrator' role in the subscription of the AKS cluster can be able to enable 'Monitoring Data Reader' role directly by deploying the template. ++### Minor Limitation while deploying through bicep +Currently in bicep, there is no way to explicitly "scope" the Monitoring Data Reader role assignment on a string parameter "resource id" for Azure Monitor Workspace (like in ARM template). Bicep expects a value of type "resource | tenant" and currently there is no rest api [spec](https://github.com/Azure/azure-rest-api-specs) for Azure Monitor Workspace. So, as a workaround, the default scoping for Monitoring Data Reader role is on the resource group and thus the role is applied on the same Azure monitor workspace (by inheritance) which is the expected behavior. Thus, after deploying this bicep template, the Grafana resource will get read permissions in all the Azure Monitor Workspaces under the subscription. +++### Retrieve required values for Grafana resource ++From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**. ++If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace. ++```json +"properties": { + "grafanaIntegrations": { + "azureMonitorWorkspaceIntegrations": [ + { + "azureMonitorWorkspaceResourceId": "full_resource_id_1" + }, + { + "azureMonitorWorkspaceResourceId": "full_resource_id_2" + } + ] + } +} +``` ++### Download and edit templates and parameter file ++1. Download the main bicep template from [here](https://aka.ms/azureprometheus-enable-bicep-template) and save it as **FullAzureMonitorMetricsProfile.bicep**. +2. Download the parameter file from [here](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main bicep template. +3. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files in the same directory as the main bicep template. +4. Edit the values in the parameter file. +5. The main bicep template creates all the required resources and uses 2 modules for creating the dcra and monitormetrics profile resources from the other two bicep files. ++ | Parameter | Value | + |:|:| + | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. | + | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. 
| + | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | + | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | + | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys that will be used in the resource's labels metric. | + | `metricAnnotationsAllowList` | Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. | + | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | + | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | + | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. | +++6. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This will be similar to the following: ++ ```json + { + "type": "Microsoft.Dashboard/grafana", + "apiVersion": "2022-08-01", + "name": "[split(parameters('grafanaResourceId'),'/')[8]]", + "sku": { + "name": "[parameters('grafanaSku')]" + }, + "location": "[parameters('grafanaLocation')]", + "properties": { + "grafanaIntegrations": { + "azureMonitorWorkspaceIntegrations": [ + { + "azureMonitorWorkspaceResourceId": "full_resource_id_1" + }, + { + "azureMonitorWorkspaceResourceId": "full_resource_id_2" + }, + { + "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]" + } + ] + } + } + ```` ++In this json, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON, and they're added here to the ARM template. If you have no existing Grafana integrations, then don't include these entries for `full_resource_id_1` and `full_resource_id_2`. -The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor Workspace resource ID provided in the parameters file. +The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor Workspace resource ID provided in the parameters file. ### Deploy template Deploy the template with the parameter file using any valid method for deploying -- ## Verify Deployment Run the following command to which verify that the daemon set was deployed properly: ama-metrics-ksm-5fcf8dffcd 1 1 1 11h ## Limitations - CPU and Memory requests and limits can't be changed for Container insights metrics addon. If changed, they'll be reconciled and replaced by original values in a few seconds.-- Metrics addon doesn't work on AKS clusters configured with HTTP proxy. +- Metrics addon doesn't work on AKS clusters configured with HTTP proxy. ## Uninstall metrics addon-Currently, Azure CLI is the only option to remove the metrics addon and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus. +Currently, Azure CLI is the only option to remove the metrics addon and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus. -If you don't already have it, install the aks-preview extension with the following command. +If you don't already have it, install the aks-preview extension with the following command. 
The `aks-preview` extension needs to be installed using the following command. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview). When you allow a default Azure Monitor workspace to be created when you install ## Next steps + - [See the default configuration for Prometheus metrics](./prometheus-metrics-scrape-default.md). - [Customize Prometheus metric scraping for the cluster](./prometheus-metrics-scrape-configuration.md). - [Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana](./prometheus-grafana.md) - [Configure self-hosted Grafana to use Azure Monitor managed service for Prometheus (preview)](./prometheus-self-managed-grafana-azure-active-directory.md)+ |
azure-monitor | Prometheus Metrics Multiple Workspaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-multiple-workspaces.md | Routing metrics to more Azure Monitor Workspaces can be done through the creatio ## Send same metrics to multiple Azure Monitor workspaces -You can create multiple Data Collection Rules that point to the same Data Collection Endpoint for metrics to be sent to additional Azure Monitor Workspaces from the same Kubernetes cluster. Currently, this is only available through onboarding through Resource Manager templates. You can follow the [regular onboarding process](prometheus-metrics-enable.md) and then edit the same Resource Manager templates to add additional DCRs for your additional Azure Monitor Workspaces. You'll need to edit the template to add an additional parameters for every additional Azure Monitor workspace, add another DCR for every additional Azure Monitor workspace, and add an additional Azure Monitor workspace integration for Grafana. +You can create multiple Data Collection Rules that point to the same Data Collection Endpoint for metrics to be sent to additional Azure Monitor Workspaces from the same Kubernetes cluster. In case you have a very high volume of metrics, a new Data Collection Endpoint can be created as well. Please refer to the service limits [document](../service-limits.md) regarding ingestion limits. Currently, this is only available through onboarding through Resource Manager templates. You can follow the [regular onboarding process](prometheus-metrics-enable.md) and then edit the same Resource Manager templates to add additional DCRs and DCEs(if applicable) for your additional Azure Monitor Workspaces. You'll need to edit the template to add an additional parameters for every additional Azure Monitor workspace, add another DCR for every additional Azure Monitor workspace, add another DCE (if applicable), add the Monitor Reader Role for the new Azure Monitor Workspace and add an additional Azure Monitor workspace integration for Grafana. - Add the following parameters: ```json You can create multiple Data Collection Rules that point to the same Data Collec } ``` -- Add an additional DCR with the same Data Collection Endpoint. You *must* replace `<dcrName>`:+- For high metric volume, add an additional Data Collection Endpoint. You *must* replace `<dceName>`: + ```json + { + "type": "Microsoft.Insights/dataCollectionEndpoints", + "apiVersion": "2021-09-01-preview", + "name": "[variables('dceName')]", + "location": "[parameters('azureMonitorWorkspaceLocation2')]", + "kind": "Linux", + "properties": {} + } + ``` +- Add an additional DCR with the same or a different Data Collection Endpoint. You *must* replace `<dcrName>`: ```json { "type": "Microsoft.Insights/dataCollectionRules", You can create multiple Data Collection Rules that point to the same Data Collec } ``` -+- Add an additional DCRA with the relevant Data Collection Rule. 
You *must* replace `<dcraName>`: + ```json + { + "type": "Microsoft.Resources/deployments", + "name": "<dcraName>", + "apiVersion": "2017-05-10", + "subscriptionId": "[variables('clusterSubscriptionId')]", + "resourceGroup": "[variables('clusterResourceGroup')]", + "dependsOn": [ + "[resourceId('Microsoft.Insights/dataCollectionEndpoints/', variables('dceName'))]", + "[resourceId('Microsoft.Insights/dataCollectionRules', variables('dcrName'))]" + ], + "properties": { + "mode": "Incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "type": "Microsoft.ContainerService/managedClusters/providers/dataCollectionRuleAssociations", + "name": "[concat(variables('clusterName'),'/microsoft.insights/', variables('dcraName'))]", + "apiVersion": "2021-09-01-preview", + "location": "[parameters('clusterLocation')]", + "properties": { + "description": "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster.", + "dataCollectionRuleId": "[resourceId('Microsoft.Insights/dataCollectionRules', variables('dcrName'))]" + } + } + ] + }, + "parameters": {} + } + } + ``` - Add an additional Grafana integration: ```json { You can create multiple Data Collection Rules that point to the same Data Collec } } ```- Similar to the regular Resource Manager onboarding process, the `Monitoring Data Reader` role will need to be assigned for every Azure Monitor workspace linked to Grafana. This will allow the Azure Managed Grafana resource to read data from the Azure Monitor workspace and is a requirement for viewing the metrics. + - Assign `Monitoring Data Reader` role to read data from the new Azure Monitor Workspace: + ```json + { + "type": "Microsoft.Authorization/roleAssignments", + "apiVersion": "2022-04-01", + "name": "[parameters('roleNameGuid')]", + "scope": "[parameters('azureMonitorWorkspaceResourceId2')]", + "properties": { + "roleDefinitionId": "[concat('/subscriptions/', variables('clusterSubscriptionId'), '/providers/Microsoft.Authorization/roleDefinitions/', 'b0d8363b-8ddd-447d-831f-62ca05bff136')]", + "principalId": "[reference(resourceId('Microsoft.Dashboard/grafana', split(parameters('grafanaResourceId'),'/')[8]), '2022-08-01', 'Full').identity.principalId]" + } + } ++ ``` ## Send different metrics to different Azure Monitor workspaces If you want to send some metrics to one Azure Monitor Workspace and other metrics to a different one, follow the above steps to add additional DCRs. The value of `microsoft_metrics_include_label` under the `labelIncludeFilter` in the DCR is the identifier for the workspace. To then configure which metrics are routed to which workspace, you can add an extra pre-defined label, `microsoft_metrics_account` to the metrics. The value should be the same as the corresponding `microsoft_metrics_include_label` in the DCR for that workspace. To add the label to the metrics, you can utilize `relabel_configs` in your scrape config. To send all metrics from one job to a certain workspace, add the following relabel config: |
azure-monitor | Network Performance Monitor Expressroute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-performance-monitor-expressroute.md | description: Use the ExpressRoute Monitor capability in Network Performance Moni Previously updated : 11/27/2018 Last updated : 11/14/2022 You can use the Azure ExpressRoute Monitor capability in [Network Performance Mo - Autodetection of ExpressRoute circuits associated with your subscription. - Tracking of bandwidth utilization, loss and latency at the circuit, peering, and Azure Virtual Network level for ExpressRoute.+ > [!NOTE] + > Use [Connection Monitor](../../network-watcher/connection-monitor-overview.md) instead of Connection Monitor (Classic) when using VWAN with ExpressRoute. - Discovery of network topology of your ExpressRoute circuits.  |
azure-netapp-files | Azure Netapp Files Create Volumes Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md | You can set permissions for a file or folder by using the **Security** tab of th  +### Modify SMB share permissions ++You can modify SMB share permissions using Microsoft Management Console (MMC). ++>[!IMPORTANT] +>Modifying SMB share permissions poses a risk. If the users or groups assigned to the share properties are removed from the Active Directory, or if the permissions for the share become unusable, then the entire share will become inaccessible. ++1. To open Computer Management MMC on any Windows server, in the Control Panel, select **Administrative Tools > Computer Management**. +1. Select **Action > Connect to another computer**. +1. In the **Select Computer** dialog box, enter the Azure NetApp Files FQDN or IP address, or select **Browse** to locate the storage system. +1. Select **OK** to connect the MMC to the remote server. +1. When the MMC connects to the remote server, in the navigation pane, select **Shared Folders > Shares**. +1. In the display pane that lists the shares, double-click a share to display its properties. In the **Properties** dialog box, modify the properties as needed. + ## Next steps * [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md) |
azure-netapp-files | Azure Netapp Files Resource Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md | You can increase the `maxfiles` limit to 531,278,150 if your volume quota is at >[!IMPORTANT] > Once a volume has exceeded a `maxfiles` limit, you cannot reduce volume size below the quota corresponding to that `maxfiles` limit even if you have reduced the actual used file count. For example, if you have crossed the 63,753,378 `maxfiles` limit, the volume quota cannot be reduced below its corresponding index of 2 TiB. +You cannot set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens to a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](#request-limit-increase) for the volume. + ## Request limit increase You can create an Azure support request to increase the adjustable limits from the [Resource Limits](#resource-limits) table. |
azure-netapp-files | Faq Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md | However, you can map multiple NetApp accounts that are under the same subscripti ## Does Azure NetApp Files support Azure Active Directory? -Both [Azure Active Directory (AD) Domain Services](../active-directory-domain-services/overview.md) and [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) are supported. You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can reside in Azure as virtual machines, or on premises via ExpressRoute or S2S VPN. Azure NetApp Files doesn't support AD join for [Azure Active Directory](https://azure.microsoft.com/resources/videos/azure-active-directory-overview/) at this time. +Both [Azure Active Directory Domain Services (Azure AD DS)](../active-directory-domain-services/overview.md) and [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) are supported. You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can reside in Azure as virtual machines, or on premises via ExpressRoute or S2S VPN. Azure NetApp Files doesn't support AD join for [Azure Active Directory (Azure AD)](../active-directory/fundamentals/index.yml) at this time. If you're using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account. To use an Azure NetApp Files SMB share as a DFS-N folder target, provide the Uni Azure NetApp Files supports modifying `SMB Shares` by using Microsoft Management Console (MMC). However, modifying share properties has significant risk. If the users or groups assigned to the share properties are removed from the Active Directory, or if the permissions for the share become unusable, then the entire share will become inaccessible. -You can change the NTFS permissions of the root volume by using [NTFS file and folder permissions](azure-netapp-files-create-volumes-smb.md#ntfs-file-and-folder-permissions) procedure. +See [Modify SMB share permissions](azure-netapp-files-create-volumes-smb.md#modify-smb-share-permissions) for more information on this procedure. ## Can I change the SMB share name after the SMB volume has been created? |
azure-resource-manager | Resources Without Resource Group Limit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md | Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 10/20/2022 Last updated : 02/02/2023 # Resources not limited to 800 instances per resource group Some resources have a limit on the number of instances per region. This limit is di ## Microsoft.HybridCompute -* machines - Supports up to 5,000 instances. +* machines * machines/extensions ## microsoft.insights |
backup | Backup Azure Microsoft Azure Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-microsoft-azure-backup.md | Title: Use Azure Backup Server to back up workloads description: In this article, learn how to prepare your environment to protect and back up workloads using Microsoft Azure Backup Server (MABS). Last updated 08/26/2022- -++ # Install and upgrade Azure Backup Server |
backup | Backup Azure Monitoring Built In Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-built-in-monitor.md | description: In this article, learn about the monitoring and notification capabi Last updated 09/14/2022 ms.assetid: 86ebeb03-f5fa-4794-8a5f-aa5cbbf68a81- -++ # Monitoring Azure Backup workloads |
backup | Backup Azure Monitoring Use Azuremonitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-use-azuremonitor.md | description: Monitor Azure Backup workloads and create custom alerts by using Az Last updated 06/04/2019 ms.assetid: 01169af5-7eb0-4cb0-bbdb-c58ac71bf48b++ # Monitor at scale by using Azure Monitor |
backup | Backup Azure Move Recovery Services Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-move-recovery-services-vault.md | |
backup | Backup Azure Policy Supported Skus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-policy-supported-skus.md | Title: Supported VM SKUs for Azure Policy description: 'An article describing the supported VM SKUs (by Publisher, Image Offer and Image SKU) which are supported for the built-in Azure Policies provided by Backup' Last updated 04/08/2022- -++ # Supported VM SKUs for Azure Policy |
backup | Backup Azure Reports Data Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-reports-data-model.md | Title: Data model for Azure Backup diagnostics events description: This data model is in reference to the Resource Specific Mode of sending diagnostic events to Log Analytics (LA). Last updated 10/19/2022- -++ # Data Model for Azure Backup Diagnostics Events |
backup | Backup Azure Reserved Pricing Optimize Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-reserved-pricing-optimize-cost.md | description: This article explains about how to optimize costs for Azure Backup Last updated 09/03/2022--++ # Optimize costs for Azure Backup Storage with reserved capacity |
backup | Backup Azure Reserved Pricing Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-reserved-pricing-overview.md | description: This article explains about how reservation discounts are applied t Last updated 09/09/2022--++ # Understand how reservation discounts are applied to Azure Backup storage |
backup | Backup Azure Restore Files From Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md | description: In this article, learn how to recover files and folders from an Azu Last updated 11/04/2022 - -++ # Recover files from Azure virtual machine backup |
backup | Backup Azure Restore Key Secret | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-key-secret.md | description: Learn how to restore Key Vault key and secret in Azure Backup using Last updated 08/28/2017 ++ # Restore Key Vault key and secret for encrypted VMs using Azure Backup |
backup | Backup Azure Restore System State | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-system-state.md | Title: Restore System State to a Windows Server description: Step-by-step explanation for restoring Windows Server System State from a backup in Azure. Last updated 12/09/2022- -++ # Restore System State to Windows Server |
backup | Backup Azure Restore Windows Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-windows-server.md | Title: Restore files to Windows Server using the MARS Agent description: In this article, learn how to restore data stored in Azure to a Windows server or Windows computer with the Microsoft Azure Recovery Services (MARS) Agent. Last updated 09/07/2018++ # Restore files to Windows Server using the MARS Agent |
backup | Backup Azure Sap Hana Database Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md | Title: Troubleshoot SAP HANA databases backup errors description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Last updated 11/02/2022- -++ # Troubleshoot backup of SAP HANA databases on Azure |
backup | Backup Azure Sap Hana Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md | Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Last updated 01/05/2023- -++ # Back up SAP HANA databases in Azure VMs |
backup | Backup Azure Scdpm Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-scdpm-troubleshooting.md | Title: Troubleshoot System Center Data Protection Manager description: In this article, discover solutions for issues that you might encounter while using System Center Data Protection Manager. Last updated 10/21/2022- -++ # Troubleshoot System Center Data Protection Manager |
backup | Backup Azure Security Feature Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-security-feature-cloud.md | description: Learn how to use security features in Azure Backup to make backups Last updated 12/30/2022 - -++ # Soft delete for Azure Backup |
backup | Backup Azure Security Feature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-security-feature.md | description: Learn how to use security features in Azure Backup to make backups Last updated 11/30/2022- -++ # Security features to help protect hybrid backups that use Azure Backup |
backup | Backup Azure Sql Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-automation.md | |
backup | Backup Azure Sql Backup Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-backup-cli.md | Title: Back up SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to back up SQL server databases in Azure VMs in the Recovery Services vault. Last updated 08/11/2022- -++ # Back up SQL databases in Azure VM using Azure CLI |
backup | Backup Azure Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-database.md | Title: Back up SQL Server databases to Azure description: This article explains how to back up SQL Server to Azure. The article also explains SQL Server recovery. Last updated 08/11/2022++ # About SQL Server Backup in Azure VMs |
backup | Backup Azure Sql Manage Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-manage-cli.md | Title: Manage SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to manage SQL server databases in Azure VMs in the Recovery Services vault. Last updated 08/11/2022- -++ # Manage SQL databases in an Azure VM using Azure CLI |
backup | Backup Azure Sql Restore Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-restore-cli.md | Title: Restore SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to restore SQL server databases in Azure VMs in the Recovery Services vault. Last updated 08/11/2022- -++ # Restore SQL databases in an Azure VM using Azure CLI |
backup | Backup Azure Sql Vm Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-vm-rest-api.md | Title: Back up SQL server databases in Azure VMs using Azure Backup via REST API description: Learn how to use REST API to back up SQL server databases in Azure VMs in the Recovery Services vault Last updated 08/11/2022- -++ # Back up SQL server databases in Azure VMs using Azure Backup via REST API |
backup | Backup Azure System State Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-system-state-troubleshoot.md | description: In this article, learn how to troubleshoot issues in System State B Last updated 07/22/2019++ # Troubleshoot System State Backup |
backup | Backup Azure Troubleshoot Slow Backup Performance Issue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-slow-backup-performance-issue.md | Title: Troubleshoot slow backup of files and folders description: Provides troubleshooting guidance to help you diagnose the cause of Azure Backup performance issues Last updated 12/28/2022- - ++ # Troubleshoot slow backup of files and folders in Azure Backup |
backup | Backup Azure Troubleshoot Vm Backup Fails Snapshot Timeout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md | |
backup | Backup Azure Vm File Recovery Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vm-file-recovery-troubleshoot.md | Title: Troubleshoot Azure VM file recovery description: Troubleshoot issues when recovering files and folders from an Azure VM backup. Last updated 07/12/2020++ # Troubleshoot issues in file recovery of an Azure VM backup |
backup | Backup Azure Vms Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-automation.md | description: Describes how to back up and recover Azure VMs using Azure Backup w Last updated 04/25/2022 - -++ # Back up and restore Azure VMs with PowerShell |
backup | Backup Azure Vms Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-encryption.md | description: Describes how to back up and restore encrypted Azure VMs with the A Last updated 12/14/2022 --++ # Back up and restore encrypted Azure virtual machines |
backup | Backup Azure Vms Enhanced Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md | description: Learn how to configure Enhanced policy to back up VMs. Last updated 07/04/2022 - -++ # Back up an Azure VM using Enhanced policy |
backup | Backup Azure Vms First Look Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-first-look-arm.md | Title: Back up an Azure VM from the VM settings description: In this article, learn how to back up either a singular Azure VM or multiple Azure VMs with the Azure Backup service. Last updated 06/13/2019++ # Back up an Azure VM from the VM settings |
backup | Backup Azure Vms Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-introduction.md | Title: About Azure VM backup description: In this article, learn how the Azure Backup service backs up Azure Virtual machines, and how to follow best practices. Last updated 09/13/2019++ # An overview of Azure VM backup |
backup | Backup Azure Vms Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md | description: In this article, learn how to troubleshoot errors encountered with Last updated 12/23/2022- -++ # Troubleshooting backup failures on Azure virtual machines |
backup | Backup Blobs Storage Account Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-cli.md | Title: Back up Azure Blobs using Azure CLI description: Learn how to back up Azure Blobs using Azure CLI. Last updated 08/06/2021++ # Back up Azure Blobs in a storage account using Azure CLI |
backup | Backup Blobs Storage Account Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-ps.md | Title: Back up Azure blobs within a storage account using Azure PowerShell description: Learn how to back up all Azure blobs within a storage account using Azure PowerShell. Last updated 08/06/2021++ # Back up all Azure blobs in a storage account using Azure PowerShell |
backup | Backup Center Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-actions.md | Title: Perform actions using Backup center description: This article explains how to perform actions using Backup center Last updated 12/08/2022- -++ # Perform actions using Backup center |
backup | Backup Center Govern Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-govern-environment.md | Title: Govern your backup estate using Backup Center description: Learn how to govern your Azure environment to ensure that all your resources are compliant from a backup perspective with Backup Center. Last updated 09/01/2020++ # Govern your backup estate using Backup Center |
backup | Backup Center Monitor Operate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-monitor-operate.md | Title: Monitor and operate backups and disaster recovery using Backup center description: This article explains how to monitor and operate backups and disaster recovery at-scale using Backup center. Last updated 12/08/2022- -++ # Monitor and operate backups and disaster recovery using Backup center |
backup | Backup Center Obtain Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-obtain-insights.md | Title: Obtain insights using Backup center description: Learn how to analyze historical trends and gain deeper insights on your backups with Backup center. Last updated 10/19/2021++ # Obtain Insights using Backup center |
backup | Backup Center Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-overview.md | Title: Overview of Backup center for Azure Backup and Azure Site Recovery description: This article provides an overview of Backup center for Azure. Last updated 12/08/2022- - ++ # About Backup center for Azure Backup and Azure Site Recovery |
backup | Backup Center Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-support-matrix.md | Title: Support matrix for Backup center description: This article summarizes the scenarios that Backup center supports for each workload type Last updated 12/08/2022- -++ # Support matrix for Backup center |
backup | Backup Client Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-client-automation.md | description: In this article, learn how to use PowerShell to set up Azure Backup Last updated 08/24/2021 ++ # Deploy and manage backup to Azure for Windows Server/Windows Client using PowerShell |
backup | Backup Create Recovery Services Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-create-recovery-services-vault.md | description: Learn how to create and configure Recovery Services vaults, and how Last updated 12/14/2022 ++ # Create and configure a Recovery Services vault |
backup | Backup Dpm Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-dpm-automation.md | description: Learn how to deploy and manage Azure Backup for Data Protection Man Last updated 01/23/2017 ++ # Deploy and manage backup to Azure for Data Protection Manager (DPM) servers using PowerShell |
backup | Backup During Vm Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-during-vm-creation.md | Title: Enable backup when you create an Azure VM description: Describes how to enable backup when you create an Azure VM with Azure Backup. Last updated 07/19/2022- -++ # Enable backup when you create an Azure VM |
backup | Backup Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-encryption.md | description: Learn how encryption features in Azure Backup help you protect your Last updated 10/28/2022 - -++ # Encryption in Azure Backup |
backup | Backup Instant Restore Capability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-instant-restore-capability.md | description: Azure Instant Restore Capability and FAQs for VM backup stack, Reso Last updated 04/23/2019++ # Get improved backup and restore performance with Azure Backup Instant Restore capability |
backup | Backup Mabs Add Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-add-storage.md | Title: Use Modern Backup Storage with Azure Backup Server description: Learn about the new features in Azure Backup Server. This article describes how to upgrade your Backup Server installation. Last updated 11/13/2018++ # Add storage to Azure Backup Server |
backup | Backup Mabs Files Applications Azure Stack | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-files-applications-azure-stack.md | Title: Back up files in Azure Stack VMs description: Use Azure Backup to back up and recover Azure Stack files and applications to your Azure Stack environment. Last updated 11/11/2021- -++ # Back up files and applications on Azure Stack |
backup | Backup Mabs Install Azure Stack | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-install-azure-stack.md | Title: Install Azure Backup Server on Azure Stack description: In this article, learn how to use Azure Backup Server to protect or back up workloads in Azure Stack. Last updated 01/31/2019++ # Install Azure Backup Server on Azure Stack |
backup | Backup Mabs Protection Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-protection-matrix.md | Title: MABS (Azure Backup Server) V3 UR1 protection matrix description: This article provides a support matrix listing all workloads, data types, and installations that Azure Backup Server protects. Last updated 08/08/2022 - -++ # MABS (Azure Backup Server) V3 UR1 (and later) protection matrix |
backup | Backup Mabs Release Notes V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-release-notes-v3.md | description: This article provides the information about the known issues and wo Last updated 07/27/2021 ms.asset: 0c4127f2-d936-48ef-b430-a9198e425d81++ # Release notes for Microsoft Azure Backup Server |
backup | Backup Mabs Sharepoint Azure Stack | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-sharepoint-azure-stack.md | Title: Back up a SharePoint farm on Azure Stack description: Use Azure Backup Server to back up and restore your SharePoint data on Azure Stack. This article provides the information to configure your SharePoint farm so that desired data can be stored in Azure. You can restore protected SharePoint data from disk or from Azure. Last updated 10/20/2022- - ++ # Back up a SharePoint farm on Azure Stack using Microsoft Azure Backup Server |
backup | Backup Mabs System State And Bmr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-system-state-and-bmr.md | Title: System state and bare-metal recovery protection description: Use Azure Backup Server to back up your system state and provide bare-metal recovery (BMR) protection. Last updated 05/15/2017++ # Back up system state and restore to bare metal by using Azure Backup Server |
backup | Backup Mabs Unattended Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-unattended-install.md | Title: Silent installation of Azure Backup Server V2 description: Use a PowerShell script to silently install Azure Backup Server V2. This kind of installation is also called an unattended installation. Last updated 11/13/2018++ # Run an unattended installation of Azure Backup Server |
backup | Backup Mabs Whats New Mabs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md | Title: What's new in Microsoft Azure Backup Server description: Microsoft Azure Backup Server gives you enhanced backup capabilities for protecting VMs, files and folders, workloads, and more. Last updated 07/27/2021++ # What's new in Microsoft Azure Backup Server (MABS) |
backup | Backup Managed Disks Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-cli.md | Title: Back up Azure Managed Disks using Azure CLI description: Learn how to back up Azure Managed Disks using Azure CLI. Last updated 09/17/2021++ # Back up Azure Managed Disks using Azure CLI |
backup | Backup Managed Disks Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-ps.md | description: Learn how to back up Azure Managed Disks using Azure PowerShell. Last updated 09/17/2021 ++ # Back up Azure Managed Disks using Azure PowerShell |
backup | Backup Managed Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks.md | Title: Back up Azure Managed Disks description: Learn how to back up Azure Managed Disks from the Azure portal. Last updated 11/03/2022- -++ # Back up Azure Managed Disks |
backup | Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-overview.md | description: Provides an overview of the Azure Backup service, and how it contri Last updated 03/11/2022 ++ # What is the Azure Backup service? |
backup | Backup Postgresql Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-postgresql-cli.md | description: Learn how to back up Azure Database for PostgreSQL using Azure CLI. Last updated 02/25/2022 - -++ # Back up Azure PostgreSQL databases using Azure CLI |
backup | Backup Postgresql Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-postgresql-ps.md | description: Learn how to back up Azure Database for PostgreSQL using Azure Powe Last updated 01/24/2022 - -++ # Back up Azure PostgreSQL databases using Azure PowerShell |
backup | Backup Rbac Rs Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-rbac-rs-vault.md | description: Use Azure role-based access control to manage access to backup mana Last updated 02/28/2022- -++ # Use Azure role-based access control to manage Azure Backup recovery points |
backup | Backup Release Notes Archived | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-release-notes-archived.md | Title: Azure Backup release notes - Archive description: Learn about past features releases in Azure Backup. Last updated 01/27/2022- -++ # Archived release notes in Azure Backup |
backup | Backup Reports System Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-reports-system-functions.md | Title: System functions on Azure Monitor Logs description: Write custom queries on Azure Monitor Logs using system functions Last updated 03/01/2021++ # System functions on Azure Monitor Logs |
backup | Backup Rm Template Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-rm-template-samples.md | description: Azure Resource Manager and Bicep templates for use with Recovery Se Last updated 09/05/2022 - -++ # Azure Resource Manager and Bicep templates for Azure Backup |
backup | Backup Sql Server Azure Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-azure-troubleshoot.md | Title: Troubleshoot SQL Server database backup description: Troubleshooting information for backing up SQL Server databases running on Azure VMs with Azure Backup. Last updated 12/28/2022- - ++ # Troubleshoot SQL Server database backup by using Azure Backup |
backup | Backup Sql Server Database Azure Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-database-azure-vms.md | Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Last updated 08/11/2022- -++ # Back up multiple SQL Server VMs from the Recovery Services vault |
backup | Backup Sql Server On Availability Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-on-availability-groups.md | Title: Back up SQL Server always on availability groups description: In this article, learn how to back up SQL Server on availability groups. Last updated 08/11/2022++ # Back up SQL Server always on availability groups |
backup | Backup Sql Server Vm From Vm Pane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-vm-from-vm-pane.md | Title: Back up a SQL Server VM from the VM pane description: In this article, learn how to back up SQL Server databases on Azure virtual machines from the VM pane. Last updated 08/11/2022++ # Back up a SQL Server from the VM pane |
backup | Backup Support Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-automation.md | Title: Automation in Azure Backup support matrix description: This article summarizes automation tasks related to Azure Backup support. Last updated 11/04/2022 - -++ # Support matrix for automation in Azure Backup |
backup | Backup Support Matrix Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md | |
backup | Backup Support Matrix Mabs Dpm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-mabs-dpm.md | Title: MABS & System Center DPM support matrix description: This article summarizes Azure Backup support when you use Microsoft Azure Backup Server (MABS) or System Center DPM to back up on-premises and Azure VM resources. Last updated 02/17/2019 ++ # Support matrix for backup with Microsoft Azure Backup Server or System Center DPM |
backup | Backup Support Matrix Mars Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-mars-agent.md | description: This article summarizes Azure Backup support when you back up machi Last updated 12/28/2022 ++ # Support matrix for backup with the Microsoft Azure Recovery Services (MARS) agent |
backup | Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md | description: Provides a summary of support settings and limitations for the Azur Last updated 10/21/2022 - -++ # Support matrix for Azure Backup |
backup | Backup The Mabs Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-the-mabs-server.md | Title: Back up the MABS server description: Learn how to back up the Microsoft Azure Backup Server (MABS). Last updated 09/24/2020++ # Back up the MABS server |
backup | Backup Vault Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md | description: An overview of Backup vaults. Last updated 10/19/2022 - -++ # Backup vaults overview |
backup | Backup Windows With Mars Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-windows-with-mars-agent.md | Title: Back up Windows machines by using the MARS agent description: Use the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines. Last updated 03/03/2020-++ # Back up Windows Server files and folders to Azure |
backup | Blob Backup Configure Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-manage.md | Title: Configure operational backup for Azure Blobs description: Learn how to configure and manage operational backup for Azure Blobs. Last updated 09/28/2021-++ # Configure operational backup for Azure Blobs |
backup | Blob Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-overview.md | Title: Overview of operational backup for Azure Blobs description: Learn about operational backup for Azure Blobs. Last updated 05/05/2021-++ # Overview of operational backup for Azure Blobs |
backup | Blob Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md | description: Provides a summary of support settings and limitations when backing Last updated 10/07/2021 ++ # Support matrix for Azure Blobs backup |
backup | Blob Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-restore.md | Title: Restore Azure Blobs description: Learn how to restore Azure Blobs. Last updated 03/11/2022-++ # Restore Azure Blobs |
backup | Compliance Offerings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/compliance-offerings.md | Title: Azure Backup compliance offerings description: Summary of compliance offerings for Azure Backup Last updated 11/29/2022- - ++ # Azure Backup compliance offerings |
backup | Configure Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/configure-reports.md | Title: Configure Azure Backup reports description: Configure and view reports for Azure Backup by using Log Analytics and Azure workbooks Last updated 02/14/2022- -++ # Configure Azure Backup reports |
backup | Create Manage Azure Services Using Azure Command Line Interface | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/create-manage-azure-services-using-azure-command-line-interface.md | Title: Create and manage Azure services with Azure CLI description: Use Azure CLI to create and manage Azure services for Azure Backup. Last updated 05/21/2021++ # Create and manage Azure Backup services using Azure CLI |
backup | Disk Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-overview.md | Title: Overview of Azure Disk Backup description: Learn about the Azure Disk backup solution. Last updated 03/10/2022- -++ # Overview of Azure Disk Backup |
backup | Disk Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-support-matrix.md | description: Provides a summary of support settings and limitations Azure Disk B Last updated 03/30/2022 - -++ # Azure Disk Backup support matrix |
backup | Disk Backup Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-troubleshoot.md | Title: Troubleshooting backup failures in Azure Disk Backup description: Learn how to troubleshoot backup failures in Azure Disk Backup Last updated 06/08/2021++ # Troubleshooting backup failures in Azure Disk Backup |
backup | Enable Multi User Authorization Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/enable-multi-user-authorization-quickstart.md | Title: Quickstart - Multi-user authorization using Resource Guard description: In this quickstart, learn how to use Multi-user authorization to protect against unauthorized operation. Last updated 05/05/2022- -++ # Quickstart: Enable protection using Multi-user authorization on Recovery Services vault in Azure Backup |
backup | Guidance Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/guidance-best-practices.md | description: Discover the best practices and guidance for backing up cloud and o Last updated 12/22/2022 - -++ # Backup cloud and on-premises workloads to cloud |
backup | Install Mars Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/install-mars-agent.md | Title: Install the Microsoft Azure Recovery Services (MARS) agent description: Learn how to install the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines. Last updated 11/15/2022- -++ # Install the Azure Backup MARS agent |
backup | Manage Afs Backup Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-afs-backup-cli.md | Title: Manage Azure file share backups with the Azure CLI description: Learn how to use the Azure CLI to manage and monitor Azure file shares backed up by Azure Backup. Last updated 02/09/2022++ # Manage Azure file share backups with the Azure CLI |
backup | Manage Afs Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-afs-backup.md | Title: Manage Azure file share backups description: This article describes common tasks for managing and monitoring the Azure file shares that are backed up by Azure Backup. Last updated 11/03/2021- -++ # Manage Azure file share backups |
backup | Manage Afs Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-afs-powershell.md | description: Learn how to use PowerShell to manage and monitor Azure file shares Last updated 1/27/2020 ++ # Manage Azure file share backups with PowerShell |
backup | Manage Azure Database Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-database-postgresql.md | Title: Manage Azure Database for PostgreSQL server description: Learn about managing Azure Database for PostgreSQL server. Last updated 01/24/2022- -++ # Manage Azure Database for PostgreSQL server |
backup | Manage Azure File Share Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-file-share-rest-api.md | Title: Manage Azure File share backup with REST API description: Learn how to use REST API to manage and monitor Azure file shares that are backed up by Azure Backup. Last updated 02/17/2020++ # Manage Azure File share backup with REST API |
backup | Manage Azure Sql Vm Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-sql-vm-rest-api.md | Title: Manage SQL server databases in Azure VMs with REST API description: Learn how to use REST API to manage and monitor SQL server databases in Azure VM that are backed up by Azure Backup. Last updated 08/11/2022- -++ # Manage SQL server databases in Azure VMs with REST API |
backup | Manage Monitor Sql Database Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-monitor-sql-database-backup.md | Title: Manage and monitor SQL Server DBs on an Azure VM description: This article describes how to manage and monitor SQL Server databases that are running on an Azure VM. Last updated 09/14/2022- -++ # Manage and monitor backed up SQL Server databases |
backup | Manage Recovery Points | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-recovery-points.md | Title: Manage recovery points description: Learn how the Azure Backup service manages recovery points for virtual machines Last updated 06/17/2021++ # Manage recovery points |
backup | Manage Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-telemetry.md | Title: Manage telemetry settings in Microsoft Azure Backup Server (MABS) description: This article provides information about how to manage the telemetry settings in MABS. Last updated 07/27/2021 ++ # Manage telemetry settings |
backup | Metrics Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/metrics-overview.md | Title: Monitor the health of your backups using Azure Backup Metrics (preview) description: In this article, learn about the metrics available for Azure Backup to monitor your backup health - Last updated 07/13/2022- ++ # Monitor the health of your backups using Azure Backup Metrics (preview) |
backup | Microsoft Azure Backup Server Protection V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/microsoft-azure-backup-server-protection-v3.md | Title: What Azure Backup Server V3 RTM can back up description: This article provides a protection matrix listing all workloads, data types, and installations that Azure Backup Server V3 RTM protects. Last updated 08/08/2022 - -++ # Azure Backup Server V3 RTM protection matrix |
backup | Modify Vm Policy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/modify-vm-policy-cli.md | Title: Update the existing VM backup policy using CLI description: Learn how to update the existing VM backup policy using Azure CLI. Last updated 12/31/2020++ # Update the existing VM backup policy using CLI |
backup | Monitor Azure Backup With Backup Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/monitor-azure-backup-with-backup-explorer.md | Title: Monitor your backups with Backup Explorer description: This article describes how to use Backup Explorer to perform real-time monitoring of backups across vaults, subscriptions, regions, and tenants. Last updated 02/03/2020++ # Monitor your backups with Backup Explorer |
backup | Monitoring And Alerts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/monitoring-and-alerts-overview.md | Title: Monitoring and reporting solutions for Azure Backup description: Learn about different monitoring and reporting solutions provided by Azure Backup. Last updated 10/21/2022- -++ # Monitoring and reporting solutions for Azure Backup |
backup | Move To Azure Monitor Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/move-to-azure-monitor-alerts.md | Title: Switch to Azure Monitor based alerts for Azure Backup description: This article describes the new and improved alerting capabilities via Azure Monitor and the process to configure Azure Monitor. Last updated 09/14/2022- --++ # Switch to Azure Monitor based alerts for Azure Backup |
backup | Multi User Authorization Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md | Title: Multi-user authorization using Resource Guard description: An overview of Multi-user authorization using Resource Guard. Last updated 09/15/2022- -++ # Multi-user authorization using Resource Guard |
backup | Multi User Authorization Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-tutorial.md | Title: Tutorial - Enable Multi-user authorization using Resource Guard description: In this tutorial, you'll learn how to create a resource guard and enable Multi-user authorization on a Recovery Services vault for Azure Backup. Last updated 05/05/2022- -++ # Tutorial: Create a Resource Guard and enable Multi-user authorization in Azure Backup |
backup | Multi User Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md | description: This article explains how to configure Multi-user authorization usi zone_pivot_groups: backup-vaults-recovery-services-vault-backup-vault Last updated 11/08/2022- -++ # Configure Multi-user authorization using Resource Guard in Azure Backup |
backup | Offline Backup Azure Data Box Dpm Mabs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/offline-backup-azure-data-box-dpm-mabs.md | Title: Offline Backup with Azure Data Box for DPM and MABS description: You can use Azure Data Box to seed initial Backup data offline from DPM and MABS. Last updated 08/04/2022- -++ # Offline seeding using Azure Data Box for DPM and MABS |
backup | Offline Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/offline-backup-overview.md | description: Learn about the components of offline backup. They include offline Last updated 1/28/2020 ++ # Overview of offline backup |
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | description: Lists Azure Policy built-in policy definitions for Azure Backup. Th Last updated 01/05/2023 ++ # Azure Policy built-in definitions for Azure Backup |
backup | Powershell Backup Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/powershell-backup-samples.md | description: This article provides links to PowerShell script samples that use A Last updated 06/23/2021 ++ # Azure Backup PowerShell samples |
backup | Pre Backup Post Backup Scripts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/pre-backup-post-backup-scripts.md | Title: Using Pre-Backup and Post-Backup Scripts description: This article contains the procedure to specify pre-backup and post-backup scripts. Azure Backup Server (MABS). Last updated 07/06/2021++ # Using pre-backup and post-backup scripts |
backup | Private Endpoints Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md | description: Understand the use of private endpoints for Azure Backup and the sc Last updated 11/09/2021 - -++ # Overview and concepts of private endpoints for Azure Backup |
backup | Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints.md | description: Understand the process to creating private endpoints for Azure Back Last updated 12/01/2022 - -++ # Create and use private endpoints for Azure Backup |
backup | Query Backups Using Azure Resource Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/query-backups-using-azure-resource-graph.md | Title: Query your backups using Azure Resource Graph (ARG) description: Learn more about querying information on backup for your Azure resources using Azure Resource Graph (ARG). Last updated 05/21/2021++ # Query your backups using Azure Resource Graph (ARG) |
backup | Quick Backup Postgresql Database Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-postgresql-database-portal.md | description: In this quickstart, learn how to back up Azure Database for Postgre Last updated 02/25/2022- -++ # Back up Azure Database for PostgreSQL server in Azure |
backup | Quick Backup Vm Bicep Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-bicep-template.md | |
backup | Quick Backup Vm Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-cli.md | ms.devlang: azurecli Last updated 05/05/2022 - -++ # Back up a virtual machine in Azure with the Azure CLI |
backup | Quick Backup Vm Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-portal.md | Last updated 01/11/2022 ms.devlang: azurecli - -++ # Back up a virtual machine in Azure |
backup | Quick Backup Vm Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-powershell.md | ms.devlang: azurecli Last updated 04/16/2019 ++ # Back up a virtual machine in Azure with PowerShell |
backup | Quick Backup Vm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-template.md | ms.devlang: azurecli Last updated 11/15/2021 - -++ # Back up a virtual machine in Azure with an ARM template |
backup | Restore Afs Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-afs-cli.md | Title: Restore Azure file shares with the Azure CLI description: Learn how to use the Azure CLI to restore backed-up Azure file shares in the Recovery Services vault Last updated 01/16/2020++ # Restore Azure file shares with the Azure CLI |
backup | Restore Afs Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-afs-powershell.md | description: In this article, learn how to restore Azure Files using the Azure B Last updated 1/27/2020 ++ # Restore Azure Files with PowerShell |
backup | Restore Afs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-afs.md | Title: Restore Azure file shares description: Learn how to use the Azure portal to restore an entire file share or specific files from a restore point created by Azure Backup. Last updated 12/28/2022- - ++ # Restore Azure file shares |
backup | Restore All Files Volume Mars | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-all-files-volume-mars.md | Title: Restore all files in a volume with MARS description: Learn how to restore all the files in a volume using the MARS Agent. Last updated 01/17/2021++ # Restore all the files in a volume using the MARS Agent |
backup | Restore Azure Backup Server Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-backup-server-vmware.md | Title: Restore VMware VMs with Azure Backup Server description: Use Azure Backup Server (MABS) to restore VMware VMs running on a VMware vCenter/ESXi server. Last updated 08/18/2019++ # Restore VMware virtual machines |
backup | Restore Azure Database Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql.md | description: Learn about how to restore Azure Database for PostgreSQL backups. Last updated 01/21/2022 - -++ # Restore Azure Database for PostgreSQL backups |
backup | Restore Azure Encrypted Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-encrypted-virtual-machines.md | Title: Restore encrypted Azure VMs description: Describes how to restore encrypted Azure VMs with the Azure Backup service. Last updated 12/07/2022++ # Restore encrypted Azure virtual machines |
backup | Restore Azure File Share Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-file-share-rest-api.md | Title: Restore Azure file shares with REST API description: Learn how to use REST API to restore Azure file shares or specific files from a restore point created by Azure Backup Last updated 02/17/2020++ # Restore Azure File Shares using REST API |
backup | Restore Azure Sql Vm Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-sql-vm-rest-api.md | Title: Restore SQL server databases in Azure VMs with REST API description: Learn how to use REST API to restore SQL server databases in Azure VM from a restore point created by Azure Backup Last updated 08/11/2022- -++ # Restore SQL Server databases in Azure VMs with REST API |
backup | Restore Blobs Storage Account Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-cli.md | Title: Restore Azure Blobs via Azure CLI description: Learn how to restore Azure Blobs to any point-in-time using Azure CLI. Last updated 06/18/2021++ # Restore Azure Blobs to point-in-time using Azure CLI |
backup | Restore Blobs Storage Account Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-ps.md | Title: Restore Azure blobs via Azure PowerShell description: Learn how to restore Azure blobs to any point-in-time using Azure PowerShell. Last updated 05/05/2021++ # Restore Azure blobs to point-in-time using Azure PowerShell |
backup | Restore Managed Disks Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-managed-disks-cli.md | Title: Restore Azure Managed Disks via Azure CLI description: Learn how to restore Azure Managed Disks using Azure CLI. Last updated 06/18/2021++ # Restore Azure Managed Disks using Azure CLI |
backup | Restore Managed Disks Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-managed-disks-ps.md | Title: Restore Azure Managed Disks via Azure PowerShell description: Learn how to restore Azure Managed Disks using Azure PowerShell. Last updated 03/26/2021++ # Restore Azure Managed Disks using Azure PowerShell |
backup | Restore Managed Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-managed-disks.md | Title: Restore Azure Managed Disks description: Learn how to restore Azure Managed Disks from the Azure portal. Last updated 01/07/2021++ # Restore Azure Managed Disks |
backup | Restore Postgresql Database Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-postgresql-database-cli.md | Title: Restore Azure PostgreSQL databases via Azure CLI description: Learn how to restore Azure PostgreSQL databases using Azure CLI. Last updated 01/24/2022- -++ # Restore Azure PostgreSQL databases using Azure CLI |
backup | Restore Postgresql Database Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-postgresql-database-ps.md | Title: Restore Azure PostgreSQL databases via Azure PowerShell description: Learn how to restore Azure PostgreSQL databases using Azure PowerShell. Last updated 01/24/2022- -++ # Restore Azure PostgreSQL databases using Azure PowerShell |
backup | Restore Postgresql Database Use Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-postgresql-database-use-rest-api.md | Title: Restore Azure PostgreSQL databases via Azure data protection REST API description: Learn how to restore Azure PostgreSQL databases using Azure Data Protection REST API. Last updated 01/24/2022- -++ # Restore Azure PostgreSQL databases using Azure data protection REST API |
backup | Restore Sql Database Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md | Title: Restore SQL Server databases on an Azure VM description: This article describes how to restore SQL Server databases that are running on an Azure VM and that are backed up with Azure Backup. You can also use Cross Region Restore to restore your databases to a secondary region. Last updated 11/08/2022- -++ # Restore SQL Server databases on Azure VMs |
backup | Sap Hana Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md | description: In this article, learn about the supported scenarios and limitation Last updated 11/14/2022 - -++ # Support matrix for backup of SAP HANA databases on Azure VMs |
backup | Sap Hana Database About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-about.md | Title: About SAP HANA database backup on Azure VMs description: In this article, you'll learn about backing up SAP HANA databases that are running on Azure virtual machines. Last updated 10/06/2022- -++ # About SAP HANA database backup on Azure VMs |
backup | Sap Hana Database Instance Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instance-troubleshoot.md | Title: Troubleshoot SAP HANA databases instance backup errors description: This article describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA database instances. Last updated 10/05/2022- -++ # Troubleshoot SAP HANA snapshot backup jobs on Azure Backup |
backup | Sap Hana Database Instances Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instances-backup.md | Title: Back up SAP HANA database instances on Azure VMs description: In this article, you'll learn how to back up SAP HANA database instances that are running on Azure virtual machines. Last updated 10/05/2022- -++ # Back up SAP HANA database instance snapshots on Azure VMs (preview) |
backup | Sap Hana Database Instances Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instances-restore.md | Title: Restore SAP HANA database instances on Azure VMs description: In this article, you'll learn how to restore SAP HANA database instances on Azure virtual machines. Last updated 10/05/2022- -++ # Restore SAP HANA database instance snapshots on Azure VMs (preview) |
backup | Sap Hana Database Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-manage.md | Title: Manage backed up SAP HANA databases on Azure VMs description: In this article, you'll learn common tasks for managing and monitoring SAP HANA databases that are running on Azure virtual machines. Last updated 12/23/2022- -++ # Manage and monitor backed up SAP HANA databases |
backup | Sap Hana Database Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-restore.md | Title: Restore SAP HANA databases on Azure VMs description: In this article, you'll learn how to restore SAP HANA databases that are running on Azure virtual machines. You can also use Cross Region Restore to restore your databases to a secondary region. Last updated 10/07/2022- -++ # Restore SAP HANA databases on Azure VMs |
backup | Sap Hana Database With Hana System Replication Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md | Title: Back up SAP HANA System Replication databases on Azure VMs (preview) description: In this article, discover how to back up SAP HANA databases with HANA System Replication enabled. Last updated 12/23/2022- -++ # Back up SAP HANA System Replication databases on Azure VMs (preview) |
backup | Backup Powershell Sample Backup Encrypted Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/backup-powershell-sample-backup-encrypted-vm.md | description: In this article, learn how to use an Azure PowerShell Script sample Last updated 03/05/2019 ++ # Back up an encrypted Azure virtual machine with PowerShell |
backup | Backup Powershell Script Find Recovery Services Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/backup-powershell-script-find-recovery-services-vault.md | description: Learn how to use an Azure PowerShell script to find the Recovery Se Last updated 1/28/2020 ++ # PowerShell Script to find the Recovery Services vault where a Storage Account is registered |
backup | Backup Powershell Script Undelete File Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/backup-powershell-script-undelete-file-share.md | description: Learn how to use an Azure PowerShell script to undelete an accident Last updated 02/02/2020 ++ # PowerShell script to undelete an accidentally deleted File share |
backup | Delete Recovery Services Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/delete-recovery-services-vault.md | Title: Script Sample - Delete a Recovery Services vault description: Learn about how to use a PowerShell script to delete a Recovery Services vault. Last updated 01/30/2022- -++ # PowerShell script to delete a Recovery Services vault |
backup | Disable Soft Delete For File Shares | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/disable-soft-delete-for-file-shares.md | Title: Script Sample - Disable Soft delete for File Share description: Learn how to use a script to disable soft delete for file shares in a storage account. Last updated 02/02/2020++ # Disable soft delete for file shares in a storage account |
backup | Geo Code List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/geo-code-list.md | Title: Geo-code mapping description: Learn about geo-codes mapped with the respective regions. Last updated 03/07/2022- -++ # Geo-code mapping |
backup | Install Latest Microsoft Azure Recovery Services Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/install-latest-microsoft-azure-recovery-services-agent.md | Title: Script Sample - Install the latest MARS agent on on-premises Windows serv description: Learn how to use a script to install the latest MARS agent on your on-premises Windows servers in a storage account. Last updated 06/23/2021++ # PowerShell Script to install the latest MARS agent on an on-premises Windows server |
backup | Microsoft Azure Recovery Services Powershell All | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/microsoft-azure-recovery-services-powershell-all.md | Title: Script Sample - Configuring Backup for on-premises Windows server description: Learn how to use a script to configure Backup for on-premises Windows server. Last updated 06/23/2021++ # PowerShell Script to configure Backup for on-premises Windows server |
backup | Register Microsoft Azure Recovery Services Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/register-microsoft-azure-recovery-services-agent.md | Title: Script Sample - Register an on-premises Windows server or client machine description: Learn about how to use a script to registering an on-premises Windows Server or client machine with a Recovery Services vault. Last updated 06/23/2021++ # PowerShell Script to register an on-premises Windows server or a client machine with Recovery Services vault |
backup | Set File Folder Backup Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/set-file-folder-backup-policy.md | Title: Script Sample - Create a new or modify the current file and folder backup description: Learn about how to use a script to create a new policy or modify the current file and folder Backup policy. Last updated 06/23/2021++ # PowerShell Script to create a new or modify the current file and folder backup policy |
backup | Set System State Backup Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/set-system-state-backup-policy.md | Title: Script Sample - Create a new or modify the current system state backup po description: Learn about how to use a script to create a new or modify the current system state backup policy. Last updated 06/23/2021++ # PowerShell Script to create a new or modify the current system state backup policy |
backup | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 01/05/2023 -- ++ # Azure Policy Regulatory Compliance controls for Azure Backup |
backup | Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-overview.md | Title: Overview of security features description: Learn about security capabilities in Azure Backup that help you protect your backup data and meet the security needs of your business. Last updated 03/12/2020++ # Overview of security features in Azure Backup |
backup | Selective Disk Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/selective-disk-backup-restore.md | description: In this article, learn about selective disk backup and restore usin Last updated 11/10/2021 - + |
backup | Soft Delete Azure File Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/soft-delete-azure-file-share.md | description: Learn how to soft delete can protect your Azure File Shares from ac Last updated 02/02/2020 ++ # Accidental delete protection for Azure file shares using Azure Backup |
backup | Soft Delete Sql Saphana In Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/soft-delete-sql-saphana-in-azure-vm.md | description: Learn how soft delete for SQL server in Azure VM and SAP HANA in Az Last updated 04/27/2020 ++ # Soft delete for SQL server in Azure VM and SAP HANA in Azure VM workloads |
backup | Soft Delete Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/soft-delete-virtual-machines.md | description: Learn how soft delete for virtual machines makes backups more secur Last updated 08/10/2022 - -++ # Soft delete for virtual machines |
backup | Sql Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sql-support-matrix.md | description: Provides a summary of support settings and limitations when backing Last updated 07/20/2022 - -++ # Support matrix for SQL Server Backup in Azure VMs |
backup | Transport Layer Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/transport-layer-security.md | Title: Transport Layer Security in Azure Backup description: Learn how to enable Azure Backup to use the encryption protocol Transport Layer Security (TLS) to keep data secure when being transferred over a network. Last updated 09/20/2022++ # Transport Layer Security in Azure Backup |
backup | Troubleshoot Archive Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/troubleshoot-archive-tier.md | Title: Archive tier troubleshoots description: Learn to troubleshoot Archive Tier errors for Azure Backup. Last updated 10/23/2021- -++ # Troubleshooting recovery point archive using Archive Tier |
backup | Troubleshoot Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/troubleshoot-azure-files.md | Title: Troubleshoot Azure file share backup description: This article is troubleshooting information about issues occurring when protecting your Azure file shares. Last updated 02/10/2020 ++ # Troubleshoot problems while backing up Azure file shares |
backup | Tutorial Backup Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-azure-vm.md | description: This tutorial details backing up multiple Azure VMs to a Recovery S Last updated 03/05/2019 ++ # Back up Azure VMs with PowerShell |
backup | Tutorial Backup Restore Files Windows Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-restore-files-windows-server.md | description: In this tutorial, learn how to use the Microsoft Azure Recovery Ser Last updated 02/14/2018 ++ # Recover files from Azure to a Windows Server |
backup | Tutorial Backup Sap Hana Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-sap-hana-db.md | Title: Tutorial - Back up SAP HANA databases in Azure VMs description: In this tutorial, learn how to back up SAP HANA databases running on Azure VM to an Azure Backup Recovery Services vault. Last updated 05/16/2022- -++ # Tutorial: Back up SAP HANA databases in an Azure VM |
backup | Tutorial Backup Vm At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-vm-at-scale.md | description: In this tutorial, learn how to create a Recovery Services vault, de Last updated 01/11/2022 - -++ # Use Azure portal to back up multiple virtual machines |
backup | Tutorial Backup Windows Server To Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-windows-server-to-azure.md | description: This tutorial details backing up on-premises Windows Servers to a R Last updated 12/15/2022 - -++ # Back up Windows Server to Azure |
backup | Tutorial Postgresql Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-postgresql-backup.md | Title: Tutorial - Back up Azure Database for PostgreSQL server description: Learn about how to back up Azure Database for PostgreSQL server to an Azure Backup Vault. Last updated 02/25/2022- -++ # Back up Azure Database for PostgreSQL server |
backup | Tutorial Restore Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-disk.md | description: Learn how to restore a disk and create a recover a VM in Azure with Last updated 10/28/2022 - -++ # Restore a VM with Azure CLI |
backup | Tutorial Restore Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-files.md | description: Learn how to perform file-level restores on an Azure VM with Backup Last updated 01/31/2019 ++ # Restore files to a virtual machine in Azure |
backup | Tutorial Sap Hana Backup Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-backup-cli.md | description: In this tutorial, learn how to back up SAP HANA databases running o Last updated 08/11/2022 - -++ # Tutorial: Back up SAP HANA databases in an Azure VM using Azure CLI |
backup | Tutorial Sap Hana Manage Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-manage-cli.md | description: In this tutorial, learn how to manage backed-up SAP HANA databases Last updated 08/11/2022 ++ # Tutorial: Manage SAP HANA databases in an Azure VM using Azure CLI |
backup | Tutorial Sap Hana Restore Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-restore-cli.md | description: In this tutorial, learn how to restore SAP HANA databases running o Last updated 08/11/2022 - -++ # Tutorial: Restore SAP HANA databases in an Azure VM using Azure CLI |
backup | Tutorial Sql Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sql-backup.md | Title: Tutorial - Back up SQL Server databases to Azure description: In this tutorial, learn how to back up a SQL Server database running on an Azure VM to an Azure Backup Recovery Services vault. Last updated 08/09/2022- -++ # Back up a SQL Server database in an Azure VM |
backup | Upgrade Mars Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/upgrade-mars-agent.md | Title: Upgrade the Microsoft Azure Recovery Services (MARS) agent description: Learn how to upgrade the Microsoft Azure Recovery Services (MARS) agent. Last updated 12/28/2022- - ++ # Upgrade the Microsoft Azure Recovery Services (MARS) agent |
backup | Use Archive Tier Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/use-archive-tier-support.md | |
backup | Use Restapi Update Vault Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/use-restapi-update-vault-properties.md | description: In this article, learn how to update vault's configuration using RE Last updated 12/06/2019 ms.assetid: 9aafa5a0-1e57-4644-bf79-97124db27aa2++ # Update Azure Recovery Services vault configurations using REST API |
backup | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md | Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Last updated 10/14/2022- -++ # What's new in Azure Backup |
cognitive-services | Batch Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/batch-inference.md | You could choose the batch inference API, or the streaming inference API for det To perform batch inference, provide the blob URL containing the inference data, the start time, and end time. The inference data must cover at least `1 sliding window` length and at most **20000** timestamps. +For better performance, we recommend sending no more than 150,000 data points per batch inference. *(Data points = Number of variables * Number of timestamps)* + This inference is asynchronous, so the results aren't returned immediately. Save the link to the results from the **response header**, which contains the `resultId`, so that you know where to retrieve the results later. Failures are usually caused by model issues or data issues. You can't perform inference if the model isn't ready or the data link is invalid. Make sure that the training data and inference data are consistent, meaning they should be **exactly** the same variables but with different timestamps. More variables, fewer variables, or inference with a different set of variables won't pass the data verification phase and errors will occur. Data verification is deferred so that you'll get error messages only when you query the results. |
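To make the asynchronous flow described above concrete, here's a minimal Python sketch of a multivariate batch-inference call. It's illustrative only: the v1.1 REST route, the request fields, and the response-header name are assumptions based on the general Anomaly Detector API shape and should be checked against the current API reference.

```python
import requests

# Placeholder values - replace with your own resource, key, and trained model ID.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
api_key = "<your-key>"
model_id = "<trained-model-id>"

headers = {"Ocp-Apim-Subscription-Key": api_key, "Content-Type": "application/json"}

# Submit the asynchronous batch inference request.
body = {
    "dataSource": "<blob-url-with-inference-data>",  # must contain exactly the training variables
    "startTime": "2023-01-01T00:00:00Z",
    "endTime": "2023-01-02T00:00:00Z",
}
response = requests.post(
    f"{endpoint}/anomalydetector/v1.1/multivariate/models/{model_id}:detect-batch",
    headers=headers,
    json=body,
)
response.raise_for_status()

# Save the results link from the response header; it contains the resultId and is
# where you later query the (deferred) results and any data-verification errors.
result_link = response.headers.get("Operation-Location") or response.headers.get("Location")
print(result_link)
```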
cognitive-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/service-limits.md | + + Title: Service limits - Anomaly Detector service ++description: Service limits for Anomaly Detector service, including Univariate Anomaly Detection and Multivariate Anomaly Detection. ++++++ Last updated : 1/31/2023+++++# Anomaly Detector service quotas and limits ++This article contains both a quick reference and detailed description of Azure Anomaly Detector service quotas and limits for all pricing tiers. It also contains some best practices to help avoid request throttling. ++The quotas and limits apply to all the versions within Azure Anomaly Detector service. ++## Univariate Anomaly Detection ++|Quota<sup>1</sup>|Free (F0)|Standard (S0)| +|--|--|--| +| **All APIs per second** | 10 | 500 | ++<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource. ++## Multivariate Anomaly Detection ++### API call per minute ++|Quota<sup>1</sup>|Free (F0)<sup>2</sup>|Standard (S0)| +|--|--|--| +| **Training API per minute** | 1 | 20 | +| **Get model API per minute** | 1 | 20 | +| **Batch(async) inference API per minute** | 10 | 60 | +| **Get inference results API per minute** | 10 | 60 | +| **Last(sync) inference API per minute** | 10 | 60 | +| **List model API per minute** | 1 | 20 | +| **Delete model API per minute** | 1 | 20 | ++<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource. ++<sup>2</sup> For the **Free (F0)** pricing tier, see also the monthly allowances on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/). ++### Concurrent models and inference tasks +|Quota<sup>1</sup>|Free (F0)|Standard (S0)| +|--|--|--| +| **Maximum models** *(created, running, ready, failed)*| 20 | 1000 | +| **Maximum running models** *(created, running)* | 1 | 20 | +| **Maximum running inference** *(created, running)* | 10 | 60 | ++<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource. If you want to increase the limit, please contact AnomalyDetector@microsoft.com for further communication. ++## How to increase the limit for your resource? ++For the Standard pricing tier, this limit can be increased. Increasing the **concurrent request limit** doesn't directly affect your costs. The Anomaly Detector service uses a "Pay only for what you use" model. The limit defines how high the service may scale before it starts to throttle your requests. ++The **concurrent request limit parameter** isn't visible via the Azure portal, command-line tools, or API requests. To verify the current value, create an Azure Support Request. ++If you would like to increase your limit, you can enable auto scaling on your resource. To enable auto scaling on your resource, see [enable auto scaling](../autoscale.md). You can also submit a support request to increase your Transactions Per Second (TPS) limit. ++### Have the required information ready ++* Anomaly Detector resource ID ++* Region ++#### Retrieve resource ID and region ++* Go to [Azure portal](https://portal.azure.com/) +* Select the Anomaly Detector Resource for which you would like to increase the transaction limit +* Select Properties (Resource Management group) +* Copy and save the values of the following fields: + * Resource ID + * Location (your endpoint Region) ++### Create and submit support request ++To request a limit increase for your resource, submit a **Support Request**: ++1.
Go to [Azure portal](https://portal.azure.com/) +2. Select the Anomaly Detector Resource for which you would like to increase the limit +3. Select New support request (Support + troubleshooting group) +4. A new window will appear with auto-populated information about your Azure Subscription and Azure Resource +5. Enter a Summary (like "Increase Anomaly Detector TPS limit") +6. In Problem type, select *"Quota or usage validation"* +7. Select Next: Solutions +8. Proceed further with the request creation +9. Under the Details tab, enter the following in the Description field: + * A note that the request is about the Anomaly Detector quota. + * The TPS expectation you would like to scale to meet. + * The Azure resource information you collected. + * Complete entering the required information and select the Create button in the *Review + create* tab + * Note the support request number in Azure portal notifications. You'll be contacted shortly for further processing. |
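When the per-second or per-minute quotas above are exceeded, requests are throttled; a common client-side mitigation is to retry with backoff. The sketch below is a generic illustration in Python: the HTTP 429 status code and the optional `Retry-After` header are assumptions about the throttling response, not details stated in the article above.

```python
import time
import requests

def post_with_backoff(url, headers, body, max_retries=5):
    """POST with simple exponential backoff when the service throttles requests."""
    delay = 1.0
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=body)
        if response.status_code != 429:  # not throttled; surface other errors normally
            response.raise_for_status()
            return response
        # Honor Retry-After if the service provides it, otherwise back off exponentially.
        wait = float(response.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    raise RuntimeError("Request was throttled on every retry attempt")
```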
cognitive-services | How To Configure Azure Ad Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-azure-ad-auth.md | Here's an example of using Azure Identity to get an Azure AD access token from a TokenRequestContext context = new Azure.Core.TokenRequestContext(new string[] { "https://cognitiveservices.azure.com/.default" }); InteractiveBrowserCredential browserCredential = new InteractiveBrowserCredential(); var browserToken = browserCredential.GetToken(context);+string aadToken = browserToken.Token; ``` The token context must be set to "https://cognitiveservices.azure.com/.default". ::: zone-end |
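For comparison with the C# snippet above, the same token acquisition can be sketched in Python with the `azure-identity` package. The scope string matches the one in the article; how the resulting token is then attached to a Speech SDK configuration varies by library version, so treat this as an illustrative fragment rather than the documented pattern.

```python
from azure.identity import InteractiveBrowserCredential

# The token context (scope) must be "https://cognitiveservices.azure.com/.default".
credential = InteractiveBrowserCredential()
access_token = credential.get_token("https://cognitiveservices.azure.com/.default")

aad_token = access_token.token  # the raw Azure AD access token string
print(aad_token[:16] + "...")
```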
cognitive-services | Language Identification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md | endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format speech_config = speechsdk.SpeechConfig(subscription=speech_key, endpoint=endpoint_string) audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename) +# Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart) +speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous') + auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig( languages=["en-US", "de-DE", "zh-CN"]) |
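A possible way to complete the snippet above into a runnable recognition call is sketched below. The recognizer wiring and the `AutoDetectSourceLanguageResult` helper are assumptions based on the general Speech SDK for Python surface, not code taken from the article, so verify them against the SDK reference.

```python
import azure.cognitiveservices.speech as speechsdk

speech_key, service_region, weatherfilename = "<key>", "<region>", "<audio-file>.wav"
endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)

speech_config = speechsdk.SpeechConfig(subscription=speech_key, endpoint=endpoint_string)
# Continuous language identification, as in the snippet above (the default is AtStart).
speech_config.set_property(
    property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value="Continuous")

audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE", "zh-CN"])

# Wire the pieces into a recognizer and run a single recognition for illustration.
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_source_language_config,
    audio_config=audio_config)

result = recognizer.recognize_once()
detected = speechsdk.AutoDetectSourceLanguageResult(result)
print("Detected language:", detected.language)
print("Recognized text:", result.text)
```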
cognitive-services | Rest Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md | curl --location --request GET 'https://YOUR_RESOURCE_REGION.tts.speech.microsoft ### Sample response -You should receive a response with a JSON body that includes all supported locales, voices, gender, styles, and other details. This JSON example shows partial results to illustrate the structure of a response: +You should receive a response with a JSON body that includes all supported locales, voices, gender, styles, and other details. The `WordsPerMinute` property for each voice can be used to estimate the length of the output speech. This JSON example shows partial results to illustrate the structure of a response: ```json [ |
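As a rough illustration of how `WordsPerMinute` can be used, the Python sketch below estimates output duration from the input word count. The sample voice entry is hypothetical; only the `WordsPerMinute` field name comes from the article, and the actual voices-list response contains additional fields.

```python
def estimate_speech_seconds(text: str, words_per_minute: float) -> float:
    """Rough duration estimate: word count divided by the voice's speaking rate."""
    word_count = len(text.split())
    return word_count / words_per_minute * 60

# Hypothetical voice entry; real entries in the voices-list response have more fields.
voice = {"ShortName": "en-US-JennyNeural", "WordsPerMinute": "150"}

text = "The quick brown fox jumps over the lazy dog near the old riverbank."
seconds = estimate_speech_seconds(text, float(voice["WordsPerMinute"]))
print(f"Estimated output length: {seconds:.1f} seconds")
```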
cognitive-services | Speech Studio Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md | +> [!TIP] +> You can try speech-to-text and text-to-speech in [Speech Studio](https://aka.ms/speechstudio/) without signing up or writing any code. + ## Speech Studio scenarios Explore, try out, and view sample code for some common use cases. |
cognitive-services | Speech Synthesis Markup Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-structure.md | Here's a subset of the basic structure and syntax of an SSML document: <lang xml:lang="string"></lang> <lexicon uri="string"/> <math xmlns="http://www.w3.org/1998/Math/MathML"></math>+ <mstts:audioduration value="string"/> <mstts:express-as style="string" styledegree="value" role="string"></mstts:express-as> <mstts:silence type="string" value="string"/> <mstts:viseme type="string"/> Some examples of contents that are allowed in each element are described in the - `lang`: This element can contain all other elements except `mstts:backgroundaudio`, `voice`, and `speak`. - `lexicon`: This element can't contain text or any other elements. - `math`: This element can only contain text and MathML elements.+- `mstts:audioduration`: This element can't contain text or any other elements. - `mstts:backgroundaudio`: This element can't contain text or any other elements. - `mstts:express-as`: This element can contain text and the following elements: `audio`, `break`, `emphasis`, `lang`, `phoneme`, `prosody`, `say-as`, and `sub`. - `mstts:silence`: This element can't contain text or any other elements. Usage of the `mstts:silence` element's attributes are described in the following | Attribute | Description | Required or optional | | - | - | - |-| `type` | Specifies where and how to add silence. The following silence types are supported:<br/><ul><li>`Leading` – Additional silence at the beginning of the text. The value that you set is added to the natural silence before the start of text.</li><li>`Leading-exact` – Silence at the beginning of the text. The value is an absolute silence length.</li><li>`Tailing` – Additional silence at the end of text. The value that you set is added to the natural silence after the last word.</li><li>`Tailing-exact` – Silence at the end of the text. The value is an absolute silence length.</li><li>`Sentenceboundary` – Additional silence between adjacent sentences. The actual silence length for this type includes the natural silence after the last word in the previous sentence, the value you set for this type, and the natural silence before the starting word in the next sentence.</li><li>`Sentenceboundary-exact` – Silence between adjacent sentences. The value is an absolute silence length.</li></ul><br/>An absolute silence type (with the `-exact` suffix) replaces any otherwise natural leading or trailing silence. Absolute silence types take precedence over the corresponding non-absolute type. For example, if you set both `Leading` and `Leading-exact` types, the `Leading-exact` type will take effect.| Required | +| `type` | Specifies where and how to add silence. The following silence types are supported:<br/><ul><li>`Leading` – Additional silence at the beginning of the text. The value that you set is added to the natural silence before the start of text.</li><li>`Leading-exact` – Silence at the beginning of the text. The value is an absolute silence length.</li><li>`Tailing` – Additional silence at the end of text. The value that you set is added to the natural silence after the last word.</li><li>`Tailing-exact` – Silence at the end of the text. The value is an absolute silence length.</li><li>`Sentenceboundary` – Additional silence between adjacent sentences. 
The actual silence length for this type includes the natural silence after the last word in the previous sentence, the value you set for this type, and the natural silence before the starting word in the next sentence.</li><li>`Sentenceboundary-exact` – Silence between adjacent sentences. The value is an absolute silence length.</li><li>`Comma-exact` – Silence at the comma in half-width or full-width format. The value is an absolute silence length.</li><li>`Semicolon-exact` – Silence at the semicolon in half-width or full-width format. The value is an absolute silence length.</li><li>`Enumerationcomma-exact` – Silence at the enumeration comma in full-width format. The value is an absolute silence length.</li></ul><br/>An absolute silence type (with the `-exact` suffix) replaces any otherwise natural leading or trailing silence. Absolute silence types take precedence over the corresponding non-absolute type. For example, if you set both `Leading` and `Leading-exact` types, the `Leading-exact` type will take effect.| Required | | `Value` | The duration of a pause in seconds (such as `2s`) or milliseconds (such as `500ms`). Valid values range from 0 to 5000 milliseconds. If you set a value greater than the supported maximum, the service will use `5000ms`.| Required | ### mstts silence examples A good place to start is by trying out the slew of educational apps that are hel </speak> ``` +In this example, `mstts:silence` is used to add 50 ms of silence at the comma, 100 ms of silence at the semicolon, and 150 ms of silence at the enumeration comma. ++```xml +<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="zh-CN"> +<voice name="zh-CN-YunxiNeural"> +<mstts:silence type="comma-exact" value="50ms"/><mstts:silence type="semicolon-exact" value="100ms"/><mstts:silence type="enumerationcomma-exact" value="150ms"/>你好呀,云希、晓晓;你好呀。 +</voice> +</speak> +``` + ## Specify paragraphs and sentences The `p` and `s` elements are used to denote paragraphs and sentences, respectively. In the absence of these elements, the Speech service automatically determines the structure of the SSML document. |
cognitive-services | Speech Synthesis Markup Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-voice.md | This SSML snippet illustrates how the `src` attribute is used to insert audio fr </speak> ``` +## Audio duration ++Use the `mstts:audioduration` element to set the duration of the output audio. Use this element to help synchronize the timing of audio output completion. The audio duration can be decreased or increased to between 0.5 and 2 times the duration of the original audio. The original audio here is the audio without any other rate settings. The speaking rate will be slowed down or sped up accordingly based on the set value. ++The audio duration setting is applied to all input text within its enclosing `voice` element. To reset or change the audio duration setting again, you must use a new `voice` element with either the same voice or a different voice. ++Usage of the `mstts:audioduration` element's attributes is described in the following table. ++| Attribute | Description | Required or optional | +| - | - | - | +| `value` | The requested duration of the output audio in either seconds (such as `2s`) or milliseconds (such as `2000ms`).<br/><br/>This value should be within 0.5 to 2 times the duration of the original audio without any other rate settings. For example, if the requested duration of your audio is `30s`, then the original audio must have otherwise been between 15 and 60 seconds. If you set a value outside of these boundaries, the duration is set according to the respective minimum or maximum multiple.<br/><br/>Given your requested output audio duration, the Speech service adjusts the speaking rate accordingly. Use the [voice list](rest-text-to-speech.md#get-a-list-of-voices) API and check the `WordsPerMinute` attribute to find out the speaking rate of the neural voice that you're using. You can divide the number of words in your input text by the value of the `WordsPerMinute` attribute to get the approximate original output audio duration. The output audio will sound most natural when you set the audio duration closest to the estimated duration.| Required | ++### mstts audio duration examples ++The supported values for attributes of the `mstts:audioduration` element were [described previously](#audio-duration). ++In this example, the original audio is around 15 seconds. The `mstts:audioduration` element is used to set the audio duration to 20 seconds (`20s`). ++```xml +<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US"> +<voice name="en-US-JennyNeural"> +<mstts:audioduration value="20s"/> +If we're home schooling, the best we can do is roll with what each day brings and try to have fun along the way. +A good place to start is by trying out the slew of educational apps that are helping children stay happy and smash their schooling at the same time. +</voice> +</speak> +``` + ## Background audio You can use the `mstts:backgroundaudio` element to add background audio to your SSML documents or mix an audio file with text-to-speech. With `mstts:backgroundaudio`, you can loop an audio file in the background, fade in at the beginning of text-to-speech, and fade out at the end of text-to-speech. |
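To show how the element might be exercised end to end, here's an illustrative Python sketch that sends SSML containing `mstts:audioduration` through the Speech SDK. The key, region, and use of `speak_ssml_async` are placeholders and assumptions about the general SDK surface rather than code from the article.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# SSML based on the example above: stretch roughly 15 seconds of speech to a 20-second target.
ssml = """<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <mstts:audioduration value="20s"/>
    If we're home schooling, the best we can do is roll with what each day brings
    and try to have fun along the way.
  </voice>
</speak>"""

result = synthesizer.speak_ssml_async(ssml).get()
print("Synthesis finished with reason:", result.reason)
```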
cognitive-services | Translator Disconnected Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-disconnected-containers.md | The following example shows the formatting for the `docker run` command with pla | `{MODEL_MOUNT_PATH}`| The path where the machine translation models will be downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`| | `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, in the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` | | `{API_KEY}` | The key for your Text Analytics resource. You can find it on your resource's **Key and endpoint** page, in the Azure portal. |`{string}`|-| `{LANGUAGES_LIST}` | List of language codes separated by commas. It is mandatory to have English (en) language as part of the list.|en,fr,it,ta,uk | +| `{LANGUAGES_LIST}` | List of language codes separated by commas. It's mandatory to have English (en) language as part of the list.| `en`, `fr`, `it`, `zu`, `uk` | | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` | **Example `docker run` command** ```docker -docker run --rm -it -p 5000:5000 +docker run --rm -it -p 5000:5000 \ -v {MODEL_MOUNT_PATH} \ |
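Once the disconnected container is running, translation requests are sent to the container's local endpoint instead of the cloud service. The following Python sketch is illustrative only: it assumes the container exposes the standard Translator v3 `translate` route on the port published with `-p 5000:5000` above, which you should confirm against the container documentation.

```python
import requests

# Assumed local endpoint for a container started with "-p 5000:5000" as shown above.
url = "http://localhost:5000/translate"
params = {"api-version": "3.0", "from": "en", "to": ["fr", "it"]}
body = [{"text": "English (en) must be part of the configured language list."}]

response = requests.post(url, params=params, json=body)
response.raise_for_status()
for translation in response.json()[0]["translations"]:
    print(translation["to"], "->", translation["text"])
```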
cognitive-services | Quickstart Translator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md | To get started, you'll need an active Azure subscription. If you don't have an A > * For more information on how to use the **Ocp-Apim-Subscription-Region** header, _see_ [Text Translator REST API headers](translator-text-apis.md). <!-- checked -->-> [!div class="nextstepaction"] +<!-- + > [!div class="nextstepaction"] > [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=prerequisites)+--> ## Headers The core operation of the Translator service is translating text. In this quicks :::image type="content" source="media/quickstarts/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button."::: <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=set-up-your-visual-studio-project) +<!-- [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=set-up-your-visual-studio-project) --> ### Build your C# application class Program ``` <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=build-your-c#-application) +<!-- > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=build-your-c#-application) --> ### Run your C# application After a successful call, you should see the following response: ``` <!-- checked -->-> [!div class="nextstepaction"] -> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=run-your-c#-application) +<!-- + > [!div class="nextstepaction"] +> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=run-your-c#-application) --> ### [Go](#tab/go) You can use any text editor to write Go applications. 
We recommend using the lat go version ``` <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=set-up-your-go-environment) +<!-- + > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=set-up-your-go-environment) --> ### Build your Go application func main() { } ``` <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=build-your-go-application) +<!-- + > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=build-your-go-application) --> ### Run your Go application After a successful call, you should see the following response: ``` <!-- checked -->-> [!div class="nextstepaction"] -> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=run-your-go-application) +<!-- + > [!div class="nextstepaction"] +> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=run-your-go-application) --> ### [Java: Gradle](#tab/java) After a successful call, you should see the following response: * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later. 
<!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=set-up-your-java-environment) +<!-- + > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=set-up-your-java-environment) --> ### Create a new Gradle project After a successful call, you should see the following response: } ``` <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-a-gradle-project) ++<!-- > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-a-gradle-project) --> ### Create your Java Application public class TranslatorText { } ``` <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-your-java-application) ++<!-- > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-your-java-application) --> ### Build and run your Java application After a successful call, you should see the following response: ``` <!-- checked -->-> [!div class="nextstepaction"] -> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=build-and-run-your-java-application) ++<!-- > [!div class="nextstepaction"] +> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=build-and-run-your-java-application) --> ### [JavaScript: Node.js](#tab/nodejs) After a successful call, you should see the following response: > > * You can also create a new file named `index.js` in your IDE and save it to the `translator-app` directory. <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-nodejs-express-project) ++<!-- > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-nodejs-express-project) --> ### Build your JavaScript application Add the following code sample to your `index.js` file. 
**Make sure you update th ``` <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-javascript-application) +<!-- + > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-javascript-application) --> ### Run your JavaScript application After a successful call, you should see the following response: ``` <!-- checked -->-> [!div class="nextstepaction"] -> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=run-your-javascript-application) +<!-- + > [!div class="nextstepaction"] +> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=run-your-javascript-application) --> ### [Python](#tab/python) After a successful call, you should see the following response: > [!NOTE] > We will also use a Python built-in package called json. It's used to work with JSON data. <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-python-project) +<!-- + > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-python-project) --> ### Build your Python application print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separat ``` <!-- checked -->-> [!div class="nextstepaction"] -> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-python-application) +<!-- + > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-python-application) --> ### Run your Python application After a successful call, you should see the following response: ``` <!-- checked -->-> [!div class="nextstepaction"] -> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=run-your-python-application) +<!-- + > [!div class="nextstepaction"] +> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=run-your-python-application) --> |
cognitive-services | Cognitive Services Container Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md | Azure Cognitive Services containers provide the following set of Docker containe | Service | Container | Description | Availability | |--|--|--|--|-| [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Generally Available. This container can also [run in disconnected environments](containers/disconnected-containers.md). | +| [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Generally Available. Gated - [request access](https://aka.ms/csgate). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-spatial-analysis)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Preview | <!-- |
cognitive-services | Triage Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/tutorials/triage-email.md | + + Title: Triage incoming emails with Power Automate ++description: Learn how to use custom text classification to categorize and triage incoming emails with Power Automate ++++++ Last updated : 01/27/2023++++# Tutorial: Triage incoming emails with Power Automate ++In this tutorial, you'll categorize and triage incoming email using custom text classification. Using this [Power Automate](/power-automate/getting-started) flow, when a new email is received, its contents will have a classification applied, and depending on the result, a message will be sent to a designated channel on [Microsoft Teams](https://www.microsoft.com/microsoft-teams). +++## Prerequisites ++* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) +* <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">A Language resource </a> + * A trained [custom text classification](../overview.md) model. + * You will need the key and endpoint from your Language resource to authenticate your Power Automate flow. +* A successfully created and deployed [single text classification custom model](../quickstart.md) +++## Create a Power Automate flow ++1. [Sign in to Power Automate](https://make.powerautomate.com/) ++2. From the left side menu, select **My flows** and create an **Automated cloud flow** ++ :::image type="content" source="../media/create-flow.png" alt-text="A screenshot of the flow creation screen." lightbox="../media/create-flow.png"::: ++3. Name your flow `EmailTriage`. Below **Choose your flow's triggers**, search for *email* and select **When a new email arrives**. Then select **Create**. ++ :::image type="content" source="../media/email-flow.png" alt-text="A screenshot of the email flow triggers." lightbox="../media/email-flow.png"::: ++4. Add the right connection to your email account. This connection will be used to access the email content. ++5. To add a Language service connector, search for *Azure Language*. + + :::image type="content" source="../media/language-connector.png" alt-text="A screenshot of available Azure Language service connectors." lightbox="../media/language-connector.png"::: ++6. Search for *CustomSingleLabelClassification*. ++ :::image type="content" source="../media/single-classification.png" alt-text="A screenshot of Classification connector." lightbox="../media/single-classification.png"::: ++7. Start by adding the right connection to your connector. This connection will be used to access the classification project. ++8. In the documents ID field, add **1**. ++9. In the documents text field, add **body** from **dynamic content**. ++10. Fill in the project name and deployment name of your deployed custom text classification model. ++ :::image type="content" source="../media/classification.png" alt-text="A screenshot of project details." lightbox="../media/classification.png"::: ++11. Add a condition to send a Microsoft Teams message to the right team by: + 1. Select **results** from **dynamic content**, and add the condition. For this tutorial, we are looking for `Computer_science` related emails. In the **Yes** condition, choose your desired option to notify a team channel. In the **No** condition, you can add additional conditions to perform alternative actions.
++ :::image type="content" source="../media/email-triage.png" alt-text="A screenshot of email flow." lightbox="../media/email-triage.png"::: +++## Next steps ++* [Use the Language service with Power Automate](../../tutorials/power-automate.md) +* [Available Language service connectors](/connectors/cognitiveservicestextanalytics) |
cognitive-services | Language Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-studio.md | When you're ready to use Language Studio features on your own text data, you wil 4. Select **Done**. Your resource will be created, and you will be able to use the different features offered by the Language service with your own text. ++### Valid text formats for conversation features ++> [!NOTE] +> This section applies to the following features: +> * [PII detection for conversation](./personally-identifiable-information/overview.md) +> * [Conversation summarization](./summarization/overview.md?tabs=conversation-summarization) ++If you're sending conversational text to supported features in Language Studio, be aware of the following input requirements: +* The text you send must be a conversational dialog between two or more participants. +* Each line must start with the name of the participant, followed by a `:`, and then what they say. +* Each participant must be on a new line. If multiple participants' utterances are on the same line, they'll be processed as one line of the conversation. ++See the following example for how you should structure the conversational text you want to send. ++*Agent: Hello, you're chatting with Rene. How may I help you?* ++*Customer: Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.* ++*Agent: I'm sorry to hear that. Let's see what we can do to fix this issue.* ++Note that the names of the two participants in the conversation (*Agent* and *Customer*) begin each line, and that there is only one participant per line of dialog. ++ ## Clean up resources If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. |
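Because these formatting rules are easy to get wrong when assembling transcripts programmatically, here's a small illustrative Python helper that renders (participant, utterance) pairs into the one-speaker-per-line layout described above. The function name and input shape are hypothetical, not part of Language Studio.

```python
def to_conversation_text(turns):
    """Render (participant, utterance) pairs as 'Name: utterance', one turn per line."""
    lines = []
    for participant, utterance in turns:
        # Keep each utterance on a single line so it isn't merged with another speaker's turn.
        lines.append(f"{participant}: {' '.join(utterance.split())}")
    return "\n".join(lines)

conversation = [
    ("Agent", "Hello, you're chatting with Rene. How may I help you?"),
    ("Customer", "Hi, I tried to set up the wifi connection for my espresso machine, but it didn't work."),
    ("Agent", "I'm sorry to hear that. Let's see what we can do to fix this issue."),
]
print(to_conversation_text(conversation))
```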
cognitive-services | How To Call For Conversations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md | Currently the conversational PII preview API supports all Azure regions supporte ## Submitting data +> [!NOTE] +> See the [Language Studio](../language-studio.md#valid-text-formats-for-conversation-features) article for information on formatting conversational text to submit using Language Studio. + You can submit the input to the API as list of conversation items. Analysis is performed upon receipt of the request. Because the API is asynchronous, there may be a delay between sending an API request, and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below. When using the async feature, the API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval. When you get results from PII detection, you can stream the results to an applic 1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home) -2. From the menu on the left side, select **Keys and Endpoint**. You will need one of the keys and the endpoint to authenticate your API requests. +2. From the menu on the left side, select **Keys and Endpoint**. You'll need one of the keys and the endpoint to authenticate your API requests. 3. Download and install the client library package for your language of choice: |
cognitive-services | Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/how-to/call-api.md | The API returns opinions as a target (noun or verb) and an assessment (adjective ## See also -* [Sentiment analysis and opinion mining overview](../overview.md) +* [Sentiment analysis and opinion mining overview](../overview.md) |
cognitive-services | Conversation Summarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md | There's another feature in Azure Cognitive Service for Language named [document ## Submitting data +> [!NOTE] +> See the [Language Studio](../../language-studio.md#valid-text-formats-for-conversation-features) article for information on formatting conversational text to submit using Language Studio. + You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below. When you use this feature, the API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval. |
cognitive-services | Power Automate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/tutorials/power-automate.md | + + Title: Use Language services in Power Automate ++description: Learn how to use Azure Cognitive Service for Language in Power Automate, without writing code. ++++++ Last updated : 01/26/2023++++++# Use the Language service in Power Automate ++You can use [Power Automate](/power-automate/getting-started) flows to automate repetitive tasks and bring efficiency to your organization. Using Azure Cognitive Service for Language, you can automate tasks like: +* Send incoming emails to different departments based on their contents. +* Analyze the sentiment of new tweets. +* Extract entities from incoming documents. +* Summarize meetings. +* Remove personal data from files before saving them. ++In this tutorial, you'll create a Power Automate flow to extract entities found in text, using [Named entity recognition](../named-entity-recognition/overview.md). ++## Prerequisites +* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) +* <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">A Language resource </a> + * (optional) A trained model if you're using a custom capability such as [custom NER](../custom-named-entity-recognition/overview.md), [custom text classification](../custom-text-classification/overview.md), or [conversational language understanding](../conversational-language-understanding/overview.md). + * You will need the key and endpoint from your Language resource to authenticate your Power Automate flow. ++## Create a Power Automate flow ++1. [Sign in to Power Automate](https://make.powerautomate.com/) ++2. From the left side menu, select **My flows** and create an **Automated cloud flow** ++ :::image type="content" source="../media/create-flow.png" alt-text="A screenshot of the menu for creating an automated cloud flow." lightbox="../media/create-flow.png"::: ++3. Enter a name for your flow. For example, `Languageflow`. ++ :::image type="content" source="../media/language-flow.png" alt-text="A screenshot of the automated cloud flow screen." lightbox="../media/language-flow.png"::: ++4. Start by selecting **Manually trigger flow**. ++ :::image type="content" source="../media/trigger-flow.png" alt-text="A screenshot of how to manually trigger a flow." lightbox="../media/trigger-flow.png"::: ++5. To add a Language service connector, search for **Azure Language**. ++ :::image type="content" source="../media/language-connector.png" alt-text="A screenshot of an Azure Language connector." lightbox="../media/language-connector.png"::: ++6. For this tutorial, you will create a flow that extracts named entities from text. Search for **Named entity recognition**, and select the connector. ++ :::image type="content" source="../media/entity-connector.png" alt-text="A screenshot of a named entity recognition connector." lightbox="../media/entity-connector.png"::: ++7. Add the endpoint and key for your Language resource, which will be used for authentication. You can find your key and endpoint by navigating to your resource in the [Azure portal](https://portal.azure.com), and selecting **Keys and endpoint** from the left navigation menu. ++ :::image type="content" source="../media/azure-portal-resource-credentials.png" alt-text="A screenshot of a Language resource key and endpoint in the Azure portal."
lightbox="../media/azure-portal-resource-credentials.png"::: ++8. Once you have your key and endpoint, add them to the connector in Power Automate. + + :::image type="content" source="../media/language-auth.png" alt-text="A screenshot of adding the language key and endpoint to the Power Automate flow." lightbox="../media/language-auth.png"::: ++9. Add the data to the connector. + + > [!NOTE] + > You will need the deployment name and project name if you're using a custom language capability. + +10. From the top navigation menu, save the flow and select **Test the flow**. In the window that appears, select **Test**. ++11. After the flow runs, you will see the response in the **outputs** field. ++ :::image type="content" source="../media/response-connector.png" alt-text="A screenshot of the flow response." lightbox="../media/response-connector.png"::: ++## Next steps ++* [Triage incoming emails with custom text classification](../custom-text-classification/tutorials/triage-email.md) +* [Available Language service connectors](/connectors/cognitiveservicestextanalytics) ++ |
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md | -| Group of features | Capability | JavaScript | +| Group of features | Capability | Supported | | -- | - | - | | Core Capabilities | Join Teams meeting | ✔️ | | | Leave meeting | ✔️ | |
communication-services | Video Effects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-effects.md | Use the 'npm install' command to install the Azure Communication Calling Effects ```console npm install @azure/communication-calling-effects --save ```+For more information, see the [calling communication effects npm package](https://www.npmjs.com/package/@azure/communication-calling-effects) page. ## Supported video effects: Currently the video effects support the following ability: To use video effects with the Azure Communication Calling client library, once y import * as AzureCommunicationCallingSDK from '@azure/communication-calling'; import { BackgroundBlurEffect, BackgroundReplacementEffect } from '@azure/communication-calling-effects'; -/** Assuming you have initialized the Azure Communication Calling client library and have created a LocalVideoStream -(reference <link to main SDK npm>) -*/ +// Ensure you have initialized the Azure Communication Calling client library and have created a LocalVideoStream // Get the video effects feature api on the LocalVideoStream -const videoEffectsFeatureApi = localVideoStream.features(AzureCommunicationCallingSDK.Features.VideoEffects); +const videoEffectsFeatureApi = localVideoStream.feature(AzureCommunicationCallingSDK.Features.VideoEffects); // Subscribe to useful events videoEffectsFeatureApi.on('effectsStarted', () => { const backgroundBlurSupported = await backgroundBlurEffect.isSupported(); if (backgroundBlurSupported) { // Use the video effects feature api we created to start/stop effects - await videoEffectsFeatureApi.startEffects(backgroundBlurEffect); - } const backgroundImage = 'https://linkToImageFile'; // Create the effect instance const backgroundReplacementEffect = new BackgroundReplacementEffect({ - backgroundImageUrl: backgroundImage - }); // Recommended: Check if background replacement is supported: if (backgroundReplacementSupported) { const newBackgroundImage = 'https://linkToNewImageFile'; await backgroundReplacementEffect.configure({ - backgroundImageUrl: newBackgroundImage - }); //You can switch the effects using the same method on the video effects feature api: |
container-apps | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md | IP addresses are broken down into the following types: | Type | Description | |--|--| | Public inbound IP address | Used for app traffic in an external deployment, and management traffic in both internal and external deployments. |-| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Using a NAT gateway or other proxy for outbound traffic from a Container App environment isn't supported. Outbound IPs are not guaranteed and may change over time. | +| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Using a NAT gateway or other proxy for outbound traffic from a Container App environment isn't supported. Outbound IPs aren't guaranteed and may change over time. | | Internal load balancer IP address | This address only exists in an internal deployment. | | App-assigned IP-based TLS/SSL addresses | These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured. | There's no forced tunneling in Container Apps routes. - **VNET-scope ingress**: If you plan to use VNET-scope [ingress](./ingress.md#configuration) in an internal Container Apps environment, configure your domains in one of the following ways: - 1. **Non-custom domains**: If you don't plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container App EnvironmentΓÇÖs default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record that points to the static IP address of the Container Apps environment. + 1. **Non-custom domains**: If you don't plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container App EnvironmentΓÇÖs default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record. The A record contains the name `*<DNS Suffix>` and the static IP address of the Container Apps environment. 1. **Custom domains**: If you plan to use custom domains, use a publicly resolvable domain to [add a custom domain and certificate](./custom-domains-certificates.md#add-a-custom-domain-and-certificate) to the container app. Additionally, create a private DNS zone that resolves the apex domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the apex domain, with an `A` record that points to the static IP address of the Container Apps environment. +The static IP address of the Container Apps environment can be found in the Azure portal in **Custom DNS suffix** of the container app page or using the Azure CLI `az containerapp env list` command. 
+ ## Managed resources When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you're billed for: |
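The private DNS zone and wildcard `A` record described for VNET-scope ingress above can also be created programmatically. A rough sketch with the Azure SDK for Python (assuming the `azure-mgmt-privatedns` package; the subscription, resource group, zone name, and static IP address are placeholders):

```python
# pip install azure-identity azure-mgmt-privatedns
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

# Placeholder values - substitute your own subscription, resource group, environment domain, and static IP
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
zone_name = "<unique-identifier>.<region>.azurecontainerapps.io"
static_ip = "<environment-static-ip>"

client = PrivateDnsManagementClient(DefaultAzureCredential(), subscription_id)

# Create the private DNS zone named after the environment's default domain
client.private_zones.begin_create_or_update(
    resource_group, zone_name, {"location": "global"}
).result()

# Add a wildcard A record pointing at the environment's static IP address
client.record_sets.create_or_update(
    resource_group,
    zone_name,
    "A",
    "*",
    {"ttl": 3600, "a_records": [{"ipv4_address": static_ip}]},
)
```

Treat this as an approximation of the portal and CLI steps described above, not as the documented setup path.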
cosmos-db | How To Dotnet Manage Databases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-databases.md | A database is removed from the server using the `DropDatabase` method on the DB ## See also - [Get started with Azure Cosmos DB for MongoDB and .NET](how-to-dotnet-get-started.md)-- Work with a collection](how-to-dotnet-manage-collections.md)+- [Work with a collection](how-to-dotnet-manage-collections.md) |
data-factory | Airflow Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-pricing.md | + + Title: Managed Airflow pricing +description: This article describes the pricing for Managed Airflow. ++++ Last updated : 01/24/2023++++# Managed Airflow pricing +++This article describes the pricing for Managed Airflow usage within data factory. ++## Pricing details ++Managed Airflow supports either small (D2v4) or large (D4v4) node sizing. Small can support up to 50 DAGs simultaneously, and large can support up to 1000 DAGs. The following table describes pricing for each option: +++## Next steps ++- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) +- [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) +- [Changing password for Airflow environments](password-change-airflow.md) |
data-factory | Concept Managed Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md | + + Title: What is Managed Airflow? ++description: Learn about when to use Managed Airflow, basic concepts and supported regions. ++++ Last updated : 01/20/2023++++# What is Azure Data Factory Managed Airflow? +++> [!NOTE] +> This feature is in public preview. For questions or feature suggestions, please send an email to ManagedAirflow@microsoft.com with the details. ++> [!NOTE] +> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages. ++Azure Data Factory offers serverless pipelines for data process orchestration, data movement with 100+ managed connectors, and visual transformations with the mapping data flow. ++Managed Airflow in Azure Data Factory is a managed orchestration service for [Apache Airflow](https://airflow.apache.org/) that simplifies the creation and management of Airflow environments on which you can operate end-to-end data pipelines at scale. Apache Airflow is an open-source tool used to programmatically author, schedule, and monitor sequences of processes and tasks referred to as "workflows." With Managed Airflow in Azure Data Factory, you can use Airflow and Python to create data workflows without managing the underlying infrastructure for scalability, availability, and security. +++## When to use Managed Airflow? ++Azure Data Factory offers [Pipelines](concepts-pipelines-activities.md) to visually orchestrate data processes (UI-based authoring), while Managed Airflow offers Airflow-based Python DAGs (Python code-centric authoring) for defining the data orchestration process. If you have an Airflow background, or are currently using Apache Airflow, you may prefer to use Managed Airflow instead of pipelines. Conversely, if you would rather not write or manage Python-based DAGs for data process orchestration, you may prefer to use pipelines. ++With Managed Airflow, Azure Data Factory now offers multi-orchestration capabilities spanning visual, code-centric, and open-source (OSS) orchestration requirements. ++## Features ++- **Automatic Airflow setup** – Quickly set up Apache Airflow by choosing an [Apache Airflow version](concept-managed-airflow.md#supported-apache-airflow-versions) when you create a Managed Airflow environment. ADF Managed Airflow sets up Apache Airflow for you using the same Apache Airflow user interface and open-source code you can download on the Internet. +- **Automatic scaling** – Automatically scale Apache Airflow Workers by setting the minimum and maximum number of Workers that run in your environment. ADF Managed Airflow monitors the Workers in your environment. It uses its autoscaling component to add Workers to meet demand until it reaches the maximum number of Workers you defined. +- **Built-in authentication** – Enable Azure Active Directory (Azure AD) role-based authentication and authorization for your Airflow Web server by defining Azure AD RBAC's access control policies. +- **Built-in security** – Metadata is also automatically encrypted by Azure-managed keys, so your environment is secure by default. Additionally, it supports double encryption with a Customer-Managed Key (CMK). 
+- **Streamlined upgrades and patches** – Azure Data Factory Managed Airflow provides new versions of Apache Airflow periodically. The ADF Managed Airflow team will auto-update and patch the minor versions. +- **Workflow monitoring** – View Airflow logs and Airflow metrics in Azure Monitor to identify Airflow task delays or workflow errors without needing additional third-party tools. Managed Airflow automatically sends environment metrics, and if enabled, Airflow logs to Azure Monitor. +- **Azure integration** – Azure Data Factory Managed Airflow supports open-source integrations with Azure Data Factory pipelines, Azure Batch, Azure Cosmos DB, Azure Key Vault, ACI, ADLS Gen2, Azure Kusto, as well as hundreds of built-in and community-created operators and sensors. ++## Architecture + :::image type="content" source="media/concept-managed-airflow/architecture.png" alt-text="Screenshot shows architecture in Managed Airflow."::: ++## Region availability (public preview) ++* EastUs +* SouthCentralUs +* WestUs +* UKSouth +* NorthEurope +* WestEurope +* SouthEastAsia +* EastUS2 +* WestUS2 +* GermanyWestCentral +* AustraliaEast ++> [!NOTE] +> By GA, all ADF regions will be supported. The Airflow environment region is defaulted to the Data Factory region and is not configurable, so ensure you use a Data Factory in one of the supported regions above to be able to access the Managed Airflow preview. ++## Supported Apache Airflow versions ++* 1.10.14 +* 2.2.4 ++## Integrations ++Apache Airflow integrates with Microsoft Azure services through the microsoft.azure [provider](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/index.html). ++You can install any provider package by editing the Airflow environment from the Azure Data Factory UI. It takes a couple of minutes to install the package (an example DAG that uses the Azure provider is sketched after this entry). ++ :::image type="content" source="media/concept-managed-airflow/airflow-integration.png" lightbox="media/concept-managed-airflow/airflow-integration.png" alt-text="Screenshot shows airflow integration."::: ++## Next steps ++- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) +- [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) +- [Managed Airflow pricing](airflow-pricing.md) +- [How to change the password for Managed Airflow environments](password-change-airflow.md) |
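As a hedged illustration of the microsoft.azure provider integration mentioned above, the DAG below assumes the `apache-airflow-providers-microsoft-azure` package has been added under the environment's Airflow requirements and that an Airflow connection named `azure_data_factory_default` already exists; the factory, resource group, and pipeline names are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.microsoft.azure.operators.data_factory import (
    AzureDataFactoryRunPipelineOperator,
)

with DAG(
    dag_id="adf_provider_example",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # Placeholder names - substitute your own factory, resource group, and pipeline
    run_pipeline = AzureDataFactoryRunPipelineOperator(
        task_id="run_pipeline",
        azure_data_factory_conn_id="azure_data_factory_default",
        factory_name="<your-data-factory>",
        resource_group_name="<your-resource-group>",
        pipeline_name="<your-pipeline>",
    )
```

This provider-based approach is an alternative to calling the Data Factory management SDK directly from a `PythonOperator`, as the tutorials later in this list do.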
data-factory | Data Movement Security Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-movement-security-considerations.md | In this article, we review security considerations in the following two data mov - **Store encrypted credentials in an Azure Data Factory managed store**. Data Factory helps protect your data store credentials by encrypting them with certificates managed by Microsoft. These certificates are rotated every two years (which includes certificate renewal and the migration of credentials). For more information about Azure Storage security, see [Azure Storage security overview](../storage/blobs/security-recommendations.md). - **Store credentials in Azure Key Vault**. You can also store the data store's credential in [Azure Key Vault](https://azure.microsoft.com/services/key-vault/). Data Factory retrieves the credential during the execution of an activity. For more information, see [Store credential in Azure Key Vault](store-credentials-in-key-vault.md).+- +Centralizing storage of application secrets in Azure Key Vault allows you to control their distribution. Key Vault greatly reduces the chances that secrets may be accidentally leaked. Instead of storing the connection string in the app's code, you can store it securely in Key Vault. Your applications can securely access the information they need by using URIs. These URIs allow the applications to retrieve specific versions of a secret. There's no need to write custom code to protect any of the secret information stored in Key Vault. ### Data encryption in transit If the cloud data store supports HTTPS or TLS, all data transfers between data movement services in Data Factory and a cloud data store are via secure channel HTTPS or TLS. |
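To make the Key Vault option above concrete, here's a minimal sketch using the Azure SDK for Python (the `azure-identity` and `azure-keyvault-secrets` packages); the vault URI and secret name are placeholders:

```python
# pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder values - substitute your own vault URI and secret name
vault_uri = "https://<your-key-vault>.vault.azure.net/"
client = SecretClient(vault_url=vault_uri, credential=DefaultAzureCredential())

# Retrieve the latest version of the connection string stored as a secret
secret = client.get_secret("datastore-connection-string")
print(secret.properties.version)

# A specific version can also be requested explicitly via its version identifier
pinned = client.get_secret("datastore-connection-string", version=secret.properties.version)
```

Because the application only holds a reference (the vault URI and secret name or version), rotating the stored credential doesn't require a code change.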
data-factory | How Does Managed Airflow Work | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-does-managed-airflow-work.md | + + Title: How does Managed Airflow work? ++description: This article explains how to create a Managed Airflow instance and use a DAG to make it work. ++++ Last updated : 01/20/2023+++# How does Azure Data Factory Managed Airflow work? +++> [!NOTE] +> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages. ++Azure Data Factory Managed Airflow orchestrates your workflows using Directed Acyclic Graphs (DAGs) written in Python. You must provide your DAGs and plugins in Azure Blob Storage. Airflow requirements or library dependencies can be installed during the creation of the new Managed Airflow environment or by editing an existing Managed Airflow environment. Then run and monitor your DAGs by launching the Airflow UI from ADF using a command line interface (CLI) or a software development kit (SDK). ++## Create a Managed Airflow environment +The following steps set up and configure your Managed Airflow environment. ++### Prerequisites +**Azure subscription**: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. + Create or select an existing Data Factory in a region where the Managed Airflow preview is supported. ++### Steps to create the environment +1. Create a new Managed Airflow environment. + Go to **Manage** hub -> **Airflow (Preview)** -> **+New** to create a new Airflow environment. ++ :::image type="content" source="media/how-does-managed-airflow-work/create-new-airflow.png" alt-text="Screenshot that shows how to create a new Managed Apache Airflow environment."::: ++1. Provide the details (Airflow config). ++ :::image type="content" source="media/how-does-managed-airflow-work/airflow-environment-details.png" alt-text="Screenshot that shows some Managed Airflow environment details."::: ++ > [!IMPORTANT] + > When using **Basic** authentication, remember the username and password specified in this screen. You will need them to sign in to the Managed Airflow UI later. The default option is **Azure AD**, which does not require creating a username and password for your Airflow environment, but instead uses the signed-in Azure Data Factory user's credential to sign in and monitor DAGs. +1. **Environment variables** are a simple key-value store within Airflow to store and retrieve arbitrary content or settings. +1. **Requirements** can be used to pre-install Python libraries. You can update these later as well. ++## Import DAGs ++The following steps describe how to import DAGs into Managed Airflow. ++### Prerequisites ++You'll need to upload a sample DAG onto an accessible Storage account. ++> [!NOTE] +> Blob Storage behind a VNet is not supported during the preview. ++[Sample Apache Airflow v2.x DAG](https://airflow.apache.org/docs/apache-airflow/stable/tutorial/fundamentals.html). +[Sample Apache Airflow v1.10 DAG](https://airflow.apache.org/docs/apache-airflow/1.10.11/_modules/airflow/example_dags/tutorial.html). +++### Steps to import +1. Copy-paste the content (either v2.x or v1.10, based on the Airflow environment that you have set up) into a new file called **tutorial.py**. ++ Upload **tutorial.py** to your blob storage (a Python upload sketch appears below). 
([How to upload a file into blob](/storage/blobs/storage-quickstart-blobs-portal.md)) ++ > [!NOTE] + > You will need to select a directory path from a blob storage account that contains folders named **dags** and **plugins** to import those into the Airflow environment. **Plugins** are not mandatory. You can also have a container named **dags** and upload all Airflow files within it. ++1. Select **Airflow (Preview)** under the **Manage** hub. Then hover over the **Airflow** environment you created earlier and select **Import files** to import all DAGs and dependencies into the Airflow environment. ++ :::image type="content" source="media/how-does-managed-airflow-work/import-files.png" alt-text="Screenshot shows import files in manage hub."::: ++1. Create a new Linked Service to the accessible storage account mentioned in the prerequisite (or use an existing one if you already have your own DAGs). ++ :::image type="content" source="media/how-does-managed-airflow-work/create-new-linked-service.png" alt-text="Screenshot that shows how to create a new linked service."::: ++1. Use the storage account where you uploaded the DAG (check prerequisite). Test the connection, then select **Create**. ++ :::image type="content" source="media/how-does-managed-airflow-work/linked-service-details.png" alt-text="Screenshot shows some linked service details."::: ++1. Browse and select **airflow** if you're using the sample SAS URL, or select the folder that contains the **dags** folder with DAG files. ++ > [!NOTE] + > You can import DAGs and their dependencies through this interface. You will need to select a directory path from a blob storage account that contains folders named **dags** and **plugins** to import those into the Airflow environment. **Plugins** are not mandatory. ++ :::image type="content" source="media/how-does-managed-airflow-work/browse-storage.png" alt-text="Screenshot shows browse storage in import files."::: ++ :::image type="content" source="media/how-does-managed-airflow-work/browse.png" alt-text="Screenshot that shows browse in airflow."::: ++ :::image type="content" source="media/how-does-managed-airflow-work/import-in-import-files.png" alt-text="Screenshot shows import in import files."::: ++ :::image type="content" source="media/how-does-managed-airflow-work/import-dags.png" alt-text="Screenshot shows import dags."::: ++> [!NOTE] +> Importing DAGs could take a couple of minutes during **Preview**. The notification center (bell icon in the ADF UI) can be used to track the import status updates. ++## Troubleshooting import DAG issues ++* Problem: DAG import is taking over 5 minutes +Mitigation: Reduce the size of a single import. One way to achieve this is by creating multiple DAG folders with fewer DAGs across multiple containers. ++* Problem: Imported DAGs don't show up when you sign in to the Airflow UI. +Mitigation: Sign in to the Airflow UI and see if there are any DAG parsing errors. This could happen if the DAG files contain any incompatible code. You'll find the exact line numbers and files that have the issue through the Airflow UI. ++ :::image type="content" source="media/how-does-managed-airflow-work/import-dag-issues.png" alt-text="Screenshot shows import dag issues."::: +++## Monitor DAG runs ++To monitor the Airflow DAGs, sign in to the Airflow UI with the username and password created earlier. ++1. Select the Airflow environment you created. 
++ :::image type="content" source="media/how-does-managed-airflow-work/airflow-environment-monitor-dag.png" alt-text="Screenshot that shows the Airflow environment created."::: ++1. Sign in using the username and password provided during the Airflow Integration Runtime creation. ([You can reset the username or password by editing the Airflow Integration runtime]() if needed) ++ :::image type="content" source="media/how-does-managed-airflow-work/login-in-dags.png" alt-text="Screenshot that shows sign in using the username-password provided during the Airflow Integration Runtime creation."::: ++## Remove DAGs from the Airflow environment ++If you're using Airflow version 1.x and want to delete DAGs that are deployed on any Airflow environment (IR), you need to delete the DAGs in two different places. ++1. Delete the DAG from the Airflow UI +1. Delete the DAG in the ADF UI ++> [!NOTE] +> This is the current experience during the Public Preview, and we will be improving this experience.  ++## Next steps ++* [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) +* [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) +* [Managed Airflow pricing](airflow-pricing.md) +* [How to change the password for Managed Airflow environments](password-change-airflow.md) |
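The import steps above expect the DAG files to sit under a **dags** folder in Blob Storage. As an illustration only (assuming the `azure-storage-blob` package and a placeholder connection string and container name), uploading the sample **tutorial.py** into such a folder might look like this:

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

# Placeholder values - substitute your own storage connection string and container
connection_string = "<your-storage-connection-string>"
container_name = "airflow"

service = BlobServiceClient.from_connection_string(connection_string)
container = service.get_container_client(container_name)

# Upload the DAG file under the dags/ folder so the Import files step can find it
with open("tutorial.py", "rb") as data:
    container.upload_blob(name="dags/tutorial.py", data=data, overwrite=True)

print("Uploaded dags/tutorial.py")
```

Uploading through the Azure portal or Storage Explorer works just as well; the only requirement from the article is the **dags** (and optional **plugins**) folder layout.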
data-factory | Password Change Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/password-change-airflow.md | + + Title: Changing a password for a Managed Airflow environment +description: This article describes how to change a password for a Managed Airflow environment. ++++ Last updated : 01/24/2023++++# Changing a password for a Managed Airflow environment +++This article describes how to change the password for a Managed Airflow environment in Azure Data Factory using **Basic** authentication. ++## Updating the password ++We recommend using **Azure AD** authentication in Managed Airflow environments. However, if you choose to use **Basic** authentication, you can still update the Airflow password by editing the Airflow environment configuration and updating the username/password in the integration runtime settings, as shown here: +++## Next steps ++- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) +- [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) +- [Managed Airflow pricing](airflow-pricing.md) |
data-factory | Tutorial Refresh Power Bi Dataset With Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-refresh-power-bi-dataset-with-airflow.md | + + Title: Refresh a Power BI dataset with Managed Airflow +description: This tutorial provides step-by-step instructions for refreshing a Power BI dataset with Managed Airflow. ++++ Last updated : 01/24/2023++++# Refresh a Power BI dataset with Managed Airflow +++> [!NOTE] +> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages. ++This tutorial shows you how to refresh a Power BI dataset with Managed Airflow in Azure Data Factory. ++## Prerequisites ++* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. +* **Azure storage account**. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) for steps to create one. *Ensure the storage account allows access only from selected networks.* +* **Setup a Service Principal**. You will need to [create a new service principal](../active-directory/develop/howto-create-service-principal-portal.md) or use an existing one and grant it permission to run the pipeline (example ΓÇô contributor role in the data factory where the existing pipelines exist), even if the Managed Airflow environment and the pipelines exist in the same data factory. You will need to get the Service PrincipalΓÇÖs Client ID and Client Secret (API Key). ++## Steps ++1. Create a new Python file **pbi-dataset-refresh.py** with the below contents: + ```python + from airflow import DAG + from airflow.operators.python_operator import PythonOperator + from datetime import datetime, timedelta + from powerbi.datasets import Datasets ++ # Default arguments for the DAG + default_args = { + 'owner': 'me', + 'start_date': datetime(2022, 1, 1), + 'depends_on_past': False, + 'retries': 1, + 'retry_delay': timedelta(minutes=5), + } ++ # Create the DAG + dag = DAG( + 'refresh_power_bi_dataset', + default_args=default_args, + schedule_interval=timedelta(hours=1), + ) ++ # Define a function to refresh the dataset + def refresh_dataset(**kwargs): + # Create a Power BI client + datasets = Datasets(client_id='your_client_id', + client_secret='your_client_secret', + tenant_id='your_tenant_id') + + # Refresh the dataset + dataset_name = 'your_dataset_name' + datasets.refresh(dataset_name) + print(f'Successfully refreshed dataset: {dataset_name}') ++ # Create a PythonOperator to run the dataset refresh + refresh_dataset_operator = PythonOperator( + task_id='refresh_dataset', + python_callable=refresh_dataset, + provide_context=True, + dag=dag, + ) ++ refresh_dataset_operator + ``` ++ You will have to fill in your **client_id**, **client_secret**, **tenant_id**, and **dataset_name** with your own values. ++ Also, you will need to install the **powerbi** python package to use the above code using Managed Airflow requirements. Edit a Managed Airflow environment and add the **powerbi** python package under **Airflow requirements**. ++1. Upload the **pbi-dataset-refresh.py** file to the blob storage within a folder named **DAG**. +1. [Import the **DAG** folder into your Airflow environment](). 
If you do not have one, [create a new one](). + :::image type="content" source="media/tutorial-run-existing-pipeline-with-airflow/airflow-environment.png" alt-text="Screenshot showing the data factory management tab with the Airflow section selected."::: ++## Next steps ++* [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) +* [Managed Airflow pricing](airflow-pricing.md) +* [Changing password for Managed Airflow environments](password-change-airflow.md) |
data-factory | Tutorial Run Existing Pipeline With Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-run-existing-pipeline-with-airflow.md | + + Title: Run an existing pipeline with Managed Airflow +description: This tutorial provides step-by-step instructions for running an existing pipeline with Managed Airflow in Azure Data Factory. ++++ Last updated : 01/24/2023+++++# Run an existing pipeline with Managed Airflow +++> [!NOTE] +> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages. ++Data Factory pipelines provide 100+ data source connectors that provide scalable and reliable data integration/ data flows. There are scenarios where you would like to run an existing data factory pipeline from your Apache Airflow DAG. This tutorial shows you how to do just that. ++## Prerequisites ++* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. +* **Azure storage account**. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) for steps to create one. *Ensure the storage account allows access only from selected networks.* +* **Azure Data Factory pipeline**. You can follow any of the tutorials and create a new data factory pipeline in case you do not already have one, or create one with one click in [Get started and try out your first data factory pipeline](quickstart-get-started.md). +* **Setup a Service Principal**. You will need to [create a new service principal](../active-directory/develop/howto-create-service-principal-portal.md) or use an existing one and grant it permission to run the pipeline (example ΓÇô contributor role in the data factory where the existing pipelines exist), even if the Managed Airflow environment and the pipelines exist in the same data factory. You will need to get the Service PrincipalΓÇÖs Client ID and Client Secret (API Key). ++## Steps ++1. 
Create a new Python file **adf.py** with the below contents: + ```python + from airflow import DAG + from airflow.operators.python_operator import PythonOperator + from azure.common.credentials import ServicePrincipalCredentials + from azure.mgmt.datafactory import DataFactoryManagementClient + from datetime import datetime, timedelta + + # Default arguments for the DAG + default_args = { + 'owner': 'me', + 'start_date': datetime(2022, 1, 1), + 'depends_on_past': False, + 'retries': 1, + 'retry_delay': timedelta(minutes=5), + } ++ # Create the DAG + dag = DAG( + 'run_azure_data_factory_pipeline', + default_args=default_args, + schedule_interval=timedelta(hours=1), + ) ++ # Define a function to run the pipeline + + def run_pipeline(**kwargs): + # Create the client + credentials = ServicePrincipalCredentials( + client_id='your_client_id', + secret='your_client_secret', + tenant='your_tenant_id', + ) + client = DataFactoryManagementClient(credentials, 'your_subscription_id') ++ # Run the pipeline + pipeline_name = 'your_pipeline_name' + run_response = client.pipelines.create_run( + 'your_resource_group_name', + 'your_data_factory_name', + pipeline_name, + ) + run_id = run_response.run_id ++ # Print the run ID + print(f'Pipeline run ID: {run_id}') ++ # Create a PythonOperator to run the pipeline + run_pipeline_operator = PythonOperator( + task_id='run_pipeline', + python_callable=run_pipeline, + provide_context=True, + dag=dag, + ) ++ # Set the dependencies + run_pipeline_operator + ``` ++ You will have to fill in your **client_id**, **client_secret**, **tenant_id**, **subscription_id**, **resource_group_name**, **data_factory_name**, and **pipeline_name**. ++1. Upload the **adf.py** file to your blob storage within a folder called **DAG**. +1. [Import the **DAG** folder into your Managed Airflow environment](./how-does-managed-airflow-work.md#import-dags). If you do not have one, [create a new one](./how-does-managed-airflow-work.md#create-a-managed-airflow-environment) ++ :::image type="content" source="media/tutorial-run-existing-pipeline-with-airflow/airflow-environment.png" alt-text="Screenshot showing the data factory management tab with the Airflow section selected."::: ++## Next steps ++* [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) +* [Managed Airflow pricing](airflow-pricing.md) +* [Changing password for Managed Airflow environments](password-change-airflow.md) |
databox-online | Azure Stack Edge Deploy Aks On Azure Stack Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge.md | Depending on the workloads you intend to deploy, you may need to ensure the foll There are multiple steps to deploy AKS on Azure Stack Edge. Some steps are optional, as noted below. -## Enable AKS +## Verify AKS is enabled -1. [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface). --1. Run the following command to enable AKS: -- ```azurepowershell - Enable-HcsAzureKubernetesService ΓÇôf - ``` -- This step doesn't deploy the Kubernetes cluster. The cluster is deployed later in the section for [Set up Kubernetes cluster and enable Arc](azure-stack-edge-deploy-aks-on-azure-stack-edge.md#set-up-kubernetes-cluster-and-enable-arc). --1. To verify that AKS is enabled, go to your Azure Stack Edge resource in the Azure portal. In the **Overview** pane, select the **Azure Kubernetes Service** tile. +To verify that AKS is enabled, go to your Azure Stack Edge resource in the Azure portal. In the **Overview** pane, select the **Azure Kubernetes Service** tile. - [](./media/azure-stack-edge-deploy-aks-on-azure-stack-edge/azure-stack-edge-azure-kubernetes-service-tile.png#lightbox) + [](./media/azure-stack-edge-deploy-aks-on-azure-stack-edge/azure-stack-edge-azure-kubernetes-service-tile.png#lightbox) ## Set custom locations (optional) |
defender-for-iot | Concept Sentinel Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md | The following table shows how both the OT team, on the Defender for IoT side, an |SOC teams respond with OT playbooks and notebooks | **OT incident response** | OT teams either suppress the alert or learn it for next time, as needed | |After the threat is mitigated, SOC teams close the incident | **OT incident closure** | After the threat is mitigated, OT teams close the alert | +### Alert status synchronizations ++Alert status changes are synchronized from Microsoft Sentinel to Defender for IoT only, and not from Defender for IoT to Microsoft Sentinel. ++If you integrate Defender for IoT with Microsoft Sentinel, we recommend that you manage your alert statuses together with the related incidents in Microsoft Sentinel. + ## Microsoft Sentinel incidents for Defender for IoT After you've configured the Defender for IoT data connector and have IoT/OT alert data streaming to Microsoft Sentinel, use one of the following methods to create incidents based on those alerts: |
defender-for-iot | Iot Advanced Threat Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md | The **Microsoft Defender for IoT** solution includes a more detailed set of out- ## Investigate Defender for IoT incidents -After you've [configured your Defender for IoT data to trigger new incidents in Microsoft Sentinel](#detect-threats-out-of-the-box-with-defender-for-iot-data), start investigating those incidents in Microsoft Sentinel as you would other incidents. +After you've [configured your Defender for IoT data to trigger new incidents in Microsoft Sentinel](#detect-threats-out-of-the-box-with-defender-for-iot-data), start investigating those incidents in Microsoft Sentinel [as you would other incidents](/sentinel/investigate-cases). **To investigate Microsoft Defender for IoT incidents**: For more information on how to investigate incidents and use the investigation g ### Investigate further with IoT device entities -When investigating an incident in Microsoft Sentinel, in an incident details pane, select an IoT device entity from the **Entities** list to open its device entity page. You can identify an IoT device by the IoT device icon: :::image type="icon" source="media/iot-solution/iot-device-icon.png" border="false"::: +When investigating an incident in Microsoft Sentinel, in an incident details pane, select an IoT device entity from the **Entities** list to open its [device entity page](/azure/sentinel/entity-pages). ++You can identify an IoT device by the IoT device icon: :::image type="icon" source="media/iot-solution/iot-device-icon.png" border="false"::: If you don't see your IoT device entity right away, select **View full details** under the entities listed to open the full incident page. In the **Entities** tab, select an IoT device to open its entity page. For example: You can also hunt for vulnerable devices on the Microsoft Sentinel **Entity beha For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md). +### Investigate the alert in Defender for IoT ++To open an alert in Defender for IoT for further investigation, go to your incident details page and select **Investigate in Microsoft Defender for IoT**. For example: +++The Defender for IoT alert details page opens for the related alert. For more information, see [Investigate and respond to an OT network alert](respond-ot-alert.md). + ## Visualize and monitor Defender for IoT data To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the [Microsoft Defender for IoT](#install-the-defender-for-iot-solution) solution. The following table describes the workbooks included in the **Microsoft Defender ## Automate response to Defender for IoT alerts -Playbooks are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response; it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively. +[Playbooks](/azure/sentinel/tutorial-respond-threats-playbook) are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. 
A playbook can help automate and orchestrate your threat response; it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively. The [Microsoft Defender for IoT](#install-the-defender-for-iot-solution) solution includes out-of-the-box playbooks that provide the following functionality: |
defender-for-iot | Iot Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md | Before you start, make sure you have the following requirements on your workspac ## Connect your data from Defender for IoT to Microsoft Sentinel -Start by enabling the **Defender for IoT** data connector to stream all your Defender for IoT events into Microsoft Sentinel. +Start by enabling the [Defender for IoT data connector](/azure/sentinel/data-connectors-reference.md#microsoft-defender-for-iot) to stream all your Defender for IoT events into Microsoft Sentinel. **To enable the Defender for IoT data connector**: The following types of updates generate new records in the **SecurityAlert** tab - A new device is added to an existing alert - The device properties for an alert are updated -## Next steps -[Install the **Microsoft Defender for IoT** solution](iot-advanced-threat-monitoring.md) to your Microsoft Sentinel workspace. -The **Microsoft Defender for IoT** solution is a set of bundled, out-of-the-box content that's configured specifically for Defender for IoT data, and includes analytics rules, workbooks, and playbooks. +## Next steps -For more information, see: +The [Microsoft Defender for IoT](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview) solution is a set of bundled, out-of-the-box content that's configured specifically for Defender for IoT data, and includes analytics rules, workbooks, and playbooks. -- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md)-- [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)-- [Microsoft Defender for IoT solution](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview)-- [Microsoft Defender for IoT data connector](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot)+> [!div class="nextstepaction"] +> [Install the **Microsoft Defender for IoT** solution](iot-advanced-threat-monitoring.md) |
defender-for-iot | Respond Ot Alert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/respond-ot-alert.md | + + Title: Respond to an alert in the Azure portal - Microsoft Defender for IoT +description: Learn about how to fully respond to OT network alerts in Microsoft Defender for IoT. Last updated : 12/05/2022++++# Investigate and respond to an OT network alert ++This article describes how to investigate and respond to an OT network alert in Microsoft Defender for IoT. ++You might be a security operations center (SOC) engineer using Microsoft Sentinel, who's seen a new incident in your Microsoft Sentinel workspace and is continuing in Defender for IoT for further details about related devices and recommended remediation steps. ++Alternately, you might be an OT engineer watching for operational alerts directly in Defender for IoT. Operational alerts might not be malicious but can indicate operational activity that can aid in security investigations. ++## Prerequisites ++Before you start, make sure that you have: ++- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/). ++- A cloud-connected [OT network sensor](onboard-sensors.md) onboarded to Defender for IoT, with alerts streaming into the Azure portal. ++- To investigate an alert from a Microsoft Sentinel incident, make sure that you've completed the following tutorials: ++ - [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md) + - [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) ++- An alert details page open, accessed either from the Defender for IoT **Alerts** page in the [Azure portal](how-to-manage-cloud-alerts.md), a Defender for IoT [device details page](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory), or a Microsoft Sentinel [incident](/azure/sentinel/investigate-incidents). ++## Investigate an alert from the Azure portal ++On an alert details page in the Azure portal, start by changing the alert status to **Active**, indicating that it's currently under investigation. ++For example: +++> [!IMPORTANT] +> If you're integrating with Microsoft Sentinel, make sure to manage your alert status only from the [incident](/azure/sentinel/investigate-incidents) in Microsoft Sentinel. Alerts statuses are not synchronized from Defender for IoT to Microsoft Sentinel. +++After updating the status, check the alert details page for the following details to aid in your investigation: ++- **Source and destination device details**. Source and destination devices are listed in **Alert details** tab, and also in the **Entities** area below, as Microsoft Sentinel *entities*, with their own [entity pages](iot-advanced-threat-monitoring.md#investigate-further-with-iot-device-entities). In the **Entities** area, you'll use the links in the **Name** column to open the relevant device details pages for [further investigation](#investigate-related-alerts-on-the-azure-portal). ++- **Site and/or zone**. These values help you understand the geographic and network location of the alert and if there are areas of the network that are now more vulnerable to attack. ++- **MITRE ATT&CK** tactics and techniques. Scroll down in the left pane to view all MITRE ATT&CK details. In addition to descriptions of the tactics and techniques, select the links to the MITRE ATT&CK site to learn more about each one. ++- **Download PCAP**. 
At the top of the page, select **Download PCAP** to [download the raw traffic files](how-to-manage-cloud-alerts.md#access-alert-pcap-data) for the selected alert. ++## Investigate related alerts on the Azure portal ++Look for other alerts triggered by the same source or destination device. Correlations between multiple alerts may indicate that the device is at risk and can be exploited. ++For example, a device that attempted to connect to a malicious IP, together with another alert about unauthorized PLC programming changes on the device, might indicate that an attacker has already gained control of the device. ++**To find related alerts in Defender for IoT**: ++1. On the **Alerts** page, select an alert to view details on the right. ++1. Locate the device links in the **Entities** area, either in the details pane on the right or in the alert details page. Select an entity link to open the related device details page, for both a source and destination device. ++1. On the device details page, select the **Alerts** tab to view all alerts for that device. For example: ++ :::image type="content" source="media/iot-solution/device-details-alerts.png" alt-text="Screenshot of the Alerts tab on a device details page."::: ++## Investigate alert details on the OT sensor ++The OT sensor that triggered the alert will have more details to help your investigation. ++**To continue your investigation on the OT sensor**: ++1. Sign in to your OT sensor as a **Viewer** or **Security Analyst** user. ++1. Select the **Alerts** page and find the alert you're investigating. Select **View more details** to open the OT sensor's alert details page. For example: ++ :::image type="content" source="media/iot-solution/alert-on-sensor.png" alt-text="Screenshot of the alert on the sensor console."::: ++On the sensor's alert details page: ++- Select the **Map view** tab to view the alert inside the OT sensor's [device map](how-to-work-with-the-sensor-device-map.md), including any connected devices. ++- Select the **Event timeline** tab to view the alert's [full event timeline](how-to-track-sensor-activity.md), including other related activity also detected by the OT sensor. ++- Select **Export PDF** to download a PDF summary of the alert details. ++## Take remediation action ++The timing for when you take remediation actions may depend on the severity of the alert. For example, for high severity alerts, you might want to take action even before investigating, such as if you need to immediately quarantine an area of your network. ++For lower severity alerts, or for operational alerts, you might want to fully investigate before taking action. ++**To remediate an alert**, use the following Defender for IoT resources: ++- **On an alert details page** on either the Azure portal or the OT sensor, select the **Take action** tab to view details about recommended steps to mitigate the risk. ++- **On a device details page** in the Azure portal, for both the [source and destination devices](#investigate-an-alert-from-the-azure-portal): ++ - Select the **Vulnerabilities** tab and check for detected vulnerabilities on each device. ++ - Select the **Recommendations** tab and check for current security [recommendations](recommendations.md) for each device. ++Defender for IoT vulnerability data and security recommendations can provide simple actions you can take to mitigate the risks, such as updating firmware or applying a patch. Other actions may take more planning. 
++When you've finished with mitigation activities and are ready to close the alert, make sure to update the alert status to **Closed** or notify your SOC team for further incident management. ++> [!NOTE] +> If you integrate Defender for IoT with Microsoft Sentinel, alert status changes you make in Defender for IoT are *not* updated in Microsoft Sentinel. Make sure to manage your alerts in Microsoft Sentinel together with the related incident. ++## Triage alerts regularly ++Triage alerts on a regular basis to prevent alert fatigue in your network and ensure that you're able to see and handle important alerts in a timely manner. ++**To triage alerts**: ++1. In Defender for IoT in the Azure portal, go to the **Alerts** page. By default, alerts are sorted by the **Last detection** column, from most recent to oldest alert, so that you can first see the latest alerts in your network. ++1. Use other filters, such as **Sensor** or **Severity** to find specific alerts. ++1. Check the alert details and investigate as needed before you take any alert action. When you're ready, take action on an alert details page for a specific alert, or on the **Alerts** page for bulk actions. ++ For example, update alert status or severity, or [learn](how-to-manage-the-alert-event.md#learn-and-unlearn-alert-traffic) an alert to authorize the detected traffic. *Learned* alerts are not triggered again if the same exact traffic is detected again. ++ :::image type="content" source="media/iot-solution/learn-alert.png" alt-text="Screenshot of a Learn button on the alert details page."::: ++For high severity alerts, you may want to take action immediately. ++## Next steps ++> [!div class="nextstepaction"] +> [Enhance security posture with security recommendations](recommendations.md) ++ |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Features released earlier than nine months ago are described in the [What's new > Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > +## February 2023 ++|Service area |Updates | +||| +|**Cloud features** | [Alerts GA in the Azure portal](#alerts-ga-in-the-azure-portal) | ++### Alerts GA in the Azure portal ++The **Alerts** page in the Azure portal is now out for General Availability. Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events detected in your network. Alerts are triggered when OT or Enterprise IoT network sensors, or the [Defender for IoT micro agent](/azure/defender-for-iot/device-builders/), detect changes or suspicious activity in network traffic that need your attention. ++Specific alerts triggered by the Enterprise IoT sensor currently remain in public preview. ++For more information, see: ++- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md) +- [Investigate and respond to an OT network alert](respond-ot-alert.md) +- [OT monitoring alert types and descriptions](alert-engine-messages.md) + ## January 2023 |Service area |Updates | |
devtest-labs | Devtest Lab Upload Vhd Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-powershell.md | To upload a VHD file by using PowerShell: 1. In a text editor, paste the generated PowerShell script you copied from the Azure portal. -1. Modify the `-LocalFilePath` parameter of the Add-AzureRmVhd cmdlet to point to the location of the VHD file you want to upload. +1. Modify the `-LocalFilePath` parameter of the Add-AzVhd cmdlet to point to the location of the VHD file you want to upload. -1. At a PowerShell command prompt, run the Add-AzureRmVhd cmdlet with the modified `-LocalFilePath` parameter. +1. At a PowerShell command prompt, run the Add-AzVhd cmdlet with the modified `-LocalFilePath` parameter. The process of uploading a VHD file might be lengthy depending on the size of the VHD file and your connection speed. |
digital-twins | Concepts Data Explorer Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-explorer-plugin.md | You can invoke the plugin in a Kusto query with the following command. There are evaluate azure_digital_twins_query_request(<Azure-Digital-Twins-endpoint>, <Azure-Digital-Twins-query>) ``` -The plugin works by calling the [Azure Digital Twins query API](/rest/api/digital-twins/dataplane/query), and the [query language structure](concepts-query-language.md) is the same as when using the API, with two exceptions: +The plugin works by calling the [Azure Digital Twins Query API](/rest/api/digital-twins/dataplane/query), and the [query language structure](concepts-query-language.md) is the same as when using the API, with two exceptions: * The `*` wildcard in the `SELECT` clause isn't supported. Instead, Azure Digital Twin queries that are executed using the plugin should use aliases in the `SELECT` clause. For example, consider the below Azure Digital Twins query that is executed using the API: |
digital-twins | Concepts Query Units | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-query-units.md | This article explains how to understand Query Units and track Query Unit consump ## Find the Query Unit consumption in Azure Digital Twins -When you run a query using the Azure Digital Twins [Query API](/rest/api/digital-twins/dataplane/query), you can examine the response header to track the number of QUs that the query consumed. Look for "query-charge" in the response sent back from Azure Digital Twins. +When you run a query using the [Azure Digital Twins Query API](/rest/api/digital-twins/dataplane/query), you can examine the response header to track the number of QUs that the query consumed. Look for "query-charge" in the response sent back from Azure Digital Twins. The [Azure Digital Twins SDKs](concepts-apis-sdks.md) allow you to extract the query-charge header from the pageable response. This section shows how to query for digital twins and how to iterate over the pageable response to extract the query-charge header. -The following code snippet demonstrates how you can extract the query charges incurred when calling the query API. It iterates over the response pages first to access the query-charge header, and then iterates over the digital twin results within each page. +The following code snippet demonstrates how you can extract the query charges incurred when calling the Query API. It iterates over the response pages first to access the query-charge header, and then iterates over the digital twin results within each page. :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/getQueryCharges.cs"::: |
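The snippet referenced above is C#. A rough Python equivalent is sketched below, assuming the `azure-digitaltwins-core` package; the `raw_response_hook` callback is the generic Azure SDK for Python mechanism for inspecting raw responses, and the exact way the `query-charge` header surfaces may vary by SDK version, so treat this as an approximation rather than the documented pattern:

```python
# pip install azure-identity azure-digitaltwins-core
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

# Placeholder value - substitute your own instance host name
url = "https://<your-instance>.api.<region>.digitaltwins.azure.net"
client = DigitalTwinsClient(url, DefaultAzureCredential())

charges = []

def capture_charge(pipeline_response):
    # Read the query-charge header from each page's raw HTTP response, if present
    header = pipeline_response.http_response.headers.get("query-charge")
    if header:
        charges.append(float(header))

query = "SELECT * FROM digitaltwins"
for twin in client.query_twins(query, raw_response_hook=capture_charge):
    print(twin["$dtId"])

print(f"Approximate Query Units consumed: {sum(charges)}")
```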
digital-twins | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-security.md | For instructions on how to enable a managed identity for Azure Digital Twins and [Azure Private Link](../private-link/private-link-overview.md) is a service that enables you to access Azure resources (like [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Storage](../storage/common/storage-introduction.md), and [Azure Cosmos DB](../cosmos-db/introduction.md)) and Azure-hosted customer and partner services over a private endpoint in your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md). -Similarly, you can use private endpoints for your Azure Digital Twins instance to allow clients located in your virtual network to securely access the instance over Private Link. Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md). +Similarly, you can use private access endpoints for your Azure Digital Twins instance to allow clients located in your virtual network to have secure REST API access to the instance over Private Link. Configuring a private access endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md). -The private endpoint uses an IP address from your Azure VNet address space. Network traffic between a client on your private network and the Azure Digital Twins instance traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Here's a visual representation of this system: +The private access endpoint uses an IP address from your Azure VNet address space. Network traffic between a client on your private network and the Azure Digital Twins instance traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Here's a visual representation of this system: :::image type="content" source="media/concepts-security/private-link.png" alt-text="Diagram showing a network that is a protected VNET with no public cloud access, connecting through Private Link to an Azure Digital Twins instance."::: -Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure, as well as avoid data exfiltration from your VNet. +Configuring a private access endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure, as well as avoid data exfiltration from your VNet. -For instructions on how to set up Private Link for Azure Digital Twins, see [Enable private access with Private Link](./how-to-enable-private-link.md). +For instructions on how to set up Private Link for Azure Digital Twins, see [Enable private access with Private Link](how-to-enable-private-link.md). ++>[!NOTE] +> Private network access with Azure Private Link applies to accessing Azure Digital Twins through its rest APIs. This feature does not apply to egress scenarios using Azure Digital Twins's [event routing](concepts-route-events.md) feature. 
### Design considerations |
digital-twins | How To Enable Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-enable-private-link.md | For more information and examples, see the [az dt network private-link reference ## Disable / enable public network access flags -You can configure your Azure Digital Twins instance to deny all public connections and allow only connections through private endpoints to enhance the network security. This action is done with a *public network access flag*. +You can configure your Azure Digital Twins instance to deny all public connections and allow only connections through private access endpoints to enhance the network security. This action is done with a *public network access flag*. This policy allows you to restrict API access to Private Link connections only. When the public network access flag is set to `disabled`, all REST API calls to the Azure Digital Twins instance data plane from the public cloud will return `403, Unauthorized`. Otherwise, when the policy is set to `disabled` and a request is made through a private endpoint, the API call will succeed. |
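As a minimal sketch of inspecting and setting this flag with generic ARM commands (instance and resource group names are placeholders; the article itself documents the supported portal and CLI paths):

```azurecli-interactive
# Check the current publicNetworkAccess value on the instance (placeholder names).
az resource show \
  --resource-group myRG \
  --name myDigitalTwins \
  --resource-type "Microsoft.DigitalTwins/digitalTwinsInstances" \
  --query properties.publicNetworkAccess

# Deny public connections so that only Private Link traffic can reach the data plane.
az resource update \
  --resource-group myRG \
  --name myDigitalTwins \
  --resource-type "Microsoft.DigitalTwins/digitalTwinsInstances" \
  --set properties.publicNetworkAccess=Disabled
```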
digital-twins | How To Manage Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md | Only properties that have been set at least once are returned when you retrieve >[!TIP] >The `displayName` for a twin is part of its model metadata, so it will not show when getting data for the twin instance. To see this value, you can [retrieve it from the model](how-to-manage-model.md#retrieve-models). -To retrieve multiple twins using a single API call, see the query API examples in [Query the twin graph](how-to-query-graph.md). +To retrieve multiple twins using a single API call, see the Query API examples in [Query the twin graph](how-to-query-graph.md). Consider the following model (written in [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/tree/master/DTDL)) that defines a Moon: |
digital-twins | How To Parse Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-parse-models.md | The following code shows an example of how to use the parser library to reflect ## Next steps -Once you're done writing your models, see how to upload them (and do other management operations) with the DigitalTwinsModels APIs: +Once you're done writing your models, see how to upload them (and do other management operations) with the Azure Digital Twins Models APIs: * [Manage DTDL models](how-to-manage-model.md) |
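Once the parser validates a model, the model file can also be uploaded from the Azure CLI. A small sketch, assuming a hypothetical instance name and model file path:

```azurecli-interactive
# Upload a validated DTDL model file to the instance (names and path are placeholders).
az dt model create --dt-name myDigitalTwins --models ./models/Moon.json
```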
digital-twins | How To Query Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-query-graph.md | -The article contains sample queries that illustrate the query language structure and common query operations for digital twins. It also describes how to run your queries after you've written them, using the Azure Digital Twins [Query API](/rest/api/digital-twins/dataplane/query) or an [SDK](concepts-apis-sdks.md#data-plane-apis). +The article contains sample queries that illustrate the query language structure and common query operations for digital twins. It also describes how to run your queries after you've written them, using the [Azure Digital Twins Query API](/rest/api/digital-twins/dataplane/query) or an [SDK](concepts-apis-sdks.md#data-plane-apis). > [!NOTE] > If you're running the sample queries below with an API or SDK call, you'll need to condense the query text into a single line. |
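For example, a query condensed to a single line can be run from the Azure CLI; the instance name and the property used in the filter are assumptions for illustration:

```azurecli-interactive
# Run a twin query as a single line, as required for API, SDK, and CLI calls.
az dt twin query --dt-name myDigitalTwins --query-command "SELECT * FROM digitaltwins T WHERE T.Temperature > 50"
```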
digital-twins | How To Use Postman With Digital Twins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-postman-with-digital-twins.md | Next, continue on to the next section to add a bearer token to the collection fo ### Configure authorization -Next, edit the collection you've created to configure some access details. Highlight the collection you've created and select the **View more actions** icon to pull up a menu. Select **Edit**. +Next, edit the collection you've created to configure some access details. Highlight the collection you've created and select the **View more actions** icon to display action options. Select **Edit**. :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-edit-collection.png" alt-text="Screenshot of Postman. The 'View more actions' icon for the imported collection is highlighted, and 'Edit' is highlighted in the options." lightbox="media/how-to-use-postman-with-digital-twins/postman-edit-collection.png"::: Follow these steps to add a bearer token to the collection for authorization. Us :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-authorization-imported.png" alt-text="Screenshot of the imported collection's edit dialog in Postman, showing the 'Authorization' tab." lightbox="media/how-to-use-postman-with-digital-twins/postman-authorization-imported.png"::: -1. Set the Type to **OAuth 2.0**, paste your access token into the Access Token box, and select **Save**. +1. Set the Type to **OAuth 2.0** and paste your access token into the Access Token box. You must use the correct token for the type of API you're using, as there are different tokens for data plane APIs versus control plane APIs. Select **Save**. :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-paste-token-imported.png" alt-text="Screenshot of Postman edit dialog for the imported collection, on the 'Authorization' tab. Type is 'OAuth 2.0', and Access Token box is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-paste-token-imported.png"::: > [!TIP]- > You must use the correct token for the type of API you're using. There is one access token for data plane APIS and another for control plane APIs. + > You can choose to turn on token sharing if you want to store the token with the request on Postman cloud, and potentially share your token with others. ### Other configuration You can now view your request under the collection, and select it to pull up its To make a Postman request to one of the Azure Digital Twins APIs, you'll need the URL of the API and information about what details it requires. You can find this information in the [Azure Digital Twins REST API reference documentation](/rest/api/azure-digitaltwins/). -To proceed with an example query, this article will use the Query API (and its [reference documentation](/rest/api/digital-twins/dataplane/query/querytwins)) to query for all the digital twins in an instance. +To proceed with an example query, this article will use the [Azure Digital Twins Query API](/rest/api/digital-twins/dataplane/query/querytwins) to query for all the digital twins in an instance. 1. Get the request URL and type from the reference documentation. For the Query API, this is currently *POST* `https://digitaltwins-host-name/query?api-version=2020-10-31`. 1. In Postman, set the type for the request and enter the request URL, filling in placeholders in the URL as required. 
Use your instance's host name from the [Prerequisites section](#prerequisites). |
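A hedged sketch of obtaining the bearer token referenced above: the data plane and control plane use different token audiences. The resource ID shown for the data plane is the one commonly documented for Azure Digital Twins; verify it against the article before relying on it.

```azurecli-interactive
# Data plane token (for digitaltwins.azure.net APIs such as Query and DigitalTwins).
az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0 --query accessToken --output tsv

# Control plane token (for management.azure.com APIs such as DigitalTwinsInstances).
az account get-access-token --resource https://management.azure.com/ --query accessToken --output tsv
```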
digital-twins | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md | Azure Digital Twins provides a rich event system to keep your graph current, inc ## Query for environment insights -Azure Digital Twins provides a powerful query API to help you extract insights from the live execution environment. The API can query with extensive search conditions, including property values, relationships, relationship properties, model information, and more. You can also combine queries, gathering a broad range of insights about your environment and answering custom questions that are important to you. For more details about the language used to craft these queries, see [Query language](concepts-query-language.md). +Azure Digital Twins provides a powerful [query API](/rest/api/digital-twins/dataplane/query) to help you extract insights from the live execution environment. The API can query with extensive search conditions, including property values, relationships, relationship properties, model information, and more. You can also combine queries, gathering a broad range of insights about your environment and answering custom questions that are important to you. For more details about the language used to craft these queries, see [Query language](concepts-query-language.md). ## Visualize environment in 3D Scenes Studio (preview) |
expressroute | Expressroute Locations Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md | The following table shows connectivity locations and the service providers for e | **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | Supported | CDC | | **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2| Supported | CDC, Equinix | | **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | Supported | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Teraco, Vodacom |-| **Chennai** | Tata Communications | 2 | South India | Supported | BSNL, DE-CIX, Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea | +| **Chennai** | Tata Communications | 2 | South India | Supported | BSNL, DE-CIX, Global CloudXchange (GCX), Lightstorm, SIFY, Tata Communications, VodafoneIdea | | **Chennai2** | Airtel | 2 | South India | Supported | Airtel |-| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo | -| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | Supported | CoreSite | +| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | +| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | Supported | CoreSite, DE-CIX | | **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | Interxion | | **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo| | **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite, Megaport, PacketFabric, Zayo |-| **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | | -| **Doha2** | [Ooredoo](https://www.ooredoo.qa/portal/OoredooQatar/b2b-data-centre) | 3 | Qatar Central | Supported | | +| **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect | +| **Doha2** | [Ooredoo](https://www.ooredoo.qa/portal/OoredooQatar/b2b-data-centre) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect | | **Dubai** | [PCCS](http://www.pacificcontrols.net/cloudservices/) | 3 | UAE North | Supported | Etisalat UAE | | **Dubai2** | [du 
datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom | | **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport, Zayo| | **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | Supported | Interxion | | **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | Supported | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems |-| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX, Deutsche Telekom AG, Equinix | +| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX, Deutsche Telekom AG, Equinix, InterCloud | | **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | Supported | Colt, Equinix, InterCloud, Megaport, Swisscom | | **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | Supported | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon, Zayo |-| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | Supported | China Mobile International, China Telecom Global, Equinix, iAdvantage, Megaport, PCCW Global Limited, SingTel | +| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | Supported | China Mobile International, China Telecom Global, Deutsche Telekom AG, Equinix, iAdvantage, Megaport, PCCW Global Limited, SingTel | | **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | Supported | NTT Communications, Telin, XL Axiata | | **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | Supported | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Orange, Teraco, Vodacom |-| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom | +| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | DE-CIX, TIME dotCom | | **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | Supported | CenturyLink Cloud Connect, Megaport, PacketFabric |-| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | Supported | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, Intelsat, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 
3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo | -| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, GTT, Interxion, IX Reach, JISC, Megaport, NTT Global DataCenters EMEA, Orange, SES, Sohonet, Telehouse - KDDI, Zayo | -| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | CoreSite, Equinix*, Megaport, Neutrona Networks, NTT, Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* | -| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix | +| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | Supported | AT&T NetBond, Bezeq International, British Telecom, CenturyLink, Colt, Equinix, euNetworks, Intelsat, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo | +| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, GTT, Interxion, IX Reach, JISC, Megaport, NTT Global DataCenters EMEA, Ooredoo Cloud Connect, Orange, SES, Sohonet, Telehouse - KDDI, Zayo | +| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | CoreSite, Cloudflare, Equinix*, Megaport, Neutrona Networks, NTT, Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. 
Create new circuits in Los Angeles2.* | +| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix, PacketFabric | | **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | Supported | DE-CIX, Interxion, Megaport, Telefonica | | **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt, DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect | | **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | Supported | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Orange, Telstra Corporation, TPG Telecom | The following table shows connectivity locations and the service providers for e | **Mumbai** | Tata Communications | 2 | West India | Supported | BSNL, DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon | | **Mumbai2** | Airtel | 2 | West India | Supported | Airtel, Sify, Orange, Vodafone Idea | | **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | Supported | Colt, DE-CIX, Megaport |-| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | Supported | CenturyLink Cloud Connect, Coresite, DE-CIX, Equinix, InterCloud, Megaport, Packet, Zayo | +| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | Supported | CenturyLink Cloud Connect, Coresite, Crown Castle, DE-CIX, Equinix, InterCloud, Lightpath, Megaport, NTT Communications, Packet, Zayo | | **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | Supported | British Telecom, Colt, Jisc, Level 3 Communications, Next Generation Data | | **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | Supported | AT TOKYO, BBIX, Colt, Equinix, Internet Initiative Japan Inc. 
- IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications | | **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | Supported| GlobalConnect, Megaport, Telenor, Telia Carrier | | **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | Supported | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo | | **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix |-| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Megaport, NextDC | -| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | Supported | Cox Business Cloud Port, CenturyLink Cloud Connect, Megaport, Zayo | +| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Equinix, Megaport, NextDC | +| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | Supported | Cox Business Cloud Port, CenturyLink Cloud Connect, DE-CIX, Megaport, Zayo | | **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | Supported | |-| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| Supported | Tata Communications | +| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| Supported | Lightstorm, Tata Communications | | **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada, Equinix, Megaport, Telus | | **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | Supported | Megaport, Transtelco| | **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | Supported | | The following table shows connectivity locations and the service providers for e | **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | KT | | **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, InterCloud, Internet2, IX Reach, Packet, PacketFabric, Level 3 Communications, Megaport, Orange, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | Supported | Colt, Coresite | -| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | Supported | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone | +| **Singapore** | [Equinix 
SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | Supported | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, PCCW Global Limited, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone | | **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | Supported | CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Epsilon Global Communications, Equinix, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI |-| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | Supported |GlobalConnect, Megaport | +| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | Supported |GlobalConnect, Megaport, Telenor | | **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | Sweden Central | Supported | Equinix, Interxion, Megaport, Telia Carrier | | **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ | | **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport, NextDC | The following table shows connectivity locations and the service providers for e | **Tel Aviv** | Bezeq International | 2 | n/a | Supported | | | **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. 
- IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon </br></br> | | **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO, China Unicom Global, Colt, Equinix, Fibrenoire, IX Reach, Megaport, PCCW Global Limited, Tokai Communications |-| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | | +| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | NEC, SCSK | | **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach Megaport, Telus, Verizon, Zayo | | **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | | | **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada, Cologix, Megaport, Telus, Zayo |-| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/), [Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US, East US 2 | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, Internet2, InterCloud, Iron Mountain, IX Reach, Level 3 Communications, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | +| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/), [Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US, East US 2 | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, Internet2, InterCloud, Iron Mountain, IX Reach, Level 3 Communications, Lightpath, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US, East US 2 | n/a | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo | | **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom, Zayo | |
expressroute | Expressroute Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md | The following table shows locations by service provider. If you want to view ava | **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka, Tokyo, Tokyo2 | | **[BCX](https://www.bcx.co.za/solutions/connectivity/)** |Supported |Supported | Cape Town, Johannesburg| | **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported | Montreal, Toronto, Quebec City, Vancouver |+| **[Bezeq International](https://selfservice.bezeqint.net/english)** | Supported | Supported | London | | **[BICS](https://www.bics.com/cloud-connect/)** | Supported | Supported | Amsterdam2, London2 | | **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** |Supported |Supported | Amsterdam, Amsterdam2, Chicago, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC | | **BSNL** |Supported |Supported | Chennai, Mumbai | The following table shows locations by service provider. If you want to view ava | **[China Unicom Global](https://cloudbond.chinaunicom.cn/home-en)** |Supported |Supported | Frankfurt, Hong Kong, Singapore2, Tokyo2 | | **Chunghwa Telecom** |Supported |Supported | Taipei | | **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported | Miami |+| **Cloudflare** |Supported |Supported | Los Angeles | | **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** |Supported |Supported | Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC | | **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, Osaka, Paris, Seoul, Silicon Valley, Singapore2, Tokyo, Tokyo2, Washington DC, Zurich | | **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported | Chicago, Silicon Valley, Washington DC | | **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported | Chicago, Chicago2, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 | | **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** |Supported |Supported | Dallas, Phoenix, Silicon Valley, Washington DC |-| **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported | Amsterdam2, Chennai, Dallas, Dubai2, Frankfurt, Frankfurt2, Madrid, Marseille, Mumbai, Munich, New York, Singapore2 | +| **Crown Castle** |Supported |Supported | New York | +| **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported | Amsterdam2, Chennai, Chicago2, Dallas, Dubai2, Frankfurt, Frankfurt2, Kuala Lumpur, Madrid, Marseille, Mumbai, Munich, New York, Phoenix, Singapore2 | | **[Devoli](https://devoli.com/expressroute)** | Supported |Supported | Auckland, Melbourne, Sydney | | **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt |-| 
**[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-services/managed-platform-services/azure-managed-services/cloudconnect-for-azure)** | Supported |Supported | Frankfurt2 | +| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-services/managed-platform-services/azure-managed-services/cloudconnect-for-azure)** | Supported |Supported | Frankfurt2, Hong Kong2 | | **du datamena** |Supported |Supported | Dubai2 | | **[eir](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin| | **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** |Supported |Supported | Hong Kong2, Singapore, Singapore2 |-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, Hong Kong2, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Paris2, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Tokyo2, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* | +| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, Hong Kong2, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Paris2, Perth, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Tokyo2, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* | | **Etisalat UAE** |Supported |Supported | Dubai | | **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London | | **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported | Taipei | The following table shows locations by service provider. If you want to view ava | **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai, Mumbai | | **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 | | **Intelsat** | Supported | Supported | London2, Washington DC2 |-| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Frankfurt, Geneva, Hong Kong, London, New York, Paris, Sao Paulo, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich | +| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Frankfurt, Frankfurt2, Geneva, Hong Kong, London, New York, Paris, Sao Paulo, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich | | **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported | Chicago, Dallas, Silicon Valley, Washington DC | | **[Internet Initiative Japan Inc. 
- IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported | Osaka, Tokyo, Tokyo2 | | **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported | Cape Town, Johannesburg, London |-| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Copenhagen, Dublin, Dublin2, Frankfurt, London, London2, Madrid, Marseille, Paris, Zurich | +| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Copenhagen, Dublin, Dublin2, Frankfurt, London, London2, Madrid, Marseille, Paris, Stockholm, Zurich | | **[IRIDEOS](https://irideos.it/)** |Supported |Supported | Milan | | **Iron Mountain** | Supported |Supported | Washington DC | | **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**|Supported |Supported | Amsterdam, London2, Silicon Valley, Tokyo2, Toronto, Washington DC | | **Jaguar Network** |Supported |Supported | Marseille, Paris | | **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** |Supported |Supported | London, London2, Newport(Wales) |+| **KDDI** | Supported |Supported | Osaka, Tokyo, Tokyo2 | | **[KINX](https://www.kinx.net/service/cloudhub/clouds/microsoft_azure_expressroute/?lang=en)** |Supported |Supported | Seoul | | **[Kordia](https://www.kordia.co.nz/cloudconnect)** | Supported |Supported | Auckland, Sydney | | **[KPN](https://www.kpn.com/zakelijk/cloud/connect.htm)** | Supported | Supported | Amsterdam | | **[KT](https://cloud.kt.com/)** | Supported | Supported | Seoul, Seoul2 | | **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** |Supported |Supported | Amsterdam, Chicago, Dallas, London, Newport (Wales), Sao Paulo, Seattle, Silicon Valley, Singapore, Washington DC | | **LG CNS** |Supported |Supported | Busan, Seoul |+| **Lightpath** |Supported |Supported | New York, Washington DC | +| **Lightstorm** |Supported |Supported | Pune, Chennai | | **[Liquid Intelligent Technologies ](https://liquidcloud.africa/connect/)** |Supported |Supported | Cape Town, Johannesburg | | **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul | | **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported | Amsterdam, Atlanta, Auckland, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, Queretaro (Mexico), San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2 Toronto, Vancouver, Washington DC, Washington DC2, Zurich | | **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported | London | | **MTN Global Connect** |Supported |Supported | Cape Town, Johannesburg| | **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** |Supported |Supported | Bangkok |+| **NEC** |Supported |Supported | Tokyo3 | +| **[NETSG](https://www.netsg.co/dc-cloud/cloud-and-dc-interconnect/)** |Supported |Supported | Melbourne, Sydney2 | | **[Neutrona Networks](https://flo.net/)** |Supported |Supported | Dallas, Los Angeles, Miami, Sao Paulo, Washington DC | | **[Next 
Generation Data](https://vantage-dc-cardiff.co.uk/)** |Supported |Supported | Newport(Wales) | | **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** |Supported |Supported | Melbourne, Perth, Sydney, Sydney2 | | **NL-IX** |Supported |Supported | Amsterdam2, Dublin2 | | **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** |Supported |Supported | Amsterdam2, Madrid |-| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** |Supported |Supported | Amsterdam, Hong Kong SAR, London, Los Angeles, Osaka, Singapore, Sydney, Tokyo, Washington DC | +| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** |Supported |Supported | Amsterdam, Hong Kong SAR, London, Los Angeles, New York, Osaka, Singapore, Sydney, Tokyo, Washington DC | | **NTT Communications India Network Services Pvt Ltd** |Supported |Supported | Chennai, Mumbai | | **NTT Communications - Flexible InterConnect** |Supported |Supported | Jakarta, Osaka, Singapore2, Tokyo, Tokyo2 | | **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |Supported |Supported | Tokyo | | **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |Supported |Supported | Amsterdam2, Berlin, Frankfurt, London2 | | **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |Supported |Supported | Osaka |-| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Doha2, London2, Marseille | +| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Doha, Doha2, London2, Marseille | | **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** |Supported |Supported | Melbourne, Sydney | | **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam, Amsterdam2, Chicago, Dallas, Dubai2, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Mumbai2, Melbourne, Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC | | **[Orixcom](https://www.orixcom.com/cloud-solutions/)** | Supported | Supported | Dubai2 |-| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Las Vegas, London, Miami, New York, Silicon Valley, Toronto, Washington DC | -| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported | Chicago, Hong Kong, Hong Kong2, London, Singapore2, Tokyo2 | +| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Las Vegas, London, Los Angeles2, Miami, New York, Silicon Valley, Toronto, Washington DC | +| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported | Chicago, Hong Kong, Hong Kong2, London, Singapore, Singapore2, Tokyo2 | | **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | Supported | Supported | Auckland | | **[Reliance Jio](https://www.jio.com/business/jio-cloud-connect)** | Supported | Supported | Mumbai | | **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan | The following table shows locations by service provider. 
If you want to view ava | **[Sohonet](https://www.sohonet.com/fastlane/)** |Supported |Supported | Los Angeles, London2 | | **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** |Supported |Supported | Auckland, Sydney | | **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva, Zurich |-| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** |Supported |Supported | Amsterdam, Chennai, Hong Kong SAR, London, Mumbai, Pune, Sao Paulo, Silicon Valley, Singapore, Washington DC | +| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** |Supported |Supported | Amsterdam, Chennai, Chicago, Hong Kong SAR, London, Mumbai, Pune, Sao Paulo, Silicon Valley, Singapore, Washington DC | | **[Telefonica](https://www.telefonica.com/es/home)** |Supported |Supported | Amsterdam, Sao Paulo, Madrid | | **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported | London, London2, Singapore2 |-| **Telenor** |Supported |Supported | Amsterdam, London, Oslo | +| **Telenor** |Supported |Supported | Amsterdam, London, Oslo, Stavanger | | **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported | Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong, London, Oslo, Paris, Seattle, Silicon Valley, Stockholm, Washington DC | | **[Telin](https://www.telin.net/product/data-connectivity/telin-cloud-exchange)** | Supported | Supported | Jakarta | | **Telmex Uninet**| Supported | Supported | Dallas | |
external-attack-surface-management | Data Connections Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/data-connections-overview.md | - Title: Data connections overview (preview)- -description: This article describes the data connections functionality in Defender EASM, enabling users to easily export asset or attack surface insight data to either Log Analytics or Azure Data Explorer. --- Previously updated : 1/30/2022----# Data connections overview (preview) --Microsoft Defender External Attack Surface Management (Defender EASM) now offers data connections to help users seamlessly integrate their attack surface data into other Microsoft solutions to supplement existing workflows with new insights. Users must get data from Defender EASM into the other security tools they use for remediation purposes to best operationalize their attack surface data. --The data connector sends Defender EASM asset data to two different platforms: Microsoft Log Analytics and Azure Data Explorer. Users need to be active customers to export Defender EASM data to either tool, and data connections are subject to the pricing model for each respective platform. --[Microsoft Log Analytics](https://learn.microsoft.com/azure/azure-monitor/logs/log-analytics-workspace-overview) provides SIEM (security information and event management) and SOAR (security orchestration, automation and response) capabilities. Defender EASM asset or insights information can be used in Log Analytics to enrich existing workflows in conjunction with other security data. This information can supplement firewall and configuration information, threat intelligence, compliance data and more to provide visibility into your external-facing infrastructure on the open internet. Users can create or enrich security incidents, build investigation playbooks, train machine learning algorithms, or trigger remediation actions. --[Azure Data Explorer](https://learn.microsoft.com/azure/data-explorer/data-explorer-overview) is a big data analytics platform that helps users analyze high volumes of data from various sources with flexible customization capabilities. Defender EASM asset and insights data can be integrated to leverage visualization, query, ingestion and management capabilities within the platform. Whether building custom reports with Power BI or hunting for assets that match precise KQL queries, exporting Defender EASM data to Azure Data Explorer enables users to leverage their attack surface data with endless customization potential. -- ----## Data content options --Defender EASM Data Connections offers users the ability to integrate two different kinds of attack surface data into the tool of their choice. Users can elect to migrate asset data, attack surface insights or both data types. Asset data provides granular details about your entire inventory, whereas attack surface insights provide immediately actionable insights based on Defender EASM dashboards. --To accurately present the infrastructure that matters most to your organization, please note that both content options will only include assets in the "Approved Inventory" state. ---### Asset data --The Asset Data option will send data about all your inventory assets to the tool of your choice. This option is best for use cases where the granular underlying metadata is key to the operationalization of your Defender EASM integration (e.g. Sentinel, customized reporting in Data Explorer). 
Users can export high-level context on every asset in inventory as well as granular details specific to the particular asset type. This option does not provide any pre-determined insights about the assets; instead, it offers an expansive amount of data so users can surface the customized insights they care about most. ---### Attack surface insights --Attack Surface Insights provide an actionable set of results based on the key insights delivered through dashboards in Defender EASM. This option provides less granular metadata on each asset; instead, it categorizes assets based on the corresponding insight(s) and provides the high-level context required to investigate further. This option is ideal for those who want to integrate these pre-determined insights into custom reporting workflows in conjunction with data from other tools. -----## Configuring data connections --### Accessing data connections --Users can access Data Connections from the **Manage** section of the left-hand navigation pane within their Defender EASM resource blade. This page displays the data connectors for both Log Analytics and Azure Data Explorer, listing any current connections and providing the option to add, edit or remove connections. -- -----### Connection prerequisites --To successfully create a data connection, users must first ensure that they have completed the required steps to grant Defender EASM permission to the tool of their choice. This process enables the application to ingest our exported data and provides the authentication credentials needed to configure the connection. -----#### Configuring Log Analytics permissions --1. Open the Log Analytics workspace that will ingest your Defender EASM data, or [create a new workspace](https://learn.microsoft.com/azure/azure-monitor/logs/quick-create-workspace?tabs=azure-portal). --2. Select **Access control (IAM)** from the left-hand navigation pane. For more information on access control, see [identity documentation](https://learn.microsoft.com/azure/cloud-adoption-framework/decision-guides/identity/). --  --3. On this page, select **+Add** to create a new role assignment. --4. From the **Role** tab, select **Contributor**. Click **Next**. --5. Open the **Members** tab. Click **+Select members** to open a configuration pane. Search for **EASM-API** and click on the value in the members list. Once done, click **Select**, then **Review + assign**. --6. Once the role assignment has been created, select **Agents** from the **Settings** section of the left-hand navigation menu. --  --7. Expand the **Log Analytics agent instructions** section to view your Workspace ID and Primary key. These values will be used to set up your data connection. Save the values in the following format: WorksapceId=XXX;ApiKey=YYY -----#### Configuring Data Explorer permissions --1. Open the Data Explorer cluster that will ingest your Defender EASM data or [create a new cluster](https://learn.microsoft.com/azure/data-explorer/create-cluster-database-portal). --2. Select **Databases** in the Data section of the left-hand navigation menu. --3. Select **+Add Database** to create a database to house your Defender EASM data. --  --4. Name your database, configure retention and cache periods, then select **Create**. --  --5. Once your Defender EASM database has been created, click on the database name to open the details page. Select **Permissions** from the Overview section of the left-hand navigation menu. 
--  -- To successfully export Defender EASM data to Data Explorer, users must create two new permissions for the EASM API: **user** and **ingestor**. - -6. First, select **+Add** and create a user. Search for **EASM API**, select the value then click **Select**. --7. Select **+Add** to create an ingestor. Follow the same steps outlined above to add the EASM API as an ingestor. --8. Your database is now ready to connect to Defender EASM. You will need the cluster name, database name and region in the following format when configuring your Data Connection: ClusterName=XXX;Region=YYY;DatabaseName=ZZZ -----### Add a connection --Users can connect their Defender EASM data to either Log Analytics or Azure Data Explorer. To do so, simply select **Add connection** for the appropriate tool from the Data Connections page. --A configuration pane will open on the right-hand side of the Data Connections screen. The following four fields are required: -- **Name**: enter a name for this data connection. -- **Connection String**: enter the details required to connect your Defender EASM resource to another tool. For Log Analytics, users enter the workspaceID and corresponding API key associated with their account. For Azure Data Explorer, users enter the cluster name, region and database name associated with their account. Both values must be entered in the format shown when the field is blank. -- **Content**: users can select to integrate asset data, attack surface insights or both datasets. -- **Frequency**: select the frequency that the Defender EASM connection sends updated data to the tool of your choice. Available options are daily, weekly and monthly. ---  ---Once all four fields are configured, select **Add** to create the data connection. At this point, the Data Connections page will display a banner that indicates the resource has been successfully created and data will begin populating within 30 minutes. Once connections are created, they will be listed under the applicable tool on the main Data Connections page. -----### Edit or delete a connection --Users can edit or delete a data connection. For example, you may notice that a connection is listed as "Disconnected" and would therefore need to re-enter the configuration details to fix the issue. --To edit or delete a data connection: --1. Select the appropriate connection from the list on the main Data Connections page. --  --2. This action will open a page that provides additional data about the connection. This page displays the configurations you elected when creating the connection, as well as any error messages. Users will also see the following additional data: - - **Recurring**: the day of the week or month that Defender EASM sends updated data to the connected tool. - - **Created**: the date and time that the data connection was created. - - **Updated**: the date and time that the data connection was last updated. --  ---3. From this page, users can elect to reconnect, edit or delete their data connection. -- - **Reconnect**: this option attempts to validate the data connection without any changes to the configuration. This option is best for those who have validated the authentication credentials used for the data connection. - - **Edit**: this option allows users to change the configuration for the data connection. - - **Delete**: this option deletes the data connection. 
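As a sketch of gathering the Log Analytics values that the **Connection String** field described above expects, the commands below look up the workspace ID and primary key and compose them into the documented format. The workspace and resource group names are placeholders; confirm the exact string format shown in the blank field in the portal.

```azurecli-interactive
# Look up the workspace ID (customerId) and primary shared key (placeholder names).
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group myRG --workspace-name myWorkspace \
  --query customerId --output tsv)
PRIMARY_KEY=$(az monitor log-analytics workspace get-shared-keys \
  --resource-group myRG --workspace-name myWorkspace \
  --query primarySharedKey --output tsv)

# Compose the connection string in the format the data connection expects.
echo "WorkspaceId=$WORKSPACE_ID;ApiKey=$PRIMARY_KEY"
```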
-----## Next steps --- [Defender for EASM REST API documentation](https://learn.microsoft.com/rest/api/defenderforeasm/)-- [Understanding asset details](https://learn.microsoft.com/azure/external-attack-surface-management/understanding-asset-details)-- [Inventory filters overview](https://learn.microsoft.com/azure/external-attack-surface-management/inventory-filters)---- |
firewall | Premium Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md | -Organizations can use Premium stock-keeping unit (SKU) features like IDPS and TLS inspection to prevent malware and viruses from spreading across networks in both lateral and horizontal directions. To meet the increased performance demands of IDPS and TLS inspection, Azure Firewall Premium uses a more powerful virtual machine SKU. Like the Standard SKU, the Premium SKU can seamlessly scale up to 30 Gbps and integrate with availability zones to support the service level agreement (SLA) of 99.99 percent. The Premium SKU complies with Payment Card Industry Data Security Standard (PCI DSS) environment needs. +Organizations can use Premium stock-keeping unit (SKU) features like IDPS and TLS inspection to prevent malware and viruses from spreading across networks in both lateral and horizontal directions. To meet the increased performance demands of IDPS and TLS inspection, Azure Firewall Premium uses a more powerful virtual machine SKU. Like the Standard SKU, the Premium SKU can seamlessly scale up to 100 Gbps and integrate with availability zones to support the service level agreement (SLA) of 99.99 percent. The Premium SKU complies with Payment Card Industry Data Security Standard (PCI DSS) environment needs. :::image type="content" source="media/premium-features/premium-overview.png" alt-text="Azure Firewall Premium overview diagram"::: |
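As a minimal sketch of the Premium capabilities mentioned here, the command below creates a Premium firewall policy with IDPS in alert mode. The names are placeholders, TLS inspection needs additional certificate configuration that isn't shown, and the parameters assume a current Azure CLI with Azure Firewall support.

```azurecli-interactive
# Create a Premium firewall policy with IDPS enabled in Alert mode (placeholder names).
az network firewall policy create \
  --name myPremiumPolicy \
  --resource-group myRG \
  --sku Premium \
  --idps-mode Alert
```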
frontdoor | Create Front Door Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md | az group create --name myRGFD --location centralus ``` ## Create an Azure Front Door profile +In this step, you'll create the Azure Front Door profile that your two App services will use as your origin. + Run [az afd profile create](/cli/azure/afd/profile#az-afd-profile-create) to create an Azure Front Door profile. > [!NOTE] az afd profile create \ ## Create two instances of a web app -You need two instances of a web application that run in different Azure regions for this tutorial. Both the web application instances run in Active/Active mode, so either one can service traffic. --If you don't already have a web app, use the following script to set up two example web apps. +In this step, you'll create two web app instances that run in different Azure regions for this tutorial. Both the web application instances run in Active/Active mode, so either one can service traffic. This configuration differs from an *Active/Stand-By* configuration, where one acts as a failover. ### Create app service plans az appservice plan create \ ### Create web apps -Run [az webapp create](/cli/azure/webapp#az-webapp-create) to create a web app in each of the app service plans in the previous step. Web app names have to be globally unique. +Once the app service plans have been created, run [az webapp create](/cli/azure/webapp#az-webapp-create) to create a web app in each of the app service plans in the previous step. Web app names have to be globally unique. ```azurecli-interactive az webapp create \ az afd profile create \ ``` ### Add an endpoint -Run [az afd endpoint create](/cli/azure/afd/endpoint#az-afd-endpoint-create) to create an endpoint in your profile. You can create multiple endpoints in your profile after finishing the create experience. +In this step, you'll create an endpoint in your Front Door profile. In Front Door Standard/Premium, an *endpoint* is a logical grouping of one or more routes that are associated with domain names. Each endpoint is assigned a domain name by Front Door, and you can associate endpoints with custom domains by using routes. Front Door profiles can also contain multiple endpoints. ++Run [az afd endpoint create](/cli/azure/afd/endpoint#az-afd-endpoint-create) to create an endpoint in your profile. ```azurecli-interactive az afd endpoint create \ az afd endpoint create \ --enabled-state Enabled ``` +For more information about endpoints in Front Door, please read [Endpoints in Azure Front Door](/azure/frontdoor/endpoint). + ### Create an origin group +You'll now create an origin group that will define the traffic and expected responses for your app instances. Origin groups also define how origins should be evaluated by health probes, which you'll also define in this step. + Run [az afd origin-group create](/cli/azure/afd/origin-group#az-afd-origin-group-create) to create an origin group that contains your two web apps. ```azurecli-interactive az afd origin-group create \ ### Add an origin to the group -Run [az afd origin create](/cli/azure/afd/origin#az-afd-origin-create) to add an origin to your origin group. +You'll now add both of your app instances created earlier as origins to your new origin group. Origins in Front Door refers to applications that Front Door will retrieve contents from when caching isn't enabled or when a cache gets missed. 
++Run [az afd origin create](/cli/azure/afd/origin#az-afd-origin-create) to add your first app instance as an origin to your origin group. ```azurecli-interactive az afd origin create \ az afd origin create \ --https-port 443 ``` -Repeat this step and add your second origin. +Repeat this step and add your second app instance as an origin to your origin group. ```azurecli-interactive az afd origin create \ az afd origin create \ --https-port 443 ``` +For more information about origins, origin groups, and health probes, please read [Origins and origin groups in Azure Front Door](/azure/frontdoor/origin). + ### Add a route -Run [az afd route create](/cli/azure/afd/route#az-afd-route-create) to map your endpoint to the origin group. This route forwards requests from the endpoint to your origin group. +You'll now add a route to map the endpoint that you created earlier to the origin group. This route forwards requests from the endpoint to your origin group. ++Run [az afd route create](/cli/azure/afd/route#az-afd-route-create) to map your endpoint to the origin group. ```azurecli-interactive az afd route create \ az afd route create \ --supported-protocols Http Https \ --link-to-default-domain Enabled ```-Your Front Door profile would become fully functional with the last step. ++To learn more about routes in Azure Front Door, please read [Traffic routing methods to origin](/azure/frontdoor/routing-methods). ## Create a new security policy +Azure Web Application Firewall (WAF) on Front Door provides centralized protection for your web applications, defending them against common exploits and vulnerabilities. ++In this tutorial, you'll create a WAF policy that adds two managed rules. You can also create WAF policies with custom rules. + ### Create a WAF policy Run [az network front-door waf-policy create](/cli/azure/network/front-door/waf-policy#az-network-front-door-waf-policy-create) to create a new WAF policy for your Front Door. This example creates a policy that is enabled and in prevention mode. az network front-door waf-policy create \ > [!NOTE] > If you select `Detection` mode, your WAF doesn't block any requests. +To learn more about WAF policy settings for Front Door, please read [Policy settings for Web Application Firewall on Azure Front Door](/azure/web-application-firewall/afds/waf-front-door-policy-settings). + ### Assign managed rules to the WAF policy +Azure-managed rule sets provide an easy way to protect your application against common security threats. + Run [az network front-door waf-policy managed-rules add](/cli/azure/network/front-door/waf-policy/managed-rules#az-network-front-door-waf-policy-managed-rules-add) to add managed rules to your WAF Policy. This example adds Microsoft_DefaultRuleSet_1.2 and Microsoft_BotManagerRuleSet_1.0 to your policy. az network front-door waf-policy managed-rules add \ --type Microsoft_BotManagerRuleSet \ --version 1.0 ```++To learn more about managed rules in Front Door, please read [Web Application Firewall DRS rule groups and rules](/azure/web-application-firewall/afds/waf-front-door-drs). + ### Create the security policy +You'll now apply the WAF policy to your Front Door by creating a security policy. The security policy applies the Azure-managed rules to the endpoint that you defined earlier. + Run [az afd security-policy create](/cli/azure/afd/security-policy#az-afd-security-policy-create) to apply your WAF policy to the endpoint's default domain. > [!NOTE] |
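For reference, a fully expanded security policy command for this walkthrough might look like the following sketch. The resource group, profile, endpoint, and WAF policy names (myRGFD, contosoafd, contosofrontend, contosoWAF) and the subscription ID are placeholders; substitute the values you used in the earlier steps.

```azurecli-interactive
# Sketch only: apply the WAF policy to the endpoint's default domain.
# All resource names and the subscription ID below are placeholders.
az afd security-policy create \
    --resource-group myRGFD \
    --profile-name contosoafd \
    --security-policy-name contososecurity \
    --domains /subscriptions/<subscription-id>/resourcegroups/myRGFD/providers/Microsoft.Cdn/profiles/contosoafd/afdEndpoints/contosofrontend \
    --waf-policy /subscriptions/<subscription-id>/resourcegroups/myRGFD/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/contosoWAF
```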
frontdoor | Create Front Door Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-portal.md | +In this quickstart, you'll learn how to create an Azure Front Door profile using the Azure portal. You can create an Azure Front Door profile through *Quick create* with basic configurations or through the *Custom create* which allows a more advanced configuration. -In this quickstart, you'll learn how to create an Azure Front Door profile using the Azure portal. You can create an Azure Front Door profile through *Quick Create* with basic configurations or through the *Custom create* which allows a more advanced configuration. With *Custom create*, you deploy two App services. Then, you create the Azure Front Door profile using the two App services as your origin. Lastly, you'll verify connectivity to your App services using the Azure Front Door frontend hostname. +With *Custom create*, you deploy two App services. Then, you create the Azure Front Door profile using the two App services as your origin. Lastly, you'll verify connectivity to your App services using the Azure Front Door frontend hostname. ## Prerequisites An Azure account with an active subscription. [Create an account for free](https ## Create Front Door profile - Custom Create +In the previous tutorial, you created an Azure Front Door profile through *Quick create*, which created your profile with basic configurations. ++You'll now create an Azure Front Door profile using *Custom create* and deploy two App services that your Azure Front Door profile will use as your origin. + ### Create two Web App instances If you already have services to use as an origin, skip to [create a Front Door for your application](#create-a-front-door-for-your-application). -In this example, we create two Web App instances that is deployed in two different Azure regions. Both web application instances will run in *Active/Active* mode, so either one can service incoming traffic. This configuration differs from an *Active/Stand-By* configuration, where one acts as a failover. +In this example, we create two Web App instances that are deployed in two different Azure regions. Both web application instances will run in *Active/Active* mode, so either one can service incoming traffic. This configuration differs from an *Active/Stand-By* configuration, where one acts as a failover. Use the following steps to create two Web Apps used in this example. Use the following steps to create two Web Apps used in this example. 1. Select **Review + create**, review the summary, and then select **Create**. Deployment of the Web App can take up to a minute. -1. After your create the first Web App, create a second Web App. Use the same settings as above, except for the following settings: +1. After you create the first Web App, create a second Web App. Use the same settings as above, except for the following settings: | Setting | Description | |--|--| Use the following steps to create two Web Apps used in this example. ### Create a Front Door for your application -Configure Azure Front Door to direct user traffic based on lowest latency between the two Web Apps origins. You will also secure your Azure Front Door with a Web Application Firewall (WAF) policy. +Configure Azure Front Door to direct user traffic based on lowest latency between the two Web Apps origins. You'll also secure your Azure Front Door with a Web Application Firewall (WAF) policy. 1. Sign in to the [Azure portal](https://portal.azure.com). |
frontdoor | Migrate Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier.md | Select **Grant** to add managed identities from the last section to all the Key > [!NOTE] > If you cancel the migration, only the new Front Door profile will get deleted. Any new WAF policy copies will need to be manually deleted. + > [!WARNING] + > Once the **Migrate** step is initiated, deleting the new profile deletes the production configuration, which is an irreversible change. ++ 1. Once the migration completes, you can select the banner at the top of the page or the link in the success message to go to the new Front Door profile. :::image type="content" source="./media/migrate-tier/successful-migration.png" alt-text="Screenshot of a successful Front Door migration."::: |
hdinsight | Apache Kafka Ssl Encryption Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-ssl-encryption-authentication.md | Title: Apache Kafka TLS encryption & authentication - Azure HDInsight -description: Set up TLS encryption for communication between Kafka clients and Kafka brokers as well as between Kafka brokers. Set up SSL authentication of clients. +description: Set up TLS encryption for communication between Kafka clients and Kafka brokers, and set up SSL authentication of clients. Previously updated : 03/31/2022 Last updated : 02/03/2023 # Set up TLS encryption and authentication for Apache Kafka in Azure HDInsight This article shows you how to set up Transport Layer Security (TLS) encryption, > [!Important] > There are two clients which you can use for Kafka applications: a Java client and a console client. Only the Java client `ProducerConsumer.java` can use TLS for both producing and consuming. The console producer client `console-producer.sh` does not work with TLS. -> [!Note] -> HDInsight Kafka console producer with version 1.1 does not support SSL. - ## Apache Kafka broker setup The Kafka TLS broker setup will use four HDInsight cluster VMs in the following way: The Kafka TLS broker setup will use four HDInsight cluster VMs in the following The summary of the broker setup process is as follows: 1. The following steps are repeated on each of the three worker nodes:- 1. Generate a certificate. 1. Create a cert signing request. 1. Send the cert signing request to the Certificate Authority (CA). Use the following detailed instructions to complete the broker setup: cd ssl ``` -1. On each of the worker nodes, execute the following steps using the code snippet below. +1. On each of the worker nodes, execute the following steps using the code snippet. 1. Create a keystore and populate it with a new private certificate. 1. Create a certificate signing request. 1. SCP the certificate signing request to the CA (headnode0) Use the following detailed instructions to complete the broker setup: keytool -keystore kafka.server.keystore.jks -certreq -file cert-file -storepass "MyServerPassword123" -keypass "MyServerPassword123" scp cert-file sshuser@HeadNode0_Name:~/ssl/wnX-cert-sign-request ```--1. On the CA machine run the following command to create ca-cert and ca-key files: + > [!Note] + > FQDN_WORKER_NODE is the fully qualified domain name of the worker node machine. You can get these details from the /etc/hosts file on the head node. + + For example: + ``` + wn0-espkaf.securehadooprc.onmicrosoft.com + wn0-kafka2.zbxwnwsmpcsuvbjqbmespcm1zg.bx.internal.cloudapp.net + ``` + :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/etc-hosts.png" alt-text="Screenshot showing etc hosts output." border="true"::: + +1. On the CA machine, run the following command to create ca-cert and ca-key files: ```bash openssl req -new -newkey rsa:4096 -days 365 -x509 -subj "/CN=Kafka-Security-CA" -keyout ca-key -out ca-cert -nodes To complete the configuration modification, do the following steps: :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-ambari.png" alt-text="Editing Kafka ssl configuration properties in Ambari" border="true"::: -1. Under **Custom kafka-broker** set the **ssl.client.auth** property to `required`. This step is only required if you are setting up authentication and encryption. +1. Under **Custom kafka-broker** set the **ssl.client.auth** property to `required`. 
+ + > [!Note] + > This step is only required if you are setting up authentication and encryption. + :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-ambari2.png" alt-text="Editing kafka ssl configuration properties in Ambari" border="true"::: -1. For HDI version 3.6, go to Ambari UI and add the following configurations under **Advanced kafka-env** and the **kafka-env template** property. -- ```bash - # Configure Kafka to advertise IP addresses instead of FQDN - IP_ADDRESS=$(hostname -i) - echo advertised.listeners=$IP_ADDRESS - sed -i.bak -e '/advertised/{/advertised@/!d;}' /usr/hdp/current/kafka-broker/conf/server.properties - echo "advertised.listeners=PLAINTEXT://$IP_ADDRESS:9092,SSL://$IP_ADDRESS:9093" >> /usr/hdp/current/kafka-broker/conf/server.properties - echo "ssl.keystore.location=/home/sshuser/ssl/kafka.server.keystore.jks" >> /usr/hdp/current/kafka-broker/conf/server.properties - echo "ssl.keystore.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties - echo "ssl.key.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties - echo "ssl.truststore.location=/home/sshuser/ssl/kafka.server.truststore.jks" >> /usr/hdp/current/kafka-broker/conf/server.properties - echo "ssl.truststore.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties - ``` --1. Here is the screenshot that shows Ambari configuration UI with these changes. -- For HDI version 3.6: +1. Here's the screenshot that shows Ambari configuration UI with these changes. - :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-kafka-env.png" alt-text="Editing kafka-env template property in Ambari" border="true"::: -- For HDI version 4.0: + For HDI version 4.0 or 5.0: :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-kafka-env-four.png" alt-text="Editing kafka-env template property in Ambari four" border="true"::: These steps are detailed in the following code snippets. ssl.truststore.password=MyClientPassword123 ``` -1. Start the admin client with producer and consumer options to verify that both producers and consumers are working on port 9093. Please refer to [Verification](apache-kafka-ssl-encryption-authentication.md#verification) section below for steps needed to verify the setup using console producer/consumer. +1. Start the admin client with producer and consumer options to verify that both producers and consumers are working on port 9093. Refer to the [Verification](apache-kafka-ssl-encryption-authentication.md#verification) section for the steps needed to verify the setup using the console producer/consumer. ## Client setup (with authentication) The following four steps summarize the tasks needed to complete the client setup 1. Switch to the CA machine (active head node) to sign the client certificate. 1. Go to the client machine (standby head node) and navigate to the `~/ssl` folder. Copy the signed cert to client machine. -The details of each step are given below. +The details of each step follow. 1. Sign in to the client machine (standby head node). Run these steps on the client machine. /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning ``` -### Kafka 1.1 --1. Create a topic if it doesn't exist already. 
-- ```bash - /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE_0>:2181 --create --topic topic1 --partitions 2 --replication-factor 2 - ``` --1. Start console producer and provide the path to client-ssl-auth.properties as a configuration file for the producer. -- ```bash - /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <FQDN_WORKER_NODE>:9092 --topic topic1 - ``` --1. Open another ssh connection to client machine and start console consumer and provide the path to `client-ssl-auth.properties` as a configuration file for the consumer. -- ```bash - $ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning - ``` - ## Next steps * [What is Apache Kafka on HDInsight?](apache-kafka-introduction.md) |
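To complement the console-consumer command shown in the verification steps of this Kafka article, a matching console-producer invocation over the TLS listener might look like the following sketch. The worker-node FQDN, topic name, and properties file path are the example values used earlier in the walkthrough; adjust them for your cluster.

```bash
# Sketch only: produce test messages over the TLS listener on port 9093.
# <FQDN_WORKER_NODE>, topic1, and the properties file path are the example values from this walkthrough.
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list <FQDN_WORKER_NODE>:9093 \
  --topic topic1 \
  --producer.config ~/ssl/client-ssl-auth.properties
```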
healthcare-apis | Configure Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac.md | In the Role selection, search for one of the built-in roles for the FHIR data pl * **FHIR Data Exporter**: Can read and export ($export operator) data. * **FHIR Data Contributor**: Can perform all data plane operations. * **FHIR Data Converter**: Can use the converter to perform data conversion.+* **FHIR SMART User**: Allows users to read and write FHIR data according to the SMART IG V1.0.0 specifications. In the **Select** section, type the client application registration name. If the name is found, the application name is listed. Select the application name, and then select **Save**. |
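If you prefer to script this role assignment instead of using the portal steps above, a sketch with the Azure CLI might look like the following. The client application ID and the FHIR service resource ID are placeholders, and the role name can be any of the built-in FHIR data plane roles listed above.

```azurecli-interactive
# Sketch only: assign a FHIR data plane role to a client application.
# The assignee ID and the FHIR service resource ID are placeholders.
az role assignment create \
    --assignee "<client-application-id>" \
    --role "FHIR Data Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace-name>/fhirservices/<fhir-service-name>"
```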
healthcare-apis | How To Use Iotjsonpathcontenttemplate Mappings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontenttemplate-mappings.md | Title: How to use IotJsonPathContentTemplate mappings in the MedTech service device mapping - Azure Health Data Services -description: This article describes how to use IotJsonPathContentTemplate mappings with the MedTech service device mapping. + Title: How to use IotJsonPathContentTemplate mappings in the MedTech service device mappings - Azure Health Data Services +description: This article describes how to use IotJsonPathContentTemplate mappings with the MedTech service device mappings. Previously updated : 1/12/2023 Last updated : 02/02/2023 # How to use IotJsonPathContentTemplate mappings -This article describes how to use IoTJsonPathContentTemplate mappings with the MedTech service [device mapping](how-to-configure-device-mappings.md). +This article describes how to use IoTJsonPathContentTemplate mappings with the MedTech service [device mappings](how-to-configure-device-mappings.md). ## IotJsonPathContentTemplate The IotJsonPathContentTemplate is similar to the JsonPathContentTemplate except the `DeviceIdExpression` and `TimestampExpression` aren't required. -The assumption, when using this template, is the messages being evaluated were sent using the [Azure IoT Hub Device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) or [Export Data (legacy)](../../iot-central/core/howto-export-data-legacy.md) feature of [Azure IoT Central](../../iot-central/core/overview-iot-central.md). +The assumption, when using this template, is the device messages being evaluated were sent using the [Azure IoT Hub Device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) or [Export Data (legacy)](../../iot-central/core/howto-export-data-legacy.md) feature of [Azure IoT Central](../../iot-central/core/overview-iot-central.md). When you're using these SDKs, the device identity and the timestamp of the message are known. If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent ### Examples With each of these examples, you're provided with:- * A valid IoT device message. - * An example of what the IoT device message will look like after being received and processed by the IoT Hub. - * A valid MedTech service device mapping for normalizing the IoT device message after IoT Hub processing. + * A valid device message. + * An example of what the device message will look like after being received and processed by the IoT hub. + * Conforming and valid MedTech service device mappings for normalizing the device message after IoT hub processing. * An example of what the MedTech service device message will look like after normalization. > [!IMPORTANT] With each of these examples, you're provided with: **Heart rate** -**A valid IoT device message to send to your IoT Hub.** +**A valid device message to send to your IoT hub.** ```json With each of these examples, you're provided with: ``` -**An example of what the IoT device message will look like after being received and processed by the IoT Hub.** +**An example of what the device message will look like after being received and processed by the IoT hub.** > [!NOTE] > The IoT Hub enriches the device message before sending it to the MedTech service device event hub with all properties starting with `iothub`. For example: `iothub-creation-time-utc`. 
With each of these examples, you're provided with: ``` -**A valid MedTech service device mapping for normalizing the IoT device message after IoT Hub processing.** +**Conforming and valid MedTech service device mappings for normalizing device message data after IoT Hub processing.** ```json With each of these examples, you're provided with: **Blood pressure** -**A valid IoT device message to send to your IoT Hub.** +**A valid IoT device message to send to your IoT hub.** ```json With each of these examples, you're provided with: ``` -**An example of what the IoT device message will look like after being received and processed by the IoT Hub.** +**An example of what the device message will look like after being received and processed by the IoT hub.** > [!NOTE]-> The IoT Hub enriches the device message before sending it to the MedTech service device event hub with all properties starting with `iothub`. For example: `iothub-creation-time-utc`. +> The IoT hub enriches the device message before sending it to the MedTech service device event hub with all properties starting with `iothub`. For example: `iothub-creation-time-utc`. > > `patientIdExpression` is only required for MedTech services in the **Create** mode, however, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. These examples assume your MedTech service is in a **Create** mode. For more information on the **Create** and **Lookup** **Destination properties**, see [Configure Destination properties](deploy-05-new-config.md#destination-properties). With each of these examples, you're provided with: ``` -**A valid MedTech service device mapping for normalizing the IoT device message after IoT Hub processing.** +**Conforming and valid MedTech service device mappings for normalizing the device message after IoT hub processing.** ```json With each of these examples, you're provided with: ``` > [!TIP]-> The IotJsonPathTemplate device mapping examples provided in this article may be combined into a single MedTech service device mapping as shown below. +> The IotJsonPathContentTemplate device mapping examples provided in this article may be combined into a single MedTech service device mapping, as shown. >-> Additionally, the IotJasonPathTemplates can also be combined with with other template types such as [JasonPathContentTemplate mappings](how-to-use-jsonpath-content-mappings.md) to further expand your MedTech service device mapping. +> Additionally, the IotJsonPathContentTemplate can also be combined with other template types such as [JsonPathContentTemplate mappings](how-to-use-jsonpath-content-mappings.md) to further expand your MedTech service device mapping. -**Combined heart rate and blood pressure MedTech service device mapping example.** +**Combined heart rate and blood pressure MedTech service device mappings example.** ```json |
iot-dps | Concepts Symmetric Key Attestation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-symmetric-key-attestation.md | Once a registration ID has been defined for the device, the symmetric key for th # [Azure CLI](#tab/azure-cli) -The IoT extension for the Azure CLI provides the [`compute-device-key`](/cli/azure/iot/dps#az-iot-dps-compute-device-key) command for generating derived device keys. This command can be used from Windows-based or Linux systems, in PowerShell or a Bash shell. +The IoT extension for the Azure CLI provides the [`compute-device-key`](/cli/azure/iot/dps/enrollment-group#az-iot-dps-enrollment-group-compute-device-key) command for generating derived device keys. This command can be used from Windows-based or Linux systems, in PowerShell or a Bash shell. Replace the value of `--key` argument with the **Primary Key** from your enrollment group. Replace the value of `--registration-id` argument with your registration ID. ```azurecli-az iot dps compute-device-key --key 8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw== --registration-id sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6 +az iot dps enrollment-group compute-device-key --key 8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw== --registration-id sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6 ``` Example result: |
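If you'd rather not paste the enrollment group key into the command, the same extension can look the key up from the DPS instance directly. The following sketch assumes placeholder DPS, resource group, and enrollment group names.

```azurecli-interactive
# Sketch only: let the CLI look up the enrollment-group key from the DPS instance.
# The DPS name, resource group, and enrollment group ID are placeholders.
az iot dps enrollment-group compute-device-key \
    --dps-name <your-dps-name> \
    --resource-group <your-resource-group> \
    --enrollment-id <your-enrollment-group-id> \
    --registration-id sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
```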
iot-edge | Module Edgeagent Edgehub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-edgeagent-edgehub.md | The module twin for the IoT Edge agent is called `$edgeAgent` and coordinates th | runtime.type | Has to be "docker" | Yes | | runtime.settings.minDockerVersion | Set to the minimum Docker version required by this deployment manifest | Yes | | runtime.settings.loggingOptions | A stringified JSON containing the logging options for the IoT Edge agent container. [Docker logging options](https://docs.docker.com/engine/admin/logging/overview/) | No |-| runtime.settings.registryCredentials<br>.{registryId}.username | The username of the container registry. For Azure Container Registry, the username is usually the registry name.<br><br> Registry credentials are necessary for any private module images. | No | -| runtime.settings.registryCredentials<br>.{registryId}.password | The password for the container registry. | No | -| runtime.settings.registryCredentials<br>.{registryId}.address | The address of the container registry. For Azure Container Registry, the address is usually *{registry name}.azurecr.io*. | No | +| runtime.settings.registryCredentials.{registryId}.username | The username of the container registry. For Azure Container Registry, the username is usually the registry name.<br><br>Registry credentials are necessary for any private module images. | No | +| runtime.settings.registryCredentials.{registryId}.password | The password for the container registry. | No | +| runtime.settings.registryCredentials.{registryId}.address | The address of the container registry. For Azure Container Registry, the address is usually *{registry name}.azurecr.io*. | No | | systemModules.edgeAgent.type | Has to be "docker" | Yes |+| systemModules.edgeAgent.startupOrder | An integer value for which spot a module has in the startup order. 0 is first and max integer (4294967295) is last. If a value isn't provided, the default is max integer. | No | | systemModules.edgeAgent.settings.image | The URI of the image of the IoT Edge agent. Currently, the IoT Edge agent isn't able to update itself. | Yes |-| systemModules.edgeAgent.settings<br>.createOptions | A stringified JSON containing the options for the creation of the IoT Edge agent container. [Docker create options](https://docs.docker.com/engine/api/v1.32/#operation/ContainerCreate) | No | +| systemModules.edgeAgent.settings.createOptions | A stringified JSON containing the options for the creation of the IoT Edge agent container. [Docker create options](https://docs.docker.com/engine/api/v1.32/#operation/ContainerCreate) | No | | systemModules.edgeAgent.configuration.id | The ID of the deployment that deployed this module. | IoT Hub sets this property when the manifest is applied using a deployment. Not part of a deployment manifest. | | systemModules.edgeHub.type | Has to be "docker" | Yes | | systemModules.edgeHub.status | Has to be "running" | Yes | | systemModules.edgeHub.restartPolicy | Has to be "always" | Yes | | systemModules.edgeHub.startupOrder | An integer value for which spot a module has in the startup order. 0 is first and max integer (4294967295) is last. If a value isn't provided, the default is max integer. | No | | systemModules.edgeHub.settings.image | The URI of the image of the IoT Edge hub. | Yes |-| systemModules.edgeHub.settings<br>.createOptions | A stringified JSON containing the options for the creation of the IoT Edge hub container. 
[Docker create options](https://docs.docker.com/engine/api/v1.32/#operation/ContainerCreate) | No | +| systemModules.edgeHub.settings.createOptions | A stringified JSON containing the options for the creation of the IoT Edge hub container. [Docker create options](https://docs.docker.com/engine/api/v1.32/#operation/ContainerCreate) | No | | systemModules.edgeHub.configuration.id | The ID of the deployment that deployed this module. | IoT Hub sets this property when the manifest is applied using a deployment. Not part of a deployment manifest. | | modules.{moduleId}.version | A user-defined string representing the version of this module. | Yes | | modules.{moduleId}.type | Has to be "docker" | Yes | The following table does not include the information that is copied from the des | runtime.platform.architecture | Reporting the architecture of the CPU on the device | | systemModules.edgeAgent.runtimeStatus | The reported status of IoT Edge agent: {"running" \| "unhealthy"} | | systemModules.edgeAgent.statusDescription | Text description of the reported status of the IoT Edge agent. |+| systemModules.edgeAgent.exitCode | The exit code reported by the IoT Edge agent container if the container exits | +| systemModules.edgeAgent.lastStartTimeUtc | Time when IoT Edge agent was last started | +| systemModules.edgeAgent.lastExitTimeUtc | Time when IoT Edge agent last exited | | systemModules.edgeHub.runtimeStatus | Status of IoT Edge hub: { "running" \| "stopped" \| "failed" \| "backoff" \| "unhealthy" } | | systemModules.edgeHub.statusDescription | Text description of the status of IoT Edge hub if unhealthy. | | systemModules.edgeHub.exitCode | The exit code reported by the IoT Edge hub container if the container exits |-| systemModules.edgeHub.startTimeUtc | Time when IoT Edge hub was last started | +| systemModules.edgeHub.lastStartTimeUtc | Time when IoT Edge hub was last started | | systemModules.edgeHub.lastExitTimeUtc | Time when IoT Edge hub last exited | | systemModules.edgeHub.lastRestartTimeUtc | Time when IoT Edge hub was last restarted | | systemModules.edgeHub.restartCount | Number of times this module was restarted as part of the restart policy. | | modules.{moduleId}.runtimeStatus | Status of the module: { "running" \| "stopped" \| "failed" \| "backoff" \| "unhealthy" } | | modules.{moduleId}.statusDescription | Text description of the status of the module if unhealthy. | | modules.{moduleId}.exitCode | The exit code reported by the module container if the container exits |-| modules.{moduleId}.startTimeUtc | Time when the module was last started | +| modules.{moduleId}.lastStartTimeUtc | Time when the module was last started | | modules.{moduleId}.lastExitTimeUtc | Time when the module last exited | | modules.{moduleId}.lastRestartTimeUtc | Time when the module was last restarted | | modules.{moduleId}.restartCount | Number of times this module was restarted as part of the restart policy. | |
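To inspect the reported properties described in this table on a running device, one option is to query the $edgeAgent module twin with the Azure CLI. The hub name and device ID in the following sketch are placeholders.

```azurecli-interactive
# Sketch only: view the reported properties of the edgeAgent module twin.
# The hub name and device ID are placeholders.
az iot hub module-twin show \
    --hub-name <your-iot-hub-name> \
    --device-id <your-edge-device-id> \
    --module-id '$edgeAgent' \
    --query "properties.reported"
```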
iot-hub | Iot Hub Create Use Iot Toolkit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-use-iot-toolkit.md | -This article shows you how to use the [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) to create an Azure IoT hub. You can create one without an existing IoT project or create one from an existing IoT project. +This article shows you how to use the [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) to create an Azure IoT hub. You can create one without an existing IoT project or create one from an existing IoT project. [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] This article shows you how to use the [Azure IoT Tools for Visual Studio Code](h - [Visual Studio Code](https://code.visualstudio.com/) -- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) installed for Visual Studio Code+- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) installed for Visual Studio Code - An Azure resource group: [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) in the Azure portal Now that you've deployed an IoT hub using the Azure IoT Tools for Visual Studio * [Use the Azure IoT Tools for Visual Studio Code for Azure IoT Hub device management](iot-hub-device-management-iot-toolkit.md) -* [See the Azure IoT Hub for VS Code wiki page](https://github.com/microsoft/vscode-azure-iot-toolkit/wiki). +* [See the Azure IoT Hub for VS Code wiki page](https://github.com/microsoft/vscode-azure-iot-toolkit/wiki). |
iot-hub | Iot Hub Vscode Iot Toolkit Cloud Device Messaging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-vscode-iot-toolkit-cloud-device-messaging.md | In this article, you learn how to use Azure IoT Tools for Visual Studio Code to * [Visual Studio Code](https://code.visualstudio.com/) -* [Azure IoT Tools for VS Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) or copy and paste this URL into a browser window: `vscode:extension/vsciot-vscode.azure-iot-tools` +* [Azure IoT Tools for VS Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) or copy and paste this URL into a browser window: `vscode:extension/vsciot-vscode.azure-iot-toolkit` ## Sign in to access your IoT hub |
iot-hub | Tutorial Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-connectivity.md | A device must authenticate with your hub before it can exchange any data with th To simulate a device sending telemetry to your IoT hub, run the Node.js simulated device application you downloaded previously. -In a terminal window on your development machine, navigate to the root folder of the sample Node.js project you downloaded. Then navigate to the **iot-hub\Tutorials\ConnectivityTests** folder. +1. In a terminal window on your development machine, navigate to the root folder of the sample Node.js project that you downloaded. Then navigate to the **iot-hub\Tutorials\ConnectivityTests** folder. -In the terminal window, run the following commands to install the required libraries and run the simulated device application. Use the device connection string you made a note of when you registered the device. +1. In the terminal window, run the following commands to install the required libraries and run the simulated device application. Use the device connection string you made a note of when you registered the device. -```cmd/sh -npm install -node SimulatedDevice-1.js "{your_device_connection_string}" -``` + ```cmd/sh + npm install + node SimulatedDevice-1.js "{your_device_connection_string}" + ``` -The terminal window displays information as it tries to connect to your hub: + The terminal window displays a success message once it connects to your hub: - + :::image type="content" source="media/tutorial-connectivity/sim-1-connected.png" alt-text="Screenshot that shows the simulated device connecting."::: You've now successfully authenticated from a device using a device key generated by your IoT hub. You've now successfully authenticated from a device using a device key generated In this section, you reset the device key and observe the error when the simulated device tries to connect. -To reset the primary device key for your device, run the following commands: +1. To reset the primary device key for your device, run the [az iot hub device-identity update](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-update) command: -```azurecli-interactive -# Generate a new Base64 encoded key using the current date -read key < <(date +%s | sha256sum | base64 | head -c 32) + ```azurecli-interactive + # Generate a new Base64 encoded key using the current date + read key < <(date +%s | sha256sum | base64 | head -c 32) -# Reset the primary device key for test device -az iot hub device-identity update --device-id {your_device_id} --set authentication.symmetricKey.primaryKey=$key --hub-name {your_iot_hub_name} -``` + # Reset the primary device key for test device + az iot hub device-identity update --device-id {your_device_id} --set authentication.symmetricKey.primaryKey=$key --hub-name {your_iot_hub_name} + ``` -In the terminal window on your development machine, run the simulated device application again: +1. 
In the terminal window on your development machine, run the simulated device application again: -```cmd/sh -npm install -node SimulatedDevice-1.js "{your_device_connection_string}" -``` + ```cmd/sh + npm install + node SimulatedDevice-1.js "{your_device_connection_string}" + ``` -This time you see an authentication error when the application tries to connect: + This time you see an authentication error when the application tries to connect: - + :::image type="content" source="media/tutorial-connectivity/sim-1-fail.png" alt-text="Screenshot that shows the connection failing after the key reset."::: -### Generate shared access signature (SAS) token +### Generate a shared access signature (SAS) token If your device uses one of the IoT Hub device SDKs, the SDK library code generates the SAS token used to authenticate with the hub. A SAS token is generated from the name of your hub, the name of your device, and the device key. In some scenarios, such as in a cloud protocol gateway or as part of a custom au > [!NOTE] > The SimulatedDevice-2.js sample includes examples of generating a SAS token both with and without the SDK. -To generate a known-good SAS token using the CLI, run the following command: +1. Run the [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token) command to generate a known-good SAS token using the CLI: -```azurecli-interactive -az iot hub generate-sas-token --device-id {your_device_id} --hub-name {your_iot_hub_name} -``` + ```azurecli-interactive + az iot hub generate-sas-token --device-id {your_device_id} --hub-name {your_iot_hub_name} + ``` -Make a note of the full text of the generated SAS token. A SAS token looks like the following example: `SharedAccessSignature sr=tutorials-iot-hub.azure-devices.net%2Fdevices%2FMyTestDevice&sig=....&se=1524155307` +1. Copy the full text of the generated SAS token. A SAS token looks like the following example: `SharedAccessSignature sr=tutorials-iot-hub.azure-devices.net%2Fdevices%2FmyDevice&sig=xxxxxx&se=111111` -In a terminal window on your development machine, navigate to the root folder of the sample Node.js project you downloaded. Then navigate to the **iot-hub\Tutorials\ConnectivityTests** folder. +1. In a terminal window on your development machine, navigate to the root folder of the sample Node.js project you downloaded. Then navigate to the **iot-hub\Tutorials\ConnectivityTests** folder. -In the terminal window, run the following commands to install the required libraries and run the simulated device application: +1. In the terminal window, run the following commands to install the required libraries and run the simulated device application: -```cmd/sh -npm install -node SimulatedDevice-2.js "{Your SAS token}" -``` + ```cmd/sh + npm install + node SimulatedDevice-2.js "{Your SAS token}" + ``` -The terminal window displays information as it tries to connect to your hub using the SAS token: + The terminal window displays a success message once it connects to your hub using the SAS token: - + :::image type="content" source="media/tutorial-connectivity/sim-2-connected.png" alt-text="Screenshot that shows a successful connection using a SAS token."::: You've now successfully authenticated from a device using a test SAS token generated by a CLI command. The **SimulatedDevice-2.js** file includes sample code that shows you how to generate a SAS token in code. 
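The test token generated by the CLI has a limited lifetime. If you need a longer-lived token while you test, the same command accepts a duration in seconds; the value in the following sketch is illustrative.

```azurecli-interactive
# Sketch only: generate a test SAS token that stays valid for two hours.
az iot hub generate-sas-token --device-id {your_device_id} --hub-name {your_iot_hub_name} --duration 7200
```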
A device can use any of the following protocols to connect to your IoT hub: If the outbound port is blocked by a firewall, the device can't connect: - ## Check device-to-cloud connectivity After a device connects, it can start sending telemetry to your IoT hub. This section shows you how you can verify that the telemetry sent by the device reaches your hub. -First, retrieve the current connection string for your simulated device using the following command: +### Send device-to-cloud messages -```azurecli-interactive -az iot hub device-identity connection-string show --device-id {your_device_id} --output table --hub-name {your_iot_hub_name} -``` +1. Since we reset the connection string for your device in the previous section, use the [az iot hub device-identity connection-string show](/cli/azure/iot/hub/device-identity/connection-string#az-iot-hub-device-identity-connection-string-show) command to retrieve the updated connection string: -To run a simulated device that sends messages, navigate to the **iot-hub\Tutorials\ConnectivityTests** folder in the code you downloaded. + ```azurecli-interactive + az iot hub device-identity connection-string show --device-id {your_device_id} --output table --hub-name {your_iot_hub_name} + ``` -In the terminal window, run the following commands to install the required libraries and run the simulated device application: +1. To run a simulated device that sends messages, navigate to the **iot-hub\Tutorials\ConnectivityTests** folder in the code you downloaded. -```cmd/sh -npm install -node SimulatedDevice-3.js "{your_device_connection_string}" -``` +1. In the terminal window, run the following commands to install the required libraries and run the simulated device application: -The terminal window displays information as it sends telemetry to your hub: + ```cmd/sh + npm install + node SimulatedDevice-3.js "{your_device_connection_string}" + ``` - + The terminal window displays information as it sends telemetry to your hub: -You can use **Metrics** in the portal to verify that the telemetry messages are reaching your IoT hub. + :::image type="content" source="media/tutorial-connectivity/sim-3-sending.png" alt-text="Screenshot that shows the simulated device sending messages."::: -In the [Azure portal](https://portal.azure.com), select your IoT hub in the **Resource** drop-down. Select **Metrics** from the **Monitoring** section of the navigation menu. Select **Telemetry messages sent** as the metric, and set the time range to **Past hour**. The chart shows the aggregate count of messages sent by the simulated device: +### Monitor incoming messages +You can use **Metrics** in the portal to verify that the telemetry messages are reaching your IoT hub. ++1. In the [Azure portal](https://portal.azure.com), select your IoT hub in the **Resource** drop-down. ++1. Select **Metrics** from the **Monitoring** section of the navigation menu. ++1. Select **Telemetry messages sent** as the metric, and set the time range to **Past hour**. The chart shows the aggregate count of messages sent by the simulated device: ++ :::image type="content" source="media/tutorial-connectivity/metrics-portal.png" alt-text="Screenshot showing left pane metrics." border="true"::: It takes a few minutes for the metrics to become available after you start the simulated device. It takes a few minutes for the metrics to become available after you start the s This section shows how you can make a test direct method call to a device to check cloud-to-device connectivity. 
You run a simulated device on your development machine to listen for direct method calls from your hub. -In a terminal window, use the following command to run the simulated device application: +1. In a terminal window, use the following command to run the simulated device application: -```cmd/sh -node SimulatedDevice-3.js "{your_device_connection_string}" -``` + ```cmd/sh + node SimulatedDevice-3.js "{your_device_connection_string}" + ``` -Use a CLI command to call a direct method on the device: +1. In a separate window, use the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command to call a direct method on the device: -```azurecli-interactive -az iot hub invoke-device-method --device-id {your_device_id} --method-name TestMethod --timeout 10 --method-payload '{"key":"value"}' --hub-name {your_iot_hub_name} -``` + ```azurecli-interactive + az iot hub invoke-device-method --device-id {your_device_id} --method-name TestMethod --timeout 10 --method-payload '{"key":"value"}' --hub-name {your_iot_hub_name} + ``` -The simulated device prints a message to the console when it receives a direct method call: + The simulated device prints a message to the console when it receives a direct method call: - + :::image type="content" source="media/tutorial-connectivity/receive-method-call.png" alt-text="Screenshot that shows the device confirming that the direct method was received."::: -When the simulated device successfully receives the direct method call, it sends an acknowledgment back to the hub: + When the simulated device successfully receives the direct method call, it sends an acknowledgment back to the hub: - + :::image type="content" source="media/tutorial-connectivity/method-acknowledgement.png" alt-text="Screenshot showing that the device returns a direct method acknowledgment."::: ## Check twin synchronization Devices use twins to synchronize state between the device and the hub. In this s The simulated device you use in this section sends reported properties to the hub whenever it starts up, and prints desired properties to the console whenever it receives them. -In a terminal window, use the following command to run the simulated device application: +1. In a terminal window, use the following command to run the simulated device application: -```cmd/sh -node SimulatedDevice-3.js "{your_device_connection_string}" -``` + ```cmd/sh + node SimulatedDevice-3.js "{your_device_connection_string}" + ``` -To verify that the hub received the reported properties from the device, use the following CLI command: +1. In a separate window, run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command to verify that the hub received the reported properties from the device: -```azurecli-interactive -az iot hub device-twin show --device-id {your_device_id} --hub-name {your_iot_hub_name} -``` + ```azurecli-interactive + az iot hub device-twin show --device-id {your_device_id} --hub-name {your_iot_hub_name} + ``` -In the output from the command, you can see the **devicelaststarted** property in the reported properties section. This property shows the date and time you last started the simulated device. + In the output from the command, you can see the **devicelaststarted** property in the reported properties section. This property shows the date and time you last started the simulated device. 
- + :::image type="content" source="media/tutorial-connectivity/reported-properties.png" alt-text="Screenshot showing the reported properties of a device."::: -To verify that the hub can send desired property values to the device, use the following CLI command: +1. To verify that the hub can send desired property values to the device, use the [az iot hub device-twin update](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-update) command: -```azurecli-interactive -az iot hub device-twin update --set properties.desired='{"mydesiredproperty":"propertyvalue"}' --device-id {your_device_id} --hub-name {your_iot_hub_name} -``` + ```azurecli-interactive + az iot hub device-twin update --set properties.desired='{"mydesiredproperty":"propertyvalue"}' --device-id {your_device_id} --hub-name {your_iot_hub_name} + ``` -The simulated device prints a message when it receives a desired property update from the hub: + The simulated device prints a message when it receives a desired property update from the hub: - + :::image type="content" source="media/tutorial-connectivity/desired-properties.png" alt-text="Screenshot that shows the device confirming that the desired properties update was received."::: In addition to receiving desired property changes as they're made, the simulated device automatically checks for desired properties when it starts up. |
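If you only want to confirm a single reported value rather than read the whole twin, you can filter the output of the same command with a JMESPath query. The property name in this sketch matches the **devicelaststarted** property reported by the sample device.

```azurecli-interactive
# Sketch only: return just the reported devicelaststarted value from the device twin.
az iot hub device-twin show --device-id {your_device_id} --hub-name {your_iot_hub_name} --query "properties.reported.devicelaststarted"
```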
key-vault | Overview Renew Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/overview-renew-certificate.md | For more information about creating a new CSR, see [Create and merge a CSR in Ke Azure Key Vault also handles autorenewal of self-signed certificates. To learn more about changing the issuance policy and updating a certificate's lifecycle attributes, see [Configure certificate autorotation in Key Vault](./tutorial-rotate-certificates.md#update-lifecycle-attributes-of-a-stored-certificate). ## Next steps-- [Azure Key Vault certificate renewal frequently as questions](faq.yml)+- [Azure Key Vault certificate renewal frequently asked questions](faq.yml) - [Integrate Key Vault with DigiCert certificate authority](how-to-integrate-certificate-authority.md) - [Tutorial: Configure certificate autorotation in Key Vault](tutorial-rotate-certificates.md) |
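As a quick illustration of the self-signed scenario mentioned in this article, the following sketch creates a certificate from the default policy with the Azure CLI. The vault and certificate names are placeholders, and you can adjust the policy's lifetime actions to control when Key Vault renews the certificate.

```azurecli-interactive
# Sketch only: create a self-signed certificate using the default policy.
# The vault and certificate names are placeholders.
az keyvault certificate create \
    --vault-name <your-key-vault-name> \
    --name <your-certificate-name> \
    --policy "$(az keyvault certificate get-default-policy)"
```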
logic-apps | Biztalk Server To Azure Integration Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-to-azure-integration-services-overview.md | Integration platforms offer ways to solve problems in a consistent and unified m - Rules Engine policies - A BizTalk Rules Engine policy is another kind of artifact that you can share across BizTalk Server applications deployed within the same [BizTalk group](/biztalk/core/biztalk-groups). If you have common BizTalk Rules Engine rules, for example, related to message routing, you can manage these rules in one location and share them widely across installed BizTalk applications. The BizTalk Rules Engine caches these rules, so if you make any updates to these rules, you must restart the BizTalk Rules Engine Update Service. Otherwise, the changes are picked up in the next [Cache Timeout](/biztalk/core/rule-engine-configuration-and-tuning-parameters). + A Business Rules Engine policy is another kind of artifact that you can share across BizTalk Server applications deployed within the same [BizTalk group](/biztalk/core/biztalk-groups). If you have common Business Rules Engine rules, for example, related to message routing, you can manage these rules in one location and share them widely across installed BizTalk applications. The Business Rules Engine caches these rules, so if you make any updates to these rules, you must restart the Business Rules Engine Update Service. Otherwise, the changes are picked up in the next [Cache Timeout](/biztalk/core/rule-engine-configuration-and-tuning-parameters). #### Azure Integration Services |
machine-learning | Concept Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-environments.md | If you provide your own images, you are responsible for updating them. For more information on the base images, see the following links: * [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers) GitHub repository.-* [Train a model using a custom image](how-to-train-with-custom-image.md). * [Deploy a TensorFlow model using a custom container](how-to-deploy-custom-container.md) ## Next steps |
machine-learning | How To Create Image Labeling Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md | Image data can be files with any of these types: ".jpg", ".jpeg", ".png", ".jpe" [!INCLUDE [start](../../includes/machine-learning-data-labeling-start.md)] -1. To create a project, select **Add project**. Give the project an appropriate name. The project name cannot be reused, even if the project is deleted in future. +1. To create a project, select **Add project**. Give the project an appropriate name. The project name can't be reused, even if the project is deleted in the future. 1. Select **Image** to create an image labeling project. For bounding boxes, important questions include: * What should the labelers do if the object is tiny? Should it be labeled as an object or should it be ignored as background? * How to label the object that is partially shown in the image? * How to label the object that is partially covered by another object?-* How to label the object if there is no clear boundary of the object? -* How to label the object which is not object class of interest but visually similar to an interested object type? +* How to label the object if there's no clear boundary of the object? +* How to label the object that isn't the object class of interest but is visually similar to an object type of interest? > [!NOTE] > Be sure to note that the labelers will be able to select the first 9 labels by using number keys 1-9. For bounding boxes, important questions include: ## Use ML-assisted data labeling -The **ML-assisted labeling** page lets you trigger automatic machine learning models to accelerate labeling tasks. Medical images (".dcm") are not included in assisted labeling. +The **ML-assisted labeling** page lets you trigger automatic machine learning models to accelerate labeling tasks. Medical images (".dcm") aren't included in assisted labeling. At the beginning of your labeling project, the items are shuffled into a random order to reduce potential bias. However, any biases that are present in the dataset will be reflected in the trained model. For example, if 80% of your items are of a single class, then approximately 80% of the data used to train the model will be of that class. ML-assisted labeling consists of two phases: * Clustering * Prelabeling -The exact number of labeled data necessary to start assisted labeling is not a fixed number. This can vary significantly from one labeling project to another. For some projects, is sometimes possible to see prelabel or cluster tasks after 300 items have been manually labeled. ML Assisted Labeling uses a technique called *Transfer Learning*, which uses a pre-trained model to jump-start the training process. If your dataset's classes are similar to those in the pre-trained model, pre-labels may be available after only a few hundred manually labeled items. If your dataset is significantly different from the data used to pre-train the model, it may take much longer. +The exact number of labeled data necessary to start assisted labeling isn't a fixed number. This number can vary significantly from one labeling project to another. For some projects, it's sometimes possible to see prelabel or cluster tasks after 300 items have been manually labeled. ML Assisted Labeling uses a technique called *Transfer Learning*, which uses a pre-trained model to jump-start the training process. 
If your dataset's classes are similar to those in the pre-trained model, pre-labels may be available after only a few hundred manually labeled items. If your dataset is significantly different from the data used to pre-train the model, it may take much longer. When you're using consensus labeling, the consensus label is used for training. Since the final labels still rely on input from the labeler, this technology is ### Clustering -After a certain number of labels are submitted, the machine learning model for classification starts to group together similar items. These similar images are presented to the labelers on the same screen to speed up manual tagging. Clustering is especially useful when the labeler is viewing a grid of 4, 6, or 9 images. +After some labels are submitted, the machine learning model for classification starts to group together similar items. These similar images are presented to the labelers on the same screen to speed up manual tagging. Clustering is especially useful when the labeler is viewing a grid of 4, 6, or 9 images. -Once a machine learning model has been trained on your manually labeled data, the model is truncated to its last fully-connected layer. Unlabeled images are then passed through the truncated model in a process commonly known as "embedding" or "featurization." This embeds each image in a high-dimensional space defined by this model layer. Images that are nearest neighbors in the space are used for clustering tasks. +Once a machine learning model has been trained on your manually labeled data, the model is truncated to its last fully connected layer. Unlabeled images are then passed through the truncated model in a process commonly known as "embedding" or "featurization." This process embeds each image in a high-dimensional space defined by this model layer. Images that are nearest neighbors in the space are used for clustering tasks. -The clustering phase does not appear for object detection models, or for text classification. +The clustering phase doesn't appear for object detection models, or for text classification. ### Prelabeling The **Dashboard** tab shows the progress of the labeling task. :::image type="content" source="./media/how-to-create-labeling-projects/labeling-dashboard.png" alt-text="Data labeling dashboard"::: -The progress chart shows how many items have been labeled, skipped, in need of review, or not yet done. Hover over the chart to see the number of item in each section. +The progress chart shows how many items have been labeled, skipped, in need of review, or not yet done. Hover over the chart to see the number of items in each section. -The middle section shows the queue of tasks yet to be assigned. When ML assisted labeling is off, this section shows the number of manual tasks to be assigned. When ML assisted labeling is on, this will also show: +The middle section shows the queue of tasks yet to be assigned. When ML assisted labeling is off, this section shows the number of manual tasks to be assigned. When ML assisted labeling is on, this section will also show: * Tasks containing clustered items in the queue * Tasks containing prelabeled items in the queue If your project uses consensus labeling, you'll also want to review those images :::image type="content" source="media/how-to-create-labeling-projects/select-filters.png" alt-text="Screenshot: select filters to review consensus label problems." lightbox="media/how-to-create-labeling-projects/select-filters.png"::: -1. 
Under **Labeled datapoints**, select **Consensus labels in need of review**. This shows only those images where a consensus was not achieved among the labelers. +1. Under **Labeled datapoints**, select **Consensus labels in need of review**. This shows only those images where a consensus wasn't achieved among the labelers. :::image type="content" source="media/how-to-create-labeling-projects/select-need-review.png" alt-text="Screenshot: Select labels in need of review."::: View and change details of your project. In this tab you can: * View details of the storage container used to store labeled outputs in your project * Add labels to your project * Edit instructions you give to your labels-* Edit details of ML assisted labeling, including enable/disable +* Change settings for ML assisted labeling, and kick off a labeling task + ### Access for labelers View and change details of your project. In this tab you can: [!INCLUDE [add-label](../../includes/machine-learning-data-labeling-add-label.md)] +## Start an ML assisted labeling task ++ ## Export the labels Use the **Export** button on the **Project details** page of your labeling project. You can export the label data for Machine Learning experimentation at any time. * Image labels can be exported as:- * [COCO format](http://cocodataset.org/#format-data).The COCO file is created in the default blob store of the Azure Machine Learning workspace in a folder within *Labeling/export/coco*. + * [COCO format](http://cocodataset.org/#format-data). The COCO file is created in the default blob store of the Azure Machine Learning workspace in a folder within *Labeling/export/coco*. * An [Azure Machine Learning dataset with labels](v1/how-to-use-labeled-dataset.md). Access exported Azure Machine Learning datasets in the **Datasets** section of Machine Learning. The dataset details page also provides sample code to access your labels from Python.  -Once you have exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data. Learn more at [Set up AutoML to train computer vision models with Python](how-to-auto-train-image-models.md) +Once you've exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data. Learn more at [Set up AutoML to train computer vision models with Python](how-to-auto-train-image-models.md) ## Troubleshooting |
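The clustering step described in the image-labeling entry above (truncate the trained classification model to its last fully connected layer, embed each unlabeled image, then group nearest neighbors) can be made concrete with a small sketch. This is not the service's internal code, only an illustration of the technique; it assumes `torchvision` and `scikit-learn` are installed and that `image_paths` lists a few local images.

```python
import numpy as np
import torch
from PIL import Image
from sklearn.neighbors import NearestNeighbors
from torchvision import models, transforms

# Hypothetical unlabeled items; replace with your own image paths.
image_paths = ["cat_01.jpg", "cat_02.jpg", "dog_01.jpg"]

# Pretrained backbone with the final fully connected layer dropped,
# mirroring the "truncation" described above.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# "Embedding" / "featurization": each image becomes a vector in the
# high-dimensional space defined by the truncated model.
with torch.no_grad():
    embeddings = np.stack([
        backbone(preprocess(Image.open(p).convert("RGB")).unsqueeze(0)).squeeze(0).numpy()
        for p in image_paths
    ])

# Images that are nearest neighbors in that space would be shown on the
# same screen so a labeler can tag them together.
neighbors = NearestNeighbors(n_neighbors=2).fit(embeddings)
_, indices = neighbors.kneighbors(embeddings)
print(indices)
```

The labeling service runs the equivalent of this step for you; the sketch only illustrates what the embedding and clustering terms in the paragraph above refer to.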
machine-learning | How To Create Text Labeling Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md | To directly upload your data: > [!NOTE] > Incremental refresh is available for projects that use tabular (.csv or .tsv) dataset input. However, only new tabular files are added. Changes to existing tabular files will not be recognized from the refresh. + ## Specify label categories [!INCLUDE [classes](../../includes/machine-learning-data-labeling-classes.md)] To use **ML-assisted labeling**: At the beginning of your labeling project, the items are shuffled into a random order to reduce potential bias. However, any biases that are present in the dataset will be reflected in the trained model. For example, if 80% of your items are of a single class, then approximately 80% of the data used to train the model will be of that class. -For training the text DNN model used by ML-assist, the input text per training example will be limited to approximately the first 128 words in the document. For tabular input, all text columns are first concatenated before applying this limit. This is a practical limit imposed to allow for the model training to complete in a timely manner. The actual text in a document (for file input) or set of text columns (for tabular input) can exceed 128 words. The limit only pertains to what is internally leveraged by the model during the training process. +For training the text DNN model used by ML-assist, the input text per training example will be limited to approximately the first 128 words in the document. For tabular input, all text columns are first concatenated before applying this limit. This is a practical limit imposed to allow for the model training to complete in a timely manner. The actual text in a document (for file input) or set of text columns (for tabular input) can exceed 128 words. The limit only pertains to what is internally used by the model during the training process. The exact number of labeled items necessary to start assisted labeling isn't a fixed number. This can vary significantly from one labeling project to another, depending on many factors, including the number of labels classes and label distribution. If your project uses consensus labeling, you'll also want to review those images :::image type="content" source="media/how-to-create-text-labeling-projects/text-labeling-select-filter.png" alt-text="Screenshot: select filters to review consensus label problems." lightbox="media/how-to-create-text-labeling-projects/text-labeling-select-filter.png"::: -1. Under **Labeled datapoints**, select **Consensus labels in need of review**. This shows only those images where a consensus was not achieved among the labelers. +1. Under **Labeled datapoints**, select **Consensus labels in need of review**. This shows only those images where a consensus wasn't achieved among the labelers. :::image type="content" source="media/how-to-create-labeling-projects/select-need-review.png" alt-text="Screenshot: Select labels in need of review."::: View and change details of your project. In this tab you can: * View details of the storage container used to store labeled outputs in your project * Add labels to your project * Edit instructions you give to your labels+* Change settings for ML assisted labeling, and kick off a labeling task ### Access for labelers View and change details of your project. 
In this tab you can: [!INCLUDE [add-label](../../includes/machine-learning-data-labeling-add-label.md)] +## Start an ML assisted labeling task ++ ## Export the labels Use the **Export** button on the **Project details** page of your labeling project. You can export the label data for Machine Learning experimentation at any time. |
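The roughly 128-word training limit mentioned in the text-labeling entry above can be pictured with a tiny sketch. It is not the service's code; it only illustrates the concatenate-then-truncate behavior, and the column names are invented.

```python
def make_training_text(text_columns, max_words=128):
    """Concatenate the text columns of one tabular row, then keep roughly
    the first max_words words, mirroring the limit described above."""
    combined = " ".join(text_columns)
    return " ".join(combined.split()[:max_words])

# Hypothetical tabular row with two free-text columns.
row = {"review_title": "Great headphones", "review_body": "Long free-form review text ..."}
print(make_training_text([row["review_title"], row["review_body"]]))
```

The full document is still stored and shown to labelers; the limit only affects what the ML-assisted model sees while it trains.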
machine-learning | How To Manage Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md | Azure Machine Learning allows you to work with different types of models. In thi * The Azure Machine Learning [SDK v2 for Python](https://aka.ms/sdk-v2-install). * The Azure Machine Learning [CLI v2](how-to-configure-cli.md). +Additionally, you will need to: ++# [Azure CLI](#tab/cli) ++- Install the Azure CLI and the ml extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md). ++# [Python SDK](#tab/python) ++- Install the Azure Machine Learning SDK for Python + + ```bash + pip install azure-ai-ml + ``` ++ ## Supported paths When you provide a model you want to register, you'll need to specify a `path` parameter that points to the data or job location. Below is a table that shows the different data locations supported in Azure Machine Learning and examples for the `path` parameter: These snippets use `custom` and `mlflow`. - `custom` is a type that refers to a model file or folder trained with a custom standard not currently supported by Azure ML. - `mlflow` is a type that refers to a model trained with [mlflow](how-to-use-mlflow-cli-runs.md). MLflow trained models are in a folder that contains the *MLmodel* file, the *model* file, the *conda dependencies* file, and the *requirements.txt* file. +### Connect to your workspace ++First, let's connect to Azure Machine Learning workspace where we are going to work on. ++# [Azure CLI](#tab/cli) ++```azurecli +az account set --subscription <subscription> +az configure --defaults workspace=<workspace> group=<resource-group> location=<location> +``` ++# [Python SDK](#tab/python) ++The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks. ++1. Import the required libraries: ++ ```python + from azure.ai.ml import MLClient, Input + from azure.ai.ml.entities import Model + from azure.ai.ml.constants import AssetTypes + from azure.identity import DefaultAzureCredential + ``` ++2. Configure workspace details and get a handle to the workspace: ++ ```python + subscription_id = "<SUBSCRIPTION_ID>" + resource_group = "<RESOURCE_GROUP>" + workspace = "<AML_WORKSPACE_NAME>" + + ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace) + ``` +++ ### Register your model as an asset in Machine Learning by using the CLI Use the following tabs to select where your model is located. |
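To complement the How To Manage Models entry above, a minimal sketch of registering a local model with the v2 Python SDK (`azure-ai-ml`) could look like the following. The workspace values and the path are placeholders; use `AssetTypes.MLFLOW_MODEL` instead of `AssetTypes.CUSTOM_MODEL` if the folder is in MLflow format.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Model
from azure.identity import DefaultAzureCredential

# Placeholder workspace details; replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)

# Register a model stored on the local filesystem as a workspace asset.
model = Model(
    path="./model",                # local file or folder that contains the model
    type=AssetTypes.CUSTOM_MODEL,  # model trained with a custom (non-MLflow) standard
    name="local-model-example",
    description="Example registration from a local path.",
)
registered = ml_client.models.create_or_update(model)
print(registered.name, registered.version)
```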
machine-learning | How To Secure Workspace Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md | When your Azure Machine Learning workspace is configured with a private endpoint When ACR is behind a virtual network, Azure Machine Learning can't use it to directly build Docker images. Instead, the compute cluster is used to build the images. > [!IMPORTANT]-> The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](how-to-use-private-python-packages.md), or use [custom Docker images](how-to-train-with-custom-image.md) that already include the packages. +> The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](how-to-use-private-python-packages.md), or use [custom Docker images](v1/how-to-train-with-custom-image.md) that already include the packages. > [!WARNING] > If your Azure Container Registry uses a private endpoint or service endpoint to communicate with the virtual network, you cannot use a managed identity with an Azure Machine Learning compute cluster. |
machine-learning | How To Setup Mlops Azureml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-azureml.md | -Azure Machine Learning allows you to integration with [Azure DevOps pipeline](/azure/devops/pipelines/) to automate the machine learning lifecycle. Some of the operations you can automate are: +Azure Machine Learning allows you to integrate with [Azure DevOps pipeline](/azure/devops/pipelines/) to automate the machine learning lifecycle. Some of the operations you can automate are: * Deployment of AzureML infrastructure * Data preparation (extract, transform, load operations) Azure Machine Learning allows you to integration with [Azure DevOps pipeline](/a * Deployment of machine learning models as public or private web services * Monitoring deployed machine learning models (such as for performance analysis) -In this article, you learn about using Azure Machine Learning to set up an end-to-end MLOps pipeline that runs a linear regression to predict taxi fares in NYC. The pipeline is made up of components, each serving different functions, which can be registered with the workspace, versioned, and reused with various inputs and outputs. you are going to be using the [recommended Azure architecture for MLOps](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2) and [Azure MLOps (v2) solution accelerator](https://github.com/Azure/mlops-v2) to quickly set up an MLOps project in AzureML. +In this article, you learn about using Azure Machine Learning to set up an end-to-end MLOps pipeline that runs a linear regression to predict taxi fares in NYC. The pipeline is made up of components, each serving different functions, which can be registered with the workspace, versioned, and reused with various inputs and outputs. you're going to be using the [recommended Azure architecture for MLOps](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2) and [Azure MLOps (v2) solution accelerator](https://github.com/Azure/mlops-v2) to quickly setup an MLOps project in AzureML. > [!TIP] > We recommend you understand some of the [recommended Azure architectures](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2) for MLOps before implementing any solution. You'll need to pick the best architecture for your given Machine learning project. In this article, you learn about using Azure Machine Learning to set up an end-t - An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). - An Azure Machine Learning workspace.-- The Azure Machine Learning [SDK v2 for Python](https://aka.ms/sdk-v2-install).-- The Azure Machine Learning [CLI v2](how-to-configure-cli.md). - Git running on your local machine. - An [organization](/azure/devops/organizations/accounts/create-organization) in Azure DevOps. 
- [Azure DevOps project](how-to-devops-machine-learning.md) that will host the source repositories and pipelines.-- The [Terraform extension for Azure DevOps](https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks) if you are using Azure DevOps + Terraform to spin up infrastructure+- The [Terraform extension for Azure DevOps](https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks) if you're using Azure DevOps + Terraform to spin up infrastructure > [!NOTE] > Before you can set up an MLOps project with AzureML, you need to set up authenti  -1. Copy the bash commands below to your computer and update the **projectName**, **subscriptionId**, and **environment** variables with the values for your project. If you are creating both a Dev and Prod environment, you'll need to run this script once for each environment, creating a service principal for each. This command will also grant the **Contributor** role to the service principal in the subscription provided. This is required for Azure DevOps to properly use resources in that subscription. +1. Copy the following bash commands to your computer and update the **projectName**, **subscriptionId**, and **environment** variables with the values for your project. If you're creating both a Dev and Prod environment, you'll need to run this script once for each environment, creating a service principal for each. This command will also grant the **Contributor** role to the service principal in the subscription provided. This is required for Azure DevOps to properly use resources in that subscription. ``` bash projectName="<your project name>" Before you can set up an MLOps project with AzureML, you need to set up authenti } ``` -1. Repeat **Step 3.** if you're creating service principals for Dev and Prod environments. +1. Repeat **Step 3.** if you're creating service principals for Dev and Prod environments. For this demo, we'll be creating only one environment, which is Prod. 1. Close the Cloud Shell once the service principals are created. Before you can set up an MLOps project with AzureML, you need to set up authenti - **Tenant ID** - Use the `tenant` from **Step 1.** output as the Tenant ID -6. Name the service connection **Azure-ARM-Dev**. +6. Name the service connection **Azure-ARM-Prod**. -7. Select **Grant access permission to all pipelines**, then select **Verify and Save**. Repeat this step to create another service connection **Azure-ARM-Prod** using the details of the Prod service principal created in **Step 1.** +7. Select **Grant access permission to all pipelines**, then select **Verify and Save**. The Azure DevOps setup is successfully finished. The Azure DevOps setup is successfully finished. 1. Open the Repos section and select **Import Repository** -  +  -1. Enter https://github.com/Azure/mlops-v2-ado-demo into the Clone URL field. Click import at the bottom of the page +1. Enter https://github.com/Azure/mlops-v2-ado-demo into the Clone URL field. Select import at the bottom of the page -  ---1. Open the Repos section. Click on the default repo name at the top of the screen and select Import Repository --  --1. Enter https://github.com/Azure/mlops-templates into the Clone URL field. Click import at the bottom of the page --  -- > [!TIP] - > Learn more about the MLOps v2 accelerator structure and the MLOps [template](https://github.com/Azure/mlops-v2/) +  1. Open the **Project settings** at the bottom of the left hand navigation pane -1. 
Under the Repos section, click **Repositories**. Select the repository you created in **Step 6.** Select the **Security** tab +1. Under the Repos section, select **Repositories**. Select the repository you created in **Step 6.** Select the **Security** tab 1. Under the User permissions section, select the **mlopsv2 Build Service** user. Change the permission **Contribute** permission to **Allow** and the **Create branch** permission to **Allow**.-  +  -1. Open the **Pipelines** section in the left hand navigation pane and click on the 3 vertical dots next to the **Create Pipelines** button. Select **Manage Security** +1. Open the **Pipelines** section in the left hand navigation pane and select on the 3 vertical dots next to the **Create Pipelines** button. Select **Manage Security**  This step deploys the training pipeline to the Azure Machine Learning workspace > Make sure you understand the [Architectural Patterns](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2) of the solution accelerator before you checkout the MLOps v2 repo and deploy the infrastructure. In examples you'll use the [classical ML project type](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2#classical-machine-learning-architecture). ### Run Azure infrastructure pipeline-1. Go to the first repo you imported in the previous section, `mlops-v2-ado-demo`. Make sure you have the `main` branch selected and then select the **config-infra-dev.yml** file. +1. Go to your repository, `mlops-v2-ado-demo`, and select the **config-infra-prod.yml** file. ++ > [!IMPORTANT] + > Make sure you've selected the **main** branch of the repo.  This step deploys the training pipeline to the Azure Machine Learning workspace > [!NOTE] > If you are running a Deep Learning workload such as CV or NLP, ensure your GPU compute is available in your deployment zone. -1. Click Commit and push code to get these values into the pipeline. --1. Repeat this step for **config-infra-prod.yml** file. +1. Select Commit and push code to get these values into the pipeline. 1. Go to Pipelines section This step deploys the training pipeline to the Azure Machine Learning workspace 1. Select **Existing Azure Pipeline YAML File** -  +  -1. Select `main` as a branch and choose based on your deployment method your preferred yml path. - - For a terraform scenario, choose `infrastructure/pipelines/tf-ado-deploy-infra.yml`, then select **Continue**. - - For a bicep scenario, choose `infrastructure/pipelines/bicep-ado-deploy-infra.yml`, then select **Continue**. --> [!CAUTION] -> For this example, make sure the [Terraform extension for Azure DevOps](https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks) is installed. +1. Select the `main` branch and choose `mlops/devops-pipelines/cli-ado-deploy-infra.yml`, then select **Continue**. 1. Run the pipeline; it will take a few minutes to finish. The pipeline should create the following artifacts: * Resource Group for your Workspace including Storage Account, Container Registry, Application Insights, Keyvault and the Azure Machine Learning Workspace itself. This step deploys the training pipeline to the Azure Machine Learning workspace 1. Select **Existing Azure Pipeline YAML File** -  +  -1. 
Select `main` as a branch and choose: - - - For Managed Batch Endpoint `/mlops/devops-pipelines/deploy-batch-endpoint-pipeline.yml` - - - For Managed Online Endpoint `/mlops/devops-pipelines/deploy-online-endpoint-pipeline.yml` - - Then select **Continue**. +1. Select `main` as a branch and choose Managed Online Endpoint `/mlops/devops-pipelines/deploy-online-endpoint-pipeline.yml` then select **Continue**. -1. Batch/Online endpoint names need to be unique, so change **[your endpoint-name]** to another unique name and then select **Run**. +1. Online endpoint names need to be unique, so change `taxi-online-$(namespace)$(postfix)$(environment)` to another unique name and then select **Run**. No need to change the default if it doesn't fail. -  +  > [!IMPORTANT] > If the run fails due to an existing online endpoint name, recreate the pipeline as described previously and change **[your endpoint-name]** to **[your endpoint-name (random number)]** |
machine-learning | How To Submit Spark Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md | Azure Machine Learning supports submission of standalone machine learning jobs, - Azure Machine Learning CLI - Azure Machine Learning SDK +See [this resource](./apache-spark-azure-ml-concepts.md) for more information about **Apache Spark in Azure Machine Learning** concepts. + ## Prerequisites # [CLI](#tab/cli) |
machine-learning | How To Use Batch Azure Data Factory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-azure-data-factory.md | Title: "Invoking batch endpoints from Azure Data Factory" + Title: "Run batch endpoints from Azure Data Factory" description: Learn how to use Azure Data Factory to invoke Batch Endpoints. -# Invoking batch endpoints from Azure Data Factory +# Run batch endpoints from Azure Data Factory [!INCLUDE [ml v2](../../includes/machine-learning-dev-v2.md)] Azure Data Factory can invoke the REST APIs of batch endpoints by using the [Web You can use a service principal or a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate against Batch Endpoints. We recommend using a managed identity as it simplifies the use of secrets. -> [!IMPORTANT] -> Batch Endpoints can consume data stored in storage accounts instead of Azure Machine Learning Data Stores or Data Assets. However, you may need to configure additional permissions for the identity of the compute where the batch endpoint runs on. See [Security considerations when reading data](how-to-access-data-batch-endpoints-jobs.md#security-considerations-when-reading-data). - # [Using a Managed Identity](#tab/mi) 1. You can use Azure Data Factory managed identity to communicate with Batch Endpoints. In this case, you only need to make sure that your Azure Data Factory resource was deployed with a managed identity. The pipeline requires the following parameters to be configured: | Parameter | Description | Sample value | | | -|- | | `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |-| `api_version` | The API version to use with REST API calls. Defaults to `2022-10-01` | `2022-10-01` | | `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` | | `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you are using for executing the job has access to the underlying location. Alternatively, if using Data Stores, ensure the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |+| `endpoint_input_type` | The type of the input data you are providing. Currently batch endpoints support folders (`UriFolder`) and File (`UriFile`). Defaults to `UriFolder`. | `UriFolder` | | `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. You can use the default Azure Machine Learning data store, named `workspaceblobstore`. | `azureml://datastores/workspaceblobstore/paths/batch/predictions.csv` | # [Using a Service Principal](#tab/sp) It is composed of the following activities: * __Authorize__: It's a Web Activity that uses the service principal created in [Authenticating against batch endpoints](#authenticating-against-batch-endpoints) to obtain an authorization token. This token will be used to invoke the endpoint later. * __Run Batch-Endpoint__: It's a Web Activity that uses the batch endpoint URI to invoke it. It passes the input data URI where the data is located and the expected output file. * __Wait for job__: It's a loop activity that checks the status of the created job and waits for its completion, either as **Completed** or **Failed**. 
This activity, in turn, uses the following activities:- * __Authorize Management__: It's a Web Activity that uses the service principal created in [Authenticating against batch endpoints](#authenticating-against-batch-endpoints) to obtain an authorization token to be used for the job's status query. * __Check status__: It's a Web Activity that queries the status of the job resource that was returned as a response of the __Run Batch-Endpoint__ activity. * __Wait__: It's a Wait Activity that controls the polling frequency of the job's status. We set a default of 120 (2 minutes). The pipeline requires the following parameters to be configured: | `client_id` | The client ID of the service principal used to invoke the endpoint | `00000000-0000-0000-00000000` | | `client_secret` | The client secret of the service principal used to invoke the endpoint | `ABCDEFGhijkLMNOPQRstUVwz` | | `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |-| `api_version` | The API version to use with REST API calls. Defaults to `2022-10-01` | `2022-10-01` | | `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` | | `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you are using for executing the job has access to the underlying location. Alternatively, if using Data Stores, ensure the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |+| `endpoint_input_type` | The type of the input data you are providing. Currently batch endpoints support folders (`UriFolder`) and File (`UriFile`). Defaults to `UriFolder`. | `UriFolder` | | `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. You can use the default Azure Machine Learning data store, named `workspaceblobstore`. | `azureml://datastores/workspaceblobstore/paths/batch/predictions.csv` | The pipeline requires the following parameters to be configured: > [!WARNING] > Remember that `endpoint_output_uri` should be the path to a file that doesn't exist yet. Otherwise, the job will fail with the error *the path already exists*. -> [!IMPORTANT] -> The input data URI can be a path to an Azure Machine Learning data store, data asset, or a cloud URI. Depending on the case, further configuration may be required to ensure the deployment can read the data properly. See [Accessing storage services](how-to-identity-based-service-authentication.md#accessing-storage-services) for details. - ## Steps -To create this pipeline in your existing Azure Data Factory, follow these steps: +To create this pipeline in your existing Azure Data Factory and invoke batch endpoints, follow these steps: ++1. Ensure the compute where the batch endpoint is running has permissions to mount the data Azure Data Factory is providing as input. Notice that access is still granted by the identity that invokes the endpoint (in this case Azure Data Factory). However, the compute where the batch endpoint runs needs to have permission to mount the storage account your Azure Data Factory provides. See [Accessing storage services](how-to-identity-based-service-authentication.md#accessing-storage-services) for details. 1. Open Azure Data Factory Studio and under __Factory Resources__ click the plus sign. 
To create this pipeline in your existing Azure Data Factory, follow these steps: When calling Azure Machine Learning batch deployments, consider the following limitations: -* __Data inputs__: - * Only Azure Machine Learning data stores or Azure Storage Accounts (Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2) are supported as inputs. If your input data is in another source, use the Azure Data Factory Copy activity before the execution of the batch job to sink the data to a compatible store. - * Ensure the deployment has the required access to read the input data depending on the type of input you are using. See [Accessing storage services](how-to-identity-based-service-authentication.md#accessing-storage-services) for details. -* __Data outputs__: - * Only registered Azure Machine Learning data stores are supported. - * Only Azure Blob Storage Accounts are supported for outputs. For instance, Azure Data Lake Storage Gen2 isn't supported as output in batch deployment jobs. If you need to output the data to a different location/sink, use the Azure Data Factory Copy activity after the execution of the batch job. --## Considerations when reading and writing data --When reading and writing data, take into account the following considerations: +### Data inputs +* Only Azure Machine Learning data stores or Azure Storage Accounts (Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2) are supported as inputs. If your input data is in another source, use the Azure Data Factory Copy activity before the execution of the batch job to sink the data to a compatible store. * Batch endpoint jobs don't explore nested folders and hence can't work with nested folder structures. If your data is distributed in multiple folders, note that you will have to flatten the structure. * Make sure that your scoring script provided in the deployment can handle the data as it is expected to be fed into the job. If the model is MLflow, read the limitation in terms of the file types currently supported at [Using MLflow models in batch deployments](how-to-mlflow-batch.md).-* Batch endpoints distribute and parallelize the work across multiple workers at the file level. Make sure that each worker node has enough memory to load the entire data file at once and send it to the model. Such is especially true for tabular data. -* When estimating the memory consumption of your jobs, take into account the model memory footprint too. Some models, like transformers in NLP, don't have a liner relationship between the size of the inputs and the memory consumption. On those cases, you may want to consider further partitioning your data into multiple files to allow a greater degree of parallelization with smaller files. +++### Data outputs + +* Only registered Azure Machine Learning data stores are supported at the moment. We recommend you register the storage account your Azure Data Factory is using as a Data Store in Azure Machine Learning. In that way, you will be able to write back to the same storage account you are reading from. +* Only Azure Blob Storage Accounts are supported for outputs. For instance, Azure Data Lake Storage Gen2 isn't supported as output in batch deployment jobs. If you need to output the data to a different location/sink, use the Azure Data Factory Copy activity after the execution of the batch job. 
++## Next steps ++* [Use low priority VMs in batch deployments](how-to-use-low-priority-batch.md) +* [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md) +* [Network isolation in batch endpoints](how-to-secure-batch-endpoint.md) |
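Outside of Azure Data Factory, the Wait for job loop that this entry describes can be approximated by hand, which is handy when testing an endpoint before wiring up the pipeline. The sketch below is only an illustration under stated assumptions: the job URI is a placeholder returned by the invoke call, the token is requested for the Azure Machine Learning resource (`https://ml.azure.com`), and the `status` field name is assumed for the sake of the example.

```python
import time

import requests
from azure.identity import DefaultAzureCredential

# Placeholders: the job URI comes back from invoking the batch endpoint, and
# 120 seconds mirrors the pipeline's default poll_interval parameter.
job_uri = "<URI of the job returned when the batch endpoint was invoked>"
poll_interval = 120

token = DefaultAzureCredential().get_token("https://ml.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Poll until the job reaches a terminal state, as the Wait for job activity does.
while True:
    job = requests.get(job_uri, headers=headers).json()
    status = job.get("status")  # field name assumed for illustration
    print(f"Current status: {status}")
    if status in ("Completed", "Failed", "Canceled"):
        break
    time.sleep(poll_interval)
```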
machine-learning | Migrate To V2 Local Runs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-local-runs.md | This article gives a comparison of scenario(s) in SDK v1 and SDK v2. ## Next steps -* [Train models with Azure Machine Learning](concept-train-machine-learning-model.md) +* [Train models with Azure Machine Learning](concept-train-machine-learning-model.md) |
machine-learning | Monitor Azure Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-azure-machine-learning.md | When you have critical applications and business processes relying on Azure reso > * [Start, monitor, and cancel training runs](how-to-track-monitor-analyze-runs.md) > * [Log metrics for training runs](how-to-log-view-metrics.md) > * [Track experiments with MLflow](how-to-use-mlflow.md)-> * [Visualize runs with TensorBoard](how-to-monitor-tensorboard.md) +> * [Visualize runs with TensorBoard](v1/how-to-monitor-tensorboard.md) > > If you want to monitor information generated by models deployed to online endpoints, see [Monitor online endpoints](how-to-monitor-online-endpoints.md). |
machine-learning | Quickstart Spark Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-jobs.md | The Azure Machine Learning integration, with Azure Synapse Analytics (preview), In this quickstart guide, you'll learn how to submit a Spark job using Azure Machine Learning Managed (Automatic) Spark compute, Azure Data Lake Storage (ADLS) Gen 2 storage account, and user identity passthrough in a few simple steps. +See [this resource](./apache-spark-azure-ml-concepts.md) for more information about **Apache Spark in Azure Machine Learning** concepts. + ## Prerequisites # [CLI](#tab/cli) |
machine-learning | Concept Azure Machine Learning Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md | Azure Machine Learning provides the following monitoring and logging capabilitie * [Start, monitor, and cancel training runs](../how-to-track-monitor-analyze-runs.md) * [Log metrics for training runs](../how-to-log-view-metrics.md) * [Track experiments with MLflow](../how-to-use-mlflow.md)- * [Visualize runs with TensorBoard](../how-to-monitor-tensorboard.md) + * [Visualize runs with TensorBoard](how-to-monitor-tensorboard.md) * For **Administrators**, you can monitor information about the workspace, related Azure resources, and events such as resource creation and deletion by using Azure Monitor. For more information, see [How to monitor Azure Machine Learning](../monitor-azure-machine-learning.md). * For **DevOps** or **MLOps**, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md). |
machine-learning | How To Migrate From Estimators To Scriptrunconfig | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-migrate-from-estimators-to-scriptrunconfig.md | myenv.environment_variables = {"MESSAGE":"Hello from Azure Machine Learning"} For information on configuring and managing Azure ML environments, see: * [How to use environments](how-to-use-environments.md) * [Curated environments](../resource-curated-environments.md)-* [Train with a custom Docker image](../how-to-train-with-custom-image.md) +* [Train with a custom Docker image](how-to-train-with-custom-image.md) ## Using data for training ### Datasets |
machine-learning | How To Monitor Tensorboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-monitor-tensorboard.md | + + Title: Visualize experiments with TensorBoard ++description: Launch TensorBoard to visualize experiment job histories and identify potential areas for hyperparameter tuning and retraining. ++++++ Last updated : 10/21/2021+++++[//]: # (needs PM review; Do URL Links names change if it includes 'Run') ++# Visualize experiment jobs and metrics with TensorBoard and Azure Machine Learning +++In this article, you learn how to view your experiment jobs and metrics in TensorBoard using [the `tensorboard` package](/python/api/azureml-tensorboard/) in the main Azure Machine Learning SDK. Once you've inspected your experiment jobs, you can better tune and retrain your machine learning models. ++[TensorBoard](/python/api/azureml-tensorboard/azureml.tensorboard.tensorboard) is a suite of web applications for inspecting and understanding your experiment structure and performance. ++How you launch TensorBoard with Azure Machine Learning experiments depends on the type of experiment: ++ If your experiment natively outputs log files that are consumable by TensorBoard, such as PyTorch, Chainer and TensorFlow experiments, then you can [launch TensorBoard directly](#launch-tensorboard) from experiment's job history. +++ For experiments that don't natively output TensorBoard consumable files, such as like Scikit-learn or Azure Machine Learning experiments, use [the `export_to_tensorboard()` method](#option-2-export-history-as-log-to-view-in-tensorboard) to export the job histories as TensorBoard logs and launch TensorBoard from there. ++> [!TIP] +> The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](../monitor-azure-machine-learning.md). ++## Prerequisites ++* To launch TensorBoard and view your experiment job histories, your experiments need to have previously enabled logging to track its metrics and performance. +* The code in this document can be run in either of the following environments: + * Azure Machine Learning compute instance - no downloads or installation necessary + * Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository. + * In the samples folder on the notebook server, find two completed and expanded notebooks by navigating to these directories: + * **SDK v1 > how-to-use-azureml > track-and-monitor-experiments > tensorboard > export-run-history-to-tensorboard > export-run-history-to-tensorboard.ipynb** + * **SDK v1 > how-to-use-azureml > track-and-monitor-experiments > tensorboard > tensorboard > tensorboard.ipynb** + * Your own Juptyer notebook server + * [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) with the `tensorboard` extra + * [Create an Azure Machine Learning workspace](../quickstart-create-resources.md). + * [Create a workspace configuration file](how-to-configure-environment-v1.md). 
++## Option 1: Directly view job history in TensorBoard ++This option works for experiments that natively outputs log files consumable by TensorBoard, such as PyTorch, Chainer, and TensorFlow experiments. If that is not the case of your experiment, use [the `export_to_tensorboard()` method](#option-2-export-history-as-log-to-view-in-tensorboard) instead. ++The following example code uses the [MNIST demo experiment](https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py) from TensorFlow's repository in a remote compute target, Azure Machine Learning Compute. Next, we will configure and start a job for training the TensorFlow model, and then +start TensorBoard against this TensorFlow experiment. ++### Set experiment name and create project folder ++Here we name the experiment and create its folder. + +```python +from os import path, makedirs +experiment_name = 'tensorboard-demo' ++# experiment folder +exp_dir = './sample_projects/' + experiment_name ++if not path.exists(exp_dir): + makedirs(exp_dir) ++``` ++### Download TensorFlow demo experiment code ++TensorFlow's repository has an MNIST demo with extensive TensorBoard instrumentation. We do not, nor need to, alter any of this demo's code for it to work with Azure Machine Learning. In the following code, we download the MNIST code and save it in our newly created experiment folder. ++```python +import requests +import os ++tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py") +with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file: + file.write(tf_code.text) +``` +Throughout the MNIST code file, mnist_with_summaries.py, notice that there are lines that call `tf.summary.scalar()`, `tf.summary.histogram()`, `tf.summary.FileWriter()` etc. These methods group, log, and tag key metrics of your experiments into job history. The `tf.summary.FileWriter()` is especially important as it serializes the data from your logged experiment metrics, which allows for TensorBoard to generate visualizations off of them. ++ ### Configure experiment ++In the following, we configure our experiment and set up directories for logs and data. These logs will be uploaded to the job history, which TensorBoard accesses later. ++> [!Note] +> For this TensorFlow example, you will need to install TensorFlow on your local machine. Further, the TensorBoard module (that is, the one included with TensorFlow) must be accessible to this notebook's kernel, as the local machine is what runs TensorBoard. ++```Python +import azureml.core +from azureml.core import Workspace +from azureml.core import Experiment ++ws = Workspace.from_config() ++# create directories for experiment logs and dataset +logs_dir = os.path.join(os.curdir, "logs") +data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data")) ++if not path.exists(data_dir): + makedirs(data_dir) ++os.environ["TEST_TMPDIR"] = data_dir ++# Writing logs to ./logs results in their being uploaded to the job history, +# and thus, made accessible to our TensorBoard instance. +args = ["--log_dir", logs_dir] ++# Create an experiment +exp = Experiment(ws, experiment_name) +``` ++### Create a cluster for your experiment +We create an AmlCompute cluster for this experiment, however your experiments can be created in any environment and you are still able to launch TensorBoard against the experiment job history. 
++```Python +from azureml.core.compute import ComputeTarget, AmlCompute ++cluster_name = "cpu-cluster" ++cts = ws.compute_targets +found = False +if cluster_name in cts and cts[cluster_name].type == 'AmlCompute': + found = True + print('Found existing compute target.') + compute_target = cts[cluster_name] +if not found: + print('Creating a new compute target...') + compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', + max_nodes=4) ++ # create the cluster + compute_target = ComputeTarget.create(ws, cluster_name, compute_config) ++compute_target.wait_for_completion(show_output=True, min_node_count=None) ++# use get_status() to get a detailed status for the current cluster. +# print(compute_target.get_status().serialize()) +``` +++### Configure and submit training job ++Configure a training job by creating a ScriptRunConfig object. ++```Python +from azureml.core import ScriptRunConfig +from azureml.core import Environment ++# Here we will use the TensorFlow 2.2 curated environment +tf_env = Environment.get(ws, 'AzureML-TensorFlow-2.2-GPU') ++src = ScriptRunConfig(source_directory=exp_dir, + script='mnist_with_summaries.py', + arguments=args, + compute_target=compute_target, + environment=tf_env) +run = exp.submit(src) +``` ++### Launch TensorBoard ++You can launch TensorBoard during your run or after it completes. In the following, we create a TensorBoard object instance, `tb`, that takes the experiment job history loaded in the `run`, and then launches TensorBoard with the `start()` method. + The [TensorBoard constructor](/python/api/azureml-tensorboard/azureml.tensorboard.tensorboard) takes an array of jobs, so be sure to pass it in as a single-element array. ++```python +from azureml.tensorboard import Tensorboard ++tb = Tensorboard([run]) ++# If successful, start() returns a string with the URI of the instance. +tb.start() ++# After your job completes, be sure to stop() the streaming otherwise it will continue to run. +tb.stop() +``` ++> [!Note] +> While this example used TensorFlow, TensorBoard can be used as easily with PyTorch or Chainer. TensorFlow must be available on the machine running TensorBoard, but is not necessary on the machine doing PyTorch or Chainer computations. +++## Option 2: Export history as log to view in TensorBoard ++The following code sets up a sample experiment, begins the logging process using the Azure Machine Learning job history APIs, and exports the experiment job history into logs consumable by TensorBoard for visualization. ++### Set up experiment ++The following code sets up a new experiment and names the job directory `root_run`. ++```python +from azureml.core import Workspace, Experiment +import azureml.core ++# set experiment name and job name +ws = Workspace.from_config() +experiment_name = 'export-to-tensorboard' +exp = Experiment(ws, experiment_name) +root_run = exp.start_logging() +``` ++Here we load the diabetes dataset-- a built-in small dataset that comes with scikit-learn, and split it into test and training sets. 
++```Python +from sklearn.datasets import load_diabetes +from sklearn.linear_model import Ridge +from sklearn.metrics import mean_squared_error +from sklearn.model_selection import train_test_split +X, y = load_diabetes(return_X_y=True) +columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] +x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) +data = { + "train":{"x":x_train, "y":y_train}, + "test":{"x":x_test, "y":y_test} +} +``` ++### Run experiment and log metrics ++For this code, we train a linear regression model and log key metrics, the alpha coefficient, `alpha`, and mean squared error, `mse`, in run history. ++```Python +from tqdm import tqdm +alphas = [.1, .2, .3, .4, .5, .6 , .7] +# try a bunch of alpha values in a Linear Regression (aka Ridge regression) mode +for alpha in tqdm(alphas): + # create child runs and fit lines for the resulting models + with root_run.child_run("alpha" + str(alpha)) as run: + + reg = Ridge(alpha=alpha) + reg.fit(data["train"]["x"], data["train"]["y"]) + + preds = reg.predict(data["test"]["x"]) + mse = mean_squared_error(preds, data["test"]["y"]) + # End train and eval ++# log alpha, mean_squared_error and feature names in run history + root_run.log("alpha", alpha) + root_run.log("mse", mse) +``` ++### Export jobs to TensorBoard ++With the SDK's [export_to_tensorboard()](/python/api/azureml-tensorboard/azureml.tensorboard.export) method, we can export the job history of our Azure machine learning experiment into TensorBoard logs, so we can view them via TensorBoard. ++In the following code, we create the folder `logdir` in our current working directory. This folder is where we will export our experiment job history and logs from `root_run` and then mark that job as completed. ++```Python +from azureml.tensorboard.export import export_to_tensorboard +import os ++logdir = 'exportedTBlogs' +log_path = os.path.join(os.getcwd(), logdir) +try: + os.stat(log_path) +except os.error: + os.mkdir(log_path) +print(logdir) ++# export job history for the project +export_to_tensorboard(root_run, logdir) ++root_run.complete() +``` ++> [!Note] +> You can also export a particular run to TensorBoard by specifying the name of the run `export_to_tensorboard(run_name, logdir)` ++### Start and stop TensorBoard +Once our job history for this experiment is exported, we can launch TensorBoard with the [start()](/python/api/azureml-tensorboard/azureml.tensorboard.tensorboard#start-start-browser-false-) method. ++```Python +from azureml.tensorboard import Tensorboard ++# The TensorBoard constructor takes an array of jobs, so be sure and pass it in as a single-element array here +tb = Tensorboard([], local_root=logdir, port=6006) ++# If successful, start() returns a string with the URI of the instance. +tb.start() +``` ++When you're done, make sure to call the [stop()](/python/api/azureml-tensorboard/azureml.tensorboard.tensorboard#stop--) method of the TensorBoard object. Otherwise, TensorBoard will continue to run until you shut down the notebook kernel. ++```python +tb.stop() +``` ++## Next steps ++In this how-to you, created two experiments and learned how to launch TensorBoard against their job histories to identify areas for potential tuning and retraining. ++* If you are satisfied with your model, head over to our [How to deploy a model](how-to-deploy-and-where.md) article. +* Learn more about [hyperparameter tuning](../how-to-tune-hyperparameters.md). |
machine-learning | How To Secure Workspace Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-workspace-vnet.md | When your Azure Machine Learning workspace is configured with a private endpoint When ACR is behind a virtual network, Azure Machine Learning can't use it to directly build Docker images. Instead, the compute cluster is used to build the images. > [!IMPORTANT]-> The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](how-to-use-private-python-packages.md), or use [custom Docker images](../how-to-train-with-custom-image.md) that already include the packages. +> The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](how-to-use-private-python-packages.md), or use [custom Docker images](how-to-train-with-custom-image.md) that already include the packages. > [!WARNING] > If your Azure Container Registry uses a private endpoint or service endpoint to communicate with the virtual network, you cannot use a managed identity with an Azure Machine Learning compute cluster. |
machine-learning | How To Train Pytorch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md | pytorch_env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1 ``` > [!TIP]-> Optionally, you can just capture all your dependencies directly in a custom Docker image or Dockerfile, and create your environment from that. For more information, see [Train with custom image](../how-to-train-with-custom-image.md). +> Optionally, you can just capture all your dependencies directly in a custom Docker image or Dockerfile, and create your environment from that. For more information, see [Train with custom image](how-to-train-with-custom-image.md). For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md). |
machine-learning | How To Train Tensorflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-tensorflow.md | tf_env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1-cudn ``` > [!TIP]-> Optionally, you can just capture all your dependencies directly in a custom Docker image or Dockerfile, and create your environment from that. For more information, see [Train with custom image](../how-to-train-with-custom-image.md). +> Optionally, you can just capture all your dependencies directly in a custom Docker image or Dockerfile, and create your environment from that. For more information, see [Train with custom image](how-to-train-with-custom-image.md). For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md). |
machine-learning | How To Train With Custom Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-custom-image.md | + + Title: Train a model by using a custom Docker image ++description: Learn how to use your own Docker images, or curated ones from Microsoft, to train models in Azure Machine Learning. ++++++ Last updated : 08/11/2021+++++# Train a model by using a custom Docker image +++In this article, learn how to use a custom Docker image when you're training models with Azure Machine Learning. You'll use the example scripts in this article to classify pet images by creating a convolutional neural network. ++Azure Machine Learning provides a default Docker base image. You can also use Azure Machine Learning environments to specify a different base image, such as one of the maintained [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers) or your own [custom image](../how-to-deploy-custom-container.md). Custom base images allow you to closely manage your dependencies and maintain tighter control over component versions when running training jobs. ++## Prerequisites ++Run the code on either of these environments: ++* Azure Machine Learning compute instance (no downloads or installation necessary): + * Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) tutorial to create a dedicated notebook server preloaded with the SDK and the sample repository. +* Your own Jupyter Notebook server: + * Create a [workspace configuration file](../how-to-configure-environment.md#local-and-dsvm-only-create-a-workspace-configuration-file). + * Install the [Azure Machine Learning SDK](/python/api/overview/azure/ml/install). + * Create an [Azure container registry](../../container-registry/index.yml) or other Docker registry that's available on the internet. ++## Set up a training experiment ++In this section, you set up your training experiment by initializing a workspace, defining your environment, and configuring a compute target. ++### Initialize a workspace ++The [Azure Machine Learning workspace](../concept-workspace.md) is the top-level resource for the service. It gives you a centralized place to work with all the artifacts that you create. In the Python SDK, you can access the workspace artifacts by creating a [`Workspace`](/python/api/azureml-core/azureml.core.workspace.workspace) object. ++Create a `Workspace` object from the config.json file that you created as a [prerequisite](#prerequisites). ++```Python +from azureml.core import Workspace ++ws = Workspace.from_config() +``` ++### Define your environment ++Create an `Environment` object. ++```python +from azureml.core import Environment ++fastai_env = Environment("fastai2") +``` ++The specified base image in the following code supports the fast.ai library, which allows for distributed deep-learning capabilities. For more information, see the [fast.ai Docker Hub repository](https://hub.docker.com/u/fastdotai). ++When you're using your custom Docker image, you might already have your Python environment properly set up. In that case, set the `user_managed_dependencies` flag to `True` to use your custom image's built-in Python environment. By default, Azure Machine Learning builds a Conda environment with dependencies that you specified. The service runs the script in that environment instead of using any Python libraries that you installed on the base image. 
++```python +fastai_env.docker.base_image = "fastdotai/fastai2:latest" +fastai_env.python.user_managed_dependencies = True +``` ++#### Use a private container registry (optional) ++To use an image from a private container registry that isn't in your workspace, use `docker.base_image_registry` to specify the address of the repository and a username and password: ++```python +# Set the container registry information. +fastai_env.docker.base_image_registry.address = "myregistry.azurecr.io" +fastai_env.docker.base_image_registry.username = "username" +fastai_env.docker.base_image_registry.password = "password" +``` ++#### Use a custom Dockerfile (optional) ++It's also possible to use a custom Dockerfile. Use this approach if you need to install non-Python packages as dependencies. Remember to set the base image to `None`. ++```python +# Specify Docker steps as a string. +dockerfile = r""" +FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1 +RUN echo "Hello from custom container!" +""" ++# Set the base image to None, because the image is defined by Dockerfile. +fastai_env.docker.base_image = None +fastai_env.docker.base_dockerfile = dockerfile ++# Alternatively, load the string from a file. +fastai_env.docker.base_image = None +fastai_env.docker.base_dockerfile = "./Dockerfile" +``` ++>[!IMPORTANT] +> Azure Machine Learning only supports Docker images that provide the following software: +> * Ubuntu 18.04 or greater. +> * Conda 4.7.# or greater. +> * Python 3.7+. +> * A POSIX compliant shell available at /bin/sh is required in any container image used for training. ++For more information about creating and managing Azure Machine Learning environments, see [Create and use software environments](../how-to-use-environments.md). ++### Create or attach a compute target ++You need to create a [compute target](concept-azure-machine-learning-architecture.md#compute-targets) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource. ++Creation of `AmlCompute` takes a few minutes. If the `AmlCompute` resource is already in your workspace, this code skips the creation process. ++As with other Azure services, there are limits on certain resources (for example, `AmlCompute`) associated with the Azure Machine Learning service. For more information, see [Default limits and how to request a higher quota](../how-to-manage-quotas.md). ++```python +from azureml.core.compute import ComputeTarget, AmlCompute +from azureml.core.compute_target import ComputeTargetException ++# Choose a name for your cluster. +cluster_name = "gpu-cluster" ++try: + compute_target = ComputeTarget(workspace=ws, name=cluster_name) + print('Found existing compute target.') +except ComputeTargetException: + print('Creating a new compute target...') + compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', + max_nodes=4) ++ # Create the cluster. + compute_target = ComputeTarget.create(ws, cluster_name, compute_config) ++ compute_target.wait_for_completion(show_output=True) ++# Use get_status() to get a detailed status for the current AmlCompute. +print(compute_target.get_status().serialize()) +``` +++>[!IMPORTANT] +>Use CPU SKUs for any image build on compute. +++## Configure your training job ++For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/workflows/train/fastai/pets/src/train.py). 
In practice, you can take any custom training script and run it, as is, with Azure Machine Learning. ++Create a `ScriptRunConfig` resource to configure your job for running on the desired [compute target](how-to-set-up-training-targets.md). ++```python +from azureml.core import ScriptRunConfig ++src = ScriptRunConfig(source_directory='fastai-example', + script='train.py', + compute_target=compute_target, + environment=fastai_env) +``` ++## Submit your training job ++When you submit a training run by using a `ScriptRunConfig` object, the `submit` method returns an object of type `ScriptRun`. The returned `ScriptRun` object gives you programmatic access to information about the training run. ++```python +from azureml.core import Experiment ++run = Experiment(ws,'Tutorial-fastai').submit(src) +run.wait_for_completion(show_output=True) +``` ++> [!WARNING] +> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use an [.ignore file](../concept-train-machine-learning-model.md#understand-what-happens-when-you-submit-a-training-job) or don't include it in the source directory. Instead, access your data by using a [datastore](/python/api/azureml-core/azureml.data). ++## Next steps +In this article, you trained a model by using a custom Docker image. See these other articles to learn more about Azure Machine Learning: +* [Track run metrics](../how-to-log-view-metrics.md) during training. +* [Deploy a model](../how-to-deploy-custom-container.md) by using a custom Docker image. |
machine-learning | How To Use Reinforcement Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-reinforcement-learning.md | -> Azure Machine Learning reinforcement learning via the [`azureml.contrib.train.rl`](/python/api/azureml-contrib-reinforcementlearning/azureml.contrib.train.rl) package will no longer be supported after June 2022. We recommend customers use the [Ray on Azure Machine Learning library](https://github.com/microsoft/ray-on-aml) for reinforcement learning experiments with Azure Machine Learning. For an example, see the notebook [Reinforcement Learning in Azure Machine Learning - Pong problem](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb). +> This article uses the [`azureml.contrib.train.rl`](/python/api/azureml-contrib-reinforcementlearning/azureml.contrib.train.rl) package, which will no longer be supported after June 2022. We recommend customers instead use the [Ray on Azure Machine Learning library](https://github.com/microsoft/ray-on-aml) for reinforcement learning experiments with Azure Machine Learning. For an example, see the notebook [Reinforcement Learning in Azure Machine Learning - Pong problem](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb). In this article, you learn how to train a reinforcement learning (RL) agent to play the video game Pong. You use the open-source Python library [Ray RLlib](https://docs.ray.io/en/master/rllib/) with Azure Machine Learning to manage the complexity of distributed RL. |
managed-grafana | Find Help Open Support Ticket | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/find-help-open-support-ticket.md | + + Title: Find help or open a support ticket for Azure Managed Grafana +description: Learn how to find help or open a support ticket for Azure Managed Grafana +++ Last updated : 01/23/2023++++# Find help or open a support ticket for Azure Managed Grafana ++In the page below, find out how you can get technical information about Azure Managed Grafana, look up answers to your questions or open a support ticket. ++## Find help without opening a support ticket ++Before creating a support ticket, check out the following resources for answers and information. ++* [Technical documentation for Azure Managed Grafana](/index.yml): find content such as how-to guides, tutorials and the [troubleshooting guide](troubleshoot-managed-grafana.md) for Azure Managed Grafana. +* [Microsoft Q&A](/answers/tags/249/azure-managed-grafana): browse existing questions and answers, and ask your questions around Azure Managed Grafana. +* [Microsoft Technical Community](https://techcommunity.microsoft.com/) is the place for IT professionals and customers to collaborate, share, and learn. The website contains [Grafana-related content](https://techcommunity.microsoft.com/t5/forums/searchpage/tab/message?q=grafana). ++## Open a support ticket ++If you're unable to find answers using the above self-help resources, open an online support ticket. ++### How to open a support ticket for Azure Managed Grafana in the Azure portal ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Open an Azure Managed Grafana instance. +1. In the left menu, under **Support + troubleshooting**, select **New Support Request**. + :::image type="content" source="media/support/open-ticket.png" alt-text="Screenshot of how to find help and submit support ticket part 1."::: +1. In the **1. Problem description** tab: + 1. For **Summary**, describe your issue. + 1. For **Issue type**, select **Technical**. + 1. For **Subscription**, select your Azure subscription. + 1. For **Service**, select **Azure Managed Grafana**. + 1. For **Resource**, select your resource. + 1. For **Problem type**, select a type of issue and then a subtype. + 1. Select **Next**. ++1. In the **2. Recommended solution** tab, read the recommended solution. If it doesn't resolve your problem, select the back arrow or close the solution with **X**, and then select **Next**. +1. In the **3. Additional details** tab, fill out the required details. For example: + 1. Share the time and date when the problem occurred. + 1. Add more details to describe the problem. + 1. Optionally, add a screenshot or another file type under **File upload** + 1. Under **Advanced diagnostic information**, select **Yes (Recommended)** to allow Microsoft support to access your Azure resources for faster problem resolution. + 1. Select a **[Severity](https://azure.microsoft.com/support/plans/response)**, and your preferred contact method. + 1. Select your preferred **Support language**. + 1. Enter your **contact information** +1. Select **Next**. +1. Under **4. Review + create**, check that the summary of your support ticket is accurate and then select **Create** to open a support ticket, or select **Previous** to amend your submission. ++1. If the details of your support ticket are accurate, select **Create** to submit the support ticket. Otherwise, select **Previous** to make corrections. 
++## Next steps ++> [!div class="nextstepaction"] +> [Microsoft Q&A](/answers/tags/249/azure-managed-grafana) ++> [!div class="nextstepaction"] +> [Troubleshooting](troubleshoot-managed-grafana.md) |
managed-grafana | Troubleshoot Managed Grafana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md | Data sources configured with a managed identity may still be able to access data ## Next steps > [!div class="nextstepaction"]-> [Configure data sources](./how-to-data-source-plugins-managed-identity.md) +> [Support](./find-help-open-support-ticket.md) |
marketplace | Azure Vm Plan Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-manage.md | Complete these steps when you are notified that new vCPU sizes are now supported 1. On the **Offer overview** page, under **Plan overview**, select a plan within your offer. 1. In the left-nav menu, select **Pricing and availability**. 1. Do one of the following:- - If either the _Per_ *vCPU* _size_ or _Per market and_ vCPU _size_ price entry options are used, under **Pricing**, verify the price and make any necessary adjustments for the new vCPU sizes that have been added. - - If your price entry option is set to _Free_, _Flat rate_, or _Per_ *vCPU*, go to step 7. + - If either the _Per_ *vCPU* _size_ or _Per market and_ vCPU _size_ price input options are used, under **Pricing**, verify the price and make any necessary adjustments for the new vCPU sizes that have been added. + - If your price input option is set to _Free_, _Flat rate_, or _Per_ *vCPU*, go to step 7. 1. Select **Save draft** and then **Review and publish**. After the offer is republished, the new vCPU sizes will be available to your customers at the prices that you have set. + |
marketplace | Azure Vm Plan Pricing And Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-pricing-and-availability.md | On this pane, you configure: Every plan must be available in at least one market. Most markets are selected by default. To edit the list, select **Edit markets** and select or clear check boxes for each market location where this plan should (or shouldn't) be available for purchase. Users in selected markets can still deploy the offer to all Azure regions selected in the ["Plan setup"](azure-vm-plan-setup.md) section. -Select **Select only Microsoft Tax** Remitted to select only countries/regions in which Microsoft remits sales and use tax on your behalf. Publishing to China is limited to plans that are either *Free* or *Bring-your-own-license* (BYOL). +Select **Select only Microsoft Tax** Remitted to select only countries/regions in which Microsoft remits sales and uses tax on your behalf. Publishing to China is limited to plans that are either *Free* or *Bring-your-own-license* (BYOL). If you've already set prices for your plan in US dollar (USD) currency and add another market location, the price for the new market is calculated according to current exchange rates. Always review the price for each market before you publish. Review your pricing by selecting **Export prices (xlsx)** after you save your modifications. For a usage-based monthly billed plan, Microsoft will charge the customer for th - **Per** **vCPU** **size** – Your VM offer is priced based on the number of vCPU on the hardware it's deployed on. - **Per market and** **vCPU** **size** – Assign prices based on the number of vCPU on the hardware it's deployed on, and for all markets. Currency conversion is done by you, the publisher. This option is easier if you use the import pricing feature. -For **Per** vCPU **size** and **Per market and** vCPU **size**, enter s **Price per** vCPU, and then select **Generate prices**. The tables of price/hour calculations are populated for you. You can then adjust the price per vCPU if you choose. If using the *Per market and* vCPU *size* pricing option, you can additionally customize the price/hour calculation tables for each market that's selected for this plan. +For **Per** vCPU **size** and **Per market and** **vCPU** **size**, enter a **Price per** vCPU, and then select **Generate prices**. The tables of price/hour calculations are populated for you. You can then adjust the price per vCPU if you choose. If using the *Per market and* vCPU *size* pricing option, you can additionally customize the price/hour calculation tables for each market that's selected for this plan. > [!NOTE] > To ensure the prices are right before you publish them, export the pricing spreadsheet, and review them in each market. Before you export pricing data, first select **Save draft** to save pricing changes. A hidden plan is not visible on Azure Marketplace and can only be deployed throu ## Next steps - [Technical configuration](azure-vm-plan-technical-configuration.md)+ |
mysql | Concepts Customer Managed Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md | -With data encryption with customer-managed keys for Azure Database for MySQL - Flexible Server Preview, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. With customer managed keys (CMKs), the customer is responsible for and ultimately controls the key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys. +With data encryption with customer-managed keys for Azure Database for MySQL - Flexible Server, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. With customer managed keys (CMKs), the customer is responsible for and ultimately controls the key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys. ## Benefits For Azure Database for MySQL flexible server, the support for encryption of data ## Next steps -- [Data encryption with Azure CLI (Preview)](how-to-data-encryption-cli.md)-- [Data encryption with Azure portal (Preview)](how-to-data-encryption-portal.md)+- [Data encryption with Azure CLI](how-to-data-encryption-cli.md) +- [Data encryption with Azure portal](how-to-data-encryption-portal.md) - [Security in encryption rest](../../security/fundamentals/encryption-atrest.md)-- [Active Directory authentication (Preview)](concepts-azure-ad-authentication.md)+- [Active Directory authentication](concepts-azure-ad-authentication.md) |
mysql | How To Connect Tls Ssl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md | To use encrypted connections with your client applications, you need to download :::image type="content" source="./media/how-to-connect-tls-ssl/download-ssl.png" alt-text="Screenshot showing how to download public SSL certificate from Azure portal."::: +> [!NOTE] +> You must download this [SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) for your servers in Azure Government cloud. ++ Save the certificate file to your preferred location. For example, this tutorial uses `c:\ssl` or `\var\www\html\bin` on your local environment or the client environment where your application is hosted. This allows applications to connect securely to the database over SSL. If you created your flexible server with *Private access (VNet Integration)*, you need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your flexible server. |
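As an illustration of the connection step described in the entry above, here's a minimal Python sketch (using the mysql-connector-python package) that opens a TLS connection with the downloaded CA certificate. The server name, credentials, database name, and certificate path are placeholders, not values taken from the article.

```python
import mysql.connector  # assumes the mysql-connector-python package is installed

# Placeholders: substitute your own server name, credentials, database,
# and the path where you saved the downloaded CA certificate.
conn = mysql.connector.connect(
    host="<servername>.mysql.database.azure.com",
    user="<admin-user>",
    password="<password>",
    database="<database>",
    ssl_ca="c:/ssl/DigiCertGlobalRootCA.crt.pem",  # CA certificate file saved earlier
    ssl_verify_cert=True,  # fail the connection if the server certificate can't be validated
)
print("TLS connection established:", conn.is_connected())
conn.close()
```

The same pattern applies to other client libraries: point the client at the CA file and require certificate verification.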
mysql | How To Data Encryption Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-cli.md | -This tutorial shows you how to set up and manage data encryption for your Azure Database for MySQL - Flexible Server using Azure CLI preview. +This tutorial shows you how to set up and manage data encryption for your Azure Database for MySQL - Flexible Server using Azure CLI. In this tutorial, you learn how to: The params **identityUri** and **primaryKeyUri** are the resource ID of the user ## Next steps -- [Customer managed keys data encryption (Preview)](concepts-customer-managed-key.md)-- [Data encryption with Azure portal (Preview)](how-to-data-encryption-portal.md)+- [Customer managed keys data encryption](concepts-customer-managed-key.md) +- [Data encryption with Azure portal](how-to-data-encryption-portal.md) |
mysql | How To Data Encryption Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-portal.md | After your Azure Database for MySQL flexible server is encrypted with a customer ## Next steps -- [Customer managed keys data encryption (Preview)](concepts-customer-managed-key.md)-- [Data encryption with Azure CLI (Preview)](how-to-data-encryption-cli.md)+- [Customer managed keys data encryption](concepts-customer-managed-key.md) +- [Data encryption with Azure CLI](how-to-data-encryption-cli.md) |
networking | Networking Partners Msp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md | Use the links in this section for more information about managed cloud networkin |[Lumen](https://www.lumen.com/en-us/solutions/hybrid-cloud.html)||[ExpressRoute Consulting |[Macquarie Telecom](https://macquariecloudservices.com/azure-managed-services/)|[Azure Managed Services by Macquarie Cloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_services?tab=Overview); [Azure Extend by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_extend?tab=Overview)||[Azure Deploy by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview); [SD-WAN Virtual Edge offer by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview)||[Managed Security by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_security?tab=Overview)| |[Megaport](https://www.megaport.com/services/microsoft-expressroute/)||[Managed Routing Service for ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/megaport1582290752989.megaport_mcr?tab=Overview)||||-|[Netfosys](https://www.netfosys.com/services/azure-networking-services/)|||[Netfosys Managed Services for Azure vWAN](https://azuremarketplace.microsoft.com/en-ca/marketplace/apps/netfosys1637934664103.azure-vwan?tab=Overview)||| +|[Netfosys](https://www.netfosys.com/azurewan)|||[Netfosys Managed Services for Azure vWAN](https://azuremarketplace.microsoft.com/en-ca/marketplace/apps/netfosys1637934664103.azure-vwan?tab=Overview)||| |[Nokia](https://www.nokia.com/networks/services/managed-services/)|||[NBConsult Nokia Nuage SDWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nbconsult1588859334197.nbconsult-nokia-nuage?tab=Overview); [Nuage SD-WAN 2.0 Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.nuage_sd-wan_2-0_azure_virtual_wan?tab=Overview)|[Nokia 4G & 5G Private Wireless (NDAC)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.ndac_5g-ready_private_wireless?tab=Overview)| |[NTT Ltd](https://www.nttglobal.net/)|[Azure Cloud Discovery: 2-Week Workshop](https://azuremarketplace.microsoft.com/marketplace/apps/capside.replica-azure-cloud-governance-capside?tab=Overview)|NTT Managed ExpressRoute Service;NTT Managed IP VPN Service|NTT Managed SD-WAN Service||| |[NTT Data](https://www.nttdata.com/global/en/services/cloud)|[Managed Use the links in this section for more information about managed cloud networkin |[Zertia](https://zertia.es/)||[ExpressRoute ΓÇô Intercloud connectivity](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-inter-conect-of103?tab=Overview)|[Enterprise Connectivity Suite - Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-vw-suite-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-fortinet-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Cisco 
Meraki](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-cisco-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Citrix](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-citrix-of101?tab=Overview);||| Azure Marketplace offers for Managed ExpressRoute, Virtual WAN, Security Services and Private Edge Zone Services from the following Azure Networking MSP Partners are on our roadmap:-[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/cognizant-digital-systems-technology/cloud-enablement-services); [InterCloud](https://intercloud.com/partners/microsoft-azure/); [KINX](https://www.kinx.net/service/cloud/?lang=en); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute); +[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/us/en/glossary/cloud-enablement); [InterCloud](https://intercloud.com/what-we-do/partners/microsoft-azure); [KINX](https://www.kinx.net/service/cloud/?lang=en); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute); ## <a name="expressroute"></a>ExpressRoute partners |
networking | Nva Accelerated Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/nva-accelerated-connections.md | Title: Network connections performance optimization and NVAs description: Learn how Accelerated Connections improves Network Virtual Appliance (NVA) performance.-+ Last updated 02/01/2023 This feature is supported on all SKUs supported by Accelerated Networking except ## Next steps -Sign up for the [Preview](https://go.microsoft.com/fwlink/?linkid=2223706). +Sign up for the [Preview](https://go.microsoft.com/fwlink/?linkid=2223706). |
notification-hubs | Eu Data Boundary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/eu-data-boundary.md | -The EU Data Boundary (EUDB) is a response to increasing concerns about the transnational transfer of European Union customer personal data. Microsoft strives to foster trust in its services by limiting data transfer. +## Overview -## EUDB in Azure +The EU Data Boundary is a geographically defined boundary within which Microsoft has committed to store and process customer data for our major commercial enterprise online services, including Azure, Dynamics 365, Power Platform, and Microsoft 365, subject to limited circumstances where customer data will continue to be transferred outside the EU Data Boundary. Notification Hubs meets the EU Data Boundary commitment to store and process customer data. For more information about how to configure services for use in the EU Data Boundary, [see the Azure EU Data Boundary documentation](/privacy/eudb/eu-data-boundary-learn). -If you use the Azure portal to create an Azure Notification Hubs namespace in an EU country, your data will remain in the EU region, and will not be transferred outside the EU data boundary. A full list of countries in scope for EUDB is as follows: --- Austria-- Belgium-- Bulgaria-- Croatia-- Republic of Cyprus-- Czech Republic-- Denmark-- Estonia-- Finland-- France-- Germany-- Greece-- Hungary-- Ireland-- Italy-- Latvia-- Lithuania-- Luxembourg-- Malta-- Netherlands-- Poland-- Portugal-- Romania-- Slovakia-- Slovenia-- Spain-- Sweden+> [!IMPORTANT] +> For complete details about Microsoft's EU Data Boundary commitment, [see the Azure EU Data Boundary documentation](/privacy/eudb/eu-data-boundary-learn). ## Next steps |
partner-solutions | Dynatrace Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md | Title: Create Azure Native Dynatrace Service resource description: This article describes how to use the Azure portal to create an instance of Dynatrace.+ -- Previously updated : 10/12/2022 Last updated : 02/02/2023 When you use the integrated Dynatrace experience in Azure portal, the following :::image type="content" source="media/dynatrace-create/dynatrace-entities.png" alt-text="Flowchart showing three entities: Marketplace S A A S connecting to Dynatrace resource, connecting to Dynatrace environment."::: - **Dynatrace resource in Azure** - Using the Dynatrace resource, you can manage the Dynatrace environment in Azure. The resource is created in the Azure subscription and resource group that you select during the create process or linking process.-- **Dynatrace environment** - This is the Dynatrace environment on Dynatrace _Software as a Service_ (SaaS). When you create a new environment, the environment on Dynatrace SaaS is automatically created, in addition to the Dynatrace resource in Azure.+- **Dynatrace environment** - The Dynatrace environment on Dynatrace _Software as a Service_ (SaaS). When you create a new environment, the environment on Dynatrace SaaS is automatically created, in addition to the Dynatrace resource in Azure. - **Marketplace SaaS resource** - The SaaS resource is created automatically, based on the plan you select from the Dynatrace Marketplace offer. This resource is used for billing purposes. ## Prerequisites Use the Azure portal to find Azure Native Dynatrace Service application. 1. If you've visited the **Marketplace** in a recent session, select the icon from the available options. Otherwise, search for _Marketplace_. - :::image type="content" source="media/dynatrace-create/dynatrace-search-marketplace.png" alt-text="Screenshot showing a search for Marketplace in the Azure portal."::: + :::image type="content" source="media/dynatrace-create/dynatrace-search-marketplace.png" alt-text="Screenshot showing a search for Marketplace in the Azure portal."::: 1. In the Marketplace, search for _Dynatrace_.-- :::image type="content" source="media/dynatrace-create/dynatrace-subscribe.png" alt-text="Screenshot showing Dynatrace in the working pane to create a subscription."::: + :::image type="content" source="media/dynatrace-create/dynatrace-marketplace.png" alt-text="Screenshot showing the Azure Native Dynatrace Service offering."::: 1. Select **Subscribe**.+ :::image type="content" source="media/dynatrace-create/dynatrace-subscribe.png" alt-text="Screenshot showing Dynatrace in the working pane to create a subscription."::: ## Create a Dynatrace resource in Azure -1. When creating a Dynatrace resource, you see two options: one to create a new Dynatrace environment, and another to link Azure subscription to an existing Dynatrace environment. -- :::image type="content" source="media/dynatrace-create/dynatrace-create.png" alt-text="Screenshot offering to create a Dynatrace resource."::: --1. If you want to create a new Dynatrace environment, select **Create** action under the **Create a new Dynatrace environment** option - :::image type="content" source="media/dynatrace-create/dynatrace-create-new-link-existing.png" alt-text="Screenshot showing two options: new Dynatrace or existing Dynatrace."::: +1. 
When creating a Dynatrace resource, you see two options: one to create a new Dynatrace environment, and another to link Azure subscription to an existing Dynatrace environment. If you want to create a new Dynatrace environment, select **Create** action under the **Create a new Dynatrace environment** option. + :::image type="content" source="media/dynatrace-create/dynatrace-create-new-link-existing.png" alt-text="Screenshot showing two options: new Dynatrace or existing Dynatrace."::: 1. You see a form to create a Dynatrace resource in the working pane. - :::image type="content" source="media/dynatrace-create/dynatrace-basic-properties.png" alt-text="Screenshot of basic properties needed for new Dynatrace instance."::: + :::image type="content" source="media/dynatrace-create/dynatrace-basic-properties.png" alt-text="Screenshot of basic properties needed for new Dynatrace instance."::: -1. Provide the following values: + Provide the following values: | **Property** | **Description** | |--|-|- | Subscription | Select the Azure subscription you want to use for creating the Dynatrace resource. You must have owner or contributor access.| - | Resource group | Specify whether you want to create a new resource group or use an existing one. A [resource group](../../azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution. | - | Resource name | Specify a name for the Dynatrace resource. This name will be the friendly name of the new Dynatrace environment.| - | Location | Select the region. Select the region where the Dynatrace resource in Azure and the Dynatrace environment is created.| - | Pricing plan | Select from the list of available plans. | + | **Subscription** | Select the Azure subscription you want to use for creating the Dynatrace resource. You must have owner or contributor access.| + | **Resource group** | Specify whether you want to create a new resource group or use an existing one. A [resource group](../../azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution. | + | **Resource name** | Specify a name for the Dynatrace resource. This name will be the friendly name of the new Dynatrace environment.| + | **Location** | Select the region. Select the region where the Dynatrace resource in Azure and the Dynatrace environment is created.| + | **Pricing plan** | Select from the list of available plans. | ++1. Select **Next: Metrics and Logs**. ### Configure metrics and logs -1. Your next step is to configure metrics and logs. When creating the Dynatrace resource, you can set up automatic log forwarding for three types of logs: - :::image type="content" source="media/dynatrace-create/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs."::: +1. Your next step is to configure metrics and logs for your resources. Azure Native Dynatrace Service supports the metrics for both compute and non-compute resources. Compute resources include VMs, app services and more. If you have an _owner role_ in the subscription, you see the option to enable metrics collection. + + - **Metrics for compute resources** – Users can send metrics for the compute resources, virtual machines and app services, by installing the Dynatrace OneAgent extension on the compute resources after the Dynatrace resource has been created. 
+ - **Metrics for non-compute resources** – These metrics can be collected by configuring the Dynatrace resource to automatically query Azure monitor for metrics. To enable metrics collection, select the checkbox. If you have an **owner access** in your subscription, you can enable and disable the metrics collection using the checkbox. Proceed to the configuring logs. However, if you have contributor access, use the information in the following step. ++ +1. If you have a _contributor role_ in the subscription, you don't see the option to enable metrics collection because in Azure a contributor can't assign a _monitoring reader_ role to a resource that is required by the metrics crawler to collect metrics. ++ :::image type="content" source="media/dynatrace-create/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs."::: + + - - **Send subscription activity logs** - Subscription activity logs provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription. + Complete the resource provisioning excluding the metrics configuration and ask an owner to assign an appropriate role manually to your resource. If you have an _owner role_ in the subscription, you can take the following steps to grant a monitoring reader identity to a contributor user: - - **Send Azure resource logs for all defined sources** - Azure resource logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type. - - **Send Azure Active Directory logs** – Azure Active Directory logs allow you to route the audit, sign-in, and provisioning logs to Dynatrace. The details are listed in [Azure AD activity logs in Azure Monitor](/azure/active-directory/reports-monitoring/concept-activity-logs-azure-monitor). The global administrator or security administrator for your Azure Active Directory (AAD) tenant can enable AAD logs. + 1. Go to the resource created by a contributor. + + 1. Go to **Access control** in the resource menu on the left and select **Add** then **Add role assignment**. + :::image type="content" source="media/dynatrace-create/dynatrace-contributor-guide-1.png" alt-text="Screenshot showing the access control page."::: ++ 1. In the list, scroll down and select on **Monitoring reader**. Then, select **Next**. + :::image type="content" source="media/dynatrace-create/dynatrace-contributor-guide-2.png" alt-text="Screenshot showing the process for selecting Monitoring reader role."::: ++ 1. In **Assign access to**, select **Managed identity**. Then, **Select members**. + :::image type="content" source="media/dynatrace-create/dynatrace-contributor-guide-3.png" alt-text="Screenshot showing the process to assign a role to a managed identity."::: ++ 1. Select the **Subscription**. In **Managed identity**, select **Dynatrace** and the Dynatrace resource created by the contributor. After you select the resource, use **Select** to continue. 
+ :::image type="content" source="media/dynatrace-create/dynatrace-contributor-select.png" alt-text="Screenshot showing the Dynatrace resource with a new contributor selected."::: ++ 1. When you have completed the selection, select **Review + assign** + :::image type="content" source="media/dynatrace-create/dynatrace-review-and-assign.png" alt-text="Screenshot showing Add role assignment working pane with Review and assign with a red box around it."::: ++1. When creating the Dynatrace resource, you can set up automatic log forwarding for three types of logs: ++ - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription. ++ - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type. + + - **Azure Active Directory logs** – The global administrator or security administrator for your Azure Active Directory (Azure AD) tenant can enable Azure AD logs so that you can route the audit, sign-in, and provisioning logs to Dynatrace. The details are listed in [Azure AD activity logs in Azure Monitor](../../active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md). 1. To send subscription level logs to Dynatrace, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Dynatrace. -1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). +1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories). When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources. To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags: Use the Azure portal to find Azure Native Dynatrace Service application. The logs sent to Dynatrace are charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners. - > [!NOTE] - > Metrics for virtual machines and App Services can be collected by installing the Dynatrace OneAgent after the Dynatrace resource has been created. - 1. Once you have completed configuring metrics and logs, select **Next: Single sign-on**. ### Configure single sign-on |
partner-solutions | Dynatrace How To Configure Prereqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-configure-prereqs.md | Title: Configure pre-deployment to use Azure Native Dynatrace Service description: This article describes how to complete the prerequisites for Dynatrace on the Azure portal. -- Previously updated : 10/12/2022+ Last updated : 02/04/2023 |
partner-solutions | Dynatrace How To Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md | Title: Manage your Azure Native Dynatrace Service integration description: This article describes how to manage Dynatrace on the Azure portal. - - Previously updated : 10/12/2022 Last updated : 02/04/2023 The columns in the table denote important information for your resource: - **Total resources** - Count of all resources for the resource type. - **Logs to Dynatrace** - Count of resources sending logs to Dynatrace through the integration. -## Reconfigure rules for logs +## Reconfigure rules for metrics and logs To change the configuration rules for logs, select **Metrics and logs** in the Resource menu on the left. For more information, see [Configure metrics and logs](dynatrace-create.md#configure-metrics-and-logs). ## View monitored resources -To see the list of resources emitting logs to Dynatrace, select Monitored Resources in the left pane. +To see the list of resources emitting logs to Dynatrace, select **Monitored Resources** in the left pane. :::image type="content" source="media/dynatrace-how-to-manage/dynatrace-monitored-resources.png" alt-text="Screenshot showing monitored resources in the working pane."::: You can install Dynatrace OneAgent on virtual machines as an extension. Select * For each virtual machine, the following info is displayed: -| Column header | Definition of column | +| Column | Description | |||-| **Resource Name** | Virtual machine name | -| **Resource Status** | Indicates whether the virtual machine is stopped or running. Dynatrace OneAgent can only be installed on virtual machines that are running. If the virtual machine is stopped, installing the Dynatrace OneAgent will be disabled. | -| **OneAgent status** | Whether the Dynatrace OneAgent is running on the virtual machine | -| **OneAgent version** | The Dynatrace OneAgent version number | -| **Auto-update** | Whether auto-update has been enabled for the OneAgent | -| **Log monitoring** | Whether log monitoring option was selected when OneAgent was installed | -| **Monitoring mode** | Whether the Dynatrace OneAgent is monitoring hosts in [full-stack monitoring mode or infrastructure monitoring mode](https://www.dynatrace.com/support/help/how-to-use-dynatrace/hosts/basic-concepts/get-started-with-infrastructure-monitoring) | +| **Name** | Virtual machine name. | +| **Status** | Indicates whether the virtual machine is stopped or running. Dynatrace OneAgent can only be installed on virtual machines that are running. If the virtual machine is stopped, installing the Dynatrace OneAgent will be disabled. | +| **OneAgent status** | Whether the Dynatrace OneAgent is running on the virtual machine. | +| **OneAgent version** | The Dynatrace OneAgent version number. | +| **Auto-update** | Whether auto-update has been enabled for the OneAgent. | +| **Log monitoring** | Whether log monitoring option was selected when OneAgent was installed. | +| **Monitoring mode** | Whether the Dynatrace OneAgent is monitoring hosts in [full-stack monitoring mode or infrastructure monitoring mode](https://www.dynatrace.com/support/help/how-to-use-dynatrace/hosts/basic-concepts/get-started-with-infrastructure-monitoring). | > [!NOTE] > If a virtual machine shows that an OneAgent is installed, but the option Uninstall extension is disabled, then the agent was configured through a different Dynatrace resource in the same Azure subscription. 
To make any changes, please go to the other Dynatrace resource in the Azure subscription. ## Monitor App Services using Dynatrace OneAgent -You can install Dynatrace OneAgent on App Services as an extension. Select **App Services** in the Resource menu. In the working pane, you see This screen a list of all App Services in the subscription. +You can install Dynatrace OneAgent on an App Service as an extension. Select an App Service in the Resource menu. In the working pane, you see a list of any App Service in the subscription. -For each app service, the following information is displayed: +For each App Service, the following information is displayed: -| Column header | Definition of column | +| Column | Description | |||-| **Resource name** | App service name | -| **Resource status** | Indicates whether the App service is running or stopped. Dynatrace OneAgent can only be installed on app services that are running. | -| **App Service plan** | The plan configured for the app service | -| **OneAgent version** | The Dynatrace OneAgent version | -| **OneAgent status** | status of the agent | +| **Name** | App Service name. | +| **Status** | Indicates whether the App Service is running or stopped. Dynatrace OneAgent can only be installed on an App Service that is running. | +| **App Service plan** | The plan configured for the App Service. | +| **OneAgent version** | The Dynatrace OneAgent version. | +| **OneAgent status** | status of the agent. | -To install the Dynatrace OneAgent, select the app service and select **Install Extension.** The application settings for the selected app service are updated and the app service is restarted to complete the configuration of the Dynatrace OneAgent. +To install the Dynatrace OneAgent, select the App Service and select **Install Extension.** The application settings for the selected App Service are updated and the App Service is restarted to complete the configuration of the Dynatrace OneAgent. > [!NOTE] >App Service extensions are currently supported only for App Services that are running on Windows OS. App Services using the Linux OS are not shown in the list. |
partner-solutions | Dynatrace Link To Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md | Title: Linking to an existing Azure Native Dynatrace Service resource description: This article describes how to use the Azure portal to link to an instance of Dynatrace.+ -- Previously updated : 10/12/2022 Last updated : 02/04/2023 |
partner-solutions | Dynatrace Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md | Title: Azure Native Dynatrace Service overview description: Learn about using the Dynatrace Cloud-Native Observability Platform in the Azure Marketplace.+ -- Previously updated : 10/12/2022 Last updated : 02/04/2023 |
partner-solutions | Dynatrace Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md | Title: Troubleshooting Azure Native Dynatrace Service description: This article provides information about troubleshooting Dynatrace for Azure - - - Previously updated : 01/06/2023 Last updated : 02/04/2023 This document contains information about troubleshooting your solutions that use - **App not showing in Single sign-on settings page** - First, search for application ID. If no result is shown, check the SAML settings of the app. The grid only shows apps with correct SAML settings. +### Metrics checkbox disabled ++- To collect metrics you must have owner permission on the subscription. If you are a contributor, refer to the contributor guide mentioned in [Configure metrics and logs](dynatrace-create.md#configure-metrics-and-logs). + ## Next steps - Learn about [managing your instance](dynatrace-how-to-manage.md) of Dynatrace. |
postgresql | Concepts Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md | SELECT cron.schedule_in_database('VACUUM','0 10 * * * ','VACUUM','testcron',null);
```
 > [!NOTE]-> pg_cron extension is preloaded in shared_preload_libraries for every Azure Database for PostgreSQL -Flexible Server inside postgres database to provide you with ability to schedule jobs to run in other databases within your PostgreSQL DB instance without compromising security. However, for security reasons, you still have to [allow list](#how-to-use-postgresql-extensions) pg_cron extension and install it using [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command.
+> pg_cron extension is preloaded in shared_preload_libraries for every Azure Database for PostgreSQL -Flexible Server inside postgres database to provide you with ability to schedule jobs to run in other databases within your PostgreSQL DB instance without compromising security. However, for security reasons, you still have to [allow list](#how-to-use-postgresql-extensions) pg_cron extension and install it using [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. Starting with pg_cron version 1.4, you can use the cron.schedule_in_database and cron.alter_job functions to schedule your job in a specific database and update an existing schedule respectively.

 To delete old data on Saturday at 3:30am (GMT) on database DBName
```
SELECT cron.schedule_in_database('JobName', '30 3 * * 6', $$DELETE FROM events WHERE event_time < now() - interval '1 week'$$,'DBName');
```+>[!NOTE]
+> The cron.schedule_in_database function accepts a user name as an optional parameter. Setting the user name to a non-null value requires PostgreSQL superuser privilege and isn't supported in Azure Database for PostgreSQL - Flexible Server. The preceding examples omit the optional user name parameter or set it to null, so the job runs in the context of the user scheduling the job, who should have azure_pg_admin role privileges.
+
 To update or change the database name for the existing schedule
``` |
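The entry above mentions cron.alter_job for updating an existing schedule, but its example is cut off in the excerpt. The following is only a hedged SQL sketch of how such a call can look with pg_cron 1.4 or later; the job ID of 1 is a placeholder, so look up the real ID in the cron.job metadata table first.

```sql
-- Placeholder job ID: find the actual ID first, for example:
-- SELECT jobid, jobname FROM cron.job;

-- Point the existing job at the DBName database.
SELECT cron.alter_job(job_id := 1, database := 'DBName');
```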
postgresql | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md | One advantage of running your workload in Azure is global reach. The flexible se | | | | | | | Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Australia Southeast | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |-| Brazil South | :heavy_check_mark: (v3 only) | :heavy_check_mark: | :heavy_check_mark: | :x: | +| Brazil South | :heavy_check_mark: (v3 only) | :x: $ | :heavy_check_mark: | :x: | | Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | One advantage of running your workload in Azure is global reach. The flexible se | Sweden Central | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: | | Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland West | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |-| UAE North | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: | +| UAE North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | US Gov Arizona | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: | | US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
private-5g-core | Enable Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-azure-active-directory.md | In this how-to guide, you'll carry out the steps you need to complete after depl - You must have completed the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) and [Collect the required information for a site](collect-required-information-for-a-site.md). - You must have deployed a site with Azure Active Directory set as the authentication type. - Identify the IP address for accessing the local monitoring tools that you set up in [Management network](complete-private-mobile-network-prerequisites.md#management-network).-- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have permission to manage applications in Azure AD. [Azure AD built-in roles](/azure/active-directory/roles/permissions-reference.md#application-developer) that have the required permissions include, for example, Application administrator, Application developer, and Cloud application administrator.+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have permission to manage applications in Azure AD. [Azure AD built-in roles](/azure/active-directory/roles/permissions-reference) that have the required permissions include, for example, Application administrator, Application developer, and Cloud application administrator. - Ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access). ## Configure domain system name (DNS) for local monitoring IP Follow this step if you need to update your existing Kubernetes Secret Objects; If you haven't already done so, you should now design the policy control configuration for your private mobile network. This allows you to customize how your packet core instances apply quality of service (QoS) characteristics to traffic. You can also block or limit certain flows. - [Learn more about designing the policy control configuration for your private mobile network](policy-control.md)+ |
private-link | Create Private Link Service Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-powershell.md | New-AzLoadBalancer @loadbalancer ``` +## Disable network policy ++Before a private link service can be created in the virtual network, the setting `privateLinkServiceNetworkPolicies` must be disabled. ++* Disable the network policy with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-AzVirtualNetwork). ++```azurepowershell-interactive +## Place the subnet name into a variable. ## +$subnet = 'mySubnet' ++## Place the virtual network configuration into a variable. ## +$net = @{ + Name = 'myVNet' + ResourceGroupName = 'CreatePrivLinkService-rg' +} +$vnet = Get-AzVirtualNetwork @net ++## Set the policy as disabled on the virtual network. ## +($vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq $subnet}).privateLinkServiceNetworkPolicies = "Disabled" ++## Save the configuration changes to the virtual network. ## +$vnet | Set-AzVirtualNetwork +``` + ## Create a private link service In this section, create a private link service that uses the Standard Azure Load Balancer created in the previous step. In this section, create a private link service that uses the Standard Azure Load * Create the private link service with [New-AzPrivateLinkService](/powershell/module/az.network/new-azprivatelinkservice). -```azurepowershell +```azurepowershell-interactive ## Place the virtual network into a variable. ## $vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'CreatePrivLinkService-rg' $vnetpe = New-AzVirtualNetwork @net * Use [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint) to create the endpoint. -- ```azurepowershell-interactive ## Place the private link service configuration into variable. ## $par1 = @{ |
private-link | Disable Private Link Service Network Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-link-service-network-policy.md | Title: 'Disable network policies for Azure Private Link service source IP address ' description: Learn how to disable network policies for Azure private Link -+ Previously updated : 09/16/2019 Last updated : 02/02/2023 ms.devlang: azurecli ms.devlang: azurecli In order to choose a source IP address for your Private Link service, an explicit disable setting `privateLinkServiceNetworkPolicies` is required on the subnet. This setting is only applicable for the specific private IP address you chose as the source IP of the Private Link service. For other resources in the subnet, access is controlled based on Network Security Groups (NSG) security rules definition. -When using the portal to create a Private Link service, this setting is automatically disabled as part of the create process. Deployments using any Azure client (PowerShell, CLI or templates), require an additional step to change this property. You can disable the policy using the cloud shell from the Azure portal, or local installations of Azure PowerShell, Azure CLI, or use Azure Resource Manager templates. +When using the portal to create a Private Link service, this setting is automatically disabled as part of the create process. Deployments using any Azure client (PowerShell, CLI or templates), require an extra step to change this property. -Follow the steps below to disable private link service network policies for a virtual network named *myVirtualNetwork* with a *default* subnet hosted in a resource group named *myResourceGroup*. +You can use the following to enable or disable the setting: -## Using Azure PowerShell -This section describes how to disable subnet private endpoint policies using Azure PowerShell. -In the code, replace "default" with the name of the virtual subnet. +* Azure PowerShell -```azurepowershell -$virtualSubnetName = "default" -$virtualNetwork= Get-AzVirtualNetwork ` - -Name "myVirtualNetwork" ` - -ResourceGroupName "myResourceGroup" - -($virtualNetwork | Select -ExpandProperty subnets | Where-Object {$_.Name -eq $virtualSubnetName} ).privateLinkServiceNetworkPolicies = "Disabled" +* Azure CLI ++* Azure Resource Manager templates -$virtualNetwork | Set-AzVirtualNetwork +The following examples describe how to enable and disable `privateLinkServiceNetworkPolicies` for a virtual network named **myVNet** with a **default** subnet of **10.1.0.0/24** hosted in a resource group named **myResourceGroup**. ++# [**PowerShell**](#tab/private-link-network-policy-powershell) ++This section describes how to disable subnet private endpoint policies using Azure PowerShell. In the following code, replace "default" with the name of your virtual subnet. 
++```azurepowershell +$subnet = 'default' ++$net = @{ + Name = 'myVNet' + ResourceGroupName = 'myResourceGroup' +} +$vnet = Get-AzVirtualNetwork @net ++($vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq $subnet}).privateLinkServiceNetworkPolicies = "Disabled" ++$vnet | Set-AzVirtualNetwork ```-## Using Azure CLI ++# [**CLI**](#tab/private-link-network-policy-cli) + This section describes how to disable subnet private endpoint policies using Azure CLI.+ ```azurecli az network vnet subnet update \ - --name default \ - --resource-group myResourceGroup \ - --vnet-name myVirtualNetwork \ - --disable-private-link-service-network-policies true + --name default \ + --resource-group myResourceGroup \ + --vnet-name myVNet \ + --disable-private-link-service-network-policies true ```-## Using a template ++# [**JSON**](#tab/private-link-network-policy-json) + This section describes how to disable subnet private endpoint policies using Azure Resource Manager Template. ```json { - "name": "myVirtualNetwork", + "name": "myVNet", "type": "Microsoft.Network/virtualNetworks", "apiVersion": "2019-04-01", "location": "WestUS", "properties": { "addressSpace": { "addressPrefixes": [ - "10.0.0.0/16" + "10.1.0.0/16" ] }, "subnets": [ { "name": "default", "properties": { - "addressPrefix": "10.0.0.0/24", + "addressPrefix": "10.1.0.0/24", "privateLinkServiceNetworkPolicies": "Disabled" } } This section describes how to disable subnet private endpoint policies using Azu } ```+++ ## Next steps+ - Learn more about [Azure Private Endpoint](private-endpoint-overview.md) |
purview | How To Data Share Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-share-faq.md | Yes, you can use [REST API](/rest/api/purview/) or [.NET SDK](/dotnet/api/overvi ## How can I share data from containers? -To share data from a container, select all files and folders within a container. +When adding assets, you can select the container(s) that you would like to share. ## Can I share data in-place with storage account in a different Azure region? |
search | Search Lucene Query Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-lucene-query-architecture.md | POST /indexes/hotels/docs/search?api-version=2020-06-30 For this request, the search engine does the following operations: -1. Filters out documents where the price is at least $60 and less than $300. +1. Finds documents where the price is at least $60 and less than $300. 2. Executes the query. In this example, the search query consists of phrases and terms: `"Spacious, air-condition* +\"Ocean view\""` (users typically don't enter punctuation, but including it in the example allows us to explain how analyzers handle it). This article explored full text search in the context of Azure Cognitive Search. [1]: ./media/search-lucene-query-architecture/architecture-diagram2.png [2]: ./media/search-lucene-query-architecture/azSearch-queryparsing-should2.png [3]: ./media/search-lucene-query-architecture/azSearch-queryparsing-must2.png-[4]: ./media/search-lucene-query-architecture/azSearch-queryparsing-spacious2.png +[4]: ./media/search-lucene-query-architecture/azSearch-queryparsing-spacious2.png |
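For readers who want to reproduce the example end to end, the following request body is a hedged sketch of how the filter and the full Lucene query described above might be combined in a single call to the hotels index. The `hotelName`, `description`, and `price` field names are assumptions about the index schema, not values taken from the article.

```json
{
  "queryType": "full",
  "search": "Spacious, air-condition* +\"Ocean view\"",
  "searchFields": "description",
  "filter": "price ge 60 and price lt 300",
  "select": "hotelName, description, price"
}
```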
security | End To End | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/end-to-end.md | The security services map organizes services by the resources they protect (colu - Detect threats – Services that identify suspicious activities and facilitate mitigating the threat. - Investigate and respond – Services that pull logging data so you can assess a suspicious activity and respond. -The diagram includes the Microsoft cloud security benchmark, a collection of high-impact security recommendations you can use to help secure the services you use in Azure. - :::image type="content" source="media/end-to-end/security-diagram.svg" alt-text="Diagram showing end-to-end security services in Azure." border="false"::: ## Security controls and baselines The [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) | [Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS products, which include virtual machines, virtual networks, application gateways, and load balancers. | | [Azure Policy](../../governance/policy/overview.md) | Helps to enforce organizational standards and to assess compliance at scale. Azure Policy uses activity logs, which are automatically enabled to include event source, date, user, timestamp, source addresses, destination addresses, and other useful elements. | | **Data & Application** | |-| [Microsoft Defender for container registries](../../security-center/defender-for-container-registries-introduction.md) | Includes a vulnerability scanner to scan the images in your Azure Resource Manager-based Azure Container Registry registries and provide deeper visibility into your images' vulnerabilities. | -| [Microsoft Defender for Kubernetes](../../security-center/defender-for-kubernetes-introduction.md) | Provides cluster-level threat protection by monitoring your AKS-managed services through the logs retrieved by Azure Kubernetes Service (AKS). | +| [Microsoft Defender for Containers](../../defender-for-cloud/defender-for-containers-introduction.md) | A cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. | | [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) | A cloud access security broker (CASB) that operates on multiple clouds. It provides rich visibility, control over data travel, and sophisticated analytics to identify and combat cyberthreats across all your cloud services. | ## Investigate and respond |
sentinel | Connect Microsoft 365 Defender | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-365-defender.md | Title: Connect Microsoft 365 Defender data to Microsoft Sentinel| Microsoft Docs description: Learn how to ingest incidents, alerts, and raw event data from Microsoft 365 Defender into Microsoft Sentinel. - Previously updated : 03/23/2022 -+ Last updated : 02/01/2023 # Connect data from Microsoft 365 Defender to Microsoft Sentinel For more information about incident integration and advanced hunting event colle > [!IMPORTANT] >-> The Microsoft 365 Defender connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> The Microsoft 365 Defender connector is now generally available! ## Prerequisites For more information about incident integration and advanced hunting event colle - Your user must have read and write permissions on your Microsoft Sentinel workspace. +- To make any changes to the connector settings, your user must be a member of the same Azure Active Directory tenant with which your Microsoft Sentinel workspace is associated. + ### Prerequisites for Active Directory sync via MDI - Your tenant must be onboarded to Microsoft Defender for Identity. For more information about incident integration and advanced hunting event colle ## Connect to Microsoft 365 Defender -In Microsoft Sentinel, select **Data connectors**, select **Microsoft 365 Defender (Preview)** from the gallery and select **Open connector page**. +In Microsoft Sentinel, select **Data connectors**, select **Microsoft 365 Defender** from the gallery and select **Open connector page**. The **Configuration** section has three parts: |
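After you enable the connector, one quick sanity check is to query the workspace for recently ingested incidents. The KQL below is a hedged sketch, assuming the connector writes incidents to the standard `SecurityIncident` table with `ProviderName` set to *Microsoft 365 Defender*:

```kusto
SecurityIncident
| where TimeGenerated > ago(7d)
| where ProviderName == "Microsoft 365 Defender"
| summarize Incidents = count() by Severity, Status
```

If the query returns rows, incident integration is flowing; if not, recheck the connector configuration and your permissions.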
sentinel | Microsoft 365 Defender Cloud Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-cloud-support.md | Title: Support for Microsoft 365 Defender connector data types in Microsoft Sentinel for different clouds (GCC environments) description: This article describes support for different Microsoft 365 Defender connector data types in Microsoft Sentinel across different clouds, including Commercial, GCC, GCC-High, and DoD. - Previously updated : 11/14/2022 + Last updated : 02/01/2023 # Support for Microsoft 365 Defender connector data types in different clouds The type of cloud your environment uses affects Microsoft Sentinel's ability to Read more about [data type support for different clouds in Microsoft Sentinel](data-type-cloud-support.md). -## Microsoft Defender for Endpoint +## Connector data ++### Incidents ++| Data type | Commercial / GCC<br>(Azure Commercial) | GCC-High / DoD<br>(Azure Government) | +| -- | - | -- | +| **Incidents** | Generally available | Generally available | ++### Alerts ++#### From Microsoft 365 Defender ++| Data type | Commercial / GCC<br>(Azure Commercial) | GCC-High / DoD<br>(Azure Government) | +| -- | - | -- | +| **Microsoft 365 Defender alerts: *SecurityAlert*** | Generally available | Public preview | ++#### From standalone component connectors ++| Data type | Commercial | GCC | GCC-High / DoD | +| -- | - | | - | +| **Microsoft Defender for Endpoint: *SecurityAlert (MDATP)*** | Generally available | Generally available | Generally available | +| **Microsoft Defender for Office 365: *SecurityAlert (OATP)*** | Public preview | Public preview | Public preview | +| **Microsoft Defender for Identity: *SecurityAlert (AATP)*** | Generally available | Unsupported | Unsupported | +| **Microsoft Defender for Cloud Apps: *SecurityAlert (MCAS)*** | Generally available | Generally available | Unsupported | +| **Microsoft Defender for Cloud Apps: *McasShadowItReporting*** | Generally available | Generally available | Unsupported | ++## Raw event data ++### Microsoft Defender for Endpoint -|Data type |Commercial |GCC |GCC-High |DoD | -|||||| -|DeviceInfo |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> | -|DeviceNetworkInfo |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> | -|DeviceProcessEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</ul></li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> | -|DeviceNetworkEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public 
Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> | -|DeviceFileEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> | -|DeviceRegistryEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> | -|DeviceLogonEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> | -|DeviceImageLoadEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> | -|DeviceEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> | -|DeviceFileCertificateInfo |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> | +| Data type | Commercial / GCC<br>(Azure Commercial) | GCC-High / DoD<br>(Azure Government) | +| | - | -- | +| **DeviceInfo** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | +| **DeviceNetworkInfo** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | +| **DeviceProcessEvents** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | +| **DeviceNetworkEvents** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | +| **DeviceFileEvents** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | +| **DeviceRegistryEvents** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | +| **DeviceLogonEvents** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | +| **DeviceImageLoadEvents** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | +| 
**DeviceEvents** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | +| **DeviceFileCertificateInfo** | Generally available | Microsoft 365 Defender: Generally available<br>Microsoft Sentinel: Public preview | -## Microsoft Defender for Identity +### Microsoft Defender for Identity -|Data type |Commercial |GCC |GCC-High |DoD | -|||||| -|IdentityDirectoryEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |Unsupported |Unsupported |Unsupported | -IdentityLogonEvents|<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |Unsupported |Unsupported |Unsupported | -IdentityQueryEvents|<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |Unsupported |Unsupported |Unsupported | +| Data type | Commercial / GCC<br>(Azure Commercial) | GCC-High / DoD<br>(Azure Government) | +| | - | -- | +| **IdentityDirectoryEvents** | Generally available | Unsupported | +| **IdentityLogonEvents** | Generally available | Unsupported | +| **IdentityQueryEvents** | Generally available | Unsupported | -## Microsoft Defender for Cloud Apps +### Microsoft Defender for Cloud Apps -|Data type |Commercial |GCC |GCC-High |DoD | -|||||| -|CloudAppEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |Unsupported |Unsupported |Unsupported | +| Data type | Commercial / GCC<br>(Azure Commercial) | GCC-High / DoD<br>(Azure Government) | +| | - | -- | +| **CloudAppEvents** | Generally available | Unsupported | -## Microsoft 365 Defender incidents +### Microsoft Defender for Office 365 -|Data type |Commercial |GCC |GCC-High |DoD | -|||||| -|SecurityIncident |Microsoft Sentinel: Public Preview |Microsoft Sentinel: Public Preview |Microsoft Sentinel: Public Preview |Microsoft Sentinel: Public Preview | +| Data type | Commercial / GCC<br>(Azure Commercial) | GCC-High / DoD<br>(Azure Government) | +| | - | -- | +| **EmailEvents** | Generally available | Public preview | +| **EmailAttachmentInfo** | Generally available | Public preview | +| **EmailUrlInfo** | Generally available | Public preview | +| **EmailPostDeliveryEvents** | Generally available | Public preview | +| **UrlClickEvents** | Generally available | Public preview | -## Alerts +### Alerts -|Connector/Data type |Commercial |GCC |GCC-High |DoD | -|||||| -|Microsoft 365 Defender Alerts: SecurityAlert |Public Preview |Public Preview |Public Preview |Public Preview | -|Microsoft Defender for Endpoint Alerts (standalone connector): SecurityAlert (MDATP) |Public Preview |Public Preview |Public Preview |Public Preview | -| Microsoft Defender for Office 365 Alerts (standalone connector): SecurityAlert (OATP) |Public Preview |Public Preview |Public Preview |Public Preview | -Microsoft Defender for Identity Alerts (standalone connector): SecurityAlert (AATP) |Public Preview |Unsupported |Unsupported |Unsupported | -Microsoft Defender for Cloud Apps Alerts (standalone connector): SecurityAlert (MCAS), |Public Preview |Unsupported |Unsupported |Unsupported | -|Microsoft Defender for Cloud Apps Alerts (standalone connector): McasShadowItReporting |Public Preview |Unsupported |Unsupported |Unsupported | +| Data type | Commercial / GCC<br>(Azure Commercial) | GCC-High / DoD<br>(Azure Government) | +| -- | - | -- | +| **AlertInfo** | Generally available | Public preview | +| **AlertEvidence** | Generally available | Public preview | -## Azure Active Directory Identity Protection -|Data 
type |Commercial |GCC |GCC-High |DoD | -|||||| -|SecurityAlert (IPC) |Public Preview/GA |Supported |Supported |Supported | -|AlertEvidence |Public Preview |Unsupported |Unsupported |Unsupported | ## Next steps In this article, you learned which Microsoft 365 Defender connector data types are supported in Microsoft Sentinel for different cloud environments. - Read more about [GCC environments in Microsoft Sentinel](data-type-cloud-support.md).-- Learn how to [get visibility into your data, and potential threats](get-visibility.md).+- Learn about [Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md). +- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md). - [Use workbooks](monitor-your-data.md) to monitor your data. |
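To see which of the raw event tables listed above are actually populated in a particular workspace and cloud, a fuzzy union over the table names is a convenient check. This query is a hedged sketch rather than part of the article; trim the table list to the data types your environment supports.

```kusto
union withsource=SourceTable isfuzzy=true
    DeviceInfo, DeviceProcessEvents, DeviceNetworkEvents, DeviceFileEvents,
    EmailEvents, EmailUrlInfo, CloudAppEvents, AlertInfo, AlertEvidence
| where TimeGenerated > ago(1d)
| summarize Events = count() by SourceTable
| order by Events desc
```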
sentinel | Microsoft 365 Defender Sentinel Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md | Title: Microsoft 365 Defender integration with Microsoft Sentinel | Microsoft Docs description: Learn how using Microsoft 365 Defender together with Microsoft Sentinel lets you use Microsoft Sentinel as your universal incidents queue while seamlessly applying Microsoft 365 Defender's strengths to help investigate Microsoft 365 security incidents. Also, learn how to ingest Defender components' advanced hunting data into Microsoft Sentinel. - Previously updated : 03/23/2022 -+ Last updated : 02/01/2023 # Microsoft 365 Defender integration with Microsoft Sentinel Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/micros This integration gives Microsoft 365 security incidents the visibility to be managed from within Microsoft Sentinel, as part of the primary incident queue across the entire organization, so you can see – and correlate – Microsoft 365 incidents together with those from all of your other cloud and on-premises systems. At the same time, it allows you to take advantage of the unique strengths and capabilities of Microsoft 365 Defender for in-depth investigations and a Microsoft 365-specific experience across the Microsoft 365 ecosystem. Microsoft 365 Defender enriches and groups alerts from multiple Microsoft 365 products, both reducing the size of the SOC's incident queue and shortening the time to resolve. The component services that are part of the Microsoft 365 Defender stack are: -- **Microsoft Defender for Endpoint** (formerly Microsoft Defender ATP)-- **Microsoft Defender for Identity** (formerly Azure ATP)-- **Microsoft Defender for Office 365** (formerly Office 365 ATP)-- **Microsoft Defender for Cloud Apps** (formerly Microsoft Cloud App Security)+- **Microsoft Defender for Endpoint (MDE)** +- **Microsoft Defender for Identity (MDI)** +- **Microsoft Defender for Office 365 (MDO)** +- **Microsoft Defender for Cloud Apps (MDA)** Other services whose alerts are collected by Microsoft 365 Defender include: Other services whose alerts are collected by Microsoft 365 Defender include: In addition to collecting alerts from these components and other services, Microsoft 365 Defender generates alerts of its own. It creates incidents from all of these alerts and sends them to Microsoft Sentinel. > [!IMPORTANT]-> The Microsoft 365 Defender connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> The Microsoft 365 Defender connector is now generally available! ## Common use cases and scenarios Once the Microsoft 365 Defender integration is connected, the connectors for all - To avoid creating duplicate incidents for the same alerts, we recommend that customers turn off all **Microsoft incident creation rules** for Microsoft 365 Defender-integrated products (Defender for Endpoint, Defender for Identity, Defender for Office 365, Defender for Cloud Apps, and Azure Active Directory Identity Protection) when connecting Microsoft 365 Defender. This can be done by disabling incident creation in the connector page.
Keep in mind that if you do this, any filters that were applied by the incident creation rules will not be applied to Microsoft 365 Defender incident integration. - > [!NOTE] - > All Microsoft Defender for Cloud Apps alert types are now being onboarded to Microsoft 365 Defender. - ## Working with Microsoft 365 Defender incidents in Microsoft Sentinel and bi-directional sync Microsoft 365 Defender incidents will appear in the Microsoft Sentinel incidents queue with the product name **Microsoft 365 Defender**, and with similar details and functionality to any other Sentinel incidents. Each incident contains a link back to the parallel incident in the Microsoft 365 Defender portal. The Microsoft 365 Defender connector also lets you stream **advanced hunting** e In this document, you learned how to benefit from using Microsoft 365 Defender together with Microsoft Sentinel, using the Microsoft 365 Defender connector. - Get instructions for [enabling the Microsoft 365 Defender connector](connect-microsoft-365-defender.md).-- Create [custom alerts](detect-threats-custom.md) and [investigate incidents](investigate-cases.md).+- Check [availability of different Microsoft 365 Defender data types](microsoft-365-defender-cloud-support.md) in the different Microsoft 365 and Azure clouds. +- Create [custom alerts](detect-threats-custom.md) and [investigate incidents](investigate-incidents.md). |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | See these [important announcements](#announcements) about recent changes to feat ## February 2023 +- [Microsoft 365 Defender data connector is now generally available](#microsoft-365-defender-data-connector-is-now-generally-available) - [Advanced scheduling for analytics rules (Preview)](#advanced-scheduling-for-analytics-rules-preview) +### Microsoft 365 Defender data connector is now generally available ++Microsoft 365 Defender incidents, alerts, and raw event data can be ingested into Microsoft Sentinel using this connector. It also enables the bi-directional synchronization of incidents between Microsoft 365 Defender and Microsoft Sentinel. This integration allows you to manage all of your incidents in Microsoft Sentinel, while taking advantage of Microsoft 365 Defender's specialized tools and capabilities to investigate those incidents that originated in Microsoft 365. ++- Learn more about [Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md). +- Learn how to [connect Microsoft 365 Defender to Microsoft Sentinel](connect-microsoft-365-defender.md). + ### Advanced scheduling for analytics rules (Preview) To give you more flexibility in scheduling your analytics rule execution times and to help you avoid potential conflicts, Microsoft Sentinel now allows you to determine when newly created analytics rules will run for the first time. The default behavior is as it has been: for them to run immediately upon creation. |
service-bus-messaging | Service Bus Dotnet Multi Tier App Using Service Bus Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-multi-tier-app-using-service-bus-queues.md | -You will learn the following: +You'll learn the following: * How to enable your computer for Azure development with a single download and install. You will learn the following: [!INCLUDE [create-account-note](../../includes/create-account-note.md)] -In this tutorial, you'll build and run the multi-tier application in an Azure cloud service. The front end is an ASP.NET MVC web role and the back end is a worker-role that uses a Service Bus queue. You can create the same multi-tier application with the front end as a web project, that is deployed to an Azure website instead of a cloud service. You can also try out the [.NET on-premises/cloud hybrid application](../azure-relay/service-bus-dotnet-hybrid-app-using-service-bus-relay.md) tutorial. +In this tutorial, you'll build and run the multi-tier application in an Azure cloud service. The front end is an ASP.NET MVC web role and the back end is a worker-role that uses a Service Bus queue. You can create the same multi-tier application with the front end as a web project that is deployed to an Azure website instead of a cloud service. You can also try out the [.NET on-premises/cloud hybrid application](../azure-relay/service-bus-dotnet-hybrid-app-using-service-bus-relay.md) tutorial. The following screenshot shows the completed application. the communication between the tiers. Using Service Bus messaging between the web and middle tiers decouples the two components. In contrast to direct messaging (that is, TCP or HTTP),-the web tier does not connect to the middle tier directly; instead it +the web tier doesn't connect to the middle tier directly; instead it pushes units of work, as messages, into Service Bus, which reliably retains them until the middle tier is ready to consume and process them. configured with filter rules that restrict the set of messages passed to the subscription queue to those that match the filter. The following example uses Service Bus queues. This communication mechanism has several advantages over direct messaging: -* **Temporal decoupling.** With the asynchronous messaging pattern, - producers and consumers need not be online at the same time. Service - Bus reliably stores messages until the consuming party is ready to - receive them. This enables the components of the distributed - application to be disconnected, either voluntarily, for example, for - maintenance, or due to a component crash, without impacting the - system as a whole. Furthermore, the consuming application might only - need to come online during certain times of the day. +* **Temporal decoupling.** When you use the asynchronous messaging pattern, producers and consumers don't need to be online at the same time. Service Bus reliably stores messages until the consuming party is ready to receive them. This enables the components of the distributed application to be disconnected, either voluntarily, for example, for maintenance, or due to a component crash, without impacting the system as a whole. Furthermore, the consuming application might only need to come online during certain times of the day. * **Load leveling.** In many applications, system load varies over time, while the processing time required for each unit of work is typically constant. 
Intermediating message producers and consumers messaging: added to read from the queue. Each message is processed by only one of the worker processes. Furthermore, this pull-based load balancing enables optimal use of the worker machines even if the- worker machines differ in terms of processing power, as they will + worker machines differ in terms of processing power, as they'll pull messages at their own maximum rate. This pattern is often termed the *competing consumer* pattern. The first step is to create a *namespace*, and obtain a [Shared Access Signature [!INCLUDE [service-bus-create-namespace-portal](./includes/service-bus-create-namespace-portal.md)] + ## Create a web role In this section, you build the front end of your application. First, you queue and displays status information about the queue. ### Create the project 1. Using administrator privileges, start Visual- Studio: right-click the **Visual Studio** program icon, and then click **Run as administrator**. The Azure Compute Emulator, + Studio: right-click the **Visual Studio** program icon, and then select **Run as administrator**. The Azure Compute Emulator, discussed later in this article, requires that Visual Studio be started with administrator privileges. - In Visual Studio, on the **File** menu, click **New**, and then - click **Project**. + In Visual Studio, on the **File** menu, select **New**, and then + select **Project**. 2. On the **Templates** page, follow these steps: 1. Select **C#** for programming language. 1. Select **Cloud** for the project type. queue and displays status information about the queue. 1. On the **Roles** page, double-click **ASP.NET Web Role**, and select **OK**. :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-11.png" alt-text="Select Web Role":::-4. Hover over **WebRole1** under **Azure Cloud Service solution**, click - the pencil icon, and rename the web role to **FrontendWebRole**. Then click **OK**. (Make sure you enter "Frontend" with a lower-case 'e,' not "FrontEnd".) +4. Hover over **WebRole1** under **Azure Cloud Service solution**, select + the pencil icon, and rename the web role to **FrontendWebRole**. Then select **OK**. (Make sure you enter "Frontend" with a lower-case 'e,' not "FrontEnd".) :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-02.png" alt-text="Screenshot of the New Microsoft Azure Cloud Service dialog box with the solution renamed to FrontendWebRole."::: 5. In the **Create a new ASP.NET Web Application** dialog box, select **MVC**, and then select **Create**. :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-12.png" alt-text="Screenshot of the New ASP.NET Project dialog box with MVC highlighted and outlined in red and the Change Authentication option outlined in red.":::-8. In **Solution Explorer**, in the **FrontendWebRole** project, right-click **References**, then click +8. In **Solution Explorer**, in the **FrontendWebRole** project, right-click **References**, then select **Manage NuGet Packages**.-9. Click the **Browse** tab, then search for **Azure.Messaging.ServiceBus**. Select the **Azure.Messaging.ServiceBus** package, select **Install**, and accept the terms of use. +9. Select the **Browse** tab, then search for **Azure.Messaging.ServiceBus**. 
Select the **Azure.Messaging.ServiceBus** package, select **Install**, and accept the terms of use. :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-13.png" alt-text="Screenshot of the Manage NuGet Packages dialog box with the Azure.Messaging.ServiceBus highlighted and the Install option outlined in red."::: Note that the required client assemblies are now referenced and some new code files have been added. 10. Follow the same steps to add the `Azure.Identity` NuGet package to the project. -10. In **Solution Explorer**, expand **FronendWebRole**, right-click **Models** and click **Add**, - then click **Class**. In the **Name** box, type the name - **OnlineOrder.cs**. Then click **Add**. +10. In **Solution Explorer**, expand **FrontendWebRole**, right-click **Models** and select **Add**, + then select **Class**. In the **Name** box, type the name + **OnlineOrder.cs**. Then select **Add**. ### Write the code for your web role In this section, you create the various pages that your application displays. In this section, you create the various pages that your application displays. } } ```-4. From the **Build** menu, click **Build Solution** to test the accuracy of your work so far. +4. From the **Build** menu, select **Build Solution** to test the accuracy of your work so far. 5. Now, create the view for the `Submit()` method you created earlier. Right-click within the `Submit()` method (the overload of `Submit()` that takes no parameters) in the **HomeController.cs** file, and then choose **Add View**. 6. In the **Add New Scaffolded Item** dialog box, select **Add**. In this section, you create the various pages that your application displays. the queue. In **Solution Explorer**, double-click the **Views\Home\Submit.cshtml** file to open it in the Visual Studio editor. Add the following line after `<h2>Submit</h2>`. For now,- the `ViewBag.MessageCount` is empty. You will populate it later. + the `ViewBag.MessageCount` is empty. You'll populate it later. ```html <p>Current number of orders in queue waiting to be processed: @ViewBag.MessageCount</p> Global.aspx.cs. Finally, update the submission code you created earlier in HomeController.cs to actually submit items to a Service Bus queue. -1. In **Solution Explorer**, right-click **FrontendWebRole** (right-click the project, not the role). Click **Add**, and then click **Class**. -2. Name the class **QueueConnector.cs**. Click **Add** to create the class. +1. In **Solution Explorer**, right-click **FrontendWebRole** (right-click the project, not the role). Select **Add**, and then select **Class**. +2. Name the class **QueueConnector.cs**. Select **Add** to create the class. 3. Now, add code that encapsulates the connection information and initializes the connection to a Service Bus queue. Replace the entire contents of QueueConnector.cs with the following code, and enter values for `your Service Bus namespace` (your namespace name) and `yourKey`, which is the **primary key** you previously obtained from the Azure portal. ```csharp
This example uses the **Worker Role with Service Bus Queue** Visual Studio project template. You already obtained the required credentials from the portal. 1. Make sure you have connected Visual Studio to your Azure account. 2. In Visual Studio, in **Solution Explorer** right-click the **Roles** folder under the **MultiTierApp** project.-3. Click **Add**, and then click **New Worker Role Project**. The **Add New Role Project** dialog box appears. +3. Select **Add**, and then select **New Worker Role Project**. The **Add New Role Project** dialog box appears. :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/SBNewWorkerRole.png" alt-text="Screenshot of the Solution Explorer pane with the New Worker Role Project option and Add option highlighted."::: 1. In the **Add New Role Project** dialog box, select **Worker Role**. Don't select **Worker Role with Service Bus Queue** as it generates code that uses the legacy Service Bus SDK. :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/SBWorkerRole1.png" alt-text="Screenshot of the Ad New Role Project dialog box with the Worker Role with Service Bus Queue option highlighted and outlined in red.":::-5. In the **Name** box, name the project **OrderProcessingRole**. Then click **Add**. +5. In the **Name** box, name the project **OrderProcessingRole**. Then select **Add**. 1. In **Solution Explorer**, right-click **OrderProcessingRole** project, and select **Manage NuGet Packages**. 9. Select the **Browse** tab, then search for **Azure.Messaging.ServiceBus**. Select the **Azure.Messaging.ServiceBus** package, select **Install**, and accept the terms of use. :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-13.png" alt-text="Screenshot of the Manage NuGet Packages dialog box with the Azure.Messaging.ServiceBus highlighted and the Install option outlined in red."::: 1. Follow the same steps to add the `Azure.Identity` NuGet package to the project. -1. Create an **OnlineOrder** class to represent the orders as you process them from the queue. You can reuse a class you have already created. In **Solution Explorer**, right-click the **OrderProcessingRole** class (right-click the class icon, not the role). Click **Add**, then click **Existing Item**. +1. Create an **OnlineOrder** class to represent the orders as you process them from the queue. You can reuse a class you have already created. In **Solution Explorer**, right-click the **OrderProcessingRole** class (right-click the class icon, not the role). Select **Add**, then select **Existing Item**. 1. Browse to the subfolder for **FrontendWebRole\Models**, and then double-click **OnlineOrder.cs** to add it to this project. 1. Add the following `using` statement to the **WorkerRole.cs** file in the **OrderProcessingRole** project. submissions. This example uses the **Worker Role with Service Bus Queue** Visual } } ```-14. You have completed the application. You can test the full +14. You've completed the application. You can test the full application by right-clicking the MultiTierApp project in Solution Explorer,- selecting **Set as Startup Project**, and then pressing F5. Note that the - message count does not increment, because the worker role processes items + selecting **Set as Startup Project**, and then pressing F5. 
The + message count doesn't increment, because the worker role processes items from the queue and marks them as complete. You can see the trace output of your worker role by viewing the Azure Compute Emulator UI. You can do this by right-clicking the emulator icon in the notification area of your taskbar and selecting **Show Compute Emulator UI**. - :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-38.png" alt-text="Screenshot of what appears when you click the emulator icon. Show Compute Emulator UI is in the list of options."::: + :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-38.png" alt-text="Screenshot of what appears when you select the emulator icon. Show Compute Emulator UI is in the list of options."::: :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-39.png" alt-text="Screenshot of the Microsoft Azure Compute Emulator (Express) dialog box."::: |
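For readers working through the tutorial above, the sketch below shows, in broad strokes, how the web role could send an `OnlineOrder` to the queue and how the worker role could receive and complete it with the `Azure.Messaging.ServiceBus` package the tutorial installs. This is a hedged outline only, not the tutorial's actual QueueConnector or WorkerRole code; the connection string placeholder, the `OrdersQueue` name, and the order properties are assumptions.

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public class OnlineOrder
{
    public string Customer { get; set; }
    public string Product { get; set; }
}

public static class QueueSketch
{
    // Placeholder values; use your namespace connection string and queue name.
    private const string ConnectionString = "<your-service-bus-connection-string>";
    private const string QueueName = "OrdersQueue";

    // Web role side: serialize the order and push it onto the queue.
    public static async Task SendOrderAsync(OnlineOrder order)
    {
        await using var client = new ServiceBusClient(ConnectionString);
        ServiceBusSender sender = client.CreateSender(QueueName);
        await sender.SendMessageAsync(new ServiceBusMessage(JsonSerializer.Serialize(order)));
    }

    // Worker role side: pull orders from the queue, process them, and mark them complete.
    public static async Task ProcessOrdersAsync(TimeSpan runFor)
    {
        await using var client = new ServiceBusClient(ConnectionString);
        ServiceBusProcessor processor = client.CreateProcessor(
            QueueName, new ServiceBusProcessorOptions { AutoCompleteMessages = false });

        processor.ProcessMessageAsync += async args =>
        {
            var order = JsonSerializer.Deserialize<OnlineOrder>(args.Message.Body.ToString());
            Console.WriteLine($"Processing order: {order.Customer} ordered {order.Product}");
            await args.CompleteMessageAsync(args.Message);
        };

        processor.ProcessErrorAsync += args =>
        {
            Console.WriteLine(args.Exception.ToString());
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        await Task.Delay(runFor);              // keep the role alive while messages are handled
        await processor.StopProcessingAsync();
    }
}
```

In the tutorial itself the worker role runs continuously rather than for a fixed interval; this sketch only illustrates the send, receive, and complete calls.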
service-bus-messaging | Service Bus Tutorial Topics Subscriptions Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-tutorial-topics-subscriptions-portal.md | Each [subscription to a topic](service-bus-messaging-overview.md#topics) can rec [!INCLUDE [service-bus-create-namespace-portal](./includes/service-bus-create-namespace-portal.md)] ++ [!INCLUDE [service-bus-create-topics-three-subscriptions-portal](./includes/service-bus-create-topics-three-subscriptions-portal.md)] ## Create filter rules on subscriptions -After the namespace and topic/subscriptions are provisioned, and you have the necessary credentials, you're ready to create filter rules on the subscriptions, then send and receive messages. You can examine the code in [this GitHub sample folder](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/TopicFilters). +After the namespace and topic/subscriptions are provisioned, and you have the connection string to the namespace, you're ready to create filter rules on the subscriptions, then send and receive messages. You can examine the code in [this GitHub sample folder](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/TopicFilters). ## Send and receive messages To run the code, follow these steps: 2. Navigate to the sample folder `azure-service-bus\samples\DotNet\Azure.Messaging.ServiceBus\BasicSendReceiveTutorialWithFilters`. -3. Obtain the connection string you copied to Notepad in the Obtain the management credentials section of this tutorial. You also need the name of the topic you created in the previous section. +3. Obtain the connection string you copied to Notepad earlier in this tutorial. You also need the name of the topic you created in the previous section. 4. At the command prompt, type the following command: |
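The linked sample creates its SQL filter rules from code, but you can also create or inspect subscription rules directly from the Azure CLI. The command below is a hedged illustration; the namespace, topic, subscription, rule name, and the `StoreId` property are placeholders rather than values taken from the tutorial.

```azurecli
az servicebus topic subscription rule create \
  --resource-group myResourceGroup \
  --namespace-name myServiceBusNamespace \
  --topic-name myTopic \
  --subscription-name S1 \
  --name StoreFilter \
  --filter-sql-expression "StoreId = 'Store1'"
```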
site-recovery | Encryption Feature Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/encryption-feature-deprecation.md | Title: Deprecation of Azure Site Recovery data encryption feature | Microsoft Docs -description: Details regarig Azure Site Recovery data encryption feature + Title: Deprecation of Azure Site Recovery data encryption feature +description: Get details about the Azure Site Recovery data encryption feature. Last updated 11/15/2019 -# Deprecation of Site Recovery data encryption feature +# Deprecation of the Site Recovery data encryption feature -This document describes the deprecation details and the remediation action you need to take if you are using the Site Recovery data encryption feature while configuring disaster recovery of Hyper-V virtual machines to Azure. +This article describes the deprecation details and the remediation action that you need to take if you're using the Azure Site Recovery data encryption feature while configuring disaster recovery of Hyper-V virtual machines (VMs) to Azure. ## Deprecation information +The Site Recovery data encryption feature was available for customers who wanted to protect replicated data for Hyper-V VMs against security threats. This feature was deprecated on *April 30, 2022*. It was replaced by the [encryption at rest](https://azure.microsoft.com/blog/azure-site-recovery-encryption-at-rest/) feature, which uses [service-side encryption](../storage/common/storage-service-encryption.md) (SSE). -The Site Recovery data encryption feature was available for customers protecting Hyper-V vms to ensure that the replicated data was protected against security threats. this feature will be deprecated by **April 30, 2022**. It is being replaced by the more advanced [Encryption at Rest](https://azure.microsoft.com/blog/azure-site-recovery-encryption-at-rest/) feature, which uses [Storage Service Encryption](../storage/common/storage-service-encryption.md) (SSE). With SSE, data is encrypted before persisting to storage and decrypted on retrieval, and, upon failover to Azure, your VMs will run from the encrypted storage accounts, allowing for an improved recovery time objective (RTO). --Please note that if you are an existing customer using this feature, you would have received communications with the deprecation details and remediation steps. +With SSE, data is encrypted before persisting to storage and decrypted on retrieval. Upon failover to Azure, your VMs will run from the encrypted storage accounts to help improve recovery time objective (RTO). +If you're an existing customer who's using this feature, you should have received communications with the deprecation details and remediation steps. ## What are the implications? -After **April 30, 2022**, any VMs that still use the retired encryption feature will not be allowed to perform failover. +As of *April 30, 2022*, any VMs that still use the retired encryption feature can't perform failover. ## Required action-To continue successful failover operations, and replications follow the steps mentioned below: -Follow these steps for each VM: -1. [Disable replication](./site-recovery-manage-registration-and-protection.md#disable-protection-for-a-hyper-v-virtual-machine-replicating-to-azure-using-the-system-center-vmm-to-azure-scenario). -2. [Create a new replication policy](./hyper-v-azure-tutorial.md#replication-policy). -3. [Enable replication](./hyper-v-vmm-azure-tutorial.md#enable-replication) and select a storage account with SSE enabled. 
+To continue successful failover operations and replications, follow these steps for each VM: -After completing the initial replication to storage accounts with SSE enabled, your VMs will be using Encryption at Rest with Azure Site Recovery. +1. [Disable replication](./site-recovery-manage-registration-and-protection.md#disable-protection-for-a-hyper-v-virtual-machine-replicating-to-azure-using-the-system-center-vmm-to-azure-scenario). +2. [Create a new replication policy](./hyper-v-azure-tutorial.md#replication-policy). +3. [Enable replication](./hyper-v-vmm-azure-tutorial.md#enable-replication) and select a storage account with SSE enabled. +After you complete the initial replication to storage accounts with SSE enabled, your VMs will use encryption at rest with Azure Site Recovery. ## Next steps-Plan for performing the remediation steps, and execute them at the earliest. In case you have any queries regarding this deprecation, please reach out to Microsoft Support. To read more about Hyper-V to Azure scenario, refer [here](hyper-v-vmm-architecture.md). ++Plan for performing the remediation steps, and execute them as soon as possible. If you have any questions about this deprecation, contact Microsoft Support. To read more about the scenario of Hyper-V replication to Azure, see [this article](hyper-v-vmm-architecture.md). |
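Step 3 above depends on the target storage account having service-side encryption enabled. Because SSE is enabled on Azure Storage accounts by default, this is usually just a confirmation step; the PowerShell below is a hedged sketch with placeholder resource names, not part of the deprecation guidance itself.

```azurepowershell
$account = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"

# Returns True when blob data in the account is encrypted with service-side encryption.
$account.Encryption.Services.Blob.Enabled
```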
site-recovery | Upgrade 2012R2 To 2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/upgrade-2012R2-to-2016.md | Title: Upgrade Windows Server/System Center VMM 2012 R2 to Windows Server 2016-Azure Site Recovery -description: Learn how to upgrade Windows Server 2012 R2 hosts & SCVMM 2012 R2 that are configured with Azure Site Recovery, to Windows Server 2016 & SCVMM 2016. + Title: Upgrade Windows Server and System Center VMM 2012 R2 to 2016 +description: Learn how to upgrade Windows Server 2012 R2 hosts and System Center Virtual Machine Manager 2012 R2 configured with Azure Site Recovery to Windows Server 2016 and Virtual Machine Manager 2016. Last updated 12/03/2018 -# Upgrade Windows Server Server/System Center 2012 R2 VMM to Windows Server/VMM 2016 +# Upgrade Windows Server and System Center VMM 2012 R2 to 2016 -This article shows you how to upgrade Windows Server 2012 R2 hosts & SCVMM 2012 R2 that are configured with Azure Site Recovery, to Windows Server 2016 & SCVMM 2016 +This article shows you how to upgrade Windows Server 2012 R2 hosts and System Center Virtual Machine Manager (VMM) 2012 R2 configured with Azure Site Recovery to Windows Server 2016 and VMM 2016. -Site Recovery contributes to your business continuity and disaster recovery (BCDR) strategy. The service ensures that your VM workloads remain available when expected and unexpected outages occur. +Site Recovery contributes to your business continuity and disaster recovery (BCDR) strategy. The service ensures that your virtual machine (VM) workloads remain available when expected and unexpected outages occur. > [!IMPORTANT]-> When you upgrade Windows Server 2012 R2 hosts that are already configured for replication with Azure Site Recovery, you must follow the steps mentioned in this document. Any alternate path chosen for upgrade can result in unsupported states and can result in a break in replication or ability to perform failover. -+> When you upgrade Windows Server 2012 R2 hosts that are already configured for replication with Azure Site Recovery, you must follow the steps mentioned in this article. Any alternative path chosen for upgrade can result in unsupported states and can affect replication or the ability to perform failover. 
In this article, you learn how to upgrade the following configurations in your environment: > [!div class="checklist"]-> * **Windows Server 2012 R2 hosts which aren't managed by SCVMM** -> * **Windows Server 2012 R2 hosts which are managed by a standalone SCVMM 2012 R2 server** -> * **Windows Server 2012 R2 hosts which are managed by highly available SCVMM 2012 R2 server** -+> * Windows Server 2012 R2 hosts that VMM doesn't manage +> * Windows Server 2012 R2 hosts that a standalone VMM 2012 R2 server manages +> * Windows Server 2012 R2 hosts that a highly available VMM 2012 R2 server manages -## Prerequisites & factors to consider +## Prerequisites and factors to consider -Before you upgrade, note the following:- +Before you upgrade, note the following: -- If you have Windows Server 2012 R2 hosts that are not managed by SCVMM, and its a stand-alone environment setup, there will be a break in replication if you try to perform the upgrade.-- If you had selected "*not store my Keys in Active Directory under Distributed Key Management*" while installing SCVMM 2012 R2 in the first place, the upgrades will not complete successfully.+- If you have Windows Server 2012 R2 hosts that VMM doesn't manage, and it's a standalone environment setup, there will be a break in replication if you try to perform the upgrade. +- If you selected **Do not store my keys in Active Directory under Distributed Key Management** while installing VMM 2012 R2, the upgrades won't finish successfully. -- If you are using System Center 2012 R2 VMM, +- If you're using VMM 2012 R2: - - Check the database information on VMM: **VMM console** -> **settings** -> **General** -> **Database connection** - - Check the service accounts being used for System Center Virtual Machine Manager Agent service - - Make sure that you have a backup of the VMM Database. - - Note down the database name of the SCVMM servers involved. This can be done by navigating to **VMM console** -> **Settings** -> **General** -> **Database connection** - - Note down the VMM ID of both the 2012R2 primary and recovery VMM servers. VMM ID can be found from the registry "HKLM:\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup". - - Ensure that you the new SCVMMs that you add to the cluster has the same names as was before. + - Check the database information on VMM. You can find it by going to the VMM console and selecting **Settings** > **General** > **Database connection**. + - Check the service accounts that you're using for the System Center Virtual Machine Manager Agent service. + - Make sure that you have a backup of the VMM database. + - Note down the database names of the VMM servers involved. You can find them by going to the VMM console and selecting **Settings** > **General** > **Database connection**. + - Note down the VMM IDs of the 2012 R2 primary and recovery VMM servers. You can find the VMM IDs in the registry: *HKLM:\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup*. + - Ensure that the new VMM instances that you add to the cluster have the same names as before. -- If you are replicating between two of your sites managed by SCVMMs on both sides, ensure that you upgrade your recovery side first before you upgrade the primary side.+- If you're replicating between two sites managed by VMM on both sides, ensure that you upgrade the recovery side before you upgrade the primary side. 
> [!WARNING]- > While upgrading the SCVMM 2012 R2, under Distributed Key Management, select to **store encryption keys in Active Directory**. Choose the settings for the service account and distributed key management carefully. Based on your selection, encrypted data such as passwords in templates might not be available after the upgrade, and can potentially affect replication with Azure Site Recovery + > When you're upgrading VMM 2012 R2, under **Distributed Key Management**, select **Store encryption keys in Active Directory**. Choose the settings for the service account and distributed key management carefully. Based on your selections, encrypted data such as passwords in templates might not be available after the upgrade and can potentially affect replication with Azure Site Recovery. -> [!IMPORTANT] -> Please refer to the detailed SCVMM documentation of [prerequisites](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#requirements-and-limitations) +For more information, see the detailed VMM [documentation of prerequisites](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#requirements-and-limitations). -## Windows Server 2012 R2 hosts which aren't managed by SCVMM -The list of steps mentioned below applies to the user configuration from [Hyper-V hosts to Azure](./hyper-v-azure-architecture.md) executed by following this [tutorial](./hyper-v-prepare-on-premises-tutorial.md) +## Windows Server 2012 R2 hosts that VMM doesn't manage ++The following steps apply to the user configuration from [Hyper-V hosts to Azure](./hyper-v-azure-architecture.md). You can complete this configuration by following [this tutorial](./hyper-v-prepare-on-premises-tutorial.md). > [!WARNING]-> As mentioned in the prerequisites, these steps only apply to a clustered environment scenario, and not in a stand-alone Hyper-V host configuration. +> As mentioned in the prerequisites, these steps apply only to a clustered environment scenario and not in a standalone Hyper-V host configuration. ++1. Follow the [steps to perform the rolling cluster upgrade](/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade#cluster-os-rolling-upgrade-process). +2. With every new Windows Server 2016 host that's introduced in the cluster, remove the reference of a Windows Server 2012 R2 host from Azure Site Recovery by [following these steps](/azure/site-recovery/site-recovery-manage-registration-and-protection). This should be the host that you chose to drain and evict from the cluster. +3. Run the `Update-VMVersion` command for all virtual machines to complete the upgrades. +4. [Use these steps](./hyper-v-azure-tutorial.md#source-settings) to register the new Windows Server 2016 host to Azure Site Recovery. Note that the Hyper-V site is already active and you just need to register the new host in the cluster. +5. Go to the Azure portal and verify the replicated health status inside the Recovery Services vault. -1. Follow the steps to perform the [rolling cluster upgrade.](/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade#cluster-os-rolling-upgrade-process) to execute the rolling cluster upgrade process. -2. With every new Windows Server 2016 host that is introduced in the cluster, remove the reference of a Windows Server 2012 R2 host from Azure Site Recovery by following steps mentioned [here]. This should be the host you chose to drain & evict from the cluster. -3. 
Once the *Update-VMVersion* command has been executed for all virtual machines, the upgrades have been completed. -4. Use the steps mentioned [here](./hyper-v-azure-tutorial.md#source-settings) to register the new Windows Server 2016 host to Azure Site Recovery. Please note that the Hyper-V site is already active and you just need to register the new host in the cluster. -5. Go to the Azure portal and verify the replicated health status inside the Recovery Services +## Upgrade Windows Server 2012 R2 hosts that a standalone VMM 2012 R2 server manages -## Upgrade Windows Server 2012 R2 hosts managed by stand-alone SCVMM 2012 R2 server -Before you upgrade your Windows Sever 2012 R2 hosts, you need to upgrade the SCVMM 2012 R2 to SCVMM 2016. Follow the below steps:- +Before you upgrade your Windows Server 2012 R2 hosts, you need to upgrade VMM 2012 R2 to VMM 2016. Use the following steps. -**Upgrade standalone SCVMM 2012 R2 to SCVMM 2016** +### Upgrade standalone VMM 2012 R2 to VMM 2016 -1. Uninstall ASR provider by navigating to Control Panel -> Programs -> Programs and Features ->Microsoft Azure Site Recovery , and click on Uninstall -2. [Retain the SCVMM database and upgrade the operating system](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#back-up-and-upgrade-the-operating-system) -3. In **Add remove programs**, select **VMM** > **Uninstall**. b. Select **Remove Features**, and then select V**MM management Server and VMM Console**. c. In **Database Options**, select **Retain database**. d. Review the summary and click **Uninstall**. +1. Uninstall the Azure Site Recovery provider. Go to **Control Panel** > **Programs** > **Programs and Features** > **Microsoft Azure Site Recovery**, and then select **Uninstall**. +2. [Retain the VMM database and upgrade the operating system](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#back-up-and-upgrade-the-operating-system): -4. [Install VMM 2016](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#install-vmm-2016) -5. Launch SCVMM and check status of each hosts under **Fabrics** tab. Click **Refresh** to get the most recent status. You should see status as "Needs Attention". -17. Install the latest [Microsoft Azure Site Recovery Provider](https://aka.ms/downloaddra) on the SCVMM. -16. Install the latest [Microsoft Azure Recovery Service (MARS) agent](https://aka.ms/latestmarsagent) on each host of the cluster. Refresh to ensure SCVMM is able to successfully query the hosts. + a. In **Add or remove programs**, select **VMM** > **Uninstall**. -**Upgrade Windows Server 2012 R2 hosts to Windows Server 2016** + b. Select **Remove Features**, and then select **VMM management Server and VMM Console**. -1. Follow the steps mentioned [here](/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade#cluster-os-rolling-upgrade-process) to execute the rolling cluster upgrade process. -2. After adding the new host to the cluster, refresh the host from the SCVMM console to install the VMM Agent on this updated host. -3. Execute *Update-VMVersion* to update the VM versions of the Virtual machines. -4. Go to the Azure portal and verify the replicated health status of the virtual machines inside the Recovery Services Vault. + c. In **Database Options**, select **Retain database**. -## Upgrade Windows Server 2012 R2 hosts are managed by highly available SCVMM 2012 R2 server -Before you upgrade your Windows Sever 2012 R2 hosts, you need to upgrade the SCVMM 2012 R2 to SCVMM 2016. 
The following modes of upgrade are supported while upgrading SCVMM 2012 R2 servers configured with Azure Site Recovery - Mixed mode with no additional VMM servers & Mixed mode with additional VMM servers. + d. Review the summary and select **Uninstall**. -**Upgrade SCVMM 2012 R2 to SCVMM 2016** +4. [Install VMM 2016](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#install-vmm-2016). +5. Open VMM and check the status of each host under the **Fabrics** tab. Select **Refresh** to get the most recent status. You should see a status of **Needs Attention**. +6. [Install the latest Azure Site Recovery provider (direct download)](https://aka.ms/downloaddra) on VMM. +7. Install the latest [Microsoft Azure Recovery Services (MARS) agent (direct download)](https://aka.ms/azurebackup_agent) on each host of the cluster. Refresh to ensure that VMM can successfully query the hosts. -1. Uninstall ASR provider by navigating to Control Panel -> Programs -> Programs and Features ->Microsoft Azure Site Recovery , and click on Uninstall -2. Follow the steps mentioned [here](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#upgrade-a-standalone-vmm-server) based on the mode of upgrade you wish to execute. -3. Launch SCVMM console and check status of each hosts under **Fabrics** tab. Click **Refresh** to get the most recent status. You should see status as "Needs Attention". -4. Install the latest [Microsoft Azure Site Recovery Provider](https://aka.ms/downloaddra) on the SCVMM. -5. Update the latest [Microsoft Azure Recovery Service (MARS) agent](https://aka.ms/latestmarsagent) on each host of the cluster. Refresh to ensure SC VMM is able to successfully query the hosts. +## Upgrade Windows Server 2012 R2 hosts to Windows Server 2016 +1. Follow [these steps](/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade#cluster-os-rolling-upgrade-process) to perform the rolling cluster upgrade. +2. After you add the new host to the cluster, refresh the host from the VMM console to install the VMM agent on this updated host. +3. Run `Update-VMVersion` to update the versions of the virtual machines. +4. Go to the Azure portal and verify the replicated health status of the virtual machines inside the Recovery Services vault. -**Upgrade Windows Server 2012 R2 hosts to Windows Server 2016** +## Upgrade Windows Server 2012 R2 hosts that a highly available VMM 2012 R2 server manages -1. Follow the steps mentioned [here](/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade#cluster-os-rolling-upgrade-process) to execute the rolling cluster upgrade process. -2. After adding the new host to the cluster, refresh the host from the SCVMM console to install the VMM Agent on this updated host. -3. Execute *Update-VMVersion* to update the VM versions of the Virtual machines. -4. Go to the Azure portal and verify the replicated health status of the virtual machines inside the Recovery Services Vault. +Before you upgrade your Windows Server 2012 R2 hosts, you need to upgrade VMM 2012 R2 to VMM 2016. The following modes of upgrade are supported while you're upgrading VMM 2012 R2 servers configured with Site Recovery mixed mode (either with or without additional VMM servers). ++### Upgrade VMM 2012 R2 to VMM 2016 ++1. Uninstall the Azure Site Recovery provider. Go to **Control Panel** > **Programs** > **Programs and Features** > **Microsoft Azure Site Recovery**, and then select **Uninstall**. +2. 
Follow [these steps](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#upgrade-a-standalone-vmm-server) based on the mode of upgrade that you want to execute. +3. Open the VMM console and check the status of each host under the **Fabrics** tab. Select **Refresh** to get the most recent status. You should see a status of **Needs Attention**. +4. [Install the latest Azure Site Recovery provider (direct download)](https://aka.ms/downloaddra) on VMM. +5. Update the latest [MARS agent (direct download)](https://aka.ms/azurebackup_agent) on each host of the cluster. Refresh to ensure that VMM can successfully query the hosts. ++### Upgrade Windows Server 2012 R2 hosts to Windows Server 2016 ++1. Follow [these steps](/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade#cluster-os-rolling-upgrade-process) to perform the rolling cluster upgrade. +2. After you add the new host to the cluster, refresh the host from the VMM console to install the VMM agent on this updated host. +3. Run `Update-VMVersion` to update the VM versions of the virtual machines. +4. Go to the Azure portal and verify the replicated health status of the virtual machines inside the Recovery Services vault. ## Next steps-Once the upgrade of the hosts is performed, you can perform a [test failover](tutorial-dr-drill-azure.md) to test the health of your replication and disaster recovery status. ++After you upgrade the hosts, you can perform a [test failover](tutorial-dr-drill-azure.md) to test the health of your replication and disaster recovery status. |
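The updated article above says to run `Update-VMVersion` for all virtual machines once the rolling cluster upgrade is done. A minimal PowerShell sketch of that step, assuming the Hyper-V and FailoverClusters modules on an upgraded cluster node and that the cluster functional level has already been raised:

```powershell
# Upgrade the configuration version of every clustered VM after all hosts run Windows Server 2016.
# Assumes this runs on a cluster node; 8.0 is the Windows Server 2016 configuration version.
Import-Module Hyper-V, FailoverClusters

$nodes = (Get-ClusterNode).Name
Get-VM -ComputerName $nodes |
    Where-Object { [version]$_.Version -lt [version]'8.0' } |
    Update-VMVersion -Confirm:$false
```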
static-web-apps | Static Web Apps Cli Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/static-web-apps-cli-configuration.md | + + Title: Configure the Azure Static Web Apps CLI +description: Configure the Azure Static Web Apps CLI ++++ Last updated : 09/30/2022++++# Configure the Azure Static Web Apps CLI ++The Azure Static Web Apps (SWA) CLI gets configuration information for your static web app in one of two ways: ++- CLI options (passed in at runtime) +- A CLI configuration file named *swa-cli.config.json* ++> [!NOTE] +> By default, the SWA CLI looks for a configuration file named *swa-cli.config.json* in the current directory. ++The configuration file can contain multiple configurations, each identified by a unique configuration name. ++- If only a single configuration is present in the *swa-cli.config.json* file, `swa start` uses it by default. +- If options are loaded from a config file, then command line options are ignored. For example, if you run `swa start app --ssl`, the `ssl=true` option is not be picked up by the CLI. ++## Configuration file example ++```json +{ + "configurations": { + "app": { + "appDevserverUrl": "http://localhost:3000", + "apiLocation": "api", + "run": "npm run start", + "swaConfigLocation": "./my-app-source" + } + } +} +``` ++## Initialize a configuration file ++Use `swa init` to kickstart the workflow to create a configuration file for a new or existing project. If the project exists, `swa init` tries to guess the configuration settings for you. ++By default, the process creates these settings in a *swa-cli.config.json* in the current working directory of your project. This directory is the default file name and location used by `swa` when searching for project configuration values. ++```azstatic-cli +swa --config <PATH> +``` ++If the file contains only one named configuration, then it is used by default. If multiple configurations are defined, you need to specify the one to use as an option. ++```azstatic-cli +swa --config-name +``` ++When the configuration file option is used, the settings are stored in JSON format. Once created, you can manually edit the file to update settings or use `swa init` to make updates. ++## View configuration ++The Static Webs CLI provides a `--print-config` option so you can determine resolved options for your current setup. ++Here is an example of what that output looks like when run on a new project with default settings. ++```azstatic-cli +swa --print-config ++Options: + - port: 4280 + - host: localhost + - apiPort: 7071 + - appLocation: . + - apiLocation: <undefined> + - outputLocation: . + - swaConfigLocation: <undefined> + - ssl: false + - sslCert: <undefined> + - sslKey: <undefined> + - appBuildCommand: <undefined> + - apiBuildCommand: <undefined> + - run: <undefined> + - verbose: log + - serverTimeout: 60 + - open: false + - githubActionWorkflowLocation: <undefined> + - env: preview + - appName: <undefined> + - dryRun: false + - subscriptionId: <undefined> + - resourceGroupName: <undefined> + - tenantId: <undefined> + - clientId: <undefined> + - clientSecret: <undefined> + - useKeychain: true + - clearCredentials: false + - config: swa-cli.config.json + - printConfig: true +``` ++Running `swa --print-config` provide's the current configuration defaults. ++> [!NOTE] +> If the project has not yet defined a configuration file, this automatically triggers the `swa init` workflow to help you create one. 
++## Validate configuration ++The swa-cli.config.json file can be validated against the following schema: https://aka.ms/azure/static-web-apps-cli/schema |
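The article above points to a JSON schema for *swa-cli.config.json*. A hedged PowerShell sketch for checking a local configuration file against that schema, assuming PowerShell 7 (where `Test-Json` accepts a `-Schema` string) and that the aka.ms link redirects to the raw schema document:

```powershell
# Validate swa-cli.config.json against the published schema (illustrative only).
$schema = (Invoke-WebRequest -Uri 'https://aka.ms/azure/static-web-apps-cli/schema' -UseBasicParsing).Content
$config = Get-Content -Path './swa-cli.config.json' -Raw

if (Test-Json -Json $config -Schema $schema) {
    Write-Host 'swa-cli.config.json is valid.'
}
```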
static-web-apps | Static Web Apps Cli Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/static-web-apps-cli-deploy.md | + + Title: Deploy a static web app with Azure Static Web Apps CLI +description: Deploy a static web app with Azure Static Web Apps CLI ++++ Last updated : 09/30/2022++++# Deploy a static web app with Azure Static Web Apps CLI ++The `deploy` command deploys the current project to Azure Static Web Apps. ++Some common use cases include: ++- Deploy a front-end app without an API +- Deploy a front-end app with an API +- Deploy a Blazor app ++## Deployment token ++The SWA CLI supports deploying using a deployment token. This is usually useful when deploying from a CI/CD environment. You can get a deployment token either from: ++- The [Azure portal](https://portal.azure.com/): **Home → Static Web App → Your Instance → Overview → Manage deployment token** ++- If you are using the [Azure CLI](https://aka.ms/azcli), you can get the deployment token of your project using the following command: ++```azstatic-cli +az staticwebapp secrets list --name <APPLICATION_NAME> --query "properties.apiKey" +``` ++- If you are using the Azure Static Web Apps CLI, you can use the following command: ++```azstatic-cli +swa deploy --print-token +``` ++You can then use that value with the `--deployment-token <token>` or you can create an environment variable called `SWA_CLI_DEPLOYMENT_TOKEN` and set it to the deployment token. ++> [!IMPORTANT] +> Don't store deployment tokens in a public repository. ++## Deploy a front-end app without an API ++You can deploy a front-end application without an API to Azure Static Web Apps by running the following steps: ++If your front-end application requires a build step, run `swa build` or refer to your application build instructions. ++* **Option 1:** From build folder you would like to deploy, run the deploy command: ++ ```azstatic-cli + cd build/ + swa deploy + ``` ++ > [!NOTE] + > The *build* folder must contain the static content of your app to be deployed. ++* **Option 2:** You can also deploy a specific folder: ++ 1. If your front-end application requires a build step, run `swa build` or refer to your application build instructions. ++ 2. Deploy your app: ++ ```azstatic-cli + swa deploy ./my-dist + ``` ++## Deploy a front-end app with an API ++To deploy both the front-end app and an API to Azure Static Web Apps, use the following steps. ++1. If your front-end application requires a build step, run `swa build` or refer to your application build instructions. ++2. Make sure the API language runtime version in the *staticwebapp.config.json* file is set correctly, for example: ++```json +{ + "platform": { + "apiRuntime": "node:16" + } +} +``` ++> [!NOTE] +> If your project doesn't have the *staticwebapp.config.json* file, add one under your `outputLocation` folder. ++3. Deploy your app: ++```azstatic-cli +swa deploy ./my-dist --api-location ./api +``` ++### Deploy a Blazor app ++To deploy a Blazor app with an API to Azure Static Web Apps, use the following steps: ++1. Build your Blazor app in **Release** mode: ++```azstatic-cli +dotnet publish -c Release -o bin/publish +``` ++2. From the root of your project, run the deploy command: ++```azstatic-cli +swa deploy ./bin/publish/wwwroot --api-location ./Api +``` ++## Deploy using the `swa-cli.config.json` ++> [!NOTE] +> The path for `outputLocation` must be relative to the `appLocation`. 
++If you are using a [`swa-cli.config.json`](./static-web-apps-cli-configuration.md) configuration file in your project and have a single configuration entry, for example: ++```json +{ + "configurations": { + "my-app": { + "appLocation": "./", + "apiLocation": "api", + "outputLocation": "frontend", + "start": { + "outputLocation": "frontend" + }, + "deploy": { + "outputLocation": "frontend" + } + } + } +} +``` ++Then you can deploy your application by running the following steps: ++1. If your front-end application requires a build step, run `swa build` or refer to your application build instructions. ++2. Deploy your app: ++```azstatic-cli +swa deploy +``` ++If you have multiple configuration entries, you can provide the entry ID to specify which one to use: ++```azstatic-cli +swa deploy my-otherapp +``` ++## Options ++Here are the options you can use with `swa deploy`: ++- `-a, --app-location <path>`: the folder containing the source code of the front-end application (default: "`.`") +- `-i, --api-location <path>`: the folder containing the source code of the API application +- `-O, --output-location <path>`: the folder containing the built source of the front-end application. The path is relative to `--app-location` (default: "`.`") +- `-w, --swa-config-location <swaConfigLocation>`: the directory where the staticwebapp.config.json file is located +- `-d, --deployment-token <secret>`: the secret token used to authenticate with the Static Web Apps +- `-dr, --dry-run`: simulate a deploy process without actually running it (default: `false`) +- `-pt, --print-token`: print the deployment token (default: `false`) +- `--env [environment]`: the type of deployment environment where to deploy the project (default: "`preview`") +- `-S, --subscription-id <subscriptionId>`: Azure subscription ID used by this project (default: `process.env.AZURE_SUBSCRIPTION_ID`) +- `-R, --resource-group <resourceGroupName>`: Azure resource group used by this project +- `-T, --tenant-id <tenantId>`: Azure tenant ID (default: `process.env.AZURE_TENANT_ID`) +- `-C, --client-id <clientId>`: Azure client ID +- `-CS, --client-secret <clientSecret>`: Azure client secret +- `-n, --app-name <appName>`: Azure Static Web App application name +- `-cc, --clear-credentials`: clear persisted credentials before login (default: `false`) +- `-u, --use-keychain`: enable using the operating system native keychain for persistent credentials (default: `true`) +- `-nu, --no-use-keychain`: disable using the operating system native keychain +- `-h, --help`: display help for command ++## Usage ++Deploy using a deployment token. ++```azstatic-cli +swa deploy ./dist/ --api-location ./api/ --deployment-token <TOKEN> +``` ++Deploy using a deployment token from the environment variables. ++```azstatic-cli +SWA_CLI_DEPLOYMENT_TOKEN=123 swa deploy ./dist/ --api-location ./api/ +``` ++Deploy using `swa-cli.config.json` file ++```azstatic-cli +swa deploy +swa deploy myconfig +``` ++Print the deployment token. ++```azstatic-cli +swa deploy --print-token +``` ++Deploy to a specific environment. ++```azstatic-cli +swa deploy --env production +``` |
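The deployment-token options described above can be combined in a small script. A sketch in PowerShell, where the app name, resource group, and folder paths are hypothetical placeholders:

```powershell
# Fetch the deployment token with the Azure CLI and hand it to the SWA CLI through the
# SWA_CLI_DEPLOYMENT_TOKEN environment variable, then deploy to the production environment.
$env:SWA_CLI_DEPLOYMENT_TOKEN = az staticwebapp secrets list `
    --name 'my-swa-app' `
    --resource-group 'my-rg' `
    --query 'properties.apiKey' `
    --output tsv

swa deploy ./dist/ --api-location ./api/ --env production
```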
storage | Data Lake Storage Use Databricks Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-databricks-spark.md | This tutorial shows you how to connect your Azure Databricks cluster to data sto In this tutorial, you will: > [!div class="checklist"]-> - Create a Databricks cluster > - Ingest unstructured data into a storage account > - Run analytics on your data in Blob storage If you don't have an Azure subscription, create a [free account](https://azure.m See [Tutorial: Connect to Azure Data Lake Storage Gen2](/azure/databricks/getting-started/connect-to-azure-storage) (Steps 1 through 3). After completing these steps, make sure to paste the tenant ID, app ID, and client secret values into a text file. You'll need those soon. -### Download the flight data +- An Azure Databricks workspace. See [Create an Azure Databricks workspace](/azure/databricks/getting-started/#--create-an-azure-databricks-workspace). ++- An Azure Databricks cluster. See [Create a cluster](/azure/databricks/getting-started/quick-start#step-1-create-a-cluster). ++## Download the flight data This tutorial uses flight data from the Bureau of Transportation Statistics to demonstrate how to perform an ETL operation. You must download this data to complete the tutorial. This tutorial uses flight data from the Bureau of Transportation Statistics to d 2. Unzip the contents of the zipped file and make a note of the file name and the path of the file. You need this information in a later step. -## Create an Azure Databricks service --In this section, you create an Azure Databricks service by using the Azure portal. --1. In the Azure portal, select **Create a resource** > **Analytics** > **Azure Databricks**. --  --2. Under **Azure Databricks Service**, provide the following values to create a Databricks service: -- |Property |Description | - ||| - |**Workspace name** | Provide a name for your Databricks workspace. | - |**Subscription** | From the drop-down, select your Azure subscription. | - |**Resource group** | Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. For more information, see [Azure Resource Group overview](../../azure-resource-manager/management/overview.md). | - |**Location** | Select **West US 2**. For other available regions, see [Azure services available by region](https://azure.microsoft.com/regions/services/). | - |**Pricing Tier** | Select **Standard**. | --  --3. The account creation takes a few minutes. To monitor the operation status, view the progress bar at the top. --4. Select **Pin to dashboard** and then select **Create**. --## Create a Spark cluster in Azure Databricks --1. In the Azure portal, go to the Databricks service that you created, and select **Launch Workspace**. --2. You're redirected to the Azure Databricks portal. From the portal, select **Cluster**. --  --3. In the **New cluster** page, provide the values to create a cluster. --  -- Fill in values for the following fields, and accept the default values for the other fields: -- - Enter a name for the cluster. -- - Make sure you select the **Terminate after 120 minutes of inactivity** checkbox. Provide a duration (in minutes) to terminate the cluster, if the cluster is not being used. --4. Select **Create cluster**. After the cluster is running, you can attach notebooks to the cluster and run Spark jobs. 
- ## Ingest data ### Copy source data into the storage account In this section, you'll create a container and a folder in your storage account. 1. In the [Azure portal](https://portal.azure.com), go to the Azure Databricks service that you created, and select **Launch Workspace**. -2. On the left, select **Workspace**. From the **Workspace** drop-down, select **Create** > **Notebook**. +2. In the sidebar, select **Workspace**. ++3. In the Workspace folder, select **Create > Notebook**. ++ > [!div class="mx-imgBorder"] + >  -  +4. In the **Create Notebook** dialog, enter a name and then select **Python** in the **Default Language** drop-down list. This selection determines the default language of the notebook. -3. In the **Create Notebook** dialog box, enter a name for the notebook. Select **Python** as the language, and then select the Spark cluster that you created earlier. +5. In the **Cluster** drop-down list, make sure that the cluster you created earlier is selected. -4. Select **Create**. +6. Click **Create**. The notebook opens with an empty cell at the top. -5. Copy and paste the following code block into the first cell, but don't run this code yet. +7. Copy and paste the following code block into the first cell, but don't run this code yet. ```python configs = {"fs.azure.account.auth.type": "OAuth", |
storage | Point In Time Restore Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md | Point-in-time restore for block blobs has the following limitations and known is - Performing a customer-managed failover on a storage account resets the earliest possible restore point for that storage account. For example, suppose you have set the retention period to 30 days. If more than 30 days have elapsed since the failover, then you can restore to any point within that 30 days. However, if fewer than 30 days have elapsed since the failover, then you cannot restore to a point prior to the failover, regardless of the retention period. For example, if it's been 10 days since the failover, then the earliest possible restore point is 10 days in the past, not 30 days in the past. - Snapshots are not created or deleted as part of a restore operation. Only the base blob is restored to its previous state. - Point-in-time restore is not supported for hierarchical namespaces or operations via Azure Data Lake Storage Gen2.-- Point-in-time restore is not supported when a private endpoint is enabled on the storage account. - Point-in-time restore is not supported when the storage account's **AllowedCopyScope** property is set to restrict copy scope to the same Azure AD tenant or virtual network. For more information, see [About Permitted scope for copy operations (preview)](../common/security-restrict-copy-operations.md?toc=/azure/storage/blobs/toc.json&tabs=portal#about-permitted-scope-for-copy-operations-preview). > [!IMPORTANT] |
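For context on the restore operation these limitations apply to, here is a hedged PowerShell sketch using the Az.Storage cmdlet `Restore-AzStorageBlobRange`. The resource group and account names are hypothetical, and point-in-time restore must already be enabled on the account:

```powershell
# Restore all block blobs in the account to their state 7 days ago (illustrative only).
$restorePoint = (Get-Date).AddDays(-7).ToUniversalTime()

Restore-AzStorageBlobRange -ResourceGroupName 'my-rg' `
    -StorageAccountName 'mystorageaccount' `
    -TimeToRestore $restorePoint
```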
storage | Storage Blob Container Properties Metadata Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-python.md | |
synapse-analytics | Resources Self Help Sql On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md | If your query returns NULL values instead of partitioning columns or can't find The error `Inserting value to batch for column type DATETIME2 failed` indicates that the serverless pool can't read the date values from the underlying files. The datetime value stored in the Parquet or Delta Lake file can't be represented as a `DATETIME2` column. -Inspect the minimum value in the file by using Spark, and check that some dates are less than 0001-01-03. If you stored the files by using Spark 2.4, the datetime values before are written by using the Julian calendar that isn't aligned with the proleptic Gregorian calendar used in serverless SQL pools. +Inspect the minimum value in the file by using Spark, and check that some dates are less than 0001-01-03. If you stored the files by using the Spark 2.4 version or with the higher Spark version that still uses legacy datetime storage format, the datetime values before are written by using the Julian calendar that isn't aligned with the proleptic Gregorian calendar used in serverless SQL pools. There might be a two-day difference between the Julian calendar used to write the values in Parquet (in some Spark versions) and the proleptic Gregorian calendar used in serverless SQL pool. This difference might cause conversion to a negative date value, which is invalid. deltaTable.update(col("MyDateTimeColumn") < '0001-02-02', { "MyDateTimeColumn": This change removes the values that can't be represented. The other date values might be properly loaded but incorrectly represented because there's still a difference between Julian and proleptic Gregorian calendars. You might see unexpected date shifts even for the dates before `1900-01-01` if you use Spark 3.0 or older versions. -Consider [migrating to Spark 3.1 or higher](https://spark.apache.org/docs/latest/sql-migration-guide.html). It uses a proleptic Gregorian calendar that's aligned with the calendar in serverless SQL pool. Reload your legacy data with the higher version of Spark, and use the following setting to correct the dates: +Consider [migrating to Spark 3.1 or higher](https://spark.apache.org/docs/latest/sql-migration-guide.html) and switching to the proleptic Gregorian calendar. The latest Spark versions use by default a proleptic Gregorian calendar that's aligned with the calendar in serverless SQL pool. Reload your legacy data with the higher version of Spark, and use the following setting to correct the dates: ```spark spark.conf.set("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED") You don't need to use separate databases to isolate data for different tenants. - [Azure Synapse Analytics frequently asked questions](../overview-faq.yml) - [Store query results to storage using serverless SQL pool in Azure Synapse Analytics](create-external-table-as-select.md) - [Synapse Studio troubleshooting](../troubleshoot/troubleshoot-synapse-studio.md)-- [Troubleshoot a slow query on a dedicated SQL Pool](/troubleshoot/azure/synapse-analytics/dedicated-sql/troubleshoot-dsql-perf-slow-query)+- [Troubleshoot a slow query on a dedicated SQL Pool](/troubleshoot/azure/synapse-analytics/dedicated-sql/troubleshoot-dsql-perf-slow-query) |
update-center | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md | -Update management center (preview) is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. Using Update management center (preview), you can make updates in real-time or schedule them within a defined maintenance window. +Update management center (preview) is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. In addition, you can use the Update management center (preview) to make real-time updates or schedule them within a defined maintenance window. You can use the update management center (preview) in Azure to: Update management center (preview) has been redesigned and doesn't depend on Azu - Helps secure machines with new ways of patching such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hotpatching](../automanage/automanage-hotpatch.md) or custom maintenance schedules. - Sync patch cycles in relation to patch Tuesday, the unofficial term for Microsoft's scheduled security fix release on every second Tuesday of each month. - The following diagram illustrates how update management center (preview) assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux.  To support management of your Azure VM or non-Azure machine, update management c - [Azure virtual machine Windows agent](../virtual-machines/extensions/agent-windows.md) or [Azure virtual machine Linux agent](../virtual-machines/extensions/agent-linux.md) for Azure VMs. - [Azure arc-enabled servers agent](../azure-arc/servers/agent-overview.md) for non-Azure Linux and Windows machines or physical servers. - The extension agent installation and configuration is managed by update management center (preview) and there's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The update management center (preview) extension runs code locally on the machine to interact with the operating system, and it includes: + The extension agent installation and configuration are managed by the update management center (preview). There's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The update management center (preview) extension runs code locally on the machine to interact with the operating system, and it includes: - Retrieving the assessment information about status of system updates for it specified by the Windows Update client or Linux package manager. - Initiating the download and installation of approved updates with Windows Update client or Linux package manager. Along with the prerequisites listed below, see [support matrix](support-matrix.m |Azure VM | [Azure Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) or Azure [Owner](../role-based-access-control/built-in-roles.md#owner). Arc enabled server | [Azure Connected Machine Resource Administrator](../azure-arc/servers/security-overview.md#identity-and-access-control).
- ### Permissions -You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using update management center (preview). -+You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using the update management center (preview). **Actions** |**Permission** |**Scope** | | | | You need the following permissions to create and manage update deployments. The |Read permission for Maintenance updates resource |*Microsoft.Maintenance/updates/read* |Machine | |Read permission for Maintenance apply updates resource |*Microsoft.Maintenance/applyUpdates/read* |Machine | - ### Network planning To prepare your network to support update management center (preview), you may need to configure some infrastructure components. For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../v Update management center (preview) supports Azure VMs created using Azure Marketplace images, where the virtual machine agent is already included in the Azure Marketplace image. - ## Next steps - [View updates for single machine](view-updates.md) - [Deploy updates now (on-demand) for single machine](deploy-updates.md) - [Schedule recurring updates](scheduled-patching.md) - [Manage update settings via Portal](manage-update-settings.md)-- [Manage multiple machines using update management center](manage-multiple-machines.md)+- [Manage multiple machines using update management center](manage-multiple-machines.md) |
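As an illustration of the permissions table above, a hedged Azure PowerShell sketch that grants the Azure VM role to an operator at resource group scope; the user, subscription, and resource group values are hypothetical:

```powershell
# Grant the role required for managing updates on Azure VMs (illustrative only).
New-AzRoleAssignment -SignInName 'operator@contoso.com' `
    -RoleDefinitionName 'Virtual Machine Contributor' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/my-rg'
```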
virtual-desktop | Teams Supported Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-supported-features.md | Title: Supported features for Microsoft Teams on Azure Virtual Desktop - Azure description: Supported features for Microsoft Teams on Azure Virtual Desktop. Previously updated : 01/19/2023 Last updated : 02/02/2023 The following table lists whether the Windows Desktop client or macOS client sup |Configure audio devices|Yes|No| |Live captions|Yes|Yes| |Communication Access Real-time Translation (CART) transcriptions|Yes|Yes|-|Give and take control |Yes|No| +|Give and take control |Yes|Yes| |Multiwindow|Yes|Yes| |Background blur|Yes|Yes| |Background images|Yes|Yes| The following table lists whether the Windows Desktop client or macOS client sup |Secondary ringer|Yes|No| |Dynamic e911|Yes|Yes| |Diagnostic overlay|Yes|No|-|Noise suppression|Yes|No| +|Noise suppression|Yes|Yes| ## Minimum requirements The following table lists the minimum required versions for each Teams feature. |Configure audio devices|1.2.1755 and later|Not supported|1.0.2006.11001 and later|Updates within 90 days of the current version| |Live captions|1.2.2322 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version| |CART transcriptions|1.2.2322 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|-|Give and take control |1.2.2924 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version| +|Give and take control |1.2.2924 and later|10.7.10 and later|1.0.2006.11001 and later (Windows), 1.31.2211.15001 and later (macOS)|Updates within 90 days of the current version| |Multiwindow|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|1.5.00.11865 and later| |Background blur|1.2.3004 and later|10.7.10 and later|1.0.2006.11001 and later|1.5.00.11865 and later| |Background images|1.2.3004 and later|10.7.10 and later|1.0.2006.11001 and later|1.5.00.11865 and later| The following table lists the minimum required versions for each Teams feature. |Secondary ringer|1.2.3004 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version| |Dynamic e911|1.2.2600 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version| |Diagnostic overlay|1.2.3316 and later|Not supported|1.17.2205.23001 and later|Updates within 90 days of the current version|-|Noise suppression|1.2.3316 and later|Not supported|1.0.2006.11001 and later|Updates within 90 days of the current version| +|Noise suppression|1.2.3316 and later|10.8.1 and later|1.0.2006.11001 and later|Updates within 90 days of the current version| ## Next steps |
virtual-machines | Network Watcher Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-update.md | az vm extension image list-versions --publisher Microsoft.Azure.NetworkWatcher - Customers with large deployments who need to update multiple VMs at once. For updating select VMs manually, see the next section. ```powershell-<# - .SYNOPSIS - This script will scan all VMs in the provided subscription and upgrade any out of date AzureNetworkWatcherExtensions -- .DESCRIPTION - This script should be no-op if AzureNetworkWatcherExtensions are up to date - Requires Azure PowerShell 4.2 or higher to be installed (e.g. Install-Module AzureRM). -- .EXAMPLE - .\UpdateVMAgentsInSub.ps1 -SubID F4BC4873-5DAB-491E-B713-1358EF4992F2 -NoUpdate -+<# + .SYNOPSIS + This script will scan all VMs in the provided subscription and upgrade any out of date AzureNetworkWatcherExtensions + .DESCRIPTION + This script should be no-op if AzureNetworkWatcherExtensions are up to date + Requires Azure PowerShell 4.2 or higher to be installed (e.g. Install-Module AzureRM). + .EXAMPLE + .\UpdateVMAgentsInSub.ps1 -SubID F4BC4873-5DAB-491E-B713-1358EF4992F2 -NoUpdate #>-[CmdletBinding()] -param( - [Parameter(Mandatory=$true)] - [string] $SubID, - [Parameter(Mandatory=$false)] - [Switch] $NoUpdate = $false, - [Parameter(Mandatory=$false)] - [string] $MinVersion = "1.4.1974.1" -) ---function NeedsUpdate($version) -{ - if ($version -eq $MinVersion) - { - return $false - } -- $lessThan = $true; - $versionParts = $version -split '\.'; - $minVersionParts = $MinVersion -split '\.'; - for ($i = 0; $i -lt $versionParts.Length; $i++) - { - if ([int]$versionParts[$i] -gt [int]$minVersionParts[$i]) - { - $lessThan = $false; - break; - } - } -- return $lessThan -} --Write-Host "Scanning all VMs in the subscription: $($SubID)" -Select-AzSubscription -SubscriptionId $SubID; -$vms = Get-AzVM; -$foundVMs = $false; -Write-Host "Starting VM search, this may take a while" --foreach ($vmName in $vms) -{ - # Get Detailed VM info - $vm = Get-AzVM -ResourceGroupName $vmName.ResourceGroupName -Name $vmName.name -Status; - $isWindows = $vm.OsVersion -match "Windows"; - foreach ($extension in $vm.Extensions) - { - if ($extension.Name -eq "AzureNetworkWatcherExtension") - { - if (NeedsUpdate($extension.TypeHandlerVersion)) - { - $foundVMs = $true; - if (-not ($NoUpdate)) - { - Write-Host "Found VM that needs to be updated: subscriptions/$($SubID)/resourceGroups/$($vm.ResourceGroupName)/providers/Microsoft.Compute/virtualMachines/$($vm.Name) -> Updating " -NoNewline - Remove-AzVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name -Name "AzureNetworkWatcherExtension" -Force - Write-Host "... 
" -NoNewline - $type = if ($isWindows) { "NetworkWatcherAgentWindows" } else { "NetworkWatcherAgentLinux" }; - Set-AzVMExtension -ResourceGroupName $vm.ResourceGroupName -Location $vmName.Location -VMName $vm.Name -Name "AzureNetworkWatcherExtension" -Publisher "Microsoft.Azure.NetworkWatcher" -Type $type -typeHandlerVersion "1.4" - Write-Host "Done" - } - else - { - Write-Host "Found $(if ($isWindows) {"Windows"} else {"Linux"}) VM that needs to be updated: subscriptions/$($SubID)/resourceGroups/$($vm.ResourceGroupName)/providers/Microsoft.Compute/virtualMachines/$($vm.Name)" - } - } - } - } -} --if ($foundVMs) -{ - Write-Host "Finished $(if ($NoUpdate) {"searching"} else {"updating"}) out of date AzureNetworkWatcherExtension on VMs" + +[CmdletBinding()] +param( + [Parameter(Mandatory=$true)] + [string] $SubID, + [Parameter(Mandatory=$false)] + [Switch] $NoUpdate = $false, + [Parameter(Mandatory=$false)] + [string] $MinVersion = "1.4.2423.1" +) +function NeedsUpdate($version) +{ + if ([Version]$version -lt [Version]$MinVersion) + { + $lessThan = $true + }else{ + $lessThan = $false + } + return $lessThan +} +Write-Host "Scanning all VMs in the subscription: $($SubID)" +Set-AzContext -SubscriptionId $SubID +$vms = Get-AzVM +$foundVMs = $false +Write-Host "Starting VM search, this may take a while" +foreach ($vmName in $vms) +{ + # Get Detailed VM info + $vm = Get-AzVM -ResourceGroupName $vmName.ResourceGroupName -Name $vmName.name -Status + $isitWindows = $vm.OsName -like "*Windows*" + + foreach ($extension in $vm.Extensions) + { + if ($extension.Name -eq "AzureNetworkWatcherExtension") + { + if (NeedsUpdate($extension.TypeHandlerVersion)) + { + $foundVMs = $true + if (-not ($NoUpdate)) + { + Write-Host "Found VM that needs to be updated: subscriptions/$($SubID)/resourceGroups/$($vm.ResourceGroupName)/providers/Microsoft.Compute/virtualMachines/$($vm.Name) -> Updating " -NoNewline + Remove-AzVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name -Name "AzureNetworkWatcherExtension" -Force + Write-Host "... " -NoNewline + $type = if ($isitWindows) { "NetworkWatcherAgentWindows" } else { "NetworkWatcherAgentLinux" } + Set-AzVMExtension -ResourceGroupName $vm.ResourceGroupName -Location $vmName.Location -VMName $vm.Name -Name "AzureNetworkWatcherExtension" -Publisher "Microsoft.Azure.NetworkWatcher" -Type $type -typeHandlerVersion $MinVersion + Write-Host "Done" + } + else + { + Write-Host "Found $(if ($isitWindows) {"Windows"} else {"Linux"}) VM that needs to be updated: subscriptions/$($SubID)/resourceGroups/$($vm.ResourceGroupName)/providers/Microsoft.Compute/virtualMachines/$($vm.Name)" + } + } + } + } }-else -{ - Write-Host "All AzureNetworkWatcherExtensions up to date" + +if ($foundVMs) +{ + Write-Host "Finished $(if ($NoUpdate) {"searching"} else {"updating"}) out of date AzureNetworkWatcherExtension on VMs" +} +else +{ + Write-Host "All AzureNetworkWatcherExtensions up to date" } ``` |
virtual-machines | Disk Encryption Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md | Linux server distributions that are not endorsed by Azure do not support Azure D | Oracle | Oracle Linux 8.6 Gen 2 | 8.6 | Oracle:Oracle-Linux:ol86-lvm-gen2:latest | OS and data disk (see note below) | | Oracle | Oracle Linux 8.5 | 8.5 | Oracle:Oracle-Linux:ol85-lvm:latest | OS and data disk (see note below) | | Oracle | Oracle Linux 8.5 Gen 2 | 8.5 | Oracle:Oracle-Linux:ol85-lvm-gen2:latest | OS and data disk (see note below) |+| RedHat | RHEL 8.7 | 8.7 | RedHat:RHEL:8_7:latest | OS and data disk (see note below) | +| RedHat | RHEL 8.7 Gen 2 | 8.7 | RedHat:RHEL:87-gen2:latest | OS and data disk (see note below) | | RedHat | RHEL 8.6 | 8.6 | RedHat:RHEL:8_6:latest | OS and data disk (see note below) |-| RedHat | RHEL 8.6 Gen 2 | 8.5 | RedHat:RHEL:86-gen2:latest | OS and data disk (see note below) | +| RedHat | RHEL 8.6 Gen 2 | 8.6 | RedHat:RHEL:86-gen2:latest | OS and data disk (see note below) | | RedHat | RHEL 8.5 | 8.5 | RedHat:RHEL:8_5:latest | OS and data disk (see note below) | | RedHat | RHEL 8.5 Gen 2 | 8.5 | RedHat:RHEL:85-gen2:latest | OS and data disk (see note below) | | RedHat | RHEL 8.4 | 8.4 | RedHat:RHEL:8.4:latest | OS and data disk (see note below) | |
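To show how a VM built from one of the supported images above is typically encrypted, a hedged Azure PowerShell sketch; the key vault, VM, and resource group names are hypothetical, and the key vault must be enabled for disk encryption:

```powershell
# Enable Azure Disk Encryption on a Linux VM built from a supported image (illustrative only).
$kv = Get-AzKeyVault -VaultName 'my-ade-kv' -ResourceGroupName 'my-rg'

Set-AzVMDiskEncryptionExtension -ResourceGroupName 'my-rg' `
    -VMName 'my-rhel87-vm' `
    -DiskEncryptionKeyVaultUrl $kv.VaultUri `
    -DiskEncryptionKeyVaultId $kv.ResourceId `
    -VolumeType 'All' `
    -SkipVmBackup
```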
virtual-machines | Run Command Managed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command-managed.md | Get-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" This command will retrieve current execution progress, including latest output, start/end time, exit code, and terminal state of the execution. ```powershell-interactive-Get-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName" -Status +Get-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName" -Expand InstanceView ``` ### Create or update Run Command on a VM using SourceScriptUri (storage blob SAS URL) |
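The section referenced above creates a managed Run Command from a script in a storage blob. A hedged sketch of that call; the resource names and the SAS URL are placeholders:

```powershell
# Create or update a managed Run Command whose script is downloaded from a blob SAS URL.
Set-AzVMRunCommand -ResourceGroupName 'my-rg' `
    -VMName 'my-vm' `
    -RunCommandName 'RunCommandName' `
    -Location 'EastUS' `
    -SourceScriptUri '<BLOB_SAS_URL>'
```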
virtual-machines | Disaster Recovery Sap Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/disaster-recovery-sap-guide.md | To achieve DR for highly available SAP Web Dispatcher setup in primary region, y The SAP central services contain enqueue and message server, which is one of the SPOF of your SAP application. In an SAP system, there can be only one such instance, and it can be configured for high availability. Read [High Availability for SAP Central Service](sap-planning-supported-configurations.md#high-availability-for-sap-central-service) to understand the different high availability solution for SAP workload on Azure. -Configuring high availability for SAP Central Services protects resources and processes from local incidents. To achieve DR for SAP Central Services, you can use Azure Site Recovery. Azure Site Recovery replicate VMs and the attached managed disks, but there are additional considerations for the DR strategy. Check the section below for more information, based on the operating system used for SAP central services. +Configuring high availability for SAP Central Services protects resources and processes from local incidents. To achieve DR for SAP Central Services, you can use Azure Site Recovery. Azure Site Recovery replicates VMs and the attached managed disks, but there are additional considerations for the DR strategy. Check the section below for more information, based on the operating system used for SAP central services. #### [Linux](#tab/linux) Irrespective of the operating system (SLES or RHEL) and its version, pacemaker r >[!Note] > We recommend to have same fencing mechanism for both primary and DR region for ease of operation and failover. It is not advised to have different fencing mechanism after failover to DR site. +#### [Windows](#tab/windows) ++For SAP system, the redundancy of SPOF component in the primary region is achieved by configuring high availability. To achieve similar high availability setup in the disaster recovery region after failover, you need to consider additional points like cluster reconfiguration, SAP shared directories availability, alongside of replicating VMs and attached managed disk to DR site using Azure Site Recovery. On Windows, the high availability of SAP application can be achieved using Windows Server Failover Cluster (WSFC). The diagram below shows the different components involved in configuring high availability of SAP central services with WSFC. Each component must be evaluated to achieve similar high availability set up in the DR site. If you have configured SAP Web Dispatcher using WSFC, similar consideration would apply as well. ++ ++##### SAP system configured with File share ++If you've configured your SAP system using file share on primary region, you need to make sure all components and the data in the file share (SMB on Azure Files, SMB on ANF) are replicated to the disaster recovery region if there is a failover. You can use Azure Site Recovery to replicate the cluster VMs and other application server VMs to the disaster recovery region. There are some additional considerations that are outlined below. ++###### Load balancer ++Azure Site Recovery replicates VMs to the DR site, but it doesn't replicate the Azure load balancer. You'll need to create a separate internal load balancer on the DR site beforehand or after failover. If you create the internal load balancer beforehand, create an empty backend pool and add VMs after the failover event. 
++###### Quorum (cloud witness) ++If you have configured the cluster with a cloud witness as its quorum mechanism, you need to create a separate storage account in the DR region. In the event of a failover, the quorum setting must be updated with the new storage account name and access keys. ++###### Windows server failover cluster ++If there is a failover, SAP ASCS/ERS VMs configured with WSFC won't work out-of-the-box. Additional reconfiguration is required to start the SAP system in the DR region. ++Read [SAP NetWeaver HA deployment with File Share running on Windows failover to DR Region using ASR](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-netweaver-ha-deployment-with-file-share-running-on-windows/ba-p/3727034) blog to learn more about the additional steps that are required in the DR region. ++###### File share directories ++The high availability setup of SAP NetWeaver or ABAP platform uses enqueue replication server for achieving application level redundancy for the enqueue service of SAP system with WSFC configuration. The high availability setup of SAP central services (ASCS and ERS) with file share uses SMB shares. You will need to make sure that the SAP binaries and data on these SMB shares are replicated to the DR site. Azure Site Recovery replicates VMs and local managed disk attached, but it doesn't replicate the file shares. Choose the replication method, based on the type of file share storage you've configured for the setup. The cross regional replication methodology for each storage is presented at abstract level. You need to confirm exact steps to replicate storage and perform testing. ++| SAP file share directories | Cross region replication mechanism | +| -- | | +| SMB on Azure Files | [Robocopy](../../../storage/files/storage-files-migration-robocopy.md) | +| SMB on Azure NetApp Files | [Cross Region Replication](../../../azure-netapp-files/cross-region-replication-introduction.md) | + ### SAP Application Servers -In primary region, the redundancy of SAP application servers is achieved by installing instances in multiple VMs. To have DR for SAP application servers, [Azure Site Recovery](../../../site-recovery/azure-to-azure-tutorial-enable-replication.md) can be set up for each application server VM. For shared storages (transport filesystem, interface data filesystem) that are attached to the application servers, follow the appropriate DR practice based on the type of [shared storage](disaster-recovery-overview-guide.md#storage). +In the primary region, the redundancy of the SAP application servers is achieved by installing instances in multiple VMs. To have DR for SAP application servers, [Azure Site Recovery](../../../site-recovery/azure-to-azure-tutorial-enable-replication.md) can be set up for each application server VM. For shared storages (transport filesystem, interface data filesystem) that is attached to the application servers, follow the appropriate DR practice based on the type of [shared storage](disaster-recovery-overview-guide.md#storage). ### SAP Database Servers |
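The quorum note above says the cloud witness must be repointed to a storage account in the DR region after failover. A hedged PowerShell sketch of that step, run on one of the failed-over cluster nodes; the storage account name and key are hypothetical:

```powershell
# Repoint the cluster's cloud witness at a storage account that lives in the DR region.
Import-Module FailoverClusters

Set-ClusterQuorum -CloudWitness `
    -AccountName 'drwitnessstorage' `
    -AccessKey '<storage-account-access-key>'
```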
virtual-machines | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md | In the SAP workload documentation space, you can find the following areas: ## Change Log +- February 02, 2023: Add new HA provider susChkSrv for [SAP HANA Scale-out HA on SUSE](sap-hana-high-availability-scale-out-hsr-suse.md) and change from SAPHanaSR to SAPHanaSrMultiTarget provider, enabling HANA multi-target replication - January 27, 2023: Mark Azure Active Directory Domain Services as supported AD solution in [SAP workload on Azure virtual machine supported scenarios](planning-supported-configurations.md) after successful testing - December 28, 2022: Update documents [Azure Storage types for SAP workload](./planning-guide-storage.md) and [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) to provide more details on ANF deployment processes to achieve proximity and low latency. Introduction of zonal deployment process of NFS shares on ANF - December 28, 2022: Updated the guide [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md) across all topics. Also added VM configuration examples for different sizes of databases |
virtual-machines | Sap Hana High Availability Scale Out Hsr Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-suse.md | -In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. The examples are based on HANA 2.0 SP5 and SUSE Linux Enterprise Server 12 SP5. +In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. Before you begin, refer to the following SAP notes and papers: For the configuration presented in this document, deploy seven virtual machines: 1. Select the virtual machines of the HANA cluster (the NICs for the `client` subnet). 1. Select **Add**. 2. Select **Save**.- 1. Next, create a health probe: For the configuration presented in this document, deploy seven virtual machines: > [!Note] > When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). - > [!IMPORTANT] > Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../../load-balancer/load-balancer-custom-probe-overview.md). > See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). The next sections describe the steps to deploy NFS - you'll need to select only > [!TIP] > You chose to deploy `/han). - #### Deploy the Azure NetApp Files infrastructure Deploy ANF volumes for the `/han#set-up-the-azure-netapp-files-infrastructure). In this example, the following Azure NetApp Files volumes were used: * volume **HN1**-shared-s1 (nfs://10.23.1.7/**HN1**-shared-s1) * volume **HN1**-shared-s2 (nfs://10.23.1.7/**HN1**-shared-s2) - #### Deploy the NFS on Azure Files infrastructure Deploy Azure Files NFS shares for the `/han?tabs=azure-portal). In this example, the following Azure Files NFS shares were used: ## Operating system configuration and preparation The instructions in the next sections are prefixed with one of the following abbreviations:-* **[A]**: Applicable to all nodes +* **[A]**: Applicable to all nodes, including majority maker * **[AH]**: Applicable to all HANA DB nodes-* **[M]**: Applicable to the majority maker node +* **[M]**: Applicable to the majority maker node only * **[AH1]**: Applicable to all HANA DB nodes on SITE 1 * **[AH2]**: Applicable to all HANA DB nodes on SITE 2 * **[1]**: Applicable only to HANA DB node 1, SITE 1 Configure and prepare your OS by doing the following steps: 3. **[A]** SUSE delivers special resource agents for SAP HANA and by default agents for SAP HANA scale-up are installed. Uninstall the packages for scale-up, if installed and install the packages for scenario SAP HANA scale-out. The step needs to be performed on all cluster VMs, including the majority maker. + > [!NOTE] + > SAPHanaSR-ScaleOut version 0.181 or higher must be installed. 
+ ```bash # Uninstall scale-up packages and patterns sudo zypper remove patterns-sap-hana You chose to deploy the SAP shared directories on [NFS share on Azure Files](../ In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section, only if you are using NFS on Azure NetApp Files. -1. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings. +1. **[AH]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings. <pre><code> vi /etc/sysctl.d/91-NetApp-HANA.conf In this example, the shared HANA file systems are deployed on Azure NetApp Files net.ipv4.tcp_sack = 1 </code></pre> -2. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). +2. **[AH]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). <pre><code> vi /etc/modprobe.d/sunrpc.conf Create a dummy file system cluster resource, which will monitor and report failu `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. -## Implement HANA hooks SAPHanaSR and susChkSrv +## Implement HANA HA hooks SAPHanaSrMultiTarget and susChkSrv -This important step is to optimize the integration with the cluster and detection when a cluster failover is possible. It is highly recommended to configure the SAPHanaSR Python hook. For HANA 2.0 SP5 and above, implementing both SAPHanaSR and susChkSrv hook is recommended. +This important step is to optimize the integration with the cluster and detection when a cluster failover is possible. It is highly recommended to configure SAPHanaSrMultiTarget Python hook. For HANA 2.0 SP5 and higher, implementing both SAPHanaSrMultiTarget and susChkSrv hooks is recommended. -SusChkSrv extends the functionality of the main SAPHanaSR HA provider. It acts in the situation when HANA process hdbindexserver crashes. If a single process crashes typically HANA tries to restart it. Restarting the indexserver process can take a long time, during which the HANA database is not responsive. +> [!NOTE] +> SAPHanaSrMultiTarget HA provider replaces SAPHanaSR for HANA scale-out. SAPHanaSR was described in earlier version of this document. +> See [SUSE blog post](https://www.suse.com/c/sap-hana-scale-out-multi-target-upgrade/) about changes with the new HANA HA hook. -With susChkSrv implemented, an immediate and configurable action is executed, instead of waiting on hdbindexserver process to restart on the same node. In HANA scale-out susChkSrv acts for every HANA VM independently. The configured action will kill HANA or fence the affected VM, which triggers a failover by SAPHanaSR in the configured timeout period. +Provided steps for SAPHanaSrMultiTarget hook are for a new installation. 
Upgrading an existing environment from SAPHanaSR to SAPHanaSrMultiTarget provider requires several changes and are _NOT_ described in this document. If the existing environment uses no third site for disaster recovery and [HANA multi-target system replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/ba457510958241889a459e606bbcf3d3.html) is not used, SAPHanaSR HA provider can remain in use. -> [!NOTE] -> susChkSrv Python hook requires SAP HANA 2.0 SP5 and SAPHanaSR-ScaleOut version 0.184.1 or higher must be installed. +SusChkSrv extends the functionality of the main SAPHanaSrMultiTarget HA provider. It acts in the situation when HANA process hdbindexserver crashes. If a single process crashes typically HANA tries to restart it. Restarting the indexserver process can take a long time, during which the HANA database isn't responsive. With susChkSrv implemented, an immediate and configurable action is executed, instead of waiting on hdbindexserver process to restart on the same node. In HANA scale-out susChkSrv acts for every HANA VM independently. The configured action will kill HANA or fence the affected VM, which triggers a failover in the configured timeout period. ++SUSE SLES 15 SP1 or higher is required for operation of both HANA HA hooks. Following table shows other dependencies. ++|SAP HANA HA hook | HANA version required | SAPHanaSR-ScaleOut required | +|-| -- | | +| SAPHanaSrMultiTarget | HANA 2.0 SPS4 or higher | 0.180 or higher | +| susChkSrv | HANA 2.0 SPS5 or higher | 0.184.1 or higher | ++Steps to implement both hooks: 1. **[1,2]** Stop HANA on both system replication sites. Execute as <sid\>adm: With susChkSrv implemented, an immediate and configurable action is executed, in sapcontrol -nr 03 -function StopSystem ``` -2. **[1,2]** Adjust `global.ini` on each cluster site. If the requirements for susChkSrv hook are not met, remove the entire block `[ha_dr_provider_suschksrv]` from below section. -You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid values are [ ignore | stop | kill | fence ]. +2. **[1,2]** Adjust `global.ini` on each cluster site. If the prerequisites for susChkSrv hook aren't met, entire block `[ha_dr_provider_suschksrv]` shouldn't be configured. +You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid values are `[ ignore | stop | kill | fence ]`. ```bash- # add to global.ini - [ha_dr_provider_SAPHanaSR] - provider = SAPHanaSR + # add to global.ini on both sites. Do not copy global.ini between sites. + [ha_dr_provider_saphanasrmultitarget] + provider = SAPHanaSrMultiTarget path = /usr/share/SAPHanaSR-ScaleOut execution_order = 1 You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va action_on_lost = kill [trace]- ha_dr_saphanasr = info + ha_dr_saphanasrmultitarget = info ``` -Configuration pointing to the standard location /usr/share/SAPHanaSR-ScaleOut brings a benefit, that the python hook code is automatically updated through OS or package updates and it gets used by HANA at next restart. With an optional, own path, such as /hana/shared/myHooks you can decouple OS updates from the used hook version. + Default location of the HA hooks as deliveredy SUSE is /usr/share/SAPHanaSR-ScaleOut. Using the standard location brings a benefit, that the python hook code is automatically updated through OS or package updates and gets used by HANA at next restart. 
With an optional own path, such as /hana/shared/myHooks, you can decouple OS updates from the used hook version.
-3. **[AH]** The cluster requires sudoers configuration on the cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root` adapt the values of hn1/HN1 with correct SID.
+3. **[AH]** The cluster requires sudoers configuration on the cluster nodes for <sid\>adm. In this example, that is achieved by creating a new file. Execute the commands as `root` and adapt the values of hn1 with the correct lowercase SID.
 ```bash
 cat << EOF > /etc/sudoers.d/20-saphana- # SAPHanaSR-ScaleOut needs for srHook - Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR - Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR - hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL - hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=HN1 --case=fenceMe + # SAPHanaSR-ScaleOut needs for HA/DR hook scripts + so1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_site_srHook_* + so1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_gsh * + so1adm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=hn1 * EOF
 ```
 Configuration pointing to the standard location /usr/share/SAPHanaSR-ScaleOut br sapcontrol -nr 03 -function StartSystem ```
-5. **[1]** Verify the hook installation. Execute as <sid\>adm on the active HANA system replication site.
+5. **[A]** Verify the hook installation is active on all cluster nodes. Execute as <sid\>adm.
 ```bash
 cdtrace- awk '/ha_dr_SAPHanaSR.*crm_attribute/ \ - { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_* + grep HADR.*load.*SAPHanaSrMultiTarget nameserver_*.trc | tail -3 # Example output- # 2021-03-31 01:02:42.695244 ha_dr_SAPHanaSR SFAIL - # 2021-03-31 01:02:58.966856 ha_dr_SAPHanaSR SFAIL - # 2021-03-31 01:03:04.453100 ha_dr_SAPHanaSR SFAIL - # 2021-03-31 01:03:04.619768 ha_dr_SAPHanaSR SFAIL - # 2021-03-31 01:03:04.743444 ha_dr_SAPHanaSR SFAIL - # 2021-03-31 01:04:15.062181 ha_dr_SAPHanaSR SOK + # nameserver_hana-s1-db1.31001.000.trc:[14162]{-1}[-1/-1] 2023-01-26 12:53:55.728027 i ha_dr_provider HADRProviderManager.cpp(00083) : loading HA/DR Provider 'SAPHanaSrMultiTarget' from /usr/share/SAPHanaSR-ScaleOut/ + grep SAPHanaSr.*init nameserver_*.trc | tail -3 + # Example output + # nameserver_hana-s1-db1.31001.000.trc:[17636]{-1}[-1/-1] 2023-01-26 16:30:19.256705 i ha_dr_SAPHanaSrM SAPHanaSrMultiTarget.py(00080) : SAPHanaSrMultiTarget.init() CALLING CRM: <sudo /usr/sbin/crm_attribute -n hana_hn1_gsh -v 2.2 -l reboot> rc=0 + # nameserver_hana-s1-db1.31001.000.trc:[17636]{-1}[-1/-1] 2023-01-26 16:30:19.256739 i ha_dr_SAPHanaSrM SAPHanaSrMultiTarget.py(00081) : SAPHanaSrMultiTarget.init() Running srHookGeneration 2.2, see attribute hana_hn1_gsh too
 ```
 - Verify the susChkSrv hook installation. Execute as <sid\>adm on all HANA VMs + Verify the susChkSrv hook installation. Execute as <sid\>adm.
 ```bash
 cdtrace egrep '(LOST:|STOP:|START:|DOWN:|init|load|fail)' nameserver_suschksrv.trc Configuration pointing to the standard location /usr/share/SAPHanaSR-ScaleOut br sudo crm configure rsc_defaults resource-stickiness=1000 sudo crm configure rsc_defaults migration-threshold=50 ```-3. 
**[1]** verify the communication between the HOOK and the cluster - ```bash - crm_attribute -G -n hana_hn1_glob_srHook - # Expected result - # crm_attribute -G -n hana_hn1_glob_srHook - # scope=crm_config name=hana_hn1_glob_srHook value=SOK - ``` -4. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is ok and that all of the resources are started. +3. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is ok and that all of the resources are started. ```bash # Cleanup any failed resources - the following command is example crm resource cleanup rsc_SAPHana_HN1_HDB03 Configuration pointing to the standard location /usr/share/SAPHanaSR-ScaleOut br # Place the cluster out of maintenance mode sudo crm configure property maintenance-mode=false ```++4. **[1]** Verify the communication between the HANA HA hook and the cluster, showing status SOK for SID and both replication sites with status P(rimary) or S(econdary). + ```bash + sudo /usr/sbin/SAPHanaSR-showAttr + # Expected result + # Global cib-time maintenance prim sec sync_state upd + # + # HN1 Fri Jan 27 10:38:46 2023 false HANA_S1 - SOK ok + # + # Sites lpt lss mns srHook srr + # -- + # HANA_S1 1674815869 4 hana-s1-db1 PRIM P + # HANA_S2 30 4 hana-s2-db1 SWAIT S + ``` > [!NOTE] > The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup. For instance, you may need to increase the start timeout, if it takes longer to start the SAP HANA database.- ## Test SAP HANA failover |
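The sudoers entries shown in the SAP HANA scale-out diff above are easy to get wrong, and a broken drop-in can lock the hook out of the cluster attributes. The following bash sketch validates the file before HANA is restarted; it assumes the example SID *hn1* and the file name */etc/sudoers.d/20-saphana* from the diff, so adjust both for your environment.

```bash
# Hedged validation sketch for the sudoers drop-in created in the step above.
# Assumes SID hn1 and the file /etc/sudoers.d/20-saphana; adapt both to your system.

# Check the drop-in for syntax errors without activating a broken file
visudo -cf /etc/sudoers.d/20-saphana

# List the commands the <sid>adm user may run without a password;
# the crm_attribute and SAPHanaSR-hookHelper entries should appear in the output
sudo -l -U hn1adm
```

Running both checks on every cluster node before restarting HANA avoids hook initialization failures that only surface in the nameserver trace files later.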
virtual-network | Nat Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md | Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP
* NAT gateway supports TCP and UDP protocols only. ICMP isn't supported.
-* When virtual machine instances or other compute resources attempt to communicate on a TCP connection that doesn't exist, they send TCP reset packets. An example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address of the virtual machine to signal and force connection closure. The public side of a NAT gateway doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted.
+* NAT gateway will send a TCP Reset (RST) packet to the connection endpoint that attempts to communicate on a connection flow that doesn't exist. This connection flow may no longer exist if the NAT gateway idle timeout was reached or the connection was closed earlier. When the connection endpoint receives the NAT gateway's TCP RST packet, the connection is no longer usable.
### NAT gateway configurations
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
* [Learn module: Introduction to Azure Virtual Network NAT](/training/modules/intro-to-azure-virtual-network-nat).
-* To learn more about architecture options for Azure Virtual Network NAT, see [Azure Well-Architected Framework review of an Azure NAT gateway](/azure/architecture/networking/guide/well-architected-network-address-translation-gateway).
+* To learn more about architecture options for Azure Virtual Network NAT, see [Azure Well-Architected Framework review of an Azure NAT gateway](/azure/architecture/networking/guide/well-architected-network-address-translation-gateway). |
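Because idle flows are reclaimed and then answered with a TCP reset, long-lived connections behind a NAT gateway often benefit from a longer idle timeout. The Azure CLI sketch below is illustrative only: the resource names are placeholders, and the 10-minute timeout is an assumption for the example rather than a recommendation from the article above.

```bash
# Illustrative sketch: create a NAT gateway with a 10-minute idle timeout and
# attach it to a subnet. MyRG, MyNatIp, MyNatGateway, MyVnet, and MySubnet are placeholders.
az network public-ip create --resource-group MyRG --name MyNatIp --sku Standard

az network nat gateway create --resource-group MyRG --name MyNatGateway \
  --public-ip-addresses MyNatIp --idle-timeout 10

az network vnet subnet update --resource-group MyRG --vnet-name MyVnet \
  --name MySubnet --nat-gateway MyNatGateway
```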
virtual-wan | How To Virtual Hub Routing Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing-powershell.md | The steps in this section help you set up routing configuration for a virtual ne 1. Verify static route on the virtual network connection. ```azurepowershell-interactive- Get-AzVirtualHubVnetConnection -ResourceGroupName "[Resource group]" -VirtualHubName "[virtual hub name]" -Name "[Virtual hub connection name]" + Get-AzVirtualHubVnetConnection -ResourceGroupName "[Resource group name]" -VirtualHubName "[virtual hub name]" -Name "[Virtual hub connection name]" ``` ## Next steps * For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).-* For more information about Virtual WAN, see the [Virtual WAN FAQ](virtual-wan-faq.md). +* For more information about Virtual WAN, see the [Virtual WAN FAQ](virtual-wan-faq.md). |
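For readers who work in the Azure CLI rather than PowerShell, a roughly equivalent check of the static routes on a virtual hub connection might look like the sketch below. It assumes the virtual-wan CLI extension is installed; the command group and the JMESPath property path are assumptions to verify against your CLI version.

```bash
# Assumed CLI counterpart of Get-AzVirtualHubVnetConnection for inspecting static routes.
# Requires the virtual-wan extension: az extension add --name virtual-wan
az network vhub connection show \
  --resource-group MyRG \
  --vhub-name MyVirtualHub \
  --name MyVnetConnection \
  --query routingConfiguration.vnetRoutes.staticRoutes
```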
virtual-wan | Nat Rules Vpn Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/nat-rules-vpn-gateway.md | Another consideration is the address pool size for translation. If the target ad **Ingress SNAT rules** are applied on packets that are entering Azure through the Virtual WAN site-to-site VPN gateway. In this scenario, you want to connect two site-to-site VPN branches to Azure. VPN Site 1 connects via Link A, and VPN Site 2 connects via Link B. Each site has the same address space 10.30.0.0/24.
-In this example, we'll NAT site1 to 127.30.0.0.0/24. The Virtual WAN spoke virtual networks and branches other will automatically learn this post-NAT address space.
+In this example, we'll NAT site1 to 172.30.0.0/24. The Virtual WAN spoke virtual networks and other branches will automatically learn this post-NAT address space.
The following diagram shows the projected end result:
1. Ensure the site-to-site VPN gateway is able to peer with the on-premises BGP peer.
- In this example, the **Ingress NAT Rule** will need to translate 10.30.0.132 to 127.30.0.132. In order to do that, click 'Edit VPN site' to configure VPN site Link A BGP address to reflect this translated BGP peer address (127.30.0.132).
+ In this example, the **Ingress NAT Rule** will need to translate 10.30.0.132 to 172.30.0.132. In order to do that, click 'Edit VPN site' to configure VPN site Link A BGP address to reflect this translated BGP peer address (172.30.0.132).
:::image type="content" source="./media/nat-rules-vpn-gateway/edit-site-bgp.png" alt-text="Screenshot showing how to change the BGP peering IP." lightbox="./media/nat-rules-vpn-gateway/edit-site-bgp.png":::
The following diagram shows the projected end result:
* If **BGP Translation** is enabled, the site-to-site VPN gateway will automatically advertise the **External Mapping** of **Egress NAT rules** to on-premises as well as **External Mapping** of **Ingress NAT rules** to Azure (virtual WAN hub, connected spoke virtual networks, connected VPN/ExpressRoute). If **BGP Translation** is disabled, translated routes aren't automatically advertised to the on-premises. As such, the on-premises BGP speaker must be configured to advertise the post-NAT (**External Mapping**) range of **Ingress NAT** rules associated to that VPN site link connection. Similarly, a route for the post-NAT (**External Mapping**) range of **Egress NAT Rules** must be applied on the on-premises device.
* The site-to-site VPN gateway automatically translates the on-premises BGP peer IP address **if** the on-premises BGP peer IP address is contained within the **Internal Mapping** of an **Ingress NAT Rule**. As a result, the VPN site's **Link Connection BGP address** must reflect the NAT-translated address (part of the External Mapping).
- For instance, if the on-premises BGP IP address is 10.30.0.133 and there is an **Ingress NAT Rule** that translates 10.30.0.0/24 to 127.30.0.0/24, the VPN site's **Link Connection BGP Address** must be configured to be the translated address (127.30.0.133).
+ For instance, if the on-premises BGP IP address is 10.30.0.133 and there is an **Ingress NAT Rule** that translates 10.30.0.0/24 to 172.30.0.0/24, the VPN site's **Link Connection BGP Address** must be configured to be the translated address (172.30.0.133).
* In Dynamic NAT, on-premises BGP peer IP can't be part of the pre-NAT address range (**Internal Mapping**) as IP and port translations aren't fixed. 
If there is a need to translate the on-premises BGP peering IP, please create a separate **Static NAT Rule** that translates the BGP peering IP address only. For instance, if the on-premises network has an address space of 10.0.0.0/24 with an on-premises BGP peer IP of 10.0.0.1 and there is an **Ingress Dynamic NAT Rule** to translate 10.0.0.0/24 to 192.168.0.0/32, a separate **Ingress Static NAT Rule** translating 10.0.0.1/32 to 192.168.0.2/32 is required and the corresponding VPN site's **Link Connection BGP address** must be updated to the NAT-translated address (part of the External Mapping). |
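If you prefer to script the ingress SNAT rule described above rather than use the portal, a sketch along the following lines may work with the Azure CLI virtual-wan extension. The command group `az network vpn-gateway nat-rule` and its parameter names are assumptions here; confirm them with `--help` for your CLI version. The address ranges simply mirror the example above.

```bash
# Assumed sketch: static ingress SNAT rule translating the on-premises range
# 10.30.0.0/24 to 172.30.0.0/24 on a Virtual WAN site-to-site VPN gateway.
# Verify the command and parameters with: az network vpn-gateway nat-rule create --help
az network vpn-gateway nat-rule create \
  --resource-group MyRG \
  --gateway-name MyVpnGateway \
  --name IngressRuleSite1 \
  --type Static \
  --mode IngressSnat \
  --internal-mappings 10.30.0.0/24 \
  --external-mappings 172.30.0.0/24
```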
virtual-wan | Scenario Route Through Nva | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-route-through-nva.md | Virtual WAN does not support a scenario where VNets 5,6 connect to virtual hub
> [!NOTE]
> To simplify the routing and to reduce the changes in the Virtual WAN hub route tables, we recommend the new BGP peering with Virtual WAN hub (preview). For more information, see the following articles:
- >* [Scenario: BGP peering with a virtual hub (preview)](scenario-bgp-peering-hub.md)
- >* [How to create BGP peering with virtual hub (preview) - Azure portal](create-bgp-peering-hub-portal.md)
+ >* [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md)
+ >* [How to create BGP peering with virtual hub - Azure portal](create-bgp-peering-hub-portal.md)
>
3. Configure a static route for VNets 5,6 in VNet 2's virtual network connection. To set up routing configuration for a virtual network connection, see [virtual hub routing](how-to-virtual-hub-routing.md#routing-configuration). |
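The BGP peering with the virtual hub recommended above can also be scripted. The sketch below assumes the Azure CLI virtual-wan extension and uses placeholder names, ASN, NVA IP, and connection resource ID; treat the parameter names as assumptions to confirm against `az network vhub bgpconnection create --help`.

```bash
# Assumed sketch: BGP peering between the virtual hub and an NVA in a connected VNet.
# MyRG, MyHub, the ASN, the NVA IP, and the connection resource ID are placeholders.
az network vhub bgpconnection create \
  --resource-group MyRG \
  --vhub-name MyHub \
  --name nva-peer \
  --peer-asn 65510 \
  --peer-ip 10.2.0.68 \
  --vhub-conn "/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.Network/virtualHubs/MyHub/hubVirtualNetworkConnections/vnet2-connection"
```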
virtual-wan | Virtual Wan About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-about.md | For available regions and locations, see [Virtual WAN partners, regions, and loc To configure an end-to-end virtual WAN, you create the following resources: -* **Virtual WAN:** The virtualWAN resource represents a virtual overlay of your Azure network and is a collection of multiple resources. It contains links to all your virtual hubs that you would like to have within the virtual WAN. Virtual WAN resources are isolated from each other and can't contain a common hub. Virtual hubs across Virtual WAN don't communicate with each other. +* **Virtual WAN:** The *virtualWAN* resource represents a virtual overlay of your Azure network and is a collection of multiple resources. It contains links to all your virtual hubs that you would like to have within the virtual WAN. Virtual WANs are isolated from each other and can't contain a common hub. Virtual hubs in different virtual WANs don't communicate with each other. * **Hub:** A virtual hub is a Microsoft-managed virtual network. The hub contains various service endpoints to enable connectivity. From your on-premises network (vpnsite), you can connect to a VPN gateway inside the virtual hub, connect ExpressRoute circuits to a virtual hub, or even connect mobile users to a point-to-site gateway in the virtual hub. The hub is the core of your network in a region. Multiple virtual hubs can be created in the same region. |
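To make the virtualWAN and hub resources described above concrete, here is a minimal creation sketch with the Azure CLI (virtual-wan extension). The resource names, region, and the hub address prefix are placeholders chosen for illustration, not values from the article.

```bash
# Minimal sketch: a Standard virtual WAN with one hub.
# Names, region, and the /23 hub prefix are illustrative placeholders.
az network vwan create --resource-group MyRG --name MyVirtualWAN \
  --location westeurope --type Standard

az network vhub create --resource-group MyRG --name MyHubWestEU \
  --vwan MyVirtualWAN --location westeurope --address-prefix 10.100.0.0/23
```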
virtual-wan | Virtual Wan Custom Ipsec Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-custom-ipsec-portal.md | You can configure a custom IPsec policy for a Virtual WAN VPN connection in the ## Configure a policy -1. **Locate the virtual hub**. From a browser, navigate to the [Azure portal](https://aka.ms/azurevirtualwanpreviewfeatures) and sign in with your Azure account. Navigate to your Virtual WAN resource and locate the virtual hub that your VPN site is connected to. +1. **Locate the virtual hub**. In the Azure portal, go to your Virtual WAN resource and locate the virtual hub that your VPN site is connected to. 2. **Select the VPN site**. From the hub overview page, click **VPN (Site to site)** and select the VPN Site for which you want to set up a custom IPsec policy.  |
virtual-wan | Virtual Wan Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md | No. Virtual WAN doesn't require ExpressRoute from each site. Your sites may be c ### Is there a network throughput or connection limit when using Azure Virtual WAN? -Network throughput is per service in a virtual WAN hub. In each hub, the VPN aggregate throughput is up to 20 Gbps, the ExpressRoute aggregate throughput is up to 20 Gbps, and the User VPN/point-to-site VPN aggregate throughput is up to 20 Gbps. The router in virtual hub supports up to 50 Gbps for VNet-to-VNet traffic flows and assumes a total of 2000 VM workload across all VNets connected to a single virtual hub. +Network throughput is per service in a virtual WAN hub. In each hub, the VPN aggregate throughput is up to 20 Gbps, the ExpressRoute aggregate throughput is up to 20 Gbps, and the User VPN/point-to-site VPN aggregate throughput is up to 200 Gbps. The router in virtual hub supports up to 50 Gbps for VNet-to-VNet traffic flows and assumes a total of 2000 VM workload across all VNets connected to a single virtual hub. + To secure upfront capacity without having to wait for the virtual hub to scale out when more throughput is needed, you can set the minimum capacity or modify as needed. See [About virtual hub settings - hub capacity](hub-settings.md#capacity). For cost implications, see *Routing Infrastructure Unit* cost in the [Azure Virtual WAN Pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page. |
vpn-gateway | Vpn Gateway About Vpn Gateway Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md | Before you create a VPN gateway, you must create a gateway subnet. The gateway s When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others. -When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Additionally, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future additional configurations. While you can create a gateway subnet as small as /29, we recommend that you create a gateway subnet of /27 or larger (/27, /26 etc.) if you have the available address space to do so. This will accommodate most configurations. +When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Additionally, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future additional configurations. While you can create a gateway subnet as small as /29 (applicable to Basic SKU only), we recommend that you create a gateway subnet of /27 or larger (/27, /26 etc.). This will accommodate most configurations. The following Resource Manager PowerShell example shows a gateway subnet named GatewaySubnet. You can see the CIDR notation specifies a /27, which allows for enough IP addresses for most configurations that currently exist. |
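As a quick reference for the /27 recommendation above, this is what creating a right-sized gateway subnet can look like with the Azure CLI; the resource group, VNet name, and 10.0.255.0/27 range are placeholders, and the subnet must be named exactly GatewaySubnet.

```bash
# Illustrative sketch: create a /27 gateway subnet. The resource group, VNet name,
# and address range are placeholders; the subnet name must be GatewaySubnet.
az network vnet subnet create \
  --resource-group MyRG \
  --vnet-name MyVNet \
  --name GatewaySubnet \
  --address-prefixes 10.0.255.0/27
```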