Updates from: 02/09/2021 04:08:18
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/service-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/service-limits.md
@@ -35,7 +35,6 @@ The following table lists the administrative configuration limits in the Azure A
|Category |Limit |
|||
-|Number of applications per Azure AD B2C tenant |250 |
|Number of scopes per application |1000 |
|Number of [custom attributes](user-profile-attributes.md#extension-attributes) per user <sup>1</sup> |100 |
|Number of redirect URLs per application |100 |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/customize-application-attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/customize-application-attributes.md
@@ -8,7 +8,7 @@
Previously updated : 1/25/2021 Last updated : 02/08/2021
@@ -16,6 +16,10 @@
Microsoft Azure AD provides support for user provisioning to third-party SaaS applications such as Salesforce, G Suite and others. If you enable user provisioning for a third-party SaaS application, the Azure portal controls its attribute values through attribute-mappings.
+Before you get started, make sure you are familiar with app management and **Single Sign-On (SSO)** concepts. Check out the following links:
+- [Quickstart Series on App Management in Azure AD](../manage-apps/view-applications-portal.md)
+- [What is Single Sign-On (SSO)?](../manage-apps/what-is-single-sign-on.md)
+ There's a pre-configured set of attributes and attribute-mappings between Azure AD user objects and each SaaS app's user objects. Some apps manage other types of objects along with Users, such as Groups. You can customize the default attribute-mappings according to your business needs. So, you can change or delete existing attribute-mappings, or create new attribute-mappings.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/user-provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning.md
@@ -8,15 +8,17 @@
Previously updated : 01/11/2021 Last updated : 02/08/2021

# What is automated SaaS app user provisioning in Azure AD?

In Azure Active Directory (Azure AD), the term **app provisioning** refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../saas-apps/servicenow-provisioning-tutorial.md), and more.
+Just getting started with app management and single sign-on (SSO) in Azure AD? Check out the [Quickstart Series](../manage-apps/view-applications-portal.md).
+ To learn more about SCIM and join the Tech Community conversation, see [Provisioning with SCIM Tech Community](https://aka.ms/scimoverview).

![Provisioning overview diagram](./media/user-provisioning/provisioning-overview.png)
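The provisioning flow described above ultimately reduces to Azure AD calling a SCIM 2.0 REST endpoint exposed by the target application. As a rough sketch of what such a call looks like (the endpoint URL, bearer token, and minimal payload are illustrative placeholders, not values from the article; Azure AD's real requests carry richer attribute sets driven by the attribute-mappings):

```python
import requests

# Hypothetical SCIM 2.0 endpoint and token for the target app; Azure AD's
# provisioning service issues equivalent requests on your behalf.
SCIM_BASE = "https://app.example.com/scim/v2"
TOKEN = "Enter_a_SCIM_Bearer_Token_Here"

# A minimal SCIM core-schema user, per RFC 7643.
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "b.simon@contoso.com",
    "name": {"givenName": "B.", "familyName": "Simon"},
    "active": True,
}

# Create the user; a SCIM service responds 201 with the stored resource.
resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=new_user,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.status_code, resp.json().get("id"))
```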
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-java-daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-daemon.md
@@ -18,17 +18,17 @@
# Quickstart: Acquire a token and call Microsoft Graph API from a Java console app using app's identity
-In this quickstart, you download and run a code sample that demonstrates how a Java application can obtain an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+In this quickstart, you download and run a code sample that demonstrates how a Java application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
> [!div renderon="docs"]
-> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-java-daemon/java-console-daemon.svg)
## Prerequisites
-To run this sample you will need:
+To run this sample, you need:
- [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or greater
-- [Maven](https://maven.apache.org/).
+- [Maven](https://maven.apache.org/)
> [!div renderon="docs"]
> ## Register and download your quickstart app
@@ -39,7 +39,7 @@ To run this sample you will need:
>
> ### Option 1: Register and auto configure your app and then download your code sample
>
-> 1. Go to the new [Azure portal - App registrations](https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaDaemonQuickstartPage/sourceType/docs) pane.
+> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**.
> 1. Follow the instructions to download and automatically configure your new application with just one click.
>
@@ -49,22 +49,22 @@ To run this sample you will need:
> #### Step 1: Register your application
> To register your application and add the app's registration information to your solution manually, follow these steps:
>
-> 1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
-> 1. If your account gives you access to more than one tenant, select your account in the top right corner, and set your portal session to the desired Azure AD tenant.
-> 1. Navigate to the Microsoft identity platform for developers [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page.
-> 1. Select **New registration**.
-> 1. When the **Register an application** page appears, enter your application's registration information.
-> 1. In the **Name** section, enter a meaningful application name that will be displayed to users of the app, for example `Daemon-console`, then select **Register** to create the application.
-> 1. Once registered, select the **Certificates & secrets** menu.
-> 1. Under **Client secrets**, select **+ New client secret**. Give it a name and select **Add**. Copy the secret on a safe location. You will need it to use in your code.
-> 1. Now, select the **API Permissions** menu, select **+ Add a permission** button, select **Microsoft Graph**.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+> 1. Search for and select **Azure Active Directory**.
+> 1. Under **Manage**, select **App registrations** > **New registration**.
+> 1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later.
+> 1. Select **Register**.
+> 1. Under **Manage**, select **Certificates & secrets**.
+> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
+> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
> 1. Select **Application permissions**.
-> 1. Under **User** node, select **User.Read.All**, then select **Add permissions**
+> 1. Under the **User** node, select **User.Read.All**, then select **Add permissions**.
> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure your quickstart app
+> ### Download and configure the quickstart app
>
-> #### Step 1: Configure your application in Azure portal
+> #### Step 1: Configure the application in Azure portal
> For the code sample for this quickstart to work, you need to create a client secret and add Graph API's **User.Read.All** application permission.
>
> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
> > [Make these changes for me]()
@@ -72,7 +72,7 @@ To run this sample you will need:
> > [!div id="appconfigured" class="alert alert-info"]
> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-#### Step 2: Download your Java project
+#### Step 2: Download the Java project
> [!div renderon="docs"]
> [Download the Java daemon project](https://github.com/Azure-Samples/ms-identity-java-daemon/archive/master.zip)
@@ -86,11 +86,11 @@ To run this sample you will need:
> [!div renderon="docs"]
-> #### Step 3: Configure your Java project
+> #### Step 3: Configure the Java project
>
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, **C:\Azure-Samples**.
+> 1. Extract the zip file to a local folder close to the root of the disk, for example, *C:\Azure-Samples*.
> 1. Navigate to the sub folder **msal-client-credential-secret**.
-> 1. Edit **src\main\resources\application.properties** and replace the values of the fields `AUTHORITY`, `CLIENT_ID`, and `SECRET` with the following snippet:
+> 1. Edit *src\main\resources\application.properties* and replace the values of the fields `AUTHORITY`, `CLIENT_ID`, and `SECRET` with the following snippet:
>
> ```
> AUTHORITY=https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/
@@ -99,7 +99,7 @@ To run this sample you will need:
> ```
> Where:
> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
+> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com).
> - `Enter_the_Client_Secret_Here` - replace this value with the client secret created in step 1.
>
> > [!TIP]
@@ -116,10 +116,10 @@ If you try to run the application at this point, you'll receive *HTTP 403 - Forb
##### Global tenant administrator

> [!div renderon="docs"]
-> If you are a global tenant administrator, go to **API Permissions** page in the Azure Portal's Application Registration (Preview) and select **Grant admin consent for {Tenant Name}** (Where {Tenant Name} is the name of your directory).
+> If you are a global tenant administrator, go to the **API Permissions** page in **App registrations** in the Azure portal and select **Grant admin consent for {Tenant Name}** (where {Tenant Name} is the name of your directory).
> [!div renderon="portal" class="sxs-lookup"]
-> If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**
+> If you are a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
> > [!div id="apipermissionspage"]
> > [Go to the API Permissions page]()
@@ -160,7 +160,7 @@ After running, the application should display the list of users in the configure
> [!IMPORTANT]
-> This quickstart application uses a client secret to identify itself as confidential client. Because the client secret is added as a plain-text to your project files, for security reasons, it is recommended that you use a certificate instead of a client secret before considering the application as production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-java-daemon/tree/master/msal-client-credential-certificate) in the same GitHub repository for this sample, but in the second folder **msal-client-credential-certificate**
+> This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons, it is recommended that you use a certificate instead of a client secret before considering the application a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-java-daemon/tree/master/msal-client-credential-certificate) in the same GitHub repository for this sample, in the second folder, **msal-client-credential-certificate**.
## More information
@@ -248,13 +248,13 @@ IAuthenticationResult result;
> |Where:| Description |
> |||
-> | `SCOPE` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure Portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under **Expose an API** section in Azure Portal's Application Registration (Preview). |
+> | `SCOPE` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]

## Next steps
-To learn more about daemon applications, see the scenario landing page
+To learn more about daemon applications, see the scenario landing page.
> [!div class="nextstepaction"]
> [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-python-daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-daemon.md
@@ -21,7 +21,7 @@
In this quickstart, you download and run a code sample that demonstrates how a Python application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.

> [!div renderon="docs"]
-> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-python-daemon/python-console-daemon.svg)
## Prerequisites
@@ -39,7 +39,7 @@ To run this sample, you need:
>
> ### Option 1: Register and auto configure your app and then download your code sample
>
-> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**.
> 1. Follow the instructions to download and automatically configure your new application with just one click.
>
@@ -49,7 +49,7 @@ To run this sample, you need:
> #### Step 1: Register your application
> To register your application and add the app's registration information to your solution manually, follow these steps:
>
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
> 1. Search for and select **Azure Active Directory**.
> 1. Under **Manage**, select **App registrations** > **New registration**.
@@ -62,7 +62,7 @@ To run this sample, you need:
> 1. Under the **User** node, select **User.Read.All**, then select **Add permissions**.

> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure your quickstart app
+> ### Download and configure the quickstart app
>
> #### Step 1: Configure your application in Azure portal
> For the code sample in this quickstart to work, create a client secret and add Graph API's **User.Read.All** application permission.
@@ -72,7 +72,7 @@ To run this sample, you need:
> > [!div id="appconfigured" class="alert alert-info"]
> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-#### Step 2: Download your Python project
+#### Step 2: Download the Python project
> [!div renderon="docs"]
> [Download the Python daemon project](https://github.com/Azure-Samples/ms-identity-python-daemon/archive/master.zip)
@@ -86,10 +86,10 @@ To run this sample, you need:
> [!div renderon="docs"]
-> #### Step 3: Configure your Python project
+> #### Step 3: Configure the Python project
>
> 1. Extract the zip file to a local folder close to the root of the disk, for example, **C:\Azure-Samples**.
-> 1. Navigate to the sub folder **1-Call-MsGraph-WithSecret"**.
+> 1. Navigate to the sub folder **1-Call-MsGraph-WithSecret**.
> 1. Edit **parameters.json** and replace the values of the fields `authority`, `client_id`, and `secret` with the following snippet:
>
> ```json
@@ -116,10 +116,10 @@ If you try to run the application at this point, you'll receive *HTTP 403 - Forb
##### Global tenant administrator

> [!div renderon="docs"]
-> If you are a global tenant administrator, go to **API Permissions** page in the Azure Portal's Application Registration (Preview) and select **Grant admin consent for {Tenant Name}** (Where {Tenant Name} is the name of your directory).
+> If you are a global tenant administrator, go to the **API Permissions** page in **App registrations** in the Azure portal and select **Grant admin consent for {Tenant Name}** (where {Tenant Name} is the name of your directory).
> [!div renderon="portal" class="sxs-lookup"]
-> If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**
+> If you are a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
> > [!div id="apipermissionspage"]
> > [Go to the API Permissions page]()
@@ -142,7 +142,7 @@ https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_i
> [!div renderon="docs"]
> #### Step 5: Run the application
-You'll need to install the dependencies of this sample once
+You'll need to install the dependencies of this sample once.
```console
pip install -r requirements.txt
@@ -157,7 +157,7 @@ python confidential_client_secret_sample.py parameters.json
You should see in the console output a JSON fragment representing a list of users in your Azure AD directory.

> [!IMPORTANT]
-> This quickstart application uses a client secret to identify itself as confidential client. Because the client secret is added as a plain-text to your project files, for security reasons, it is recommended that you use a certificate instead of a client secret before considering the application as production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-python-daemon/blob/master/2-Call-MsGraph-WithCertificate/README.md) in the same GitHub repository for this sample, but in the second folder **2-Call-MsGraph-WithCertificate**
+> This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons, it is recommended that you use a certificate instead of a client secret before considering the application a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-python-daemon/blob/master/2-Call-MsGraph-WithCertificate/README.md) in the same GitHub repository for this sample, in the second folder, **2-Call-MsGraph-WithCertificate**.
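To make the certificate recommendation in that note concrete, here is a minimal sketch of how the msal Python library accepts a certificate in place of a secret. The key file name, thumbprint placeholder, and authority value are assumptions for illustration; the linked sample shows the authoritative version:

```python
import msal

# Sketch only: swap the plain-text secret for a certificate by passing a
# dict with the certificate thumbprint and the PEM-encoded private key.
app = msal.ConfidentialClientApplication(
    "Enter_the_Application_Id_Here",
    authority="https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
    client_credential={
        "thumbprint": "Enter_the_Certificate_Thumbprint_Here",
        "private_key": open("server.pem").read(),  # placeholder key file
    },
)
```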
## More information
@@ -193,7 +193,7 @@ app = msal.ConfidentialClientApplication(
> | `config["client_id"]` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. | > | `config["authority"]` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
-For more information, please see the [reference documentation for `ConfidentialClientApplication`](https://msal-python.readthedocs.io/en/latest/#confidentialclientapplication)
+For more information, please see the [reference documentation for `ConfidentialClientApplication`](https://msal-python.readthedocs.io/en/latest/#confidentialclientapplication).
### Requesting tokens
@@ -210,15 +210,15 @@ if not result:
> |Where:| Description |
> |||
-> | `config["scope"]` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure Portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under **Expose an API** section in Azure Portal's Application Registration (Preview). |
+> | `config["scope"]` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure Portal.|
-For more information, please see the [reference documentation for `AcquireTokenForClient`](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client)
+For more information, please see the [reference documentation for `AcquireTokenForClient`](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client).
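Putting the pieces above together, a minimal end-to-end sketch of the client credentials flow looks like the following. It assumes *parameters.json* also defines `scope` and `endpoint` keys (only `authority`, `client_id`, and `secret` are named in the quickstart excerpt), so treat those two keys as illustrative:

```python
import json
import sys

import msal
import requests

# Read settings the same way the sample is invoked:
#   python confidential_client_secret_sample.py parameters.json
config = json.load(open(sys.argv[1]))

# Build the confidential client once; MSAL keeps an in-memory token cache.
app = msal.ConfidentialClientApplication(
    config["client_id"],
    authority=config["authority"],
    client_credential=config["secret"],
)

# Check the cache first, then fall back to a network token request.
result = app.acquire_token_silent(config["scope"], account=None)
if not result:
    result = app.acquire_token_for_client(scopes=config["scope"])

if "access_token" in result:
    # Call Microsoft Graph (for example, the users list) with the app-only token.
    graph_data = requests.get(
        config["endpoint"],
        headers={"Authorization": "Bearer " + result["access_token"]},
    ).json()
    print(json.dumps(graph_data, indent=2))
else:
    print(result.get("error"), result.get("error_description"))
```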
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]

## Next steps
-To learn more about daemon applications, see the scenario landing page
+To learn more about daemon applications, see the scenario landing page.
> [!div class="nextstepaction"]
> [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-blazor-webassembly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-blazor-webassembly.md
@@ -99,7 +99,7 @@ Next, add the following to your project's *.csproj* file in the netstandard2.1 *
Then modify the code as specified in the next few steps. These changes will add [access tokens](access-tokens.md) to the outgoing requests sent to the Microsoft Graph API. This pattern is discussed in more detail in [ASP.NET Core Blazor WebAssembly additional security scenarios](/aspnet/core/blazor/security/webassembly/additional-scenarios).
-First, create a new file named *GraphAuthorizationMessageHandler.cs* with the following code. This handler will be user to add an access token for the `User.Read` and `Mail.Read` scopes to outgoing requests to the Microsoft Graph API.
+First, create a new file named *GraphAPIAuthorizationMessageHandler.cs* with the following code. This handler will be used to add an access token for the `User.Read` and `Mail.Read` scopes to outgoing requests to the Microsoft Graph API.
```csharp
using Microsoft.AspNetCore.Components;
@@ -243,4 +243,4 @@ After granting consent, navigate to the "Fetch data" page to read some email.
## Next steps

> [!div class="nextstepaction"]
-> [Microsoft identity platform best practices and recommendations](./identity-platform-integration-checklist.md)
+> [Microsoft identity platform best practices and recommendations](./identity-platform-integration-checklist.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/my-apps-deployment-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/my-apps-deployment-plan.md
@@ -16,7 +16,7 @@
# Plan Azure Active Directory My Apps configuration

> [!NOTE]
-> This article is designed for IT professionals who need to plan the configuration of their organization's My Apps portal. For information for the end user about how to use My Apps and collections, see [Sign in and start apps from the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
+> This article is designed for IT professionals who need to plan the configuration of their organization's My Apps portal.
> > **For end user documentation, see [Sign in and start apps from the My Apps portal](../user-help/my-apps-portal-end-user-access.md)**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/condeco-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/condeco-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 01/23/2019 Last updated : 01/19/2021

# Tutorial: Azure Active Directory integration with Condeco
-In this tutorial, you learn how to integrate Condeco with Azure Active Directory (Azure AD).
-Integrating Condeco with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Condeco with Azure Active Directory (Azure AD). When you integrate Condeco with Azure AD, you can:
-* You can control in Azure AD who has access to Condeco.
-* You can enable your users to be automatically signed-in to Condeco (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Condeco.
+* Enable your users to be automatically signed-in to Condeco with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Condeco, you need the following items:
-
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Condeco single sign-on enabled subscription
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Condeco single sign-on (SSO)-enabled subscription.
## Scenario description
@@ -39,64 +35,42 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Condeco supports **Just In Time** user provisioning
-## Adding Condeco from the gallery
+## Add Condeco from the gallery
To configure the integration of Condeco into Azure AD, you need to add Condeco from the gallery to your list of managed SaaS apps.
-**To add Condeco from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Condeco**, select **Condeco** from result panel then click **Add** button to add the application.
-
- ![Condeco in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Condeco** in the search box.
+1. Select **Condeco** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Condeco based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Condeco needs to be established.
+## Configure and test Azure AD SSO for Condeco
-To configure and test Azure AD single sign-on with Condeco, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Condeco using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Condeco.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Condeco Single Sign-On](#configure-condeco-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Condeco test user](#create-condeco-test-user)** - to have a counterpart of Britta Simon in Condeco that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Condeco, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Condeco SSO](#configure-condeco-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Condeco test user](#create-condeco-test-user)** - to have a counterpart of B.Simon in Condeco that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+### Configure Azure AD SSO
-To configure Azure AD single sign-on with Condeco, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Condeco** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Condeco** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Condeco Domain and URLs single sign-on information](common/sp-signonurl.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<companyname>.condecosoftware.com`
@@ -111,86 +85,49 @@ To configure Azure AD single sign-on with Condeco, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Condeco Single Sign-On
-
-To configure single sign-on on **Condeco** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Condeco support team](mailto:supportna@condecosoftware.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
+In this section, you'll create a test user in the Azure portal called B.Simon.
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Condeco.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Condeco**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Condeco.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Condeco**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, select **Condeco**.
+### Configure Condeco SSO
- ![The Condeco link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Condeco** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Condeco support team](mailto:supportna@condecosoftware.com). They use these values to set up the SAML SSO connection properly on both sides.
### Create Condeco test user
-The objective of this section is to create a user called Britta Simon in Condeco. Condeco supports **just-in-time provisioning**, which is by default enabled.
-
-There is no action item for you in this section. A new user is created during an attempt to access Condeco if it doesn't exist yet.
-
->[!NOTE]
->If you need to create a user manually, you need to contact the [Condeco support team](mailTo:supportna@condecosoftware.com).
+In this section, a user called B.Simon is created in Condeco. Condeco supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Condeco, a new one is created after authentication.
-### Test single sign-on
+### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Condeco tile in the Access Panel, you should be automatically signed in to the Condeco for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the Condeco sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Condeco sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Condeco tile in My Apps, you should be automatically signed in to the Condeco instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Condeco, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/enablon-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/enablon-tutorial.md
@@ -0,0 +1,137 @@
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Enablon | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Enablon.
+Last updated : 02/05/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Enablon
+
+In this tutorial, you'll learn how to integrate Enablon with Azure Active Directory (Azure AD). When you integrate Enablon with Azure AD, you can:
+
+* Control in Azure AD who has access to Enablon.
+* Enable your users to be automatically signed-in to Enablon with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Enablon single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Enablon supports **SP** initiated SSO
+
+## Adding Enablon from the gallery
+
+To configure the integration of Enablon into Azure AD, you need to add Enablon from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Enablon** in the search box.
+1. Select **Enablon** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Enablon
+
+Configure and test Azure AD SSO with Enablon using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Enablon.
+
+To configure and test Azure AD SSO with Enablon, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Enablon SSO](#configure-enablon-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Enablon test user](#create-enablon-test-user)** - to have a counterpart of B.Simon in Enablon that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Enablon** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://www.enablon.com/<SITEID>/`
+
+ b. In the **Identifier** box, type a URL using the following pattern:
+ `http://<SUBDOMAIN>.enablon.com/adfs/services/trust`
+
+ c. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.enablon.com/adfs/ls/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [Enablon Client support team](mailto:ena-dl-ww.it.services@enablon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Enablon.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Enablon**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Enablon SSO
+
+To configure single sign-on on the **Enablon** side, you need to send the **App Federation Metadata Url** to the [Enablon support team](mailto:ena-dl-ww.it.services@enablon.com). They use this value to set up the SAML SSO connection properly on both sides.
+
+### Create Enablon test user
+
+In this section, you create a user called B.Simon in Enablon. Work with the [Enablon support team](mailto:ena-dl-ww.it.services@enablon.com) to add the users in the Enablon platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Enablon sign-on URL, where you can initiate the login flow.
+
+* Go to the Enablon sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Enablon tile in My Apps, you are redirected to the Enablon sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Enablon, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zoho-mail-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zoho-mail-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 12/26/2018 Last updated : 01/19/2021

# Tutorial: Azure Active Directory integration with Zoho
-In this tutorial, you learn how to integrate Zoho with Azure Active Directory (Azure AD).
-Integrating Zoho with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Zoho with Azure Active Directory (Azure AD). When you integrate Zoho with Azure AD, you can:
-* You can control in Azure AD who has access to Zoho.
-* You can enable your users to be automatically signed-in to Zoho (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Zoho.
+* Enable your users to be automatically signed-in to Zoho with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Zoho, you need the following items:
+To configure Azure AD integration with Zoho One, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Zoho single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Zoho single sign-on enabled subscription.
## Scenario description
@@ -37,71 +33,49 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Zoho supports **SP** initiated SSO
-## Adding Zoho from the gallery
+## Add Zoho from the gallery
To configure the integration of Zoho into Azure AD, you need to add Zoho from the gallery to your list of managed SaaS apps.
-**To add Zoho from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Zoho**, select **Zoho** from result panel then click **Add** button to add the application.
-
- ![Zoho in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Zoho based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Zoho needs to be established.
-
-To configure and test Azure AD single sign-on with Zoho, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zoho** in the search box.
+1. Select **Zoho** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Zoho Single Sign-On](#configure-zoho-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Zoho test user](#create-zoho-test-user)** - to have a counterpart of Britta Simon in Zoho that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for Zoho
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with Zoho using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zoho.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with Zoho, perform the following steps:
-To configure Azure AD single sign-on with Zoho, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zoho SSO](#configure-zoho-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Zoho test user](#create-zoho-test-user)** - to have a counterpart of B.Simon in Zoho that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zoho** application integration page, select **Single sign-on**.
+### Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **Zoho** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Zoho Domain and URLs single sign-on information](common/sp-signonurl.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<company name>.zohomail.com`

> [!NOTE]
> The value is not real. Update the value with the actual Sign-On URL. Contact the [Zoho Client support team](https://www.zoho.com/mail/contact.html) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
![The Certificate download link](common/certificatebase64.png)
@@ -109,27 +83,45 @@ To configure Azure AD single sign-on with Zoho, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
- b. Azure Ad Identifier
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- c. Logout URL
+### Assign the Azure AD test user
-### Configure Zoho Single Sign-On
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zoho.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zoho**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+### Configure Zoho SSO
1. In a different web browser window, log into your Zoho Mail company site as an administrator. 2. Go to the **Control panel**.
- ![Control Panel](./media/zoho-mail-tutorial/ic789607.png "Control Panel")
+ ![Control Panel](./media/zoho-mail-tutorial/control-panel.png "Control Panel")
3. Click the **SAML Authentication** tab.
- ![SAML Authentication](./media/zoho-mail-tutorial/ic789608.png "SAML Authentication")
+ ![SAML Authentication](./media/zoho-mail-tutorial/saml-authentication.png "SAML Authentication")
4. In the **SAML Authentication Details** section, perform the following steps:
- ![SAML Authentication Details](./media/zoho-mail-tutorial/ic789609.png "SAML Authentication Details")
+ ![SAML Authentication Details](./media/zoho-mail-tutorial/details.png "SAML Authentication Details")
 a. In the **Login URL** textbox, paste the **Login URL** which you have copied from the Azure portal.
@@ -143,57 +135,6 @@ To configure Azure AD single sign-on with Zoho, perform the following steps:
f. Click **OK**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Zoho.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zoho**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, type and select **Zoho**.
-
- ![The Zoho link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
- ### Create Zoho test user In order to enable Azure AD users to log into Zoho Mail, they must be provisioned into Zoho Mail. In the case of Zoho Mail, provisioning is a manual task.
@@ -209,17 +150,17 @@ In order to enable Azure AD users to log into Zoho Mail, they must be provisione
1. Go to **User Details \> Add User**.
- ![Screenshot shows the Zoho Mail site with User Details and Add User selected.](./media/zoho-mail-tutorial/ic789611.png "Add User")
+ ![Screenshot shows the Zoho Mail site with User Details and Add User selected.](./media/zoho-mail-tutorial/add-user-1.png "Add User")
1. On the **Add users** dialog, perform the following steps:
- ![Screenshot shows the Add users dialog box where you can enter the values described.](./media/zoho-mail-tutorial/ic789612.png "Add User")
+ ![Screenshot shows the Add users dialog box where you can enter the values described.](./media/zoho-mail-tutorial/add-user-2.png "Add User")
 a. In the **First Name** textbox, type the first name of the user, like **Britta**. b. In the **Last Name** textbox, type the last name of the user, like **Simon**.
- c. In the **Email ID** textbox, type the email id of user like **brittasimon\@contoso.com**.
+ c. In the **Email ID** textbox, type the email ID of the user, like **brittasimon\@contoso.com**.
 d. In the **Password** textbox, enter the password of the user.
@@ -228,16 +169,16 @@ In order to enable Azure AD users to log into Zoho Mail, they must be provisione
> [!NOTE] > The Azure Active Directory account holder will receive an email with a link to confirm the account before it becomes active.
-### Test single sign-on
+### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Zoho tile in the Access Panel, you should be automatically signed in to the Zoho for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the Zoho Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Zoho Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Zoho tile in My Apps, you should be automatically signed in to the Zoho instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Zoho, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zoho-one-china-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zoho-one-china-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 03/26/2020 Last updated : 01/20/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Zoho One China with Azure Active
* Enable your users to be automatically signed-in to Zoho One China with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -38,24 +36,23 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Zoho One China supports **SP and IDP** initiated SSO
-* Once you configure Zoho One China you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
-## Adding Zoho One China from the gallery
+## Add Zoho One China from the gallery
To configure the integration of Zoho One China into Azure AD, you need to add Zoho One China from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**. 1. In the **Add from the gallery** section, type **Zoho One China** in the search box. 1. Select **Zoho One China** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Zoho One China
+## Configure and test Azure AD SSO for Zoho One China
Configure and test Azure AD SSO with Zoho One China using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zoho One China.
-To configure and test Azure AD SSO with Zoho One China, complete the following building blocks:
+To configure and test Azure AD SSO with Zoho One China, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -68,9 +65,9 @@ To configure and test Azure AD SSO with Zoho One China, complete the following b
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zoho One China** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Zoho One China** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -94,6 +91,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Zoho One China** section, copy the appropriate URL(s) based on your requirement. ![Copy configuration URLs](common/copy-configuration-urls.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
@@ -113,15 +111,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Zoho One China**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Zoho One China SSO
@@ -134,20 +126,20 @@ In this section, you create a user called Britta Simon in Zoho One China. Work w
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Zoho One China tile in the Access Panel, you should be automatically signed in to the Zoho One China for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to the Zoho One China Sign-on URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the Zoho One China Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Zoho One China instance for which you set up SSO.
-- [Try Zoho One China with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Zoho One China tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Zoho One China instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Zoho One China with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+ Once you configure Zoho One China, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zohoone-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zohoone-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 04/16/2019 Last updated : 01/20/2021 # Tutorial: Azure Active Directory integration with Zoho One
-In this tutorial, you learn how to integrate Zoho One with Azure Active Directory (Azure AD).
-Integrating Zoho One with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Zoho One with Azure Active Directory (Azure AD). When you integrate Zoho One with Azure AD, you can:
-* You can control in Azure AD who has access to Zoho One.
-* You can enable your users to be automatically signed-in to Zoho One (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Zoho One.
+* Enable your users to be automatically signed-in to Zoho One with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Zoho One, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Zoho One single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Zoho One single sign-on enabled subscription.
## Scenario description
@@ -37,65 +33,46 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Zoho One supports **SP** and **IDP** initiated SSO
-## Adding Zoho One from the gallery
-
-To configure the integration of Zoho One into Azure AD, you need to add Zoho One from the gallery to your list of managed SaaS apps.
-
-**To add Zoho One from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Zoho One**, select **Zoho One** from result panel then click **Add** button to add the application.
-
- ![Zoho One in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-In this section, you configure and test Azure AD single sign-on with Zoho One based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Zoho One needs to be established.
+## Add Zoho One from the gallery
-To configure and test Azure AD single sign-on with Zoho One, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Zoho One Single Sign-On](#configure-zoho-one-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Zoho One test user](#create-zoho-one-test-user)** - to have a counterpart of Britta Simon in Zoho One that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Zoho One into Azure AD, you need to add Zoho One from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zoho One** in the search box.
+1. Select **Zoho One** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Zoho One
-To configure Azure AD single sign-on with Zoho One, perform the following steps:
+Configure and test Azure AD SSO with Zoho One using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zoho One.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zoho One** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Zoho One, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zoho One SSO](#configure-zoho-one-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Zoho One test user](#create-zoho-one-test-user)** - to have a counterpart of B.Simon in Zoho One that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+### Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Zoho One** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-relay.png)
-
- a. In the **Identifier** text box, type a URL:
+ a. In the **Identifier** text box, type the URL:
`one.zoho.com` b. In the **Reply URL** text box, type a URL using the following pattern:
@@ -106,13 +83,11 @@ To configure Azure AD single sign-on with Zoho One, perform the following steps:
c. Click **Set additional URLs**.
- d. In the **Relay State** text box, type a URL:
+ d. In the **Relay State** text box, type the URL:
`https://one.zoho.com` 5. If you wish to configure the application in **SP** initiated mode, perform the following step:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/both-signonurl.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://accounts.zoho.com/samlauthrequest/<domain_name>?serviceurl=https://one.zoho.com`
@@ -127,23 +102,41 @@ To configure Azure AD single sign-on with Zoho One, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
- b. Azure AD Identifier
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- c. Logout URL
+### Assign the Azure AD test user
-### Configure Zoho One Single Sign-On
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zoho One.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zoho One**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+### Configure Zoho One SSO
1. In a different web browser window, sign in to your Zoho One company site as an administrator. 2. On the **Organization** tab, click **Setup** under **SAML Authentication**.
- ![Zoho One org](./media/zohoone-tutorial/tutorial_zohoone_setup.png)
+ ![Zoho One org](./media/zoho-one-tutorial/set-up.png)
3. On the pop-up page, perform the following steps:
- ![Zoho One sig](./media/zohoone-tutorial/tutorial_zohoone_save.png)
+ ![Zoho One sig](./media/zoho-one-tutorial/save.png)
 a. In the **Sign-in URL** textbox, paste the value of **Login URL**, which you have copied from the Azure portal.
@@ -155,15 +148,15 @@ To configure Azure AD single sign-on with Zoho One, perform the following steps:
4. After saving the SAML Authentication setup, copy the **SAML-Identifier** value and insert it into the **Reply URL** in place of `<saml-identifier>`, for example `https://accounts.zoho.com/samlresponse/one.zoho.com`, and paste the generated value in the **Reply URL** textbox under the **Basic SAML Configuration** section.
- ![Zoho One saml](./media/zohoone-tutorial/tutorial_zohoone_samlidenti.png)
+ ![Zoho One saml](./media/zoho-one-tutorial/saml-identifier.png)
5. Go to the **Domains** tab and then click **Add Domain**.
- ![Zoho One domain](./media/zohoone-tutorial/tutorial_zohoone_domain.png)
+ ![Zoho One domain](./media/zoho-one-tutorial/add-domain.png)
6. On the **Add Domain** page, perform the following steps:
- ![Zoho One add domain](./media/zohoone-tutorial/tutorial_zohoone_adddomain.png)
+ ![Zoho One add domain](./media/zoho-one-tutorial/add-domain-name.png)
 a. In the **Domain Name** textbox, type a domain like **contoso.com**.
@@ -172,56 +165,6 @@ To configure Azure AD single sign-on with Zoho One, perform the following steps:
>[!Note] >After adding the domain, follow [these](https://www.zoho.com/one/help/admin-guide/domain-verification.html) steps to verify your domain. Once the domain is verified, use your domain name in the **Sign-on URL** in the **Basic SAML Configuration** section in the Azure portal.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Zoho One.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zoho One**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Zoho One**.
-
- ![The Zoho One link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
- ### Create Zoho One test user To enable Azure AD users to sign in to Zoho One, they must be provisioned into Zoho One. In Zoho One, provisioning is a manual task.
@@ -232,11 +175,11 @@ To enable Azure AD users to sign in to Zoho One, they must be provisioned into Z
2. On the **Users** tab, click on the **user logo**.
- ![Zoho One user](./media/zohoone-tutorial/tutorial_zohoone_users.png)
+ ![Zoho One user](./media/zoho-one-tutorial/user.png)
3. On the **Add User** page, perform the following steps:
- ![Zoho One add user](./media/zohoone-tutorial/tutorial_zohoone_adduser.png)
+ ![Zoho One add user](./media/zoho-one-tutorial/add-user.png)
 a. In the **Name** text box, enter the name of the user, like **Britta Simon**.
@@ -247,16 +190,22 @@ To enable Azure AD users to sign in to Zoho One, they must be provisioned into Z
c. Click **Add**.
-### Test single sign-on
+### Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Zoho One Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Zoho One Sign-on URL directly and initiate the login flow from there.
-When you click the Zoho One tile in the Access Panel, you should be automatically signed in to the Zoho One for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Zoho One instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Zoho One tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Zoho One instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Zoho One, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
aks https://docs.microsoft.com/en-us/azure/aks/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/faq.md
@@ -128,7 +128,9 @@ Windows Server support for node pool includes some limitations that are part of
## Does AKS offer a service-level agreement?
-AKS provides SLA guarantees as an optional add-on feature with [Uptime SLA][uptime-sla].
+AKS provides SLA guarantees as an optional add-on feature with [Uptime SLA][uptime-sla].
+
+The Free SLA offered by default doesn't guarantee a highly available API server endpoint (the Service Level Objective is 99.5%). Transient connectivity issues may be observed during upgrades, on unhealthy underlay nodes, or during platform maintenance. If your workload doesn't tolerate API server restarts, we suggest using the Uptime SLA.
## Can I apply Azure reservation discounts to my AKS agent nodes?
analysis-services https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-datasource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-datasource.md
@@ -4,7 +4,7 @@ description: Describes data sources and connectors supported for tabular 1200 an
Previously updated : 02/03/2021 Last updated : 02/08/2021
@@ -113,13 +113,6 @@ For cloud data sources:
* If using SQL authentication, impersonation should be Service Account.
-## Service Principal authentication
-
-When specified as a *provider* data source, Azure Analysis Services supports [MSOLEDBSQL](/sql/connect/oledb/release-notes-for-oledb-driver-for-sql-server) Azure Active Directory service principal authentication for Azure SQL Database and Azure Synapse data sources.
-
-`
-Provider=MSOLEDBSQL;Data Source=[server];Initial Catalog=[database];Authentication=ActiveDirectoryServicePrincipal;User ID=[Application (client) ID];Password=[Application (client) secret];Use Encryption for Data=true
-`
## OAuth credentials
app-service https://docs.microsoft.com/en-us/azure/app-service/troubleshoot-diagnostic-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-diagnostic-logs.md
@@ -186,10 +186,11 @@ The following table shows the supported log types and descriptions:
| AppServiceHTTPLogs | Yes | Yes | Yes | Yes | Web server logs | | AppServiceEnvironmentPlatformLogs | Yes | N/A | Yes | Yes | App Service Environment: scaling, configuration changes, and status logs| | AppServiceAuditLogs | Yes | Yes | Yes | Yes | Login activity via FTP and Kudu |
-| AppServiceFileAuditLogs | Yes | Yes | TBA | TBA | File changes made to the site content; only available for Premium tier and above |
+| AppServiceFileAuditLogs | Yes | Yes | TBA | TBA | File changes made to the site content; **only available for Premium tier and above** |
| AppServiceAppLogs | ASP .NET | ASP .NET | Java SE & Tomcat Blessed Images <sup>1</sup> | Java SE & Tomcat Blessed Images <sup>1</sup> | Application logs | | AppServiceIPSecAuditLogs | Yes | Yes | Yes | Yes | Requests from IP Rules | | AppServicePlatformLogs | TBA | Yes | Yes | Yes | Container operation logs |
+| AppServiceAntivirusScanAuditLogs | Yes | Yes | Yes | Yes | [Anti-virus scan logs](https://azure.github.io/AppService/2020/12/09/AzMon-AppServiceAntivirusScanAuditLogs.html) using Microsoft Defender; **only available for Premium tier** |
<sup>1</sup> For Java SE apps, add "$WEBSITE_AZMON_PREVIEW_ENABLED" to the app settings and set it to 1 or to true.
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/enable-dynamic-configuration-aspnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md
@@ -188,7 +188,7 @@ A *sentinel key* is a special key used to signal when configuration has changed.
``` > [!Tip]
- > To learn more about the options pattern when reading configuration values, see [Options Patterns in ASP.NET Core](/aspnet/core/fundamentals/configuration/options?view=aspnetcore-3.1).
+ > To learn more about the options pattern when reading configuration values, see [Options Patterns in ASP.NET Core](/aspnet/core/fundamentals/configuration/options).
4. Update the `Configure` method, adding the `UseAzureAppConfiguration` middleware to allow the configuration settings registered for refresh to be updated while the ASP.NET Core web app continues to receive requests.
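 For reference, a minimal sketch of such a `Configure` method in *Startup.cs* (assuming `services.AddAzureAppConfiguration()` was called in `ConfigureServices`; the routing and endpoint lines are illustrative of a typical Razor Pages app, not prescribed by this step):

 ```csharp
 public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
 {
     // Triggers refresh for the registered settings as requests come in;
     // actual calls to the store are rate-limited by the cache expiration.
     app.UseAzureAppConfiguration();

     app.UseRouting();
     app.UseEndpoints(endpoints => endpoints.MapRazorPages());
 }
 ```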
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/enable-dynamic-configuration-dotnet-core-push-refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core-push-refresh.md
@@ -0,0 +1,233 @@
+
+ Title: "Tutorial: Use dynamic configuration using push refresh in a .NET Core app"
+
+description: In this tutorial, you learn how to dynamically update the configuration data for .NET Core apps using push refresh
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ms.devlang: csharp
+ Last updated : 07/25/2020++
+#Customer intent: I want to use push refresh to dynamically update my app to use the latest configuration data in App Configuration.
+
+# Tutorial: Use dynamic configuration using push refresh in a .NET Core app
+
+The App Configuration .NET Core client library supports updating configuration on demand without causing an application to restart. An application can be configured to detect changes in App Configuration using one or both of the following approaches.
+
+1. Poll Model: This is the default behavior that uses polling to detect changes in configuration. Once the cached value of a setting expires, the next call to `TryRefreshAsync` or `RefreshAsync` sends a request to the server to check if the configuration has changed, and pulls the updated configuration if needed.
+
+1. Push Model: This uses [App Configuration events](./concept-app-configuration-event.md) to detect changes in configuration. Once App Configuration is set up to send key-value change events to Azure Event Grid, the application can use these events to optimize the total number of requests needed to keep the configuration updated. Applications can choose to subscribe to these either directly from Event Grid, or through one of the [supported event handlers](https://docs.microsoft.com/azure/event-grid/event-handlers) such as a webhook, an Azure function, or a Service Bus topic.
+
+Applications can choose to subscribe to these events directly from Event Grid, through a webhook, or by forwarding events to Azure Service Bus. The Azure Service Bus SDK provides an API to register a message handler that simplifies this process for applications that either do not have an HTTP endpoint or do not wish to poll Event Grid for changes continuously.
+
+This tutorial shows how you can implement dynamic configuration updates in your code using push refresh. It builds on the app introduced in the quickstarts. Before you continue, finish [Create a .NET Core app with App Configuration](./quickstart-dotnet-core-app.md) first.
+
+You can use any code editor to do the steps in this tutorial. [Visual Studio Code](https://code.visualstudio.com/) is an excellent option that's available on the Windows, macOS, and Linux platforms.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Set up a subscription to send configuration change events from App Configuration to a Service Bus topic
+> * Set up your .NET Core app to update its configuration in response to changes in App Configuration.
+> * Consume the latest configuration in your application.
+
+## Prerequisites
+
+To do this tutorial, install the [.NET Core SDK](https://dotnet.microsoft.com/download).
++
+## Set up Azure Service Bus topic and subscription
+
+This tutorial uses the Service Bus integration for Event Grid to simplify the detection of configuration changes for applications that do not wish to poll App Configuration for changes continuously. The Azure Service Bus SDK provides an API to register a message handler that can be used to update configuration when changes are detected in App Configuration. Follow the steps in [Quickstart: Use the Azure portal to create a Service Bus topic and subscription](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) to create a Service Bus namespace, topic, and subscription.
+
+Once the resources are created, add the following environment variables. These will be used to register an event handler for configuration changes in the application code.
+
+| Key | Value |
+|||
+| ServiceBusConnectionString | Connection string for the Service Bus namespace |
+| ServiceBusTopic | Name of the Service Bus topic |
+| ServiceBusSubscription | Name of the Service Bus subscription |
+
+## Set up Event subscription
+
+1. Open the App Configuration resource in the Azure portal, then click on `+ Event Subscription` in the `Events` pane.
+
+ ![App Configuration Events](./media/events-pane.png)
+
+1. Enter a name for the `Event Subscription` and the `System Topic`.
+
+ ![Create event subscription](./media/create-event-subscription.png)
+
+1. Select the `Endpoint Type` as `Service Bus Topic`, select the Service Bus topic, then click on `Confirm Selection`.
+
+ ![Event subscription service bus endpoint](./media/event-subscription-servicebus-endpoint.png)
+
+1. Click on `Create` to create the event subscription.
+
+1. Click on `Event Subscriptions` in the `Events` pane to validate that the subscription was created successfully.
+
+ ![App Configuration event subscriptions](./media/event-subscription-view.png)
+
+> [!NOTE]
+> When subscribing for configuration changes, one or more filters can be used to reduce the number of events sent to your application. These can be configured either as [Event Grid subscription filters](https://docs.microsoft.com/azure/event-grid/event-filtering) or [Service Bus subscription filters](https://docs.microsoft.com/azure/service-bus-messaging/topic-filters). For example, a subscription filter can be used to only subscribe to events for changes in a key that starts with a specific string.
+
+## Register event handler to reload data from App Configuration
+
+Open *Program.cs* and update the file with the following code.
+
+```csharp
+using Microsoft.Azure.ServiceBus;
+using Microsoft.Extensions.Configuration;
+using Microsoft.Extensions.Configuration.AzureAppConfiguration;
+using System;
+using System.Diagnostics;
+using System.Text;
+using System.Text.Json;
+using System.Threading.Tasks;
+
+namespace TestConsole
+{
+ class Program
+ {
+ private const string AppConfigurationConnectionStringEnvVarName = "AppConfigurationConnectionString"; // e.g. Endpoint=https://{store_name}.azconfig.io;Id={id};Secret={secret}
+ private const string ServiceBusConnectionStringEnvVarName = "ServiceBusConnectionString"; // e.g. Endpoint=sb://{service_bus_name}.servicebus.windows.net/;SharedAccessKeyName={key_name};SharedAccessKey={key}
+ private const string ServiceBusTopicEnvVarName = "ServiceBusTopic";
+ private const string ServiceBusSubscriptionEnvVarName = "ServiceBusSubscription";
+
+ private static IConfigurationRefresher _refresher = null;
+
+ static async Task Main(string[] args)
+ {
+ string appConfigurationConnectionString = Environment.GetEnvironmentVariable(AppConfigurationConnectionStringEnvVarName);
+
+ IConfiguration configuration = new ConfigurationBuilder()
+ .AddAzureAppConfiguration(options =>
+ {
+ options.Connect(appConfigurationConnectionString);
+ options.ConfigureRefresh(refresh =>
+ refresh
+ .Register("TestApp:Settings:Message")
+ .SetCacheExpiration(TimeSpan.FromDays(30)) // Important: Reduce poll frequency
+ );
+
+ _refresher = options.GetRefresher();
+ }).Build();
+
+ RegisterRefreshEventHandler();
+ var message = configuration["TestApp:Settings:Message"];
+ Console.WriteLine($"Initial value: {configuration["TestApp:Settings:Message"]}");
+
+ while (true)
+ {
+ await _refresher.TryRefreshAsync();
+
+ if (configuration["TestApp:Settings:Message"] != message)
+ {
+ Console.WriteLine($"New value: {configuration["TestApp:Settings:Message"]}");
+ message = configuration["TestApp:Settings:Message"];
+ }
+
+ await Task.Delay(TimeSpan.FromSeconds(1));
+ }
+ }
+
+ private static void RegisterRefreshEventHandler()
+ {
+ string serviceBusConnectionString = Environment.GetEnvironmentVariable(ServiceBusConnectionStringEnvVarName);
+ string serviceBusTopic = Environment.GetEnvironmentVariable(ServiceBusTopicEnvVarName);
+ string serviceBusSubscription = Environment.GetEnvironmentVariable(ServiceBusSubscriptionEnvVarName);
+ SubscriptionClient serviceBusClient = new SubscriptionClient(serviceBusConnectionString, serviceBusTopic, serviceBusSubscription);
+
+ serviceBusClient.RegisterMessageHandler(
+ handler: (message, cancellationToken) =>
+ {
+ string messageText = Encoding.UTF8.GetString(message.Body);
+ JsonElement messageData = JsonDocument.Parse(messageText).RootElement.GetProperty("data");
+ string key = messageData.GetProperty("key").GetString();
+ Console.WriteLine($"Event received for Key = {key}");
+
+ _refresher.SetDirty();
+ return Task.CompletedTask;
+ },
+ exceptionReceivedHandler: (exceptionargs) =>
+ {
+ Console.WriteLine($"{exceptionargs.Exception}");
+ return Task.CompletedTask;
+ });
+ }
+ }
+}
+```
+
+The [SetDirty](https://docs.microsoft.com/dotnet/api/microsoft.extensions.configuration.azureappconfiguration.iconfigurationrefresher.setdirty) method marks the cached values of the key-values registered for refresh as dirty. This ensures that the next call to `RefreshAsync` or `TryRefreshAsync` re-validates the cached values with App Configuration and updates them if needed.
+
+A random delay is added before the cached value is marked as dirty to reduce potential throttling in case multiple instances refresh at the same time. The default maximum delay before the cached value is marked as dirty is 30 seconds, but can be overridden by passing an optional `TimeSpan` parameter to the `SetDirty` method.
+
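+As a minimal sketch, overriding that default delay (assuming `_refresher` is the `IConfigurationRefresher` instance from the sample above) might look like this:
+
+```csharp
+// Mark the cached values dirty, waiting at most 10 seconds (instead of the
+// default 30) so that concurrent instances spread out their refreshes.
+_refresher.SetDirty(TimeSpan.FromSeconds(10));
+```
+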
+> [!NOTE]
+> To reduce the number of requests to App Configuration when using push refresh, it is important to call `SetCacheExpiration(TimeSpan cacheExpiration)` with an appropriate value of `cacheExpiration` parameter. This controls the cache expiration time for pull refresh and can be used as a safety net in case there is an issue with the Event subscription or the Service Bus subscription. The recommended value is `TimeSpan.FromDays(30)`.
+
+## Build and run the app locally
+
+1. Set an environment variable named **AppConfigurationConnectionString**, and set it to the connection string of your App Configuration store. If you use the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
+
+ ```console
+ setx AppConfigurationConnectionString "connection-string-of-your-app-configuration-store"
+ ```
+
+ If you use Windows PowerShell, run the following command:
+
+ ```powershell
+ $Env:AppConfigurationConnectionString = "connection-string-of-your-app-configuration-store"
+ ```
+
+ If you use macOS or Linux, run the following command:
+
+ ```console
+ export AppConfigurationConnectionString='connection-string-of-your-app-configuration-store'
+ ```
+
+1. Run the following command to build the console app:
+
+ ```console
+ dotnet build
+ ```
+
+1. After the build successfully completes, run the following command to run the app locally:
+
+ ```console
+ dotnet run
+ ```
+
+ ![Push refresh run before update](./media/dotnet-core-app-pushrefresh-initial.png)
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store instance that you created in the quickstart.
+
+1. Select **Configuration Explorer**, and update the values of the following keys:
+
+ | Key | Value |
+ |||
+ | TestApp:Settings:Message | Data from Azure App Configuration - Updated |
+
+1. Wait for 30 seconds to allow the event to be processed and configuration to be updated.
+
+ ![Push refresh run after updated](./media/dotnet-core-app-pushrefresh-final.png)
+
+## Clean up resources
++
+## Next steps
+
+In this tutorial, you enabled your .NET Core app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline access to App Configuration, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/enable-dynamic-configuration-dotnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
@@ -21,7 +21,7 @@
# Tutorial: Use dynamic configuration in a .NET Core app
-The App Configuration .NET Core client library supports updating a set of configuration settings on demand without causing an application to restart. This can be implemented by first getting an instance of `IConfigurationRefresher` from the options for the configuration provider and then calling `TryRefreshAsync` on that instance anywhere in your code.
+The App Configuration .NET Core client library supports updating configuration on demand without causing an application to restart. This can be implemented by first getting an instance of `IConfigurationRefresher` from the options for the configuration provider and then calling `TryRefreshAsync` on that instance anywhere in your code.
In order to keep the settings updated and avoid too many calls to the configuration store, a cache is used for each setting. Until the cached value of a setting has expired, the refresh operation does not update the value, even when the value has changed in the configuration store. The default expiration time for each request is 30 seconds, but it can be overridden if required.
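As a rough sketch (the key name and the environment variable are illustrative, not prescribed by this tutorial), registering a setting for refresh and triggering the refresh from application code might look like this:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureAppConfiguration;

class PollRefreshSketch
{
    static async Task Main()
    {
        IConfigurationRefresher refresher = null;

        IConfiguration configuration = new ConfigurationBuilder()
            .AddAzureAppConfiguration(options =>
            {
                // Connection string read from an environment variable (illustrative name).
                options.Connect(Environment.GetEnvironmentVariable("AppConfigurationConnectionString"));
                options.ConfigureRefresh(refresh =>
                    // Re-check this key at most once per minute.
                    refresh.Register("TestApp:Settings:Message")
                           .SetCacheExpiration(TimeSpan.FromMinutes(1)));

                refresher = options.GetRefresher();
            })
            .Build();

        while (true)
        {
            // A no-op until the cached value expires; afterwards it checks the
            // store and updates the configuration if the value has changed.
            await refresher.TryRefreshAsync();

            Console.WriteLine(configuration["TestApp:Settings:Message"]);
            await Task.Delay(TimeSpan.FromSeconds(5));
        }
    }
}
```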
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/integrate-ci-cd-pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/integrate-ci-cd-pipeline.md
@@ -32,9 +32,9 @@ You can use any code editor to do the steps in this tutorial. [Visual Studio Cod
### Prerequisites
-If you build locally, download and install the [Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest) if you haven't already.
+If you build locally, download and install the [Azure CLI](/cli/azure/install-azure-cli) if you haven't already.
-To do a cloud build, with Azure DevOps for example, make sure the [Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest) is installed in your build system.
+To do a cloud build, with Azure DevOps for example, make sure the [Azure CLI](/cli/azure/install-azure-cli) is installed in your build system.
### Export an App Configuration store
@@ -104,4 +104,4 @@ To do a cloud build, with Azure DevOps for example, make sure the [Azure CLI](/c
In this tutorial, you exported Azure App Configuration data to be used in a deployment pipeline. To learn more about how to use App Configuration, continue to the Azure CLI samples. > [!div class="nextstepaction"]
-> [Azure CLI](/cli/azure/appconfig?view=azure-cli-latest)
+> [Azure CLI](/cli/azure/appconfig)
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/integrate-kubernetes-deployment-helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/integrate-kubernetes-deployment-helm.md
@@ -31,7 +31,7 @@ Learn more about installing applications with Helm in [Azure Kubernetes Service]
## Prerequisites - [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- Install [Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest) (version 2.4.0 or later)
+- Install [Azure CLI](/cli/azure/install-azure-cli) (version 2.4.0 or later)
- Install [Helm](https://helm.sh/docs/intro/install/) (version 2.14.0 or later) - A Kubernetes cluster.
@@ -183,7 +183,7 @@ settings:
First, download the configuration from App Configuration to a *myConfig.yaml* file. Use a key filter to only download those keys that start with **settings.**. If in your case the key filter is not sufficient to exclude keys of Key Vault references, you may use the argument **--skip-keyvault** to exclude them. > [!TIP]
-> Learn more about the [export command](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export).
+> Learn more about the [export command](/cli/azure/appconfig/kv#az-appconfig-kv-export).
```azurecli-interactive az appconfig kv export -n myAppConfiguration -d file --path myConfig.yaml --key "settings.*" --separator "." --format yaml
@@ -240,4 +240,4 @@ One secret, **password**, stores as Key Vault reference in App Configuration was
In this tutorial, you exported Azure App Configuration data to be used in a Kubernetes deployment with Helm. To learn more about how to use App Configuration, continue to the Azure CLI samples. > [!div class="nextstepaction"]
-> [Azure CLI](/cli/azure/appconfig?view=azure-cli-latest)
+> [Azure CLI](/cli/azure/appconfig)
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/overview-managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/overview-managed-identity.md
@@ -95,7 +95,7 @@ The following steps will walk you through creating a user-assigned identity and
## Removing an identity
-A system-assigned identity can be removed by disabling the feature by using the [az appconfig identity remove](/cli/azure/appconfig/identity?view=azure-cli-latest#az-appconfig-identity-remove) command in the Azure CLI. User-assigned identities can be removed individually. Removing a system-assigned identity in this way will also delete it from AAD. System-assigned identities are also automatically removed from AAD when the app resource is deleted.
+A system-assigned identity can be removed by disabling the feature by using the [az appconfig identity remove](/cli/azure/appconfig/identity#az-appconfig-identity-remove) command in the Azure CLI. User-assigned identities can be removed individually. Removing a system-assigned identity in this way will also delete it from AAD. System-assigned identities are also automatically removed from AAD when the app resource is deleted.
## Next steps
@@ -103,4 +103,4 @@ A system-assigned identity can be removed by disabling the feature by using the
> [Create an ASP.NET Core app with Azure App Configuration](quickstart-aspnet-core-app.md) [az appconfig identity assign]: /cli/azure/appconfig/identity?view=azure-cli-latest#az-appconfig-identity-assign
-[az login]: /cli/azure/reference-index#az-login
+[az login]: /cli/azure/reference-index#az-login
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/pull-key-value-devops-pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
@@ -55,7 +55,7 @@ Assign the proper App Configuration role to the service connection being used wi
This section will cover how to use the Azure App Configuration task in an Azure DevOps build pipeline.
-1. Navigate to the build pipeline page by clicking **Pipelines** > **Pipelines**. For build pipeline documentation, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?view=azure-devops&tabs=net%2Ctfs-2018-2%2Cbrowser).
+1. Navigate to the build pipeline page by clicking **Pipelines** > **Pipelines**. For build pipeline documentation, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?tabs=net%2Ctfs-2018-2%2Cbrowser).
- If you're creating a new build pipeline, click **New pipeline**, select the repository for your pipeline. Select **Show assistant** on the right side of the pipeline, and search for the **Azure App Configuration** task.
- If you're using an existing build pipeline, select **Edit** to edit the pipeline. In the **Tasks** tab, search for the **Azure App Configuration** task.
1. Configure the necessary parameters for the task to pull the key-values from the App Configuration store. Descriptions of the parameters are available in the **Parameters** section below and in tooltips next to each parameter.
@@ -68,10 +68,10 @@ This section will cover how to use the Azure App Configuration task in an Azure
This section will cover how to use the Azure App Configuration task in an Azure DevOps release pipeline.
-1. Navigate to release pipeline page by selecting **Pipelines** > **Releases**. For release pipeline documentation, see [Release pipelines](/azure/devops/pipelines/release?view=azure-devops).
+1. Navigate to release pipeline page by selecting **Pipelines** > **Releases**. For release pipeline documentation, see [Release pipelines](/azure/devops/pipelines/release).
1. Choose an existing release pipeline. If you don't have one, click **New pipeline** to create a new one.
1. Select the **Edit** button in the top-right corner to edit the release pipeline.
-1. Choose the **Stage** to add the task. For more information about stages, see [Add stages, dependencies, & conditions](/azure/devops/pipelines/release/environments?view=azure-devops).
+1. Choose the **Stage** to add the task. For more information about stages, see [Add stages, dependencies, & conditions](/azure/devops/pipelines/release/environments).
1. Click **+** on "Run on agent", then add the **Azure App Configuration** task under the **Add tasks** tab.
1. Configure the necessary parameters within the task to pull your key-values from your App Configuration store. Descriptions of the parameters are available in the **Parameters** section below and in tooltips next to each parameter.
   - Set the **Azure subscription** parameter to the name of the service connection you created in a previous step.
@@ -110,4 +110,4 @@ If an unexpected error occurs, debug logs can be enabled by setting the pipeline
**How do I compose my configuration from multiple keys and labels?**
-There are times when configuration may need to be composed from multiple labels, for example, default and dev. Multiple App Configuration tasks may be used in one pipeline to implement this scenario. The key-values fetched by a task in a later step will supersede any values from previous steps. In the aforementioned example, a task can be used to select key-values with the default label while a second task can select key-values with the dev label. The keys with the dev label will override the same keys with the default label.
+There are times when configuration may need to be composed from multiple labels, for example, default and dev. Multiple App Configuration tasks may be used in one pipeline to implement this scenario. The key-values fetched by a task in a later step will supersede any values from previous steps. In the aforementioned example, a task can be used to select key-values with the default label while a second task can select key-values with the dev label. The keys with the dev label will override the same keys with the default label.
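To preview which key-values each task would fetch, you can list them per label with the Azure CLI; a sketch assuming a store named myAppConfiguration and keys under settings.:

```azurecli
# Key-values a first task would select (default label)
az appconfig kv list -n myAppConfiguration --key "settings.*" --label default

# Key-values a second, later task would select (dev label); for identical keys,
# these values supersede the default-label values in the pipeline
az appconfig kv list -n myAppConfiguration --key "settings.*" --label dev
```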
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/push-kv-devops-pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/push-kv-devops-pipeline.md
@@ -51,7 +51,7 @@ Assign the proper App Configuration role assignments to the credentials being us
This section will cover how to use the Azure App Configuration Push task in an Azure DevOps build pipeline.
-1. Navigate to the build pipeline page by clicking **Pipelines** > **Pipelines**. Documentation for build pipelines can be found [here](/azure/devops/pipelines/create-first-pipeline?tabs=tfs-2018-2&view=azure-devops).
+1. Navigate to the build pipeline page by clicking **Pipelines** > **Pipelines**. Documentation for build pipelines can be found [here](/azure/devops/pipelines/create-first-pipeline?tabs=tfs-2018-2).
- If you're creating a new build pipeline, select **Show assistant** on the right side of the pipeline, and search for the **Azure App Configuration Push** task.
- If you're using an existing build pipeline, navigate to the **Tasks** tab when editing the pipeline, and search for the **Azure App Configuration Push** task.
2. Configure the necessary parameters for the task to push the key-values from the configuration file to the App Configuration store. The **Configuration File Path** parameter begins at the root of the file repository.
@@ -61,10 +61,10 @@ This section will cover how to use the Azure App Configuration Push task in an A
This section will cover how to use the Azure App Configuration Push task in an Azure DevOps release pipeline.
-1. Navigate to release pipeline page by selecting **Pipelines** > **Releases**. Documentation for release pipelines can be found [here](/azure/devops/pipelines/release?view=azure-devops).
+1. Navigate to release pipeline page by selecting **Pipelines** > **Releases**. Documentation for release pipelines can be found [here](/azure/devops/pipelines/release).
1. Choose an existing release pipeline. If you don't have one, select **+ New** to create a new one.
1. Select the **Edit** button in the top-right corner to edit the release pipeline.
-1. Choose the **Stage** to add the task. More information about stages can be found [here](/azure/devops/pipelines/release/environments?view=azure-devops).
+1. Choose the **Stage** to add the task. More information about stages can be found [here](/azure/devops/pipelines/release/environments).
1. Select **+** for that Job, then add the **Azure App Configuration Push** task under the **Deploy** tab.
1. Configure the necessary parameters within the task to push your key-values from your configuration file to your App Configuration store. Explanations of the parameters are available in the **Parameters** section below, and in tooltips next to each parameter.
1. Save and queue a release. The release log will display any failures encountered during the execution of the task.
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/rest-api-authentication-hmac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/rest-api-authentication-hmac.md
@@ -17,7 +17,7 @@ You can authenticate HTTP requests by using the HMAC-SHA256 authentication schem
- **Credential** - \<Access Key ID\>
- **Secret** - base64 decoded Access Key Value. ``base64_decode(<Access Key Value>)``
-The values for credential (also called `id`) and secret (also called `value`) must be obtained from the instance of Azure App Configuration. You can do this by using the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/?preserve-view=true&view=azure-cli-latest).
+The values for credential (also called `id`) and secret (also called `value`) must be obtained from the instance of Azure App Configuration. You can do this by using the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/).
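For example, the Azure CLI can list the access keys, where `id` is the credential and `value` is the secret to base64-decode (the store name is an assumption):

```azurecli
# Each entry contains an "id" (credential) and a "value" (secret, to be base64-decoded)
az appconfig credential list -n myAppConfiguration
```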
Provide each request with all HTTP headers required for authentication. The minimum required are:
@@ -591,4 +591,4 @@ while IFS= read -r line; do
    header_args+=("-H$line")
done <<< "$headers"
curl -X "$method" -d "$body" "${header_args[@]}" "https://$host$url"
-```
+```
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/use-key-vault-references-dotnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
@@ -90,7 +90,7 @@ To add a secret to the vault, you need to take just a few additional steps. In t
## Connect to Key Vault
-1. In this tutorial, you use a service principal for authentication to Key Vault. To create this service principal, use the Azure CLI [az ad sp create-for-rbac](/cli/azure/ad/sp?view=azure-cli-latest#az-ad-sp-create-for-rbac) command:
+1. In this tutorial, you use a service principal for authentication to Key Vault. To create this service principal, use the Azure CLI [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command:
```azurecli
az ad sp create-for-rbac -n "http://mySP" --sdk-auth
```
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/use-key-vault-references-spring-boot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
@@ -90,7 +90,7 @@ To add a secret to the vault, you need to take just a few additional steps. In t
## Connect to Key Vault
-1. In this tutorial, you use a service principal for authentication to Key Vault. To create this service principal, use the Azure CLI [az ad sp create-for-rbac](/cli/azure/ad/sp?view=azure-cli-latest#az-ad-sp-create-for-rbac) command:
+1. In this tutorial, you use a service principal for authentication to Key Vault. To create this service principal, use the Azure CLI [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command:
```azurecli
az ad sp create-for-rbac -n "http://mySP" --sdk-auth
```
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-configure.md
@@ -232,10 +232,7 @@ The **Schedule updates** blade allows you to designate a maintenance window for
To specify a maintenance window, check the desired days and specify the maintenance window start hour for each day, and click **OK**. The maintenance window time is in UTC.
-> [!IMPORTANT]
-> The **Schedule updates** functionality is only available for Premium tier caches. For more information and instructions, see [Azure Cache for Redis administration - Schedule updates](cache-administration.md#schedule-updates).
->
->
+For more information and instructions, see [Azure Cache for Redis administration - Schedule updates](cache-administration.md#schedule-updates).
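If you prefer scripting, a maintenance window can also be configured with the Azure CLI; a hedged sketch using az redis patch-schedule, assuming a Premium cache named myCache in resource group myGroup:

```azurecli
# Schedule updates on Mondays starting at 01:00 UTC, with a 5-hour maintenance window
az redis patch-schedule create --name myCache --resource-group myGroup \
    --schedule-entries '[{"dayOfWeek":"Monday","startHourUtc":"01","maintenanceWindow":"PT5H"}]'
```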
### Geo-replication
@@ -501,4 +498,4 @@ You can move your cache to a new subscription by clicking **Move**.
For information on moving resources from one resource group to another, and from one subscription to another, see [Move resources to new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).

## Next steps
-* For more information on working with Redis commands, see [How can I run Redis commands?](cache-development-faq.md#how-can-i-run-redis-commands)
+* For more information on working with Redis commands, see [How can I run Redis commands?](cache-development-faq.md#how-can-i-run-redis-commands)
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-premium-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
@@ -172,7 +172,7 @@ There are network connectivity requirements for Azure Cache for Redis that might
* Outbound network connectivity to Azure Storage endpoints worldwide. Endpoints located in the same region as the Azure Cache for Redis instance and storage endpoints located in *other* Azure regions are included. Azure Storage endpoints resolve under the following DNS domains: *table.core.windows.net*, *blob.core.windows.net*, *queue.core.windows.net*, and *file.core.windows.net*.
* Outbound network connectivity to *ocsp.digicert.com*, *crl4.digicert.com*, *ocsp.msocsp.com*, *mscrl.microsoft.com*, *crl3.digicert.com*, *cacerts.digicert.com*, *oneocsp.microsoft.com*, and *crl.microsoft.com*. This connectivity is needed to support TLS/SSL functionality.
* The DNS configuration for the virtual network must be capable of resolving all of the endpoints and domains mentioned in the earlier points. These DNS requirements can be met by ensuring a valid DNS infrastructure is configured and maintained for the virtual network.
-* Outbound network connectivity to the following Azure Monitor endpoints, which resolve under the following DNS domains: *shoebox2-black.shoebox2.metrics.nsatc.net*, *north-prod2.prod2.metrics.nsatc.net*, *azglobal-black.azglobal.metrics.nsatc.net*, *shoebox2-red.shoebox2.metrics.nsatc.net*, *east-prod2.prod2.metrics.nsatc.net*, and *azglobal-red.azglobal.metrics.nsatc.net*.
+* Outbound network connectivity to the following Azure Monitor endpoints, which resolve under the following DNS domains: *shoebox2-black.shoebox2.metrics.nsatc.net*, *north-prod2.prod2.metrics.nsatc.net*, *azglobal-black.azglobal.metrics.nsatc.net*, *shoebox2-red.shoebox2.metrics.nsatc.net*, *east-prod2.prod2.metrics.nsatc.net*, *azglobal-red.azglobal.metrics.nsatc.net*, *shoebox3.prod.microsoftmetrics.com*, *shoebox3-red.prod.microsoftmetrics.com* and *shoebox3-black.prod.microsoftmetrics.com*.
### How can I verify that my cache is working in a virtual network?
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-function-app-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-app-portal.md
@@ -10,7 +10,10 @@
Azure Functions lets you run your code in a serverless environment without having to first create a virtual machine (VM) or publish a web application. In this article, you learn how to use Azure Functions to create a "hello world" HTTP trigger function in the Azure portal.
-We recommend that you [develop your functions locally](functions-develop-local.md) and publish to a function app in Azure.
+>[!NOTE]
+>In-portal editing is only supported for JavaScript, PowerShell, TypeScript, and C# Script functions.<br><br>For C# class library, Java, and Python functions, you can create the function app in the portal, but you must also create the functions locally and then publish them to Azure.
+
+Instead, we recommend that you [develop your functions locally](functions-develop-local.md) and publish to a function app in Azure.
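If you take the local-development route, here's a hedged sketch of creating the function app you'd publish to with the Azure CLI; names, region, and runtime are assumptions:

```azurecli
# Resource group, storage account, and a consumption-plan function app
az group create --name myResourceGroup --location westeurope
az storage account create --name mystorageacct123 --resource-group myResourceGroup --sku Standard_LRS
az functionapp create --name myFunctionApp --resource-group myResourceGroup \
    --storage-account mystorageacct123 --consumption-plan-location westeurope \
    --runtime node --functions-version 3
```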
Use one of the following links to get started with your chosen local development environment and language:

| Visual Studio Code | Terminal/command prompt | Visual Studio |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-map https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/app-map.md
@@ -220,6 +220,21 @@ appInsights.addTelemetryInitializer((envelope) => {
});
});
```
+
+# [Python](#tab/python)
+
+For Python, [OpenCensus Python telemetry processors](api-filtering-sampling.md#opencensus-python-telemetry-processors) can be used.
+
+```python
+from opencensus.ext.azure.log_exporter import AzureLogHandler
+from opencensus.ext.azure.trace_exporter import AzureExporter
+
+# handler/exporter are assumed to be configured with your connection string, for example:
+# handler = AzureLogHandler(connection_string='InstrumentationKey=<your-ikey>')
+# exporter = AzureExporter(connection_string='InstrumentationKey=<your-ikey>')
+
+def callback_function(envelope):
+    # Override the cloud role name reported for this component
+    envelope.tags['ai.cloud.role'] = 'new_role_name'
+
+# AzureLogHandler
+handler.add_telemetry_processor(callback_function)
+
+# AzureExporter
+exporter.add_telemetry_processor(callback_function)
+```
### Understanding cloud role name within the context of the Application Map
@@ -292,4 +307,4 @@ To provide feedback, use the feedback option.
* To learn more about how correlation works in Application Insights consult the [telemetry correlation article](correlation.md). * The [end-to-end transaction diagnostic experience](transaction-diagnostics.md) correlates server-side telemetry from across all your Application Insights monitored components into a single view.
-* For advanced correlation scenarios in ASP.NET Core and ASP.NET consult the [track custom operations](custom-operations-tracking.md) article.
+* For advanced correlation scenarios in ASP.NET Core and ASP.NET consult the [track custom operations](custom-operations-tracking.md) article.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-performance-monitor-expressroute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor-expressroute.md
@@ -11,6 +11,9 @@ Last updated 11/27/2018
# ExpressRoute Monitor
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor to the new Connection Monitor](https://docs.microsoft.com/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor) in Azure Network Watcher before 29 February 2024.
+ You can use the Azure ExpressRoute Monitor capability in [Network Performance Monitor](network-performance-monitor.md) to monitor end-to-end connectivity and performance between your branch offices and Azure, over Azure ExpressRoute. Key advantages are: - Autodetection of ExpressRoute circuits associated with your subscription.
@@ -137,4 +140,3 @@ You can see the notification codes and set alerts on them via **LogAnalytics**.
## Next steps

[Search logs](../log-query/log-query-overview.md) to view detailed network performance data records.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-performance-monitor-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor-faq.md
@@ -13,6 +13,9 @@ Last updated 10/12/2018
![Network Performance Monitor symbol](media/network-performance-monitor-faq/npm-symbol.png)
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor to the new Connection Monitor](https://docs.microsoft.com/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor) in Azure Network Watcher before 29 February 2024.
This article captures the frequently asked questions (FAQs) about Network Performance Monitor (NPM) in Azure.

[Network Performance Monitor](../../networking/network-monitoring-overview.md) is a cloud-based [hybrid network monitoring](./network-performance-monitor-performance-monitor.md) solution that helps you monitor network performance between various points in your network infrastructure. It also helps you monitor network connectivity to [service and application endpoints](./network-performance-monitor-service-connectivity.md) and [monitor the performance of Azure ExpressRoute](./network-performance-monitor-expressroute.md).
@@ -30,7 +33,7 @@ Listed below are the platform requirements for NPM's various capabilities:
- NPM's ExpressRoute Monitor capability supports only Windows server (2008 SP1 or later) operating system. ### Can I use Linux machines as monitoring nodes in NPM?
-The capability to monitor networks using Linux-based nodes is now generally available. Acccess the agent [here](../../virtual-machines/extensions/oms-linux.md).
+The capability to monitor networks using Linux-based nodes is now generally available. Access the agent [here](../../virtual-machines/extensions/oms-linux.md).
### What are the size requirements of the nodes to be used for monitoring by NPM?

For running the NPM solution on node VMs to monitor networks, the nodes should have at least 500-MB memory and one core. You don't need to use separate nodes for running NPM. The solution can run on nodes that have other workloads running on them. The solution has the capability to stop the monitoring process if it uses more than 5% CPU.
@@ -50,7 +53,7 @@ You can get more details on the relative advantages of each protocol [here](./ne
### How can I configure a node to support monitoring using TCP protocol?

For the node to support monitoring using TCP protocol:
* Ensure that the node platform is Windows Server (2008 SP1 or later).
-* Run [EnableRules.ps1](https://aka.ms/npmpowershellscript) Powershell script on the node. See [instructions](./network-performance-monitor.md#configure-log-analytics-agents-for-monitoring) for more details.
+* Run [EnableRules.ps1](https://aka.ms/npmpowershellscript) PowerShell script on the node. See [instructions](./network-performance-monitor.md#configure-log-analytics-agents-for-monitoring) for more details.
### How can I change the TCP port being used by NPM for monitoring?
@@ -244,7 +247,7 @@ This can happen if either the host firewall or the intermediate firewall (networ
As the network paths between A to B can be different from the network paths between B to A, different values for loss and latency can be observed. ### Why are all my ExpressRoute circuits and peering connections not being discovered?
-NPM now discovers ExpressRoute circuits and peering connections in all subscriptions to which the user has access. Choose all the subscriptions where your Express Route resources are linked and enable monitoring for each discovered resource. NPM looks for connection objects when discovering a private peering, so please check if a VNET is associated with your peering. NPM does not detect circuits and peering that are in a diffrent tenant from the Log Analytics workspace.
+NPM now discovers ExpressRoute circuits and peering connections in all subscriptions to which the user has access. Choose all the subscriptions where your Express Route resources are linked and enable monitoring for each discovered resource. NPM looks for connection objects when discovering a private peering, so please check if a VNET is associated with your peering. NPM does not detect circuits and peering that are in a different tenant from the Log Analytics workspace.
### The ER Monitor capability has a diagnostic message "Traffic is not passing through ANY circuit". What does that mean?
@@ -257,10 +260,10 @@ This can happen if:
* The on-premises and Azure nodes chosen for monitoring the ExpressRoute circuit in the monitoring configuration do not have connectivity to each other over the intended ExpressRoute circuit. Ensure that you have chosen correct nodes that have connectivity to each other over the ExpressRoute circuit you intend to monitor.

### Why does ExpressRoute Monitor report my circuit/peering as unhealthy when it is available and passing data?
-ExpressRoute Monitor compares the network performance values (loss, latency and bandwidth utilisation) reported by the agents/service with the thresholds set during Configuration. For a circuit, if the bandwidth utilisation reported is greater than the threshold set in Configuration, the circuit is marked as unhealthy. For peerings, if the loss, latency or bandwidth utilisation reported is greater than the threshold set in the Configuration, the peering is marked as unhealthy. NPM does not utilise metrics or any other form of data to deicde health state.
+ExpressRoute Monitor compares the network performance values (loss, latency and bandwidth utilization) reported by the agents/service with the thresholds set during Configuration. For a circuit, if the bandwidth utilization reported is greater than the threshold set in Configuration, the circuit is marked as unhealthy. For peerings, if the loss, latency or bandwidth utilization reported is greater than the threshold set in the Configuration, the peering is marked as unhealthy. NPM does not utilize metrics or any other form of data to decide health state.
-### Why does ExpressRoute Monitor'bandwidth utilisation report a value differrent from metrics bits in/out
-For ExpressRoute Monitor, bandwidth utiliation is the average of incoming and outgoing bandwidth over the last 20 mins It is expressed in Bits/sec. For Express Route metrics, bit in/out are per minute data points.Internally the dataset used for both is the same, but the aggregation valies between NPM and ER metrics. For granular, minute by minute monitoring and fast alerts, we recommend setting alerts directly on ER metrics
+### Why does ExpressRoute Monitor's bandwidth utilization report a value different from metrics bits in/out?
+For ExpressRoute Monitor, bandwidth utilization is the average of incoming and outgoing bandwidth over the last 20 minutes. It is expressed in bits/sec. For ExpressRoute metrics, bits in/out are per-minute data points. Internally the dataset used for both is the same, but the aggregation varies between NPM and ExpressRoute metrics. For granular, minute-by-minute monitoring and fast alerts, we recommend setting alerts directly on ExpressRoute metrics.
### While configuring monitoring of my ExpressRoute circuit, the Azure nodes are not being detected

This can happen if the Azure nodes are connected through Operations Manager. The ExpressRoute Monitor capability supports only those Azure nodes that are connected as Direct Agents.
@@ -291,7 +294,7 @@ This can happen if the target service is not a web application but the test is c
The NPM process is configured to stop if it utilizes more than 5% of the host CPU resources. This is to ensure that you can keep using the nodes for their usual workloads without impacting performance.

### Does NPM edit firewall rules for monitoring?
-NPM only creates a local Windows Firewall rule on the nodes on which the EnableRules.ps1 Powershell script is run to allow the agents to create TCP connections with each other on the specified port. The solution does not modify any network firewall or Network Security Group (NSG) rules.
+NPM only creates a local Windows Firewall rule on the nodes on which the EnableRules.ps1 PowerShell script is run to allow the agents to create TCP connections with each other on the specified port. The solution does not modify any network firewall or Network Security Group (NSG) rules.
### How can I check the health of the nodes being used for monitoring?

You can view the health status of the nodes being used for monitoring from the following view: Network Performance Monitor -> Configuration -> Nodes. If a node is unhealthy, you can view the error details and take the suggested action.
@@ -302,4 +305,3 @@ NPM rounds the latency numbers in the UI and in milliseconds. The same data is s
## Next steps

- Learn more about Network Performance Monitor by referring to [Network Performance Monitor solution in Azure](./network-performance-monitor.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-performance-monitor-performance-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor-performance-monitor.md
@@ -11,6 +11,9 @@ Last updated 02/20/2018
# Network Performance Monitor solution: Performance monitoring
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor to the new Connection Monitor](https://docs.microsoft.com/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor) in Azure Network Watcher before 29 February 2024.
The Performance Monitor capability in [Network Performance Monitor](network-performance-monitor.md) helps you monitor network connectivity across various points in your network. You can monitor cloud deployments and on-premises locations, multiple data centers and branch offices, and mission-critical multitier applications or microservices. With Performance Monitor, you can detect network issues before your users complain. Key advantages are that you can:

- Monitor loss and latency across various subnets and set alerts.
@@ -61,7 +64,7 @@ To create custom monitoring rules:
6. Choose monitoring conditions. To set custom thresholds for health-event generation, enter threshold values. Whenever the value of the condition exceeds its selected threshold for the selected network or subnetwork pair, a health event is generated.
7. Select **Save** to save the configuration.
-After you save a monitoring rule, you can integrate that rule with Alert Management by selecting **Create Alert**. An alert rule is automatically created with the search query. Other required parameters are automatically filled in. Using an alert rule, you can receive e-mail-based alerts, in addition to the existing alerts within Network Performance Monitor. Alerts also can trigger remedial actions with runbooks, or they can integrate with existing service management solutions by using webhooks. Select **Manage Alert** to edit the alert settings.
+After you save a monitoring rule, you can integrate that rule with Alert Management by selecting **Create Alert**. An alert rule is automatically created with the search query. Other required parameters are automatically filled in. Using an alert rule, you can receive e-mail-based alerts, in addition to the existing alerts within Network Performance Monitor. Alerts also can trigger remedial actions with runbooks, or they can integrate with existing Service Management solutions by using webhooks. Select **Manage Alert** to edit the alert settings.
You can now create more Performance Monitor rules or move to the solution dashboard to use the capability.
@@ -125,4 +128,3 @@ In the following image, the root cause of the problem areas to the specific sect
## Next steps

[Search logs](../log-query/log-query-overview.md) to view detailed network performance data records.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-performance-monitor-pricing-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor-pricing-faq.md
@@ -11,6 +11,9 @@ Last updated 04/02/2018
# Pricing changes for Azure Network Performance Monitor
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor to the new Connection Monitor](https://docs.microsoft.com/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor) in Azure Network Watcher before 29 February 2024.
+ We have listened to your feedback and recently introduced a [new pricing experience](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) for various monitoring services across Azure. This article captures the pricing changes related to Azure [Network Performance Monitor](../../networking/network-monitoring-overview.md) (NPM) in an easy-to-read question and answer format.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-performance-monitor-service-connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor-service-connectivity.md
@@ -11,6 +11,9 @@ Last updated 02/20/2018
# Service Connectivity Monitor
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor to the new Connection Monitor](https://docs.microsoft.com/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor) in Azure Network Watcher before 29 February 2024.
+ You can use the Service Connectivity Monitor capability in [Network Performance Monitor](network-performance-monitor.md) to monitor network connectivity to any endpoint that has an open TCP port. Such endpoints include websites, SaaS applications, PaaS applications, and SQL databases. You can perform the following functions with Service Connectivity Monitor:
@@ -130,4 +133,3 @@ For US Government Virginia region, only DOD URLs are built-in NPM. Customers usi
## Next steps

[Search logs](../log-query/log-query-overview.md) to view detailed network performance data records.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-performance-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor.md
@@ -13,6 +13,8 @@ Last updated 02/20/2018
![Network Performance Monitor symbol](./media/network-performance-monitor/npm-symbol.png)
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor to the new Connection Monitor](https://docs.microsoft.com/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor) in Azure Network Watcher before 29 February 2024.
Network Performance Monitor is a cloud-based hybrid network monitoring solution that helps you monitor network performance between various points in your network infrastructure. It also helps you monitor network connectivity to service and application endpoints and monitor the performance of Azure ExpressRoute.
@@ -276,7 +278,7 @@ If you are an NPM user creating an alert via Log Analytics:
If you are an NPM user creating an alert via the Azure portal:

1. You can choose to enter your email directly or you can choose to create alerts via action groups.
-2. If you choose to enter your email directly, an action group with the name **NPM Email ActionGroup** is created and the email id is added to that action group.
+2. If you choose to enter your email directly, an action group with the name **NPM Email ActionGroup** is created and the email ID is added to that action group.
3. If you choose to use action groups, you will have to select a previously created action group. You can learn how to create an action group [here.](../platform/action-groups.md#create-an-action-group-by-using-the-azure-portal)
4. Once the alert is successfully created, you can use the Manage Alerts link to manage your alerts.
@@ -297,4 +299,3 @@ Information on pricing is available [online](network-performance-monitor-pricing
## Next steps

Learn more about [Performance Monitor](network-performance-monitor-performance-monitor.md), [Service Connectivity Monitor](network-performance-monitor-service-connectivity.md), and [ExpressRoute Monitor](network-performance-monitor-expressroute.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/agents-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/agents-overview.md
@@ -141,6 +141,7 @@ The following tables list the operating systems that are supported by the Azure
| Windows Server 2016 Core | | | | X |
| Windows Server 2012 R2 | X | X | X | X |
| Windows Server 2012 | X | X | X | X |
+| Windows Server 2008 R2 SP1 | X | X | X | X |
| Windows Server 2008 R2 | | X | X | X |
| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only) | X | X | X | X |
| Windows 8 Enterprise and Pro<br>(Server scenarios only) | | X | X | |
@@ -150,31 +151,36 @@ The following tables list the operating systems that are supported by the Azure
| Operating system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension |
|:---|:---:|:---:|:---:|:---:|
-| Amazon Linux 2017.09 | | X | | |
-| CentOS Linux 8 | | X | X | |
-| CentOS Linux 7 | X | X | X | X |
-| CentOS Linux 6 | | X | | |
-| CentOS Linux 6.5+ | | X | X | X |
-| Debian 9 | X | X | x | X |
-| Debian 8 | | X | X | |
-| Debian 7 | | | | X |
-| OpenSUSE 13.1+ | | | | X |
-| Oracle Linux 8 | | X | | |
-| Oracle Linux 7 | X | X | | X |
-| Oracle Linux 6 | | X | | |
-| Oracle Linux 6.4+ | | X | | X |
-| Red Hat Enterprise Linux Server 8 | | X | X | |
-| Red Hat Enterprise Linux Server 7 | X | X | X | X |
-| Red Hat Enterprise Linux Server 6 | | X | X | |
-| Red Hat Enterprise Linux Server 6.7+ | | X | X | X |
-| SUSE Linux Enterprise Server 15.1 | | X | | |
-| SUSE Linux Enterprise Server 15 | X | X | X | |
-| SUSE Linux Enterprise Server 12 | X | X | X | X |
-| Ubuntu 20.04 LTS | | X | X | |
-| Ubuntu 18.04 LTS | X | X | X | X |
-| Ubuntu 16.04 LTS | X | X | X | X |
-| Ubuntu 14.04 LTS | | X | | X |
-
+| Amazon Linux 2017.09 | | X | | |
+| CentOS Linux 8 <sup>1</sup> <sup>2</sup> | X | X | X | |
+| CentOS Linux 7 | X | X | X | X |
+| CentOS Linux 6 | | X | | |
+| CentOS Linux 6.5+ | | X | X | X |
+| Debian 10 <sup>1</sup> | X | | | |
+| Debian 9 | X | X | X | X |
+| Debian 8 | | X | X | |
+| Debian 7 | | | | X |
+| OpenSUSE 13.1+ | | | | X |
+| Oracle Linux 8 <sup>1</sup> <sup>2</sup> | X | X | | |
+| Oracle Linux 7 | X | X | | X |
+| Oracle Linux 6 | | X | | |
+| Oracle Linux 6.4+ | | X | | X |
+| Red Hat Enterprise Linux Server 8 <sup>1</sup> <sup>2</sup> | X | X | X | |
+| Red Hat Enterprise Linux Server 7 | X | X | X | X |
+| Red Hat Enterprise Linux Server 6 | | X | X | |
+| Red Hat Enterprise Linux Server 6.7+ | | X | X | X |
+| SUSE Linux Enterprise Server 15.2 <sup>1</sup> <sup>2</sup> | X | | | |
+| SUSE Linux Enterprise Server 15.1 <sup>1</sup> <sup>2</sup> | X | X | | |
+| SUSE Linux Enterprise Server 15 | X | X | X | |
+| SUSE Linux Enterprise Server 12 | X | X | X | X |
+| Ubuntu 20.04 LTS <sup>1</sup> | X | X | X | |
+| Ubuntu 18.04 LTS | X | X | X | X |
+| Ubuntu 16.04 LTS | X | X | X | X |
+| Ubuntu 14.04 LTS | | X | | X |
+
+<sup>1</sup> Requires Python 3 to be installed on the machine.
+
+<sup>2</sup> Known issue collecting Syslog events. Only performance data is currently supported.
#### Dependency agent Linux kernel support

Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/diagnostic-settings.md
@@ -5,7 +5,7 @@
Previously updated : 04/27/2020 Last updated : 02/08/2021
@@ -171,6 +171,24 @@ See [Diagnostic Settings](/rest/api/monitor/diagnosticsettings) to create or upd
## Create using Azure Policy

Since a diagnostic setting needs to be created for each Azure resource, Azure Policy can be used to automatically create a diagnostic setting as each resource is created. See [Deploy Azure Monitor at scale using Azure Policy](../deploy-scale.md) for details.
+## Metric category is not supported error
+When deploying a diagnostic setting, you receive the following error message:
+
+ "Metric category '*xxxx*' is not supported"
+
+For example:
+
+ "Metric category 'ActionsFailed' is not supported"
+
+where previously your deployment succeeded.
+
+The problem occurs when using a Resource Manager template, the diagnostic settings REST API, Azure CLI or Azure PowerShell. Diagnostic settings created via the Azure portal are not affected as only the supported category names are presented.
+
+The problem is caused by a recent change in the underlying API. Metric categories other than 'AllMetrics' are not supported, and never were, except for a few very specific Azure services. In the past, other category names were ignored when deploying a diagnostic setting. The Azure Monitor backend simply redirected these categories to 'AllMetrics'. As of February 2021, the backend was updated to specifically confirm that the metric category provided is accurate. This change has caused some deployments to fail.
+
+If you receive this error, update your deployments to replace any metric category names with 'AllMetrics' to fix the issue. If the deployment was previously adding multiple categories, only one with the 'AllMetrics' reference should be kept. If you continue to have the problem, please contact Azure support through the Azure portal.
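For example, a corrected deployment with the Azure CLI might look like the following sketch; the resource and workspace IDs are placeholders:

```azurecli
# Deploy a diagnostic setting using only the supported 'AllMetrics' category
az monitor diagnostic-settings create \
    --name myDiagnosticSetting \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Logic/workflows/<name>" \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```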
## Next steps
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/private-link-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/private-link-security.md
@@ -30,35 +30,43 @@ Azure Monitor Private Link Scope (AMPLS) connects private endpoints (and the VNe
![Diagram of basic resource topology](./media/private-link-security/private-link-basic-topology.png)
+* The Private Endpoint on your VNet allows it to reach Azure Monitor endpoints through private IPs from your network's pool, instead of the public IPs of these endpoints. That allows you to keep using your Azure Monitor resources without opening your VNet to unneeded outbound traffic.
+* Traffic from the Private Endpoint to your Azure Monitor resources will go over the Microsoft Azure backbone and isn't routed to public networks.
+* You can configure each of your workspaces or components to allow or deny ingestion and queries from public networks. That provides resource-level protection, so that you can control traffic to specific resources.
> [!NOTE]
> A single Azure Monitor resource can belong to multiple AMPLSs, but you cannot connect a single VNet to more than one AMPLS.
-### The issue of DNS overrides
-Log Analytics and Application Insights use global endpoints for some of their services, meaning they serve requests targeting any workspace/component. For example, Application Insights uses a global endpoint for log ingestion, and both Application Insights and Log Analytics use a global endpoint for query requests.
+## Planning your Private Link setup
-When you set up a Private Link connection, your DNS is updated to map Azure Monitor endpoints to private IP addresses from your VNet's IP range. This change overrides any previous mapping of these endpoints, which can have meaningful implications, reviewed below.
+Before setting up your Azure Monitor Private Link setup, consider your network topology, and specifically your DNS routing topology.
-## Planning based on your network topology
+### The issue of DNS overrides
+Some Azure Monitor services use global endpoints, meaning they serve requests targeting any workspace/component. A couple of examples are the Application Insights ingestion endpoint, and the query endpoint of both Application Insights and Log Analytics.
-Before setting up your Azure Monitor Private Link setup, consider your network topology, and specifically your DNS routing topology.
+When you set up a Private Link connection, your DNS is updated to map Azure Monitor endpoints to private IP addresses from your VNet's IP range. This change overrides any previous mapping of these endpoints, which can have meaningful implications, reviewed below.
### Azure Monitor Private Link applies to all Azure Monitor resources - it's All or Nothing
-Since some Azure Monitor endpoints are global, it's impossible to create a Private Link connection for a specific component or workspace. Instead, when you set up a Private Link to a single Application Insights component, your DNS records are updated for **all** Application Insights component. Any attempt to ingest or query a component will attempt to go through the Private Link, and possibly fail. Similarly, setting up a Private Link to a single workspace will cause all Log Analytics queries to go through the Private Link query endpoint (but not ingestion requests, which have workspace-specific endpoints).
+Since some Azure Monitor endpoints are global, it's impossible to create a Private Link connection for a specific component or workspace. Instead, when you set up a Private Link to a single Application Insights component, your DNS records are updated for **all** Application Insights components. Any attempt to ingest or query a component will go through the Private Link, and possibly fail. Similarly, setting up a Private Link to a single workspace will cause all Log Analytics queries to go through the Private Link query endpoint (but not ingestion requests, which have workspace-specific endpoints).
![Diagram of DNS overrides in a single VNet](./media/private-link-security/dns-overrides-single-vnet.png)

That's true not only for a specific VNet, but for all VNets that share the same DNS server (see [The issue of DNS overrides](#the-issue-of-dns-overrides)). So, for example, a request to ingest logs to any Application Insights component will always be sent through the Private Link route. Components that aren't linked to the AMPLS will fail the Private Link validation and not go through.
-**Effectively, that means you should connect all Azure Monitor resources in your network to a Private Link (add them to AMPLS), or none of them.**
+> [!NOTE]
+> To conclude:
+> Once you set up a Private Link connection to a single resource, it applies to all Azure Monitor resources in your network - it's All or Nothing. That effectively means you should add all Azure Monitor resources in your network to your AMPLS, or none of them.
### Azure Monitor Private Link applies to your entire network
-Some networks are composed of multiple VNets. If these VNets use the same DNS server, they will override each other's DNS mappings and possibly break each other's communication with Azure Monitor (see [The issue of DNS overrides](#the-issue-of-dns-overrides)). Ultimately, only the last VNet will be able to communicate with Azure Monitor, since the DNS will map Azure Monitor endpoints to private IPs from this VNets range (which may not be reachable from other VNets).
+Some networks are composed of multiple VNets. If the VNets use the same DNS server, they will override each other's DNS mappings and possibly break each other's communication with Azure Monitor (see [The issue of DNS overrides](#the-issue-of-dns-overrides)). Ultimately, only the last VNet will be able to communicate with Azure Monitor, since the DNS will map Azure Monitor endpoints to private IPs from this VNet's range (which may not be reachable from other VNets).
![Diagram of DNS overrides in multiple VNets](./media/private-link-security/dns-overrides-multiple-vnets.png)
-In the above diagram, VNet 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, VNet 10.0.2.x connects to AMPLS2, and overrides the DNS mapping of the *same global endpoints* with IPs from its range. Since these VNets are not peered, the first VNet now fails to reach these endpoints.
+In the above diagram, VNet 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, VNet 10.0.2.x connects to AMPLS2, and overrides the DNS mapping of the *same global endpoints* with IPs from its range. Since these VNets aren't peered, the first VNet now fails to reach these endpoints.
-**VNets that use the same DNS should be peered - either directly or through a hub VNet. VNets that aren't peered should also use different DNS server, DNS forwarders, or other mechanism to avoid DNS clashing.**
+> [!NOTE]
+> To conclude:
+> AMPLS setup affects all networks that share the same DNS zones. To avoid overriding each other's DNS endpoint mappings, it is best to set up a single Private Endpoint on a peered network (such as a hub VNet), or separate the networks at the DNS level (for example, by using DNS forwarders or separate DNS servers entirely).
### Hub-spoke networks

Hub-spoke topologies can avoid the issue of DNS overrides by setting a Private Link on the hub (main) VNet, instead of setting up a Private Link for each VNet separately. This setup makes sense especially if the Azure Monitor resources used by the spoke VNets are shared.
@@ -66,12 +74,12 @@ Hub-spoke topologies can avoid the issue of DNS overrides by setting a Private L
![Hub-and-spoke-single-PE](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png)

> [!NOTE]
-> You may intentionally prefer to create separate Private Links for your spoke VNets, for example to allow each VNet to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint and AMPLS for each VNet, but must also verify they don't share the same DNS server in order to avoid DNS overrides.
+> You may intentionally prefer to create separate Private Links for your spoke VNets, for example to allow each VNet to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint and AMPLS for each VNet, but must also verify they don't share the same DNS zones in order to avoid DNS overrides.
### Consider limits
-As listed in [Restrictions and limitations](#restrictions-and-limitations), the AMPLS object has a number of limits, depicted in the below topology:
+As listed in [Restrictions and limitations](#restrictions-and-limitations), the AMPLS object has a number of limits, shown in the below topology:
* Each VNet connects to only **1** AMPLS object.
* AMPLS B is connected to Private Endpoints of two VNets (VNet2 and VNet3), using 2 of the 10 possible Private Endpoint connections.
* AMPLS A connects to two workspaces and one Application Insights component, using 3 of the 50 possible Azure Monitor resources connections.
@@ -117,7 +125,7 @@ Now that you have resources connected to your AMPLS, create a private endpoint t
![Screenshot of Private Endpoint Connections UX](./media/private-link-security/ampls-select-private-endpoint-connect-3.png)
-2. Pick the subscription, resource group, and name of the endpoint, and the region it should live in. The region needs to be the same region as the virtual network you will connect it to.
+2. Pick the subscription, resource group, and name of the endpoint, and the region it should live in. The region needs to be the same region as the VNet you connect it to.
3. Select **Next: Resource**.
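This step can also be scripted; a hedged CLI sketch, where the AMPLS resource ID, VNet, and names are assumptions (azuremonitor is the group ID exposed by an AMPLS resource):

```azurecli
az network private-endpoint create \
    --name myPrivateEndpoint \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --subnet mySubnet \
    --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/microsoft.insights/privateLinkScopes/<ampls-name>" \
    --group-id azuremonitor \
    --connection-name myAmplsConnection
```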
@@ -157,7 +165,7 @@ Go to the Azure portal. In your Log Analytics workspace resource menu, there's a
![LA Network Isolation](./media/private-link-security/ampls-log-analytics-lan-network-isolation-6.png)

### Connected Azure Monitor Private Link scopes
-All scopes connected to this workspace show up in this screen. Connecting to scopes (AMPLSs) allows network traffic from the virtual network connected to each AMPLS to reach this workspace. Creating a connection through here has the same effect as setting it up on the scope, as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Note that a workspace can connect to 5 AMPLS objects, as mentioned in [Restrictions and limitations](#restrictions-and-limitations).
+All scopes connected to the workspace show up in this screen. Connecting to scopes (AMPLSs) allows network traffic from the virtual network connected to each AMPLS to reach this workspace. Creating a connection through here has the same effect as setting it up on the scope, as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Note that a workspace can connect to 5 AMPLS objects, as mentioned in [Restrictions and limitations](#restrictions-and-limitations).
### Access from outside of private link scopes

The settings on the bottom part of this page control access from public networks, meaning networks not connected through the scopes listed above. Setting **Allow public network access for ingestion** to **No** blocks ingestion of logs from machines outside of the connected scopes. Setting **Allow public network access for queries** to **No** blocks queries coming from machines outside of the scopes. That includes queries run via workbooks, dashboards, API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal, and that query Log Analytics data, also have to be running within the private-linked VNet.
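The same toggles can be set from the command line; a minimal sketch, assuming a workspace named myWorkspace in resource group myResourceGroup:

```azurecli
# Block ingestion and queries coming from public networks (outside the connected scopes)
az monitor log-analytics workspace update \
    --resource-group myResourceGroup \
    --workspace-name myWorkspace \
    --ingestion-access Disabled \
    --query-access Disabled
```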
@@ -189,19 +197,40 @@ Go to the Azure portal. In your Azure Monitor Application Insights component res
First, you can connect this Application Insights resource to Azure Monitor Private Link scopes that you have access to. Select **Add** and select the **Azure Monitor Private Link Scope**. Select Apply to connect it. All connected scopes show up in this screen. Making this connection allows network traffic in the connected virtual networks to reach this component, and has the same effect as connecting it from the scope as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources).
-Second, you can control how this resource can be reached from outside of the private link scopes listed previously. If you set **Allow public network access for ingestion** to **No**, then machines or SDKs outside of the connected scopes cannot upload data to this component. If you set **Allow public network access for queries** to **No**, then machines outside of the scopes cannot access data in this Application Insights resource. That data includes access to APM logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more.
+Second, you can control how this resource can be reached from outside of the private link scopes (AMPLS) listed previously. If you set **Allow public network access for ingestion** to **No**, then machines or SDKs outside of the connected scopes can't upload data to this component. If you set **Allow public network access for queries** to **No**, then machines outside of the scopes can't access data in this Application Insights resource. That data includes access to APM logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more.
-Note that non-portal consumption experiences also have to be running within the private-linked VNET that includes the monitored workloads.
+> [!NOTE]
+> Non-portal consumption experiences must also run on the private-linked VNET that includes the monitored workloads.
You'll need to add resources hosting the monitored workloads to the private link. Here's [documentation](../../app-service/networking/private-endpoint.md) for how to do this for App Services.
-Restricting access in this manner only applies to data in the Application Insights resource. Configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. Instead, restrict access to Resource Manager using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](roles-permissions-security.md).
+Restricting access in this manner only applies to data in the Application Insights resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. So, you should restrict access to Resource Manager using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](roles-permissions-security.md).
> [!NOTE]
> To fully secure workspace-based Application Insights, you need to lock down both access to the Application Insights resource and the underlying Log Analytics workspace.
>
> Code-level diagnostics (profiler/debugger) need you to provide your own storage account to support Private Link. Here's [documentation](../app/profiler-bring-your-own-storage.md) for how to do this.
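The ingestion and query flags described above can also be set with the CLI; a sketch assuming the application-insights CLI extension is installed and the component and resource names are placeholders:

```azurecli
# Block public-network ingestion and queries for an Application Insights component
az monitor app-insights component update \
    --app myAppInsightsComponent \
    --resource-group myResourceGroup \
    --ingestion-access Disabled \
    --query-access Disabled
```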
+### Handling the All-or-Nothing nature of Private Links
+As explained in [Planning your Private Link setup](#planning-your-private-link-setup), setting up a Private Link even for a single resource affects all Azure Monitor resources in that network, and in other networks that share the same DNS. This can make your onboarding process challenging. Consider the following options:
+
+* All in - the simplest and most secure approach is to add all of your Application Insights components to the AMPLS. For components that you still want to access from other networks as well, leave the "Allow public internet access for ingestion/query" flags set to Yes (the default).
+* Isolate networks - if you are using (or can align with) spoke VNets, follow the guidance in [Hub-spoke network topology in Azure](https://docs.microsoft.com/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, set up separate Private Link settings in the relevant spoke VNets. Make sure to separate DNS zones as well, since sharing DNS zones with other spoke networks will cause [DNS overrides](#the-issue-of-dns-overrides).
+* Use custom DNS zones for specific apps - this solution allows you to access select Application Insights components over a Private Link, while keeping all other traffic over the public routes.
+ - Set up a [custom private DNS zone](https://docs.microsoft.com/azure/private-link/private-endpoint-dns), and give it a unique name, such as internal.monitor.azure.com
+ - Create an AMPLS and a Private Endpoint, and choose **not** to auto-integrate with private DNS
+ - Go to Private Endpoint -> DNS Configuration and review the suggested mapping of FQDNs similar to this:
+ ![Screenshot of suggested DNS zone configuration](./media/private-link-security/private-endpoint-fqdns.png)
+ - Choose to Add Configuration and pick the internal.monitor.azure.com zone you just created
+ - Add records for the FQDNs listed above
+ ![Screenshot of configured DNS zone](./media/private-link-security/private-endpoint-global-dns-zone.png)
+ - Go to your Application Insights component and copy its [Connection String](https://docs.microsoft.com/azure/azure-monitor/app/sdk-connection-string).
+ - Apps or scripts that call this component over a Private Link should use the connection string with `EndpointSuffix=internal.monitor.azure.com` (see the sketch after this list)
+* Map endpoints through hosts files instead of DNS - to have a Private Link access only from a specific machine/VM in your network:
+ - Set up an AMPLS and a Private Endpoint, and choose **not** to auto-integrate with private DNS
+ - Configure the A records above in the hosts file of the machine that runs the app
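To make the custom DNS zone option concrete, here's a minimal sketch of a connection string that uses the zone name chosen above. This is an illustration built on assumptions: the instrumentation key is a placeholder, and `APPLICATIONINSIGHTS_CONNECTION_STRING` is one common environment variable the SDKs read; copy the real connection string from your component and swap in the custom suffix.

```powershell
# Placeholder instrumentation key; the EndpointSuffix points SDK traffic at the private zone.
$env:APPLICATIONINSIGHTS_CONNECTION_STRING = 'InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=internal.monitor.azure.com'
```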
++
## Use APIs and command line

You can automate the process described earlier using Azure Resource Manager templates, REST, and command-line interfaces.
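For example, a Resource Manager template fragment along the following lines can set both access flags when the component is created. This is a sketch: the property names follow the `microsoft.insights/components` resource schema, while the component name and API version shown here are illustrative and should be checked against your own template.

```json
{
  "type": "microsoft.insights/components",
  "apiVersion": "2020-02-02-preview",
  "name": "my-appinsights-component",
  "location": "[resourceGroup().location]",
  "kind": "web",
  "properties": {
    "Application_Type": "web",
    "publicNetworkAccessForIngestion": "Disabled",
    "publicNetworkAccessForQuery": "Disabled"
  }
}
```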
@@ -226,11 +255,11 @@ The AMPLS object has a number of limits you should consider when planning your P
* An AMPLS object can connect to 50 Azure Monitor resources at most.
* An AMPLS object can connect to 10 Private Endpoints at most.
-See [Consider limits](#consider-limits) for a deeper review of these limits and how to plan your Private Link setup accordingly.
+See [Consider limits](#consider-limits) for a deeper review of these limits.
### Agents
-The latest versions of the Windows and Linux agents must be used on private networks to enable secure ingestion to Log Analytics workspaces. Older versions cannot upload monitoring data in a private network.
+The latest versions of the Windows and Linux agents must be used to support secure ingestion to Log Analytics workspaces. Older versions can't upload monitoring data over a private network.
**Log Analytics Windows agent**
@@ -238,7 +267,7 @@ Use the Log Analytics agent version 10.20.18038.0 or later.
**Log Analytics Linux agent**
-Use agent version 1.12.25 or later. If you cannot, run the following commands on your VM.
+Use agent version 1.12.25 or later. If you can't, run the following commands on your VM.
```cmd
$ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -X
@@ -249,15 +278,16 @@ $ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -w <workspace id> -s <workspace k
To use Azure Monitor portal experiences such as Application Insights and Log Analytics, you need to allow the Azure portal and Azure Monitor extensions to be accessible on the private networks. Add **AzureActiveDirectory**, **AzureResourceManager**, **AzureFrontDoor.FirstParty**, and **AzureFrontdoor.Frontend** [service tags](../../firewall/service-tags.md) to your Network Security Group.
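The exact rule shape depends on your topology. As a rough sketch with Az PowerShell (the NSG name, resource group, and priorities below are placeholders), one outbound allow rule per service tag could look like this:

```powershell
# Sketch: allow outbound HTTPS to each service tag required by the portal experiences.
$nsg = Get-AzNetworkSecurityGroup -Name 'my-nsg' -ResourceGroupName 'my-rg'
$priority = 200
foreach ($tag in 'AzureActiveDirectory', 'AzureResourceManager', 'AzureFrontDoor.FirstParty', 'AzureFrontdoor.Frontend') {
    # Replace '.' in the tag to keep rule names uniform.
    $null = $nsg | Add-AzNetworkSecurityRuleConfig -Name ('Allow-' + ($tag -replace '\.', '-')) `
        -Direction Outbound -Access Allow -Protocol Tcp -Priority $priority `
        -SourceAddressPrefix 'VirtualNetwork' -SourcePortRange '*' `
        -DestinationAddressPrefix $tag -DestinationPortRange '443'
    $priority += 10
}
$null = $nsg | Set-AzNetworkSecurityGroup
```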
-### Programmatic access
+### Querying data
+The [`externaldata` operator](https://docs.microsoft.com/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) isn't supported over a Private Link, as it reads data from storage accounts but doesn't guarantee the storage is accessed privately.
-To use the REST API, [CLI](/cli/azure/monitor) or PowerShell with Azure Monitor on private networks, add the [service tags](../../virtual-network/service-tags-overview.md) **AzureActiveDirectory** and **AzureResourceManager** to your firewall.
+### Programmatic access
-Adding these tags allows you to perform actions such as querying log data, create, and manage Log Analytics workspaces and AI components.
+To use the REST API, [CLI](/cli/azure/monitor) or PowerShell with Azure Monitor on private networks, add the [service tags](../../virtual-network/service-tags-overview.md) **AzureActiveDirectory** and **AzureResourceManager** to your firewall.
### Application Insights SDK downloads from a content delivery network
-Bundle the JavaScript code in your script so that the browser does not attempt to download code from a CDN. An example is provided on [GitHub](https://github.com/microsoft/ApplicationInsights-JS#npm-setup-ignore-if-using-snippet-setup)
+Bundle the JavaScript code in your script so that the browser doesn't attempt to download code from a CDN. An example is provided on [GitHub](https://github.com/microsoft/ApplicationInsights-JS#npm-setup-ignore-if-using-snippet-setup).
### Browser DNS settings
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-comparison.md
@@ -100,11 +100,11 @@ param objectToTest object = {
]
}
-output stringOutput string = coalesce(objectToTest.null1, objectToTest.null2, objectToTest.string)
-output intOutput int = coalesce(objectToTest.null1, objectToTest.null2, objectToTest.int)
-output objectOutput object = coalesce(objectToTest.null1, objectToTest.null2, objectToTest.object)
-output arrayOutput array = coalesce(objectToTest.null1, objectToTest.null2, objectToTest.array)
-output emptyOutput bool =empty(coalesce(objectToTest.null1, objectToTest.null2))
+output stringOutput string = objectToTest.null1 ?? objectToTest.null2 ?? objectToTest.string
+output intOutput int = objectToTest.null1 ?? objectToTest.null2 ?? objectToTest.int
+output objectOutput object = objectToTest.null1 ?? objectToTest.null2 ?? objectToTest.object
+output arrayOutput array = objectToTest.null1 ?? objectToTest.null2 ?? objectToTest.array
+output emptyOutput bool = empty(objectToTest.null1 ?? objectToTest.null2)
```
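To see the new operator in isolation (a sketch, not taken from the article): `??` evaluates left to right and returns the first operand that isn't null.

```bicep
var settings = {
  retries: null
}

// settings.retries is null, so the coalesce falls through to the literal 3
output retries int = settings.retries ?? 3
```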
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/long-term-backup-retention-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-backup-retention-configure.md
@@ -180,7 +180,7 @@ Remove-AzSqlDatabaseLongTermRetentionBackup -ResourceId $ltrBackup.ResourceId
```

> [!IMPORTANT]
-> Deleting LTR backup is non-reversible. To delete an LTR backup after the server has been deleted you must have Subscription scope permission. You can set up notifications about each delete in Azure Monitor by filtering for operation 'Deletes a long term retention backup'. The activity log contains information on who and when made the request. See [Create activity log alerts](../../azure-monitor/platform/alerts-activity-log.md) for detailed instructions.
+> Deleting an LTR backup is non-reversible. To delete an LTR backup after the server or resource group has been deleted you must have Subscription scope permission. You can set up notifications about each delete in Azure Monitor by filtering for operation 'Deletes a long term retention backup'. The activity log contains information on who made the request and when. See [Create activity log alerts](../../azure-monitor/platform/alerts-activity-log.md) for detailed instructions.
### Restore from LTR backups
@@ -193,7 +193,7 @@ Restore-AzSqlDatabase -FromLongTermRetentionBackup -ResourceId $ltrBackup.Resour
```

> [!IMPORTANT]
-> To restore from an LTR backup after the server has been deleted, you must have permissions scoped to the server's subscription and that subscription must be active. You must also omit the optional -ResourceGroupName parameter.
+> To restore from an LTR backup after the server or resource group has been deleted, you must have permissions scoped to the server's subscription and that subscription must be active. You must also omit the optional -ResourceGroupName parameter.
> [!NOTE]
> From here, you can connect to the restored database using SQL Server Management Studio to perform needed tasks, such as to extract a bit of data from the restored database to copy into the existing database or to delete the existing database and rename the restored database to the existing database name. See [point in time restore](recovery-using-backups.md#point-in-time-restore).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/custom-dns-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/custom-dns-configure.md
@@ -24,10 +24,10 @@ Because SQL Managed Instance uses the same DNS for its inner workings, configure
> Always use a fully qualified domain name (FQDN) for the mail server, for the SQL Server instance, and for other services, even if they're within your private DNS zone. For example, use `smtp.contoso.com` for your mail server because `smtp` won't resolve correctly. Creating a linked server or replication that references SQL Server VMs inside the same virtual network also requires an FQDN and a default DNS suffix. For example, `SQLVM.internal.cloudapp.net`. For more information, see [Name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). > [!IMPORTANT]
-> Updating virtual network DNS servers won't affect SQL Managed Instance immediately. The SQL Managed Instance DNS configuration is updated after the DHCP lease expires or after the platform upgrade, whichever occurs first. **Users are advised to set their virtual network DNS configuration before creating their first managed instance.**
+> Updating virtual network DNS servers won't affect SQL Managed Instance immediately. See [how to synchronize virtual network DNS servers setting on SQL Managed Instance virtual cluster](synchronize-vnet-dns-servers-setting-on-virtual-cluster.md) for more details.
## Next steps

- For an overview, see [What is Azure SQL Managed Instance?](sql-managed-instance-paas-overview.md).
- For a tutorial showing you how to create a new managed instance, see [Create a managed instance](instance-create-quickstart.md).
-- For information about configuring a VNet for a managed instance, see [VNet configuration for managed instances](connectivity-architecture-overview.md).
+- For information about configuring a VNet for a managed instance, see [VNet configuration for managed instances](connectivity-architecture-overview.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/long-term-backup-retention-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/long-term-backup-retention-configure.md
@@ -56,27 +56,61 @@ $resourceGroup = "<resourceGroupName>"
$dbName = "<databaseName>"

Connect-AzAccount
+
Select-AzSubscription -SubscriptionId $subId

$instance = Get-AzSqlInstance -Name $instanceName -ResourceGroupName $resourceGroup

# create LTR policy with WeeklyRetention = 12 weeks. MonthlyRetention and YearlyRetention = 0 by default.
-Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy -InstanceName $instanceName `
- -DatabaseName $dbName -ResourceGroupName $resourceGroup -WeeklyRetention P12W
+$LTRPolicy = @{
+ InstanceName = $instanceName
+ DatabaseName = $dbName
+ ResourceGroupName = $resourceGroup
+ WeeklyRetention = 'P12W'
+}
+Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy @LTRPolicy
# create LTR policy with WeeklyRetention = 12 weeks, YearlyRetention = 5 years and WeekOfYear = 16 (week of April 15). MonthlyRetention = 0 by default.
-Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy -InstanceName $instanceName `
- -DatabaseName $dbName -ResourceGroupName $resourceGroup -WeeklyRetention P12W -YearlyRetention P5Y -WeekOfYear 16
+$LTRPolicy = @{
+ InstanceName = $instanceName
+ DatabaseName = $dbName
+ ResourceGroupName = $resourceGroup
+ WeeklyRetention = 'P12W'
+ YearlyRetention = 'P5Y'
+ WeekOfYear = '16'
+}
+Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy @LTRPolicy
```

## View LTR policies
-This example shows how to list the LTR policies within an instance
+This example shows how to list the LTR policies within an instance for a single database
+
+```powershell
+# gets the current version of LTR policy for a database
+$LTRPolicy = @{
+ InstanceName = $instanceName
+ DatabaseName = $dbName
+ ResourceGroupName = $resourceGroup
+}
+Get-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy @LTRPolicy
+```
+
+This example shows how to list the LTR policies for all of the databases on an instance
```powershell
-# gets the current version of LTR policy for the database
-$ltrPolicies = Get-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy -InstanceName $instanceName `
- -DatabaseName $dbName -ResourceGroupName $resourceGroup
+# gets the current version of LTR policy for all of the databases on an instance
+
+$Databases = Get-AzSqlInstanceDatabase -ResourceGroupName $resourceGroup -InstanceName $instanceName
+
+$LTRParams = @{
+ InstanceName = $instanceName
+ ResourceGroupName = $resourceGroup
+}
+
+foreach($database in $Databases.Name){
+ Get-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy @LTRParams -DatabaseName $database
+}
```

## Clear an LTR policy
@@ -84,8 +118,14 @@ $ltrPolicies = Get-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy -InstanceN
This example shows how to clear an LTR policy from a database

```powershell
-Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy -InstanceName $instanceName `
- -DatabaseName $dbName -ResourceGroupName $resourceGroup -RemovePolicy
+# remove the LTR policy from a database
+$LTRPolicy = @{
+ InstanceName = $instanceName
+ DatabaseName = $dbName
+ ResourceGroupName = $resourceGroup
+ RemovePolicy = $true
+}
+Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy @LTRPolicy
```

## View LTR backups
@@ -93,21 +133,42 @@ Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy -InstanceName $instanceNa
This example shows how to list the LTR backups within an instance.

```powershell
+
+$instance = Get-AzSqlInstance -Name $instanceName -ResourceGroupName $resourceGroup
+
# get the list of all LTR backups in a specific Azure region
# backups are grouped by the logical database id, within each group they are ordered by the timestamp, the earliest backup first
-$ltrBackups = Get-AzSqlInstanceDatabaseLongTermRetentionBackup -Location $instance.Location
+Get-AzSqlInstanceDatabaseLongTermRetentionBackup -Location $instance.Location
# get the list of LTR backups from the Azure region under the given managed instance
-$ltrBackups = Get-AzSqlInstanceDatabaseLongTermRetentionBackup -Location $instance.Location -InstanceName $instanceName
+$LTRBackupParam = @{
+ Location = $instance.Location
+ InstanceName = $instanceName
+}
+Get-AzSqlInstanceDatabaseLongTermRetentionBackup @LTRBackupParam
# get the LTR backups for a specific database from the Azure region under the given managed instance
-$ltrBackups = Get-AzSqlInstanceDatabaseLongTermRetentionBackup -Location $instance.Location -InstanceName $instanceName -DatabaseName $dbName
+$LTRBackupParam = @{
+ Location = $instance.Location
+ InstanceName = $instanceName
+ DatabaseName = $dbName
+}
+Get-AzSqlInstanceDatabaseLongTermRetentionBackup @LTRBackupParam
# list LTR backups only from live databases (you have option to choose All/Live/Deleted)
-$ltrBackups = Get-AzSqlInstanceDatabaseLongTermRetentionBackup -Location $instance.Location -DatabaseState Live
+$LTRBackupParam = @{
+ Location = $instance.Location
+ DatabaseState = 'Live'
+}
+Get-AzSqlInstanceDatabaseLongTermRetentionBackup @LTRBackupParam
# only list the latest LTR backup for each database
-$ltrBackups = Get-AzSqlInstanceDatabaseLongTermRetentionBackup -Location $instance.Location -InstanceName $instanceName -OnlyLatestPerDatabase
+$LTRBackupParam = @{
+ Location = $instance.Location
+ InstanceName = $instanceName
+ OnlyLatestPerDatabase = $true
+}
+Get-AzSqlInstanceDatabaseLongTermRetentionBackup @LTRBackupParam
```

## Delete LTR backups
@@ -116,6 +177,13 @@ This example shows how to delete an LTR backup from the list of backups.
```powershell
# remove the earliest backup
+# get the LTR backups for a specific database from the Azure region under the given managed instance
+$LTRBackupParam = @{
+ Location = $instance.Location
+ InstanceName = $instanceName
+ DatabaseName = $dbName
+}
+$ltrBackups = Get-AzSqlInstanceDatabaseLongTermRetentionBackup @LTRBackupParam
$ltrBackup = $ltrBackups[0]
Remove-AzSqlInstanceDatabaseLongTermRetentionBackup -ResourceId $ltrBackup.ResourceId
```
@@ -129,8 +197,22 @@ This example shows how to restore from an LTR backup. Note, this interface did n
```powershell
# restore a specific LTR backup as a P1 database on the instance $instanceName of the resource group $resourceGroup
-Restore-AzSqlInstanceDatabase -FromLongTermRetentionBackup -ResourceId $ltrBackup.ResourceId `
- -TargetInstanceName $instanceName -TargetResourceGroupName $resourceGroup -TargetInstanceDatabaseName $dbName
+$LTRBackupParam = @{
+ Location = $instance.Location
+ InstanceName = $instanceName
+    DatabaseName = $dbName
+ OnlyLatestPerDatabase = $true
+}
+$ltrBackup = Get-AzSqlInstanceDatabaseLongTermRetentionBackup @LTRBackupParam
+
+$RestoreLTRParam = @{
+ TargetInstanceName = $instanceName
+ TargetResourceGroupName = $resourceGroup
+ TargetInstanceDatabaseName = $dbName
+ FromLongTermRetentionBackup = $true
+ ResourceId = $ltrBackup.ResourceId
+}
+Restore-AzSqlInstanceDatabase @RestoreLTRParam
```

> [!IMPORTANT]
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/public-endpoint-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/public-endpoint-configure.md
@@ -9,7 +9,7 @@
Previously updated : 05/07/2019 Last updated : 02/08/2021

# Configure public endpoint in Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
@@ -106,10 +106,10 @@ Set-AzSqlInstance -PublicDataEndpointEnabled $false -force
## Obtaining the managed instance public endpoint connection string

1. Navigate to the managed instance configuration page that has been enabled for public endpoint. Select the **Connection strings** tab under the **Settings** configuration.
-1. Note that the public endpoint host name comes in the format <mi_name>.**public**.<dns_zone>.database.windows.net and that the port used for the connection is 3342.
+1. Note that the public endpoint host name comes in the format <mi_name>.**public**.<dns_zone>.database.windows.net and that the port used for the connection is 3342. Here's an example of a server value of the connection string denoting the public endpoint port that can be used in SQL Server Management Studio or Azure Data Studio connections: `<mi_name>.public.<dns_zone>.database.windows.net,3342`
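As a quick connectivity check, a sketch using PowerShell could look like the following, assuming the `SqlServer` module and SQL authentication; the host name and credentials are placeholders to replace with your own values:

```powershell
# The ',3342' suffix targets the public endpoint port noted above.
$server = 'mi_name.public.dns_zone.database.windows.net,3342'
Invoke-Sqlcmd -ServerInstance $server -Database 'master' -Username 'sqladmin' `
    -Password '<password>' -Query 'SELECT @@VERSION'
```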
![Screenshot shows the connection strings for your public and private endpoints.](./media/public-endpoint-configure/mi-public-endpoint-conn-string.png)

## Next steps
-Learn about using [Azure SQL Managed Instance securely with public endpoint](public-endpoint-overview.md).
+Learn about using [Azure SQL Managed Instance securely with public endpoint](public-endpoint-overview.md).
backup https://docs.microsoft.com/en-us/azure/backup/azure-file-share-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/azure-file-share-support-matrix.md
@@ -55,7 +55,7 @@ Azure file shares backup is available in all regions **except** for: Germany Cen
| Setting | Limit |
| | - |
| Maximum number of restores per day | 10 |
-| Maximum number of files per restore | 10 |
+| Maximum number of files per restore | 99 |
| Maximum recommended restore size per restore for large file shares | 15 TiB |

## Retention limits
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-arm-restore-vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-restore-vms.md
@@ -11,13 +11,13 @@ This article describes how to restore Azure VM data from the recovery points sto
## Restore options
-Azure Backup provides a number of ways to restore a VM.
+Azure Backup provides several ways to restore a VM.
**Restore option** | **Details**
--- | ---
**Create a new VM** | Quickly creates and gets a basic VM up and running from a restore point.<br/><br/> You can specify a name for the VM, select the resource group and virtual network (VNet) in which it will be placed, and specify a storage account for the restored VM. The new VM must be created in the same region as the source VM.<br><br>If a VM restore fails because an Azure VM SKU wasn't available in the specified region of Azure, or because of any other issues, Azure Backup still restores the disks in the specified resource group.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell.
-**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs.<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md).
+**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs and unmanaged VMs.<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md).
**Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../best-practices-availability-paired-regions.md#what-are-paired-regions).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br> <li> [Create a VM](#create-a-vm) <br> <li> [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.

> [!NOTE]
@@ -175,7 +175,7 @@ Currently, secondary region [RPO](azure-backup-glossary.md#rpo-recovery-point-ob
[Azure zone pinned VMs](https://docs.microsoft.com/azure/virtual-machines/windows/create-portal-availability-zone) can be restored in any [availability zones](https://docs.microsoft.com/azure/availability-zones/az-overview) of the same region.
-In the restore process, you'll see the option **Availability Zone.** You'll see your default zone first. To choose a different zone, choose the number of the zone of your choice. If the pinned zone is unavailable, you won't be able to restore the data to another zone because the backed-up data isn't zonally-replicated.
+In the restore process, you'll see the option **Availability Zone.** You'll see your default zone first. To choose a different zone, choose the number of the zone of your choice. If the pinned zone is unavailable, you won't be able to restore the data to another zone because the backed-up data isn't zonally replicated.
![Choose availability zone](./media/backup-azure-arm-restore-vms/cross-zonal-restore.png)
@@ -194,7 +194,7 @@ You're provided with an option to restore [unmanaged disks](../storage/common/st
## Restore VMs with special configurations
-There are a number of common scenarios in which you might need to restore VMs.
+There are many common scenarios in which you might need to restore VMs.
**Scenario** | **Guidance** |
@@ -238,7 +238,7 @@ After you trigger the restore operation, the backup service creates a job for tr
## Post-restore steps
-There are a number of things to note after restoring a VM:
+There are a few things to note after restoring a VM:
- Extensions present during the backup configuration are installed, but not enabled. If you see an issue, reinstall the extensions.
- If the backed-up VM had a static IP address, the restored VM will have a dynamic IP address to avoid conflict. You can [add a static IP address to the restored VM](/powershell/module/az.network/set-aznetworkinterfaceipconfig#description).
backup https://docs.microsoft.com/en-us/azure/backup/backup-support-matrix-iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
@@ -76,7 +76,7 @@ For Azure VM Linux backups, Azure Backup supports the list of Linux [distributio
- Azure Backup doesn't support 32-bit operating systems.
- Other bring-your-own Linux distributions might work as long as the [Azure VM agent for Linux](../virtual-machines/extensions/agent-linux.md) is available on the VM, and as long as Python is supported.
- Azure Backup doesn't support a proxy-configured Linux VM if it doesn't have Python version 2.7 installed.
-- Azure Backup doesn't support backing up NFS files that are mounted from storage, or from any other NFS server, to Linux or Windows machines. It only backs up disks which are locally attached to the VM.
+- Azure Backup doesn't support backing up NFS files that are mounted from storage, or from any other NFS server, to Linux or Windows machines. It only backs up disks that are locally attached to the VM.
## Backup frequency and retention
@@ -97,7 +97,7 @@ Recovery points on DPM/MABS disk | 64 for file servers, and 448 for app servers.
| **Create a new VM** | Quickly creates and gets a basic VM up and running from a restore point.<br/><br/> You can specify a name for the VM, select the resource group and virtual network (VNet) in which it will be placed, and specify a storage account for the restored VM. The new VM must be created in the same region as the source VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell.
-**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs and for VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) and [Key Vault](../key-vault/general/overview.md).
+**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs and for VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) and [Key Vault](../key-vault/general/overview.md).
**Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../best-practices-availability-paired-regions.md#what-are-paired-regions).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the options below:<br> <li> [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> <li> [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.

## Support for file-level restore
@@ -135,17 +135,17 @@ Restore VM in different virtual network |Supported.<br/><br/> The virtual networ
VM size |Any Azure VM size with at least 2 CPU cores and 1-GB RAM.<br/><br/> [Learn more.](../virtual-machines/sizes.md)
Back up VMs in [availability sets](../virtual-machines/availability.md#availability-sets) | Supported.<br/><br/> You can't restore a VM in an availability set by using the option to quickly create a VM. Instead, when you restore the VM, restore the disk and use it to deploy a VM, or restore a disk and use it to replace an existing disk.
Back up VMs that are deployed with [Hybrid Use Benefit (HUB)](../virtual-machines/windows/hybrid-use-benefit-licensing.md) | Supported.
-Back up VMs that are deployed from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=virtual-machine-images)<br/><br/> (Published by Microsoft, third party) |Supported.<br/><br/> The VM must be running a supported operating system.<br/><br/> When recovering files on the VM, you can restore only to a compatible OS (not an earlier or later OS). We don't restore Azure Marketplace VMs backed as VMs, as these need purchase information. They are only restored as disks.
+Back up VMs that are deployed from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=virtual-machine-images)<br/><br/> (Published by Microsoft, third party) |Supported.<br/><br/> The VM must be running a supported operating system.<br/><br/> When recovering files on the VM, you can restore only to a compatible OS (not an earlier or later OS). We don't restore Azure Marketplace VMs backed as VMs, as these need purchase information. They're only restored as disks.
Back up VMs that are deployed from a custom image (third-party) |Supported.<br/><br/> The VM must be running a supported operating system.<br/><br/> When recovering files on the VM, you can restore only to a compatible OS (not an earlier or later OS).
Back up VMs that are migrated to Azure| Supported.<br/><br/> To back up the VM, the VM agent must be installed on the migrated machine.
Back up Multi-VM consistency | Azure Backup doesn't provide data and application consistency across multiple VMs.
Backup with [Diagnostic Settings](../azure-monitor/platform/platform-logs-overview.md) | Unsupported. <br/><br/> If the restore of the Azure VM with diagnostic settings is triggered using [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, then the restore fails.
-Restore of Zone-pinned VMs | Supported (for a VM that's backed-up after Jan 2019 and where [availability zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) are available).<br/><br/>We currently support restoring to the same zone that's pinned in VMs. However, if the zone is unavailable due to an outage, the restore will fail.
+Restore of Zone-pinned VMs | Supported (for a VM that's backed-up after Jan 2019 and where [availability zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) are available).<br/><br/>We currently support restoring to the same zone that's pinned in VMs. However, if the zone is unavailable because of an outage, the restore will fail.
Gen2 VMs | Supported <br> Azure Backup supports backup and restore of [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/). When these VMs are restored from Recovery point, they're restored as [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/).
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Supported for managed VMs.
[Spot VMs](../virtual-machines/spot-vms.md) | Unsupported. Azure Backup restores Spot VMs as regular Azure VMs.
[Azure Dedicated Host](https://docs.microsoft.com/azure/virtual-machines/dedicated-hosts) | Supported
-Windows Storage Spaces configuration of standalone Azure VMs | Supported
+Windows Storage Spaces configuration of standalone Azure VMs | Supported
## VM storage support
@@ -156,13 +156,13 @@ Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB
Storage type | Standard HDD, Standard SSD, Premium SSD.
Managed disks | Supported.
Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup.
-Disks with Write Accelerator enabled | As of November 23, 2020, supported only in the Korea Central (KRC) and South Africa North (SAN) regions for a limited number of subscriptions. For those supported subscriptions, Azure Backup will backup the virtual machines having disks which are Write Accelerated (WA) enabled during backup.<br><br>For the unsupported regions, internet connectivity is required on the VM to take snapshots of Virtual Machines with WA enabled.<br><br> **Important note**: In those unsupported regions, virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
+Disks with Write Accelerator enabled | As of November 23, 2020, supported only in the Korea Central (KRC) and South Africa North (SAN) regions for a limited number of subscriptions. For those supported subscriptions, Azure Backup will back up the virtual machines having disks that are Write Accelerated (WA) enabled during backup.<br><br>For the unsupported regions, internet connectivity is required on the VM to take snapshots of Virtual Machines with WA enabled.<br><br> **Important note**: In those unsupported regions, virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore.
Add disk to protected VM | Supported.
Resize disk on protected VM | Supported.
Shared storage| Backing up VMs using Cluster Shared Volume (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks containing CSV volumes might not come up.
[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.
-Ultra SSD disks | Not supported. For more details, see these [limitations](selective-disk-backup-restore.md#limitations).
+Ultra SSD disks | Not supported. For more information, see these [limitations](selective-disk-backup-restore.md#limitations).
[Temporary disks](https://docs.microsoft.com/azure/virtual-machines/managed-disks-overview#temporary-disk) | Temporary disks aren't backed up by Azure Backup.

## VM network support
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-powershell.md
@@ -40,7 +40,7 @@ Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
```powershell $storageAccount = New-AzStorageAccount -ResourceGroupName ΓÇ£ContosOrgΓÇ¥ -Name ΓÇ£contosostorageaccountΓÇ¥ -Location ΓÇ£East USΓÇ¥ -SkuName ΓÇ£Standard_RAGRSΓÇ¥ -Kind ΓÇ£StorageV2ΓÇ¥
- $container = New-AzStorageContainer -Name "ContosoContainer" -Context $storageAccount.Context -Permission Blob
+ $container = New-AzStorageContainer -Name "contosocontainer" -Context $storageAccount.Context -Permission Blob
```

4. Upload your Cloud Service package (cspkg) to the storage account.
@@ -48,8 +48,8 @@ Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
```powershell
$tokenStartTime = Get-Date
$tokenEndTime = $tokenStartTime.AddYears(1)
- $cspkgBlob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cspkg" -Container "ContosoContainer" -Blob "ContosoApp.cspkg" -Context $storageAccount.Context
- $cspkgToken = New-AzStorageBlobSASToken -Container "ContosoContainer" -Blob $cspkgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
+ $cspkgBlob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cspkg" -Container "contosocontainer" -Blob "ContosoApp.cspkg" -Context $storageAccount.Context
+ $cspkgToken = New-AzStorageBlobSASToken -Container "contosocontainer" -Blob $cspkgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
$cspkgUrl = $cspkgBlob.ICloudBlob.Uri.AbsoluteUri + $cspkgToken
```
@@ -57,8 +57,8 @@ Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
5. Upload your cloud service configuration (cscfg) to the storage account.

```powershell
- $cscfgBlob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cscfg" -Container ContosoContainer -Blob "ContosoApp.cscfg" -Context $storageAccount.Context
- $cscfgToken = New-AzStorageBlobSASToken -Container "ContosoContainer" -Blob $cscfgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
+ $cscfgBlob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cscfg" -Container contosocontainer -Blob "ContosoApp.cscfg" -Context $storageAccount.Context
+ $cscfgToken = New-AzStorageBlobSASToken -Container "contosocontainer" -Blob $cscfgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
$cscfgUrl = $cscfgBlob.ICloudBlob.Uri.AbsoluteUri + $cscfgToken
```
@@ -87,13 +87,13 @@ Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
9. Create a Key Vault. This Key Vault will be used to store certificates that are associated with the Cloud Service (extended support) roles. Ensure that you have enabled 'Access policies' (in portal) for access to 'Azure Virtual Machines for deployment' and 'Azure Resource Manager for template deployment'. The Key Vault must be located in the same region and subscription as the cloud service and have a unique name. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).

```powershell
- New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosoOrg" -Location "East US"
+ New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosOrg" -Location "East US"
```

10. Update the Key Vault access policy and grant certificate permissions to your user account.

```powershell
- Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosoOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete
+ Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete
```

Alternatively, set access policy via ObjectId (which can be obtained by running `Get-AzADUser`)
@@ -132,13 +132,20 @@ Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
```powershell
$credential = Get-Credential
$expiration = (Get-Date).AddYears(1)
- $extension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'
+ $rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'
$storageAccountKey = Get-AzStorageAccountKey -ResourceGroupName "ContosOrg" -Name "contosostorageaccount"
$configFile = "<WAD public configuration file path>"
- $wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosCS" -StorageAccountName "ContosSA" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
+ $wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosCS" -StorageAccountName "contosostorageaccount" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
$extensionProfile = @{extension = @($rdpExtension, $wadExtension)}
```
+ Note that the configuration file (`$configFile`) should contain only `PublicConfig` tags and must include a namespace, as follows:
+ ```xml
+ <?xml version="1.0" encoding="utf-8"?>
+ <PublicConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
+ ...............
+ </PublicConfig>
+ ```
15. (Optional) Define tags as a PowerShell hash table that you want to add to your cloud service.

```powershell
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-how-to-scale-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-scale-portal.md
@@ -1,6 +1,6 @@
Title: Auto scale a cloud service (classic) in the portal | Microsoft Docs
-description: Learn how to use the portal to configure auto scale rules for a cloud service web role or worker role in Azure.
+description: Learn how to use the portal to configure auto scale rules for cloud service (classic) roles in Azure.
Last updated 10/14/2020
@@ -18,7 +18,7 @@
Conditions can be set for a cloud service worker role that trigger a scale in or out operation. The conditions for the role can be based on the CPU, disk, or network load of the role. You can also set a condition based on a message queue or the metric of some other Azure resource associated with your subscription.

> [!NOTE]
-> This article focuses on Cloud Service web and worker roles. When you create a virtual machine (classic) directly, it is hosted in a cloud service. You can scale a standard virtual machine by associating it with an [availability set](/previous-versions/azure/virtual-machines/windows/classic/configure-availability-classic) and manually turn them on or off.
+> This article focuses on Cloud Service (classic). When you create a virtual machine (classic) directly, it is hosted in a cloud service. You can scale a standard virtual machine by associating it with an [availability set](/previous-versions/azure/virtual-machines/windows/classic/configure-availability-classic) and manually turn them on or off.
## Considerations

You should consider the following information before you configure scaling for your application:
@@ -103,4 +103,4 @@ This setting removes automated scaling from the role and then you can set the in
2. A role instance slider to set the instances to scale to.
3. Instances of the role to scale to.
-After you have configured the scale settings, select the **Save** icon at the top.
+After you have configured the scale settings, select the **Save** icon at the top.
cloud-shell https://docs.microsoft.com/en-us/azure/cloud-shell/using-cloud-shell-editor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/using-cloud-shell-editor.md
@@ -48,5 +48,6 @@ To launch the command palette, use the `F1` key when focus is set on the editor.
Language highlighting in the Cloud Shell editor is supported through upstream functionality in the [Monaco Editor](https://github.com/Microsoft/monaco-editor)'s use of Monarch syntax definitions. To learn how to make contributions, read the [Monaco contributor guide](https://github.com/Microsoft/monaco-editor/blob/master/CONTRIBUTING.md).

## Next steps
-[Try the quickstart for Bash in Cloud Shell](quickstart.md)
-[View the full list of integrated Cloud Shell tools](features.md)
+
+- [Try the quickstart for Bash in Cloud Shell](quickstart.md)
+- [View the full list of integrated Cloud Shell tools](features.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/concept-recognizing-text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-recognizing-text.md
@@ -128,20 +128,20 @@ See the following example of a successful JSON response:
}
```
-## Select page(s) or page ranges for text extraction
-With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-2/operations/5d986960601faab4bf452005), for large multi-page documents, use the `pages` query parameter to specify page numbers or page ranges to extract text from only those pages. For example, the following example shows a document with 10 pages for both cases - all pages (1-10) and selected pages (3-6).
--
-## Specify text line order in the output
-With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-2/operations/5d986960601faab4bf452005), specify the order in which the text lines are output with the `read order` query parameter. Choose between `basic` for the default left-right and top-down line order or `natural` for a more human reading-friendly line order. The following example shows both sets of line order numbers for the same two-column document. Notice that The image on the right shows sequential line numbers within each column to represent the reading order.
+## Natural reading order output
+With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-2/operations/5d986960601faab4bf452005), specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example.
## Handwritten classification for text lines (Latin only)

The [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-2/operations/5d986960601faab4bf452005) response includes a classification of whether each text line is in handwriting style, along with a confidence score. This feature is supported only for Latin languages. The following example shows the handwritten classification for the text in the image.

+
+## Select page(s) or page ranges for text extraction
+With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-2/operations/5d986960601faab4bf452005), for large multi-page documents, use the `pages` query parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
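A rough sketch of such a call from PowerShell follows. Only `readingOrder` and `pages` are the documented query parameters; the endpoint, key, document URL, and exact preview route are assumptions to verify against the API reference linked above.

```powershell
$endpoint = 'https://<your-resource>.cognitiveservices.azure.com'
$uri = "$endpoint/vision/v3.2-preview.2/read/analyze?readingOrder=natural&pages=3-6"
$headers = @{ 'Ocp-Apim-Subscription-Key' = '<your-key>' }
$body = @{ url = 'https://example.com/ten-page-document.pdf' } | ConvertTo-Json

# Read is asynchronous: poll the URL returned in the Operation-Location header for the results.
$response = Invoke-WebRequest -Method Post -Uri $uri -Headers $headers -Body $body -ContentType 'application/json'
$response.Headers['Operation-Location']
```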
+
## Supported languages

The Read APIs support a total of 73 languages for print style text. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr). Handwritten style OCR is supported exclusively for English.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/spatial-analysis-operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
@@ -960,7 +960,7 @@ In order to get the best performance and utilization of the GPUs, you can deploy
``` | Name | Type| Description| ||||
-| `batch_size` | int | Indicates the number of cameras that will be used in the operation. |
+| `batch_size` | int | If all of the cameras have the same resolution, set `batch_size` to the number of cameras that will be used in that operation; otherwise, set `batch_size` to 1 or leave the default (1), which indicates no batching is used. |
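As a purely illustrative fragment (only `batch_size` is the documented setting; the surrounding `parameters` key is a placeholder for wherever the operation's settings live in your deployment manifest), batching four same-resolution cameras might look like:

```json
{
  "parameters": {
    "batch_size": 4
  }
}
```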
## Next steps
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/whats-new.md
@@ -20,12 +20,12 @@ Learn what's new in the service. These items may be release notes, videos, blog
### Read API v3.2 Public Preview with OCR support for 73 languages

Computer Vision's Read API v3.2 public preview includes these capabilities:
-* OCR for a total of [73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and major Latin languages.
-* Choose whether to output the text lines in the left-right and top-bottom (default) order or a more natural reading order.
-* For each text line output, indicate whether its handwriting style or not along with a confidence score (Latin languages only).
+* [OCR for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
+* Output the text lines in the natural reading order.
+* Classify text lines as handwriting style or not along with a confidence score (Latin languages only).
* For a multi-page document, extract text only for selected pages or a page range.
-See the [Read API overview](concept-recognizing-text.md) to learn more.
+See [Read preview features](concept-recognizing-text.md#natural-reading-order-output) for more information.
> [!div class="nextstepaction"]
> [Use the Read API v3.2 Public Preview](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-2/operations/5d986960601faab4bf452005)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Concepts/azure-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/azure-resources.md
@@ -239,7 +239,30 @@ After the resources are created, they have the same name, except for the optiona
> [!TIP]
> Use a naming convention to indicate pricing tiers within the name of the resource or the resource group. When you receive errors from creating a new knowledge base, or adding new documents, the Cognitive Search pricing tier limit is a common issue.
-### Resource purposes
+# [QnA Maker managed (preview release)](#tab/v2)
+
+The resource name for the QnA Maker managed (Preview) resource, such as `qna-westus-f0-b`, is also used to name the other resources.
+
+The Azure portal create window allows you to create a QnA Maker managed (Preview) resource and select the pricing tiers for the other resources.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of Azure portal for QnA Maker managed (Preview) resource creation](../media/qnamaker-how-to-setup-service/enter-qnamaker-v2-info.png)
+After the resources are created, they have the same name.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of Azure portal resource listing QnA Maker managed (Preview)](../media/qnamaker-how-to-setup-service/resources-created-v2.png)
+
+> [!TIP]
+> Create a new resource group when you create a QnA Maker resource. That allows you to see all resources associated with the QnA Maker managed (Preview) resource when searching by resource group.
+
+> [!TIP]
+> Use a naming convention to indicate pricing tiers within the name of the resource or the resource group. When you receive errors from creating a new knowledge base, or adding new documents, the Cognitive Search pricing tier limit is a common issue.
+++
+## Resource purposes
+
+# [QnA Maker GA (stable release)](#tab/v1)
Each Azure resource created with QnA Maker has a specific purpose:
@@ -249,6 +272,15 @@ Each Azure resource created with QnA Maker has a specific purpose:
* App Plan Service
* Application Insights Service
+### QnA Maker resource
+
+The QnA Maker resource provides access to the authoring and publishing APIs as well as the natural language processing (NLP) based second ranking layer (ranker #2) of the QnA pairs at runtime.
+
+The second ranking applies intelligent filters that can include metadata and follow-up prompts.
+
+#### QnA Maker resource configuration settings
+
+When you create a new knowledge base in the [QnA Maker portal](https://qnamaker.ai), the **Language** setting is the only setting that is applied at the resource level. You select the language when you create the first knowledge base for the resource.
### Cognitive Search resource
@@ -267,15 +299,11 @@ A resource priced to hold 15 indexes, will hold 14 published knowledge bases, an
The first knowledge base created in the QnA Maker resource is used to determine the _single_ language set for the Cognitive Search resource and all its indexes. You can only have _one language set_ for a QnA Maker service.
-### QnA Maker resource
-
-The QnA Maker resource provides access to the authoring and publishing APIs as well as the natural language processing (NLP) based second ranking layer (ranker #2) of the QnA pairs at runtime.
-
-The second ranking applies intelligent filters that can include metadata and follow-up prompts.
+#### Using a single Cognitive Search service
-#### QnA Maker resource configuration settings
+If you create a QnA service and its dependencies (such as Search) through the portal, a Search service is created for you and linked to the QnA Maker service. After these resources are created, you can update the App Service setting to use a previously existing Search service and remove the one you just created.
-When you create a new knowledge base in the [QnA Maker portal](https://qnamaker.ai), the **Language** setting is the only setting that is applied at the resource level. You select the language when you create the first knowledge base for the resource.
+Learn [how to configure](../How-To/set-up-qnamaker-service-azure.md#configure-qna-maker-to-use-different-cognitive-search-resource) QnA Maker to use a different Cognitive Search resource than the one created as part of the QnA Maker resource creation process.
### App service and App service plan
@@ -289,7 +317,7 @@ To query the published knowledge base, all published knowledge bases use the sam
[Application Insights](../../../azure-monitor/app/app-insights-overview.md) is used to collect chat logs and telemetry. Review the common [Kusto queries](../how-to/get-analytics-knowledge-base.md) for information about your service.
-## Share services with QnA Maker
+### Share services with QnA Maker
QnA Maker creates several Azure resources. To reduce management and benefit from cost sharing, use the following table to understand what you can and can't share:
@@ -301,38 +329,17 @@ QnA Maker creates several Azure resources. To reduce management and benefit from
|Application Insights|✔|Can be shared|
|Search service|✔|1. `testkb` is a reserved name for the QnAMaker service; it can't be used by others.<br>2. Synonym map by the name `synonym-map` is reserved for the QnAMaker service.<br>3. The number of published knowledge bases is limited by Search service tier. If there are free indexes available, other services can use them.|
-### Using a single Cognitive Search service
-
-If you create a QnA service and its dependencies (such as Search) through the portal, a Search service is created for you and linked to the QnA Maker service. After these resources are created, you can update the App Service setting to use a previously existing Search service and remove the one you just created.
-
-Learn [how to configure](../How-To/set-up-qnamaker-service-azure.md#configure-qna-maker-to-use-different-cognitive-search-resource) QnA Maker to use a different Cognitive Service resource than the one created as part of the QnA Maker resource creation process.
- # [QnA Maker managed (preview release)](#tab/v2)
-The resource name for the QnA Maker managed (Preview) resource, such as `qna-westus-f0-b`, is also used to name the other resources.
-
-The Azure portal create window allows you to create a QnA Maker managed (Preview) resource and select the pricing tiers for the other resources.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of Azure portal for QnA Maker managed (Preview) resource creation](../media/qnamaker-how-to-setup-service/enter-qnamaker-v2-info.png)
-After the resources are created, they have the same name.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of Azure portal resource listing QnA Maker managed (Preview)](../media/qnamaker-how-to-setup-service/resources-created-v2.png)
-
-> [!TIP]
-> Create a new resource group when you create a QnA Maker resource. That allows you to see all resources associated with the QnA Maker managed (Preview) resource when searching by resource group.
-
-> [!TIP]
-> Use a naming convention to indicate pricing tiers within the name of the resource or the resource group. When you receive errors from creating a new knowledge base, or adding new documents, the Cognitive Search pricing tier limit is a common issue.
-
-### Resource purposes
Each Azure resource created with QnA Maker managed (Preview) has a specific purpose:

* QnA Maker resource
* Cognitive Search resource
+### QnA Maker resource
+
+The QnA Maker managed (Preview) resource provides access to the authoring and publishing APIs, hosts the ranking runtime, and provides telemetry.
+
### Azure Cognitive Search resource

The [Cognitive Search](../../../search/index.yml) resource is used to:
@@ -350,10 +357,6 @@ For example, if your tier has 15 allowed indexes, you can publish 14 knowledge b
With QnA Maker managed (Preview), you can choose to set up your QnA Maker service for knowledge bases in a single language or in multiple languages. You make this choice when you create the first knowledge base in your QnA Maker service. See [pricing tier considerations](#pricing-tier-considerations) to learn how to enable the language setting per knowledge base.
-### QnA Maker resource
-
-The QnA Maker managed (Preview) resource provides access to the authoring and publishing APIs, hosts the ranking runtime as well as provides telemetry.
- ## Next steps
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Concepts/query-knowledge-base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/query-knowledge-base.md
@@ -26,7 +26,7 @@ The process is explained in the following table.
|1|The client application sends the user query to the [GenerateAnswer API](../how-to/metadata-generateanswer-usage.md).|
|2|QnA Maker preprocesses the user query with language detection, spellers, and word breakers.|
|3|This preprocessing alters the user query to get the best search results.|
-|4|This altered query is sent to an Azure Cognitive Search Index, which receives the `top` number of results. If the correct answer isn't in these results, increase the value of `top` slightly. Generally, a value of 10 for `top` works in 90% of queries.|
+|4|This altered query is sent to an Azure Cognitive Search Index, which receives the `top` number of results. If the correct answer isn't in these results, increase the value of `top` slightly. Generally, a value of 10 for `top` works in 90% of queries. Azure search filters [stop words](https://github.com/Azure-Samples/azure-search-sample-dat) in this step.|
|5|QnA Maker uses syntactic and semantic based featurization to determine the similarity between the user query and the fetched QnA results.|
|6|The machine-learned ranker model uses the different features, from step 5, to determine the confidence scores and the new ranking order.|
|7|The new results are returned to the client application in ranked order.|
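To make the `top` setting in step 4 concrete, here's a minimal sketch of a GenerateAnswer call, assuming Python with `requests`; the host, endpoint key, and knowledge base ID are hypothetical placeholders for a published knowledge base.

```python
import requests

# Hypothetical placeholders for a published knowledge base.
host = "https://<your-qna-service>.azurewebsites.net/qnamaker"
endpoint_key = "<endpoint-key>"
kb_id = "<knowledge-base-id>"

response = requests.post(
    f"{host}/knowledgebases/{kb_id}/generateAnswer",
    headers={"Authorization": f"EndpointKey {endpoint_key}"},
    # Raise `top` slightly if the correct answer isn't among the results.
    json={"question": "How do I reset my password?", "top": 10},
)
response.raise_for_status()
for answer in response.json()["answers"]:
    print(answer["score"], answer["answer"])
```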
@@ -49,7 +49,7 @@ The process is explained in the following table.
|1|The client application sends the user query to the [GenerateAnswer API](../how-to/metadata-generateanswer-usage.md).|
|2|QnA Maker preprocesses the user query with language detection, spellers, and word breakers.|
|3|This preprocessing alters the user query to get the best search results.|
-|4|This altered query is sent to an Azure Cognitive Search Index, which receives the `top` number of results. If the correct answer isn't in these results, increase the value of `top` slightly. Generally, a value of 10 for `top` works in 90% of queries.|
+|4|This altered query is sent to an Azure Cognitive Search Index, which receives the `top` number of results. If the correct answer isn't in these results, increase the value of `top` slightly. Generally, a value of 10 for `top` works in 90% of queries. Azure search filters [stop words](https://github.com/Azure-Samples/azure-search-sample-dat) in this step.|
|5|QnA Maker uses a state-of-the-art transformer-based model to determine the similarity between the user query and the candidate QnA results fetched from Azure Cognitive Search. The transformer-based model is a deep learning multilingual model that works horizontally across all languages to determine the confidence scores and the new ranking order.|
|6|The new results are returned to the client application in ranked order.|
|||
@@ -120,4 +120,4 @@ The HTTP response is the answer retrieved from the knowledge base, based on the
## Next steps

> [!div class="nextstepaction"]
-> [Confidence score](./confidence-score.md)
+> [Confidence score](./confidence-score.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/containers/container-image-tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/container-image-tags.md
@@ -48,14 +48,14 @@ This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)

Release notes for `3.2-preview.2`:
-* New v3.2 container
+* Distroless release
+* ReadingOrder parameter to choose the text line order in the JSON response
+* Enhanced logging
+* Hotfixes to CJK model
| Image Tags | Notes |
|-|:-|
-| `latest` | |
-| `3.2-preview.2` | |
-| `3.2-preview.1` | |
+| `3.2.2.014850001-49e0eac6-amd64-preview` | |
# [Previous versions](#tab/previous)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/create-communication-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/create-communication-resource.md
@@ -13,7 +13,7 @@
zone_pivot_groups: acs-plat-azp-net

# Quickstart: Create and manage Communication Services resources
-
+
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]

Get started with Azure Communication Services by provisioning your first Communication Services resource. Communication Services resources can be provisioned through the Azure portal or with the .NET management client library. The management client library allows you to create, configure, update, and delete your resource, and it interfaces with [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the client libraries is available in the Azure portal.
@@ -25,6 +25,10 @@ Get started with Azure Communication Services by provisioning your first Communi
[!INCLUDE [Azure portal](./includes/create-resource-azp.md)]
::: zone-end
+
::: zone pivot="platform-net"
[!INCLUDE [.NET](./includes/create-resource-net.md)]
::: zone-end
@@ -37,6 +41,14 @@ After navigating to your Communication Services resource, select **Keys** from t
:::image type="content" source="./media/key.png" alt-text="Screenshot of Communication Services Key page.":::
+You can also access key information using Azure CLI:
+
+```azurecli
+az communication list --resource-group "<resourceGroup>"
+
+az communication list-key --name "<communicationName>" --resource-group "<resourceGroup>"
+```
+
## Store your connection string

Communication Services client libraries use connection strings to authorize requests made to Communication Services. You have several options for storing your connection string:
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/includes/create-resource-azcli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/includes/create-resource-azcli.md
@@ -0,0 +1,39 @@
+Last updated : 1/28/2021
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet/).
+
+## Create Azure Communication Resource
+
+To create an Azure Communication Services resource, [sign in to Azure CLI](https://docs.microsoft.com/cli/azure/authenticate-azure-cli), then run the following command:
+
+```azurecli
+az communication create --name "<communicationName>" --location "Global" --data-location "United States" --resource-group "<resourceGroup>"
+```
+
+You can configure your Communication Services resource with the following options:
+
+* The resource group
+* The name of the Communication Services resource
+* The geography the resource will be associated with
+
+In the next step, you can assign tags to the resource. Tags can be used to organize your Azure resources. See the [resource tagging documentation](../../../azure-resource-manager/management/tag-resources.md) for more information about tags.
+
+## Manage your Communication Services resource
+
+To add tags to your Communication Services resource, run the following commands:
+
+```azurecli
+az communication update --name "<communicationName>" --tags newTag="newVal" --resource-group "<resourceGroup>"
+
+az communication show --name "<communicationName>" --resource-group "<resourceGroup>"
+```
+
+For information on additional commands, see [az communication](https://docs.microsoft.com/cli/azure/ext/communication/communication).
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/includes/pstn-call-js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/includes/pstn-call-js.md
@@ -118,7 +118,7 @@ Use the `webpack-dev-server` to build and run your app. Run the following comman
```console
-npx webpack-dev-server --entry ./client.js --output bundle.js
+npx webpack-dev-server --entry ./client.js --output bundle.js --debug --devtool inline-source-map
```

Open your browser and navigate to `http://localhost:8080/`. You should see the following:
confidential-computing https://docs.microsoft.com/en-us/azure/confidential-computing/confidential-nodes-aks-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
@@ -4,7 +4,7 @@ description: Learn to create an AKS cluster with confidential nodes and deploy a
Previously updated : 12/11/2020 Last updated : 2/5/2020
@@ -70,7 +70,7 @@ az provider register --namespace Microsoft.ContainerService
```

### Azure Confidential Computing feature registration on Azure (optional but recommended)
-Registering the AKS-ConfidentialComputinAddon on the Azure Subscription. This feature will add two daemonsets as discussed in details [here](./confidential-nodes-aks-overview.md#aks-provided-daemon-sets-addon):
+Registering the AKS-ConfidentialComputingAddon on the Azure Subscription. This feature will add two daemonsets, as discussed in detail [here](./confidential-nodes-aks-overview.md#aks-provided-daemon-sets-addon):
1. SGX Device Driver Plugin
2. SGX Attestation Quote Helper
@@ -80,7 +80,7 @@ az feature register --name AKS-ConfidentialComputingAddon --namespace Microsoft.
It might take several minutes for the status to show as Registered. You can check the registration status by using the 'az feature list' command. This feature registration is done only once per subscription. If this was registered previously you can skip the above step:

```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ConfidentialComputinAddon')].{Name:name,State:properties.state}"
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ConfidentialComputingAddon')].{Name:name,State:properties.state}"
```

When the status shows as registered, refresh the registration of the Microsoft.ContainerService resource provider by using the 'az provider register' command:
@@ -138,12 +138,12 @@ This section assumes you have an AKS cluster running already that meets the crit
First, let's add the feature to the Azure subscription.

```azurecli-interactive
-az feature register --name AKS-ConfidentialComputinAddon --namespace Microsoft.ContainerService
+az feature register --name AKS-ConfidentialComputingAddon --namespace Microsoft.ContainerService
```

It might take several minutes for the status to show as Registered. You can check the registration status by using the 'az feature list' command. This feature registration is done only once per subscription. If this was registered previously you can skip the above step:

```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ConfidentialComputinAddon')].{Name:name,State:properties.state}"
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ConfidentialComputingAddon')].{Name:name,State:properties.state}"
```

When the status shows as registered, refresh the registration of the Microsoft.ContainerService resource provider by using the 'az provider register' command:
confidential-computing https://docs.microsoft.com/en-us/azure/confidential-computing/how-to-fortanix-enclave-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/how-to-fortanix-enclave-manager.md
@@ -1,6 +1,6 @@
Title: How To - Run an application with Fortanix Enclave Manager
-description: Learn how to use Fortanix Enclave Manager to convert your containerized images
+ Title: How To - Run an application with Fortanix Confidential Computing Manager
+description: Learn how to use Fortanix Confidential Computing Manager to convert your containerized images
@@ -10,14 +10,14 @@
Last updated 8/12/2020
-# How To: Run an application with Fortanix Enclave Manager
+# How To: Run an application with Fortanix Confidential Computing Manager
-Start running your application in Azure confidential computing using [Fortanix Enclave Manager](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.enclave_manager?tab=Overview) and [Fortanix Node Agent](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.rte_node_agent) from [Fortanix](https://www.fortanix.com/).
+Start running your application in Azure confidential computing using [Fortanix Confidential Computing Manager](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.enclave_manager?tab=Overview) and [Fortanix Node Agent](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.rte_node_agent) from [Fortanix](https://www.fortanix.com/).
Fortanix is a third-party software vendor with products and services built on top of Azure infrastructure. Other third-party providers offer similar confidential computing services on Azure.
-> [!Note]
+> [!Note]
> The products referenced in this document are not under the control of Microsoft. Microsoft is providing this information to you only as a convenience, and the reference to these non-Microsoft products does not imply endorsement by Microsoft.
@@ -29,31 +29,31 @@ This tutorial shows you how to convert your application image to a confidential
## Prerequisites
-1. If you don't have a Fortanix Enclave Manager account, [sign-up](https://em.fortanix.com/auth/sign-up) before you begin.
+1. If you don't have a Fortanix Confidential Computing Manager account, [sign-up](https://em.fortanix.com/auth/sign-up) before you begin.
1. A private [Docker](https://docs.docker.com/) registry to push converted application images.
1. If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/) before you begin.

> [!NOTE]
> Free trial accounts do not have access to the virtual machines used in this tutorial. Please upgrade to a Pay-As-You-Go subscription.
-## Add an application to Fortanix Enclave Manager
-1. Sign in to [Fortanix Enclave Manager (Fortanix EM)](https://em.fortanix.com)
-1. Navigate to the **Accounts** page and select **ADD ACCOUNT** to create a new account.
-
+## Add an application to Fortanix Confidential Computing Manager
+1. Sign in to [Fortanix Confidential Computing Manager (Fortanix CCM)](https://em.fortanix.com).
+1. Navigate to the **Accounts** page and select **ADD ACCOUNT** to create a new account.
+ ![Create an account](media/how-to-fortanix-enclave-manager/create-account.png)
-1. After your account is created, hit **SELECT** to select the newly created account. Now we can start enrolling the compute nodes and creating applications.
-1. Select the **+ APPLICATION** button to add an application. In this example, we'll be adding a Flask Server Enclave OS application.
+1. After your account is created, hit **SELECT** to select the newly created account. Now we can start enrolling the compute nodes and creating applications.
+1. Select the **+ APPLICATION** button to add an application. In this example, we'll be adding a Flask Server Enclave OS application.
-1. Select the **ADD** button for the Enclave OS Application.
+1. Select the **ADD** button for the Enclave OS Application.
![Add application](media/how-to-fortanix-enclave-manager/add-enclave-application.png)

> [!NOTE]
- > This tutorial covers adding Enclave OS Applications only. [Read more](https://support.fortanix.com/hc/en-us/articles/360044746932-Bringing-EDP-Rust-Apps-to-Enclave-Manager) about bringing EDP Rust Applications to Fortanix Enclave Manager.
+ > This tutorial covers adding Enclave OS Applications only. [Read more](https://support.fortanix.com/hc/en-us/articles/360044746932-Bringing-EDP-Rust-Apps-to-Confidential-Computing-Manager) about bringing EDP Rust Applications to Fortanix Confidential Computing Manager.
+
+6. In this tutorial, we'll use Fortanix's docker registry for the sample application. Fill in the details from the following information. Use your private docker registry to keep the output image.
-6. In this tutorial, we'll use Fortanix's docker registry for the sample application. Fill in the details from the following information. Use your private docker registry to keep the output image.
-
- **Application name**: Python Application Server
- **Description**: Python Flask Server
- **Input image name**: fortanix/python-flask
@@ -74,61 +74,61 @@ This tutorial shows you how to convert your application image to a confidential
1. Add a certificate. Fill in the information using the details below and then select **NEXT**:
   - **Domain**: myapp.domain.dom
- - **Type**: Certificate Issued by Enclave Manager
+ - **Type**: Certificate Issued by Confidential Computing Manager
- **Key path**: /appkey.pem
- **Key type**: RSA
- **Certificate path**: /appcert.pem
- **RSA Key Size**: 2048 Bits
-
+ ## Create an Image
-A Fortanix EM Image is a software release or version of an application. Each image is associated with one enclave hash (MRENCLAVE).
-1. On the **Add Image** page, enter the **REGISTRY CREDENTIALS** for **Output image name**. These credentials are used to access the private docker registry where the image will be pushed.
+A Fortanix CCM Image is a software release or version of an application. Each image is associated with one enclave hash (MRENCLAVE).
+1. On the **Add Image** page, enter the **REGISTRY CREDENTIALS** for **Output image name**. These credentials are used to access the private docker registry where the image will be pushed.
![create image](media/how-to-fortanix-enclave-manager/create-image.png)

1. Provide the image tag and select **Create**.

![add tag](media/how-to-fortanix-enclave-manager/add-tag.png)
-## Domain and Image allow listing
-An application whose domain is added to the allow list, will get a TLS Certificate from Fortanix Enclave Manager. Similarly, when an application runs from the converted image, it will try to contact Fortanix Enclave Manager. The application will then ask for a TLS Certificate.
+## Domain and Image allow listing
+An application whose domain is added to the allow list, will get a TLS Certificate from Fortanix Confidential Computing Manager. Similarly, when an application runs from the converted image, it will try to contact Fortanix Confidential Computing Manager. The application will then ask for a TLS Certificate.
-Switch to the **Tasks** tab on the left and approve the pending requests to allow the domain and image.
+Switch to the **Tasks** tab on the left and approve the pending requests to allow the domain and image.
## Enroll compute node agent in Azure

### Generate and copy Join token
-In Fortanix Enclave Manager, you'll create a token. This token allows a compute node in Azure to authenticate itself. You'll need to give this token to your Azure virtual machine.
-1. In the management console, select the **+ ENROLL NODE** button.
+In Fortanix Confidential Computing Manager, you'll create a token. This token allows a compute node in Azure to authenticate itself. You'll need to give this token to your Azure virtual machine.
+1. In the management console, select the **+ ENROLL NODE** button.
1. Select **GENERATE TOKEN** to generate the Join token. Copy the token.

### Enroll nodes into Fortanix Node Agent in Azure Marketplace
-Creating a Fortanix Node Agent will deploy a virtual machine, network interface, virtual network, network security group, and a public IP address into your Azure resource group. Your Azure subscription will be billed hourly for the virtual machine. Before you create a Fortanix Node Agent, review the Azure [virtual machine pricing page](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) for DCsv2-Series. Delete Azure resources when not in use.
+Creating a Fortanix Node Agent will deploy a virtual machine, network interface, virtual network, network security group, and a public IP address into your Azure resource group. Your Azure subscription will be billed hourly for the virtual machine. Before you create a Fortanix Node Agent, review the Azure [virtual machine pricing page](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) for DCsv2-Series. Delete Azure resources when not in use.
1. Go to the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/) and sign in with your Azure credentials.
-1. In the search bar, type **Fortanix Confidential Computing Node Agent**. Select the App that shows up in the search box called **Fortanix Confidential Computing Node Agent** to navigate to the offering's home page.
+1. In the search bar, type **Fortanix Confidential Computing Node Agent**. Select the App that shows up in the search box called **Fortanix Confidential Computing Node Agent** to navigate to the offering's home page.
![search marketplace](media/how-to-fortanix-enclave-manager/search-fortanix-marketplace.png)
-1. Select **Get It Now**, fill in your information if necessary, and select **Continue**. You'll get redirected to the Azure portal.
+1. Select **Get It Now**, fill in your information if necessary, and select **Continue**. You'll get redirected to the Azure portal.
1. Select **Create** to enter the Fortanix Confidential Computing Node Agent deployment page.
-1. On this page, you'll be entering information to deploy a virtual machine. Specifically, this VM is a DCsv2-Series Intel SGX-enabled virtual machine from Azure with Fortanix Node Agent software installed. The Node Agent will allow your converted image to run securely on Intel SGX nodes in Azure. Select the **subscription** and **resource group** where you want to deploy the virtual machine and associated resources.
-
+1. On this page, you'll be entering information to deploy a virtual machine. Specifically, this VM is a DCsv2-Series Intel SGX-enabled virtual machine from Azure with Fortanix Node Agent software installed. The Node Agent will allow your converted image to run securely on Intel SGX nodes in Azure. Select the **subscription** and **resource group** where you want to deploy the virtual machine and associated resources.
+ > [!NOTE]
- > There are constraints when deploying DCsv2-Series virtual machines in Azure. You may need to request quota for additional cores. Read about [confidential computing solutions on Azure VMs](./virtual-machine-solutions.md) for more information.
+ > There are constraints when deploying DCsv2-Series virtual machines in Azure. You may need to request quota for additional cores. Read about [confidential computing solutions on Azure VMs](./virtual-machine-solutions.md) for more information.
1. Select an available region.
-1. Enter a name for your virtual machine in the **Node Name** field.
+1. Enter a name for your virtual machine in the **Node Name** field.
1. Enter a username and password (or SSH key) for authenticating into the virtual machine.
1. Leave the default OS Disk Size as 200 and select a VM Size (Standard_DC4s_v2 will suffice for this tutorial).
1. Paste the token generated earlier in the **Join Token** field.
- ![deploy resource](media/how-to-fortanix-enclave-manager/deploy-fortanix-node-agent.png)
+ ![deploy resource](media/how-to-fortanix-enclave-manager/deploy-fortanix-node-agent1.png)
-1. Select **Review + Create**. Ensure the validation passes and then select **Create**. Once all the resources deploy, the compute node is now enrolled in Enclave Manager.
+1. Select **Review + Create**. Ensure the validation passes and then select **Create**. Once all the resources deploy, the compute node is now enrolled in Fortanix Confidential Computing Manager.
## Run the application image on the compute node
-Run the application by executing the following command. Ensure you change the Node IP, Port, and Converted Image Name as inputs for your specific application.
-
+Run the application by executing the following command. Ensure you change the Node IP, Port, and Converted Image Name as inputs for your specific application.
+
In this tutorial, the command to execute is:

```bash
@@ -139,33 +139,33 @@ In this tutorial, the command to execute is:
-e NODE_AGENT_BASE_URL=http://52.152.206.164:9092/v1/ fortanix-private/python-flask-sgx
```
-where,
+where,
- *52.152.206.164* is the Node Agent Host IP
- *9092* is the port that the Node Agent listens on
-- *fortanix-private/python-flask-sgx* is the converted app that can be found in the Images tab under the **Image Name** column in the **Images** table in the Fortanix Enclave Manage Web Portal.
-
+- *fortanix-private/python-flask-sgx* is the converted app that can be found in the Images tab under the **Image Name** column in the **Images** table in the Fortanix Confidential Computing Manager Web Portal.
+ ## Verify and monitor the running application
-1. Head back to the [Fortanix Enclave Manager](https://em.fortanix.com/console)
+1. Head back to the [Fortanix Confidential Computing Manager](https://em.fortanix.com/console)
1. Ensure you're working inside the **Account** where you enrolled the node
-1. Navigate to the **Management Console** by selecting the top icon on the left navigation pane.
+1. Navigate to the **Management Console** by selecting the top icon on the left navigation pane.
1. Select the **Application** tab
1. Verify that there's a running application with an associated compute node

## Clean up resources
-When no longer needed, you can delete the resource group, virtual machine, and associated resources. Deleting the resource group will unenroll the nodes associated with your converted image.
+When no longer needed, you can delete the resource group, virtual machine, and associated resources. Deleting the resource group will unenroll the nodes associated with your converted image.
Select the resource group for the virtual machine, then select **Delete**. Confirm the name of the resource group to finish deleting the resources.
-To delete the Fortanix Enclave Manager Account you created, go the [Accounts Page](https://em.fortanix.com/accounts) in the Enclave Manager. Hover over the account you wish to delete. Select the vertical black dots in the upper right-hand corner and select **Delete Account**.
+To delete the Fortanix Confidential Computing Manager Account you created, go to the [Accounts Page](https://em.fortanix.com/accounts) in the Fortanix Confidential Computing Manager. Hover over the account you wish to delete. Select the vertical black dots in the upper right-hand corner and select **Delete Account**.
![delete](media/how-to-fortanix-enclave-manager/delete-account.png)

## Next steps
-In this quickstart, you used Fortanix tooling to convert your application image to run on top of a confidential computing virtual machine. For more information about confidential computing virtual machines on Azure, see [Solutions on Virtual Machines](virtual-machine-solutions.md).
+In this quickstart, you used Fortanix tooling to convert your application image to run on top of a confidential computing virtual machine. For more information about confidential computing virtual machines on Azure, see [Solutions on Virtual Machines](virtual-machine-solutions.md).
To learn more about Azure's confidential computing offerings, see [Azure confidential computing Overview](overview.md)
- Learn how to complete similar tasks using other third-party offerings on Azure, like [Anjuna](https://azuremarketplace.microsoft.com/marketplace/apps/anjuna-5229812.aee-az-v1) and [Scone](https://sconedocs.github.io).
+ Learn how to complete similar tasks using other third-party offerings on Azure, like [Anjuna](https://azuremarketplace.microsoft.com/marketplace/apps/anjuna-5229812.aee-az-v1) and [Scone](https://sconedocs.github.io).
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-region-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-region-availability.md
@@ -29,7 +29,7 @@ The following regions and maximum resources are available to container groups wi
| Australia East | 4 | 16 | 4 | 16 | 50 | N/A |
| Brazil South | 4 | 16 | 2 | 8 | 50 | N/A |
| Canada Central | 4 | 16 | 4 | 16 | 50 | N/A |
-| Central India | 4 | 16 | N/A | N/A | 50 | V100 |
+| Central India | 4 | 16 | 4 | 4 | 50 | V100 |
| Central US | 4 | 16 | 4 | 16 | 50 | N/A |
| East Asia | 4 | 16 | 4 | 16 | 50 | N/A |
| East US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 |
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/connect-mongodb-account-experimental https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/connect-mongodb-account-experimental.md
@@ -0,0 +1,73 @@
+
+ Title: Connect a MongoDB application to Azure Cosmos DB
+description: Learn how to connect a MongoDB app to Azure Cosmos DB by getting the connection string from Azure portal
+Last updated : 02/08/2021
+# Connect a MongoDB application to Azure Cosmos DB
+
+Learn how to connect your MongoDB app to an Azure Cosmos DB account by using a MongoDB connection string. You can then use an Azure Cosmos database as the data store for your MongoDB app. In addition to the tutorial below, you can explore MongoDB [samples](mongodb-samples.md) with Azure Cosmos DB's API for MongoDB.
+
+This tutorial provides two ways to retrieve connection string information:
+
+- [The quickstart method](#get-the-mongodb-connection-string-by-using-the-quick-start), for use with .NET, Node.js, MongoDB Shell, Java, and Python drivers
+- [The custom connection string method](#get-the-mongodb-connection-string-to-customize), for use with other drivers
+
+## Prerequisites
+
+- An Azure account. If you don't have an Azure account, create a [free Azure account](https://azure.microsoft.com/free/) now.
+- A Cosmos account. For instructions, see [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md).
+
+## Get the MongoDB connection string by using the quick start
+
+1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the **Azure Cosmos DB** blade, select the API.
+3. In the left pane of the account blade, click **Quick start**.
+4. Choose your platform (**.NET**, **Node.js**, **MongoDB Shell**, **Java**, **Python**). If you don't see your driver or tool listed, don't worry--we continuously document more connection code snippets. Please comment below on what you'd like to see. To learn how to craft your own connection, read [Get the account's connection string information](#get-the-mongodb-connection-string-to-customize).
+5. Copy and paste the code snippet into your MongoDB app.
+
+ :::image type="content" source="./media/connect-mongodb-account/QuickStartBlade.png" alt-text="Quick start blade":::
+
+## Get the MongoDB connection string to customize
+
+1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the **Azure Cosmos DB** blade, select the API.
+3. In the left pane of the account blade, click **Connection String**.
+4. The **Connection String** blade opens. It has all the information necessary to connect to the account by using a driver for MongoDB, including a preconstructed connection string.
+
+ :::image type="content" source="./media/connect-mongodb-account/ConnectionStringBlade.png" alt-text="Connection String blade" lightbox= "./media/connect-mongodb-account/ConnectionStringBlade.png" :::
+
+## Connection string requirements
+
+> [!Important]
+> Azure Cosmos DB has strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication via *TLS*.
+
+Azure Cosmos DB supports the standard MongoDB connection string URI format, with a couple of specific requirements: Azure Cosmos DB accounts require authentication and secure communication via TLS. The connection string format is:
+
+`mongodb://username:password@host:port/[database]?ssl=true`
+
+The values of this string are available in the **Connection String** blade shown earlier:
+
+* Username (required): Cosmos account name.
+* Password (required): Cosmos account password.
+* Host (required): FQDN of the Cosmos account.
+* Port (required): 10255.
+* Database (optional): The database that the connection uses. If no database is provided, the default database is "test."
+* ssl=true (required)
+
+For example, consider the account shown in the **Connection String** blade. A valid connection string is:
+
+`mongodb://contoso123:0Fc3IolnL12312asdfawejunASDF@asdfYXX2t8a97kghVcUzcDv98hawelufhawefafnoQRGwNj2nMPL1Y9qsIr9Srdw==@contoso123.documents.azure.com:10255/mydatabase?ssl=true`
+
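To sanity-check the format above, here's a minimal sketch that connects with the `pymongo` driver; the account name, password, and database are hypothetical placeholders, so copy the real preconstructed string from the **Connection String** blade.

```python
import pymongo

# Hypothetical placeholders; use the preconstructed string from the portal.
connection_string = (
    "mongodb://contoso123:<password>@contoso123.documents.azure.com:10255/"
    "mydatabase?ssl=true"
)

client = pymongo.MongoClient(connection_string)
db = client.get_default_database()  # "mydatabase" from the connection string
print(db.list_collection_names())
```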
+## Next steps
+
+- Learn how to [use Studio 3T](mongodb-mongochef.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Robo 3T](mongodb-robomongo.md) with Azure Cosmos DB's API for MongoDB.
+- Explore MongoDB [samples](mongodb-samples.md) with Azure Cosmos DB's API for MongoDB.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/connect-mongodb-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/connect-mongodb-account.md
@@ -8,6 +8,11 @@
Last updated 03/19/2020
+adobe-target: true
+adobe-target-activity: DocsExp-A/B-384740-MongoDB-2.8.2021
+adobe-target-experience: Experience B
+adobe-target-content: connect-mongodb-account-experimental.md
+ # Connect a MongoDB application to Azure Cosmos DB [!INCLUDE[appliesto-mongodb-api](includes/appliesto-mongodb-api.md)]
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/emulator-command-line-parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/emulator-command-line-parameters.md
@@ -70,10 +70,10 @@ The emulator comes with a PowerShell module to start, stop, uninstall, and retri
Import-Module "$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules\Microsoft.Azure.CosmosDB.Emulator"
```
-or place the `PSModules` directory on your `PSModulesPath` and import it as shown in the following command:
+or place the `PSModules` directory on your `PSModulePath` and import it as shown in the following command:
```powershell
-$env:PSModulesPath += "$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules"
+$env:PSModulePath += ";$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules"
Import-Module Microsoft.Azure.CosmosDB.Emulator
```
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/troubleshoot-dot-net-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-dot-net-sdk.md
@@ -3,7 +3,7 @@ Title: Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK
description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues when using .NET SDK. Previously updated : 09/12/2020 Last updated : 02/05/2021
@@ -61,6 +61,7 @@ Cosmos DB SDK on any IO failure will attempt to retry the failed operation if re
|-|-|
| 400 | Bad request (Depends on the error message)|
| 401 | [Not authorized](troubleshoot-unauthorized.md) |
+| 403 | [Forbidden](troubleshoot-forbidden.md) |
| 404 | [Resource is not found](troubleshoot-not-found.md) |
| 408 | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
| 409 | Conflict failure is when the ID provided for a resource on a write operation has been taken by an existing resource. Use another ID for the resource to resolve this issue as ID must be unique within all documents with the same partition key value. |
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/troubleshoot-forbidden https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-forbidden.md
@@ -0,0 +1,36 @@
+
+ Title: Troubleshoot Azure Cosmos DB forbidden exceptions
+description: Learn how to diagnose and fix forbidden exceptions.
+Last updated : 02/05/2021
+# Diagnose and troubleshoot Azure Cosmos DB forbidden exceptions
+
+The HTTP status code 403 means that the request was forbidden and couldn't complete.
+
+## Firewall blocking requests
+In this scenario, it's common to see errors like the following:
+
+```
+Request originated from client IP {...} through public internet. This is blocked by your Cosmos DB account firewall settings.
+```
+
+```
+Request is blocked. Please check your authorization token and Cosmos DB account firewall settings
+```
+
+### Solution
+Verify that your current [firewall settings](how-to-configure-firewall.md) are correct and include the IPs or networks you are trying to connect from.
+If you recently updated them, keep in mind that changes can take **up to 15 minutes to apply**.
+
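If you want to confirm this failure from code, here's a minimal sketch that surfaces the 403 status, assuming the `azure-cosmos` Python SDK; the account endpoint, key, and database/container names are hypothetical placeholders.

```python
from azure.cosmos import CosmosClient
from azure.cosmos import exceptions

# Hypothetical placeholders for your account.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")

try:
    container = client.get_database_client("mydb").get_container_client("mycontainer")
    container.read_item(item="item-id", partition_key="pk-value")
except exceptions.CosmosHttpResponseError as err:
    if err.status_code == 403:
        # Forbidden: check the account firewall settings and your client IP.
        print("403 Forbidden:", err.message)
    else:
        raise
```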
+## Next steps
+* Configure [IP Firewall](how-to-configure-firewall.md).
+* Configure access from [virtual networks](how-to-configure-vnet-service-endpoint.md).
+* Configure access from [private endpoints](how-to-configure-private-endpoints.md).
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/billing-subscription-transfer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/billing-subscription-transfer.md
@@ -8,7 +8,7 @@ tags: billing,top-support-issue
Previously updated : 01/06/2021 Last updated : 02/05/2021
@@ -75,7 +75,7 @@ Only one transfer request is active at a time. A transfer request is valid for 1
To cancel a transfer request:

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Subscriptions** > Select the subscription that you sent a transfer request for > select **Transfer billing ownership**.
+1. Navigate to **Subscriptions** > Select the subscription that you sent a transfer request for, and then select **Transfer billing ownership**.
1. At the bottom of the page, select **Cancel the transfer request**.

:::image type="content" source="./media/billing-subscription-transfer/transfer-billing-owership-cancel-request.png" alt-text="Example showing the Transfer billing ownership window with the Cancel the transfer request option" lightbox="./media/billing-subscription-transfer/transfer-billing-owership-cancel-request.png" :::
@@ -84,6 +84,20 @@ To cancel a transfer request:
Use the following troubleshooting information if you're having trouble transferring subscriptions.
+### Original Azure subscription billing owner leaves your organization
+
+It's possible that the original billing owner who created an Azure account and an Azure subscription leaves your organization. If that happens, their user identity is no longer in the organization's Azure Active Directory, and the Azure subscription no longer has a billing owner. This situation prevents anyone from performing billing operations on the account, including viewing and paying bills. The subscription could go into a past-due state and eventually get disabled because of non-payment. Ultimately, the subscription could get deleted, which would affect every service that runs on it.
+
+When a subscription no longer has a valid billing owner, Azure sends an email to other Billing owners, Service Administrators, Co-Administrators, and Subscription Owners informing them of the situation and providing a link to accept billing ownership of the subscription. Any one of the users can select the link to accept billing ownership. For more information about billing roles, see [Billing Roles](understand-mca-roles.md) and [Classic Roles and Azure RBAC Roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
+
+Here's an example of what the email looks like.
++
+Additionally, Azure shows a banner in the subscription's details window in the Azure portal to Billing owners, Service Administrators, Co-Administrators, and Subscription Owners. Select the link in the banner to accept billing ownership.

### The "Transfer subscription" option is unavailable
++ ### The "Transfer subscription" option is unavailable <a name="no-button"></a>
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
@@ -5,7 +5,7 @@
Previously updated : 01/07/2021 Last updated : 02/08/2021
@@ -16,7 +16,7 @@
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]

This article explores common ways to troubleshoot problems with Azure Data Factory connectors.
-
+
## Azure Blob Storage

### Error code: AzureBlobOperationFailed
@@ -105,7 +105,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## Azure Cosmos DB (SQL API)
-### Error code: CosmosDbSqlApiOperationFailed
+### Error code: CosmosDbSqlApiOperationFailed
- **Message**: `CosmosDbSqlApi operation Failed. ErrorMessage: %msg;.`
@@ -157,17 +157,13 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `ADLS Gen2 operation failed for: %adlsGen2Message;.%exceptionData;.`
-
-- **Cause**: If Azure Data Lake Storage Gen2 throws this error, the operation has failed.
-
-- **Recommendation**: Check the detailed error message thrown by Azure Data Lake Storage Gen2. If the error is a transient failure, retry the operation. For further help, contact Azure Storage support, and provide the request ID in error message.
-
-- **Cause**: If the error message contains the string "Forbidden," the service principal or managed identity you use might not have sufficient permission to access Azure Data Lake Storage Gen2.
-
-- **Recommendation**: To troubleshoot this error, see [Copy and transform data in Azure Data Lake Storage Gen2 by using Azure Data Factory](https://docs.microsoft.com/azure/data-factory/connector-azure-data-lake-storage#service-principal-authentication).
-
-- **Cause**: If the error message contains the string "InternalServerError," the error is returned by Azure Data Lake Storage Gen2.
+- **Causes and recommendations**: Different causes may lead to this error. Check the list below for possible causes and related recommendations.
-- **Recommendation**: The error might be caused by a transient failure. If so, retry the operation. If the issue persists, contact Azure Storage support and provide the request ID from the error message.
+ | Cause analysis | Recommendation |
+ | :-- | :-- |
+ | If Azure Data Lake Storage Gen2 throws an error indicating that some operation failed.| Check the detailed error message thrown by Azure Data Lake Storage Gen2. If the error is a transient failure, retry the operation. For further help, contact Azure Storage support, and provide the request ID in the error message. |
+ | If the error message contains the string "Forbidden", the service principal or managed identity you use might not have sufficient permission to access Azure Data Lake Storage Gen2. | To troubleshoot this error, see [Copy and transform data in Azure Data Lake Storage Gen2 by using Azure Data Factory](https://docs.microsoft.com/azure/data-factory/connector-azure-data-lake-storage#service-principal-authentication). |
+ | If the error message contains the string "InternalServerError", the error is returned by Azure Data Lake Storage Gen2. | The error might be caused by a transient failure. If so, retry the operation. If the issue persists, contact Azure Storage support and provide the request ID from the error message. |
### Request to Azure Data Lake Storage Gen2 account caused a timeout error
@@ -197,10 +193,10 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
}
```
-
+
## Azure Files storage
-### Error code: AzureFileOperationFailed
+### Error code: AzureFileOperationFailed
- **Message**: `Azure File operation Failed. Path: %path;. ErrorMessage: %msg;.`
@@ -211,55 +207,34 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## Azure Synapse Analytics, Azure SQL Database, and SQL Server
-### Error code: SqlFailedToConnect
+### Error code: SqlFailedToConnect
- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', User: '%user;'. Check the linked service configuration is correct, and make sure the SQL Database firewall allows the integration runtime to access.`
-
-- **Cause**: For Azure SQL, if the error message contains the string "SqlErrorNumber=47073," it means that public network access is denied in the connectivity setting.
-
-- **Recommendation**: On the Azure SQL firewall, set the **Deny public network access** option to *No*. For more information, see [Azure SQL connectivity settings](https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#deny-public-network-access).
-
-- **Cause**: For Azure SQL, if the error message contains an SQL error code such as "SqlErrorNumber=[errorcode]", see the Azure SQL troubleshooting guide.
-
-- **Recommendation**: For a recommendation, see [Troubleshoot connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/database/troubleshoot-common-errors-issues).
-
-- **Cause**: Check to see whether port 1433 is in the firewall allow list.
-
-- **Recommendation**: For more information, see [Ports used by SQL Server](https://docs.microsoft.com/sql/sql-server/install/configure-the-windows-firewall-to-allow-sql-server-access#ports-used-by-).
-
-- **Cause**: If the error message contains the string "SqlException," SQL Database the error indicates that some specific operation failed.
-
-- **Recommendation**: For more information, search by SQL error code in [Database engine errors](https://docs.microsoft.com/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support.
-
-- **Cause**: If this is a transient issue (for example, an instable network connection), add retry in the activity policy to mitigate.
-
-- **Recommendation**: For more information, see [Pipelines and activities in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/concepts-pipelines-activities#activity-policy).
-
-- **Cause**: If the error message contains the string "Client with IP address '...' is not allowed to access the server," and you're trying to connect to Azure SQL Database, the error is usually caused by an Azure SQL Database firewall issue.
-
-- **Recommendation**: In the Azure SQL Server firewall configuration, enable the **Allow Azure services and resources to access this server** option. For more information, see [Azure SQL Database and Azure Synapse IP firewall rules](https://docs.microsoft.com/azure/sql-database/sql-database-firewall-configure).
-
-
-### Error code: SqlOperationFailed
+- **Causes and recommendations**: Different causes may lead to this error. Check the list below for possible causes and related recommendations.
+
+ | Cause analysis | Recommendation |
+ | :-- | :-- |
+ | For Azure SQL, if the error message contains the string "SqlErrorNumber=47073", it means that public network access is denied in the connectivity setting. | On the Azure SQL firewall, set the **Deny public network access** option to *No*. For more information, see [Azure SQL connectivity settings](https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#deny-public-network-access). |
+ | For Azure SQL, if the error message contains an SQL error code such as "SqlErrorNumber=[errorcode]", see the Azure SQL troubleshooting guide. | For a recommendation, see [Troubleshoot connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/database/troubleshoot-common-errors-issues). |
+ | Check to see whether port 1433 is in the firewall allow list. | For more information, see [Ports used by SQL Server](https://docs.microsoft.com/sql/sql-server/install/configure-the-windows-firewall-to-allow-sql-server-access#ports-used-by-). |
+ | If the error message contains the string "SqlException", SQL Database the error indicates that some specific operation failed. | For more information, search by SQL error code in [Database engine errors](https://docs.microsoft.com/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support. |
+ | If this is a transient issue (for example, an unstable network connection), add retry in the activity policy to mitigate. | For more information, see [Pipelines and activities in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/concepts-pipelines-activities#activity-policy). |
+ | If the error message contains the string "Client with IP address '...' is not allowed to access the server", and you're trying to connect to Azure SQL Database, the error is usually caused by an Azure SQL Database firewall issue. | In the Azure SQL Server firewall configuration, enable the **Allow Azure services and resources to access this server** option. For more information, see [Azure SQL Database and Azure Synapse IP firewall rules](https://docs.microsoft.com/azure/sql-database/sql-database-firewall-configure). |
+
+### Error code: SqlOperationFailed
- **Message**: `A database operation failed. Please search error to get more details.`
-
-- **Cause**: If the error message contains the string "SqlException," SQL Database throws an error indicating some specific operation failed.
-
-- **Recommendation**: If the SQL error is not clear, try to alter the database to the latest compatibility level '150'. It can throw the latest version SQL errors. For more information, see the [documentation](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level#backwardCompat).
-
- For more information about troubleshooting SQL issues, search by SQL error code in [Database engine errors](https://docs.microsoft.com/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support.
+- **Causes and recommendations**: Different causes may lead to this error. Check the list below for possible causes and related recommendations.
-- **Cause**: If the error message contains the string "PdwManagedToNativeInteropException," it's usually caused by a mismatch between the source and sink column sizes.
+ | Cause analysis | Recommendation |
+ | :-- | :-- |
+ | If the error message contains the string "SqlException", SQL Database throws an error indicating some specific operation failed. | If the SQL error is not clear, try to alter the database to the latest compatibility level '150'. It can throw the latest version SQL errors. For more information, see the [documentation](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level#backwardCompat). <br/> For more information about troubleshooting SQL issues, search by SQL error code in [Database engine errors](https://docs.microsoft.com/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support. |
+ | If the error message contains the string "PdwManagedToNativeInteropException", it's usually caused by a mismatch between the source and sink column sizes. | Check the size of both the source and sink columns. For further help, contact Azure SQL support. |
+ | If the error message contains the string "InvalidOperationException", it's usually caused by invalid input data. | To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity, which can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance). |
-- **Recommendation**: Check the size of both the source and sink columns. For further help, contact Azure SQL support.

-- **Cause**: If the error message contains the string "InvalidOperationException", it's usually caused by invalid input data.

-- **Recommendation**: To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity, which can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance).
-### Error code: SqlUnauthorizedAccess
+### Error code: SqlUnauthorizedAccess
- **Message**: `Cannot connect to '%connectorName;'. Detail Message: '%message;'`
@@ -268,7 +243,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check to ensure that the login account has sufficient permissions to access the SQL database.
-### Error code: SqlOpenConnectionTimeout
+### Error code: SqlOpenConnectionTimeout
- **Message**: `Open connection to database timeout after '%timeoutValue;' seconds.`
@@ -277,7 +252,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Update the linked service connection string with a larger connection timeout value, and then retry the operation.
-### Error code: SqlAutoCreateTableTypeMapFailed
+### Error code: SqlAutoCreateTableTypeMapFailed
- **Message**: `Type '%dataType;' in source side cannot be mapped to a type that supported by sink side(column name:'%columnName;') in autocreate table.`
@@ -286,7 +261,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Update the column type in *mappings*, or manually create the sink table in the target server.
-### Error code: SqlDataTypeNotSupported
+### Error code: SqlDataTypeNotSupported
- **Message**: `A database operation failed. Check the SQL errors.`
@@ -299,7 +274,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Update the corresponding column type to the *datetime2* type in the sink table.
-### Error code: SqlInvalidDbStoredProcedure
+### Error code: SqlInvalidDbStoredProcedure
- **Message**: `The specified Stored Procedure is not valid. It could be caused by that the stored procedure doesn't return any data. Invalid Stored Procedure script: '%scriptName;'.`
@@ -308,7 +283,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Validate the stored procedure by using SQL Tools. Make sure that the stored procedure can return data.
-### Error code: SqlInvalidDbQueryString
+### Error code: SqlInvalidDbQueryString
- **Message**: `The specified SQL Query is not valid. It could be caused by that the query doesn't return any data. Invalid query: '%query;'`
@@ -317,7 +292,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Validate the SQL query by using SQL Tools. Make sure that the query can return data.
-### Error code: SqlInvalidColumnName
+### Error code: SqlInvalidColumnName
- **Message**: `Column '%column;' does not exist in the table '%tableName;', ServerName: '%serverName;', DatabaseName: '%dbName;'.`
@@ -326,7 +301,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Verify the column in the query, *structure* in the dataset, and *mappings* in the activity.
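For reference, explicit column mappings live in the copy activity's translator; a mapping that names a column missing from the query result or the sink table surfaces this error. A minimal sketch, with hypothetical column names:

```json
{
    "translator": {
        "type": "TabularTranslator",
        "mappings": [
            { "source": { "name": "CustomerID" }, "sink": { "name": "ClientID" } },
            { "source": { "name": "CustomerName" }, "sink": { "name": "ClientName" } }
        ]
    }
}
```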
-### Error code: SqlBatchWriteTimeout
+### Error code: SqlBatchWriteTimeout
- **Message**: `Timeouts in SQL write operation.`
@@ -335,7 +310,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Retry the operation. If the problem persists, contact Azure SQL support.
-### Error code: SqlBatchWriteTransactionFailed
+### Error code: SqlBatchWriteTransactionFailed
- **Message**: `SQL transaction commits failed.`
@@ -348,7 +323,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Retry the activity and review the SQL database side metrics.
-### Error code: SqlBulkCopyInvalidColumnLength
+### Error code: SqlBulkCopyInvalidColumnLength
- **Message**: `SQL Bulk Copy failed due to receive an invalid column length from the bcp client.`
@@ -357,7 +332,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity. This can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance).
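A minimal sketch of the fault-tolerance settings in a copy activity's `typeProperties`; the linked service name and path are placeholders:

```json
{
    "enableSkipIncompatibleRow": true,
    "redirectIncompatibleRowSettings": {
        "linkedServiceName": {
            "referenceName": "ErrorLogBlobStorage",
            "type": "LinkedServiceReference"
        },
        "path": "errors/copy-activity"
    }
}
```

Skipped rows are redirected to the specified storage path as log files, so the offending values can be inspected after the run.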
-### Error code: SqlConnectionIsClosed
+### Error code: SqlConnectionIsClosed
- **Message**: `The connection is closed by SQL Database.`
@@ -476,7 +451,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## Azure Table Storage
-### Error code: AzureTableDuplicateColumnsFromSource
+### Error code: AzureTableDuplicateColumnsFromSource
- **Message**: `Duplicate columns with same name '%name;' are detected from source. This is NOT supported by Azure Table Storage sink.`
@@ -489,18 +464,18 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## DB2
-### Error code: DB2DriverRunFailed
+### Error code: DB2DriverRunFailed
- **Message**: `Error thrown from driver. Sql code: '%code;'`

-- **Cause**: If the error message contains the string "SQLSTATE=51002 SQLCODE=-805," follow the "Tip" in [Copy data from DB2 by using Azure Data Factory](https://docs.microsoft.com/azure/data-factory/connector-db2#linked-service-properties).
+- **Cause**: If the error message contains the string "SQLSTATE=51002 SQLCODE=-805", follow the "Tip" in [Copy data from DB2 by using Azure Data Factory](https://docs.microsoft.com/azure/data-factory/connector-db2#linked-service-properties).
- **Recommendation**: Try to set "NULLID" in the `packageCollection` property. ## Delimited text format
## Delimited text format

-### Error code: DelimitedTextColumnNameNotAllowNull
+### Error code: DelimitedTextColumnNameNotAllowNull
- **Message**: `The name of column index %index; is empty. Make sure column name is properly specified in the header row.`
@@ -509,26 +484,22 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check the first row, and fix the value if it is empty.
-### Error code: DelimitedTextMoreColumnsThanDefined
+### Error code: DelimitedTextMoreColumnsThanDefined
- **Message**: `Error found when processing '%function;' source '%name;' with row number %rowCount;: found more columns than expected column count: %expectedColumnCount;.`

-- **Cause**: The problematic row's column count is larger than the first row's column count. It might be caused by a data issue or incorrect column delimiter or quote char settings.

-- **Recommendation**: Get the row count from the error message, check the row's column, and fix the data.

-- **Cause**: If the expected column count is "1" in an error message, you might have specified wrong compression or format settings, which caused Data Factory to parse your files incorrectly.

-- **Recommendation**: Check the format settings to make sure they match your source files.
+- **Causes and recommendations**: Different causes may lead to this error. Check the following list for possible causes and related recommendations.
-- **Cause**: If your source is a folder, the files under the specified folder might have a different schema.

-- **Recommendation**: Make sure that the files in the specified folder have an identical schema.
+ | Cause analysis | Recommendation |
+ | :-- | :-- |
+ | The problematic row's column count is larger than the first row's column count. It might be caused by a data issue or incorrect column delimiter or quote char settings. | Get the row count from the error message, check the row's column, and fix the data. |
+ | If the expected column count is "1" in an error message, you might have specified wrong compression or format settings, which caused Data Factory to parse your files incorrectly. | Check the format settings to make sure they match your source files. |
+ | If your source is a folder, the files under the specified folder might have a different schema. | Make sure that the files in the specified folder have an identical schema. |
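For comparison, a sketch of the format settings on a delimited text dataset that commonly cause this error when they don't match the actual files; all names and values here are illustrative:

```json
{
    "name": "SourceCsvDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": { "referenceName": "SourceBlobStorage", "type": "LinkedServiceReference" },
        "typeProperties": {
            "location": { "type": "AzureBlobStorageLocation", "container": "input", "fileName": "data.csv" },
            "columnDelimiter": ",",
            "quoteChar": "\"",
            "firstRowAsHeader": true,
            "compressionCodec": "gzip"
        }
    }
}
```

A mismatched `columnDelimiter` or `quoteChar`, or a `compressionCodec` set for files that aren't actually compressed, can make an entire line parse as a single column, which is why an expected column count of "1" points to format settings.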
## Dynamics 365, Common Data Service, and Dynamics CRM
-### Error code: DynamicsCreateServiceClientError
+### Error code: DynamicsCreateServiceClientError
- **Message**: `This is a transient issue on Dynamics server side. Try to rerun the pipeline.`
@@ -546,7 +517,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Manually add the columns in the mapping tab.
-### Error code: DynamicsMissingTargetForMultiTargetLookupField
+### Error code: DynamicsMissingTargetForMultiTargetLookupField
- **Message**: `Cannot find the target column for multi-target lookup field: '%fieldName;'.`
@@ -557,7 +528,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
2. Add the target column in the column mapping. Ensure that the sink column is in the format *{fieldName}@EntityReference*.
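A sketch of what such a mapping can look like in the copy activity translator, assuming a hypothetical multi-target lookup field `customerid` whose target entity name comes from a source column:

```json
{
    "translator": {
        "type": "TabularTranslator",
        "mappings": [
            { "source": { "name": "CustomerGuid" }, "sink": { "name": "customerid" } },
            { "source": { "name": "TargetEntityName" }, "sink": { "name": "customerid@EntityReference" } }
        ]
    }
}
```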
-### Error code: DynamicsInvalidTargetForMultiTargetLookupField
+### Error code: DynamicsInvalidTargetForMultiTargetLookupField
- **Message**: `The provided target: '%targetName;' is not a valid target of field: '%fieldName;'. Valid targets are: '%validTargetNames;'`
@@ -566,7 +537,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Provide a valid entity name for the multi-target lookup field.
-### Error code: DynamicsInvalidTypeForMultiTargetLookupField
+### Error code: DynamicsInvalidTypeForMultiTargetLookupField
- **Message**: `The provided target type is not a valid string. Field: '%fieldName;'.`
@@ -575,18 +546,18 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Provide a valid string in the multi-target lookup target column.
-### Error code: DynamicsFailedToRequetServer
+### Error code: DynamicsFailedToRequetServer
- **Message**: `The Dynamics server or the network is experiencing issues. Check network connectivity or check Dynamics server log for more details.`

- **Cause**: The Dynamics server is unstable or inaccessible, or the network is experiencing issues.

- **Recommendation**: For more details, check network connectivity or check the Dynamics server log. For further help, contact Dynamics support.
-
+
## FTP
-### Error code: FtpFailedToConnectToFtpServer
+### Error code: FtpFailedToConnectToFtpServer
- **Message**: `Failed to connect to FTP server. Please make sure the provided server information is correct, and try again.`
@@ -597,7 +568,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## HTTP
-### Error code: HttpFileFailedToRead
+### Error code: HttpFileFailedToRead
- **Message**: `Failed to read data from http server. Check the error from http server:%message;`
@@ -623,31 +594,20 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## ORC format
-### Error code: OrcJavaInvocationException
+### Error code: OrcJavaInvocationException
- **Message**: `An error occurred when invoking Java, message: %javaException;.`
+- **Causes and recommendations**: Different causes may lead to this error. Check the following list for possible causes and related recommendations.
-- **Cause**: When the error message contains the strings "java.lang.OutOfMemory," "Java heap space," and "doubleCapacity," it's usually a memory management issue in an old version of integration runtime.

-- **Recommendation**: If you're using Self-hosted Integration Runtime, we recommend that you upgrade to the latest version.

-- **Cause**: When the error message contains the string "java.lang.OutOfMemory," the integration runtime doesn't have enough resources to process the files.

-- **Recommendation**: Limit the concurrent runs on the integration runtime. For Self-hosted IR, scale up to a powerful machine with memory equal to or larger than 8 GB.
+ | Cause analysis | Recommendation |
+ | :-- | :-- |
+ | When the error message contains the strings "java.lang.OutOfMemory", "Java heap space", and "doubleCapacity", it's usually a memory management issue in an old version of integration runtime. | If you're using Self-hosted Integration Runtime, we recommend that you upgrade to the latest version. |
+ | When the error message contains the string "java.lang.OutOfMemory", the integration runtime doesn't have enough resources to process the files. | Limit the concurrent runs on the integration runtime. For Self-hosted IR, scale up to a powerful machine with memory equal to or larger than 8 GB. |
+ | When the error message contains the string "NullPointerReference", the cause might be a transient error. | Retry the operation. If the problem persists, contact support. |
+ | When the error message contains the string "BufferOverflowException", the cause might be a transient error. | Retry the operation. If the problem persists, contact support. |
+ | When the error message contains the string "java.lang.ClassCastException:org.apache.hadoop.hive.serde2.io.HiveCharWritable can't be cast to org.apache.hadoop.io.Text", the cause might be a type conversion issue inside Java Runtime. Usually, it means that the source data can't be handled well in Java Runtime. | This is a data issue. Try to use a string instead of char or varchar in ORC format data. |
-- **Cause**: When the error message contains the string "NullPointerReference," the cause might be a transient error.

-- **Recommendation**: Retry the operation. If the problem persists, contact support.

-- **Cause**: When the error message contains the string "BufferOverflowException," the cause might be a transient error.

-- **Recommendation**: Retry the operation. If the problem persists, contact support.

-- **Cause**: When the error message contains the string "java.lang.ClassCastException:org.apache.hadoop.hive.serde2.io.HiveCharWritable can't be cast to org.apache.hadoop.io.Text," the cause might be a type conversion issue inside Java Runtime. Usually, it means that the source data can't be handled well in Java Runtime.

-- **Recommendation**: This is a data issue. Try to use a string instead of char or varchar in ORC format data.
-### Error code: OrcDateTimeExceedLimit
+### Error code: OrcDateTimeExceedLimit
- **Message**: `The Ticks value '%ticks;' for the datetime column must be between valid datetime ticks range -621355968000000000 and 2534022144000000000.`
@@ -658,24 +618,19 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## Parquet format
-### Error code: ParquetJavaInvocationException
+### Error code: ParquetJavaInvocationException
- **Message**: `An error occurred when invoking java, message: %javaException;.`

-- **Cause**: When the error message contains the strings "java.lang.OutOfMemory," "Java heap space," and "doubleCapacity," it's usually a memory management issue in an old version of Integration Runtime.

-- **Recommendation**: If you are using Self-hosted IR and the version is earlier than 3.20.7159.1, we recommend that you upgrade to the latest version.

-- **Cause**: When the error message contains the string "java.lang.OutOfMemory," the integration runtime doesn't have enough resources to process the files.

-- **Recommendation**: Limit the concurrent runs on the integration runtime. For Self-hosted IR, scale up to a powerful machine with memory that's equal to or greater than 8 GB.

-- **Cause**: When the error message contains the string "NullPointerReference," it might be a transient error.

-- **Recommendation**: Retry the operation. If the problem persists, contact support.
+- **Causes and recommendations**: Different causes may lead to this error. Check the following list for possible causes and related recommendations.
+ | Cause analysis | Recommendation |
+ | :-- | :-- |
+ | When the error message contains the strings "java.lang.OutOfMemory", "Java heap space", and "doubleCapacity", it's usually a memory management issue in an old version of Integration Runtime. | If you are using Self-hosted IR and the version is earlier than 3.20.7159.1, we recommend that you upgrade to the latest version. |
+ | When the error message contains the string "java.lang.OutOfMemory", the integration runtime doesn't have enough resources to process the files. | Limit the concurrent runs on the integration runtime. For Self-hosted IR, scale up to a powerful machine with memory that's equal to or greater than 8 GB. |
+ | When the error message contains the string "NullPointerReference", it might be a transient error. | Retry the operation. If the problem persists, contact support. |
-### Error code: ParquetInvalidFile
+### Error code: ParquetInvalidFile
- **Message**: `File is not a valid Parquet file.`
@@ -684,7 +639,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check to see whether the input is a valid Parquet file.
-### Error code: ParquetNotSupportedType
+### Error code: ParquetNotSupportedType
- **Message**: `Unsupported Parquet type. PrimitiveType: %primitiveType; OriginalType: %originalType;.`
@@ -693,7 +648,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Double-check the source data by going to [Supported file formats and compression codecs by copy activity in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/supported-file-formats-and-compression-codecs).
-### Error code: ParquetMissedDecimalPrecisionScale
+### Error code: ParquetMissedDecimalPrecisionScale
- **Message**: `Decimal Precision or Scale information is not found in schema for column: %column;.`
@@ -702,7 +657,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: The source doesn't return the correct precision and scale information. Check the issue column for the information.
-### Error code: ParquetInvalidDecimalPrecisionScale
+### Error code: ParquetInvalidDecimalPrecisionScale
- **Message**: `Invalid Decimal Precision or Scale. Precision: %precision; Scale: %scale;.`
@@ -711,7 +666,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check the issue column for precision and scale.
-### Error code: ParquetColumnNotFound
+### Error code: ParquetColumnNotFound
- **Message**: `Column %column; does not exist in Parquet file.`
@@ -720,7 +675,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check the mappings in the activity. Make sure that the source column can be mapped to the correct sink column.
-### Error code: ParquetInvalidDataFormat
+### Error code: ParquetInvalidDataFormat
- **Message**: `Incorrect format of %srcValue; for converting to %dstType;.`
@@ -729,7 +684,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Double-check the source data or specify the correct data type for this column in the copy activity column mapping. For more information, see [Supported file formats and compression codecs by copy activity in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/supported-file-formats-and-compression-codecs).
-### Error code: ParquetDataCountNotMatchColumnCount
+### Error code: ParquetDataCountNotMatchColumnCount
- **Message**: `The data count in a row '%sourceColumnCount;' does not match the column count '%sinkColumnCount;' in given schema.`
@@ -738,7 +693,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Double-check to ensure that the source column count is the same as the sink column count in 'mapping'.
-### Error code: ParquetDataTypeNotMatchColumnType
+### Error code: ParquetDataTypeNotMatchColumnType
- **Message**: `The data type %srcType; is not match given column type %dstType; at column '%columnIndex;'.`
@@ -747,7 +702,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Specify a correct type in mapping.sink.
-### Error code: ParquetBridgeInvalidData
+### Error code: ParquetBridgeInvalidData
- **Message**: `%message;`
@@ -756,7 +711,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Retry the operation. If the issue persists, contact us.
-### Error code: ParquetUnsupportedInterpretation
+### Error code: ParquetUnsupportedInterpretation
- **Message**: `The given interpretation '%interpretation;' of Parquet format is not supported.`
@@ -765,7 +720,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: 'ParquetInterpretFor' should not be 'sparkSql'.
-### Error code: ParquetUnsupportFileLevelCompressionOption
+### Error code: ParquetUnsupportFileLevelCompressionOption
- **Message**: `File level compression is not supported for Parquet.`
@@ -774,7 +729,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Remove 'CompressionType' in the payload.
-### Error code: UserErrorJniException
+### Error code: UserErrorJniException
- **Message**: `Cannot create JVM: JNI return code [-6][JNI call failed: Invalid arguments.]`
@@ -814,7 +769,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## REST
-### Error code: RestSinkCallFailed
+### Error code: RestSinkCallFailed
- **Message**: `Rest Endpoint responded with Failure from server. Check the error from server:%message;`
@@ -846,7 +801,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## SFTP
-#### Error code: SftpOperationFail
+#### Error code: SftpOperationFail
- **Message**: `Failed to '%operation;'. Check detailed error from SFTP.`
@@ -855,7 +810,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check the error details from SFTP.
-### Error code: SftpRenameOperationFail
+### Error code: SftpRenameOperationFail
- **Message**: `Failed to rename the temp file. Your SFTP server doesn't support renaming temp file, set "useTempFileRename" as false in copy sink to disable uploading to temp file.`
@@ -864,7 +819,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Set "useTempFileRename" as false in the copy sink to disable uploading to the temp file.
-### Error code: SftpInvalidSftpCredential
+### Error code: SftpInvalidSftpCredential
- **Message**: `Invalid SFTP credential provided for '%type;' authentication type.`
@@ -928,15 +883,15 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Resolution**: To determine whether the "AccMngr" column exists, double-check your dataset configuration by mapping the destination dataset column.
-### Error code: SftpFailedToConnectToSftpServer
+### Error code: SftpFailedToConnectToSftpServer
- **Message**: `Failed to connect to SFTP server '%server;'.`

-- **Cause**: If the error message contains the string "Socket read operation has timed out after 30000 milliseconds," one possible cause is that an incorrect linked service type is used for the SFTP server. For example, you might be using the FTP linked service to connect to the SFTP server.
+- **Cause**: If the error message contains the string "Socket read operation has timed out after 30,000 milliseconds", one possible cause is that an incorrect linked service type is used for the SFTP server. For example, you might be using the FTP linked service to connect to the SFTP server.
- **Recommendation**: Check the port of the target server. By default, SFTP uses port 22.

-- **Cause**: If the error message contains the string "Server response does not contain SSH protocol identification," one possible cause is that the SFTP server throttled the connection. Data Factory will create multiple connections to download from the SFTP server in parallel, and sometimes it will encounter SFTP server throttling. Ordinarily, different servers return different errors when they encounter throttling.
+- **Cause**: If the error message contains the string "Server response does not contain SSH protocol identification", one possible cause is that the SFTP server throttled the connection. Data Factory will create multiple connections to download from the SFTP server in parallel, and sometimes it will encounter SFTP server throttling. Ordinarily, different servers return different errors when they encounter throttling.
- **Recommendation**:
@@ -949,7 +904,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## SharePoint Online list
-### Error code: SharePointOnlineAuthFailed
+### Error code: SharePointOnlineAuthFailed
- **Message**: `The access token generated failed, status code: %code;, error message: %message;.`
@@ -960,7 +915,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## XML format
-### Error code: XmlSinkNotSupported
+### Error code: XmlSinkNotSupported
- **Message**: `Write data in XML format is not supported yet, choose a different format!`
@@ -969,7 +924,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Use a dataset in a different format from that of the sink dataset.
-### Error code: XmlAttributeColumnNameConflict
+### Error code: XmlAttributeColumnNameConflict
- **Message**: `Column names %attrNames;' for attributes of element '%element;' conflict with that for corresponding child elements, and the attribute prefix used is '%prefix;'.`
@@ -978,7 +933,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Set a different value for the "attributePrefix" property.
-### Error code: XmlValueColumnNameConflict
+### Error code: XmlValueColumnNameConflict
- **Message**: `Column name for the value of element '%element;' is '%columnName;' and it conflicts with the child element having the same name.`
@@ -987,7 +942,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Set a different value for the "valueColumn" property.
-### Error code: XmlInvalid
+### Error code: XmlInvalid
- **Message**: `Input XML file '%file;' is invalid with parsing error '%error;'.`
@@ -998,7 +953,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
## General copy activity error
-### Error code: JreNotFound
+### Error code: JreNotFound
- **Message**: `Java Runtime Environment cannot be found on the Self-hosted Integration Runtime machine. It is required for parsing or writing to Parquet/ORC files. Make sure Java Runtime Environment has been installed on the Self-hosted Integration Runtime machine.`
@@ -1007,7 +962,7 @@ Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check your integration runtime environment; see [Use Self-hosted Integration Runtime](https://docs.microsoft.com/azure/data-factory/format-parquet#using-self-hosted-integration-runtime).
-### Error code: WildcardPathSinkNotSupported
+### Error code: WildcardPathSinkNotSupported
- **Message**: `Wildcard in path is not supported in sink dataset. Fix the path: '%setting;'.`
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-2101-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2101-release-notes.md
@@ -7,7 +7,7 @@
Previously updated : 01/27/2021 Last updated : 02/08/2021
@@ -43,8 +43,8 @@ The following table provides a summary of known issues in the 2101 release.
|**3.**|Kubernetes |Edge container registry does not work when web proxy is enabled.|The functionality will be available in a future release. |
|**4.**|Kubernetes |Edge container registry does not work with IoT Edge modules.| |
|**5.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration&view=aspnetcore-3.1&preserve-view=true#environment-variables).|Replace ":" with a double underscore; see the sketch after this table. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
-|**6.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |You need to set `--sync-garbage-collection` in Arc OperatorParams to allow the deletion of resources when deleted from git repository. For more information, see [Delete a configuration](../azure-arc/kubernetes/use-gitops-connected-cluster.md#additional-parameters). |
-|**7.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. This ensures that the writes are written to the disk.| |
+|**6.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/use-gitops-connected-cluster.md#additional-parameters). |
+|**7.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
|**8.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| |
|**9.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. |
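For issue 5 in the table above, a sketch of the double-underscore rename in an IoT Edge module's container create options; the configuration key and value are hypothetical:

```json
{
    "Env": [
        "Logging__LogLevel__Default=Information"
    ]
}
```

.NET's configuration loader treats `__` in environment variable names as the `:` hierarchy separator, so `Logging__LogLevel__Default` is read as `Logging:LogLevel:Default`.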
@@ -69,11 +69,12 @@ The following table provides a summary of known issues carried over from the pre
|**12.**|Kubernetes |Kubernetes does not currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
|**13.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see Modify Azure IoT Edge modules from marketplace to run on Azure Stack Edge device.<!-- insert link-->|
|**14.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts cannot be bound to paths in IoT Edge containers. If possible, map the parent directory.|
-|**15.**|Kubernetes |If you bring your own certificates for IoT Edge and add those on your Azure Stack Edge device after the compute is configured on the device, the new certificates are not picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**15.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates are not picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
|**16.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected.<ul><li>**Status** column in **Certificates** page.</li><li>**Security** tile in **Get started** page.</li><li>**Configuration** tile in **Overview** page.</li></ul> |
|**17.**|IoT Edge |Modules deployed through IoT Edge can't use host network. | |
|**18.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. ||
|**19.**|Kubernetes + update |Earlier software versions such as 2008 releases have a race condition update issue that causes the update to fail with ClusterConnectionException. |Using the newer builds should help avoid this issue. If you still see this issue, the workaround is to retry the upgrade, and it should work.|
+|**20.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
<!--|**18.**|Azure Private Edge Zone (Preview) |There is a known issue with Virtual Network Function VM if the VM was created on Azure Stack Edge device running earlier preview builds such as 2006/2007b and then the device was updated to 2009 GA release. The issue is that the VNF information can't be retrieved or any new VNFs can't be created unless the VNF VMs are deleted before the device is updated. |Before you update Azure Stack Edge device to 2009 release, use the PowerShell command `get-mecvnf` followed by `remove-mecvnf <VNF guid>` to remove all Virtual Network Function VMs one at a time. After the upgrade, you will need to redeploy the same VNFs.|-->
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-system-requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-system-requirements.md
@@ -7,7 +7,7 @@
Previously updated : 10/12/2020 Last updated : 02/05/2021 # System requirements for Azure Stack Edge Pro with GPU
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot.md
@@ -7,7 +7,7 @@
Previously updated : 01/21/2021 Last updated : 02/04/2021 # Troubleshoot issues on your Azure Stack Edge Pro GPU device
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-mini-r-safety https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-safety.md
@@ -7,7 +7,7 @@
Previously updated : 10/13/2020 Last updated : 02/08/2021
@@ -33,6 +33,7 @@ The following hazard icons are to be observed when setting up and running your A
| Icon | Description |
|: |: |
| ![Read All Instructions First](./media/azure-stack-edge-mini-r-safety/icon-safety-read-all-instructions.png) | Read All Instructions First |
+| ![Notice Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-notice.png) **NOTICE:** | Indicates information considered important, but not hazard-related. |
| ![Hazard Symbol](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) | Hazard Symbol |
| ![Electrical Shock Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-electric-shock.png) | Electric Shock Hazard |
| ![Indoor Use Only](./media/azure-stack-edge-mini-r-safety/icon-safety-indoor-use-only.png) | Indoor Use Only |
@@ -64,7 +65,7 @@ It is recommended to operate the system:
![Warning Icon 4](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) ![No User Serviceable Parts Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-do-not-access.png) **CAUTION:**
-* This equipment contains a lithium battery. Do not attempt servicing the battery pack. Batteries in this equipment are not user serviceable. Risk of Explosion if battery is replaced by an incorrect type.
+* This equipment contains a lithium battery. Do not attempt to service the battery pack. Batteries in this equipment are not user serviceable. Risk of Explosion if battery is replaced by an incorrect type.
![Warning Icon 5](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:**
@@ -85,7 +86,7 @@ Only charge the battery pack when it is a part of the Azure Stack Edge Mini R de
![Warning Icon 9](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:**
-This device has two SFP+ ports which may be used with optical transceivers. To avoid hazardous laser radiation, only use with Class 1 transceivers.
+This device has two SFP+ ports, which may be used with optical transceivers. To avoid hazardous laser radiation, only use with Class 1 transceivers.
## Electrical precautions
@@ -120,7 +121,7 @@ When used with the power supply adaptor:
![Electrical Shock Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-electric-shock.png) ![Indoor Use Only](./media/azure-stack-edge-mini-r-safety/icon-safety-indoor-use-only.png) **WARNING:**
-* Power supply labeled with this symbol are rated for indoor use only.
+* Power supply labeled with this symbol is rated for indoor use only.
## Regulatory information
@@ -138,23 +139,71 @@ The equipment is designed to operate in the following environments:
| Relative humidity (RH) specifications | <ul><li>Storage: 5% to 95% relative humidity</li><li>Operating: 10% to 90% relative humidity</li></ul>|
| Maximum altitude specifications | <ul><li>Operating: 15,000 feet (4,572 meters)</li><li>Non-operating: 40,000 feet (12,192 meters)</li></ul>|
-> ![Notice Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-notice.png) **NOTICE:** &nbsp;Changes or modifications made to the equipment not expressly approved by Microsoft may void the user's authority to operate the equipment.
+> ![Notice Icon - 2](./media/azure-stack-edge-mini-r-safety/icon-safety-notice.png) **NOTICE:** &nbsp;Changes or modifications made to the equipment not expressly approved by Microsoft may void the user's authority to operate the equipment.
-CANADA and USA:
+#### CANADA and USA:
-NOTICE: This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference in which case the user will be required to correct the interference at their own expense.
+> ![Notice Icon - 3](./media/azure-stack-edge-mini-r-safety/icon-safety-notice.png) **NOTICE:** &nbsp; This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference in which case the user will be required to correct the interference at their own expense.
+
+The Netgear A6150 WiFi USB Adapter provided with this equipment is intended to be operated close to the human body and is tested for body-worn Specific Absorption Rate (SAR) compliance. The SAR limit set by the FCC is 1.6 W/kg when averaged over 1 g of tissue. When carrying the product or using it while worn on your body, maintain a distance of 10 mm from the body to ensure compliance with RF exposure requirements.
+
+The Netgear A6150 WiFi USB Adapter complies with ANSI/IEEE C95.1-1999 and was tested in accordance with the measurement methods and procedures specified in OET Bulletin 65 Supplement C.
+
+Netgear A6150 Specific Absorption Rate (SAR): 1.18 W/kg averaged over 1 g of tissue
+
+The Netgear A6150 WiFi USB Adapter is to be used with approved antennas only. This device and its antenna(s) must not be co-located or operating in conjunction with any other antenna or transmitter except in accordance with FCC multitransmitter product procedures. For products available in the USA market, only channels 1~11 can be operated. Selection of other channels is not possible.
+
+Operation in the band 5150–5250 MHz is only for indoor use to reduce the potential for harmful interference to co-channel mobile satellite systems.
+
+![Regulatory information warning - indoor use](./media/azure-stack-edge-mini-r-safety/regulatory-information-indoor-use-only.png)
+
+Users are advised that high-power radars are allocated as primary users (priority users) of the bands 5250–5350 MHz and 5650–5850 MHz, and these radars could cause interference and/or damage to LE-LAN devices.
+
+This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation.
+
+If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures:
+
+- Reorient or relocate the receiving antenna.
+- Increase the separation between the equipment and receiver.
+- Connect the equipment to an outlet on a circuit different from that to which the receiver is connected.
+- Consult the dealer or an experienced radio/TV technician for help.
+
+For more information about interference issues, go to the FCC website at [fcc.gov/cgb/consumerfacts/interference.html](https://www.fcc.gov/consumers/guides/interference-radio-tv-and-telephone-signals). You can also call the FCC at 1-888-CALL FCC to request Interference and Telephone Interference fact sheets.
+
+Additional information about radiofrequency safety can be found on the FCC website at [https://www.fcc.gov/general/radio-frequency-safety-0](https://www.fcc.gov/general/radio-frequency-safety-0) and the Industry Canada website at [http://www.ic.gc.ca/eic/site/smt-gst.nsf/eng/sf01904.html](http://www.ic.gc.ca/eic/site/smt-gst.nsf/eng/sf01904.html).
+
+This product has demonstrated EMC compliance under conditions that included the use of compliant peripheral devices and shielded cables between system components. It is important that you use compliant peripheral devices and shielded cables between system components to reduce the possibility of causing interference to radios, television sets, and other electronic devices.
This device complies with part 15 of the FCC Rules and Industry Canada license-exempt RSS standard(s). Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation of the device.

![Regulatory information warning 1](./media/azure-stack-edge-mini-r-safety/regulatory-information-1.png)

CAN ICES-3(A)/NMB-3(A)
-Microsoft Corporation, One Microsoft Way, Redmond, WA 98052, USA.
+
+Microsoft Corporation, One Microsoft Way, Redmond, WA 98052, USA
+ United States: (800) 426-9400
+ Canada: (800) 933-4750
-EUROPEAN UNION:
-Request a copy of the EU Declaration of Conformity.
+Netgear A6150 WiFi USB Adapter FCC ID: PY318300429
+
+Netgear A6150 WiFi USB Adapter IC ID: 4054A-18300429
+
+The Netgear A6150 WiFi USB Adapter provided with this equipment is compliant with SAR for general population/uncontrolled exposure limits in IC RSS-102 and has been tested in accordance with the measurement methods and procedures specified in IEEE 1528. Maintain at least 10-mm distance for body-worn condition.
+
+The Netgear A6150 WiFi USB Adapter complies with the Canada portable RF exposure limit set forth for an uncontrolled environment and is safe for intended operation as described in its manual. Further RF exposure reduction can be achieved by keeping the product as far as possible from your body or by setting the device to a lower output power if such a function is available.
+
+A table with the Specific Absorption Rate (SAR) averaged over 1 g for each product can be seen in the USA section above.
+
+![Regulatory information warning 2](./media/azure-stack-edge-mini-r-safety/regulatory-information-2.png)
+
+#### EUROPEAN UNION:
+
+Request a copy of the EU Declaration of Conformity for this equipment.
+
+The Netgear A6150 WiFi USB Adapter provided with this equipment is in compliance with Directive 2014/53/EU and can also be provided on request.
> ![Warning Icon 13](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png)
> This is a class A product. In a domestic environment, this product may cause radio interference in which case the user may be required to take adequate measures.
@@ -166,10 +215,38 @@ Disposal of waste batteries and electrical and electronic equipment:
This symbol on the product or its batteries or its packaging means that this product and any batteries it contains must not be disposed of with your household waste. Instead, it is your responsibility to hand this over to an applicable collection point for the recycling of batteries and electrical and electronic equipment. This separate collection and recycling will help to conserve natural resources and prevent potential negative consequences for human health and the environment due to the possible presence of hazardous substances in batteries and electrical and electronic equipment, which could be caused by inappropriate disposal. For more information about where to drop off your batteries and electrical and electronic waste, please contact your local city/municipality office, your household waste disposal service, or the shop where you purchased this product. Contact erecycle@microsoft.com for additional information on WEEE.

This product contains coin cell battery(ies).
+
+The Netgear A6150 WiFi USB Adapter provided with this equipment is intended to be operated close to the human body and is tested for body-worn Specific Absorption Rate (SAR) compliance (see below values). When carrying the product or using it while worn on your body, maintain a distance of 10mm from the body to ensure compliance with RF exposure requirements.
+
+**Netgear A6150 Specific Absorption Rate (SAR):** 0.54 W/kg averaged over 10g of tissue
+
+
+This device may operate in all member states of the EU. Observe national and local regulations where the device is used. This device is restricted to indoor use only when operating in the 5150-5350 MHz frequency range in the following countries:
+
+![EU countries that require indoor use only](./media/azure-stack-edge-mini-r-safety/mini-r-safety-eu-indoor-use-only.png)
+
+In accordance with Article 10.8(a) and 10.8(b) of the RED, the following table provides information on the frequency bands used and the maximum RF transmit power of Netgear wireless products for sale in the EU:
+
+**WiFi**
+
+| Frequency range (MHz) | Channels used | Max Transmit Power (dBm/mW) |
+| | - | |
| 2400-2483.5 | 1-13 | OFDM: 19.9 dBm (97.7 mW) <br> CCK: 17.9 dBm (61.7 mW) |
+| 5150-5320 | 36-48 | 22.9 dBm (195 mW) |
+| 5250-5350 | 52-64 | 22.9 dBm (195 mW) with TPC <br> 19.9 dBm (97.7 mW) non-TPC |
+| 5470-5725 | 100-140 | 29.9 dBm (977 mW) with TPC <br> 29.6 dBm (490 mW) non-TPC |
+
Microsoft Ireland
Sandyford Ind Est
Dublin D18 KX32
IRL
+ Telephone number: +353 1 295 3826
+ Fax number: +353 1 706 4110
+#### SINGAPORE:
+
+The Netgear A6150 WiFi USB Adapter provided with this equipment complies with IMDA standards.
+

## Next steps

- [Prepare to deploy Azure Stack Edge Mini R](azure-stack-edge-mini-r-deploy-prep.md)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-mini-r-system-requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-system-requirements.md
@@ -7,7 +7,7 @@
Previously updated : 11/16/2020 Last updated : 02/05/2021 # Azure Stack Edge Mini R system requirements
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-pro-r-safety https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-safety.md
@@ -7,7 +7,7 @@
Previously updated : 12/18/2020 Last updated : 02/04/2021
@@ -33,7 +33,7 @@ The following hazard icons are to be observed when setting up and running your A
| Icon | Description |
|: |: |
| ![Read All Instructions First](./media/azure-stack-edge-pro-r-safety/icon-safety-read-all-instructions.png) | Read All Instructions First |
-| ![Hazard Symbol](./media/azure-stack-edge-pro-r-safety/icon-safety-warning.png) | Hazard Symbol |
+| ![Notice Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-notice.png) **NOTICE:** | Indicates information considered important, but not hazard-related. |
| ![Hazard Symbol](./media/azure-stack-edge-pro-r-safety/icon-safety-warning.png) | Hazard Symbol |
| ![Tip Hazard Icon](./media/azure-stack-edge-pro-r-safety/icon-safety-tip-hazard.png) | Tip Hazard|
| ![Heavy Weight Icon](./media/azure-stack-edge-pro-r-safety/icon-safety-heavy-weight.png) | Heavy Weight Hazard|
| ![Electrical Shock Icon](./media/azure-stack-edge-pro-r-safety/icon-safety-electric-shock.png) | Electric Shock Hazard |
@@ -84,6 +84,7 @@ The following hazard icons are to be observed when setting up and running your A
* Provided with adequate space to access the power supply cord(s), because they serve as the product's main power disconnect.
* Ethernet cables are not provided with the product. To reduce electromagnetic interference, it is recommended that Cat 6 Shielded Twisted-pair (STP) cabling be used.
* Set up the equipment in a work area allowing for adequate air circulation around the equipment; ensure that the front and back covers are fully removed while the device is running.
+* Ethernet cables are not provided with the product. To reduce electromagnetic interference, it is recommended that Cat 6 Shielded (STP) cabling be used.
* Install the equipment in a temperature-controlled area free of conductive contaminants and allow for adequate air circulation around the equipment.
* Keep the equipment away from sources of liquid and excessively humid environments.
* Do not allow any liquid or any foreign object to enter the system. Do not place beverages or any other liquid containers on or near the system.
@@ -203,7 +204,7 @@ A device that has a UPS installed is designed to operate in the following enviro
> Maximum corrosive contaminant levels measured at &lt;/= 50% relative humidity. -->
-> ![Notice Icon](./media/azure-stack-edge-pro-r-safety/icon-safety-notice.png) **NOTICE:** &nbsp;Changes or modifications made to the equipment not expressly approved by Microsoft may void the user's authority to operate the equipment.
+> ![Notice Icon - 2](./media/azure-stack-edge-pro-r-safety/icon-safety-notice.png) **NOTICE:** &nbsp;Changes or modifications made to the equipment not expressly approved by Microsoft may void the user's authority to operate the equipment.
CANADA and USA:
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-pro-r-system-requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-system-requirements.md
@@ -7,7 +7,7 @@
Previously updated : 09/22/2020 Last updated : 02/05/2021 # Azure Stack Edge Pro R system requirements
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-system-requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-system-requirements.md
@@ -7,14 +7,14 @@
Previously updated : 07/15/2020 Last updated : 02/05/2021 # Azure Stack Edge Pro system requirements

This article describes the important system requirements for your Microsoft Azure Stack Edge Pro solution and for the clients connecting to Azure Stack Edge Pro. We recommend that you review the information carefully before you deploy your Azure Stack Edge Pro. You can refer back to this information as necessary during the deployment and subsequent operation.
-The system requirements for the Azure Stack Edge Pro include:
+The system requirements for the Azure Stack Edge Pro include:
- **Software requirements for hosts** - describes the supported platforms, browsers for the local configuration UI, SMB clients, and any additional requirements for the clients that access the device.
- **Networking requirements for the device** - provides information about any networking requirements for the operation of the physical device.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-troubleshoot.md
@@ -7,7 +7,7 @@
Previously updated : 01/21/2021 Last updated : 02/05/2021 # Troubleshoot your Azure Stack Edge Pro issues
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/alerts.md
@@ -3,7 +3,7 @@ Title: View and configure DDoS protection alerts for Azure DDoS Protection Stand
description: Learn how to view and configure DDoS protection alerts for Azure DDoS Protection Standard. documentationcenter: na-+ ms.devlang: na
@@ -36,7 +36,8 @@ In this tutorial, you'll learn how to:
With these templates, you will be able to configure alerts for all public IP addresses that you have enabled diagnostic logging on. Hence, to use these alert templates, you will first need a Log Analytics workspace with diagnostic settings enabled. See [View and configure DDoS diagnostic logging](diagnostic-logging.md).

### Azure Monitor alert rule
-This [Azure Monitor alert rule](https://aka.ms/ddosmitigationstatus) will run a simple query to detect when an active DDoS mitigation is occurring. This indicates a potential attack. Action groups can be used to invoke actions as a result of the alert.
+
+This [Azure Monitor alert rule](https://aka.ms/DDOSmitigationstatus) will run a simple query to detect when an active DDoS mitigation is occurring. This indicates a potential attack. Action groups can be used to invoke actions as a result of the alert.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAzure%2520Monitor%2520Alert%2520-%2520DDoS%2520Mitigation%2520Started%2FDDoSMitigationStarted.json)
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/manage-ddos-protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection.md
@@ -74,7 +74,7 @@ You cannot move a virtual network to another resource group or subscription when
### Enable DDoS protection for all virtual networks
-This [policy](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Policy%20-%20Virtual%20Networks%20should%20be%20associated%20with%20an%20Azure%20DDoS%20Protection%20Standard%20plan) will detect any virtual networks in a defined scope that do not have DDoS Protection Standard enabled, then optionally create a remediation task that will create the association to protect the VNet. For detailed step-by-step instructions on how to deploy this policy, see https://aka.ms/ddosvnetpolicy-techcommunity.
+This [policy](https://aka.ms/ddosvnetpolicy) will detect any virtual networks in a defined scope that do not have DDoS Protection Standard enabled, then optionally create a remediation task that will create the association to protect the VNet. For detailed step-by-step instructions on how to deploy this policy, see https://aka.ms/ddosvnetpolicy-techcommunity.
## Validate and test
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-deploy-windows-cs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-windows-cs.md
@@ -17,7 +17,7 @@ Last updated 09/09/2020
-# Deploy an Defender for IoT C#-based security agent for Windows
+# Deploy a Defender for IoT C#-based security agent for Windows
This guide explains how to install the Defender for IoT C#-based security agent on Windows.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-investigate-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-investigate-device.md
@@ -39,7 +39,7 @@ To locate your Log Analytics workspace for data storage:
Following configuration, do the following to access data stored in your Log Analytics workspace:
-1. Select and click on an Defender for IoT alert in your IoT Hub.
+1. Select and click on a Defender for IoT alert in your IoT Hub.
1. Click **Further investigation**. 1. Select **To see which devices have this alert click here and view the DeviceId column**.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-send-security-messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-send-security-messages.md
@@ -4,7 +4,7 @@ description: Learn how to send your security messages using Defender for IoT.
documentationcenter: na-+ editor: ''
@@ -12,15 +12,15 @@ ms.devlang: na
na Previously updated : 09/09/2020- Last updated : 2/8/2021+ # Send security messages SDK
-This how-to guide explains the Defender for IoT service capabilities when you choose to collect and send your device security messages without using an Defender for IoT agent, and explains how to do so.
+This how-to guide explains the Defender for IoT service capabilities when you choose to collect and send your device security messages without using a Defender for IoT agent, and explains how to do so.
In this guide, you learn how to:
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-set-up-high-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-set-up-high-availability.md
@@ -111,7 +111,7 @@ This allows the connection between the primary and secondary appliances for back
### On the secondary
-1. Sign in to the CLI as an Defender for IoT user.
+1. Sign in to the CLI as a Defender for IoT user.
2. Run the following command on the secondary. **Do not run with sudo**:
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/overview-security-agents https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/overview-security-agents.md
@@ -29,7 +29,7 @@ Use the following workflow to deploy and test your Defender for IoT security age
1. If your IoT Hub has no registered devices, [Register a new device](../iot-accelerators/iot-accelerators-device-simulation-overview.md).
-1. [Create an DefenderIotMicroAgent module twin](quickstart-create-micro-agent-module-twin.md) for your devices.
+1. [Create a DefenderIotMicroAgent module twin](quickstart-create-micro-agent-module-twin.md) for your devices.
1. To install the agent on an Azure simulated device instead of installing on an actual device, [spin up a new Azure Virtual Machine (VM)](../virtual-machines/linux/quick-create-portal.md) in an available zone.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/quickstart-azure-rtos-security-module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-azure-rtos-security-module.md
@@ -39,8 +39,11 @@ The next stage for getting started is preparing your Azure resources. You'll nee
An IoT Hub connection is required to get started. 1. Open your **IoT Hub** in the Azure portal.+ 1. Navigate to **IoT Devices**.+ 1. Select **Create**.+ 1. Copy the IoT connection string to the [configuration file](how-to-azure-rtos-security-module.md). The connection credentials are taken from the user application configuration **HOST_NAME**, **DEVICE_ID**, and **DEVICE_SYMMETRIC_KEY**.
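The Azure RTOS module itself is configured in C, but as a rough illustration of how those three values combine, here is a Python sketch using the `azure-iot-device` package that builds the standard IoT Hub device connection string from the same parts; all values shown are placeholders:

```python
from azure.iot.device import IoTHubDeviceClient

# Placeholder values corresponding to HOST_NAME, DEVICE_ID, and
# DEVICE_SYMMETRIC_KEY in the user application configuration.
HOST_NAME = "your-hub.azure-devices.net"
DEVICE_ID = "your-device-id"
DEVICE_SYMMETRIC_KEY = "base64-encoded-symmetric-key"

# The three values compose a standard IoT Hub device connection string.
conn_str = (
    f"HostName={HOST_NAME};"
    f"DeviceId={DEVICE_ID};"
    f"SharedAccessKey={DEVICE_SYMMETRIC_KEY}"
)

client = IoTHubDeviceClient.create_from_connection_string(conn_str)
client.connect()
print(f"Connected to {HOST_NAME} as {DEVICE_ID}")
client.disconnect()
```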
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/quickstart-create-micro-agent-module-twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-micro-agent-module-twin.md
@@ -1,5 +1,5 @@
Title: Create a Defender Iot micro agent module twin (Preview)
+ Title: Create a Defender IoT micro agent module twin (Preview)
description: Learn how to create individual DefenderIotMicroAgent module twins for new devices.
@@ -10,7 +10,7 @@
-# Create a Defender Iot micro agent module twin (Preview)
+# Create a Defender IoT micro agent module twin (Preview)
You can create individual **DefenderIotMicroAgent** module twins for new devices. You can also batch create module twins for all devices in an IoT Hub.
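As a sketch of what creation could look like outside the portal, here is a hedged Python example using the `azure-iot-hub` service SDK to create a **DefenderIotMicroAgent** module identity (and therefore its module twin) on one device. The connection string and device ID are placeholders, and generating the module keys locally is one option among several:

```python
import base64
import os

from azure.iot.hub import IoTHubRegistryManager

# Placeholder values -- substitute your IoT Hub owner connection string
# and the target device ID.
IOTHUB_CONNECTION_STRING = "HostName=your-hub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=..."
DEVICE_ID = "your-device-id"
MODULE_ID = "DefenderIotMicroAgent"


def random_key() -> str:
    """Generate a base64-encoded symmetric key for the module identity."""
    return base64.b64encode(os.urandom(32)).decode()


registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Creating the module identity also creates its module twin. To batch
# create, loop this call over every device ID in the hub.
module = registry_manager.create_module_with_sas(
    DEVICE_ID,
    MODULE_ID,
    managed_by="",
    primary_key=random_key(),
    secondary_key=random_key(),
)
print("Created module twin:", module.module_id)
```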
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/release-notes.md
@@ -13,7 +13,7 @@ ms.devlang: na
na Previously updated : 02/07/2021 Last updated : 02/08/2021
@@ -24,13 +24,13 @@ This article lists new features and feature enhancements for Defender for IoT.
Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## February 2021
-### Enhanced custom alert rules
+### Sensor - enhanced custom alert rules
You can now create custom alert rules based on the day, group of days, and time period in which network activity was detected. Working with day and time rule conditions is useful, for example, in cases where alert severity is derived from the time the alert event takes place. For example, create a custom rule that triggers a high severity alert when network activity is detected on a weekend or in the evening. This feature is available on the sensor with the release of version 10.1.
-### Export alerts from on-premises management console
+### On-premises management console - export alerts
Alert information can now be exported to a .csv file from the on-premises management console. You can export information for all detected alerts or export information based on the filtered view.
@@ -45,7 +45,7 @@ A new device builder module is available. The module, referred to as a micro-age
- **Security posture management** - proactively monitor the security posture of your IoT devices. - **Continuous, real-time IoT/OT threat detection** - detect threats such as botnets, brute force attempts, crypto miners, and suspicious network activity.
-The deprecated security module documentation will be moved to the Classic folder.
+The deprecated security module documentation will be moved to the *Agent-based solution for device builders>Classic* folder.
This feature set is available with the current public preview cloud release.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/security-agent-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/security-agent-architecture.md
@@ -31,7 +31,7 @@ Security agents support the following features:
- Aggregate raw security events into messages sent through IoT Hub. -- Configure remotely through use of the **azureiotsecurity** module twin. To learn more, see [Configure an Defender for IoT agent](how-to-agent-configuration.md).
+- Configure remotely through use of the **azureiotsecurity** module twin. To learn more, see [Configure a Defender for IoT agent](how-to-agent-configuration.md).
Defender for IoT security agents are developed as open-source projects, and are available from GitHub:
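For a rough idea of what remote configuration through the **azureiotsecurity** module twin looks like, here is a Python sketch using the `azure-iot-hub` service SDK to patch the twin's desired properties. The property name used here is purely illustrative, not the real agent configuration schema; see the linked configuration article for the exact property names:

```python
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import Twin, TwinProperties

# Placeholder service connection string and device ID.
IOTHUB_CONNECTION_STRING = "HostName=your-hub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=..."
DEVICE_ID = "your-device-id"
MODULE_ID = "azureiotsecurity"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Fetch the current module twin so the patch can be applied against its etag.
twin = registry_manager.get_module_twin(DEVICE_ID, MODULE_ID)

# "exampleSetting" is a hypothetical property for illustration only.
patch = Twin(properties=TwinProperties(desired={"exampleSetting": "PT5M"}))
updated = registry_manager.update_module_twin(DEVICE_ID, MODULE_ID, patch, twin.etag)
print("Updated desired properties:", updated.properties.desired)
```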
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/security-edge-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/security-edge-architecture.md
@@ -40,7 +40,7 @@ Defender for IoT security module for IoT Edge offers the following features:
- Remove configuration through use of the security module twin.
- See [Configure an Defender for IoT agent](how-to-agent-configuration.md) to learn more.
+ See [Configure a Defender for IoT agent](how-to-agent-configuration.md) to learn more.
Defender for IoT security module for IoT Edge runs in a privileged mode under IoT Edge. Privileged mode is required to allow the module to monitor the Operating System, and other IoT Edge modules.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-create-custom-sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-custom-sdks.md
@@ -115,6 +115,9 @@ In the non-query paging pattern, here is a sample method showing how to retrieve
The second pattern is only generated for the Query API. It uses a `continuationToken` explicitly.
+>[!TIP]
+> A main reason for getting pages is to calculate the [Query Unit charges](concepts-query-units.md) for a Query API call.
+ Here is an example with this pattern: :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/queries.cs" id="PagedQuery":::
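As an illustrative counterpart in Python (the article's samples are C#), here is a minimal sketch of explicit page handling with the `azure-digitaltwins-core` package; the instance URL is a placeholder, and the continuation token is surfaced by the generic Azure SDK pager rather than by a Digital Twins-specific API:

```python
from azure.digitaltwins.core import DigitalTwinsClient
from azure.identity import DefaultAzureCredential

# Placeholder endpoint -- replace with your Azure Digital Twins instance URL.
client = DigitalTwinsClient(
    "https://your-instance.api.wus2.digitaltwins.azure.net",
    DefaultAzureCredential(),
)

query_result = client.query_twins("SELECT * FROM digitaltwins")

# Iterate page by page; the pager exposes the continuation token that the
# underlying Query API passes between pages.
pager = query_result.by_page()
for page in pager:
    twins = list(page)
    print(f"Page of {len(twins)} twins; next token: {pager.continuation_token}")
```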
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-azure-signalr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-azure-signalr.md
@@ -70,7 +70,7 @@ First, go to the browser where the Azure portal is opened, and complete the foll
Next, start Visual Studio (or another code editor of your choice), and open the code solution in the *digital-twins-samples-master > ADTSampleApp* folder. Then do the following steps to create the functions:
-1. Create a new C# sharp class called **SignalRFunctions.cs** in the *SampleFunctionsApp* project.
+1. In the *SampleFunctionsApp* project, create a new C# class called **SignalRFunctions.cs**.
1. Replace the contents of the class file with the following code:
@@ -83,7 +83,9 @@ Next, start Visual Studio (or another code editor of your choice), and open the
This should resolve any dependency issues in the class.
-Next, publish your function to Azure, using the steps described in the [*Publish the app* section](tutorial-end-to-end.md#publish-the-app) of the *Connect an end-to-end solution* tutorial. You can publish it to the same app service/function app that you used in the end-to-end tutorial prereq, or create a new one, but you may want to use the same one to minimize duplication. Also, finish out the app publish with the following steps:
+Next, publish your function to Azure, using the steps described in the [*Publish the app* section](tutorial-end-to-end.md#publish-the-app) of the *Connect an end-to-end solution* tutorial. You can publish it to the same app service/function app that you used in the end-to-end tutorial [prerequisite](#prerequisites), or create a new one, but you may want to use the same one to minimize duplication.
+
+Next, finish out the app publish with the following steps:
1. Collect the *negotiate* function's **HTTP endpoint URL**. To do this, go to the Azure portal's [Function apps](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp) page and select your function app from the list. In the app menu, select *Functions* and choose the *negotiate* function. :::image type="content" source="media/how-to-integrate-azure-signalr/functions-negotiate.png" alt-text="Azure portal view of the function app, with 'Functions' highlighted in the menu. The list of functions are shown on the page, and the 'negotiate' function is also highlighted.":::
@@ -125,23 +127,11 @@ Back on the *Create Event Subscription* page, hit **Create**.
## Configure and run the web app
-In this section, you will see the result in action. First, you'll start up the **simulated device sample app** that sends telemetry data through your Azure Digital Twins instance. Then, you'll configure the **sample client web app** to connect to the Azure SignalR flow you've set up. After that, you should be able to see the data updating the sample web app in real time.
-
-### Run the device simulator
-
-During the end-to-end tutorial prerequisite, you [configured the device simulator](tutorial-end-to-end.md#configure-and-run-the-simulation) to send data through an IoT Hub and to your Azure Digital Twins instance.
-
-Now, all you have to do is start the simulator project, located in *digital-twins-samples-master > DeviceSimulator > DeviceSimulator.sln*. If you're using Visual Studio, you can open the project and then run it with this button in the toolbar:
--
-A console window will open and display simulated temperature telemetry messages. These are being sent through your Azure Digital Twins instance, where they are then picked up by the Azure functions and SignalR.
-
-You don't need to do anything else in this console, but leave it running while you complete the next steps.
+In this section, you will see the result in action. First, configure the **sample client web app** to connect to the Azure SignalR flow you've set up. Next, you'll start up the **simulated device sample app** that sends telemetry data through your Azure Digital Twins instance. After that, you'll view the sample web app to see the simulated device data updating in real time.
### Configure the sample client web app
-Next, set up the **SignalR integration web app sample** with these steps:
+Set up the **SignalR integration web app sample** with these steps:
1. Using Visual Studio or any code editor of your choice, open the unzipped _**Azure_Digital_Twins_SignalR_integration_web_app_sample**_ folder that you downloaded in the [*Download the sample applications*](#download-the-sample-applications) section. 1. Open the *src/App.js* file, and replace the URL in `HubConnectionBuilder` with the HTTP endpoint URL of the **negotiate** function that you saved earlier:
@@ -163,6 +153,18 @@ Next, set permissions in your function app in the Azure portal:
:::image type="content" source="media/how-to-integrate-azure-signalr/cors-setting-azure-function.png" alt-text="CORS Setting in Azure Function":::
+### Run the device simulator
+
+During the end-to-end tutorial prerequisite, you [configured the device simulator](tutorial-end-to-end.md#configure-and-run-the-simulation) to send data through an IoT Hub and to your Azure Digital Twins instance.
+
+Now, all you have to do is start the simulator project, located in *digital-twins-samples-master > DeviceSimulator > DeviceSimulator.sln*. If you're using Visual Studio, you can open the project and then run it with this button in the toolbar:
++
+A console window will open and display simulated temperature telemetry messages. These are being sent through your Azure Digital Twins instance, where they are then picked up by the Azure functions and SignalR.
+
+You don't need to do anything else in this console, but leave it running while you complete the next step.
+ ### See the results To see the results in action, start the **SignalR integration web app sample**. You can do this from any console window at the *Azure_Digital_Twins_SignalR_integration_web_app_sample\src* location, by running this command:
dms https://docs.microsoft.com/en-us/azure/dms/quickstart-create-data-migration-service-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/quickstart-create-data-migration-service-portal.md
@@ -11,20 +11,18 @@
Previously updated : 07/21/2020 Last updated : 01/29/2021 # Quickstart: Create an instance of the Azure Database Migration Service by using the Azure portal
-In this Quickstart, you use the Azure portal to create an instance of Azure Database Migration Service. After you create the instance, you can use it to migrate data from SQL Server to Azure SQL Database.
+In this quickstart, you use the Azure portal to create an instance of Azure Database Migration Service. After you create the instance, you can use it to migrate data from multiple database sources to Azure data platforms, such as from SQL Server to Azure SQL Database or from SQL Server to an Azure SQL Managed Instance.
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. ## Sign in to the Azure portal
-Open your web browser, navigate to the [Microsoft Azure portal](https://portal.azure.com/), and then enter your credentials to sign in to the portal.
-
-The default view is your service dashboard.
+Open your web browser, navigate to the [Microsoft Azure portal](https://portal.azure.com/), and then enter your credentials to sign in to the portal. The default view is your service dashboard.
> [!NOTE] > You can create up to 10 instances of DMS per subscription per region. If you require a greater number of instances, please create a support ticket.
@@ -33,47 +31,60 @@ The default view is your service dashboard.
Register the Microsoft.DataMigration resource provider before you create your first instance of the Database Migration Service.
-1. In the Azure portal, select **All services**, and then select **Subscriptions**.
+1. In the Azure portal, search for and select **Subscriptions**.
+
+ ![Show portal subscriptions](media/quickstart-create-data-migration-service-portal/portal-select-subscription.png)
2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
+ ![Show resource providers](media/quickstart-create-data-migration-service-portal/portal-select-resource-provider.png)
+
+3. Search for migration, and then select **Register** for **Microsoft.DataMigration**.
![Register resource provider](media/quickstart-create-data-migration-service-portal/dms-register-provider.png) ## Create an instance of the service
-1. Select +**Create a resource** to create an instance of Azure Database Migration Service.
+1. In the Azure portal menu or on the **Home** page, select **Create a resource**. Search for and select **Azure Database Migration Service**.
+
+ ![Azure Marketplace](media/quickstart-create-data-migration-service-portal/portal-marketplace.png)
-2. Search the marketplace for "migration", select **Azure Database Migration Service**, and then on the **Azure Database Migration Service** screen, select **Create**.
+2. On the **Azure Database Migration Service** screen, select **Create**.
-3. On the **Create Migration Service** screen:
+ ![Create Azure Database Migration Service instance](media/quickstart-create-data-migration-service-portal/dms-create.png)
- - Choose a **Service Name** that is memorable and unique to identify your instance of Azure Database Migration Service.
- - Select the Azure **Subscription** in which you want to create the instance.
- - Select an existing **Resource Group** or create a new one.
- - Choose the **Location** that is closest to your source or target server.
- - Select an existing **Virtual network** or create one.
+3. On the **Create Migration Service** basics screen:
- The virtual network provides Azure Database Migration Service with access to the source database and target environment.
+ - Select the subscription.
+ - Create a new resource group or choose an existing one.
+ - Specify a name for the instance of the Azure Database Migration Service.
+ - Select the location in which you want to create the instance of Azure Database Migration Service.
+ - Choose **Azure** as the service mode.
+ - Select a pricing tier. For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
+
+ ![Configure Azure Database Migration Service instance basics settings](media/quickstart-create-data-migration-service-portal/dms-create-basics.png)
- For more information on how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
- Select **Next: Networking**.
- - Select Basic: 1 vCore for the **Pricing tier**.
+4. On the **Create Migration Service** networking screen:
- ![Create migration service](media/quickstart-create-data-migration-service-portal/dms-create-service1.png)
+ - Select an existing virtual network or create a new one. The virtual network provides Azure Database Migration Service with access to the source database and target environment. For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-4. Select **Create**.
+ ![Configure Azure Database Migration Service instance networking settings](media/quickstart-create-data-migration-service-portal/dms-network-settings.png)
- After a few moments, your instance of Azure Database Migration service is created and ready to use. Azure Database Migration Service displays as shown in the following image:
+ - Select **Review + Create** to create the service.
+
+ - After a few moments, your instance of Azure Database Migration service is created and ready to use:
![Migration service created](media/quickstart-create-data-migration-service-portal/dms-service-created.png) ## Clean up resources
-You can clean up the resources created in this Quickstart by deleting the [Azure resource group](../azure-resource-manager/management/overview.md). To delete the resource group, navigate to the instance of the Azure Database Migration Service that you created. Select the **Resource group** name, and then select **Delete resource group**. This action deletes all assets in the resource group as well as the group itself.
+You can clean up the resources created in this quickstart by deleting the [Azure resource group](../azure-resource-manager/management/overview.md). To delete the resource group, navigate to the instance of the Azure Database Migration Service that you created. Select the **Resource group** name, and then select **Delete resource group**. This action deletes all assets in the resource group as well as the group itself.
## Next steps
-> [!div class="nextstepaction"]
-> [Migrate SQL Server to Azure SQL Database](tutorial-sql-server-to-azure-sql.md)
+* [Migrate SQL Server to Azure SQL Database offline](tutorial-sql-server-to-azure-sql.md)
+* [Migrate SQL Server to Azure SQL Database online](tutorial-sql-server-azure-sql-online.md)
+* [Migrate SQL Server to an Azure SQL Managed Instance offline](tutorial-sql-server-to-managed-instance.md)
+* [Migrate SQL Server to an Azure SQL Managed Instance online](tutorial-sql-server-managed-instance-online.md)
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-sql-server-to-azure-sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-azure-sql.md
@@ -232,6 +232,9 @@ After the service is created, locate it within the Azure portal, open it, and th
1. On the **Select target** screen, specify the connection details for the target Azure SQL Database, which is the pre-provisioned Azure SQL Database to which the **Adventureworks2016** schema was deployed by using the Data Migration Assistant. ![Select Target](media/tutorial-sql-server-to-azure-sql/dms-select-target2.png)
+
+ > [!NOTE]
+ > Private endpoint connections to the target Azure SQL Database are supported by the Azure Database Migration Service except when using a custom DNS name.
2. Select **Next: Map to target databases**, and then map the source and the target database for migration.
@@ -270,4 +273,4 @@ After the service is created, locate it within the Azure portal, open it, and th
- [SQL migration using Azure Data Migration Service](https://www.microsoft.com/handsonlabs/SelfPacedLabs/?storyGuid=3b671509-c3cd-4495-8e8f-354acfa09587) hands-on lab. - For information about known issues and limitations when performing online migrations to Azure SQL Database, see the article [Known issues and workarounds with Azure SQL Database online migrations](known-issues-azure-sql-online.md). - For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).-- For information about Azure SQL Database, see the article [What is the Azure SQL Database service?](../azure-sql/database/sql-database-paas-overview.md).
+- For information about Azure SQL Database, see the article [What is the Azure SQL Database service?](../azure-sql/database/sql-database-paas-overview.md).
firewall https://docs.microsoft.com/en-us/azure/firewall/firewall-workbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-workbook.md
@@ -21,7 +21,7 @@ Before starting, you should [enable diagnostic logging](firewall-diagnostics.md#
## Get started
-To deploy the workbook, go to [Azure Monitor Workbook for Azure Firewall](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20Firewall/Azure%20Monitor%20Workbook) and following the instructions on the page. Azure Firewall Workbook is designed to work across multi-tenants, multi-subscriptions, and is filterable to multiple firewalls.
+To deploy the workbook, go to [Azure Monitor Workbook for Azure Firewall](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20Firewall/Workbook%20-%20Azure%20Firewall%20Monitor%20Workbook) and follow the instructions on the page. The Azure Firewall workbook is designed to work across multiple tenants and subscriptions, and is filterable to multiple firewalls.
## Overview page
@@ -53,4 +53,4 @@ You can look at the logs and understand more about the resource based on the sou
## Next steps -- Learn more about [Azure Firewall Diagnostics](firewall-diagnostics.md)
+- Learn more about [Azure Firewall Diagnostics](firewall-diagnostics.md)
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/shared-query-azure-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/shared-query-azure-powershell.md
@@ -35,7 +35,7 @@ before you begin.
## Create a Resource Graph shared query
-With the `Az.ResourceGraph` PowerShell module added to your environment of choice, it's time to create
+With the **Az.ResourceGraph** PowerShell module added to your environment of choice, it's time to create
a Resource Graph shared query. The shared query is an Azure Resource Manager object that you can grant permission to or run in Azure Resource Graph Explorer. The query summarizes the count of all resources grouped by _location_.
@@ -52,7 +52,7 @@ resources grouped by _location_.
New-AzResourceGroup -Name resource-graph-queries -Location westus2 ```
-1. Create the Azure Resource Graph shared query using the `Az.ResourceGraph` PowerShell module and
+1. Create the Azure Resource Graph shared query using the **Az.ResourceGraph** PowerShell module and
[New-AzResourceGraphQuery](/powershell/module/az.resourcegraph/new-azresourcegraphquery) cmdlet:
@@ -90,7 +90,7 @@ If you wish to remove the Resource Graph shared query and resource group from yo
environment, you can do so by using the following commands: - [Remove-AzResourceGraphQuery](/powershell/module/az.resourcegraph/remove-azresourcegraphquery)-- [Remove-AzResourceGroup](/cli/azure/group#az_group_delete)
+- [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup)
```azurepowershell-interactive # Delete the Azure Resource Graph shared query
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-component-versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-component-versioning.md
@@ -129,3 +129,5 @@ For more information on which virtual machine SKUs to select for your cluster, s
- [Enterprise Security Package](./enterprise-security-package.md) - [Hortonworks release notes associated with Azure HDInsight versions](./hortonworks-release-notes.md) - [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)++
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-access-yarn-app-logs-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-access-yarn-app-logs-linux.md
@@ -32,7 +32,7 @@ YARN Timeline Server includes the following type of data:
## YARN applications and logs
-Application logs (and the associated container logs) are critical in debugging problematic Hadoop applications. YARN provides a nice framework for collecting, aggregating, and storing application logs with [Log Aggregation](https://hortonworks.com/blog/simplifying-user-logs-management-and-access-in-yarn/).
+Application logs (and the associated container logs) are critical in debugging problematic Hadoop applications. YARN provides a nice framework for collecting, aggregating, and storing application logs with Log Aggregation.
The Log Aggregation feature makes accessing application logs more deterministic. It aggregates logs across all containers on a worker node and stores them as one aggregated log file per worker node. The log is stored on the default file system after an application finishes. Your application may use hundreds or thousands of containers, but logs for all containers run on a single worker node are always aggregated to a single file. So there's only 1 log per worker node used by your application. Log Aggregation is enabled by default on HDInsight clusters version 3.0 and above. Aggregated logs are located in default storage for the cluster. The following path is the HDFS path to the logs:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/overview-iot-central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
@@ -151,9 +151,6 @@ Each Azure subscription has default quotas that could impact the scope of your I
Now that you have an overview of IoT Central, here are some suggested next steps: -- Understand the available [Azure technologies and services for creating IoT solutions](../../iot-fundamentals/iot-services-and-technologies.md). - If you're a device developer and want to dive into some code, the suggested next step is to [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md). - Familiarize yourself with the [Azure IoT Central UI](overview-iot-central-tour.md). - Get started by [creating an Azure IoT Central application](quick-deploy-iot-central.md).-- Learn how to [Connect an Azure IoT Edge device](./tutorial-add-edge-as-leaf-device.md).-- Learn more about [Azure IoT technologies and services](../../iot-fundamentals/iot-services-and-technologies.md).
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/quick-configure-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-configure-rules.md
@@ -66,6 +66,10 @@ Shortly after you save the rule, it becomes live. When the conditions defined in
> [!NOTE] > After your testing is complete, turn off the rule to stop receiving alerts in your inbox.
+## Clean up resources
++ ## Next steps In this quickstart, you learned how to:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/quick-create-simulated-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-create-simulated-device.md
@@ -163,6 +163,10 @@ After you create a new simulated device, the builder can use this device to cont
:::image type="content" source="media/quick-create-simulated-device/configure-preview.png" alt-text="Screenshot showing a configured preview device":::
+## Clean up resources
++ ## Next steps In this quickstart, you learned how to you create a **Sensor Controller** device template for an ESP32 device and add a simulated device to your application.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/quick-deploy-iot-central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-deploy-iot-central.md
@@ -56,6 +56,10 @@ To create a new Azure IoT Central application from the **Custom application** te
:::image type="content" source="media/quick-deploy-iot-central/iotcentral-application.png" alt-text="Azure IoT Central application":::
+## Clean up resources
++ ## Next steps In this quickstart, you created an IoT Central application. Here's the suggested next step to continue learning about IoT Central:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/quick-monitor-devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-monitor-devices.md
@@ -46,6 +46,10 @@ Change **Target temperature** to 80 to warm the device and reduce the humidity.
:::image type="content" source="media/quick-monitor-devices/change-settings.png" alt-text="Screenshot that shows the updated target temperature setting for the device":::
+## Clean up resources
++ ## Next steps In this quickstart, you learned how to:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/tutorial-connect-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-connect-device.md
@@ -83,6 +83,10 @@ As a device developer, you can use the **Raw data** view to examine the raw data
On this view, you can select the columns to display and set a time range to view. The **Unmodeled data** column shows data from the device that doesn't match any property or telemetry definitions in the device template.
+## Clean up resources
++ ## Next steps If you'd prefer to continue through the set of IoT Central tutorials and learn more about building an IoT Central solution, see:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/tutorial-create-telemetry-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-create-telemetry-rules.md
@@ -103,6 +103,10 @@ Choose the rule you want to enable or disable. Toggle the **Enabled/Disabled** b
Choose the rule you want to customize. Use one or more filters in the **Target devices** section to narrow the scope of the rule to the devices you want to monitor.
+## Clean up resources
++ ## Next steps In this tutorial, you learned how to:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/tutorial-define-gateway-device-type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
@@ -196,6 +196,10 @@ Both your simulated downstream devices are now connected to your simulated gatew
Select a gateway device template and gateway device instance, and select **Join**.
+## Clean up resources
++ ## Next steps In this tutorial, you learned how to:
@@ -207,9 +211,6 @@ In this tutorial, you learned how to:
* Add relationships. * Publish your device template.
-> [!NOTE]
-> VS Code based code generation is currently not supported for gateway devices modeled in IoT Central.
- Next, as a device developer, you can learn how to: > [!div class="nextstepaction"]
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/tutorial-use-device-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-use-device-groups.md
@@ -78,6 +78,10 @@ To analyze the telemetry for a device group:
You can customize the view, change the time period shown, and export the data.
+## Clean up resources
++ ## Next steps Now that you've learned how to use device groups in your Azure IoT Central application, here is the suggested next step:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/energy/concept-iot-central-smart-meter-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/concept-iot-central-smart-meter-app.md
@@ -4,7 +4,7 @@ description: This article introduces key concepts relating the architecture of A
Last updated 12/11/2020-+
iot-central https://docs.microsoft.com/en-us/azure/iot-central/energy/concept-iot-central-solar-panel-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/concept-iot-central-solar-panel-app.md
@@ -4,7 +4,7 @@ description: This article introduces key concepts relating the architecture of A
Last updated 12/11/2020-+
iot-central https://docs.microsoft.com/en-us/azure/iot-central/energy/overview-iot-central-energy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/overview-iot-central-energy.md
@@ -1,5 +1,5 @@
Title: Build energy solutions with IoT Central | Microsoft Docs
+ Title: What are the Azure IoT Central energy solutions | Microsoft Docs
description: Learn to build energy solution using Azure IoT Central application templates.
@@ -10,7 +10,7 @@
-# Build energy solutions with IoT Central
+# What are the IoT Central energy solutions?
Smart meters and solar panels are playing an important role in the energy industry transformation. Smart meters give more control and real-time insight into energy consumption, and the growth of solar panels is driving breakthroughs in renewable energy generation. The smart meter and solar panel monitoring apps are sample templates that show these capabilities. Partners can use these templates to build energy solutions with IoT Central for their specific needs. No new coding and no additional cost are required to deploy and use these applications. Learn more about the energy application templates and their capabilities.
@@ -56,9 +56,10 @@ After you deploy the app, you'll see the simulated solar panel data within 1-2 m
> ![Solar Panel App Dashboard](media/overview-iot-central-energy/solar-panel-app-dashboard.png) ## Next steps+ To get started building an energy solution:+ * Create application templates for free: [smart meter app](https://apps.azureiotcentral.com/build/new/smart-meter-monitoring), [solar panel app](https://apps.azureiotcentral.com/build/new/solar-panel-monitoring) * Learn about [smart meter monitoring app concepts](./concept-iot-central-smart-meter-app.md) * Learn about [solar panel monitoring app concepts](./concept-iot-central-solar-panel-app.md)
-* Learn about [IoT Central platform](../index.yml)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/energy/tutorial-smart-meter-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/tutorial-smart-meter-app.md
@@ -97,10 +97,7 @@ If you decide to not continue using this application, delete your application wi
## Next steps
-Learn about smart meter app architecture refer to
+To learn about smart meter app architecture, see:
+ > [!div class="nextstepaction"]
-> [the concept article](./concept-iot-central-smart-meter-app.md)
-* Create smart meter application templates for free:
-[smart meter app](https://apps.azureiotcentral.com/build/new/smart-meter-monitoring)
-* Learn more about IoT Central, see
-[IoT Central overview](../index.yml)
+> [Smart meter application architecture](./concept-iot-central-smart-meter-app.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/energy/tutorial-solar-panel-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/tutorial-solar-panel-app.md
@@ -100,5 +100,4 @@ If you decide not to continue using this application, delete your application wi
> [!div class="nextstepaction"] > [Azure IoT Central - solar panel app architecture](./concept-iot-central-solar-panel-app.md)
-* [Create solar panel application templates for free](https://apps.azureiotcentral.com/build/new/solar-panel-monitoring)
-* [Azure IoT Central overview](../index.yml)
+
iot-central https://docs.microsoft.com/en-us/azure/iot-central/government/concepts-connectedwastemanagement-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/concepts-connectedwastemanagement-architecture.md
@@ -4,7 +4,7 @@ description: Learn concepts for a connected waste management solution built with
Last updated 12/11/2020-+
iot-central https://docs.microsoft.com/en-us/azure/iot-central/government/concepts-waterconsumptionmonitoring-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/concepts-waterconsumptionmonitoring-architecture.md
@@ -4,7 +4,7 @@ description: Learn concepts for a water consumption monitoring solution built wi
Last updated 12/11/2020-+
iot-central https://docs.microsoft.com/en-us/azure/iot-central/government/concepts-waterqualitymonitoring-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/concepts-waterqualitymonitoring-architecture.md
@@ -4,7 +4,7 @@ description: Learn concepts for a water quality monitoring solution built with A
Last updated 12/11/2020-+
iot-central https://docs.microsoft.com/en-us/azure/iot-central/government/overview-iot-central-government https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/overview-iot-central-government.md
@@ -1,5 +1,5 @@
Title: Building government solutions with Azure IoT Central
+ Title: What are the Azure IoT Central government solutions
description: Learn to build smart city solutions using Azure IoT Central application templates.
@@ -9,7 +9,7 @@
-# Building government solutions with Azure IoT Central
+# What are the IoT Central government solutions?
Get started with building smart city solutions using Azure IoT Central application templates. Start now with **water quality monitoring**, **water consumption monitoring**, and **connected waste management**.
@@ -66,8 +66,6 @@ Get started with the [Connected Waste Management application tutorial](./tutoria
## Next steps
-* Try any of the Government application templates in IoT Central for free [create app](https://apps.azureiotcentral.com/build/government)
* Learn about [Water Quality Monitoring concepts](./concepts-waterqualitymonitoring-architecture.md) * Learn about [Water Consumption Monitoring concepts](./concepts-waterconsumptionmonitoring-architecture.md) * Learn about [Connected Waste Management concepts](./concepts-connectedwastemanagement-architecture.md)
-* Learn about IoT Central, see [IoT Central overview](../core/overview-iot-central.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/government/tutorial-connected-waste-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-connected-waste-management.md
@@ -1,6 +1,6 @@
Title: 'Tutorial: Create a connected waste management app with Azure IoT Central'
-description: Learn to build a connected waste management application by using Azure IoT Central application templates.
+description: 'Tutorial: Learn to build a connected waste management application by using Azure IoT Central application templates'
Last updated 12/11/2020
@@ -14,13 +14,14 @@ This tutorial shows you how to use Azure IoT Central to create a connected waste
Specifically, you learn how to:
-* Use the Azure IoT Central *Connected waste management* template to create your app.
-* Explore and customize the operator dashboard.
-* Explore the connected waste bin device template.
-* Explore simulated devices.
-* Explore and configure rules.
-* Configure jobs.
-* Customize your application branding.
+> [!div class="checklist"]
+> Use the Azure IoT Central *Connected waste management* template to create your app.
+> Explore and customize the operator dashboard.
+> Explore the connected waste bin device template.
+> Explore simulated devices.
+> Explore and configure rules.
+> Configure jobs.
+> Customize your application branding.
## Prerequisites
iot-central https://docs.microsoft.com/en-us/azure/iot-central/government/tutorial-water-quality-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-quality-monitoring.md
@@ -13,8 +13,6 @@
# Tutorial: Create a water quality monitoring application in Azure IoT Central -- This tutorial guides you through the creation of a water quality monitoring application in Azure IoT Central. You create the application from the Azure IoT Central **Water quality monitoring** application template. In this tutorial, you learn to:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/healthcare/concept-continuous-patient-monitoring-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/concept-continuous-patient-monitoring-architecture.md
@@ -4,7 +4,7 @@ description: Tutorial - Learn about a continuous patient monitoring solution arc
Last updated 12/11/2020-+
iot-central https://docs.microsoft.com/en-us/azure/iot-central/healthcare/overview-iot-central-healthcare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/overview-iot-central-healthcare.md
@@ -1,5 +1,5 @@
Title: Build healthcare solutions with Azure IoT Central | Microsoft Docs
+ Title: What are the Azure IoT Central healthcare solutions | Microsoft Docs
description: Learn to build healthcare solution using Azure IoT Central application templates.
@@ -10,9 +10,7 @@
-# Building healthcare solutions with Azure IoT Central
--
+# What are the IoT Central healthcare solutions?
Learn to build healthcare solutions with Azure IoT Central using application templates.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/healthcare/tutorial-continuous-patient-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md
@@ -20,6 +20,10 @@ In this tutorial, you learn how to:
> * Create an application template > * Walk through the application template
+## Prerequisites
+
+An Azure subscription is recommended. Alternatively, you can use a free, 7-day trial. If you don't have an Azure subscription, you can create one on the [Azure sign-up page](https://aka.ms/createazuresubscription).
+ ## Create an application template Navigate to the [Azure IoT Central application manager website](https://apps.azureiotcentral.com/). Select **Build** from the left-hand navigation bar and then select the **Healthcare** tab.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/architecture-connected-logistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-connected-logistics.md
@@ -5,7 +5,7 @@
-+ Last updated 10/20/2019
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/architecture-digital-distribution-center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-digital-distribution-center.md
@@ -5,7 +5,7 @@
-+ Last updated 10/20/2019
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/architecture-micro-fulfillment-center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-micro-fulfillment-center.md
@@ -4,7 +4,7 @@ description: Learn to build a micro-fulfillment center application using our Mic
Last updated 10/13/2019-+
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/architecture-smart-inventory-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-smart-inventory-management.md
@@ -5,7 +5,7 @@
-+ Last updated 10/20/2019
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/overview-iot-central-retail https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/overview-iot-central-retail.md
@@ -1,5 +1,5 @@
Title: Building retail solutions with Azure IoT Central | Microsoft Docs
+ Title: What are the Azure IoT Central retail solutions | Microsoft Docs
description: Learn about using Azure IoT Central application templates to build connected logistics, digital distribution center, in-store analytics, condition monitoring, checkout, smart inventory management, and retail solutions.
@@ -10,7 +10,7 @@
-# Building retail solutions with Azure IoT Central
+# What are the IoT Central retail solutions?
Azure IoT Central is an IoT app platform that reduces the burden and cost associated with developing, managing, and maintaining enterprise-grade IoT solutions. Choosing to build with Azure IoT Central gives you the opportunity to focus your time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.
@@ -175,9 +175,5 @@ To learn how to deploy the solution, see the [Create a video analytics applicati
To get started building a retail solution: * Get started with the [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) tutorial that walks you through how to build a solution with one of the in-store analytics application templates.
-* [Deploy and walk through a connected logistics application template](./tutorial-iot-central-connected-logistics.md).
-* [Deploy and walk through a digital distribution center application template](./tutorial-iot-central-digital-distribution-center.md).
-* [Deploy and walk through a smart inventory management application template](./tutorial-iot-central-smart-inventory-management.md).
-* [Deploy and walk through the micro-fulfillment center application template](./tutorial-micro-fulfillment-center.md).
* [Deploy and walk through the video analytics application template](./tutorial-video-analytics-deploy.md).
-* Learn more about IoT Central in the [IoT Central overview](../core/overview-iot-central.md).
+* [Deploy and walk through a connected logistics application template](./tutorial-iot-central-connected-logistics.md).
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/store-analytics-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/store-analytics-architecture.md
@@ -4,7 +4,7 @@ description: Learn to build an in-store analytics application using Checkout app
Last updated 10/13/2019-+
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-in-store-analytics-create-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
@@ -13,8 +13,6 @@ Last updated 11/12/2019
# Tutorial: Create an in-store analytics application in Azure IoT Central -- The tutorial shows solution builders how to create an Azure IoT Central in-store analytics application. The sample application is for a retail store. It's a solution to the common business need to monitor and adapt to occupancy and environmental conditions. The sample application that you build includes three real devices: a Rigado Cascade 500 gateway, and two RuuviTag sensors. The tutorial also shows how to use the simulated occupancy sensor included in the application template for testing purposes. The Rigado C500 gateway serves as the communication hub in your application. It communicates with sensors in your store and manages their connections to the cloud. The RuuviTag is an environmental sensor that provides telemetry including temperature, humidity, and pressure. The simulated occupancy sensor provides a way to track motion and presence in the checkout areas of a store.
@@ -269,7 +267,12 @@ To add an action to the rule:
Within a few minutes, the specified email account should begin to receive emails. The application sends email each time a sensor indicates that the humidity level exceeded the value in your condition.
+## Clean up resources
++ ## Next steps+ In this tutorial, you learned how to: * Use the Azure IoT Central **In-store analytics - checkout** template to create a retail store application
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-in-store-analytics-customize-dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
@@ -239,7 +239,12 @@ To add a command tile to reboot the gateway:
1. Optionally, select the **Reboot** tile to run the reboot command on your gateway.
+## Clean up resources
++ ## Next steps+ In this tutorial, you learned how to: * Change the dashboard name
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md
@@ -501,6 +501,4 @@ You can delete your Power BI datasets and dashboard by deleting the workspace fr
These three tutorials have shown you an end-to-end solution that uses the **In-store analytics - checkout** IoT Central application template. You've connected devices to the application, used IoT Central to monitor the devices, and used Power BI to build a dashboard to view insights from the device telemetry. A recommended next step is to explore one of the other IoT Central application templates: > [!div class="nextstepaction"]
-> * [Build energy solutions with IoT Central](../energy/overview-iot-central-energy.md)
-> * [Build government solutions with IoT Central](../government/overview-iot-central-government.md)
-> * [Build healthcare solutions with IoT Central](../healthcare/overview-iot-central-healthcare.md)
+> [Build energy solutions with IoT Central](../energy/overview-iot-central-energy.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-iot-central-connected-logistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
@@ -126,10 +126,8 @@ If you're not going to continue to use this application, delete the application
:::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-cleanup.png" alt-text="Template cleanup"::: ## Next steps
-* Learn more about
+
+Learn more about:
+ > [!div class="nextstepaction"]
-> [Connected logistics concept](./architecture-connected-logistics.md)
-* Learn more about other
-[IoT Central retail templates](./overview-iot-central-retail.md)
-* Learn more about
-[IoT Central overview](../core/overview-iot-central.md)
+> [Connected logistics concepts](./architecture-connected-logistics.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-iot-central-digital-distribution-center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
@@ -11,13 +11,13 @@ Last updated 10/20/2019
# Tutorial: Deploy and walk through a digital distribution center application template -- This tutorial shows you how to get started by deploying an IoT Central **digital distribution center** application template. You will learn how to deploy the template, what is included out of the box, and what you might want to do next. In this tutorial, you learn how to:
-* Create digital distribution center application
-* Walk through the application
+
+> [!div class="checklist"]
+> Create digital distribution center application
+> Walk through the application
## Prerequisites * No specific prerequisites are required to deploy this app
@@ -106,10 +106,8 @@ If you're not going to continue to use this application, delete the application
> ![Screenshot showing how to delete the application when you're done with it](./media/tutorial-iot-central-ddc/ddc-cleanup.png) ## Next steps
-* Learn more about digital distribution center solution architecture
+
+Learn more about digital distribution center solution architecture:
+
> [!div class="nextstepaction"]
> [digital distribution center concept](./architecture-digital-distribution-center.md)
-* Learn more about other
-[IoT Central retail templates](./overview-iot-central-retail.md)
-* Learn more about IoT Central refer to
-[IoT Central overview](../core/overview-iot-central.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-iot-central-smart-inventory-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
@@ -11,13 +11,13 @@ Last updated 10/20/2019
# Tutorial: Deploy and walk through a smart inventory management application template
-
-
This tutorial shows you how to get started by deploying an IoT Central **smart inventory management** application template. You will learn how to deploy the template, what is included out of the box, and what you might want to do next.
-In this tutorial, you learn how to,
-* create smart inventory management application
-* walk through the application
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a smart inventory management application
+> * Walk through the application
## Prerequisites
@@ -107,10 +107,8 @@ If you're not going to continue to use this application, delete the application
> ![Screenshot showing how to delete the application when you're done with it](./media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_cleanup.png)

## Next steps
-* Learn more about smart inventory management
+
+Learn more about smart inventory management:
+
> [!div class="nextstepaction"]
> [Smart inventory management concept](./architecture-smart-inventory-management.md)
-* Learn more about other
-[IoT Central retail templates](./overview-iot-central-retail.md)
-* Learn more about IoT Central refer to
-[IoT Central overview](../core/overview-iot-central.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-micro-fulfillment-center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
@@ -87,8 +87,7 @@ If you're not going to continue to use this application, delete the application
## Next steps
-Learn more about
+Learn more about:
+
> [!div class="nextstepaction"]
> [micro-fulfillment center solution architecture](./architecture-micro-fulfillment-center.md)
-* Learn more about the [Azure IoT Central retail templates](./overview-iot-central-retail.md)
-* Learn more about [Azure IoT Central](../core/overview-iot-central.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-video-analytics-build-module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-build-module.md
@@ -69,6 +69,13 @@ Open the local *live-video-analytics* repository folder with VS Code.
1. The version of the **LvaEdgeGatewayModule** image increments every time the build completes. You need to use this version in the deployment manifest file.
+## Clean up resources
+
+If you've finished with the application, you can remove all the resources you created as follows:
+
+1. In the IoT Central application, navigate to the **Your application** page in the **Administration** section. Then select **Delete**.
+1. In the Azure portal, delete the **lva-rg** resource group.
+
## Next steps

Now that you've learned about the video analytics - object and motion detection application template and the LVA IoT Edge modules, the suggested next step is to learn more about:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-video-analytics-create-app-openvino https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-create-app-openvino.md
@@ -119,3 +119,25 @@ On the **LVA Edge Gateway v2** page, select **+ Replace manifest**.
Navigate to the *lva-configuration* folder and select the *deployment.openvino.amd64.json* manifest file you edited previously. Select **Upload**. When the validation is complete, select **Replace**.

[!INCLUDE [iot-central-video-analytics-part4](../../../includes/iot-central-video-analytics-part4.md)]
+
+## Clean up resources
+
+If you've finished with the application, you can remove all the resources you created as follows:
+
+1. In the IoT Central application, navigate to the **Your application** page in the **Administration** section. Then select **Delete**.
+1. In the Azure portal, delete the **lva-rg** resource group.
+1. On your local machine, stop the **amp-viewer** Docker container.
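If you prefer the command line, the resource group and container steps can also be scripted. A minimal sketch, assuming the `lva-rg` resource group and `amp-viewer` container names this tutorial uses:

```bash
# Delete the tutorial's Azure resource group and everything it contains
az group delete --name lva-rg --yes --no-wait

# Stop the local media player viewer container
docker stop amp-viewer
```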
+
+## Next steps
+
+You've now created an IoT Central application using the **Video analytics - object and motion detection** application template, created a device template for the gateway device, and added a gateway device to the application.
+
+If you want to try out the video analytics - object and motion detection application using IoT Edge modules running in a cloud VM with simulated video streams:
+
+> [!div class="nextstepaction"]
+> [Create an IoT Edge instance for video analytics (Linux VM)](tutorial-video-analytics-iot-edge-vm.md)
+
+If you want to try out the video analytics - object and motion detection application using IoT Edge modules running on a real device with a real **ONVIF** camera:
+
+> [!div class="nextstepaction"]
+> [Create an IoT Edge instance for video analytics (Intel NUC)](tutorial-video-analytics-iot-edge-nuc.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-video-analytics-create-app-yolo-v3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-create-app-yolo-v3.md
@@ -121,3 +121,25 @@ On the **LVA Edge Gateway v2** page, select **+ Replace manifest**.
Navigate to the *lva-configuration* folder and select the *deployment.amd64.json* manifest file you edited previously. Select **Upload**. When the validation is complete, select **Replace**.

[!INCLUDE [iot-central-video-analytics-part4](../../../includes/iot-central-video-analytics-part4.md)]
+
+## Clean up resources
+
+If you've finished with the application, you can remove all the resources you created as follows:
+
+1. In the IoT Central application, navigate to the **Your application** page in the **Administration** section. Then select **Delete**.
+1. In the Azure portal, delete the **lva-rg** resource group.
+1. On your local machine, stop the **amp-viewer** Docker container.
+
+## Next steps
+
+You've now created an IoT Central application using the **Video analytics - object and motion detection** application template, created a device template for the gateway device, and added a gateway device to the application.
+
+If you want to try out the video analytics - object and motion detection application using IoT Edge modules running in a cloud VM with simulated video streams:
+
+> [!div class="nextstepaction"]
+> [Create an IoT Edge instance for video analytics (Linux VM)](tutorial-video-analytics-iot-edge-vm.md)
+
+If you want to try out the video analytics - object and motion detection application using IoT Edge modules running on a real device with a real **ONVIF** camera:
+
+> [!div class="nextstepaction"]
+> [Create an IoT Edge instance for video analytics (Intel NUC)](tutorial-video-analytics-iot-edge-nuc.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-video-analytics-deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-deploy.md
@@ -1,15 +1,15 @@
Title: 'How to deploy the video analytics - object and motion detection Azure IoT Central application template'
-description: This guide summarizes the steps to deploy an Azure IoT Central application using the video analytics - object and motion detection application template.
+ Title: 'Tutorial: How to deploy the video analytics - object and motion detection Azure IoT Central application template'
+description: Tutorial - This guide summarizes the steps to deploy an Azure IoT Central application using the video analytics - object and motion detection application template.
-+ Last updated 07/31/2020
-# How to deploy an IoT Central application using the video analytics - object and motion detection application template
+# Tutorial: How to deploy an IoT Central application using the video analytics - object and motion detection application template
For an overview of the key *video analytics - object and motion detection* application components, see [object and motion detection video analytics application architecture](architecture-video-analytics.md).
@@ -17,6 +17,10 @@ The following video gives a walkthrough of how to use the _video analytics - obj
> [!VIDEO https://www.youtube.com/embed/Bo3FziU9bSA]
+## Prerequisites
+
+An Azure subscription is recommended. Alternatively, you can use a free, 7-day trial. If you don't have an Azure subscription, you can create one on the [Azure sign-up page](https://aka.ms/createazuresubscription).
+
## Deploy the application

Complete the following steps to deploy an IoT Central application using the video analytics application template:
@@ -37,6 +41,20 @@ Complete the following steps to deploy an IoT Central application using the vide
- View captured video that shows detected objects.
- Tidy up.
+## Clean up resources
+
+When you've finished with the application, you can remove all the resources you created as follows:
+
+1. In the IoT Central application, navigate to the **Your application** page in the **Administration** section. Then select **Delete**.
+1. In the Azure portal, delete the **lva-rg** resource group.
+1. On your local machine, stop the **amp-viewer** Docker container.
+
## Next steps
-Now you have an overview of the steps to deploy and use the video analytics application template, see [Create a video analytics application in Azure IoT Central (YOLO v3)](tutorial-video-analytics-create-app-yolo-v3.md) or [Create a video analytics in Azure IoT Central (OpenVINO&trade;)](tutorial-video-analytics-create-app-openvino.md) to get started.
+Now that you have an overview of the steps to deploy and use the video analytics application template, get started with either:
+
+> [!div class="nextstepaction"]
+> [Create a video analytics application in Azure IoT Central (YOLO v3)](tutorial-video-analytics-create-app-yolo-v3.md)
+
+> [!div class="nextstepaction"]
+> [Create a video analytics application in Azure IoT Central (OpenVINO&trade;)](tutorial-video-analytics-create-app-openvino.md)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-video-analytics-iot-edge-nuc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-iot-edge-nuc.md
@@ -143,6 +143,14 @@ Identify the RTSP stream URLs for the cameras connected to your IoT Edge device,
> [!TIP] > Try to view the camera stream on the IoT Edge computer using a media player such as VLC.
+## Clean up resources
+
+If you've finished with the application, you can remove all the resources you created as follows:
+
+1. In the IoT Central application, navigate to the **Your application** page in the **Administration** section. Then select **Delete**.
+1. In the Azure portal, delete the **lva-rg** resource group.
+1. On your local machine, stop the **amp-viewer** Docker container.
+
## Next steps

You've now deployed the IoT Edge runtime and the LVA modules to the Intel NUC gateway device.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-video-analytics-iot-edge-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-iot-edge-vm.md
@@ -113,6 +113,14 @@ sudo docker ps
The list includes a container called **live555**.
+## Clean up resources
+
+If you've finished with the application, you can remove all the resources you created as follows:
+
+1. In the IoT Central application, navigate to the **Your application** page in the **Administration** section. Then select **Delete**.
+1. In the Azure portal, delete the **lva-rg** resource group.
+1. On your local machine, stop the **amp-viewer** Docker container.
+
## Next steps

You've now deployed the IoT Edge runtime, the LVA modules, and the Live555 simulation stream in a Linux VM running on Azure.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/retail/tutorial-video-analytics-manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-manage.md
@@ -189,7 +189,7 @@ You can pause live video analytics processing in the application:
* Click on the **Streaming Endpoint** resource.
* On the **Streaming endpoint details** page, select **Stop**.
-## Tidy up
+## Clean up resources
If you've finished with the application, you can remove all the resources you created as follows:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/horizontal-arm-route-messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/horizontal-arm-route-messages.md
@@ -86,7 +86,7 @@ This section provides the steps to deploy the template, create a virtual device,
1. The last environment variable is the **Device ID**. In the command window, set up the command and execute it.
- ```cms
+ ```cmd
SET IOT_DEVICE_ID=<device-id-goes-here>
```
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-csharp-csharp-file-upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-file-upload.md
@@ -74,16 +74,15 @@ In this section, you modify the device app you created in [Send cloud-to-device
1. Add the following method to the **Program** class:

```csharp
- private static async void SendToBlobAsync()
+ private static async Task SendToBlobAsync(string fileName)
{
- string fileName = "image.jpg";
Console.WriteLine("Uploading file: {0}", fileName);

var watch = System.Diagnostics.Stopwatch.StartNew();
- using (var sourceData = new FileStream(@"image.jpg", FileMode.Open))
- {
- await deviceClient.UploadToBlobAsync(fileName, sourceData);
- }
+ var sas = await deviceClient.GetFileUploadSasUriAsync(new FileUploadSasUriRequest { BlobName = fileName });
+ var blob = new CloudBlockBlob(sas.GetBlobUri());
+ await blob.UploadFromFileAsync(fileName);
+ await deviceClient.CompleteFileUploadAsync(new FileUploadCompletionNotification { CorrelationId = sas.CorrelationId, IsSuccess = true });
watch.Stop();
Console.WriteLine("Time to upload file: {0}ms\n", watch.ElapsedMilliseconds);
@@ -95,7 +94,7 @@ In this section, you modify the device app you created in [Send cloud-to-device
1. Add the following line in the **Main** method, right before `Console.ReadLine()`:

```csharp
- SendToBlobAsync();
+ await SendToBlobAsync("image.jpg");
```

> [!NOTE]
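The reworked upload method relies on types from the device SDK and the Azure Storage blob client. A minimal sketch of the using directives it assumes; the exact namespaces and package names are assumptions based on the types used, not stated in this change:

```csharp
using System;                                    // Console
using System.Threading.Tasks;                    // Task
using Microsoft.Azure.Devices.Client;            // DeviceClient
using Microsoft.Azure.Devices.Client.Transport;  // FileUploadSasUriRequest, FileUploadCompletionNotification (assumed)
using Microsoft.Azure.Storage.Blob;              // CloudBlockBlob (assumed package)
```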
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/howto-use-iot-pnp-bridge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-use-iot-pnp-bridge.md
@@ -277,6 +277,8 @@ After the bridge starts, use the Azure IoT explorer tool to verify it's working.
[!INCLUDE [iot-pnp-iot-explorer.md](../../includes/iot-pnp-iot-explorer.md)]
+## Clean up resources
+
[!INCLUDE [iot-pnp-clean-resources.md](../../includes/iot-pnp-clean-resources.md)]

## Next steps
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/overview-iot-plug-and-play-current-release https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/overview-iot-plug-and-play-current-release.md
@@ -71,4 +71,8 @@ For current and previous IoT Plug and Play announcements, see the following blog
- [Public preview refresh (Posted on August 29, 2020)](https://techcommunity.microsoft.com/t5/internet-of-things/add-quot-plug-and-play-quot-to-your-iot-solutions/ba-p/1548531)
- [Prepare and certify your devices for IoT Plug and Play (Posted on August 26, 2020)](https://azure.microsoft.com/blog/prepare-and-certify-your-devices-for-iot-plug-and-play/)
- [IoT Plug and Play is now available in preview (Posted on August 22, 2019)](https://azure.microsoft.com/blog/iot-plug-and-play-is-now-available-in-preview/)
-- [Build with Azure IoT Central and IoT Plug and Play (Posted on May 7, 2019)](https://azure.microsoft.com/blog/build-with-azure-iot-central-and-iot-plug-and-play/)
+- [Build with Azure IoT Central and IoT Plug and Play (Posted on May 7, 2019)](https://azure.microsoft.com/blog/build-with-azure-iot-central-and-iot-plug-and-play/)
+
+## Next steps
+
+The suggested next step is to review [What is IoT Plug and Play?](overview-iot-plug-and-play.md).
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/quickstart-connect-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/quickstart-connect-device.md
@@ -60,6 +60,10 @@ zone_pivot_groups: programming-languages-set-twenty-six
:::zone-end
+## Clean up resources
+
+If you've finished with the quickstarts and tutorials, see [Clean up resources](set-up-environment.md#clean-up-resources).
+
## Next steps

In this quickstart, you've learned how to connect an IoT Plug and Play device to an IoT hub. To learn more about how to build a solution that interacts with your IoT Plug and Play devices, see:
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/quickstart-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/quickstart-service.md
@@ -52,6 +52,10 @@ zone_pivot_groups: programming-languages-set-ten
:::zone-end
+## Clean up resources
+
+If you've finished with the quickstarts and tutorials, see [Clean up resources](set-up-environment.md#clean-up-resources).
+
## Next steps

In this quickstart, you learned how to connect an IoT Plug and Play device to an IoT solution. To learn more about IoT Plug and Play device models, see:
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/set-up-environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/set-up-environment.md
@@ -1,20 +1,22 @@
Title: Set up the IoT resources you need for IoT Plug and Play | Microsoft Docs
-description: Create an IoT Hub and Device Provisioning Service instance to use with the IoT Plug and Play quickstarts and tutorials.
+ Title: Quickstart - Set up the IoT resources you need for IoT Plug and Play | Microsoft Docs
+description: Quickstart - Create an IoT Hub and Device Provisioning Service instance to use with the IoT Plug and Play quickstarts and tutorials.
Last updated 08/11/2020
-+
# Setup IoT Hub and DPS one time before completing any quickstart, tutorial, or how-to
-# Set up your environment for the IoT Plug and Play quickstarts and tutorials
+# Quickstart - Set up your environment for the IoT Plug and Play quickstarts and tutorials
Before you can complete any of the IoT Plug and Play quickstarts and tutorials, you need to configure an IoT hub and the Device Provisioning Service (DPS) in your Azure subscription. You'll also need local copies of the model files used by the sample applications and the Azure IoT explorer tool.
+## Prerequisites
+
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
@@ -127,7 +129,7 @@ Configure the tool to use the model files you downloaded previously. From the ho
To learn more, see [Install and use Azure IoT explorer](howto-use-iot-explorer.md).
-## Remove the resources
+## Clean up resources
You can use the IoT hub and DPS instance for all the IoT Plug and Play quickstarts and tutorials, so you only need to complete the steps in this article once. When you're finished, you can remove them from your subscription with the following command:
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/tutorial-configure-tsi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/tutorial-configure-tsi.md
@@ -1,6 +1,6 @@
Title: Use Azure Time Series Insights to store and analyze your Azure IoT Plug and Play device telemetry
-description: Set up a Time Series Insights environment and connect your IoT hub to view and analyze telemetry from your IoT Plug and Play devices.
+ Title: Tutorial - Use Azure Time Series Insights to store and analyze your Azure IoT Plug and Play device telemetry
+description: Tutorial - Set up a Time Series Insights environment and connect your IoT hub to view and analyze telemetry from your IoT Plug and Play devices.
Last updated 10/14/2020
@@ -11,11 +11,16 @@
# As an IoT solution builder, I want to historize and analyze data from my IoT Plug and Play devices by routing to Time Series Insights.
-# Preview tutorial: Create and configure a Time Series Insights Gen2 environment
+# Tutorial: Create and configure a Time Series Insights Gen2 environment
In this tutorial, you learn how to create and configure an [Azure Time Series Insights Gen2](../time-series-insights/overview-what-is-tsi.md) environment to integrate with your IoT Plug and Play solution. Use Time Series Insights to collect, process, store, query, and visualize time series data at the scale of Internet of Things (IoT).
-First, you provision a Time Series Insights environment and connect your IoT hub as a streaming event source. Then you work through model synchronization to author your [Time Series Model](../time-series-insights/concepts-model-overview.md). You use the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) sample model files that you used for the temperature controller and thermostat devices.
+In this tutorial, you:
+
+> [!div class="checklist"]
+> * Provision a Time Series Insights environment and connect your IoT hub as a streaming event source.
+> * Work through model synchronization to author your [Time Series Model](../time-series-insights/concepts-model-overview.md).
+> * Use the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) sample model files that you used for the temperature controller and thermostat devices.
> [!NOTE]
> This integration between Time Series Insights and IoT Plug and Play is in preview. The way that DTDL device models map to the Time Series Insights Time Series Model might change.
@@ -219,10 +224,11 @@ Go back to the charting pane and expand **Device Fleet** > your device. Select *
![Screenshot showing how to change the instance type for thermostat2.](./media/tutorial-configure-tsi/charting-values.png)
-## Next steps
+## Clean up resources
-* To learn more about the various charting options, including interval sizing and y-axis controls, see [Azure Time Series Insights Explorer](../time-series-insights/concepts-ux-panels.md).
-* For an in-depth overview of your environment's Time Series Model, see [Time Series Model in Azure Time Series Insights Gen2](../time-series-insights/concepts-model-overview.md).
+## Next steps
-* To dive into the query APIs and the Time Series Expression syntax, see [Azure Time Series Insights Gen2 Query APIs](/rest/api/time-series-insights/reference-query-apis).
+> [!div class="nextstepaction"]
+> To learn more about the various charting options, including interval sizing and y-axis controls, see [Azure Time Series Insights Explorer](../time-series-insights/concepts-ux-panels.md).
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/tutorial-migrate-device-to-module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/tutorial-migrate-device-to-module.md
@@ -17,7 +17,12 @@ This tutorial shows you how to connect a generic IoT Plug and Play [module](../i
A device is an IoT Plug and Play device if it publishes its model ID when it connects to an IoT hub and implements the properties and methods described in the Digital Twins Definition Language (DTDL) model identified by the model ID. To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way.
-To demonstrate how to implement an IoT Plug and Play module, this tutorial shows you how to convert the thermostat C# device sample into a generic module.
+To demonstrate how to implement an IoT Plug and Play module, this tutorial shows you how to:
+
+> [!div class="checklist"]
+> * Add a device with a module to your IoT hub.
+> * Convert the thermostat C# device sample into a generic module.
+> * Use the service SDK to interact with the module.
## Prerequisites
@@ -230,6 +235,10 @@ You can use the Azure IoT Explorer tool to see:
* IoT Edge module twin property updates triggering IoT Plug and Play notifications. * The IoT Edge module react to your IoT Plug and Play commands.
+## Clean up resources
+
+
## Next steps

In this tutorial, you've learned how to connect an IoT Plug and Play device with modules to an IoT hub. To learn more about IoT Plug and Play device models, see:
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/tutorial-multiple-components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/tutorial-multiple-components.md
@@ -60,6 +60,10 @@ zone_pivot_groups: programming-languages-set-twenty-six
:::zone-end
+## Clean up resources
+
+
## Next steps

In this tutorial, you've learned how to connect an IoT Plug and Play device with components to an IoT hub. To learn more about IoT Plug and Play device models, see:
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/tutorial-use-mqtt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/tutorial-use-mqtt.md
@@ -134,15 +134,14 @@ The following definitions are for the MQTT topics the device uses to send inform
* The `DEVICE_TELEMETRY_MESSAGE` defines the topic the device uses to send telemetry to your IoT hub.

For more information about MQTT, visit the [MQTT Samples for Azure IoT](https://github.com/Azure-Samples/IoTMQTTSample/) GitHub repository.
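For context, these topics follow IoT Hub's documented MQTT conventions. A sketch of what the telemetry topic definition typically looks like in a C client; the exact macro in the sample may differ:

```c
/* Device-to-cloud telemetry topic per IoT Hub's MQTT convention;
   %s is replaced with the device ID at runtime. */
#define DEVICE_TELEMETRY_MESSAGE "devices/%s/messages/events/"
```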
-
-## Next steps
-In this tutorial, you learned how to modify an MQTT device client to follow the IoT Plug and Play conventions. To learn more about IoT Plug and Play, see:
+## Clean up resources
-> [!div class="nextstepaction"]
-> [Architecture](concepts-architecture.md)
+
+## Next steps
-To learn more about IoT Hub support for the MQTT protocol, see:
+In this tutorial, you learned how to modify an MQTT device client to follow the IoT Plug and Play conventions. To learn more about IoT Hub support for the MQTT protocol, see:
> [!div class="nextstepaction"]
> [Communicate with your IoT hub using the MQTT protocol](../iot-hub/iot-hub-mqtt-support.md)
lighthouse https://docs.microsoft.com/en-us/azure/lighthouse/concepts/cross-tenant-management-experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/cross-tenant-management-experience.md
@@ -1,7 +1,7 @@
Title: Cross-tenant management experiences description: Azure delegated resource management enables a cross-tenant management experience. Previously updated : 02/02/2021 Last updated : 02/08/2021
@@ -160,6 +160,7 @@ Most tasks and services can be performed on delegated resources across managed t
Support requests: - [Open support requests from **Help + support**](../../azure-portal/supportability/how-to-create-azure-support-request.md#getting-started) in the Azure portal for delegated resources (selecting the support plan available to the delegated scope)
+- Use the [Azure Quota API](/rest/api/reserved-vm-instances/quotaapi) to view and manage Azure service quotas for delegated customer resources
## Current limitations
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-overview.md
@@ -53,7 +53,7 @@ Key scenarios that you can accomplish using Standard Load Balancer include:
- Enable support for **[load-balancing](../virtual-network/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)** of **[IPv6](../virtual-network/ipv6-overview.md)**.

-- Standard Load Balancer provides multi-dimensional metrics through [Azure Monitor](../azure-monitor/overview.md). These metrics can be filtered, grouped, and broken out for a given dimension. They provide current and historic insights into performance and health of your service. [Insights for Azure Load Balancer] (https://docs.microsoft.com/azure/load-balancer/load-balancer-insights) offers a preconfigured dashboard with useful visualizations for these metrics. Resource Health is also supported. Review **[Standard Load Balancer Diagnostics](load-balancer-standard-diagnostics.md)** for more details.
+- Standard Load Balancer provides multi-dimensional metrics through [Azure Monitor](../azure-monitor/overview.md). These metrics can be filtered, grouped, and broken out for a given dimension. They provide current and historic insights into performance and health of your service. [Insights for Azure Load Balancer](https://docs.microsoft.com/azure/load-balancer/load-balancer-insights) offers a preconfigured dashboard with useful visualizations for these metrics. Resource Health is also supported. Review **[Standard Load Balancer Diagnostics](load-balancer-standard-diagnostics.md)** for more details.
- Load balance services on **[multiple ports, multiple IP addresses, or both](./load-balancer-multivip-overview.md)**.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-access-azureml-behind-firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
@@ -118,6 +118,7 @@ The hosts in this section are owned by Microsoft, and provide services required
| -- | -- | -- | -- | | Azure Active Directory | login.microsoftonline.com | login.microsoftonline.us | login.chinacloudapi.cn | | Azure portal | management.azure.com | management.azure.us | management.azure.cn |
+| Azure Resource Manager | management.azure.com | management.usgovcloudapi.net | management.chinacloudapi.cn |
**Azure Machine Learning hosts**
@@ -141,7 +142,7 @@ The hosts in this section are owned by Microsoft, and provide services required
| **Required for** | **Azure public** | **Azure Government** | **Azure China 21Vianet** | | -- | -- | -- | -- | | Compute cluster/instance | \*.batchai.core.windows.net | \*.batchai.core.usgovcloudapi.net |\*.batchai.ml.azure.cn |
-| Compute cluster/instance | graph.windows.net | | |
+| Compute cluster/instance | graph.windows.net | graph.windows.net | graph.chinacloudapi.cn |
| Compute instance | \*.instances.azureml.net | \*.instances.azureml.us | \*.instances.azureml.cn | | Compute instance | \*.instances.azureml.ms | | |
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
@@ -30,7 +30,8 @@ If you plan on using a private link enabled workspace with a customer-managed ke
## Limitations
-Using an Azure Machine Learning workspace with private link is not available in the Azure Government regions or Azure China 21Vianet regions.
+* Using an Azure Machine Learning workspace with private link is not available in the Azure Government regions or Azure China 21Vianet regions.
+* If you enable public access for a workspace secured with private link and use Azure Machine Learning studio over the public internet, some features such as the designer may fail to access your data. This problem happens when the data is stored on a service that is secured behind the VNet, such as an Azure Storage Account.
## Create a workspace that uses a private endpoint
@@ -154,6 +155,31 @@ Since communication to the workspace is only allowed from the virtual network, a
For information on Azure Virtual Machines, see the [Virtual Machines documentation](../virtual-machines/index.yml).
+## Enable public access
+
+After configuring a workspace with a private endpoint, you can optionally enable public access to the workspace. Doing so does not remove the private endpoint. It enables public access in addition to private access. To enable public access to a private link-enabled workspace, use the following steps:
+
+# [Python](#tab/python)
+
+Use the [Workspace](/python/api/azureml-core/azureml.core.workspace(class)?view=azure-ml-py) class's `update` method with the `allow_public_access_when_behind_vnet` parameter to enable public access:
+
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+ws.update(allow_public_access_when_behind_vnet=True)
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+The [Azure CLI extension for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ext/azure-cli-ml/ml/workspace?view=azure-cli-latest#ext_azure_cli_ml_az_ml_workspace_update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
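For example, a hypothetical invocation; the workspace and resource group names are placeholders:

```azurecli
az ml workspace update -w myworkspace -g myresourcegroup --allow-public-access true
```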
+
+# [Portal](#tab/azure-portal)
+
+Currently there is no way to enable this functionality using the portal.
+
+---
+
## Next steps
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-parallel-run-step https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-parallel-run-step.md
@@ -168,7 +168,16 @@ When you need a full understanding of how each node executed the score script, l
- The total number of items, successfully processed items count, and failed item count. - The start time, duration, process time and run method time.
-You can also find information on the resource usage of the processes for each worker. This information is in CSV format and is located at `~/logs/sys/perf/<ip_address>/node_resource_usage.csv`. Information about each process is available under `~logs/sys/perf/<ip_address>/processes_resource_usage.csv`.
+You can also view the results of periodic resource usage checks for each node. The log and setup files are in this folder (a short sketch after this list shows one way to inspect the CSV output):
+
+- `~/logs/perf`: Set `--resource_monitor_interval` to change the checking interval in seconds. The default interval is `600` seconds (10 minutes). To stop the monitoring, set the value to `0`. Each `<ip_address>` folder includes:
+
+ - `os/`: Information about all running processes in the node. One check runs an operating system command and saves the result to a file. On Linux, the command is `ps`. On Windows, it's `tasklist`.
+ - `%Y%m%d%H`: The subfolder is named for the hour of the check.
+ - `processes_%M`: The file name ends with the minute of the check.
+ - `node_disk_usage.csv`: Detailed disk usage of the node.
+ - `node_resource_usage.csv`: Resource usage overview of the node.
+ - `processes_resource_usage.csv`: Resource usage overview of each process.
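As a quick illustration, one way to inspect the per-node overview CSV; the IP folder name is a placeholder, and the column layout depends on your run:

```python
import pandas as pd

# Hypothetical path; substitute the <ip_address> folder from your own run's logs
usage = pd.read_csv("logs/perf/10.0.0.4/node_resource_usage.csv")

# Show the most recent resource usage samples for the node
print(usage.tail())
```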
### How do I log from my user script from a remote context?
@@ -252,4 +261,4 @@ registered_ds = ds.register(ws, '***dataset-name***', create_new_version=True)
* See the SDK reference for help with the [azureml-pipeline-steps](/python/api/azureml-pipeline-steps/azureml.pipeline.steps?preserve-view=true&view=azure-ml-py) package. View reference [documentation](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunstep?preserve-view=true&view=azure-ml-py) for ParallelRunStep class.
-* Follow the [advanced tutorial](tutorial-pipeline-batch-scoring-classification.md) on using pipelines with ParallelRunStep. The tutorial shows how to pass another file as a side input.
+* Follow the [advanced tutorial](tutorial-pipeline-batch-scoring-classification.md) on using pipelines with ParallelRunStep. The tutorial shows how to pass another file as a side input.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-network-security-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-network-security-overview.md
@@ -133,6 +133,15 @@ The following network diagram shows a secured Azure Machine Learning workspace w
### Limitations - AKS clusters must belong to the same VNet as the workspace and its associated resources.
+## Optional: Enable public access
+
+You can secure the workspace behind a VNet using a private endpoint and still allow access over the public internet. The initial configuration is the same as [securing the workspace and associated resources](#secure-the-workspace-and-associated-resources).
+
+After securing the workspace with a private link, [enable public access](how-to-configure-private-link.md#enable-public-access). You can then access the workspace from both the public internet and the VNet.
+
+### Limitations
+
- If you use Azure Machine Learning studio over the public internet, some features such as the designer may fail to access your data. This problem happens when the data is stored on a service that is secured behind the VNet, such as an Azure Storage Account.
## Optional: enable studio functionality

[Secure the workspace](#secure-the-workspace-and-associated-resources) > [Secure the training environment](#secure-the-training-environment) > [Secure the inferencing environment](#secure-the-inferencing-environment) > **Enable studio functionality** > [Configure firewall settings](#configure-firewall-settings)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-secure-inferencing-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-inferencing-vnet.md
@@ -253,7 +253,7 @@ Azure Container Instances are dynamically created when deploying a model. To ena
> * In the same resource group as your Azure Machine Learning workspace. > * If your workspace has a __private endpoint__, the virtual network used for Azure Container Instances must be the same as the one used by the workspace private endpoint. >
-> When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace cannot also be in the virtual network.
+> When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace cannot be in the virtual network.
To use ACI in a virtual network to your workspace, use the following steps:
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-secure-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-web-service.md
@@ -23,6 +23,8 @@ You use [HTTPS](https://en.wikipedia.org/wiki/HTTPS) to restrict access to web s
> The Azure Machine Learning SDK uses the term "SSL" for properties that are related to secure communications. This doesn't mean that your web service doesn't use *TLS*. SSL is just a more commonly recognized term. > > Specifically, web services deployed through Azure Machine Learning support TLS version 1.2 for AKS and ACI. For ACI deployments, if you are on older TLS version, we recommend re-deploying to get the latest TLS version.
+>
+> TLS version 1.3 isn't supported for Azure Machine Learning AKS inference.
TLS and SSL both rely on *digital certificates*, which help with encryption and identity verification. For more information on how digital certificates work, see the Wikipedia topic [Public key infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-labeled-dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-labeled-dataset.md
@@ -36,6 +36,9 @@ When you complete a data labeling project, you can export the label data from a
### COCO The COCO file is created in the default blob store of the Azure Machine Learning workspace in a folder within *export/coco*.
+
+>[!NOTE]
>In object detection projects, the exported "bbox": [x,y,width,height] values in the COCO file are normalized (scaled to the 0-1 range). For example, a bounding box at location (10, 10) with a width of 30 pixels and a height of 60 pixels in a 640x480 pixel image is annotated as (0.015625, 0.020833, 0.046875, 0.125). Because the coordinates are normalized, the "width" and "height" values show as '0.0' for all images. The actual pixel values can be recovered by reading the image dimensions with a Python library such as OpenCV or Pillow (PIL) and scaling the coordinates back up.
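As a minimal sketch of scaling a normalized bbox back to pixels; the file name and values here are illustrative:

```python
from PIL import Image

# Hypothetical image from the labeling project
img_w, img_h = Image.open("example.jpg").size  # e.g. (640, 480)

# Normalized COCO bbox exported by the project: [x, y, width, height]
x, y, w, h = 0.015625, 0.020833, 0.046875, 0.125

# Scale each value by the matching image dimension to recover pixels
bbox_pixels = [x * img_w, y * img_h, w * img_w, h * img_h]
print(bbox_pixels)  # approximately [10.0, 10.0, 30.0, 60.0] for a 640x480 image
```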
### Azure Machine Learning dataset
migrate https://docs.microsoft.com/en-us/azure/migrate/create-manage-projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/create-manage-projects.md
@@ -10,7 +10,9 @@ Last updated 11/23/2020
# Create and manage Azure Migrate projects
-This article describes how to create, manage, and delete [Azure Migrate](migrate-services-overview.md) projects. If you're using Classic Azure Migrate projects, please delete those projects and follow the steps to create a new Azure Migrate project. You can't upgrade Classic Azure Migrate projects or components to the Azure Migrate. View [FAQ](https://docs.microsoft.com/azure/migrate/resources-faq#i-have-a-project-with-the-previous-classic-experience-of-azure-migrate-how-do-i-start-using-the-new-version) before you start the creation process.
+This article describes how to create, manage, and delete [Azure Migrate](migrate-services-overview.md) projects.
+
+Classic Azure Migrate is retiring in February 2024. After that date, the classic version of Azure Migrate will no longer be supported and the inventory metadata in classic projects will be deleted. If you're using classic Azure Migrate projects, delete those projects and follow the steps to create a new Azure Migrate project. You can't upgrade classic Azure Migrate projects or components to the new version. View the [FAQ](https://docs.microsoft.com/azure/migrate/resources-faq#i-have-a-project-with-the-previous-classic-experience-of-azure-migrate-how-do-i-start-using-the-new-version) before you start the creation process.
An Azure Migrate project is used to store discovery, assessment, and migration metadata collected from the environment you're assessing or migrating. In a project you can track discovered assets, create assessments, and orchestrate migrations to Azure.
migrate https://docs.microsoft.com/en-us/azure/migrate/resources-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/resources-faq.md
@@ -28,10 +28,10 @@ Use Azure Migrate to discover, assess, and migrate on-premises infrastructure, a
[Azure Migrate](migrate-services-overview.md) provides a centralized hub for assessment and migration to Azure.

- Using Azure Migrate provides interoperability and future extensibility with Azure Migrate tools, other Azure services, and third-party tools.
-- The Azure Migrate:Server Migration tool is purpose-built for server migration to Azure. It's optimized for migration. You don't need to learn about concepts and scenarios that aren't directly relevant to migration.
-- There are no tool usage charges for migration for 180 days, from the time replication is started for a VM. This gives you time to complete migration. You only pay for the storage and network resources used in replication, and for compute charges consumed during test migrations.
-- Azure Migrate supports all migration scenarios supported by Site Recovery. In addition, for VMware VMs, Azure Migrate provides an agentless migration option.
-- We're prioritizing new migration features for the Azure Migrate:Server Migration tool only. These features aren't targeted for Site Recovery.
+- The Azure Migrate: Server Migration tool is purpose-built for server migration to Azure. It's optimized for migration. You don't need to learn about concepts and scenarios that aren't directly relevant to migration.
+- There are no tool usage charges for migration for 180 days, from the time replication is started for a VM. It gives you time to complete migration. You only pay for the storage and network resources used in replication, and for compute charges consumed during test migrations.
+- Azure Migrate supports all migration scenarios supported by Site Recovery. Also, for VMware VMs, Azure Migrate provides an agentless migration option.
+- We're prioritizing new migration features for the Azure Migrate: Server Migration tool only. These features aren't targeted for Site Recovery.
[Azure Site Recovery](../site-recovery/site-recovery-overview.md) should be used for disaster recovery only.
@@ -39,7 +39,7 @@ The Azure Migrate: Server Migration tool uses some back-end Site Recovery functi
## I have a project with the previous Classic experience of Azure Migrate. How do I start using the new version?
-You can't upgrade projects or components in the previous version to the new version. You need to [create a new Azure Migrate project](create-manage-projects.md), and [add assessment and migration tools](./create-manage-projects.md) to it. Use the tutorials to understand how to use the assessment and migration tools available. If you had a Log Analytics workspace attached to a Classic project, you can attach it to a project of current version after you delete the Classic project.
+Classic Azure Migrate is retiring in February 2024. After that date, the classic version of Azure Migrate will no longer be supported and the inventory metadata in classic projects will be deleted. You can't upgrade projects or components in the previous version to the new version. You need to [create a new Azure Migrate project](create-manage-projects.md), and [add assessment and migration tools](./create-manage-projects.md) to it. Use the tutorials to understand how to use the assessment and migration tools available. If you had a Log Analytics workspace attached to a Classic project, you can attach it to a project of the current version after you delete the Classic project.
## What's the difference between Azure Migrate: Server Assessment and the MAP Toolkit?
migrate https://docs.microsoft.com/en-us/azure/migrate/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/whats-new.md
@@ -14,9 +14,7 @@
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.

## Update (January 2021)
-- Migration of VMware VMs to Azure virtual machines with disks encrypted using double encryption with platform-managed and customer-managed keys (CMK), using Azure Migrate Server Migration (agentless replication) is now available through Azure portal.
-- Migration of VMware VMs to Azure virtual machines with disks encrypted using server-side encryption (SSE) with customer-managed keys (CMK) and double encryption with platform-managed and customer-managed keys, using Azure Migrate Server Migration (agent-based replication) is now available through Azure portal.
-- Migration of physical servers and VMs from other clouds such as AWS and GCP to Azure virtual machines with disks encrypted using server-side encryption (SSE) with customer-managed keys (CMK) and double encryption with platform-managed and customer-managed keys, using Azure Migrate Server Migration (agent-based replication) is now available through Azure portal.
+- The Azure Migrate: Server Migration tool now lets you migrate VMware virtual machines, physical servers, and virtual machines from other clouds to Azure virtual machines whose disks are encrypted using server-side encryption (SSE) with customer-managed keys (CMK).
## Update (December 2020)

- Azure Migrate now automatically installs the Azure VM agent on the VMware VMs while migrating them to Azure using the agentless method of VMware migration.
mysql https://docs.microsoft.com/en-us/azure/mysql/concepts-database-application-development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-database-application-development.md
@@ -19,7 +19,7 @@ There are code samples available for various programming languages and platforms
[Connectivity libraries used to connect to Azure Database for MySQL](concepts-connection-libraries.md)

## Tools
-Azure Database for MySQL uses the MySQL community version, compatible with MySQL common management tools such as Workbench or MySQL utilities such as mysql.exe, [phpMyAdmin](https://www.phpmyadmin.net/), [Navicat](https://www.navicat.com/products/navicat-for-mysql), and others. You can also use the Azure portal, Azure CLI, and REST APIs to interact with the database service.
+Azure Database for MySQL uses the MySQL community version, compatible with MySQL common management tools such as Workbench or MySQL utilities such as mysql.exe, [phpMyAdmin](https://www.phpmyadmin.net/), [Navicat](https://www.navicat.com/products/navicat-for-mysql), [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/), and others. You can also use the Azure portal, Azure CLI, and REST APIs to interact with the database service.
## Resource limitations

Azure Database for MySQL manages the resources available to a server by using two different mechanisms:
@@ -41,4 +41,4 @@ When a transient error occurs while connecting to a MySQL database, your code sh
Database connections are a limited resource, so we recommend sensible use of connections when accessing your MySQL database to achieve better performance.

- Access the database by using connection pooling or persistent connections.
- Access the database by using short connection life span.
-- Use retry logic in your application at the point of the connection attempt to catch failures resulting from concurrent connections have reached the maximum allowed. In the retry logic, set a short delay, and then wait for a random time before the additional connection attempts.
+- Use retry logic in your application at the point of the connection attempt to catch failures that occur when concurrent connections have reached the maximum allowed. In the retry logic, set a short delay, and then wait for a random time before the additional connection attempts.
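To make the retry guidance concrete, here's a minimal sketch in Python with `mysql-connector-python`; the connection settings, delays, and attempt count are illustrative, not part of the source article:

```python
import random
import time

import mysql.connector
from mysql.connector import Error

def connect_with_retry(config, max_attempts=5):
    """Retry the connection with a short delay plus a random wait."""
    for attempt in range(1, max_attempts + 1):
        try:
            return mysql.connector.connect(**config)
        except Error:
            if attempt == max_attempts:
                raise
            # Short delay, then a random wait to spread out reconnect attempts
            time.sleep(1 + random.uniform(0, 2 ** attempt))

# Hypothetical settings for an Azure Database for MySQL server
conn = connect_with_retry({
    "host": "mydemoserver.mysql.database.azure.com",
    "user": "myadmin@mydemoserver",
    "password": "<password>",
    "database": "mydb",
})
```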
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/connection-monitor-create-using-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-create-using-portal.md
@@ -16,6 +16,9 @@
# Create a monitor in Connection Monitor by using the Azure portal
+> [!IMPORTANT]
> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+ Learn how to use Connection Monitor to monitor communication between your resources. This article describes how to create a monitor by using the Azure portal. Connection Monitor supports hybrid and Azure cloud deployments.
@@ -198,4 +201,4 @@ Connection monitors have these scale limits:
## Next steps * Learn [how to analyze monitoring data and set alerts](./connection-monitor-overview.md#analyze-monitoring-data-and-set-alerts).
-* Learn [how to diagnose problems in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network).
+* Learn [how to diagnose problems in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network).
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/connection-monitor-create-using-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-create-using-powershell.md
@@ -16,6 +16,10 @@
# Create a connection monitor by using PowerShell
+> [!IMPORTANT]
> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+
+
Learn how to use the Connection Monitor feature of Azure Network Watcher to monitor communication between your resources.
@@ -115,4 +119,4 @@ Connection monitors have the following scale limits:
## Next steps * Learn [how to analyze monitoring data and set alerts](./connection-monitor-overview.md#analyze-monitoring-data-and-set-alerts).
-* Learn [how to diagnose issues in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network).
+* Learn [how to diagnose issues in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network).
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/connection-monitor-create-using-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-create-using-template.md
@@ -16,6 +16,9 @@
# Create a Connection Monitor using the ARM template
+> [!IMPORTANT]
> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+ Learn how to create Connection Monitor to monitor communication between your resources using the ARMClient. It supports hybrid and Azure cloud deployments.
@@ -397,4 +400,4 @@ Connection monitors have the following scale limits:
## Next steps * Learn [how to analyze monitoring data and set alerts](./connection-monitor-overview.md#analyze-monitoring-data-and-set-alerts)
-* Learn [how to diagnose issues in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network)
+* Learn [how to diagnose issues in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network)
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/connection-monitor-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-overview.md
@@ -20,6 +20,9 @@
# Network Connectivity Monitoring with Connection Monitor
+> [!IMPORTANT]
> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+ Connection Monitor provides unified end-to-end connection monitoring in Azure Network Watcher. The Connection Monitor feature supports hybrid and Azure cloud deployments. Network Watcher provides tools to monitor, diagnose, and view connectivity-related metrics for your Azure deployments. Here are some use cases for Connection Monitor:
@@ -107,7 +110,7 @@ Connection Monitor includes the following entities:
![Diagram showing a connection monitor, defining the relationship between test groups and tests](./media/connection-monitor-2-preview/cm-tg-2.png)
-You can create a connection monitor using [Azure portal](./connection-monitor-create-using-portal.md) or [ARMClient](./connection-monitor-create-using-template.md)
+You can create a connection monitor using the [Azure portal](./connection-monitor-create-using-portal.md), [ARMClient](./connection-monitor-create-using-template.md), or [PowerShell](connection-monitor-create-using-powershell.md).
All sources, destinations, and test configurations that you add to a test group get broken down to individual tests. Here's an example of how sources and destinations are broken down:
@@ -269,10 +272,11 @@ When you use metrics, set the resource type as Microsoft.Network/networkWatchers
| Metric | Display name | Unit | Aggregation type | Description | Dimensions | | | | | | | |
-| ProbesFailedPercent | % Probes Failed | Percentage | Average | Percentage of connectivity monitoring probes failed. | No dimensions |
-| AverageRoundtripMs | Avg. Round-trip Time (ms) | Milliseconds | Average | Average network RTT for connectivity monitoring probes sent between source and destination. | No dimensions |
-| ChecksFailedPercent (Preview) | % Checks Failed (Preview) | Percentage | Average | Percentage of failed checks for a test. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region |
-| RoundTripTimeMs (Preview) | Round-trip Time (ms) (Preview) | Milliseconds | Average | RTT for checks sent between source and destination. This value isn't averaged. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region |
+| ProbesFailedPercent (classic) | % Probes Failed (classic) | Percentage | Average | Percentage of connectivity monitoring probes failed. | No dimensions |
+| AverageRoundtripMs (classic) | Avg. Round-trip Time (ms) (classic) | Milliseconds | Average | Average network RTT for connectivity monitoring probes sent between source and destination. | No dimensions |
+| ChecksFailedPercent | % Checks Failed | Percentage | Average | Percentage of failed checks for a test. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region |
+| RoundTripTimeMs | Round-trip Time (ms) | Milliseconds | Average | RTT for checks sent between source and destination. This value isn't averaged. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region |
+| TestResult | Test Result | Count | Average | Connection monitor test result | SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>SourceIP <br>DestinationIP <br>SourceSubnet <br>DestinationSubnet |
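Once a connection monitor is emitting these metrics, you can also pull them programmatically. A hedged sketch using `Get-AzMetric` from the Az.Monitor module; the resource ID below is a placeholder you'd replace with your own subscription, resource group, Network Watcher, and monitor names:

```powershell
# Retrieve the % Checks Failed metric for the last hour, in 5-minute grains.
$cmId = "/subscriptions/<subscription-id>/resourceGroups/NetworkWatcherRG/providers/" +
        "Microsoft.Network/networkWatchers/NetworkWatcher_eastus/connectionMonitors/myConnectionMonitor"

Get-AzMetric -ResourceId $cmId `
    -MetricName "ChecksFailedPercent" `
    -StartTime (Get-Date).AddHours(-1) `
    -EndTime (Get-Date) `
    -TimeGrain 00:05:00 `
    -AggregationType Average
```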
#### Metric based alerts for Connection Monitor
@@ -345,4 +349,4 @@ For networks whose sources are Azure VMs, the following issues can be detected:
## Next Steps * Learn [How to create Connection Monitor using Azure portal](./connection-monitor-create-using-portal.md)
- * Learn [How to create Connection Monitor using ARMClient](./connection-monitor-create-using-template.md)
+ * Learn [How to create Connection Monitor using ARMClient](./connection-monitor-create-using-template.md)
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/connection-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor.md
@@ -23,6 +23,9 @@
> [!NOTE] > This tutorial covers Connection Monitor (classic). Try the new and improved [Connection Monitor](connection-monitor-overview.md) to experience enhanced connectivity monitoring.
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new connection monitors in Connection Monitor (classic) but you can continue to use existing connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate from Connection Monitor (classic) to the new Connection Monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before 29 February 2024.
+ Successful communication between a virtual machine (VM) and an endpoint such as another VM, can be critical for your organization. Sometimes, configuration changes are introduced which can break communication. In this tutorial, you learn how to: > [!div class="checklist"]
@@ -179,4 +182,4 @@ In this tutorial, you learned how to monitor a connection between two VMs. You l
At some point, you may find that resources in a virtual network are unable to communicate with resources in other networks connected by an Azure virtual network gateway. Advance to the next tutorial to learn how to diagnose a problem with a virtual network gateway. > [!div class="nextstepaction"]
-> [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md)
+> [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md)
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md
@@ -16,6 +16,9 @@
# Migrate to Connection Monitor from Connection Monitor (Classic)
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new connection monitors in Connection Monitor (classic) but you can continue to use existing connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate from Connection Monitor (classic) to the new Connection Monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before 29 February 2024.
+ You can migrate existing connection monitors to new, improved Connection Monitor with only a few clicks and with zero downtime. To learn more about the benefits, see [Connection Monitor](./connection-monitor-overview.md). ## Key points to note
@@ -60,4 +63,4 @@ After the migration begins, the following changes take place:
To learn more about Connection Monitor, see: * [Migrate from Network Performance Monitor to Connection Monitor](./migrate-to-connection-monitor-from-network-performance-monitor.md)
-* [Create Connection Monitor by using the Azure portal](./connection-monitor-create-using-portal.md)
+* [Create Connection Monitor by using the Azure portal](./connection-monitor-create-using-portal.md)
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md
@@ -16,6 +16,9 @@
# Migrate to Connection Monitor from Network Performance Monitor
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, migrate your tests from Network Performance Monitor to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+ You can migrate tests from Network Performance Monitor (NPM) to new, improved Connection Monitor with a single click and with zero downtime. To learn more about the benefits, see [Connection Monitor](./connection-monitor-overview.md).
purview https://docs.microsoft.com/en-us/azure/purview/manage-integration-runtimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-integration-runtimes.md
@@ -36,7 +36,7 @@ This article describes how to create and manage a self-hosted integration runtim
- Copy and paste the authentication key.
- - Download the self-hosted integration runtime from [Azure Data Factory Integration Runtime](https://www.microsoft.com/download/details.aspx?id=39717) on a local Windows machine. Run the installer.
+ - Download the self-hosted integration runtime from [Microsoft Integration Runtime](https://www.microsoft.com/download/details.aspx?id=39717) on a local Windows machine. Run the installer.
- On the **Register Integration Runtime (Self-hosted)** page, paste one of the two keys you saved earlier, and select **Register**.
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/about-move-process https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/about-move-process.md
@@ -5,7 +5,7 @@
Previously updated : 09/09/2020 Last updated : 02/01/2021 #Customer intent: As an Azure admin, I want to understand how Azure Resource Mover works.
@@ -42,7 +42,7 @@ Each move resource goes through the summarized steps.
**Step 4: Initiate move** | Kick off the move process. The move method depends on the resource type:<br/><br/> - **Stateless**: Typically, for stateless resources, the move process deploys an imported template in the target region. The template is based on the source resource settings, and any manual edits you make to target settings.<br/><br/> - **Stateful**: For stateful resources, the move process might involve creating the resource, or enabling a copy, in the target region.<br/><br/> For stateful resources only, initiating a move might result in downtime of source resources. For example, VMs and SQL. | Kicking off the move shifts the state to *Initiate move in progress*.<br/><br/> A successful initiate move moves resource state to *Commit move pending*, with no issues.<br/><br/> An unsuccessful move process moves state to *Initiate move failed*. **Step 5 Option 1: Discard move** | After the initial move, you can decide whether you want to go ahead with a full move. If you don't, you can discard the move, and Resource Mover deletes the resources created in the target. The replication process for stateful resources continues after the Discard process. This option is useful for testing. | Discarding resources moves state to *Discard in progress*.<br/><br/> Successful discard moves state to *Initiate move pending*, with no issues.<br/><br/> A failed discard moves state to *Discard move failed*. **Step 5 Option 2: Commit move** | After the initial move, if you want to go ahead with a full move, you verify resources in the target region, and when you're ready, you commit the move.<br/><br/> For stateful resources only, commit can result in source resources like VMs or SQL becoming inaccessible. | If you commit the move, resource state moves to *Commit move in progress*.<br/><br/> After a successful commit, the resource state shows *Commit move completed*, with no issues.<br/><br/> A failed commit moves state to *Commit move failed*.
-**Step 6: Delete source** | After committing the move, and verifying resources in the target region, you can delete the source resource. | After committing the move, resource state moves to *Delete source pending*.
+**Step 6: Delete source** | After committing the move, and verifying resources in the target region, you can delete the source resource. | After committing, the resource state moves to *Delete source pending*. You can then select the source resource and delete it.<br/><br/> Only resources in the *Delete source pending* state can be deleted.<br/><br/> Deleting a resource group or SQL Server in the Resource Mover portal isn't supported. These resources can only be deleted from the resource properties page.
## Move region states
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/common-questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/common-questions.md
@@ -5,7 +5,7 @@
Previously updated : 02/01/2021 Last updated : 02/04/2021
@@ -14,13 +14,6 @@
This article answers common questions about [Azure Resource Mover](overview.md).
-## General
-
-### Is Resource Mover generally available?
-
-Resource Mover is currently in public preview. Production workloads are supported.
-- ## Moving across regions
@@ -41,6 +34,9 @@ Using Resource Mover, you can currently move the following resources across regi
- Internal and public load balancers - Azure SQL databases and elastic pools
+### Can I move disks across regions?
+
+You can't select disks as resources to be moved across regions. However, disks are moved as part of a VM move.
### Can I move resources across subscriptions when I move them across regions?
@@ -94,6 +90,12 @@ Subscription was moved to a different tenant. | Disable and then enable managed
Change the source/target combinations as needed using the change option in the portal.
+### What happens when I remove a resource from a list of move resources?
+
+You can remove resources that you've added to the move list. The behavior when you remove a resource from the list depends on the resource state. [Learn more](remove-move-resources.md#vm-resource-state-after-removing).
+++ ## Next steps [Learn more](about-move-process.md) about Resource Mover components, and the move process.
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/modify-target-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/modify-target-settings.md
@@ -5,7 +5,7 @@
Previously updated : 09/10/2020 Last updated : 02/08/2021 #Customer intent: As an Azure admin, I want modify target settings when moving resources to another region.
@@ -32,16 +32,16 @@ Configuration settings you can modify are summarized in the table.
**Resource** | **Options** | |
-**VM Name** | Options:<br/><br/> - Create a new VM with the same name in the target region.<br/><br/> - Create a new VM with a different name in the target region.<br/><br/> - Use an existing VM in the target region.<br/><br/> If you create a new VM, with the exception of the settings you modify, the new target VM is assigned the same settings as the source.
-**VM availability zone** | The availability zone in which the target VM will be placed. This can be marked **NA** if you don't want to change the source settings, or if you don't want to place the VM in an availability zone.
+**VM name** | Options:<br/><br/> - Create a new VM with the same name in the target region.<br/><br/> - Create a new VM with a different name in the target region.<br/><br/> - Use an existing VM in the target region.<br/><br/> If you create a new VM, with the exception of the settings you modify, the new target VM is assigned the same settings as the source.
+**VM availability zone** | The availability zone in which the target VM will be placed. Select **Not applicable** if you don't want to change the source settings, or if you don't want to place the VM in an availability zone.
**VM SKU** | The [VM type](https://azure.microsoft.com/pricing/details/virtual-machines/series/) (available in the target region) that will be used for the target VM.<br/><br/> The selected target VM shouldn't be smaller than the source VM.
-**Networking resources** | Options for virtual networks (VNets)/network security groups/network interfaces:<br/><br/> - Create a new resource with the same name in the target region.<br/><br/> - Create a new resource with a different name in the target region.<br/><br/> - Use an existing networking resource in the target region.<br/><br/> If you create a new target resource, with the exception of the settings you modify, it's assigned the same settings as the source resource.
-**Public IP address name** | Specify the name.
-**Public IP address SKU** | Specify the [SKU](../virtual-network/public-ip-addresses.md#sku).
-**Public IP address zone** | Specify the [zone](../virtual-network/public-ip-addresses.md#standard) for standard public IP addresses.<br/><br/> If you want it to be zone redundant, enter as **Zone redundant**.
-**Load balancer name** | Specify the name.
-**Load balancer SKU** | Basic or Standard. We recommend using Standard.
-**Load balancer zone** | Specify a zone for the load balancer. <br/><br/> If you want it to be zone redundant, enter as **Zone redundant**.
+**VM availability set** | The availability set in which the target VM will be placed. Select **Not applicable** if you don't want to change the source settings, or if you don't want to place the VM in an availability set.
+**VM key vault** | The associated key vault when you enable Azure disk encryption on a VM.
+**Disk encryption set** | The associated disk encryption set if the VM uses a customer-managed key for server-side encryption.
+**Resource group** | The resource group in which the target VM will be placed.
+**Networking resources** | Options for network interfaces, virtual networks (VNets), and network security groups (NSGs):<br/><br/> - Create a new resource with the same name in the target region.<br/><br/> - Create a new resource with a different name in the target region.<br/><br/> - Use an existing networking resource in the target region.<br/><br/> If you create a new target resource, with the exception of the settings you modify, it's assigned the same settings as the source resource.
+**Public IP address name, SKU, and zone** | Specifies the name, [SKU](../virtual-network/public-ip-addresses.md#sku), and [zone](../virtual-network/public-ip-addresses.md#standard) for standard public IP addresses.<br/><br/> If you want the address to be zone redundant, specify **Zone redundant**.
+**Load balancer name, SKU, and zone** | Specifies the name, SKU (Basic or Standard), and zone for the load balancer.<br/><br/> We recommend using the Standard SKU.<br/><br/> If you want it to be zone redundant, specify **Zone redundant**.
**Resource dependencies** | Options for each dependency:<br/><br/>- The resource uses source dependent resources that will move to the target region.<br/><br/> - The resource uses different dependent resources located in the target region. In this case, you can choose from any similar resources in the target region. ### Edit VM target settings
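Before selecting a target VM SKU in these settings, it can help to confirm which VM sizes the destination region actually offers. A minimal sketch, assuming the Az PowerShell module; the region name and size thresholds are placeholders:

```powershell
# List VM sizes available in the destination region, filtered to sizes at least
# as large as a hypothetical 4-core / 16-GB source VM.
Get-AzVMSize -Location "eastus2" |
    Where-Object { $_.NumberOfCores -ge 4 -and $_.MemoryInMB -ge 16384 } |
    Sort-Object Name |
    Format-Table Name, NumberOfCores, MemoryInMB
```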
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/overview.md
@@ -26,8 +26,6 @@ You might move resources to different Azure regions to:
- **Respond to deployment requirements**: Move resources that were deployed in error, or move in response to capacity needs. - **Respond to decommissioning**: Move resources because a region is decommissioned.
-> [!IMPORTANT]
-> Azure Resource Mover is currently in public preview.
## Why use Resource Mover?
@@ -59,6 +57,7 @@ You can move resources across regions in the Resource Mover hub, or from within
Using Resource Mover, you can currently move the following resources across regions: - Azure VMs and associated disks
+- Encrypted Azure VMs and associated disks. This includes VMs with Azure disk encryption enabled, and Azure VMs using default server-side encryption (both with platform-managed keys and customer-managed keys)
- NICs - Availability sets - Azure virtual networks
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/support-matrix-move-region-azure-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/support-matrix-move-region-azure-vm.md
@@ -5,7 +5,7 @@
Previously updated : 10/11/2020 Last updated : 02/08/2021
@@ -112,7 +112,7 @@ Extensions | Not supported | Extensions aren't copied over to the VM in target
This table summarizes support for the Azure VM OS disk, data disk, and temporary disk. It's important to observe the VM disk limits and targets for [managed disks](../virtual-machines/disks-scalability-targets.md) to avoid any performance issues. > [!NOTE]
-> The target VM size should be equal to or larger than the source VM. The parameters used for validation are: Data Disks Count, NICs count, Available CPUs, Memory in GB. If it isn't a error is issued.
+> The target VM size should be equal to or larger than the source VM. The parameters used for validation are: Data Disks Count, NICs count, Available CPUs, Memory in GB. If it isn't, an error is issued.
**Component** | **Support** | **Details**
@@ -130,6 +130,8 @@ Managed disk (Premium) | Supported |
Standard SSD | Supported | Generation 2 (UEFI boot) | Supported Boot diagnostics storage account | Not supported | Reenable it after moving the VM to the target region.
+VMs with Azure disk encryption enabled | Supported | [Learn more](tutorial-move-region-encrypted-virtual-machines.md)
+VMs using server-side encryption with customer-managed key | Supported | [Learn more](tutorial-move-region-encrypted-virtual-machines.md)
### Limits and data change rates
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/tutorial-move-region-encrypted-virtual-machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
@@ -0,0 +1,379 @@
+
+ Title: Move encrypted Azure VMs across regions with Azure Resource Mover
+description: Learn how to move encrypted Azure VMs to another region with Azure Resource Mover
++++ Last updated : 02/04/2021++
+#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
+++
+# Tutorial: Move encrypted Azure VMs across regions
+
+In this article, learn how to move encrypted Azure VMs to a different Azure region using [Azure Resource Mover](overview.md). Here's what we mean by encryption:
+
+- VMs that have disks with Azure disk encryption enabled. [Learn more](../virtual-machines/windows/disk-encryption-portal-quickstart.md)
+- Or, VMs that use customer-managed keys (CMKs) for encryption-at-rest (server-side encryption). [Learn more](../virtual-machines/disks-enable-customer-managed-keys-portal.md)
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Check prerequisites.
+> * For VMs with Azure disk encryption enabled, copy keys and secrets from the source region key vault to the destination region key vault.
+> * Prepare VMs to move them, and select resources in the source region that you want to move.
+> * Resolve resource dependencies.
+> * For VMs with Azure disk encryption enabled, manually assign the destination key vault. For VMs using server-side encryption with customer-managed keys, manually assign a disk encryption set in the destination region.
+> * Move the key vault and/or disk encryption set.
+> * Prepare and move the source resource group.
+> * Prepare and move the other resources.
+> * Decide whether you want to discard or commit the move.
+> * Optionally remove resources in the source region after the move.
+
+> [!NOTE]
+> Tutorials show the quickest path for trying out a scenario, and use default options.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin. Then sign in to the [Azure portal](https://portal.azure.com).
+
+## Prerequisites
+
+**Requirement** |**Details**
+ |
+**Subscription permissions** | Check you have *Owner* access on the subscription containing the resources that you want to move.<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identity (MSI)) that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor and User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles.
+**VM support** | Check that the VMs you want to move are supported.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<br/><br/> - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.
+**Key vault requirements (Azure disk encryption)** | If you have Azure disk encryption enabled for VMs, in addition to the key vault in the source region, you need a key vault in the destination region. [Create a key vault](../key-vault/general/quick-create-portal.md).<br/><br/> For the key vaults in the source and target region, you need these permissions:<br/><br/> - Key permissions: Key Management Operations (Get, List); Cryptographic Operations (Decrypt and Encrypt).<br/><br/> - Secret permissions: Secret Management Operations (Get, List and Set)<br/><br/> - Certificate (List and Get).
+**Disk encryption set (server-side encryption with CMK)** | If you're using VMs with server-side encryption using a CMK, in addition to the disk encryption set in the source region, you need a disk encryption set in the destination region. [Create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set).<br/><br/> Moving between regions isn't supported if you're using HSM keys for customer-managed keys.
+**Target region quota** | The subscription needs enough quota to create the resources you're moving in the target region. If it doesn't have quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
+**Target region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.
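If you still need the destination-region key vault, you can create it from the portal link above, or with a quick PowerShell sketch; the vault, resource group, and region names here are placeholders:

```powershell
# Create a key vault in the destination region and enable it for Azure Disk Encryption.
New-AzKeyVault -Name "contoso-target-kv" `
    -ResourceGroupName "ContosoMoveRG" `
    -Location "eastus2" `
    -EnabledForDiskEncryption
```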
++
+## Verify key vault permissions (Azure Disk Encryption)
+
+If you're moving VMs that have Azure disk encryption enabled, verify or set permissions in the key vaults in both the source and destination regions, to ensure that moving the encrypted VMs works as expected.
+
+1. In the Azure portal, open the key vault in the source region.
+2. Under **Settings**, select **Access policies**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/key-vault-access-policies.png" alt-text="Button to open key vault access policies." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/key-vault-access-policies.png":::
+
+3. If there are no user permissions, select **Add Access Policy**, and specify the permissions. If the user account already has a policy, under **User**, set the permissions.
+
+   - If VMs you want to move are enabled with Azure disk encryption (ADE), in **Key Permissions** > **Key Management Operations**, select **Get** and **List** if they're not selected.
+   - If you're using customer-managed keys (CMKs) to encrypt disk encryption keys used for encryption-at-rest (server-side encryption), in **Key Permissions** > **Key Management Operations**, select **Get** and **List**. Additionally, in **Cryptographic Operations**, select **Decrypt** and **Encrypt**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/set-vault-permissions.png" alt-text="Dropdown list to select key vault permissions." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/set-vault-permissions.png":::
+
+4. In **Secret permissions** > **Secret Management Operations**, select **Get**, **List**, and **Set**.
+5. If you're assigning permissions to a new user account, in **Select principal**, select the user to whom you're assigning permissions.
+6. In **Access policies**, make sure that **Azure Disk Encryption for volume encryption** is enabled.
+7. Repeat the procedure for the key vault in the destination region.
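If you prefer scripting the portal steps above, a hedged PowerShell equivalent follows. The vault names and user principal name are placeholders, and the permission lists mirror the prerequisites table:

```powershell
# Grant the key/secret/certificate permissions on both vaults, and enable each
# vault for Azure Disk Encryption volume encryption.
foreach ($vault in "source-region-vault", "destination-region-vault") {
    Set-AzKeyVaultAccessPolicy -VaultName $vault `
        -UserPrincipalName "admin@contoso.com" `
        -PermissionsToKeys get,list,decrypt,encrypt `
        -PermissionsToSecrets get,list,set `
        -PermissionsToCertificates get,list

    # Allow Azure Disk Encryption to retrieve secrets from the vault.
    Set-AzKeyVaultAccessPolicy -VaultName $vault -EnabledForDiskEncryption
}
```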
++
+### Copy the keys to the destination key vault
+
+You need to copy the encryption secrets and keys from the source key vault to the destination key vault, using a script we provide.
+
+- You run the script in PowerShell. We recommend running the latest PowerShell version.
+- Specifically, the script requires these modules:
+ - Az.Compute
+  - Az.KeyVault (version 3.0.0)
+ - Az.Accounts (version 2.2.3)
+
+Run as follows:
+
+1. Navigate to the [script](https://raw.githubusercontent.com/AsrOneSdk/published-scripts/master/CopyKeys/CopyKeys.ps1) in GitHub.
+2. Copy the contents of the script to a local file, and name it *CopyKeys.ps1*.
+3. Run the script.
+4. Sign into Azure.
+5. In the **User Input** pop-up, select the source subscription, resource group, and source VM. Then select the target location, and the target vaults for disk and key encryption.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/script-input.png" alt-text="Pop up to input script values." :::
++
+6. When the script completes, screen output indicates that CopyKeys succeeded.
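Put together, the download-and-run flow might look like this sketch; the module versions mirror the list above, and `-Scope CurrentUser` is an assumption to avoid requiring elevation:

```powershell
# Install the module versions the script expects.
Install-Module -Name Az.Accounts -RequiredVersion 2.2.3 -Scope CurrentUser
Install-Module -Name Az.KeyVault -RequiredVersion 3.0.0 -Scope CurrentUser
Install-Module -Name Az.Compute -Scope CurrentUser

# Download the script, then run it; it prompts for sign-in and the
# User Input selections described in the steps above.
Invoke-WebRequest `
    -Uri "https://raw.githubusercontent.com/AsrOneSdk/published-scripts/master/CopyKeys/CopyKeys.ps1" `
    -OutFile ".\CopyKeys.ps1"

.\CopyKeys.ps1
```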
+
+## Prepare VMs
+
+1. After [checking that VMs meet requirements](#prerequisites), make sure that VMs you want to move are turned on (a quick power-state check sketch follows this list). All VM disks that you want to be available in the destination region must be attached and initialized in the VM.
+2. Check that VMs have the latest trusted root certificates, and an updated certificate revocation list (CRL). To do this:
+   - On Windows VMs, install the latest Windows updates.
+   - On Linux VMs, follow distributor guidance so that machines have the latest certificates and CRL.
+3. Allow outbound connectivity from VMs as follows:
+   - If you're using a URL-based firewall proxy to control outbound connectivity, allow access to these [URLs](support-matrix-move-region-azure-vm.md#url-access).
+   - If you're using network security group (NSG) rules to control outbound connectivity, create these [service tag rules](support-matrix-move-region-azure-vm.md#nsg-rules).
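The power-state check referenced in step 1 might look like this sketch; the resource group name is a placeholder, and "rayne-vm" is the example VM used later in this tutorial:

```powershell
# Confirm the VM is running before adding it to the move collection.
$view = Get-AzVM -ResourceGroupName "ContosoRG" -Name "rayne-vm" -Status
$view.Statuses | Where-Object { $_.Code -like "PowerState/*" }
# Expect "PowerState/running"; if the VM is deallocated, start it with Start-AzVM.
```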
+
+## Select resources to move
++
+- You can select any supported resource type in any of the resource groups in the source region you select.
+- You move resources to a target region that's in the same subscription as the source region. If you want to change the subscription, you can do that after the resources are moved.
+
+Select resources as follows:
+
+1. In the Azure portal, search for *resource mover*. Then, under **Services**, select **Azure Resource Mover**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/search.png" alt-text="Search results for resource mover in the Azure portal." :::
+
+2. In **Overview**, click **Move across regions**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/move-across-regions.png" alt-text="Button to add resources to move to another region." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/move-across-regions.png":::
+
+3. In **Move resources** > **Source + destination**, select the source subscription and region.
+4. In **Destination**, select the region to which you want to move the VMs. Then click **Next**.
+
    :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/source-target.png" alt-text="Page to select source and destination region." :::
+
+5. In **Resources to move**, click **Select resources**.
+
    :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-resources.png" alt-text="Button to select resources to move." :::
+
+6. In **Select resources**, select the VMs. You can only add resources that are [supported for move](#prepare-vms). Then click **Done**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-vm.png" alt-text="Page to select VMs to move." :::
+
+ > [!NOTE]
+ > In this tutorial we're selecting a VM that uses server-side encryption (rayne-vm) with a customer-managed key, and a VM with disk encryption enabled (rayne-vm-ade).
+
+7. In **Resources to move**, click **Next**.
+8. In **Review**, check the source and destination settings.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/review.png" alt-text="Page to review settings and proceed with move." :::
+
+9. Click **Proceed**, to begin adding the resources.
+10. Select the notifications icon to track progress. After the add process finishes successfully, select **Added resources for move** in the notifications.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/added-resources-notification.png" alt-text="Notification to confirm resources were added successfully." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/added-resources-notification.png":::
+
+
+11. After clicking the notification, review the resources on the **Across regions** page.
+
    :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resources-prepare-pending.png" alt-text="Page showing added resources with prepare pending." :::
+
+> [!NOTE]
+> - Resources you add are placed into a *Prepare pending* state.
+> - The resource group for the VMs is added automatically.
+> - If you modify the **Destination configuration** entries to use a resource that already exists in the destination region, the resource state is set to *Commit pending*, since you don't need to initiate a move for it.
+> - If you want to remove a resource that's been added, the method for doing that depends on where you are in the move process. [Learn more](remove-move-resources.md).
++
+## Resolve dependencies
+
+1. If any resources show a *Validate dependencies* message in the **Issues** column, select the **Validate dependencies** button.
+
    :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/check-dependencies.png" alt-text="Button to check dependencies." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/check-dependencies.png":::
+
+ The validation process begins.
+2. If dependencies are found, click **Add dependencies**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/add-dependencies.png" alt-text="Button to add dependencies." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/add-dependencies.png":::
++
+3. In **Add dependencies**, leave the default **Show all dependencies** option.
+
+ - **Show all dependencies** iterates through all of the direct and indirect dependencies for a resource. For example, for a VM it shows the NIC, virtual network, network security groups (NSGs) etc.
+ - **Show first level dependencies only** shows only direct dependencies. For example, for a VM it shows the NIC, but not the virtual network.
+
+4. Select the dependent resources you want to add > **Add dependencies**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-dependencies.png" alt-text="Select dependencies from dependencies list." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/select-dependencies.png":::
+
+5. Validate dependencies again.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/validate-again.png" alt-text="Page to validate again." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/validate-again.png":::
+
+## Assign destination resources
+
+Destination resources associated with encryption need manual assignment.
+
+- If you're moving a VM that has Azure disk encryption (ADE) enabled, the key vault in your destination region will appear as a dependency.
+- If you're moving a VM that has server-side encryption using customer-managed keys (CMKs), the disk encryption set in the destination region appears as a dependency.
+- Since this tutorial is moving a VM with ADE enabled, and a VM using a CMK, both the destination key vault and disk encryption set show up as dependencies.
+
+Assign manually as follows:
+
+1. In the disk encryption set entry, select **Resource not assigned** in the **Destination configuration** column.
+2. In **Configuration settings**, select the destination disk encryption set. Then select **Save changes**.
+3. You can choose to save and validate dependencies for the resource you're modifying, or you can just save the changes and validate everything you modify in one go.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-destination-set.png" alt-text="Page to select disk encryption set in destination region." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/select-destination-set.png":::
+
+ After adding the destination resource, the status of the disk encryption set turns to *Commit move pending*.
+4. In the key vault entry, select **Resource not assigned** in the **Destination configuration** column. In **Configuration settings**, select the destination key vault. Save the changes.
+
+At this stage, the status of both the disk encryption set and the key vault turns to *Commit move pending*.
++
+To commit and finish the move process for encryption resources:
+
+1. In **Across regions**, select the resource (disk encryption set or key vault) > **Commit move**.
+2. In **Move Resources**, click **Commit**.
+
+> [!NOTE]
+> After committing the move, the resource is in a *Delete source pending* state.
++
+## Move the source resource group
+
+Before you can prepare and move VMs, the VM resource group must be present in the target region.
+
+### Prepare to move the source resource group
+
+During the Prepare process, Resource Mover generates Azure Resource Manager (ARM) templates using the resource group settings. Resources inside the resource group aren't affected.
+
+Prepare as follows:
+
+1. In **Across regions**, select the source resource group > **Prepare**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/prepare-resource-group.png" alt-text="Prepare resource group." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/prepare-resource-group.png":::
+
+2. In **Prepare resources**, click **Prepare**.
+
+> [!NOTE]
+> After preparing the resource group, it's in the *Initiate move pending* state.
+
+
+### Move the source resource group
+
+Initiate the move as follows:
+
+1. In **Across regions**, select the resource group > **Initiate Move**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/initiate-move-resource-group.png" alt-text="Button to initiate move." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/initiate-move-resource-group.png":::
+
+2. In **Move Resources**, click **Initiate move**. The resource group moves into an *Initiate move in progress* state.
+3. After initiating the move, the target resource group is created, based on the generated ARM template. The source resource group moves into a *Commit move pending* state.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resource-group-commit-move-pending.png" alt-text="Review the commit move pending state." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/resource-group-commit-move-pending.png":::
+
+To commit and finish the move process:
+
+1. In **Across regions**, select the resource group > **Commit move**.
+2. In **Move Resources**, click **Commit**.
+
+> [!NOTE]
+> After committing the move, the source resource group is in a *Delete source pending* state.
++
+## Prepare resources to move
+
+Now that the encryption resources and the source resource group are moved, you can prepare to move other resources that are in the *Prepare pending* state.
++
+1. In **Across regions**, validate again and resolve any issues.
+2. If you want to edit target settings before beginning the move, select the link in the **Destination configuration** column for the resource, and edit the settings. If you edit the target VM settings, the target VM size shouldn't be smaller than the source VM size.
+3. Select **Prepare** for resources in the *Prepare pending* state that you want to move.
+4. In **Prepare resources**, select **Prepare**.
+
+ - During the prepare process, the Azure Site Recovery Mobility agent is installed on VMs, to replicate them.
+ - VM data is replicated periodically to the target region. This doesn't affect the source VM.
+   - Resource Mover generates ARM templates for the other source resources.
+
+After preparing resources, they're in an *Initiate move pending* state.
++++
+## Initiate the move
+
+With resources prepared, you can now initiate the move.
+
+1. In **Across regions**, select resources with state *Initiate move pending*. Then click **Initiate move**.
+2. In **Move resources**, click **Initiate move**.
+3. Track move progress in the notifications bar.
+
+ - For VMs, replica VMs are created in the target region. The source VM is shut down, and some downtime occurs (usually minutes).
+ - Resource Mover recreates other resources using the ARM templates that were prepared. There's usually no downtime.
+   - After moving resources, they're in a *Commit move pending* state.
+++
+## Discard or commit?
+
+After the initial move, you can decide whether you want to commit the move, or to discard it.
+
+- **Discard**: You might discard a move if you're testing, and you don't want to actually move the source resource. Discarding the move returns the resource to a state of *Initiate move pending*.
+- **Commit**: Commit completes the move to the target region. After committing, a source resource will be in a state of *Delete source pending*, and you can decide if you want to delete it.
++
+## Discard the move
+
+You can discard the move as follows:
+
+1. In **Across regions**, select resources with state *Commit move pending*, and click **Discard move**.
+2. In **Discard move**, click **Discard**.
+3. Track move progress in the notifications bar.
++
+> [!NOTE]
+> After discarding resources, VMs are in an *Initiate move pending* state.
+
+## Commit the move
+
+If you want to complete the move process, commit the move.
+
+1. In **Across regions**, select resources with state *Commit move pending*, and click **Commit move**.
+2. In **Commit resources**, click **Commit**.
+
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resources-commit-move.png" alt-text="Page to commit resources to finalize move." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/resources-commit-move.png" :::
+
+3. Track the commit progress in the notifications bar.
+
+> [!NOTE]
+> - After committing the move, VMs stop replicating. The source VM isn't impacted by the commit.
+> - Commit doesn't impact source networking resources.
+> - After committing the move, resources are in a *Delete source pending* state.
+++
+## Configure settings after the move
+
+- The Mobility service isn't uninstalled automatically from VMs. Uninstall it manually, or leave it if you plan to move the server again.
+- Modify Azure role-based access control (Azure RBAC) rules after the move.
+
+## Delete source resources after commit
+
+After the move, you can optionally delete resources in the source region.
+
+1. In **Across Regions**, select each source resource that you want to delete, and then select **Delete source**.
+2. In **Delete source**, review what you're intending to delete, and in **Confirm delete**, type **yes**. The action is irreversible, so check carefully!
+3. After typing **yes**, select **Delete source**.
+
+> [!NOTE]
+> In the Resource Mover portal, you can't delete resource groups, key vaults, or SQL Server servers. You need to delete these individually from the properties page for each resource.
++
+## Delete additional resources created for move
+
+After the move, you can manually delete the move collection, and Site Recovery resources that were created.
+
+- The move collection is hidden by default. To see it, turn on the option to show hidden resources.
+- The cache storage account has a lock that must be removed before the account can be deleted.
+
+Delete as follows:
+1. Locate the resources in resource group ```RegionMoveRG-<sourceregion>-<target-region>```.
+2. Check that all the VM and other source resources in the source region have been moved or deleted. This ensures that there are no pending resources using them.
+3. Delete the resources:
+
+ - The move collection name is ```movecollection-<sourceregion>-<target-region>```.
+  - The cache storage account name is ```resmovecache<guid>```.
+ - The vault name is ```ResourceMove-<sourceregion>-<target-region>-GUID```.
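Putting the cleanup together, a hedged sketch; the region names in the resource group are placeholders, and it assumes you've already confirmed nothing still depends on these resources:

```powershell
# Remove the lock on the cache storage account, then delete the helper resource
# group, which removes the (hidden) move collection, cache account, and vault.
$rg = "RegionMoveRG-eastus-westus2"

foreach ($lock in Get-AzResourceLock -ResourceGroupName $rg) {
    Remove-AzResourceLock -LockId $lock.LockId -Force
}

Remove-AzResourceGroup -Name $rg -Force
```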
+## Next steps
+
+In this tutorial, you:
+
+> [!div class="checklist"]
+> * Moved encrypted Azure VMs and their dependent resources to another Azure region.
++
+Now, try moving Azure SQL databases and elastic pools to another region.
+
+> [!div class="nextstepaction"]
+> [Move Azure SQL resources](./tutorial-move-region-sql.md)
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/tutorial-move-region-sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/tutorial-move-region-sql.md
@@ -5,7 +5,7 @@
Previously updated : 09/09/2020 Last updated : 02/04/2021 #Customer intent: As an Azure admin, I want to move SQL Server databases to a different Azure region.
@@ -268,8 +268,11 @@ Finishing moving databases and elastic pools as follows:
After the move, you can optionally delete resources in the source region.
-1. In **Across Regions**, click the name of each source resource that you want to delete.
-2. In the properties page for each resource, select **Delete**.
+> [!NOTE]
+> SQL Server servers can't be deleted from the portal, and must be deleted from the resource property page.
+
+1. In **Across Regions**, click the name of the source resource that you want to delete.
+2. Select **Delete source**.
## Next steps
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/tutorial-move-region-virtual-machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/tutorial-move-region-virtual-machines.md
@@ -5,7 +5,7 @@
Previously updated : 09/09/2020 Last updated : 02/04/2021 #Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
@@ -14,9 +14,7 @@
# Tutorial: Move Azure VMs across regions In this article, learn how to move Azure VMs, and related network/storage resources, to a different Azure region, using [Azure Resource Mover](overview.md).-
-> [!NOTE]
-> Azure Resource Mover is currently in public preview.
+
In this tutorial, you learn how to:
@@ -36,26 +34,21 @@ In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin. Then sign in to the [Azure portal](https://portal.azure.com). ## Prerequisites--- Check you have *Owner* access on the subscription containing the resources that you want to move.
- - The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identify (MSI)) that's trusted by the subscription.
- - To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles.
-- The subscription needs enough quota to create the resources you're moving in the target region. If it doesn't have quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).-- Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.
+**Requirement** | **Description**
+ |
+**Subscription permissions** | Check you have *Owner* access on the subscription containing the resources that you want to move.<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identity (MSI)) that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles.
+**VM support** | Check that the VMs you want to move are supported.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<br/><br/> - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.
+**Destination subscription** | The subscription in the destination region needs enough quota to create the resources you're moving in the target region. If it doesn't have quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
+**Destination region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.
-## Check VM requirements
+## Prepare VMs
-1. Check that the VMs you want to move are supported.
-
- - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.
- - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.
- - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.
-2. Check that VMs you want to move are turned on.
-3. Make sure VMs have the latest trusted root certificates, and an updated certificate revocation list (CRL). To do this:
+1. After checking that VMs meet requirements, make sure that VMs you want to move are turned on. All VM disks that you want to be available in the destination region must be attached and initialized in the VM.
+1. Make sure VMs have the latest trusted root certificates, and an updated certificate revocation list (CRL). To do this:
- On Windows VMs, install the latest Windows updates. - On Linux VMs, follow distributor guidance so that machines have the latest certificates and CRL.
-4. Allow outbound connectivity from VMs:
+1. Allow outbound connectivity from VMs:
- If you're using a URL-based firewall proxy to control outbound connectivity, allow access to these [URLs](support-matrix-move-region-azure-vm.md#url-access) - If you're using network security group (NSG) rules to control outbound connectivity, create these [service tag rules](support-matrix-move-region-azure-vm.md#nsg-rules).
@@ -81,12 +74,12 @@ Select resources you want to move.
![Page to select source and destination region](./media/tutorial-move-region-virtual-machines/source-target.png) 6. In **Resources to move**, click **Select resources**.
-7. In **Select resources**, select the VM. You can only add [resources supported for move](#check-vm-requirements). Then click **Done**.
+7. In **Select resources**, select the VM. You can only add [resources supported for move](#prepare-vms). Then click **Done**.
![Page to select VMs to move](./media/tutorial-move-region-virtual-machines/select-vm.png) 8. In **Resources to move**, click **Next**.
-9. In **Review + Add**, check the source and destination settings.
+9. In **Review**, check the source and destination settings.
![Page to review settings and proceed with move](./media/tutorial-move-region-virtual-machines/review.png) 10. Click **Proceed**, to begin adding the resources.
@@ -95,25 +88,27 @@ Select resources you want to move.
> [!NOTE] > - Added resources are in a *Prepare pending* state.
+> - The resource group for the VMs is added automatically.
> - If you want to remove a resource from a move collection, the method for doing that depends on where you are in the move process. [Learn more](remove-move-resources.md). ## Resolve dependencies 1. If resources show a *Validate dependencies* message in the **Issues** column, click the **Validate dependencies** button. The validation process begins. 2. If dependencies are found, click **Add dependencies**.
-3. In **Add dependencies**, select the dependent resources > **Add dependencies**. Monitor progress in the notifications.
+3. In **Add dependencies**, leave the default **Show all dependencies** option.
+
+ - Show all dependencies iterates through all of the direct and indirect dependencies for a resource. For example, for a VM it shows the NIC, virtual network, network security groups (NSGs) etc.
+ - Show first level dependencies only shows only direct dependencies. For example, for a VM it shows the NIC, but not the virtual network.
++
+4. Select the dependent resources you want to add > **Add dependencies**. Monitor progress in the notifications.
![Add dependencies](./media/tutorial-move-region-virtual-machines/add-dependencies.png)
-4. Add additional dependencies if needed, and validate dependencies again.
+4. Validate dependencies again.
![Page to add additional dependencies](./media/tutorial-move-region-virtual-machines/add-additional-dependencies.png)
-4. On the **Across regions** page, verify that resources are now in a *Prepare pending* state, with no issues.
-
- ![Page showing resources in prepare pending state](./media/tutorial-move-region-virtual-machines/prepare-pending.png)
-> [!NOTE]
-> If you want to edit target settings before beginning the move, select the link in the **Destination configuration** column for the resource, and edit the settings. If you edit the target VM settings, the target VM size shouldn't be smaller than the source VM size.
## Move the source resource group
@@ -154,9 +149,17 @@ To commit and finish the move process:
## Prepare resources to move
+Now that the source resource group is moved, you can prepare to move other resources that are in the *Prepare pending* state.
+
+1. In **Across regions**, verify that resources are now in a *Prepare pending* state, with no issues. If they're not, validate again and resolve any outstanding issues.
+
+ ![Page showing resources in prepare pending state](./media/tutorial-move-region-virtual-machines/prepare-pending.png)
+
+2. If you want to edit target settings before beginning the move, select the link in the **Destination configuration** column for the resource, and edit the settings. If you edit the target VM settings, the target VM size shouldn't be smaller than the source VM size.
-1. In **Across regions**, select the resources you want to prepare.
+3. Select the resources you want to prepare.
![Page to select prepare for other resources](./media/tutorial-move-region-virtual-machines/prepare-other.png)
@@ -234,12 +237,16 @@ If you want to complete the move process, commit the move.
- The Mobility service isn't uninstalled automatically from VMs. Uninstall it manually, or leave it if you plan to move the server again. - Modify Azure role-based access control (Azure RBAC) rules after the move. + ## Delete source resources after commit After the move, you can optionally delete resources in the source region.
-1. In **Across Regions**, click the name of each source resource that you want to delete.
-2. In the properties page for each resource, select **Delete**.
+> [!NOTE]
+> A few resources, for example key vaults and SQL Server servers, can't be deleted from the portal, and must be deleted from the resource property page.
+
+1. In **Across Regions**, click the name of the source resource that you want to delete.
+2. Select **Delete source**.
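If you script the process end to end, the commit that precedes this cleanup can also be performed with Az.ResourceMover. A hedged sketch, again using the placeholder names from the earlier examples:

```powershell
# Hedged sketch: commit the move for a resource before cleaning up its source.
Invoke-AzResourceMoverCommit `
    -ResourceGroupName $rg `
    -MoveCollectionName $col `
    -MoveResource @("PSDemoVM")
```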
## Delete additional resources created for move
@@ -267,4 +274,4 @@ In this tutorial, you:
Now, try moving Azure SQL databases and elastic pools to another region. > [!div class="nextstepaction"]
-> [Move Azure SQL resources](./tutorial-move-region-sql.md)
+> [Move Azure SQL resources](./tutorial-move-region-sql.md)
search https://docs.microsoft.com/en-us/azure/search/resource-partners-knowledge-mining https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/resource-partners-knowledge-mining.md
@@ -7,7 +7,7 @@
Previously updated : 12/17/2020 Last updated : 02/04/2021 # Partner solutions
@@ -16,6 +16,5 @@ Get expert help from Microsoft partners who build end-to-end solutions that incl
| Partner | Description | Website/Product link | ||-|-|
-| ![OrangeNXT](media/resource-partners/orangenxt-beldmerk-boven-160px.png "Company logo") | [**OrangeNXT**](https://orangenxt.com/) offers expertise in data consolidation, data modeling, and building skillsets that include custom logic developed for specific use-cases.<br/><br/>[**digitalNXT Search**](https://orangenxt.com/solutions/digitalnxt/digitalnxt-search/) is an OrangeNXT solution that combines AI, optical character recognition (OCR), and natural language processing in Azure Cognitive Search pipeline to help you extract search results from multiple structured and unstructured data sources. Integral to **digitalNXT Search** is advanced custom cognitive skills for interpreting and correlating selected data.<br/><br/>| [Product page](https://orangenxt.com/solutions/digitalnxt/digitalnxt-search/)|
-
-<!-- Review [**digitalNXT** case studies](https://orangenxt.com/solutions/digitalnxt/digitalnxt-search/problems-causes-solutions/) for a closer look at specific solutions. -->
+| ![Neal Analytics](media/resource-partners/neal-analytics-logo.png "Neal Analytics company logo") | [**Neal Analytics**](https://nealanalytics.com/) offers over 10 years of cloud, data, and AI expertise on Azure, with recognized in-depth expertise across Azure AI and ML services. Neal can help customers customize and implement Cognitive Search across a wide variety of use cases. Neal Analytics' expertise ranges from enterprise-level search, form, and process automation to domain mapping for data extraction and analytics, plagiarism detection, and more. | [Product page](https://go.nealanalytics.com/cognitive-search)|
+| ![OrangeNXT](media/resource-partners/orangenxt-beldmerk-boven-160px.png "OrangeNXT company logo") | [**OrangeNXT**](https://orangenxt.com/) offers expertise in data consolidation, data modeling, and building skillsets that include custom logic developed for specific use cases.<br/><br/>digitalNXT Search is an OrangeNXT solution that combines AI, optical character recognition (OCR), and natural language processing in an Azure Cognitive Search pipeline to help you extract search results from multiple structured and unstructured data sources. Integral to digitalNXT Search are advanced custom cognitive skills for interpreting and correlating selected data.<br/><br/>| [Product page](https://orangenxt.com/solutions/digitalnxt/digitalnxt-search/)|
search https://docs.microsoft.com/en-us/azure/search/search-api-versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-api-versions.md
@@ -83,7 +83,7 @@ The following table provides links to more recent SDK versions.
| SDK version | Status | Description | |-|--||
-| [Azure.Search.Documents 11](/dotnet/api/overview/azure/search.documents-readme) | Stable | New client library from Azure .NET SDK, released July 2020. Targets the Search REST api-version=2020-06-30 REST API but does not yet support, geo-filters. |
+| [Azure.Search.Documents 11](/dotnet/api/overview/azure/search.documents-readme) | Stable | New client library from the Azure .NET SDK, released July 2020. Targets the Search REST API (api-version=2020-06-30) but does not yet provide native support for geo-filters. We recommend the [Microsoft.Spatial](https://www.nuget.org/packages/Microsoft.Spatial/) package for geographic operations. Examples are available for [System.Text.Json](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Microsoft.Azure.Core.Spatial/README.md) and [Newtonsoft.Json](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Microsoft.Azure.Core.Spatial.NewtonsoftJson/README.md). |
| [Microsoft.Azure.Search 10](https://www.nuget.org/packages/Microsoft.Azure.Search/) | Stable | Released May 2019. Targets the Search REST api-version=2019-05-06.| | [Microsoft.Azure.Management.Search 4.0.0](/dotnet/api/overview/azure/search/management) | Stable | Targets the Management REST api-version=2020-08-01. | | Microsoft.Azure.Management.Search 3.0.0 | Stable | Targets the Management REST api-version=2015-08-19. |
search https://docs.microsoft.com/en-us/azure/search/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/whats-new.md
@@ -7,119 +7,70 @@
Previously updated : 11/12/2020 Last updated : 02/08/2021 # What's new in Azure Cognitive Search Learn what's new in the service. Bookmark this page to stay up to date. Check out the [Preview feature list](search-api-preview.md) to view features in public preview.
-## November 2020
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
-||-|-||
-|[Customer-managed key encryption over indexers, data sources, and skillsets](search-security-manage-encryption-keys.md) | Security | This addition extends customer-managed encryption over the full range of assets created and managed by a search service. Recall that customer-managed key support adds an additional encryption layer on top of base encryption performed and managed by Microsoft. | Generally available using REST api-version=2020-06-30 |
-
-## September 2020
-
-Create an identity for a search service in Azure Active Directory, then use Azure RBAC permissions to grant the identity read-only permissions to Azure data sources. Optionally, choose the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) capability if IP rules are not an option.
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
-||-|-||
-| [Managed service identity](search-howto-managed-identities-data-sources.md) | Indexers, security | Create an identity for a search service in Azure Active Directory, then use Azure RBAC permissions to grant access to Azure data sources. This approach eliminates the need for credentials on the connection string. <br><br>An additional way to use a managed service identity is through a [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) if IP rules are not an option. | Generally available. Access this capability when using the portal or [Create Data Source (REST)](/rest/api/searchservice/create-data-source) with api-version=2020-06-30. |
-| [Outbound requests using a private link](search-indexer-howto-access-private.md) | Indexers, security | Create a shared private link resource that indexers can use when accessing Azure resources secured by Azure Private Link. For more information about all of the ways you can secure indexer connections, see [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md). | Generally available. Access this capability when using the portal or [Shared Private Link Resource](/rest/api/searchmanagement/sharedprivatelinkresources) with api-version=2020-08-01. |
-| [Management REST API (2020-08-01)](/rest/api/searchmanagement/management-api-versions) | REST | New stable REST API adds support for creating shared private link resources. | Generally available. |
-| [Management REST API (2020-08-01-Preview)](/rest/api/searchmanagement/management-api-versions) | REST | Adds shared private link resource for Azure Functions and Azure SQL for MySQL Databases. | Public preview. |
-| [Management .NET SDK 4.0](/dotnet/api/overview/azure/search/management) | .NET SDK | Azure SDK update for the management SDK, targeted REST API version 2020-08-01. | Generally available. |
-
-## August 2020
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
-|||-||
-| [double encryption](search-security-overview.md#encryption) | Security | Enable double encryption at the storage layer by configuring customer-managed key encryption on new search services. Create a new service, [configure and apply customer-managed keys](search-security-manage-encryption-keys.md) to indexes or synonym maps, and benefit from double encryption over that content. | Generally available on all search services created after August 1, 2020 in these regions: West US 2, East US, South Central US, US Gov Virginia, US Gov Arizona. Use the portal, management REST APIs, or SDKs to create the service. |
-
-## July 2020
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
-|||-||
-| [Azure.Search.Documents client library](/dotnet/api/overview/azure/search.documents-readme) | Azure SDK for .NET | .NET client library released by the Azure SDK team, designed for consistency with other .NET client libraries. <br/><br/>Version 11 targets the Search REST api-version=2020-06-30, but does not yet support knowledge store or geospatial types. <br/><br/>For more information, see [Quickstart: Create an index](search-get-started-dotnet.md) and [Upgrade to Azure.Search.Documents (v11)](search-dotnet-sdk-migration-version-11.md). | Generally available. </br> Install the [Azure.Search.Documents package](https://www.nuget.org/packages/Azure.Search.Documents/) from NuGet. |
-| [azure.search.documents client library](/python/api/overview/azure/search-documents-readme) | Azure SDK for Python| Python client library released by the Azure SDK team, designed for consistency with other Python client libraries. <br/><br/>Version 11 targets the Search REST api-version=2020-06-30. | Generally available. </br> Install the [azure-search-documents package](https://pypi.org/project/azure-search-documents/) from PyPI. |
-| [@azure/search-documents client library](/javascript/api/overview/azure/search-documents-readme) | Azure SDK for JavaScript | JavaScript client library released by the Azure SDK team, designed for consistency with other JavaScript client libraries. <br/><br/>Version 11 targets the Search REST api-version=2020-06-30. | Generally available. </br> Install the [@azure/search-documents package](https://www.npmjs.com/package/@azure/search-documents) from npm. |
-
-## June 2020
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
-|||-||
-[Knowledge store](knowledge-store-concept-intro.md) | AI enrichment | Output of an AI-enriched indexer, storing content in Azure Storage for use in other apps and processes. | Generally available. </br> Use [Search REST API 2020-06-30](/rest/api/searchservice/) or later, or the portal. |
-| [Search REST API 2020-06-30](/rest/api/searchservice/) | REST | A new stable version of the REST APIs. In addition to knowledge store, this version includes enhancements to search relevance and scoring. | Generally available. |
-| [Okapi BM25 relevance algorithm](index-ranking-similarity.md) | Query | New relevance ranking algorithm automatically used for all new search services created after July 15. For services created earlier, you can opt in by setting the `similarity` property on index fields. | Generally available. </br> Use [Search REST API 2020-06-30](/rest/api/searchservice/) or later, or REST API 2019-05-06. |
-| **executionEnvironment** | Security (indexers) | Explicitly set this indexer configuration property to `private` to force all connections to external data sources over a private endpoint. Applicable only to search services that leverage Azure Private Link. | Generally available. </br> Use [Search REST API 2020-06-30](/rest/api/searchservice/) to set this general configuration parameter. |
-
-## May 2020 (Microsoft Build)
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
-|||-||
-| [Debug sessions](cognitive-search-debug-session.md) | AI enrichment | Debug sessions provide a portal-based interface to investigate and resolve issues with an existing skillset. Fixes created in the debug session can be saved to production skillsets. Get started with [this tutorial](cognitive-search-tutorial-debug-sessions.md). | Public preview, in the portal. |
-| [IP rules for in-bound firewall support](service-configure-firewall.md) | Security | Limit access to a search service endpoint to specific IP addresses. | Generally available. </br> Use [Management REST API 2020-03-13](/rest/api/searchmanagement/) or later, or the portal. |
-| [Azure Private Link for a private search endpoint](service-create-private-endpoint.md) | Security| Shield a search service from the public internet by running it as a private link resource, accessible only to client apps and other Azure services on the same virtual network. | Generally available. </br> Use [Management REST API 2020-03-13](/rest/api/searchmanagement/) or later, or the portal. |
-| [system-managed identity (preview)](search-howto-managed-identities-data-sources.md) | Security (indexers) | Register a search service as a trusted service with Azure Active Directory to set up connections to supported Azure data source for indexing. Applies to [indexers](search-indexer-overview.md) that ingest content from Azure data sources such as Azure SQL Database, Azure Cosmos DB, and Azure Storage. | Public preview. </br> Use the portal to register the search service. |
-| [sessionId query parameter](index-similarity-and-scoring.md), [scoringStatistics=global parameter](index-similarity-and-scoring.md#scoring-statistics) | Query (relevance) | Add sessionID to a query to establish a session for computing search scores, with scoringStatistics=global to collect scores from all shards, for more consistent search score calculations. | Generally available. </br> Use [Search REST API 2020-06-30](/rest/api/searchservice/) or later, or REST API 2019-05-06. |
-| [featuresMode relevance score expansion (preview)](index-similarity-and-scoring.md#featuresMode-param) | Query | Add this query parameter to expand a relevance score to show more detail: per field similarity score, per field term frequency, and per field number of unique tokens matched. <br/><br/>Custom scoring algorithms can consume this information. *Learning to rank* algorithms are an advanced custom scoring capability that can be implemented when you provide relevance score details. For a sample that demonstrates this capability, see [Add machine learning (LearnToRank) to search relevance](https://github.com/Azure-Samples/search-ranking-tutorial). | Public preview. </br> Use [Search REST API 2020-06-30-Preview](/rest/api/searchservice/index-preview) or REST API 2019-05-06-Preview. |
-
-## March 2020
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
-|||-||
-| [Native blob soft delete (preview)](search-howto-index-changed-deleted-blobs.md) | Indexers | An Azure Blob Storage indexer in Azure Cognitive Search will recognize blobs that are in a soft deleted state, and remove the corresponding search document during indexing. | Public preview. </br> Use the [Search REST API 2020-06-30-Preview](/rest/api/searchservice/index-preview) and REST API 2019-05-06-Preview, with Run Indexer against an Azure Blob data source that has native "soft delete" enabled. |
-| [Management REST API (2020-03-13)](/rest/api/searchmanagement/management-api-versions) | REST | New stable REST API for creating and managing a search service. Adds IP firewall and Private Link support | Generally available. |
-
-## February 2020
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
-|||-||
-| [PII Detection (preview)](cognitive-search-skill-pii-detection.md) | AI enrichment | A new cognitive skill used during indexing that extracts personal information from an input text and gives you the option to mask it from that text in various ways. | Public preview. </br> Use the portal or [Search REST API 2020-06-30-Preview](/rest/api/searchservice/index-preview) or REST API 2019-05-06-Preview. |
-| [Custom Entity Lookup (preview)](cognitive-search-skill-custom-entity-lookup.md )| AI enrichment | A new cognitive skill that looks for text from a custom, user-defined list of words and phrases. Using this list, it labels all documents with any matching entities. The skill also supports a degree of fuzzy matching that can be applied to find matches that are similar but not exact. | Public preview. </br> Use the portal or [Search REST API 2020-06-30-Preview](/rest/api/searchservice/index-preview) or REST API 2019-05-06-Preview. |
-
-## January 2020
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
-|||-||
-| [Customer-managed encryption keys](search-security-manage-encryption-keys.md) |Security | Adds an extra layer of encryption in addition to the platform's built-in encryption. Using an encryption key that you create and manage, you can encrypt index content and synonym maps before the payload reaches a search service. | Generally available. </br> Use Search REST API 2019-05-06 or later. For managed code, the correct package is still [.NET SDK version 8.0-preview](search-dotnet-sdk-migration-version-9.md) even though the feature is out of preview. |
-| [IP rules for in-bound firewall support (preview)](service-configure-firewall.md) | Security | Limit access to a search service endpoint to specific IP addresses. The preview API has new **IpRule** and **NetworkRuleSet** properties in [CreateOrUpdate API](/rest/api/searchmanagement/2019-10-01-preview/createorupdate-service). This preview feature is available in selected regions. | Public preview using api-version=2019-10-01-Preview. |
-| [Azure Private Link for a private search endpoint (preview)](service-create-private-endpoint.md) | Security| Shield a search service from the public internet by running it as a private link resource, accessible only to client apps and other Azure services on the same virtual network. | Public preview using api-version=2019-10-01-Preview. |
-
-## 2019 Feature Announcements
-
-### December 2019
-
-+ [Create Demo App (preview)](search-create-app-portal.md) is a new wizard in the portal that generates a downloadable HTML file with query (read-only) access to an index. The file comes with embedded script that renders an operational "localhost"-style web app, bound to an index on your search service. Pages are configurable in the wizard and can contain a search bar, results area, sidebar navigation, and typeahead query support. You can modify the HTML offline to extend or customize the workflow or appearance. A demo app is not easily extended to include security and hosting layers that are typically needed in production scenarios. You should consider it as a validation and testing tool rather than a short cut to a full client app.
-
-+ [Create a private endpoint for secure connections (preview)](service-create-private-endpoint.md) explains how to set up a Private Link for secure connections to your search service. This preview feature is available upon request and uses [Azure Private Link](../private-link/private-link-overview.md) and [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) as part of the solution.
-
-### November 2019 - Ignite Conference
-
-+ [Incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) adds caching and statefullness to an enrichment pipeline so that you can work on specific steps or phases without losing content that is already processed. Previously, any change to an enrichment pipeline required a full rebuild. With incremental enrichment, the output of costly analysis, especially image analysis, is preserved.
-
-<!--
-+ Custom Entity Lookup is a cognitive skill used during indexing that allows you to provide a list of custom entities (such as part numbers, diseases, or names of locations you care about) that should be found within the text. It supports fuzzy matching, case-insensitive matching, and entity synonyms. -->
-
-+ [Document Extraction (preview)](cognitive-search-skill-document-extraction.md) is a cognitive skill used during indexing that allows you to extract the contents of a file from within a skillset. Previously, document cracking only occurred prior to skillset execution. With the addition of this skill, you can also perform this operation within skillset execution.
-
-+ [Text Translation](cognitive-search-skill-text-translation.md) is a cognitive skill used during indexing that evaluates text and, for each record, returns the text translated to the specified target language.
-
-+ [Power BI templates](https://github.com/Azure-Samples/cognitive-search-templates/blob/master/README.md) can jumpstart your visualizations and analysis of enriched content in a knowledge store in Power BI desktop. This template is designed for Azure table projections created through the [Import data wizard](knowledge-store-create-portal.md).
-
-+ [Azure Data Lake Storage Gen2 (preview)](search-howto-index-azure-data-lake-storage.md), [Cosmos DB Gremlin API (preview)](search-howto-index-cosmosdb.md), and [Cosmos DB Cassandra API (preview)](search-howto-index-cosmosdb.md) are now supported in indexers. You can sign up using [this form](https://aka.ms/azure-cognitive-search/indexer-preview). You will receive a confirmation email once you have been accepted into the preview program.
-
-### July 2019
-
-+ Generally available in [Azure Government Cloud](../azure-government/compare-azure-government-global-azure.md#azure-cognitive-search).
+## January 2021
+
+|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
+||-||
+| [Solution accelerator for Azure Cognitive Search and QnA Maker](https://github.com/Azure-Samples/search-qna-maker-accelerator) | Pulls questions and answers out of documents and suggests the most relevant answers. A live demo app is available at [https://aka.ms/qnaWithAzureSearchDemo](https://aka.ms/qnaWithAzureSearchDemo). | Open-source project (no SLA) |
+
+## 2020 Archive
+
+| Month | Feature | Description |
+|-||-|
+| November | [Customer-managed key encryption (extended)](search-security-manage-encryption-keys.md) | Extends customer-managed encryption over the full range of assets created and managed by a search service. Generally available.|
+| September | [Managed service identity (indexers)](search-howto-managed-identities-data-sources.md) | Generally available. |
+| September | [Outbound requests using a private link](search-indexer-howto-access-private.md) | Generally available. |
+| September | [Management REST API (2020-08-01)](/rest/api/searchmanagement/management-api-versions) | Generally available. |
+| September | [Management REST API (2020-08-01-Preview)](/rest/api/searchmanagement/management-api-versions) | Adds shared private link resource for Azure Functions and Azure SQL for MySQL Databases. |
+| September | [Management .NET SDK 4.0](/dotnet/api/overview/azure/search/management) | Azure SDK update for the management SDK, targeted REST API version 2020-08-01. Generally available.|
+| August | [double encryption](search-security-overview.md#encryption) | Generally available on all search services created after August 1, 2020 in these regions: West US 2, East US, South Central US, US Gov Virginia, US Gov Arizona. |
+| July | [Azure.Search.Documents client library](/dotnet/api/overview/azure/search.documents-readme) | Azure SDK for .NET, generally available. |
+| July | [azure.search.documents client library](/python/api/overview/azure/search-documents-readme) | Azure SDK for Python, generally available. |
+| July | [@azure/search-documents client library](/javascript/api/overview/azure/search-documents-readme) | Azure SDK for JavaScript, generally available. |
+| June | [Knowledge store](knowledge-store-concept-intro.md) | Generally available. |
+| June | [Search REST API 2020-06-30](/rest/api/searchservice/) | Generally available. |
+| June | [Search REST API 2020-06-30-Preview](/rest/api/searchservice/) | Adds Reset Skillset to selectively reprocess skills, and incremental enrichment. |
+| June | [Okapi BM25 relevance algorithm](index-ranking-similarity.md) | Generally available. |
+| June | **executionEnvironment** (applies to search services using Azure Private Link) | Generally available. |
+| June | [AML skill (preview)](cognitive-search-aml-skill.md) | A cognitive skill that extends AI enrichment with a custom Azure Machine Learning (AML) model. |
+| May | [Debug sessions (preview)](cognitive-search-debug-session.md) | Skillset debugger in the portal. |
+| May | [IP rules for in-bound firewall support](service-configure-firewall.md) | Generally available. |
+| May | [Azure Private Link for a private search endpoint](service-create-private-endpoint.md) | Generally available. |
+| May | [Managed service identity (indexers) - (preview)](search-howto-managed-identities-data-sources.md) | Connect to Azure data sources using a managed identity. |
+| May | [sessionId query parameter](index-similarity-and-scoring.md), [scoringStatistics=global parameter](index-similarity-and-scoring.md#scoring-statistics) | Global search statistics, useful for [machine learning (LearnToRank) models for search relevance](https://github.com/Azure-Samples/search-ranking-tutorial). |
+| May | [featuresMode relevance score expansion (preview)](index-similarity-and-scoring.md#featuresMode-param) | Expands a relevance score to show per-field detail. |
+|March | [Native blob soft delete (preview)](search-howto-index-changed-deleted-blobs.md) | Deletes search documents if the source blob is soft-deleted in blob storage. |
+|March | [Management REST API (2020-03-13)](/rest/api/searchmanagement/management-api-versions) | Generally available. |
+|February | [PII Detection skill (preview)](cognitive-search-skill-pii-detection.md) | A cognitive skill that extracts and masks personal information. |
+|February | [Custom Entity Lookup skill (preview)](cognitive-search-skill-custom-entity-lookup.md) | A cognitive skill that finds words and phrases from a list and labels all documents with matching entities. |
+|January | [Customer-managed key encryption](search-security-manage-encryption-keys.md) | Generally available |
+|January | [IP rules for in-bound firewall support (preview)](service-configure-firewall.md) | New **IpRule** and **NetworkRuleSet** properties in [CreateOrUpdate API](/rest/api/searchmanagement/2019-10-01-preview/createorupdate-service). |
+|January | [Create a private endpoint (preview)](service-create-private-endpoint.md) | Set up a Private Link for secure connections to your search service. This preview feature depends on [Azure Private Link](../private-link/private-link-overview.md) and [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) as part of the solution. |
+
+## 2019 Archive
+
+| Month | Feature | Description |
+|-||-|
+|December | [Create Demo App (preview)](search-create-app-portal.md) | A wizard that generates a downloadable HTML file with query (read-only) access to an index, intended as a validation and testing tool rather than a shortcut to a full client app.|
+|November | [Incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) | Caches skillset processing for future reuse. |
+|November | [Document Extraction skill (preview)](cognitive-search-skill-document-extraction.md) | A cognitive skill to extract the contents of a file from within a skillset.|
+|November | [Text Translation skill](cognitive-search-skill-text-translation.md) | A cognitive skill used during indexing that evaluates and translates text. Generally available.|
+|November | [Power BI templates](https://github.com/Azure-Samples/cognitive-search-templates/blob/master/README.md) | Templates for visualizing content in a knowledge store. |
+|November | [Azure Data Lake Storage Gen2 (preview)](search-howto-index-azure-data-lake-storage.md), [Cosmos DB Gremlin API (preview)](search-howto-index-cosmosdb.md), and [Cosmos DB Cassandra API (preview)](search-howto-index-cosmosdb.md) | New indexer data sources in public preview. |
+|July | [Azure Government Cloud support](../azure-government/compare-azure-government-global-azure.md#azure-cognitive-search) | Generally available.|
<a name="new-service-name"></a> ## New service name
-Azure Search is now renamed to **Azure Cognitive Search** to reflect the expanded (yet optional) use of cognitive skills and AI processing in core operations. API versions, NuGet packages, namespaces, and endpoints are unchanged. New and existing search solutions are unaffected by the service name change.
+Azure Search was renamed to **Azure Cognitive Search** in October 2019 to reflect the expanded (yet optional) use of cognitive skills and AI processing in core operations. API versions, NuGet packages, namespaces, and endpoints are unchanged. New and existing search solutions are unaffected by the service name change.
## Service updates
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-adaptive-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-adaptive-application.md
@@ -12,7 +12,7 @@ ms.devlang: na
na Previously updated : 08/06/2020 Last updated : 02/07/2021
@@ -40,7 +40,7 @@ By defining lists of known-safe applications, and generating alerts when anythin
- Prevent specific software that's banned by your organization - Increase oversight of apps that access sensitive data -
+No enforcement options are currently available. Adaptive application controls are intended to provide security alerts when any application other than the ones you've defined as safe is run.
## Availability
@@ -231,7 +231,12 @@ Some of the functions that are available from the REST API:
> Remove the following properties before using the JSON in the Put request: recommendationStatus, configurationStatus, issues, location, and sourceSystem.
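As one hedged illustration of calling the API, requests can be issued through `Invoke-AzRestMethod` from the Az PowerShell modules. The provider path and api-version below are assumptions and should be verified against the Microsoft.Security REST reference before use.

```powershell
# Hedged sketch: list adaptive application control groups for a subscription.
# The provider path and api-version are assumptions -- verify them against the
# Microsoft.Security REST API reference before relying on this call.
$sub = (Get-AzContext).Subscription.Id
Invoke-AzRestMethod -Method GET `
    -Path "/subscriptions/$sub/providers/Microsoft.Security/applicationWhitelistings?api-version=2020-01-01"
```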
+## FAQ - Adaptive application controls
+
+### Are there any options to enforce the application controls?
+No enforcement options are currently available. Adaptive application controls are intended to provide **security alerts** when any application other than the ones you've defined as safe is run. They have a range of benefits ([What are the benefits of adaptive application controls?](#what-are-the-benefits-of-adaptive-application-controls)) and are highly customizable, as shown on this page.
+
## Next steps In this document, you learned how to use adaptive application control in Azure Security Center to define allow lists of applications running on your Azure and non-Azure machines. To learn more about some of Security Center's other cloud workload protection features, see:
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-services.md
@@ -11,7 +11,7 @@ ms.devlang: na
na Previously updated : 01/26/2021 Last updated : 02/08/2021
@@ -49,7 +49,7 @@ The two tabs below show the features of Azure Security Center that are available
|**Feature**|**Azure Virtual Machines**|**Azure Virtual Machine Scale Sets**|**Azure Arc enabled machines**|**Azure Defender required** |-|:-:|:-:|:-:|:-:|
-|[Microsoft Defender for Endpoint integration](security-center-wdatp.md)|-|-|-|Yes|
+|[Microsoft Defender for Endpoint integration](security-center-wdatp.md)|-|-|-|-|
|[Virtual machine behavioral analytics (and security alerts)](./azure-defender.md)|✔</br>(on supported versions)|✔</br>(on supported versions)|✔|Yes| |[Fileless security alerts](alerts-reference.md#alerts-windows)|-|-|-|Yes| |[Network-based security alerts](other-threat-protections.md#network-layer)|✔|✔|-|Yes|
sentinel https://docs.microsoft.com/en-us/azure/sentinel/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
@@ -7,7 +7,7 @@
Previously updated : 02/02/2021 Last updated : 02/04/2021 # What's new in Azure Sentinel
@@ -27,6 +27,7 @@ Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
## January 2021
+- [Analytics rule wizard: Improved query editing experience (public preview)](#analytics-rule-wizard-improved-query-editing-experience-public-preview)
- [Az.SecurityInsights PowerShell module (Public preview)](#azsecurityinsights-powershell-module-public-preview) - [SQL database connector](#sql-database-connector) - [Improved incident comments](#improved-incident-comments)
@@ -35,6 +36,16 @@ Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
- [Improved rule tuning with the analytics rule preview graphs](#improved-rule-tuning-with-the-analytics-rule-preview-graphs-public-preview)
+## Analytics rule wizard: Improved query editing experience (public preview)
+
+The Azure Sentinel Scheduled analytics rule wizard now provides the following enhancements for writing and editing queries:
+
+- An expandable editing window, providing you with more screen space to view your query.
+- Key word highlighting in your query code.
+- Expanded auto-complete support.
+- Real-time query validations. Errors in your query now show as a red block in the scroll bar, and as a red dot in the **Set rule logic** tab name. Additionally, a query with errors cannot be saved.
+
+For more information, see [Tutorial: Detect threats out-of-the-box](tutorial-detect-threats-built-in.md).
### Az.SecurityInsights PowerShell module (Public preview) Azure Sentinel now supports the new [Az.SecurityInsights](https://www.powershellgallery.com/packages/Az.SecurityInsights/) PowerShell module.
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/upgrade-managed-disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/upgrade-managed-disks.md
@@ -1,372 +0,0 @@
- Title: Upgrade cluster nodes to use Azure managed disks
-description: Here's how to upgrade an existing Service Fabric cluster to use Azure managed disks with little or no downtime of your cluster.
- Previously updated : 4/07/2020-
-# Upgrade cluster nodes to use Azure managed disks
-
-[Azure managed disks](../virtual-machines/managed-disks-overview.md) are the recommended disk storage offering for use with Azure virtual machines for persistent storage of data. You can improve the resiliency of your Service Fabric workloads by upgrading the virtual machine scale sets that underlie your node types to use managed disks. Here's how to upgrade an existing Service Fabric cluster to use Azure managed disks with little or no downtime of your cluster.
-
-The general strategy for upgrading a Service Fabric cluster node to use managed disks is to:
-
-1. Deploy an otherwise duplicate virtual machine scale set of that node type, but with the [managedDisk](/azure/templates/microsoft.compute/2019-07-01/virtualmachinescalesets/virtualmachines#ManagedDiskParameters) object added to the `osDisk` section of the virtual machine scale set deployment template. The new scale set should bind to the same load balancer / IP as the original, so that your customers don't experience a service outage during the migration.
-
-2. Once both the original and upgraded scale sets are running side by side, disable the original node instances one at a time so that the system services (or replicas of stateful services) migrate to the new scale set.
-
-3. Verify the cluster and new nodes are healthy, then remove the original scale set and node state for the deleted nodes.
-
-This article will walk you through the steps of upgrading the primary node type of an example cluster to use managed disks, while avoiding any cluster downtime (see note below). The initial state of the example test cluster consists of one node type of [Silver durability](service-fabric-cluster-capacity.md#durability-characteristics-of-the-cluster), backed by a single scale set with five nodes.
-
-> [!NOTE]
-> The limitations of a Basic SKU load balancer prevent an additional scale set from being added. We recommend using the Standard SKU load balancer instead. For more, see [a comparison of the two SKUs](../load-balancer/skus.md).
-
-> [!CAUTION]
-> You will experience an outage with this procedure only if you have dependencies on the cluster DNS (such as when accessing [Service Fabric Explorer](service-fabric-visualizing-your-cluster.md)). Architectural [best practice for front-end services](/azure/architecture/microservices/design/gateway) is to have some kind of [load balancer](/azure/architecture/guide/technology-choices/load-balancing-overview) in front of your node types to make node swapping possible without an outage.
-
-Here are the [templates and cmdlets](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade) for Azure Resource Manager that we'll use to complete the upgrade scenario. The template changes will be explained in [Deploy an upgraded scale set for the primary node type](#deploy-an-upgraded-scale-set-for-the-primary-node-type) below.
-
-## Set up the test cluster
-
-Let's set up the initial Service Fabric test cluster. First, [download](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade) the Azure Resource Manager sample templates that we'll use to complete this scenario.
-
-Next, sign in to your Azure account.
-
-```powershell
-# Sign in to your Azure account
-Login-AzAccount -SubscriptionId "<subscription ID>"
-```
-
-The following commands will guide you through generating a new self-signed certificate and deploying the test cluster. If you already have a certificate you'd like to use, skip to [Use an existing certificate to deploy the cluster](#use-an-existing-certificate-to-deploy-the-cluster).
-
-### Generate a self-signed certificate and deploy the cluster
-
-First, assign the variables you'll need for Service Fabric cluster deployment. Adjust the values for `resourceGroupName`, `certSubjectName`, `parameterFilePath`, and `templateFilePath` for your specific account and environment:
-
-```powershell
-# Assign deployment variables
-$resourceGroupName = "sftestupgradegroup"
-$certOutputFolder = "c:\certificates"
-$certPassword = "Password!1" | ConvertTo-SecureString -AsPlainText -Force
-$certSubjectName = "sftestupgrade.southcentralus.cloudapp.azure.com"
-$templateFilePath = "C:\Initial-1NodeType-UnmanagedDisks.json"
-$parameterFilePath = "C:\Initial-1NodeType-UnmanagedDisks.parameters.json"
-```
-
-> [!NOTE]
-> Ensure that the `certOutputFolder` location exists on your local machine before running the command to deploy a new Service Fabric cluster.
-
-Next, open the [*Initial-1NodeType-UnmanagedDisks.parameters.json*](https://github.com/erikadoyle/service-fabric-scripts-and-templates/blob/managed-disks/templates/nodetype-upgrade-no-outage/Initial-1NodeType-UnmanagedDisks.parameters.json) file, adjust the values for `clusterName` and `dnsName` to correspond to the values you set in PowerShell, and save your changes.
-
-Then deploy the Service Fabric test cluster:
-
-```powershell
-# Deploy the initial test cluster
-New-AzServiceFabricCluster `
- -ResourceGroupName $resourceGroupName `
- -CertificateOutputFolder $certOutputFolder `
- -CertificatePassword $certPassword `
- -CertificateSubjectName $certSubjectName `
- -TemplateFile $templateFilePath `
- -ParameterFile $parameterFilePath
-```
-
-Once the deployment is complete, locate the *.pfx* file (`$certPfx`) on your local machine and import it to your certificate store:
-
-```powershell
-cd c:\certificates
-$certPfx = ".\sftestupgradegroup20200312121003.pfx"
-
-Import-PfxCertificate `
- -FilePath $certPfx `
- -CertStoreLocation Cert:\CurrentUser\My `
- -Password (ConvertTo-SecureString Password!1 -AsPlainText -Force)
-```
-
-The operation will return the certificate thumbprint, which you'll use to [connect to the new cluster](#connect-to-the-new-cluster-and-check-health-status) and check its health status. (Skip the following section, which is an alternate approach to cluster deployment.)
-
-### Use an existing certificate to deploy the cluster
-
-You can also use an existing Azure Key Vault certificate to deploy the test cluster. To do this, you'll need to [obtain references to your Key Vault](#obtain-your-key-vault-references) and certificate thumbprint.
-
-```powershell
-# Key Vault variables
-$certUrlValue = "https://sftestupgradegroup.vault.azure.net/secrets/sftestupgradegroup20200309235308/dac0e7b7f9d4414984ccaa72bfb2ea39"
-$sourceVaultValue = "/subscriptions/########-####-####-####-############/resourceGroups/sftestupgradegroup/providers/Microsoft.KeyVault/vaults/sftestupgradegroup"
-$thumb = "BB796AA33BD9767E7DA27FE5182CF8FDEE714A70"
-```
-
-Open the [*Initial-1NodeType-UnmanagedDisks.parameters.json*](https://github.com/erikadoyle/service-fabric-scripts-and-templates/blob/managed-disks/templates/nodetype-upgrade-no-outage/Initial-1NodeType-UnmanagedDisks.parameters.json) file and change the values for `clusterName` and `dnsName` to something unique.
-
-Finally, designate a resource group name for the cluster and set the `templateFilePath` and `parameterFilePath` locations of your *Initial-1NodeType-UnmanagedDisks* files:
-
-> [!NOTE]
-> The designated resource group must already exist and be located in the same region as your Key Vault.
-
-```powershell
-# Assign deployment variables for the initial (unmanaged disks) cluster
-$resourceGroupName = "sftestupgradegroup"
-$templateFilePath = "C:\Initial-1NodeType-UnmanagedDisks.json"
-$parameterFilePath = "C:\Initial-1NodeType-UnmanagedDisks.parameters.json"
-```
-
-Finally, run the following command to deploy the initial test cluster:
-
-```powershell
-New-AzResourceGroupDeployment `
- -ResourceGroupName $resourceGroupName `
- -TemplateFile $templateFilePath `
- -TemplateParameterFile $parameterFilePath `
- -CertificateThumbprint $thumb `
- -CertificateUrlValue $certUrlValue `
- -SourceVaultValue $sourceVaultValue `
- -Verbose
-```
-
-### Connect to the new cluster and check health status
-
-Connect to the cluster and ensure that all five of its nodes are healthy (replacing the `clusterName` and `thumb` variables for your cluster):
-
-```powershell
-# Connect to the cluster
-$clusterName = "sftestupgrade.southcentralus.cloudapp.azure.com:19000"
-$thumb = "BB796AA33BD9767E7DA27FE5182CF8FDEE714A70"
-
-Connect-ServiceFabricCluster `
- -ConnectionEndpoint $clusterName `
- -KeepAliveIntervalInSec 10 `
- -X509Credential `
- -ServerCertThumbprint $thumb `
- -FindType FindByThumbprint `
- -FindValue $thumb `
- -StoreLocation CurrentUser `
- -StoreName My
-
-# Check cluster health
-Get-ServiceFabricClusterHealth
-```
-
-With that, we're ready to begin the upgrade procedure.
-
-## Deploy an upgraded scale set for the primary node type
-
-In order to upgrade, or *vertically scale*, a node type, we'll need to deploy a copy of that node type's virtual machine scale set, which is otherwise identical to the original scale set (including reference to the same `nodeTypeRef`, `subnet`, and `loadBalancerBackendAddressPools`) except that it includes the desired upgrade/changes and its own separate subnet and inbound NAT address pool. Because we are upgrading a primary node type, the new scale set will be marked as primary (`isPrimary: true`), just like the original scale set. (For non-primary node type upgrades, simply omit this.)
-
-For convenience, the required changes have already been made for you in the *Upgrade-1NodeType-2ScaleSets-ManagedDisks* [template](https://github.com/erikadoyle/service-fabric-scripts-and-templates/blob/managed-disks/templates/nodetype-upgrade-no-outage/Upgrade-1NodeType-2ScaleSets-ManagedDisks.json) and [parameters](https://github.com/erikadoyle/service-fabric-scripts-and-templates/blob/managed-disks/templates/nodetype-upgrade-no-outage/Upgrade-1NodeType-2ScaleSets-ManagedDisks.parameters.json) files.
-
-The following sections will explain the template changes in detail. If you prefer, you can skip the explanation and continue on to [the next step of the upgrade procedure](#obtain-your-key-vault-references).
-
-### Update the cluster template with the upgraded scale set
-
-Here are the section-by-section modifications of the original cluster deployment template for adding an upgraded scale set for the primary node type.
-
-#### Parameters
-
-Add parameters for the instance name, count, and size of the new scale set. Note that `vmNodeType1Name` is unique to the new scale set, while the count and size values are identical to the original scale set.
-
-**Template file**
-
-```json
-"vmNodeType1Name": {
- "type": "string",
- "defaultValue": "NTvm2",
- "maxLength": 9
-},
-"nt1InstanceCount": {
- "type": "int",
- "defaultValue": 5,
- "metadata": {
- "description": "Instance count for node type"
- }
-},
-"vmNodeType1Size": {
- "type": "string",
- "defaultValue": "Standard_D2_v2"
-},
-```
-
-**Parameters file**
-
-```json
-"vmNodeType1Name": {
- "value": "NTvm2"
-},
-"nt1InstanceCount": {
- "value": 5
-},
-"vmNodeType1Size": {
- "value": "Standard_D2_v2"
-}
-```
-
-#### Variables
-
-In the deployment template `variables` section, add an entry for the inbound NAT address pool of the new scale set.
-
-**Template file**
-
-```json
-"lbNatPoolID1": "[concat(variables('lbID0'),'/inboundNatPools/LoadBalancerBEAddressNatPool1')]",
-```
-
-#### Resources
-
-In the deployment template *resources* section, add the new virtual machine scale set, keeping in mind these things:
-
-* The new scale set references the same node type as the original:
-
- ```json
- "nodeTypeRef": "[parameters('vmNodeType0Name')]",
- ```
-
-* The new scale set references the same load balancer backend address and subnet (but uses a different load balancer inbound NAT pool):
-
- ```json
- "loadBalancerBackendAddressPools": [
- {
- "id": "[variables('lbPoolID0')]"
- }
- ],
- "loadBalancerInboundNatPools": [
- {
- "id": "[variables('lbNatPoolID1')]"
- }
- ],
- "subnet": {
- "id": "[variables('subnet0Ref')]"
- }
- ```
-
-* Like the original scale set, the new scale set is marked as the primary node type. (When upgrading non-primary node types, omit this change.)
-
- ```json
- "isPrimary": true,
- ```
-
-* Unlike the original scale set, the new scale set is upgraded to use managed disks.
-
- ```json
- "managedDisk": {
- "storageAccountType": "[parameters('storageAccountType')]"
- }
- ```
-
-Once you've implemented all the changes in your template and parameters files, proceed to the next section to acquire your Key Vault references and deploy the updates to your cluster.
-
-### Obtain your Key Vault references
-
-To deploy the updated configuration, you'll first need to obtain several references to your cluster certificate stored in your Key Vault. The easiest way to find these values is through the Azure portal. You'll need:
-
-* **The Key Vault URL of your cluster certificate.** From your Key Vault in Azure portal, select **Certificates** > *Your desired certificate* > **Secret Identifier**:
-
- ```powershell
- $certUrlValue="https://sftestupgradegroup.vault.azure.net/secrets/sftestupgradegroup20200309235308/dac0e7b7f9d4414984ccaa72bfb2ea39"
- ```
-
-* **The thumbprint of your cluster certificate.** (You probably already have this if you [connected to the initial cluster](#connect-to-the-new-cluster-and-check-health-status) to check its health status.) From the same certificate blade (**Certificates** > *Your desired certificate*) in Azure portal, copy **X.509 SHA-1 Thumbprint (in hex)**:
-
- ```powershell
- $thumb = "BB796AA33BD9767E7DA27FE5182CF8FDEE714A70"
- ```
-
-* **The Resource ID of your Key Vault.** From your Key Vault in Azure portal, select **Properties** > **Resource ID**:
-
- ```powershell
- $sourceVaultValue = "/subscriptions/########-####-####-####-############/resourceGroups/sftestupgradegroup/providers/Microsoft.KeyVault/vaults/sftestupgradegroup"
- ```
-
-### Deploy the updated template
-
-Adjust the `parameterFilePath` and `templateFilePath` as needed and then run the following command:
-
-```powershell
-# Deploy the new scale set (upgraded to use managed disks) into the primary node type.
-$templateFilePath = "C:\Upgrade-1NodeType-2ScaleSets-ManagedDisks.json"
-$parameterFilePath = "C:\Upgrade-1NodeType-2ScaleSets-ManagedDisks.parameters.json"
-
-New-AzResourceGroupDeployment `
- -ResourceGroupName $resourceGroupName `
- -TemplateFile $templateFilePath `
- -TemplateParameterFile $parameterFilePath `
- -CertificateThumbprint $thumb `
- -CertificateUrlValue $certUrlValue `
- -SourceVaultValue $sourceVaultValue `
- -Verbose
-```
-
-When the deployment completes, check the cluster health again and ensure all ten nodes (five on the original and five on the new scale set) are healthy.
-
-```powershell
-Get-ServiceFabricClusterHealth
-```
-
-## Migrate seed nodes to the new scale set
-
-We're now ready to start disabling the nodes of the original scale set. As these nodes become disabled, the system services and seed nodes migrate to the VMs of the new scale set because it is also marked as the primary node type.
-
-```powershell
-# Disable the nodes in the original scale set.
-$nodeNames = @("_NTvm1_0","_NTvm1_1","_NTvm1_2","_NTvm1_3","_NTvm1_4")
-
-Write-Host "Disabling nodes..."
-foreach($name in $nodeNames){
- Disable-ServiceFabricNode -NodeName $name -Intent RemoveNode -Force
-}
-```
-
-Use Service Fabric Explorer to monitor the migration of seed nodes to the new scale set and the progression of nodes in the original scale set from *Disabling* to *Disabled* status. (A PowerShell polling alternative is sketched after the note below.)
-
-![Service Fabric Explorer showing status of disabled nodes](./media/upgrade-managed-disks/service-fabric-explorer-node-status.png)
-
-> [!NOTE]
-> It may take some time to complete the disabling operation across all the nodes of the original scale set. To guarantee data consistency, only one seed node can change at a time. Each seed node change requires a cluster update; thus replacing a seed node requires two cluster upgrades (one each for node addition and removal). Upgrading the five seed nodes in this sample scenario will result in ten cluster upgrades.
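If you'd rather watch the drain from PowerShell, a small polling loop along the following lines can help. This is a sketch: it assumes the cluster connection established earlier in this article is still active, and that the original scale set's nodes follow the `_NTvm1_` naming pattern used above.

```powershell
# Poll node status until no node in the original scale set is still disabling.
do {
    $nodes = Get-ServiceFabricNode | Where-Object { $_.NodeName -like "_NTvm1_*" }
    $nodes | Format-Table NodeName, NodeStatus, IsSeedNode
    Start-Sleep -Seconds 30
} while ($nodes | Where-Object { $_.NodeStatus -eq "Disabling" })
```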
-
-## Remove the original scale set
-
-Once the disabling operation is complete, remove the scale set.
-
-```powershell
-# Remove the original scale set
-$scaleSetName = "NTvm1"
-
-Remove-AzVmss `
- -ResourceGroupName $resourceGroupName `
- -VMScaleSetName $scaleSetName `
- -Force
-
-Write-Host "Removed scale set $scaleSetName"
-```
-
-In Service Fabric Explorer, the removed nodes (and thus the *Cluster Health State*) will now appear in *Error* state.
-
-![Service Fabric Explorer showing disabled nodes in error state](./media/upgrade-managed-disks/service-fabric-explorer-disabled-nodes-error-state.png)
-
-Remove the obsolete nodes from the Service Fabric cluster to restore the Cluster Health State to *OK*.
-
-```powershell
-# Remove node states for the deleted scale set
-foreach($name in $nodeNames){
- Remove-ServiceFabricNodeState -NodeName $name -TimeoutSec 300 -Force
- Write-Host "Removed node state for node $name"
-}
-```
-
-![Service Fabric Explorer with down nodes in error state removed](./media/upgrade-managed-disks/service-fabric-explorer-healthy-cluster.png)
-
-## Next steps
-
-In this walkthrough, you learned how to upgrade the virtual machine scale sets of a Service Fabric cluster to use managed disks while avoiding service outages during the process. For more information on related topics, check out the following resources.
-
-Learn how to:
-
-* [Scale up a Service Fabric cluster primary node type](service-fabric-scale-up-primary-node-type.md)
-
-* [Convert a scale set template to use managed disks](../virtual-machine-scale-sets/virtual-machine-scale-sets-convert-template-to-md.md)
-
-* [Remove a Service Fabric node type](service-fabric-how-to-remove-node-type.md)
-
-See also:
-
-* [Sample: Upgrade cluster nodes to use Azure managed disks](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade)
-
-* [Vertical scaling considerations](service-fabric-best-practices-capacity-scaling.md#vertical-scaling-considerations)
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-faq.md
@@ -183,7 +183,7 @@ Yes, [ExpressRoute can be used](concepts-expressroute-with-site-recovery.md) to
### If I replicate to Azure, what kind of storage account or managed disk do I need?
-You need an LRS or GRS storage. We recommend GRS so that data is resilient if a regional outage occurs, or if the primary region can't be recovered. The account must be in the same region as the Recovery Services vault. Premium storage is supported for VMware VM, Hyper-V VM, and physical server replication, when you deploy Site Recovery in the Azure portal. Managed disks only support LRS.
+Azure Site Recovery doesn't support using storage accounts as target storage. We recommend using managed disks as the target storage for your machines instead. Managed disks support only the LRS type for data resiliency.
### How often can I replicate data? * **Hyper-V:** Hyper-V VMs can be replicated every 30 seconds (except for premium storage), five minutes or 15 minutes.
storage https://docs.microsoft.com/en-us/azure/storage/blobs/object-replication-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/object-replication-overview.md
@@ -7,7 +7,7 @@
Previously updated : 01/13/2021 Last updated : 02/08/2021
@@ -39,6 +39,8 @@ Object replication requires that the following Azure Storage features are also e
Enabling change feed and blob versioning may incur additional costs. For more details, refer to the [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/).
+Object replication is supported only for general-purpose v2 storage accounts. Both the source and destination accounts must be general-purpose v2.
+ ## How object replication works Object replication asynchronously copies block blobs in a container according to rules that you configure. The contents of the blob, any versions associated with the blob, and the blob's metadata and properties are all copied from the source container to the destination container.
storage https://docs.microsoft.com/en-us/azure/storage/blobs/point-in-time-restore-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/point-in-time-restore-overview.md
@@ -7,7 +7,7 @@
Previously updated : 12/28/2020 Last updated : 02/01/2021
@@ -29,6 +29,10 @@ To initiate a point-in-time restore, call the [Restore Blob Ranges](/rest/api/st
Azure Storage analyzes all changes that have been made to the specified blobs between the requested restore point, specified in UTC time, and the present moment. The restore operation is atomic, so it either succeeds completely in restoring all changes, or it fails. If there are any blobs that cannot be restored, then the operation fails, and read and write operations to the affected containers resume.
+The following diagram shows how point-in-time restore works. One or more containers or blob ranges are restored to their state *n* days ago, where *n* is less than or equal to the retention period defined for point-in-time restore. The effect is to revert write and delete operations that happened during the retention period.
+
Only one restore operation can be run on a storage account at a time. A restore operation cannot be canceled once it is in progress, but a second restore operation can be performed to undo the first operation. The **Restore Blob Ranges** operation returns a restore ID that uniquely identifies the operation. To check the status of a point-in-time restore, call the **Get Restore Status** operation with the restore ID returned from the **Restore Blob Ranges** operation.
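For illustration, a minimal Azure CLI sketch of initiating a point-in-time restore follows; it assumes the `az storage blob restore` command available in recent CLI versions, and the account name, resource group, and restore point are placeholders (the restore point must fall within the configured retention period):

```azurecli
# Restore all block blobs in the account to their state at the given UTC time.
az storage blob restore \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --time-to-restore 2021-02-01T12:00:00Z
```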
storage https://docs.microsoft.com/en-us/azure/storage/blobs/soft-delete-blob-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-overview.md
@@ -7,7 +7,7 @@
Previously updated : 07/15/2020 Last updated : 02/01/2021
@@ -24,6 +24,10 @@ If there is a possibility that your data may accidentally be modified or deleted
When soft delete for blobs is enabled on a storage account, you can recover objects after they have been deleted, within the specified data retention period. This protection extends to any blobs (block blobs, append blobs, or page blobs) that are erased as the result of an overwrite.
+The following diagram shows how a deleted blob can be restored when blob soft delete is enabled:
+
If data in an existing blob or snapshot is deleted while blob soft delete is enabled but blob versioning is not enabled, then a soft-deleted snapshot is generated to save the state of the overwritten data. After the specified retention period has expired, the object is permanently deleted.

If blob versioning and blob soft delete are both enabled on the storage account, then deleting a blob creates a new version instead of a soft-deleted snapshot. The new version is not soft-deleted and is not removed when the soft-delete retention period expires. Soft-deleted versions of a blob can be restored within the retention period by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation. The blob can subsequently be restored from one of its versions by calling the [Copy Blob](/rest/api/storageservices/copy-blob) operation. For more information about using blob versioning and soft delete together, see [Blob versioning and soft delete](versioning-overview.md#blob-versioning-and-soft-delete).
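As a minimal sketch, a soft-deleted blob can also be recovered from the Azure CLI; the account, container, and blob names below are placeholders:

```azurecli
# Recover a soft-deleted blob within the retention period
# (equivalent to the Undelete Blob REST operation).
az storage blob undelete \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name myblob.txt
```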
storage https://docs.microsoft.com/en-us/azure/storage/blobs/soft-delete-container-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-container-overview.md
@@ -7,7 +7,7 @@
Previously updated : 01/06/2021 Last updated : 02/08/2021
@@ -15,21 +15,27 @@
# Soft delete for containers (preview)
-Soft delete for containers (preview) protects your data from being accidentally or erroneously modified or deleted. When container soft delete is enabled for a storage account, any deleted container and their contents are retained in Azure Storage for the period that you specify. During the retention period, you can restore previously deleted containers and any blobs within them.
+Soft delete for containers (preview) protects your data from being accidentally or maliciously deleted. When container soft delete is enabled for a storage account, any deleted container and its contents are retained in Azure Storage for the period that you specify. During the retention period, you can restore previously deleted containers. Restoring a container restores any blobs that were in that container when it was deleted.
For end to end protection for your blob data, Microsoft recommends enabling the following data protection features:

-- Container soft delete, to protect against accidental delete or overwrite of a container. To learn how to enable container soft delete, see [Enable and manage soft delete for containers](soft-delete-container-enable.md).
-- Blob soft delete, to protect against accidental delete or overwrite of an individual blob. To learn how to enable blob soft delete, see [Soft delete for blobs](soft-delete-blob-overview.md).
+- Container soft delete, to restore a container that has been deleted. To learn how to enable container soft delete, see [Enable and manage soft delete for containers](soft-delete-container-enable.md).
- Blob versioning, to automatically maintain previous versions of a blob. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it is erroneously modified or deleted. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md).
+- Blob soft delete, to restore a blob or version that has been deleted. To learn how to enable blob soft delete, see [Enable and manage soft delete for blobs](soft-delete-blob-enable.md).
> [!WARNING]
-> Deleting a storage account cannot be undone. Soft delete does not protect against the deletion of a storage account. To prevent accidental deletion of a storage account, configure a **CannotDelete** lock on the storage account resource. For more information on locking Azure resources, see [Lock resources to prevent unexpected changes](../../azure-resource-manager/management/lock-resources.md).
+> Deleting a storage account cannot be undone. Soft delete does not protect against the deletion of a storage account, but only against the deletion of data objects in that account. To protect a storage account from deletion, configure a **CannotDelete** lock on the storage account resource. For more information about locking Azure Resource Manager resources, see [Lock resources to prevent unexpected changes](../../azure-resource-manager/management/lock-resources.md).
## How container soft delete works

When you enable container soft delete, you can specify a retention period for deleted containers that is between 1 and 365 days. The default retention period is 7 days. During the retention period, you can recover a deleted container by calling the **Undelete Container** operation.
+When you restore a container, the container's blobs and any blob versions are also restored. However, you can only use container soft delete to restore blobs if the container itself was deleted. To restore a deleted blob when its parent container has not been deleted, you must use blob soft delete or blob versioning.
+
+The following diagram shows how a deleted container can be restored when container soft delete is enabled:
+
When you restore a container, you can restore it to its original name if that name has not been reused. If the original container name has been used, then you can restore the container with a new name.

After the retention period has expired, the container is permanently deleted from Azure Storage and cannot be recovered. The clock starts on the retention period at the point that the container is deleted. You can change the retention period at any time, but keep in mind that an updated retention period applies only to newly deleted containers. Previously deleted containers will be permanently deleted based on the retention period that was in effect at the time that the container was deleted.
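A hedged Azure CLI sketch of the recovery flow, with placeholder names; note that the `az storage container restore` command was itself in preview at the time of writing:

```azurecli
# List soft-deleted containers to find the name and version to restore.
az storage container list \
    --account-name mystorageaccount \
    --include-deleted

# Restore a deleted container by name and deleted version.
az storage container restore \
    --account-name mystorageaccount \
    --name mycontainer \
    --deleted-version <version-from-list-output>
```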
@@ -38,7 +44,7 @@ Disabling container soft delete does not result in permanent deletion of contain
## About the preview
-Container soft delete is available in preview in all public Azure regions.
+Container soft delete is available in preview in all Azure regions.
> [!IMPORTANT]
> The container soft delete preview is intended for non-production use only. Production service-level agreements (SLAs) are not currently available.
storage https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-change-feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-change-feed.md
@@ -3,7 +3,7 @@ Title: Change feed in Azure Blob Storage | Microsoft Docs
description: Learn about change feed logs in Azure Blob Storage and how to use them. Previously updated : 09/08/2020 Last updated : 02/08/2021
@@ -16,10 +16,16 @@ The purpose of the change feed is to provide transaction logs of all the changes
[!INCLUDE [storage-data-lake-gen2-support](../../../includes/storage-data-lake-gen2-support.md)]
+## How the change feed works
+
The change feed is stored as [blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs) in a special container in your storage account at standard [blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) cost. You can control the retention period of these files based on your requirements (see the [conditions](#conditions) of the current release). Change events are appended to the change feed as records in the [Apache Avro](https://avro.apache.org/docs/1.8.2/spec.html) format specification: a compact, fast, binary format that provides rich data structures with inline schema. This format is widely used in the Hadoop ecosystem, Stream Analytics, and Azure Data Factory.

You can process these logs asynchronously, incrementally, or in full. Any number of client applications can independently read the change feed, in parallel, and at their own pace. Analytics applications such as [Apache Drill](https://drill.apache.org/docs/querying-avro-files/) or [Apache Spark](https://spark.apache.org/docs/latest/sql-data-sources-avro.html) can consume logs directly as Avro files, which lets you process them at low cost, with high bandwidth, and without having to write a custom application.
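Because the change feed lives in an ordinary (if special-purpose) container named `$blobchangefeed`, you can inspect its Avro segment files with standard tooling; the account name in this sketch is a placeholder:

```azurecli
# List the Avro segment files that make up the change feed.
az storage blob list \
    --account-name mystorageaccount \
    --container-name '$blobchangefeed' \
    --query "[].name" --output tsv
```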
+The following diagram shows how records are added to the change feed:
+
Change feed support is well-suited for scenarios that process data based on objects that have changed. For example, applications can:

- Update a secondary index, synchronize with a cache, search engine, or any other content-management scenarios.
storage https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutable-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-immutable-storage.md
@@ -6,7 +6,7 @@
Previously updated : 11/13/2020 Last updated : 02/01/2021
@@ -50,6 +50,10 @@ Immutable storage for Azure Blob storage supports two types of WORM or immutable
Container and storage account deletion are also not permitted if there are any blobs in a container that are protected by a legal hold or a locked time-based policy. A legal hold policy will protect against blob, container, and storage account deletion. Both unlocked and locked time-based policies will protect against blob deletion for the specified time. Both unlocked and locked time-based policies will protect against container deletion only if at least one blob exists within the container. Only a container with a *locked* time-based policy will protect against storage account deletion; containers with unlocked time-based policies offer neither storage account deletion protection nor compliance.
+The following diagram shows how time-based retention policies and legal holds prevent write and delete operations while they are in effect.
+
For more information on how to set and lock time-based retention policies, see [Set and manage immutability policies for Blob storage](storage-blob-immutability-policies-manage.md).

## Time-based retention policies
storage https://docs.microsoft.com/en-us/azure/storage/blobs/versioning-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/versioning-overview.md
@@ -34,6 +34,10 @@ A version captures the state of a blob at a given point in time. When blob versi
When you create a blob with versioning enabled, the new blob is the current version of the blob (or the base blob). If you subsequently modify that blob, Azure Storage creates a version that captures the state of the blob before it was modified. The modified blob becomes the new current version. A new version is created each time you modify the blob.
+The following diagram shows how versions are created on write and delete operations, and how a previous version may be promoted to be the current version:
+
Having a large number of versions per blob can increase the latency for blob listing operations. Microsoft recommends maintaining fewer than 1000 versions per blob. You can use lifecycle management to automatically delete old versions. For more information about lifecycle management, see [Optimize costs by automating Azure Blob Storage access tiers](storage-lifecycle-management-concepts.md).

When you delete a blob with versioning enabled, Azure Storage creates a version that captures the state of the blob before it was deleted. The current version of the blob is then deleted, but the blob's versions persist, so that it can be re-created if needed.
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-troubleshoot.md
@@ -206,10 +206,10 @@ On the server that is showing as "Appears offline" in the portal, look at Event
- To use TLS cmdlets, see [Configuring TLS Cipher Suite Order by using TLS PowerShell Cmdlets](/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order-by-using-tls-powershell-cmdlets). Azure File Sync currently supports the following cipher suites for TLS 1.2 protocol:
- - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256
- - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256
- - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256
- - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256
+ - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
- If **GetNextJob completed with status: -2134347764** is logged, the server is unable to communicate with the Azure File Sync service due to an expired or deleted certificate.
  - Run the following PowerShell command on the server to reset the certificate used for authentication:
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/diagnostics-linux-v3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/diagnostics-linux-v3.md
@@ -0,0 +1,824 @@
+
+ Title: Azure Compute - Linux Diagnostic Extension 3.0
+description: How to configure the Azure Linux Diagnostic Extension (LAD) 3.0 to collect metrics and log events from Linux VMs running in Azure.
+
+ vm-linux
+ Last updated : 12/13/2018
+# Use Linux Diagnostic Extension 3.0 to monitor metrics and logs
+
+This document describes version 3.0 and newer of the Linux Diagnostic Extension.
+
+> [!IMPORTANT]
+> For information about version 2.3 and older, see [this document](/previous-versions/azure/virtual-machines/linux/classic/diagnostic-extension-v2).
+
+## Introduction
+
+The Linux Diagnostic Extension helps a user monitor the health of a Linux VM running on Microsoft Azure. It has the following capabilities:
+
+* Collects system performance metrics from the VM and stores them in a specific table in a designated storage account.
+* Retrieves log events from syslog and stores them in a specific table in the designated storage account.
+* Enables users to customize the data metrics that are collected and uploaded.
+* Enables users to customize the syslog facilities and severity levels of events that are collected and uploaded.
+* Enables users to upload specified log files to a designated storage table.
+* Supports sending metrics and log events to arbitrary EventHub endpoints and JSON-formatted blobs in the designated storage account.
+
+This extension works with both Azure deployment models.
+
+## Installing the extension in your VM
+
+You can enable this extension by using the Azure PowerShell cmdlets, Azure CLI scripts, ARM templates, or the Azure portal. For more information, see [Extensions Features](features-linux.md).
+
+>[!NOTE]
+>Certain components of the Diagnostics VM extension are also shipped in the [Log Analytics VM extension](./oms-linux.md). Due to this architecture, conflicts can arise if both extensions are instantiated in the same ARM template. To avoid these install-time conflicts, use the [`dependsOn` directive](../../azure-resource-manager/templates/define-resource-dependency.md#dependson) to ensure the extensions are installed sequentially. The extensions can be installed in either order.
+
+These installation instructions and a [downloadable sample configuration](https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json) configure LAD 3.0 to:
+
+* capture and store the same metrics as were provided by LAD 2.3;
+* capture a useful set of file system metrics, new to LAD 3.0;
+* capture the default syslog collection enabled by LAD 2.3;
+* enable the Azure portal experience for charting and alerting on VM metrics.
+
+The downloadable configuration is just an example; modify it to suit your own needs.
+
+### Supported Linux distributions
+
+The Linux Diagnostic Extension supports the following distributions and versions. The list of distributions and versions applies only to Azure-endorsed Linux vendor images. Third-party BYOL and BYOS images, like appliances, are generally not supported for the Linux Diagnostic Extension.
+
+A distribution that lists only major versions, like Debian 7, is also supported for all minor versions. If a specific minor version is specified, only that specific version is supported; if "+" is appended, minor versions equal to or greater than the specified version are supported.
+
+Supported distributions and versions:
+
+- Ubuntu 18.04, 16.04, 14.04
+- CentOS 7, 6.5+
+- Oracle Linux 7, 6.4+
+- OpenSUSE 13.1+
+- SUSE Linux Enterprise Server 12
+- Debian 9, 8, 7
+- RHEL 7, 6.7+
+
+### Prerequisites
+
+* **Azure Linux Agent version 2.2.0 or later**. Most Azure VM Linux gallery images include version 2.2.7 or later. Run `/usr/sbin/waagent -version` to confirm the version installed on the VM. If the VM is running an older version of the guest agent, follow [these instructions](./update-linux-agent.md) to update it.
+* **Azure CLI**. [Set up the Azure CLI](/cli/azure/install-azure-cli) environment on your machine.
+* The wget command, if you don't already have it: Run `sudo apt-get install wget`.
+* An existing Azure subscription and an existing general-purpose storage account to store the data in. General-purpose storage accounts support Table storage, which is required. A Blob storage account will not work.
+* Python 2
+
+### Python requirement
+
+The Linux Diagnostic Extension requires Python 2. If your virtual machine uses a distro that doesn't include Python 2 by default, you must install it. The following sample commands install Python 2 on different distros:
+
+ - Red Hat, CentOS, Oracle: `yum install -y python2`
+ - Ubuntu, Debian: `apt-get install -y python2`
+ - SUSE: `zypper install -y python2`
+
+The python2 executable must be aliased to *python*. Following is one method that you can use to set this alias:
+
+1. Run the following command to remove any existing aliases.
+
+ ```bash
+ sudo update-alternatives --remove-all python
+ ```
+
+2. Run the following command to create the alias.
+
+ ```bash
+ sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
+ ```
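After setting the alias, a quick sanity check confirms that *python* resolves to a Python 2 interpreter:

```bash
# Should report a 2.x version once the alias is in place.
python --version
```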
+
+### Sample installation
+
+> [!NOTE]
+> For either of the samples, fill in the correct values for the variables in the first section before running.
+
+The sample configuration downloaded in these examples collects a set of standard data and sends them to table storage. The URL for the sample configuration and its contents are subject to change. In most cases, you should download a copy of the portal settings JSON file and customize it for your needs, then have any templates or automation you construct use your own version of the configuration file rather than downloading that URL each time.
+
+#### Azure CLI sample
+
+```azurecli
+# Set your Azure VM diagnostic variables correctly below
+my_resource_group=<your_azure_resource_group_name_containing_your_azure_linux_vm>
+my_linux_vm=<your_azure_linux_vm_name>
+my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnostic_data>
+
+# Sign in to Azure before doing anything else
+az login
+
+# Select the subscription containing the storage account
+az account set --subscription <your_azure_subscription_id>
+
+# Download the sample Public settings. (You could also use curl or any web browser)
+wget https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json -O portal_public_settings.json
+
+# Build the VM resource ID. Replace storage account name and resource ID in the public settings.
+my_vm_resource_id=$(az vm show -g $my_resource_group -n $my_linux_vm --query "id" -o tsv)
+sed -i "s#__DIAGNOSTIC_STORAGE_ACCOUNT__#$my_diagnostic_storage_account#g" portal_public_settings.json
+sed -i "s#__VM_RESOURCE_ID__#$my_vm_resource_id#g" portal_public_settings.json
+
+# Build the protected settings (storage account SAS token)
+my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv)
+my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}"
+
+# Finally tell Azure to install and enable the extension
+az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group $my_resource_group --vm-name $my_linux_vm --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
+```
+#### Azure CLI sample for installing the LAD 3.0 extension on a virtual machine scale set
+
+```azurecli
+# Set your Azure VMSS diagnostic variables correctly below
+my_resource_group=<your_azure_resource_group_name_containing_your_azure_linux_vm>
+my_linux_vmss=<your_azure_linux_vmss_name>
+my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnostic_data>
+
+# Sign in to Azure before doing anything else
+az login
+
+# Select the subscription containing the storage account
+az account set --subscription <your_azure_subscription_id>
+
+# Download the sample Public settings. (You could also use curl or any web browser)
+wget https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json -O portal_public_settings.json
+
+# Build the VMSS resource ID. Replace storage account name and resource ID in the public settings.
+my_vmss_resource_id=$(az vmss show -g $my_resource_group -n $my_linux_vmss --query "id" -o tsv)
+sed -i "s#__DIAGNOSTIC_STORAGE_ACCOUNT__#$my_diagnostic_storage_account#g" portal_public_settings.json
+sed -i "s#__VM_RESOURCE_ID__#$my_vmss_resource_id#g" portal_public_settings.json
+
+# Build the protected settings (storage account SAS token)
+my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv)
+my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}"
+
+# Finally tell Azure to install and enable the extension
+az vmss extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group $my_resource_group --vmss-name $my_linux_vmss --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
+```
+
+#### PowerShell sample
+
+```powershell
+$storageAccountName = "yourStorageAccountName"
+$storageAccountResourceGroup = "yourStorageAccountResourceGroupName"
+$vmName = "yourVMName"
+$VMresourceGroup = "yourVMResourceGroupName"
+
+# Get the VM object
+$vm = Get-AzVM -Name $vmName -ResourceGroupName $VMresourceGroup
+
+# Get the public settings template from GitHub and update the templated values for storage account and resource ID
+$publicSettings = (Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json).Content
+$publicSettings = $publicSettings.Replace('__DIAGNOSTIC_STORAGE_ACCOUNT__', $storageAccountName)
+$publicSettings = $publicSettings.Replace('__VM_RESOURCE_ID__', $vm.Id)
+
+# If you have your own customized public settings, you can inline those rather than using the template above: $publicSettings = '{"ladCfg": { ... },}'
+
+# Generate a SAS token for the agent to use to authenticate with the storage account
+$sasToken = New-AzStorageAccountSASToken -Service Blob,Table -ResourceType Service,Container,Object -Permission "racwdlup" -Context (Get-AzStorageAccount -ResourceGroupName $storageAccountResourceGroup -AccountName $storageAccountName).Context -ExpiryTime $([System.DateTime]::Now.AddYears(10))
+
+# Build the protected settings (storage account SAS token)
+$protectedSettings="{'storageAccountName': '$storageAccountName', 'storageAccountSasToken': '$sasToken'}"
+
+# Finally install the extension with the settings built above
+Set-AzVMExtension -ResourceGroupName $VMresourceGroup -VMName $vmName -Location $vm.Location -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 3.0
+```
+
+### Updating the extension settings
+
+After you've changed your Protected or Public settings, deploy them to the VM by running the same command. If anything changed in the settings, the updated settings are sent to the extension. LAD reloads the configuration and restarts itself.
+
+### Migration from previous versions of the extension
+
+The latest version of the extension is **3.0**. **Any old versions (2.x) are deprecated and may be unpublished on or after July 31, 2018**.
+
+> [!IMPORTANT]
+> This extension introduces breaking changes to the configuration of the extension. One such change was made to improve the security of the extension; as a result, backwards compatibility with 2.x could not be maintained. Also, the Extension Publisher for this extension is different than the publisher for the 2.x versions.
+>
+> To migrate from 2.x to this new version of the extension, you must uninstall the old extension (under the old publisher name), then install version 3 of the extension.
+
+Recommendations:
+
+* Install the extension with automatic minor version upgrade enabled.
+ * On classic deployment model VMs, specify '3.*' as the version if you are installing the extension through Azure XPLAT CLI or PowerShell.
+ * On Azure Resource Manager deployment model VMs, include '"autoUpgradeMinorVersion": true' in the VM deployment template.
+* Use a new/different storage account for LAD 3.0. There are several small incompatibilities between LAD 2.3 and LAD 3.0 that make sharing an account troublesome:
+ * LAD 3.0 stores syslog events in a table with a different name.
+ * The counterSpecifier strings for `builtin` metrics differ in LAD 3.0.
+
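A sketch of the uninstall-then-reinstall flow using the Azure CLI, with placeholder resource group and VM names; the extension instance name of an existing 2.x install may differ, so check it first with `az vm extension list`:

```azurecli
# 1. Remove the old 2.x extension (published under the old publisher name).
az vm extension delete \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --name LinuxDiagnostic

# 2. Install version 3.0 from the Microsoft.Azure.Diagnostics publisher,
#    using settings files as described later in this article.
az vm extension set \
    --publisher Microsoft.Azure.Diagnostics \
    --name LinuxDiagnostic \
    --version 3.0 \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --protected-settings ProtectedSettings.json \
    --settings PublicSettings.json
```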
+## Protected settings
+
+This set of configuration information contains sensitive information that should be protected from public view, for example, storage credentials. These settings are transmitted to and stored by the extension in encrypted form.
+
+```json
+{
+ "storageAccountName" : "the storage account to receive data",
+ "storageAccountEndPoint": "the hostname suffix for the cloud for this account",
+ "storageAccountSasToken": "SAS access token",
+ "mdsdHttpProxy": "HTTP proxy settings",
+ "sinksConfig": { ... }
+}
+```
+
+Name | Value
+- | --
+storageAccountName | The name of the storage account in which data is written by the extension.
+storageAccountEndPoint | (optional) The endpoint identifying the cloud in which the storage account exists. If this setting is absent, LAD defaults to the Azure public cloud, `https://core.windows.net`. To use a storage account in Azure Germany, Azure Government, or Azure China, set this value accordingly.
+storageAccountSasToken | An [Account SAS token](https://azure.microsoft.com/blog/sas-update-account-sas-now-supports-all-storage-services/) for Blob and Table services (`ss='bt'`), applicable to containers and objects (`srt='co'`), which grants add, create, list, update, and write permissions (`sp='acluw'`). Do *not* include the leading question-mark (?).
+mdsdHttpProxy | (optional) HTTP proxy information needed to enable the extension to connect to the specified storage account and endpoint.
+sinksConfig | (optional) Details of alternative destinations to which metrics and events can be delivered. The specific details of each data sink supported by the extension are covered in the sections that follow.
+
+To get a SAS token within a Resource Manager template, use the **listAccountSas** function. For an example template, see [List function example](../../azure-resource-manager/templates/template-functions-resource.md#list-example).
+
+You can easily construct the required SAS token through the Azure portal.
+
+1. Select the general-purpose storage account to which you want the extension to write
+1. Select "Shared access signature" from the Settings part of the left menu
+1. Make the appropriate selections as previously described
+1. Click the "Generate SAS" button.
+
+Copy the generated SAS into the storageAccountSasToken field; remove the leading question-mark ("?").
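If you prefer the CLI to the portal, the sample installation earlier in this article already shows the equivalent command; it is repeated here as a standalone sketch with a placeholder account name and expiry:

```azurecli
# Generate an account SAS for Blob and Table services (ss=bt), container
# and object resource types (srt=co), with write/list/add/create/update
# permissions (sp=wlacu). Strip any leading '?' before use.
az storage account generate-sas \
    --account-name mydiagstorageaccount \
    --services bt \
    --resource-types co \
    --permissions wlacu \
    --expiry 2037-12-31T23:59:00Z \
    --output tsv
```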
+
+### sinksConfig
+
+```json
+"sinksConfig": {
+ "sink": [
+ {
+ "name": "sinkname",
+ "type": "sinktype",
+ ...
+ },
+ ...
+ ]
+},
+```
+
+This optional section defines additional destinations to which the extension sends the information it collects. The "sink" array contains an object for each additional data sink. The "type" attribute determines the other attributes in the object.
+
+Element | Value
+- | --
+name | A string used to refer to this sink elsewhere in the extension configuration.
+type | The type of sink being defined. Determines the other values (if any) in instances of this type.
+
+Version 3.0 of the Linux Diagnostic Extension supports two sink types: EventHub and JsonBlob.
+
+#### The EventHub sink
+
+```json
+"sink": [
+ {
+ "name": "sinkname",
+ "type": "EventHub",
+ "sasURL": "https SAS URL"
+ },
+ ...
+]
+```
+
+The "sasURL" entry contains the full URL, including SAS token, for the Event Hub to which data should be published. LAD requires a SAS naming a policy that enables the Send claim. An example:
+
+* Create an Event Hubs namespace called `contosohub`
+* Create an Event Hub in the namespace called `syslogmsgs`
+* Create a Shared access policy on the Event Hub named `writer` that enables the Send claim
+
+If you created a SAS good until midnight UTC on January 1, 2018, the sasURL value might be:
+
+```https
+https://contosohub.servicebus.windows.net/syslogmsgs?sr=contosohub.servicebus.windows.net%2fsyslogmsgs&sig=xxxxxxxxxxxxxxxxxxxxxxxxx&se=1514764800&skn=writer
+```
+
+For more information about generating and retrieving information on SAS tokens for Event Hubs, see [this web page](/rest/api/eventhub/generate-sas-token#powershell).
+
+#### The JsonBlob sink
+
+```json
+"sink": [
+ {
+ "name": "sinkname",
+ "type": "JsonBlob"
+ },
+ ...
+]
+```
+
+Data directed to a JsonBlob sink is stored in blobs in Azure storage. Each instance of LAD creates a blob every hour for each sink name. Each blob always contains a syntactically valid JSON array of objects. New entries are atomically added to the array. Blobs are stored in a container with the same name as the sink. The Azure storage rules for blob container names apply to the names of JsonBlob sinks: between 3 and 63 lower-case alphanumeric ASCII characters or dashes.
+
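For example, for a hypothetical sink named `myjsonsink`, you could confirm that the hourly blobs are being written with a listing like this (the storage account name is a placeholder):

```azurecli
# Each JsonBlob sink writes hourly blobs to a container named after the sink.
az storage blob list \
    --account-name mydiagstorageaccount \
    --container-name myjsonsink \
    --query "[].name" --output tsv
```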
+## Public settings
+
+This structure contains various blocks of settings that control the information collected by the extension. Each setting is optional. If you specify `ladCfg`, you must also specify `StorageAccount`.
+
+```json
+{
+ "ladCfg": { ... },
+ "perfCfg": { ... },
+ "fileLogs": { ... },
+ "StorageAccount": "the storage account to receive data",
+ "mdsdHttpProxy" : ""
+}
+```
+
+Element | Value
+- | --
+StorageAccount | The name of the storage account in which data is written by the extension. Must be the same name as is specified in the [Protected settings](#protected-settings).
+mdsdHttpProxy | (optional) Same as in the [Protected settings](#protected-settings). The public value is overridden by the private value, if set. Place proxy settings that contain a secret, such as a password, in the [Protected settings](#protected-settings).
+
+The remaining elements are described in detail in the following sections.
+
+### ladCfg
+
+```json
+"ladCfg": {
+ "diagnosticMonitorConfiguration": {
+ "eventVolume": "Medium",
+ "metrics": { ... },
+ "performanceCounters": { ... },
+ "syslogEvents": { ... }
+ },
+ "sampleRateInSeconds": 15
+}
+```
+
+This optional structure controls the gathering of metrics and logs for delivery to the Azure Metrics service and to other data sinks. You must specify either `performanceCounters` or `syslogEvents` or both. You must specify the `metrics` structure.
+
+Element | Value
+- | --
+eventVolume | (optional) Controls the number of partitions created within the storage table. Must be one of `"Large"`, `"Medium"`, or `"Small"`. If not specified, the default value is `"Medium"`.
+sampleRateInSeconds | (optional) The default interval between collection of raw (unaggregated) metrics. The smallest supported sample rate is 15 seconds. If not specified, the default value is `15`.
+
+#### metrics
+
+```json
+"metrics": {
+ "resourceId": "/subscriptions/...",
+ "metricAggregation" : [
+ { "scheduledTransferPeriod" : "PT1H" },
+ { "scheduledTransferPeriod" : "PT5M" }
+ ]
+}
+```
+
+Element | Value
+- | --
+resourceId | The Azure Resource Manager resource ID of the VM or of the virtual machine scale set to which the VM belongs. This setting must be also specified if any JsonBlob sink is used in the configuration.
+scheduledTransferPeriod | The frequency at which aggregate metrics are to be computed and transferred to Azure Metrics, expressed as an ISO 8601 time interval. The smallest transfer period is 60 seconds, that is, PT1M. You must specify at least one scheduledTransferPeriod.
+
+Samples of the metrics specified in the performanceCounters section are collected every 15 seconds or at the sample rate explicitly defined for the counter. If multiple scheduledTransferPeriod frequencies appear (as in the example), each aggregation is computed independently.
+
+#### performanceCounters
+
+```json
+"performanceCounters": {
+ "sinks": "",
+ "performanceCounterConfiguration": [
+ {
+ "type": "builtin",
+ "class": "Processor",
+ "counter": "PercentIdleTime",
+ "counterSpecifier": "/builtin/Processor/PercentIdleTime",
+ "condition": "IsAggregate=TRUE",
+ "sampleRate": "PT15S",
+ "unit": "Percent",
+ "annotation": [
+ {
+ "displayName" : "Aggregate CPU %idle time",
+ "locale" : "en-us"
+ }
+ ]
+ }
+ ]
+}
+```
+
+This optional section controls the collection of metrics. Raw samples are aggregated for each [scheduledTransferPeriod](#metrics) to produce these values:
+
+* mean
+* minimum
+* maximum
+* last-collected value
+* count of raw samples used to compute the aggregate
+
+Element | Value
+- | --
+sinks | (optional) A comma-separated list of names of sinks to which LAD sends aggregated metric results. All aggregated metrics are published to each listed sink. See [sinksConfig](#sinksconfig). Example: `"EHsink1, myjsonsink"`.
+type | Identifies the actual provider of the metric.
+class | Together with "counter", identifies the specific metric within the provider's namespace.
+counter | Together with "class", identifies the specific metric within the provider's namespace.
+counterSpecifier | Identifies the specific metric within the Azure Metrics namespace.
+condition | (optional) Selects a specific instance of the object to which the metric applies or selects the aggregation across all instances of that object. For more information, see the `builtin` metric definitions.
+sampleRate | ISO 8601 interval that sets the rate at which raw samples for this metric are collected. If not set, the collection interval is set by the value of [sampleRateInSeconds](#ladcfg). The shortest supported sample rate is 15 seconds (PT15S).
+unit | Should be one of these strings: "Count", "Bytes", "Seconds", "Percent", "CountPerSecond", "BytesPerSecond", "Millisecond". Defines the unit for the metric. Consumers of the collected data expect the collected data values to match this unit. LAD ignores this field.
+displayName | The label (in the language specified by the associated locale setting) to be attached to this data in Azure Metrics. LAD ignores this field.
+
+The counterSpecifier is an arbitrary identifier. Consumers of metrics, like the Azure portal charting and alerting feature, use counterSpecifier as the "key" that identifies a metric or an instance of a metric. For `builtin` metrics, we recommend you use counterSpecifier values that begin with `/builtin/`. If you are collecting a specific instance of a metric, we recommend you attach the identifier of the instance to the counterSpecifier value. Some examples:
+
+* `/builtin/Processor/PercentIdleTime` - Idle time averaged across all vCPUs
+* `/builtin/Disk/FreeSpace(/mnt)` - Free space for the /mnt filesystem
+* `/builtin/Disk/FreeSpace` - Free space averaged across all mounted filesystems
+
+Neither LAD nor the Azure portal expects the counterSpecifier value to match any pattern. Be consistent in how you construct counterSpecifier values.
+
+When you specify `performanceCounters`, LAD always writes data to a table in Azure storage. You can have the same data written to JSON blobs and/or Event Hubs, but you cannot disable storing data to a table. All instances of the diagnostic extension configured to use the same storage account name and endpoint add their metrics and logs to the same table. If too many VMs are writing to the same table partition, Azure can throttle writes to that partition. The eventVolume setting causes entries to be spread across 1 (Small), 10 (Medium), or 100 (Large) different partitions. Usually, "Medium" is sufficient to ensure traffic is not throttled. The Azure Metrics feature of the Azure portal uses the data in this table to produce graphs or to trigger alerts. The table name is the concatenation of these strings:
+
+* `WADMetrics`
+* The "scheduledTransferPeriod" for the aggregated values stored in the table
+* `P10DV2S`
+* A date, in the form "YYYYMMDD", which changes every 10 days
+
+Examples include `WADMetricsPT1HP10DV2S20170410` and `WADMetricsPT1MP10DV2S20170609`.
+
+#### syslogEvents
+
+```json
+"syslogEvents": {
+ "sinks": "",
+ "syslogEventConfiguration": {
+ "facilityName1": "minSeverity",
+ "facilityName2": "minSeverity",
+ ...
+ }
+}
+```
+
+This optional section controls the collection of log events from syslog. If the section is omitted, syslog events are not captured at all.
+
+The syslogEventConfiguration collection has one entry for each syslog facility of interest. If minSeverity is "NONE" for a particular facility, or if that facility does not appear in the element at all, no events from that facility are captured.
+
+Element | Value
+- | --
+sinks | A comma-separated list of names of sinks to which individual log events are published. All log events matching the restrictions in syslogEventConfiguration are published to each listed sink. Example: "EHforsyslog"
+facilityName | A syslog facility name (such as "LOG\_USER" or "LOG\_LOCAL0"). See the "facility" section of the [syslog man page](http://man7.org/linux/man-pages/man3/syslog.3.html) for the full list.
+minSeverity | A syslog severity level (such as "LOG\_ERR" or "LOG\_INFO"). See the "level" section of the [syslog man page](http://man7.org/linux/man-pages/man3/syslog.3.html) for the full list. The extension captures events sent to the facility at or above the specified level.
+
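For example, this hypothetical configuration captures user-facility events at LOG_INFO severity and above, local0-facility events at LOG_ERR and above, and publishes each captured event to a sink named `EHforsyslog` (which would be defined in sinksConfig):

```json
"syslogEvents": {
    "sinks": "EHforsyslog",
    "syslogEventConfiguration": {
        "LOG_USER": "LOG_INFO",
        "LOG_LOCAL0": "LOG_ERR"
    }
}
```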
+When you specify `syslogEvents`, LAD always writes data to a table in Azure storage. You can have the same data written to JSON blobs and/or Event Hubs, but you cannot disable storing data to a table. The partitioning behavior for this table is the same as described for `performanceCounters`. The table name is the concatenation of these strings:
+
+* `LinuxSyslog`
+* A date, in the form "YYYYMMDD", which changes every 10 days
+
+Examples include `LinuxSyslog20170410` and `LinuxSyslog20170609`.
+
+### perfCfg
+
+This optional section controls execution of arbitrary [OMI](https://github.com/Microsoft/omi) queries.
+
+```json
+"perfCfg": [
+ {
+ "namespace": "root/scx",
+ "query": "SELECT PercentAvailableMemory, PercentUsedSwap FROM SCX_MemoryStatisticalInformation",
+ "table": "LinuxOldMemory",
+ "frequency": 300,
+ "sinks": ""
+ }
+]
+```
+
+Element | Value
+- | --
+namespace | (optional) The OMI namespace within which the query should be executed. If unspecified, the default value is "root/scx", implemented by the [System Center Cross-platform Providers](https://github.com/Microsoft/SCXcore).
+query | The OMI query to be executed.
+table | (optional) The Azure storage table, in the designated storage account (see [Protected settings](#protected-settings)).
+frequency | (optional) The number of seconds between execution of the query. Default value is 300 (5 minutes); minimum value is 15 seconds.
+sinks | (optional) A comma-separated list of names of additional sinks to which raw sample metric results should be published. No aggregation of these raw samples is computed by the extension or by Azure Metrics.
+
+Either "table" or "sinks", or both, must be specified.
+
+### fileLogs
+
+Controls the capture of log files. LAD captures new text lines as they are written to the file and writes them to table rows and/or any specified sinks (JsonBlob or EventHub).
+
+> [!NOTE]
+> fileLogs are captured by a subcomponent of LAD called `omsagent`. In order to collect fileLogs, you must ensure that the `omsagent` user has read permissions on the files you specify, as well as execute permissions on all directories in the path to that file. You can check this by running `sudo su omsagent -c 'cat /path/to/file'` after LAD is installed.
+
+```json
+"fileLogs": [
+ {
+ "file": "/var/log/mydaemonlog",
+ "table": "MyDaemonEvents",
+ "sinks": ""
+ }
+]
+```
+
+Element | Value
+- | --
+file | The full pathname of the log file to be watched and captured. The pathname must name a single file; it cannot name a directory or contain wildcards. The 'omsagent' user account must have read access to the file path.
+table | (optional) The Azure storage table, in the designated storage account (as specified in the protected configuration), into which new lines from the "tail" of the file are written.
+sinks | (optional) A comma-separated list of names of additional sinks to which log lines are sent.
+
+Either "table" or "sinks", or both, must be specified.
+
+## Metrics supported by the builtin provider
+
+The builtin metric provider is a source of metrics most interesting to a broad set of users. These metrics fall into five broad classes:
+
+* Processor
+* Memory
+* Network
+* Filesystem
+* Disk
+
+### builtin metrics for the Processor class
+
+The Processor class of metrics provides information about processor usage in the VM. When aggregating percentages, the result is the average across all CPUs. In a two-vCPU VM, if one vCPU was 100% busy and the other was 100% idle, the reported PercentIdleTime would be 50. If each vCPU was 50% busy for the same period, the reported result would also be 50. In a four-vCPU VM, with one vCPU 100% busy and the others idle, the reported PercentIdleTime would be 75.
+
+counter | Meaning
+- | -
+PercentIdleTime | Percentage of time during the aggregation window that processors were executing the kernel idle loop
+PercentProcessorTime | Percentage of time executing a non-idle thread
+PercentIOWaitTime | Percentage of time waiting for IO operations to complete
+PercentInterruptTime | Percentage of time executing hardware/software interrupts and DPCs (deferred procedure calls)
+PercentUserTime | Of non-idle time during the aggregation window, the percentage of time spent in user mode at normal priority
+PercentNiceTime | Of non-idle time, the percentage spent at lowered (nice) priority
+PercentPrivilegedTime | Of non-idle time, the percentage spent in privileged (kernel) mode
+
+The first four counters should sum to 100%. The last three counters also sum to 100%; they subdivide the sum of PercentProcessorTime, PercentIOWaitTime, and PercentInterruptTime.
+
+To obtain a single metric aggregated across all processors, set `"condition": "IsAggregate=TRUE"`. To obtain a metric for a specific processor, such as the second logical processor of a four-vCPU VM, set `"condition": "Name=\\"1\\""`. Logical processor numbers are in the range `[0..n-1]`.
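Putting that together, a performanceCounterConfiguration entry for the second logical processor might look like the following sketch; the `_1` suffix on counterSpecifier is only an illustrative naming convention, per the guidance earlier in this article:

```json
{
    "type": "builtin",
    "class": "Processor",
    "counter": "PercentProcessorTime",
    "counterSpecifier": "/builtin/Processor/PercentProcessorTime_1",
    "condition": "Name=\"1\"",
    "sampleRate": "PT15S",
    "unit": "Percent",
    "annotation": [
        {
            "displayName": "CPU 1 %busy time",
            "locale": "en-us"
        }
    ]
}
```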
+
+### builtin metrics for the Memory class
+
+The Memory class of metrics provides information about memory utilization, paging, and swapping.
+
+counter | Meaning
+- | -
+AvailableMemory | Available physical memory in MiB
+PercentAvailableMemory | Available physical memory as a percent of total memory
+UsedMemory | In-use physical memory (MiB)
+PercentUsedMemory | In-use physical memory as a percent of total memory
+PagesPerSec | Total paging (read/write)
+PagesReadPerSec | Pages read from backing store (swap file, program file, mapped file, etc.)
+PagesWrittenPerSec | Pages written to backing store (swap file, mapped file, etc.)
+AvailableSwap | Unused swap space (MiB)
+PercentAvailableSwap | Unused swap space as a percentage of total swap
+UsedSwap | In-use swap space (MiB)
+PercentUsedSwap | In-use swap space as a percentage of total swap
+
+This class of metrics has only a single instance. The "condition" attribute has no useful settings and should be omitted.
+
+### builtin metrics for the Network class
+
+The Network class of metrics provides information about network activity on an individual network interface since boot. LAD does not expose bandwidth metrics, which can be retrieved from host metrics.
+
+counter | Meaning
+- | -
+BytesTransmitted | Total bytes sent since boot
+BytesReceived | Total bytes received since boot
+BytesTotal | Total bytes sent or received since boot
+PacketsTransmitted | Total packets sent since boot
+PacketsReceived | Total packets received since boot
+TotalRxErrors | Number of receive errors since boot
+TotalTxErrors | Number of transmit errors since boot
+TotalCollisions | Number of collisions reported by the network ports since boot
+
+Although this class is instanced, LAD does not support capturing Network metrics aggregated across all network devices. To obtain the metrics for a specific interface, such as eth0, set `"condition": "InstanceID=\\"eth0\\""`.
+
+### builtin metrics for the Filesystem class
+
+The Filesystem class of metrics provides information about filesystem usage. Absolute and percentage values are reported as they'd be displayed to an ordinary user (not root).
+
+counter | Meaning
+- | -
+FreeSpace | Available disk space in bytes
+UsedSpace | Used disk space in bytes
+PercentFreeSpace | Percentage free space
+PercentUsedSpace | Percentage used space
+PercentFreeInodes | Percentage of unused inodes
+PercentUsedInodes | Percentage of allocated (in use) inodes summed across all filesystems
+BytesReadPerSecond | Bytes read per second
+BytesWrittenPerSecond | Bytes written per second
+BytesPerSecond | Bytes read or written per second
+ReadsPerSecond | Read operations per second
+WritesPerSecond | Write operations per second
+TransfersPerSecond | Read or write operations per second
+
+Aggregated values across all file systems can be obtained by setting `"condition": "IsAggregate=True"`. Values for a specific mounted file system, such as "/mnt", can be obtained by setting `"condition": 'Name="/mnt"'`.
+
+> [!NOTE]
+> If using the Azure portal instead of JSON, the correct condition field form is Name='/mnt'
+
+### builtin metrics for the Disk class
+
+The Disk class of metrics provides information about disk device usage. These statistics apply to the entire drive. If there are multiple file systems on a device, the counters for that device are, effectively, aggregated across all of them.
+
+counter | Meaning
+- | -
+ReadsPerSecond | Read operations per second
+WritesPerSecond | Write operations per second
+TransfersPerSecond | Total operations per second
+AverageReadTime | Average seconds per read operation
+AverageWriteTime | Average seconds per write operation
+AverageTransferTime | Average seconds per operation
+AverageDiskQueueLength | Average number of queued disk operations
+ReadBytesPerSecond | Number of bytes read per second
+WriteBytesPerSecond | Number of bytes written per second
+BytesPerSecond | Number of bytes read or written per second
+
+Aggregated values across all disks can be obtained by setting `"condition": "IsAggregate=True"`. To get information for a specific device (for example, /dev/sdf1), set `"condition": "Name=\\"/dev/sdf1\\""`.
+
+## Installing and configuring LAD 3.0
+
+### Azure CLI
+
+Assuming your protected settings are in the file ProtectedSettings.json and your public configuration information is in PublicSettings.json, run this command:
+
+```azurecli
+az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group <resource_group_name> --vm-name <vm_name> --protected-settings ProtectedSettings.json --settings PublicSettings.json
+```
+
+The command assumes you are using the Azure Resource Management mode of the Azure CLI. To configure LAD for classic deployment model (ASM) VMs, switch to "asm" mode (`azure config mode asm`) and omit the resource group name in the command. For more information, see the [cross-platform CLI documentation](/cli/azure/authenticate-azure-cli).
+
+### PowerShell
+
+Assuming your protected settings are in the `$protectedSettings` variable and your public configuration information is in the `$publicSettings` variable, run this command:
+
+```powershell
+Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> -Location <vm_location> -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 3.0
+```
+
+## An example LAD 3.0 configuration
+
+Based on the preceding definitions, here's a sample LAD 3.0 extension configuration with some explanation. To apply this sample to your case, you should use your own storage account name, account SAS token, and EventHubs SAS tokens.
+
+> [!NOTE]
+> Depending on whether you use the Azure CLI or PowerShell to install LAD, the method for providing public and protected settings will differ. If using the Azure CLI, save the following settings to ProtectedSettings.json and PublicSettings.json to use with the sample command above. If using PowerShell, save the settings to `$protectedSettings` and `$publicSettings` by running `$protectedSettings = '{ ... }'`.
+
+### Protected settings
+
+These protected settings configure:
+
+* a storage account
+* a matching account SAS token
+* several sinks (JsonBlob or EventHubs with SAS tokens)
+
+```json
+{
+ "storageAccountName": "yourdiagstgacct",
+ "storageAccountSasToken": "sv=xxxx-xx-xx&ss=bt&srt=co&sp=wlacu&st=yyyy-yy-yyT21%3A22%3A00Z&se=zzzz-zz-zzT21%3A22%3A00Z&sig=fake_signature",
+ "sinksConfig": {
+ "sink": [
+ {
+ "name": "SyslogJsonBlob",
+ "type": "JsonBlob"
+ },
+ {
+ "name": "FilelogJsonBlob",
+ "type": "JsonBlob"
+ },
+ {
+ "name": "LinuxCpuJsonBlob",
+ "type": "JsonBlob"
+ },
+ {
+ "name": "MyJsonMetricsBlob",
+ "type": "JsonBlob"
+ },
+ {
+ "name": "LinuxCpuEventHub",
+ "type": "EventHub",
+ "sasURL": "https://youreventhubnamespace.servicebus.windows.net/youreventhubpublisher?sr=https%3a%2f%2fyoureventhubnamespace.servicebus.windows.net%2fyoureventhubpublisher%2f&sig=fake_signature&se=1808096361&skn=yourehpolicy"
+ },
+ {
+ "name": "MyMetricEventHub",
+ "type": "EventHub",
+ "sasURL": "https://youreventhubnamespace.servicebus.windows.net/youreventhubpublisher?sr=https%3a%2f%2fyoureventhubnamespace.servicebus.windows.net%2fyoureventhubpublisher%2f&sig=yourehpolicy&skn=yourehpolicy"
+ },
+ {
+ "name": "LoggingEventHub",
+ "type": "EventHub",
+ "sasURL": "https://youreventhubnamespace.servicebus.windows.net/youreventhubpublisher?sr=https%3a%2f%2fyoureventhubnamespace.servicebus.windows.net%2fyoureventhubpublisher%2f&sig=yourehpolicy&se=1808096361&skn=yourehpolicy"
+ }
+ ]
+ }
+}
+```
+
+### Public Settings
+
+These public settings cause LAD to:
+
+* Upload percent-processor-time and used-disk-space metrics to the `WADMetrics*` table
+* Upload messages from syslog facility "user" and severity "info" to the `LinuxSyslog*` table
+* Upload raw OMI query results (PercentProcessorTime and PercentIdleTime) to the `LinuxCpu` table
+* Upload appended lines in file `/var/log/myladtestlog` to the `MyLadTestLog` table
+
+In each case, data is also uploaded to:
+
+* Azure Blob storage (container name is as defined in the JsonBlob sink)
+* EventHubs endpoint (as specified in the EventHubs sink)
+
+```json
+{
+ "StorageAccount": "yourdiagstgacct",
+ "ladCfg": {
+ "sampleRateInSeconds": 15,
+ "diagnosticMonitorConfiguration": {
+ "performanceCounters": {
+ "sinks": "MyMetricEventHub,MyJsonMetricsBlob",
+ "performanceCounterConfiguration": [
+ {
+ "unit": "Percent",
+ "type": "builtin",
+ "counter": "PercentProcessorTime",
+ "counterSpecifier": "/builtin/Processor/PercentProcessorTime",
+ "annotation": [
+ {
+ "locale": "en-us",
+ "displayName": "Aggregate CPU %utilization"
+ }
+ ],
+ "condition": "IsAggregate=TRUE",
+ "class": "Processor"
+ },
+ {
+ "unit": "Bytes",
+ "type": "builtin",
+ "counter": "UsedSpace",
+ "counterSpecifier": "/builtin/FileSystem/UsedSpace",
+ "annotation": [
+ {
+ "locale": "en-us",
+ "displayName": "Used disk space on /"
+ }
+ ],
+ "condition": "Name=\"/\"",
+ "class": "Filesystem"
+ }
+ ]
+ },
+ "metrics": {
+ "metricAggregation": [
+ {
+ "scheduledTransferPeriod": "PT1H"
+ },
+ {
+ "scheduledTransferPeriod": "PT1M"
+ }
+ ],
+ "resourceId": "/subscriptions/your_azure_subscription_id/resourceGroups/your_resource_group_name/providers/Microsoft.Compute/virtualMachines/your_vm_name"
+ },
+ "eventVolume": "Large",
+ "syslogEvents": {
+ "sinks": "SyslogJsonBlob,LoggingEventHub",
+ "syslogEventConfiguration": {
+ "LOG_USER": "LOG_INFO"
+ }
+ }
+ }
+ },
+ "perfCfg": [
+ {
+ "query": "SELECT PercentProcessorTime, PercentIdleTime FROM SCX_ProcessorStatisticalInformation WHERE Name='_TOTAL'",
+ "table": "LinuxCpu",
+ "frequency": 60,
+ "sinks": "LinuxCpuJsonBlob,LinuxCpuEventHub"
+ }
+ ],
+ "fileLogs": [
+ {
+ "file": "/var/log/myladtestlog",
+ "table": "MyLadTestLog",
+ "sinks": "FilelogJsonBlob,LoggingEventHub"
+ }
+ ]
+}
+```
+
+The `resourceId` in the configuration must match that of the VM or the virtual machine scale set.
+
+* Azure platform metrics charting and alerting knows the resourceId of the VM you're working on. It expects to find the data for your VM using the resourceId as the lookup key.
+* If you use Azure autoscale, the resourceId in the autoscale configuration must match the resourceId used by LAD.
+* The resourceId is built into the names of JsonBlobs written by LAD.
+
+## View your data
+
+Use the Azure portal to view performance data or set alerts:
+
+The `performanceCounters` data are always stored in an Azure Storage table. Azure Storage APIs are available for many languages and platforms.
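As a quick sketch, you can verify that LAD is writing aggregated metrics by listing the `WADMetrics*` tables in the diagnostic storage account (placeholder name):

```azurecli
# List the WADMetrics tables created by LAD.
az storage table list \
    --account-name mydiagstorageaccount \
    --query "[?starts_with(name, 'WADMetrics')].name" \
    --output tsv
```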
+
+Data sent to JsonBlob sinks is stored in blobs in the storage account named in the [Protected settings](#protected-settings). You can consume the blob data using any Azure Blob Storage APIs.
+
+In addition, you can use these UI tools to access the data in Azure Storage:
+
+* Visual Studio Server Explorer.
+* [Microsoft Azure Storage Explorer](https://azurestorageexplorer.codeplex.com/ "Azure Storage Explorer").
+
+This snapshot of a Microsoft Azure Storage Explorer session shows the generated Azure Storage tables and containers from a correctly configured LAD 3.0 extension on a test VM. The image doesn't match exactly with the [sample LAD 3.0 configuration](#an-example-lad-30-configuration).
+
+See the relevant [EventHubs documentation](../../event-hubs/event-hubs-about.md) to learn how to consume messages published to an EventHubs endpoint.
+
+## Next steps
+
+* Create metric alerts in [Azure Monitor](../../azure-monitor/platform/alerts-classic-portal.md) for the metrics you collect.
+* Create [monitoring charts](../../azure-monitor/platform/data-platform.md) for your metrics.
+* Learn how to [create a virtual machine scale set](../linux/tutorial-create-vmss.md) using your metrics to control autoscaling.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/diagnostics-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/diagnostics-linux.md
@@ -1,6 +1,6 @@
Title: Azure Compute - Linux Diagnostic Extension
-description: How to configure the Azure Linux Diagnostic Extension (LAD) to collect metrics and log events from Linux VMs running in Azure.
+ Title: Azure Compute - Linux Diagnostic Extension 4.0
+description: How to configure the Azure Linux Diagnostic Extension (LAD) 4.0 to collect metrics and log events from Linux VMs running in Azure.
@@ -8,14 +8,15 @@
vm-linux Previously updated : 12/13/2018 Last updated : 02/05/2021
-# Use Linux Diagnostic Extension to monitor metrics and logs
+# Use Linux Diagnostic Extension 4.0 to monitor metrics and logs
-This document describes version 3.0 and newer of the Linux Diagnostic Extension.
+This document describes version 4.0 and newer of the Linux Diagnostic Extension.
> [!IMPORTANT]
+> For information about version 3.*, see [this document](https://docs.microsoft.com/azure/virtual-machines/extensions/diagnostics-linux-v3).
> For information about version 2.3 and older, see [this document](/previous-versions/azure/virtual-machines/linux/classic/diagnostic-extension-v2). ## Introduction
@@ -38,10 +39,11 @@ You can enable this extension by using the Azure PowerShell cmdlets, Azure CLI s
>[!NOTE] >Certain components of the Diagnostics VM extension are also shipped in the [Log Analytics VM extension](./oms-linux.md). Due to this architecture, conflicts can arise if both extensions are instantiated in the same ARM template. To avoid these install-time conflicts, use the [`dependsOn` directive](../../azure-resource-manager/templates/define-resource-dependency.md#dependson) to ensure the extensions are installed sequentially. The extensions can be installed in either order.
-These installation instructions and a [downloadable sample configuration](https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json) configure LAD 3.0 to:
+These installation instructions and a [downloadable sample configuration](https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json) configure LAD 4.0 to:
-* capture and store the same metrics as were provided by LAD 2.3;
-* capture a useful set of file system metrics, new to LAD 3.0;
+* capture and store the same metrics that LAD 2.3 and 3.x provided;
+* send metrics to the Azure Monitor sink, in addition to the usual sink to Azure Storage (new in LAD 4.0);
+* capture the useful set of file system metrics that LAD 3.0 provided;
* capture the default syslog collection enabled by LAD 2.3; * enable the Azure portal experience for charting and alerting on VM metrics.
@@ -100,6 +102,9 @@ The python2 executable must be aliased to *python*. Following is one method that
The sample configuration downloaded in these examples collects a set of standard data and sends them to table storage. The URL for the sample configuration and its contents are subject to change. In most cases, you should download a copy of the portal settings JSON file and customize it for your needs, then have any templates or automation you construct use your own version of the configuration file rather than downloading that URL each time.
+> [!NOTE]
+> To enable the new Azure Monitor sink, the VMs need system-assigned identity enabled for MSI auth token generation. You can enable it during VM creation or after the VM has been created. Steps for enabling system-assigned identity through the portal, CLI, PowerShell, and Resource Manager are listed in detail [here](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm).
+ #### Azure CLI sample ```azurecli
@@ -114,6 +119,9 @@ az login
# Select the subscription containing the storage account az account set --subscription <your_azure_subscription_id>
+# Enable System Assigned Identity to the existing VM
+az vm identity assign -g $my_resource_group -n $my_linux_vm
+ # Download the sample Public settings. (You could also use curl or any web browser) wget https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json -O portal_public_settings.json
@@ -126,10 +134,10 @@ sed -i "s#__VM_RESOURCE_ID__#$my_vm_resource_id#g" portal_public_settings.json
my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv) my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}"
-# Finallly tell Azure to install and enable the extension
-az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group $my_resource_group --vm-name $my_linux_vm --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
+# Finally tell Azure to install and enable the extension
+az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 4.0 --resource-group $my_resource_group --vm-name $my_linux_vm --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
```
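+
+To confirm the deployment succeeded, you can check the extension's provisioning state; a minimal sketch reusing the same variables:
+
+```azurecli
+# Show the provisioning state of the LinuxDiagnostic extension on the VM
+az vm extension show --resource-group $my_resource_group --vm-name $my_linux_vm \
+    --name LinuxDiagnostic --query "provisioningState" -o tsv
+```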
-#### Azure CLI sample for Installing LAD 3.0 extension on the VMSS instance
+#### Azure CLI sample for installing the LAD 4.0 extension on a virtual machine scale set instance
```azurecli #Set your Azure VMSS diagnostic variables correctly below
@@ -143,6 +151,9 @@ az login
# Select the subscription containing the storage account az account set --subscription <your_azure_subscription_id>
+# Enable System Assigned Identity to the existing VMSS
+az vmss identity assign -g $my_resource_group -n $my_linux_vmss
+ # Download the sample Public settings. (You could also use curl or any web browser) wget https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json -O portal_public_settings.json
@@ -156,7 +167,7 @@ $my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --acco
$my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}" # Finally tell Azure to install and enable the extension
-az vmss extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group $my_resource_group --vmss-name $my_linux_vmss --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
+az vmss extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 4.0 --resource-group $my_resource_group --vmss-name $my_linux_vmss --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
``` #### PowerShell sample
@@ -170,6 +181,9 @@ $VMresourceGroup = "yourVMResourceGroupName"
# Get the VM object $vm = Get-AzVM -Name $vmName -ResourceGroupName $VMresourceGroup
+# Enable System Assigned Identity on an existing VM
+Update-AzVM -ResourceGroupName $VMresourceGroup -VM $vm -IdentityType SystemAssigned
+ # Get the public settings template from GitHub and update the templated values for storage account and resource ID $publicSettings = (Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json).Content $publicSettings = $publicSettings.Replace('__DIAGNOSTIC_STORAGE_ACCOUNT__', $storageAccountName)
@@ -184,7 +198,7 @@ $sasToken = New-AzStorageAccountSASToken -Service Blob,Table -ResourceType Servi
$protectedSettings="{'storageAccountName': '$storageAccountName', 'storageAccountSasToken': '$sasToken'}" # Finally install the extension with the settings built above
-Set-AzVMExtension -ResourceGroupName $VMresourceGroup -VMName $vmName -Location $vm.Location -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 3.0
+Set-AzVMExtension -ResourceGroupName $VMresourceGroup -VMName $vmName -Location $vm.Location -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 4.0
``` ### Updating the extension settings
@@ -193,21 +207,17 @@ After you've changed your Protected or Public settings, deploy them to the VM by
### Migration from previous versions of the extension
-The latest version of the extension is **3.0**. **Any old versions (2.x) are deprecated and may be unpublished on or after July 31, 2018**.
+The latest version of the extension is **4.0**, which is currently in public preview. **The 3.x versions are still supported, while the 2.x versions have been deprecated since July 31, 2018**.
> [!IMPORTANT]
-> This extension introduces breaking changes to the configuration of the extension. One such change was made to improve the security of the extension; as a result, backwards compatibility with 2.x could not be maintained. Also, the Extension Publisher for this extension is different than the publisher for the 2.x versions.
->
-> To migrate from 2.x to this new version of the extension, you must uninstall the old extension (under the old publisher name), then install version 3 of the extension.
+> To migrate from 3.x to this new version of the extension, you must uninstall the old extension, then install version 4 of the extension (with the updated configuration for system-assigned identity and the sinks for sending metrics to the Azure Monitor sink).
Recommendations: * Install the extension with automatic minor version upgrade enabled.
- * On classic deployment model VMs, specify '3.*' as the version if you are installing the extension through Azure XPLAT CLI or Powershell.
+ * On classic deployment model VMs, specify '4.*' as the version if you are installing the extension through Azure XPLAT CLI or PowerShell.
* On Azure Resource Manager deployment model VMs, include '"autoUpgradeMinorVersion": true' in the VM deployment template.
-* Use a new/different storage account for LAD 3.0. There are several small incompatibilities between LAD 2.3 and LAD 3.0 that make sharing an account troublesome:
- * LAD 3.0 stores syslog events in a table with a different name.
- * The counterSpecifier strings for `builtin` metrics differ in LAD 3.0.
+* You can use the same storage account for LAD 4.0 as for LAD 3.x.
## Protected settings
@@ -240,7 +250,7 @@ You can easily construct the required SAS token through the Azure portal.
1. Make the appropriate selections as previously described 1. Click the "Generate SAS" button.
-![Screenshot shows the Shared access signature page with Generate S A S.](./media/diagnostics-linux/make_sas.png)
Copy the generated SAS into the storageAccountSasToken field; remove the leading question-mark ("?").
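+
+If you prefer the CLI to the portal, the same token can be produced with the command already used in the installation samples; a token generated this way has no leading question-mark to strip:
+
+```azurecli
+# Generate a SAS token with the services, resource types, and permissions LAD needs
+az storage account generate-sas --account-name $my_diagnostic_storage_account \
+    --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv
+```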
@@ -266,7 +276,7 @@ Element | Value
name | A string used to refer to this sink elsewhere in the extension configuration. type | The type of sink being defined. Determines the other values (if any) in instances of this type.
-Version 3.0 of the Linux Diagnostic Extension supports two sink types: EventHub, and JsonBlob.
+Version 4.0 of the Linux Diagnostic Extension supports two sink types: EventHub, and JsonBlob.
#### The EventHub sink
@@ -311,14 +321,14 @@ Data directed to a JsonBlob sink is stored in blobs in Azure storage. Each insta
## Public settings
-This structure contains various blocks of settings that control the information collected by the extension. Each setting is optional. If you specify `ladCfg`, you must also specify `StorageAccount`.
+This structure contains various blocks of settings that control the information collected by the extension. Each setting (except ladCfg) is optional. If you specify metric or syslog collection in `ladCfg`, you must also specify `StorageAccount`. The `sinksConfig` element must be specified to enable the Azure Monitor sink for metrics in LAD 4.0.
```json { "ladCfg": { ... },
- "perfCfg": { ... },
"fileLogs": { ... }, "StorageAccount": "the storage account to receive data",
+ "sinksConfig": { ... },
"mdsdHttpProxy" : "" } ```
@@ -344,7 +354,15 @@ The remaining elements are described in detail in the following sections.
} ```
-This optional structure controls the gathering of metrics and logs for delivery to the Azure Metrics service and to other data sinks. You must specify either `performanceCounters` or `syslogEvents` or both. You must specify the `metrics` structure.
+This structure controls the gathering of metrics and logs for delivery to the Azure Metrics service and to other data sinks. You must specify either `performanceCounters` or `syslogEvents` or both. You must specify the `metrics` structure.
+
+If you don't want to collect syslog events or metrics, you can specify an empty structure for the `ladCfg` element, as shown below:
+
+```json
+"ladCfg": {
+ "diagnosticMonitorConfiguration": {}
+ }
+```
Element | Value - | --
@@ -462,31 +480,27 @@ When you specify `syslogEvents`, LAD always writes data to a table in Azure stor
Examples include `LinuxSyslog20170410` and `LinuxSyslog20170609`.
-### perfCfg
+### sinksConfig
+
+This optional section enables sending metrics to the Azure Monitor sink, in addition to the storage account and the default guest metrics blade.
-This optional section controls execution of arbitrary [OMI](https://github.com/Microsoft/omi) queries.
+> [!NOTE]
+> This requires system-assigned identity to be enabled on the VMs or virtual machine scale sets.
+> You can enable it through the portal, CLI, PowerShell, or Resource Manager; the steps are listed in detail [here](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm).
+> The steps are also included in the Azure CLI and PowerShell installation samples above.
```json
-"perfCfg": [
- {
- "namespace": "root/scx",
- "query": "SELECT PercentAvailableMemory, PercentUsedSwap FROM SCX_MemoryStatisticalInformation",
- "table": "LinuxOldMemory",
- "frequency": 300,
- "sinks": ""
- }
-]
+ "sinksConfig": {
+ "sink": [
+ {
+ "name": "AzMonSink",
+ "type": "AzMonSink",
+ "AzureMonitor": {}
+ }
+ ]
+ },
```
-Element | Value
-- | --
-namespace | (optional) The OMI namespace within which the query should be executed. If unspecified, the default value is "root/scx", implemented by the [System Center Cross-platform Providers](https://github.com/Microsoft/SCXcore).
-query | The OMI query to be executed.
-table | (optional) The Azure storage table, in the designated storage account (see [Protected settings](#protected-settings)).
-frequency | (optional) The number of seconds between execution of the query. Default value is 300 (5 minutes); minimum value is 15 seconds.
-sinks | (optional) A comma-separated list of names of additional sinks to which raw sample metric results should be published. No aggregation of these raw samples is computed by the extension or by Azure Metrics.
-
-Either "table" or "sinks", or both, must be specified.
### fileLogs
@@ -515,6 +529,9 @@ Either "table" or "sinks", or both, must be specified.
## Metrics supported by the builtin provider
+> [!NOTE]
+> The default metrics supported by LAD are aggregated across all file systems, disks, or names. For non-aggregated metrics, refer to the newer Azure Monitor sink metrics support.
+ The builtin metric provider is a source of metrics most interesting to a broad set of users. These metrics fall into five broad classes: * Processor
@@ -539,8 +556,6 @@ PercentPrivilegedTime | Of non-idle time, the percentage spent in privileged (ke
The first four counters should sum to 100%. The last three counters also sum to 100%; they subdivide the sum of PercentProcessorTime, PercentIOWaitTime, and PercentInterruptTime.
-To obtain a single metric aggregated across all processors, set `"condition": "IsAggregate=TRUE"`. To obtain a metric for a specific processor, such as the second logical processor of a four-vCPU VM, set `"condition": "Name=\\"1\\""`. Logical processor numbers are in the range `[0..n-1]`.
- ### builtin metrics for the Memory class The Memory class of metrics provides information about memory utilization, paging, and swapping.
@@ -563,7 +578,7 @@ This class of metrics has only a single instance. The "condition" attribute has
### builtin metrics for the Network class
-The Network class of metrics provides information about network activity on an individual network interfaces since boot. LAD does not expose bandwidth metrics, which can be retrieved from host metrics.
+The Network class of metrics provides information about network activity on an individual network interface since boot. LAD does not expose bandwidth metrics, which can be retrieved from host metrics.
counter | Meaning - | -
@@ -576,8 +591,6 @@ TotalRxErrors | Number of receive errors since boot
TotalTxErrors | Number of transmit errors since boot TotalCollisions | Number of collisions reported by the network ports since boot
- Although this class is instanced, LAD does not support capturing Network metrics aggregated across all network devices. To obtain the metrics for a specific interface, such as eth0, set `"condition": "InstanceID=\\"eth0\\""`.
- ### builtin metrics for the Filesystem class The Filesystem class of metrics provides information about filesystem usage. Absolute and percentage values are reported as they'd be displayed to an ordinary user (not root).
@@ -597,10 +610,6 @@ ReadsPerSecond | Read operations per second
WritesPerSecond | Write operations per second TransfersPerSecond | Read or write operations per second
-Aggregated values across all file systems can be obtained by setting `"condition": "IsAggregate=True"`. Values for a specific mounted file system, such as "/mnt", can be obtained by setting `"condition": 'Name="/mnt"'`.
-
-**NOTE**: If using the Azure Portal instead of JSON, the correct condition field form is Name='/mnt'
- ### builtin metrics for the Disk class The Disk class of metrics provides information about disk device usage. These statistics apply to the entire drive. If there are multiple file systems on a device, the counters for that device are, effectively, aggregated across all of them.
@@ -618,16 +627,14 @@ ReadBytesPerSecond | Number of bytes read per second
WriteBytesPerSecond | Number of bytes written per second BytesPerSecond | Number of bytes read or written per second
-Aggregated values across all disks can be obtained by setting `"condition": "IsAggregate=True"`. To get information for a specific device (for example, /dev/sdf1), set `"condition": "Name=\\"/dev/sdf1\\""`.
-
-## Installing and configuring LAD 3.0
+## Installing and configuring LAD 4.0
### Azure CLI Assuming your protected settings are in the file ProtectedSettings.json and your public configuration information is in PublicSettings.json, run this command: ```azurecli
-az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group <resource_group_name> --vm-name <vm_name> --protected-settings ProtectedSettings.json --settings PublicSettings.json
+az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 4.0 --resource-group <resource_group_name> --vm-name <vm_name> --protected-settings ProtectedSettings.json --settings PublicSettings.json
``` The command assumes you are using the Azure Resource Management mode of the Azure CLI. To configure LAD for classic deployment model (ASM) VMs, switch to "asm" mode (`azure config mode asm`) and omit the resource group name in the command. For more information, see the [cross-platform CLI documentation](/cli/azure/authenticate-azure-cli).
@@ -637,12 +644,12 @@ The command assumes you are using the Azure Resource Management mode of the Azur
Assuming your protected settings are in the `$protectedSettings` variable and your public configuration information is in the `$publicSettings` variable, run this command: ```powershell
-Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> -Location <vm_location> -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 3.0
+Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> -Location <vm_location> -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 4.0
```
-## An example LAD 3.0 configuration
+## An example LAD 4.0 configuration
-Based on the preceding definitions, here's a sample LAD 3.0 extension configuration with some explanation. To apply this sample to your case, you should use your own storage account name, account SAS token, and EventHubs SAS tokens.
+Based on the preceding definitions, here's a sample LAD 4.0 extension configuration with some explanation. To apply this sample to your case, you should use your own storage account name, account SAS token, and EventHubs SAS tokens.
> [!NOTE] > Depending on whether you use the Azure CLI or PowerShell to install LAD, the method for providing public and protected settings will differ. If using the Azure CLI, save the following settings to ProtectedSettings.json and PublicSettings.json to use with the sample command above. If using PowerShell, save the settings to `$protectedSettings` and `$publicSettings` by running `$protectedSettings = '{ ... }'`.
@@ -703,7 +710,6 @@ These public settings cause LAD to:
* Upload percent-processor-time and used-disk-space metrics to the `WADMetrics*` table * Upload messages from syslog facility "user" and severity "info" to the `LinuxSyslog*` table
-* Upload raw OMI query results (PercentProcessorTime and PercentIdleTime) to the named `LinuxCPU` table
* Upload appended lines in file `/var/log/myladtestlog` to the `MyLadTestLog` table In each case, data is also uploaded to:
@@ -770,14 +776,15 @@ In each case, data is also uploaded to:
} } },
- "perfCfg": [
- {
- "query": "SELECT PercentProcessorTime, PercentIdleTime FROM SCX_ProcessorStatisticalInformation WHERE Name='_TOTAL'",
- "table": "LinuxCpu",
- "frequency": 60,
- "sinks": "LinuxCpuJsonBlob,LinuxCpuEventHub"
- }
- ],
+ "sinksConfig": {
+ "sink": [
+ {
+ "name": "AzMonSink",
+ "type": "AzMonSink",
+ "AzureMonitor": {}
+ }
+ ]
+ },
"fileLogs": [ { "file": "/var/log/myladtestlog",
@@ -798,7 +805,7 @@ The `resourceId` in the configuration must match that of the VM or the virtual m
Use the Azure portal to view performance data or set alerts:
-![Screenshot shows the Azure portal with the Used disk space on metric selected and the resulting chart.](./media/diagnostics-linux/graph_metrics.png)
The `performanceCounters` data are always stored in an Azure Storage table. Azure Storage APIs are available for many languages and platforms.
@@ -809,9 +816,9 @@ In addition, you can use these UI tools to access the data in Azure Storage:
* Visual Studio Server Explorer. * [Azure Storage Explorer](https://azurestorageexplorer.codeplex.com/).
-This snapshot of a Microsoft Azure Storage Explorer session shows the generated Azure Storage tables and containers from a correctly configured LAD 3.0 extension on a test VM. The image doesn't match exactly with the [sample LAD 3.0 configuration](#an-example-lad-30-configuration).
+This snapshot of a Microsoft Azure Storage Explorer session shows the generated Azure Storage tables and containers from a correctly configured LAD 4.0 extension on a test VM. The image doesn't exactly match the [sample LAD 4.0 configuration](#an-example-lad-40-configuration).
-![image](./media/diagnostics-linux/stg_explorer.png)
See the relevant [EventHubs documentation](../../event-hubs/event-hubs-about.md) to learn how to consume messages published to an EventHubs endpoint.
@@ -819,4 +826,4 @@ See the relevant [EventHubs documentation](../../event-hubs/event-hubs-about.md)
* Create metric alerts in [Azure Monitor](../../azure-monitor/platform/alerts-classic-portal.md) for the metrics you collect. * Create [monitoring charts](../../azure-monitor/platform/data-platform.md) for your metrics.
-* Learn how to [create a virtual machine scale set](../linux/tutorial-create-vmss.md) using your metrics to control autoscaling.
+* Learn how to [create a virtual machine scale set](../linux/tutorial-create-vmss.md) using your metrics to control autoscaling.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/key-vault-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/key-vault-windows.md
@@ -219,9 +219,9 @@ The Azure CLI can be used to deploy the Key Vault VM extension to an existing vi
```azurecli # Start the deployment
- az vm extension set -name "KeyVaultForWindows" `
+ az vm extension set --name "KeyVaultForWindows" `
--publisher Microsoft.Azure.KeyVault `
- -resource-group "<resourcegroup>" `
+ --resource-group "<resourcegroup>" `
--vm-name "<vmName>" ` --settings '{\"secretsManagementSettings\": { \"pollingIntervalInS\": \"<pollingInterval>\", \"certificateStoreName\": \"<certStoreName>\", \"certificateStoreLocation\": \"<certStoreLoc>\", \"observedCertificates\": [\" <observedCert1> \", \" <observedCert2> \"] }}' ```
@@ -230,9 +230,9 @@ The Azure CLI can be used to deploy the Key Vault VM extension to an existing vi
```azurecli # Start the deployment
- az vmss extension set -name "KeyVaultForWindows" `
+ az vmss extension set --name "KeyVaultForWindows" `
--publisher Microsoft.Azure.KeyVault `
- -resource-group "<resourcegroup>" `
+ --resource-group "<resourcegroup>" `
--vmss-name "<vmName>" ` --settings '{\"secretsManagementSettings\": { \"pollingIntervalInS\": \"<pollingInterval>\", \"certificateStoreName\": \"<certStoreName>\", \"certificateStoreLocation\": \"<certStoreLoc>\", \"observedCertificates\": [\" <observedCert1> \", \" <observedCert2> \"] }}' ```
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/reserved-vm-instance-size-flexibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/reserved-vm-instance-size-flexibility.md
@@ -7,7 +7,7 @@
Last updated 02/02/2021-+ # Virtual machine size flexibility with Reserved VM Instances
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/windows/install-mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/install-mongodb.md
@@ -1,157 +0,0 @@
- Title: Install MongoDB on a Windows VM in Azure
-description: Learn how to install MongoDB on an Azure VM running Windows Server 2012 R2 created with the Resource Manager deployment model.
---- Previously updated : 12/15/2017---
-# Install and configure MongoDB on a Windows VM in Azure
-[MongoDB](https://www.mongodb.org) is a popular open-source, high-performance NoSQL database. This article guides you through installing and configuring MongoDB on a Windows Server 2016 virtual machine (VM) in Azure. You can also [install MongoDB on a Linux VM in Azure](../linux/install-mongodb.md).
-
-## Prerequisites
-Before you install and configure MongoDB, you need to create a VM and, ideally, add a data disk to it. See the following articles to create a VM and add a data disk:
-
-* Create a Windows Server VM using [the Azure portal](quick-create-portal.md) or [Azure PowerShell](quick-create-powershell.md).
-* Attach a data disk to a Windows Server VM using [the Azure portal](attach-managed-disk-portal.md) or [Azure PowerShell](attach-disk-ps.md).
-
-To begin installing and configuring MongoDB, [log on to your Windows Server VM](connect-logon.md) by using Remote Desktop.
-
-## Install MongoDB
-> [!IMPORTANT]
-> MongoDB security features, such as authentication and IP address binding, are not enabled by default. Security features should be enabled before deploying MongoDB to a production environment. For more information, see [MongoDB Security and Authentication](https://www.mongodb.org/display/DOCS/Security+and+Authentication).
--
-1. After you've connected to your VM using Remote Desktop, open Internet Explorer from the taskbar.
-2. Select **Use recommended security, privacy, and compatibility settings** when Internet Explorer first opens, and click **OK**.
-3. Internet Explorer enhanced security configuration is enabled by default. Add the MongoDB website to the list of allowed sites:
-
- * Select the **Tools** icon in the upper-right corner.
- * In **Internet Options**, select the **Security** tab, and then select the **Trusted Sites** icon.
- * Click the **Sites** button. Add *https://\*.mongodb.com* to the list of trusted sites, and then close the dialog box.
-
- ![Configure Internet Explorer security settings](./media/install-mongodb/configure-internet-explorer-security.png)
-4. Browse to the [MongoDB - Downloads](https://www.mongodb.com/downloads) page (https://www.mongodb.com/downloads).
-5. If needed, select the **Community Server** edition and then select the latest current stable release for *Windows Server 2008 R2 64-bit and later*. To download the installer, click **DOWNLOAD (msi)**.
-
- ![Download MongoDB installer](./media/install-mongodb/download-mongodb.png)
-
- Run the installer after the download is complete.
-6. Read and accept the license agreement. When you're prompted, select **Complete** install.
-7. If desired, you can choose to also install Compass, a graphical interface for MongoDB.
-8. On the final screen, click **Install**.
-
-## Configure the VM and MongoDB
-1. The path variables are not updated by the MongoDB installer. Without the MongoDB `bin` location in your path variable, you need to specify the full path each time you use a MongoDB executable. To add the location to your path variable:
-
- * Right-click the **Start** menu, and select **System**.
- * Click **Advanced system settings**, and then click **Environment Variables**.
- * Under **System variables**, select **Path**, and then click **Edit**.
-
- ![Configure PATH variables](./media/install-mongodb/configure-path-variables.png)
-
- Add the path to your MongoDB `bin` folder. MongoDB is typically installed in *C:\Program Files\MongoDB*. Verify the installation path on your VM. The following example adds the default MongoDB install location to the `PATH` variable:
-
- ```
- ;C:\Program Files\MongoDB\Server\3.6\bin
- ```
-
- > [!NOTE]
- > Be sure to add the leading semicolon (`;`) to indicate that you are adding a location to your `PATH` variable.
-
-2. Create MongoDB data and log directories on your data disk. From the **Start** menu, select **Command Prompt**. The following examples create the directories on drive F:
-
- ```
- mkdir F:\MongoData
- mkdir F:\MongoLogs
- ```
-3. Start a MongoDB instance with the following command, adjusting the path to your data and log directories accordingly:
-
- ```
- mongod --dbpath F:\MongoData\ --logpath F:\MongoLogs\mongolog.log
- ```
-
- It may take several minutes for MongoDB to allocate the journal files and start listening for connections. All log messages are directed to the *F:\MongoLogs\mongolog.log* file as `mongod.exe` server starts and allocates journal files.
-
- > [!NOTE]
- > The command prompt stays focused on this task while your MongoDB instance is running. Leave the command prompt window open to continue running MongoDB. Or, install MongoDB as a service, as detailed in the next step.
-
-4. For a more robust MongoDB experience, install the `mongod.exe` as a service. Creating a service means you don't need to leave a command prompt running each time you want to use MongoDB. Create the service as follows, adjusting the path to your data and log directories accordingly:
-
- ```
- mongod --dbpath F:\MongoData\ --logpath F:\MongoLogs\mongolog.log --logappend --install
- ```
-
- The preceding command creates a service named MongoDB, with a description of "Mongo DB". The following parameters are also specified:
-
- * The `--dbpath` option specifies the location of the data directory.
- * The `--logpath` option must be used to specify a log file, because the running service does not have a command window to display output.
- * The `--logappend` option specifies that a restart of the service causes output to append to the existing log file.
-
- To start the MongoDB service, run the following command:
-
- ```
- net start MongoDB
- ```
-
- For more information about creating the MongoDB service, see [Configure a Windows Service for MongoDB](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-windows/#mongodb-as-a-windows-service).
-
-## Test the MongoDB instance
-With MongoDB running as a single instance or installed as a service, you can now start creating and using your databases. To start the MongoDB administrative shell, open another command prompt window from the **Start** menu, and enter the following command:
-
-```
-mongo
-```
-
-You can list the databases with the `db` command. Insert some data as follows:
-
-```
-db.foo.insert( { a : 1 } )
-```
-
-Search for data as follows:
-
-```
-db.foo.find()
-```
-
-The output is similar to the following example:
-
-```
-{ "_id" : "ObjectId("57f6a86cee873a6232d74842"), "a" : 1 }
-```
-
-Exit the `mongo` console as follows:
-
-```
-exit
-```
-
-## Configure firewall and Network Security Group rules
-Now that MongoDB is installed and running, open a port in Windows Firewall so you can remotely connect to MongoDB. To create a new inbound rule to allow TCP port 27017, open an administrative PowerShell prompt and enter the following command:
-
-```powershell
-New-NetFirewallRule `
- -DisplayName "Allow MongoDB" `
- -Direction Inbound `
- -Protocol TCP `
- -LocalPort 27017 `
- -Action Allow
-```
-
-You can also create the rule by using the **Windows Firewall with Advanced Security** graphical management tool. Create a new inbound rule to allow TCP port 27017.
-
-If needed, create a Network Security Group rule to allow access to MongoDB from outside of the existing Azure virtual network subnet. You can create the Network Security Group rules by using the [Azure portal](nsg-quickstart-portal.md) or [Azure PowerShell](nsg-quickstart-powershell.md). As with the Windows Firewall rules, allow TCP port 27017 to the virtual network interface of your MongoDB VM.
-
-> [!NOTE]
-> TCP port 27017 is the default port used by MongoDB. You can change this port by using the `--port` parameter when starting `mongod.exe` manually or from a service. If you change the port, make sure to update the Windows Firewall and Network Security Group rules in the preceding steps.
--
-## Next steps
-In this tutorial, you learned how to install and configure MongoDB on your Windows VM. You can now access MongoDB on your Windows VM, by following the advanced topics in the [MongoDB documentation](https://docs.mongodb.com/manual/).
-
virtual-wan https://docs.microsoft.com/en-us/azure/virtual-wan/about-vpn-profile-download https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/about-vpn-profile-download.md
@@ -6,13 +6,17 @@
Previously updated : 09/22/2020 Last updated : 02/08/2021
-# Working with User VPN client profiles
+# Working with User VPN client profile files
-The downloaded profile file contains information that is necessary to configure a VPN connection. This article helps you obtain and understand the information necessary for a User VPN client profile.
+The profile files contain information that is necessary to configure a VPN connection. This article helps you obtain and understand the information necessary for a User VPN client profile.
+
+## Download the profile
+
+You can use the steps in the [Download profiles](global-hub-profile.md) article to download the client profile zip file.
[!INCLUDE [client profiles](../../includes/vpn-gateway-vwan-vpn-profile-download.md)]
vpn-gateway https://docs.microsoft.com/en-us/azure/vpn-gateway/about-vpn-profile-download https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/about-vpn-profile-download.md
@@ -5,14 +5,39 @@
- Previously updated : 09/03/2020+ Last updated : 02/08/2021
-# About P2S VPN client profiles
+# Working with P2S VPN client profile files
-The downloaded profile file contains information that is necessary to configure a VPN connection. This article will help you obtain and understand the information necessary for a VPN client profile.
+The profile files contain information that is necessary to configure a VPN connection. This article will help you obtain and understand the information necessary for a VPN client profile.
+
+## Generate and download profile
+
+You can generate client configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file.
+
+### Portal
+
+1. In the Azure portal, navigate to the virtual network gateway for the virtual network that you want to connect to.
+1. On the virtual network gateway page, select **Point-to-site configuration**.
+1. At the top of the Point-to-site configuration page, select **Download VPN client**. It takes a few minutes for the client configuration package to generate.
+1. Your browser indicates that a client configuration zip file is available. It has the same name as your gateway. Unzip the file to view the folders.
+
+### PowerShell
+
+To generate the files using PowerShell, use the following example (an Azure CLI equivalent follows these steps):
+
+1. When generating VPN client configuration files, the value for '-AuthenticationMethod' is 'EapTls'. Generate the VPN client configuration files using the following command:
+
+ ```azurepowershell-interactive
+ $profile=New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapTls"
+
+ $profile.VPNProfileSASUrl
+ ```
+
+1. Copy the URL to your browser to download the zip file, then unzip the file to view the folders.
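+
+The Azure CLI offers an equivalent command; a minimal sketch, assuming the same gateway and resource group names as the PowerShell example (the command returns the SAS URL of the zip file):
+
+```azurecli-interactive
+az network vnet-gateway vpn-client generate --resource-group "TestRG" \
+    --name "VNet1GW" --authentication-method EAPTLS
+```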
[!INCLUDE [client profiles](../../includes/vpn-gateway-vwan-vpn-profile-download.md)]