Updates from: 02/18/2021 04:09:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/analytics-with-application-insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/analytics-with-application-insights.md
Title: Track user behavior with Application Insights
+ Title: Track user behavior by using Application Insights
description: Learn how to enable event logs in Application Insights from Azure AD B2C user journeys.
zone_pivot_groups: b2c-policy-type
-# Track user behavior in Azure Active Directory B2C using Application Insights
+
+# Track user behavior in Azure AD B2C by using Application Insights
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
zone_pivot_groups: b2c-policy-type
::: zone pivot="b2c-custom-policy"
-Azure Active Directory B2C (Azure AD B2C) supports sending event data directly to [Application Insights](../azure-monitor/app/app-insights-overview.md) by using the instrumentation key provided to Azure AD B2C. With an Application Insights technical profile, you can get detailed and customized event logs for your user journeys to:
+In Azure Active Directory B2C (Azure AD B2C), you can send event data directly to [Application Insights](../azure-monitor/app/app-insights-overview.md) by using the instrumentation key provided to Azure AD B2C. With an Application Insights technical profile, you can get detailed and customized event logs for your user journeys to:
-* Gain insights on user behavior.
-* Troubleshoot your own policies in development or in production.
-* Measure performance.
-* Create notifications from Application Insights.
+- Gain insights on user behavior.
+- Troubleshoot your own policies in development or in production.
+- Measure performance.
+- Create notifications from Application Insights.
## Overview
-To enable custom event logs, you add an Application Insights technical profile. In the technical profile, you define the Application Insights instrumentation key, event name, and the claims to record. To post an event, the technical profile is then added as an orchestration step in a [user journey](userjourneys.md).
+To enable custom event logs, add an Application Insights technical profile. In the technical profile, you define the Application Insights instrumentation key, the event name, and the claims to record. To post an event, add the technical profile as an orchestration step in a [user journey](userjourneys.md).
-When using the Application Insights, consider the following:
+When you use Application Insights, consider the following:
-- There is a short delay, typically less than five minutes, before new logs available in Application Insights.-- Azure AD B2C allows you to choose the claims to be recorded. Don't include claims with personal data.-- To record a user session, events can be unified by using a correlation ID. -- Call the Application Insights technical profile directly from a [user journey](userjourneys.md) or a [sub journeys](subjourneys.md). Don't use Application Insights technical profile as a [validation technical profile](validation-technical-profile.md).
+- There's a short delay, typically less than five minutes, before new logs are available in Application Insights.
+- Azure AD B2C allows you to choose which claims to record. Don't include claims with personal data.
+- To record a user session, you can use a correlation ID to unify events.
+- Call the Application Insights technical profile directly from a [user journey](userjourneys.md) or a [sub journey](subjourneys.md). Don't use an Application Insights technical profile as a [validation technical profile](validation-technical-profile.md).
## Prerequisites
When using the Application Insights, consider the following:
## Create an Application Insights resource
-When you're using Application Insights with Azure AD B2C, all you need to do is create a resource and get the instrumentation key. For information, see [Create an Application Insights resource](../azure-monitor/app/create-new-resource.md)
+When you use Application Insights with Azure AD B2C, all you need to do is create a resource and get the instrumentation key. For information, see [Create an Application Insights resource](../azure-monitor/app/create-new-resource.md).
1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Make sure you're using the directory that contains your Azure subscription by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your subscription. This tenant is not your Azure AD B2C tenant.
-3. Choose **Create a resource** in the top-left corner of the Azure portal, and then search for and select **Application Insights**.
-4. Click **Create**.
-5. Enter a **Name** for the resource.
-6. For **Application Type**, select **ASP.NET web application**.
-7. For **Resource Group**, select an existing group or enter a name for a new group.
-8. Click **Create**.
-4. After you create the Application Insights resource, open it, expand **Essentials**, and copy the instrumentation key.
+1. Make sure you're using the directory that has your Azure subscription. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your Azure subscription. This tenant isn't your Azure AD B2C tenant.
+1. Choose **Create a resource** in the upper-left corner of the Azure portal, and then search for and select **Application Insights**.
+1. Select **Create**.
+1. For **Name**, enter a name for the resource.
+1. For **Application Type**, select **ASP.NET web application**.
+1. For **Resource Group**, select an existing group or enter a name for a new group.
+1. Select **Create**.
+1. Open the new Application Insights resource, expand **Essentials**, and copy the instrumentation key.
-![Application Insights Overview and Instrumentation Key](./media/analytics-with-application-insights/app-insights.png)
+![Screenshot that shows the Instrumentation Key on the Application Insights Overview tab.](./media/analytics-with-application-insights/app-insights.png)
## Define claims
-A claim provides a temporary storage of data during an Azure AD B2C policy execution. The [claims schema](claimsschema.md) is the place where you declare your claims.
-
-1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>.
-1. Search for the [BuildingBlocks](buildingblocks.md) element. If the element doesn't exist, add it.
-1. Locate the [ClaimsSchema](claimsschema.md) element. If the element doesn't exist, add it.
-1. Add the following claims to the **ClaimsSchema** element.
-
-```xml
-<ClaimType Id="EventType">
- <DisplayName>Event type</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="EventTimestamp">
- <DisplayName>Event timestamp</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="PolicyId">
- <DisplayName>Policy Id</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="Culture">
- <DisplayName>Culture ID</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="CorrelationId">
- <DisplayName>Correlation Id</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="federatedUser">
- <DisplayName>Federated user</DisplayName>
- <DataType>boolean</DataType>
-</ClaimType>
-<ClaimType Id="parsedDomain">
- <DisplayName>Domain name</DisplayName>
- <DataType>string</DataType>
- <UserHelpText>The domain portion of the email address.</UserHelpText>
-</ClaimType>
-<ClaimType Id="userInLocalDirectory">
- <DisplayName>userInLocalDirectory</DisplayName>
- <DataType>boolean</DataType>
-</ClaimType>
-```
+A claim provides temporary storage of data during an Azure AD B2C policy execution. You declare your claims in the [ClaimsSchema element](claimsschema.md).
+
+1. Open the extensions file of your policy. For example, the file might be `SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**.
+1. Search for the [BuildingBlocks](buildingblocks.md) element. If you don't see the element, add it.
+1. Find the **ClaimsSchema** element. If you don't see the element, add it.
+1. Add the following claims to the **ClaimsSchema** element:
+
+ ```xml
+ <ClaimType Id="EventType">
+ <DisplayName>Event type</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="EventTimestamp">
+ <DisplayName>Event timestamp</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="PolicyId">
+ <DisplayName>Policy Id</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="Culture">
+ <DisplayName>Culture ID</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="CorrelationId">
+ <DisplayName>Correlation Id</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="federatedUser">
+ <DisplayName>Federated user</DisplayName>
+ <DataType>boolean</DataType>
+ </ClaimType>
+ <ClaimType Id="parsedDomain">
+ <DisplayName>Domain name</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>The domain portion of the email address.</UserHelpText>
+ </ClaimType>
+ <ClaimType Id="userInLocalDirectory">
+ <DisplayName>userInLocalDirectory</DisplayName>
+ <DataType>boolean</DataType>
+ </ClaimType>
+ ```
## Add new technical profiles
-Technical profiles can be considered functions in the custom policy. This table defines the technical profiles that are used to open a session and post events. The solution uses the [technical profile inclusion](technicalprofiles.md#include-technical-profile) approach. Where a technical profile includes another technical profile to change settings or add new functionality.
+Technical profiles can be considered functions in the custom policy. These functions use the [technical profile inclusion](technicalprofiles.md#include-technical-profile) approach, where a technical profile includes another technical profile and changes settings or adds new functionality. The following table defines the technical profiles that are used to open a session and post events.
-| Technical Profile | Task |
+| Technical profile | Task |
| -- | --|
-| AppInsights-Common | The common technical profile with the common set of configuration. Including, the Application Insights instrumentation key, collection of claims to record, and the developer mode. The following technical profiles include the common technical profile, and add more claims, such as the event name. |
-| AppInsights-SignInRequest | Records a `SignInRequest` event with a set of claims when a sign-in request has been received. |
-| AppInsights-UserSignUp | Records a `UserSignUp` event when the user triggers the sign-up option in a sign-up/sign-in journey. |
-| AppInsights-SignInComplete | Records a `SignInComplete` event on successful completion of an authentication, when a token has been sent to the relying party application. |
+| AppInsights-Common | The common technical profile with typical configuration. It includes the Application Insights instrumentation key, a collection of claims to record, and developer mode. The other technical profiles include the common technical profile and add more claims, such as the event name. |
+| AppInsights-SignInRequest | Records a **SignInRequest** event with a set of claims when a sign-in request has been received. |
+| AppInsights-UserSignUp | Records a **UserSignUp** event when the user triggers the sign-up option in a sign-up or sign-in journey. |
+| AppInsights-SignInComplete | Records a **SignInComplete** event upon successful authentication, when a token has been sent to the relying party application. |
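For example, a session-opening profile such as **AppInsights-SignInRequest** can be little more than an event name plus an include of **AppInsights-Common**. The following sketch illustrates the inclusion pattern; it assumes the provider reads the event name from an input claim whose partner claim type is `eventName`, so compare it against the full profiles in the code that follows before relying on it.

```xml
<!-- Sketch of the inclusion pattern: set only the event name here and
     reuse the instrumentation key, common claims, and metadata from AppInsights-Common. -->
<TechnicalProfile Id="AppInsights-SignInRequest">
  <InputClaims>
    <!-- The value recorded as the Application Insights event name -->
    <InputClaim ClaimTypeReferenceId="EventType" PartnerClaimType="eventName" DefaultValue="SignInRequest" />
  </InputClaims>
  <IncludeTechnicalProfile ReferenceId="AppInsights-Common" />
</TechnicalProfile>
```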
-Add the profiles to the *TrustFrameworkExtensions.xml* file from the starter pack. Add these elements to the **ClaimsProviders** element:
+Open the *TrustFrameworkExtensions.xml* file from the starter pack. Add the following **ClaimsProvider** element, which contains the technical profiles, to the **ClaimsProviders** element:
```xml <ClaimsProvider>
Add the profiles to the *TrustFrameworkExtensions.xml* file from the starter pac
<DisplayName>Application Insights</DisplayName> <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.Insights.AzureApplicationInsightsProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <Metadata>
- <!-- The ApplicationInsights instrumentation key which will be used for logging the events -->
+ <!-- The ApplicationInsights instrumentation key, which you use for logging the events -->
<Item Key="InstrumentationKey">xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</Item> <Item Key="DeveloperMode">false</Item> <Item Key="DisableTelemetry ">false</Item> </Metadata> <InputClaims>
- <!-- Properties of an event are added through the syntax {property:NAME}, where NAME is property being added to the event. DefaultValue can be either a static value or a value that's resolved by one of the supported DefaultClaimResolvers. -->
+ <!-- Properties of an event are added through the syntax {property:NAME}, where NAME is the property being added to the event. DefaultValue can be either a static value or a value that's resolved by one of the supported DefaultClaimResolvers. -->
<InputClaim ClaimTypeReferenceId="EventTimestamp" PartnerClaimType="{property:EventTimestamp}" DefaultValue="{Context:DateTimeInUtc}" /> <InputClaim ClaimTypeReferenceId="tenantId" PartnerClaimType="{property:TenantId}" DefaultValue="{Policy:TrustFrameworkTenantId}" /> <InputClaim ClaimTypeReferenceId="PolicyId" PartnerClaimType="{property:Policy}" DefaultValue="{Policy:PolicyId}" />
Add the profiles to the *TrustFrameworkExtensions.xml* file from the starter pac
## Add the technical profiles as orchestration steps
-Call `AppInsights-SignInRequest` as orchestration step 2 to track that a sign-in/sign-up request has been received:
-
-```xml
-<!-- Track that we have received a sign in request -->
-<OrchestrationStep Order="2" Type="ClaimsExchange">
- <ClaimsExchanges>
- <ClaimsExchange Id="TrackSignInRequest" TechnicalProfileReferenceId="AppInsights-SignInRequest" />
- </ClaimsExchanges>
-</OrchestrationStep>
-```
-
-Immediately *before* the `SendClaims` orchestration step, add a new step that calls `AppInsights-UserSignup`. It's triggered when the user selects the sign-up button in a sign-up/sign-in journey.
-
-```xml
-<!-- Handles the user clicking the sign up link in the local account sign in page -->
-<OrchestrationStep Order="8" Type="ClaimsExchange">
- <Preconditions>
- <Precondition Type="ClaimsExist" ExecuteActionsIf="false">
- <Value>newUser</Value>
- <Action>SkipThisOrchestrationStep</Action>
- </Precondition>
- <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
- <Value>newUser</Value>
- <Value>false</Value>
- <Action>SkipThisOrchestrationStep</Action>
- </Precondition>
- </Preconditions>
- <ClaimsExchanges>
- <ClaimsExchange Id="TrackUserSignUp" TechnicalProfileReferenceId="AppInsights-UserSignup" />
- </ClaimsExchanges>
-</OrchestrationStep>
-```
-
-Immediately after the `SendClaims` orchestration step, call `AppInsights-SignInComplete`. This step shows a successfully completed journey.
-
-```xml
-<!-- Track that we have successfully sent a token -->
-<OrchestrationStep Order="10" Type="ClaimsExchange">
- <ClaimsExchanges>
- <ClaimsExchange Id="TrackSignInComplete" TechnicalProfileReferenceId="AppInsights-SignInComplete" />
- </ClaimsExchanges>
-</OrchestrationStep>
-```
+Add new orchestration steps that refer to the technical profiles.
> [!IMPORTANT]
> After you add the new orchestration steps, renumber the steps sequentially without skipping any integers from 1 to N.
+1. Call `AppInsights-SignInRequest` as the second orchestration step. This step tracks that a sign-up or sign-in request has been received.
+
+ ```xml
+ <!-- Track that we have received a sign in request -->
+ <OrchestrationStep Order="2" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="TrackSignInRequest" TechnicalProfileReferenceId="AppInsights-SignInRequest" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ ```
+
+1. Before the `SendClaims` orchestration step, add a new step that calls `AppInsights-UserSignup`. It's triggered when the user selects the sign-up button in a sign-up or sign-in journey.
+
+ ```xml
+ <!-- Handles the user selecting the sign-up link in the local account sign-in page -->
+ <OrchestrationStep Order="8" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="false">
+ <Value>newUser</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
+ <Value>newUser</Value>
+ <Value>false</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="TrackUserSignUp" TechnicalProfileReferenceId="AppInsights-UserSignup" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ ```
+
+1. After the `SendClaims` orchestration step, call `AppInsights-SignInComplete`. This step shows a successfully completed journey.
+
+ ```xml
+ <!-- Track that we have successfully sent a token -->
+ <OrchestrationStep Order="10" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="TrackSignInComplete" TechnicalProfileReferenceId="AppInsights-SignInComplete" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ ```
## Upload your file, run the policy, and view events
-Save and upload the *TrustFrameworkExtensions.xml* file. Then, call the relying party policy from your application or use **Run Now** in the Azure portal. Wait a minute or so, and your events will be available in Application Insights.
+Save and upload the *TrustFrameworkExtensions.xml* file. Then call the relying party policy from your application or use **Run Now** in the Azure portal. Wait for your events to be available in Application Insights.
1. Open the **Application Insights** resource in your Azure Active Directory tenant.
-2. Select **Usage**, then select **Events**.
-3. Set **During** to **Last hour** and **By** to **3 minutes**. You might need to select **Refresh** to view results.
+1. Select **Usage**, and then select **Events**.
+1. Set **During** to **Last hour** and **By** to **3 minutes**. You might need to refresh the window to see the results.
-![Application Insights USAGE-Events Blase](./media/analytics-with-application-insights/app-ins-graphic.png)
+![Screenshot that shows Application Insights event statistics.](./media/analytics-with-application-insights/app-ins-graphic.png)
## Collect more data
-To fit your business needs, you may want to record more claims. To add a claim, first [define a claim](#define-claims), then add the claim to the input claims collection. Claims that you add to the *AppInsights-Common* technical profile, will appear in all of the events. Claims that you add to a specific technical profile, will appear only in that event. The input claim element contains the following attributes:
+To fit your business needs, you might want to record more claims. To add a claim, first [define a claim](#define-claims), then add the claim to the input claims collection. Claims that you add to the **AppInsights-Common** technical profile appear in all events. Claims that you add to a specific technical profile appear only in that event. The input claim element contains the following attributes:
-- **ClaimTypeReferenceId** - is the reference to a claim type. -- **PartnerClaimType** - is the name of the property that appears in Azure Insights. Use the syntax of `{property:NAME}`, where `NAME` is property being added to the event.-- **DefaultValue** - A predefined value to be recorded, such as event name. A claim that is used in the user journey, such as the identity provider name. If the claim is empty, the default value will be used. For example, the `identityProvider` claim is set by the federation technical profiles, such as Facebook. If the claim is empty, it indicates the user sign-in with a local account. Thus, the default value is set to *Local*. You can also record a [claim resolvers](claim-resolver-overview.md) with a contextual value, such as the application ID, or the user IP address.
+- **ClaimTypeReferenceId** is the reference to a claim type.
+- **PartnerClaimType** is the name of the property that appears in Application Insights. Use the syntax `{property:NAME}`, where `NAME` is the property being added to the event.
+- **DefaultValue** is a predefined value to be recorded, such as an event name. If a claim that is used in the user journey is empty, the default value is used. For example, the `identityProvider` claim is set by the federation technical profiles, such as Facebook. If the claim is empty, it indicates the user signed in with a local account. Thus, the default value is set to **Local**. You can also record a [claim resolver](claim-resolver-overview.md) with a contextual value, such as the application ID or the user IP address.
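For example, to record the user's IP address on an event, you could define a string claim and pass it with a claim resolver as the default value. The following sketch assumes a claim named `ClientIPAddress` (a hypothetical name; declare it in the [claims schema](#define-claims) first) and uses the `{Context:IPAddress}` claim resolver:

```xml
<!-- Sketch: add this to the InputClaims collection of AppInsights-Common (appears on all events)
     or of a single event's technical profile (appears on that event only).
     ClientIPAddress is a hypothetical claim type; declare it in the ClaimsSchema. -->
<InputClaim ClaimTypeReferenceId="ClientIPAddress" PartnerClaimType="{property:ClientIPAddress}" DefaultValue="{Context:IPAddress}" />
```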
-### Manipulating claims
+### Manipulate claims
-You can use [input claims transformations](custom-policy-trust-frameworks.md#manipulating-your-claims) to modify the input claims or generate new ones before sending to Application Insights. In the following example, the technical profile includes the *CheckIsAdmin* input claims transformation.
+You can use [input claims transformations](custom-policy-trust-frameworks.md#manipulating-your-claims) to modify the input claims or generate new ones before sending them to Application Insights. In the following example, the technical profile includes the `CheckIsAdmin` input claims transformation.
```xml <TechnicalProfile Id="AppInsights-SignInComplete">
You can use [input claims transformations](custom-policy-trust-frameworks.md#man
### Add events
-To add an event, create a new technical profile that includes the *AppInsights-Common* technical profile. Then add the technical profile as orchestration step to the [user journey](custom-policy-trust-frameworks.md#orchestration-steps). Use [precondition](userjourneys.md#preconditions) to trigger the event when desired. For example, report the event only when users run through MFA.
+To add an event, create a new technical profile that includes the `AppInsights-Common` technical profile. Then add the new technical profile as an orchestration step to the [user journey](custom-policy-trust-frameworks.md#orchestration-steps). Use the [Precondition](userjourneys.md#preconditions) element to trigger the event when you're ready. For example, report the event only when users run through multifactor authentication.
```xml <TechnicalProfile Id="AppInsights-MFA-Completed">
To add an event, create a new technical profile that includes the *AppInsights-C
</TechnicalProfile> ```
-Now that you have a technical profile, add the event to the user journey. Then renumber the steps sequentially without skipping any integers from 1 to N.
+>[!Important]
+>When you add an event to the user journey, remember to renumber the orchestration steps sequentially.
```xml <OrchestrationStep Order="8" Type="ClaimsExchange">
Now that you have a technical profile, add the event to the user journey. Then r
## Enable developer mode
-When using the Application Insights to define events, you can indicate whether developer mode is enabled. Developer mode controls how events are buffered. In a development environment with minimal event volume, enabling developer mode results in events being sent immediately to Application Insights. The default value is `false`. Don't enable developer mode in production environments.
+When you use Application Insights to define events, you can indicate whether developer mode is enabled. Developer mode controls how events are buffered. In a development environment with minimal event volume, enabling developer mode results in events being sent immediately to Application Insights. The default value is `false`. Don't enable developer mode in production environments.
-To enable developer mode, in the *AppInsights-Common* technical profile, change the `DeveloperMode` metadata to `true`:
+To enable developer mode, change the `DeveloperMode` metadata to `true` in the `AppInsights-Common` technical profile:
```xml <TechnicalProfile Id="AppInsights-Common">
To enable developer mode, in the *AppInsights-Common* technical profile, change
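As a minimal sketch (other metadata items, such as the instrumentation key, are omitted), the profile with developer mode enabled looks like this:

```xml
<TechnicalProfile Id="AppInsights-Common">
  <Metadata>
    <!-- Other metadata items omitted -->
    <Item Key="DeveloperMode">true</Item>
  </Metadata>
</TechnicalProfile>
```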
## Disable telemetry
-To disable the Application insight logs, in the *AppInsights-Common* technical profile, change the `DisableTelemetry` metadata to `true`:
+To disable Application Insights logs, change the `DisableTelemetry` metadata to `true` in the `AppInsights-Common` technical profile:
```xml <TechnicalProfile Id="AppInsights-Common">
To disable the Application insight logs, in the *AppInsights-Common* technical p
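The change mirrors the developer mode setting. A minimal sketch, with the other metadata items omitted:

```xml
<TechnicalProfile Id="AppInsights-Common">
  <Metadata>
    <!-- Other metadata items omitted -->
    <Item Key="DisableTelemetry">true</Item>
  </Metadata>
</TechnicalProfile>
```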
## Next steps
-- Learn how to [create custom KPI dashboards using Azure Application Insights](../azure-monitor/learn/tutorial-app-dashboards.md).
+Learn how to [create custom KPI dashboards using Azure Application Insights](../azure-monitor/learn/tutorial-app-dashboards.md).
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/buildingblocks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/buildingblocks.md
The **BuildingBlocks** element contains the following elements that must be spec
- [Localization](localization.md) - Allows you to support multiple languages. The localization support in policies allows you set up the list of supported languages in a policy and pick a default language. Language-specific strings and collections are also supported.
-- [DisplayControls](display-controls.md) - Defines the controls to be displayed on a page. Display controls have special functionality and interact with back-end validation technical profiles. Display controls are currently in **preview**.
+- [DisplayControls](display-controls.md) - Defines the controls to be displayed on a page. Display controls have special functionality and interact with back-end validation technical profiles.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
Title: Build a SCIM endpoint for user provisioning to apps from Azure Active Directory
-description: System for Cross-domain Identity Management (SCIM) standardizes automatic user provisioning. Learn to develop a SCIM endpoint, integrate your SCIM API with Azure Active Directory, and start automating provisioning users and groups into your cloud applications with Azure Active Directory.
+description: Learn to develop a SCIM endpoint, integrate your SCIM API with Azure AD, and automatically provision users and groups into your cloud applications with Azure Active Directory.
# Tutorial: Develop a sample SCIM endpoint
-No one wants to build a new endpoint from scratch, so we've created some [reference code](https://aka.ms/scimreferencecode) for you to get started with [SCIM](https://aka.ms/scimoverview). This tutorial describes how to deploy the SCIM reference code in Azure and test it using Postman or by integrating with the Azure AD SCIM client. You can get your SCIM endpoint up and running with no code in just 5 minutes. This tutorial is intended for developers who are looking to get started with SCIM or others interested in testing out a SICM endpoint.
+No one wants to build a new endpoint from scratch, so we created some [reference code](https://aka.ms/scimreferencecode) for you to get started with [System for Cross-domain Identity Management (SCIM)](https://aka.ms/scimoverview). You can get your SCIM endpoint up and running with no code in just five minutes.
-In this tutorial, learn how to:
+This tutorial describes how to deploy the SCIM reference code in Azure and test it by using Postman or by integrating with the Azure Active Directory (Azure AD) SCIM client. This tutorial is intended for developers who want to get started with SCIM, or anyone interested in testing a SCIM endpoint.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Deploy your SCIM endpoint in Azure
-> * Test your SCIM endpoint
+>
+> * Deploy your SCIM endpoint in Azure.
+> * Test your SCIM endpoint.
## Deploy your SCIM endpoint in Azure
-The steps provided here deploy the SCIM endpoint to a service using [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) and [Azure App Services](https://docs.microsoft.com/azure/app-service/). The SCIM reference code can also be run locally, hosted by an on-premises server, or deployed to another external service.
-
-### Open solution and deploy to Azure App Service
+The steps here deploy the SCIM endpoint to a service by using [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) and [Azure App Service](https://docs.microsoft.com/azure/app-service/). The SCIM reference code can also be run locally, hosted by an on-premises server, or deployed to another external service.
1. Go to the [reference code](https://github.com/AzureAD/SCIMReferenceCode) from GitHub and select **Clone or download**.
-1. Choose to either **Open in Desktop**, or, copy the link, open **Visual Studio**, and select **Clone or check out code** to enter the copied link and make a local copy.
+1. Select **Open in Desktop**, or copy the link, open Visual Studio, and select **Clone or check out code** to enter the copied link and make a local copy.
-1. In **Visual Studio**, be sure to sign into the account that has access to your hosting resources.
+1. In Visual Studio, make sure to sign in to the account that has access to your hosting resources.
-1. In **Solution Explorer**, open *Microsoft.SCIM.sln* and right-click the *Microsoft.SCIM.WebHostSample* file. Select **Publish**.
+1. In Solution Explorer, open *Microsoft.SCIM.sln* and right-click the *Microsoft.SCIM.WebHostSample* file. Select **Publish**.
- ![cloud publish](media/use-scim-to-build-users-and-groups-endpoints/cloud-publish.png)
+ ![Screenshot that shows the sample file.](media/use-scim-to-build-users-and-groups-endpoints/cloud-publish.png)
> [!NOTE]
- > To run this solution locally, double-click the project and select **IIS Express** to launch the project as a web page with a local host URL.
+ > To run this solution locally, double-click the project and select **IIS Express** to launch the project as a webpage with a local host URL.
-1. Select **Create profile** and make sure **App Service** and **Create new** are selected.
+1. Select **Create profile** and make sure that **App Service** and **Create new** are selected.
- ![cloud publish 2](media/use-scim-to-build-users-and-groups-endpoints/cloud-publish-2.png)
+ ![Screenshot that shows the Publish window.](media/use-scim-to-build-users-and-groups-endpoints/cloud-publish-2.png)
1. Step through the dialog options and rename the app to a name of your choice. This name is used in both the app and the SCIM endpoint URL.
- ![cloud publish 3](media/use-scim-to-build-users-and-groups-endpoints/cloud-publish-3.png)
+ ![Screenshot that shows creating a new app service.](media/use-scim-to-build-users-and-groups-endpoints/cloud-publish-3.png)
-1. Select the resource group to use and choose **Publish**.
+1. Select the resource group to use and select **Publish**.
-1. Navigate to the application in **Azure App Services** > **Configuration** and select **New application setting** to add the *Token__TokenIssuer* setting with the value `https://sts.windows.net/<tenant_id>/`. Replace `<tenant_id>` with your Azure AD tenant_id and if you're looking to test the SCIM endpoint using [Postman](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint), also add a *ASPNETCORE_ENVIRONMENT* setting with the value `Development`.
+1. Go to the application in **Azure App Service** > **Configuration** and select **New application setting** to add the *Token__TokenIssuer* setting with the value `https://sts.windows.net/<tenant_id>/`. Replace `<tenant_id>` with your Azure AD tenant ID. If you want to test the SCIM endpoint by using [Postman](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint), add an *ASPNETCORE_ENVIRONMENT* setting with the value `Development`.
- ![appservice settings](media/use-scim-to-build-users-and-groups-endpoints/app-service-settings.png)
+ ![Screenshot that shows the Application settings window.](media/use-scim-to-build-users-and-groups-endpoints/app-service-settings.png)
- When testing your endpoint with an Enterprise Application in the Azure portal, choose to keep the environment as `Development` and provide the token generated from the `/scim/token` endpoint for testing or change the environment to `Production` and leave the token field empty in the enterprise application in the [Azure portal](https://docs.microsoft.com/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#step-4-integrate-your-scim-endpoint-with-the-azure-ad-scim-client).
+ When you test your endpoint with an enterprise application in the [Azure portal](use-scim-to-provision-users-and-groups.md#integrate-your-scim-endpoint-with-the-aad-scim-client), you have two options. You can keep the environment in `Development` and provide the testing token from the `/scim/token` endpoint, or you can change the environment to `Production` and leave the token field empty.
-That's it! Your SCIM endpoint is now published and allows you to use the Azure App Service URL to test the SCIM endpoint.
+That's it! Your SCIM endpoint is now published, and you can use the Azure App Service URL to test the SCIM endpoint.
## Test your SCIM endpoint
-The requests to a SCIM endpoint require authorization and the SCIM standard leaves multiple options for authentication and authorization, such as cookies, basic authentication, TLS client authentication, or any of the methods listed in [RFC 7644](https://tools.ietf.org/html/rfc7644#section-2).
+Requests to a SCIM endpoint require authorization. The SCIM standard has multiple options for authentication and authorization, including cookies, basic authentication, TLS client authentication, or any of the methods listed in [RFC 7644](https://tools.ietf.org/html/rfc7644#section-2).
-Be sure to avoid insecure methods, such as username/password, in favor of a more secure method such as OAuth. Azure AD supports long-lived bearer tokens (for gallery and non-gallery applications) and the OAuth authorization grant (for applications published in the app gallery).
+Be sure to avoid methods that aren't secure, such as username and password, in favor of a more secure method such as OAuth. Azure AD supports long-lived bearer tokens (for gallery and non-gallery applications) and the OAuth authorization grant (for gallery applications).
> [!NOTE]
-> The authorization methods provided in the repo are for testing only. When integrating with Azure AD, you can review the authorization guidance, see [Plan provisioning for a SCIM endpoint](https://docs.microsoft.com/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#authorization-for-provisioning-connectors-in-the-application-gallery).
+> The authorization methods provided in the repo are for testing only. When you integrate with Azure AD, you can review the authorization guidance. See [Plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md).
-The development environment enables features unsafe for production, such as reference code to control the behavior of the security token validation. The token validation code is configured to use a self-signed security token and the signing key is stored in the configuration file, see the **Token:IssuerSigningKey** parameter in the *appsettings.Development.json* file.
+The development environment enables features that are unsafe for production, such as reference code to control the behavior of the security token validation. The token validation code uses a self-signed security token, and the signing key is stored in the configuration file. See the **Token:IssuerSigningKey** parameter in the *appsettings.Development.json* file.
```json "Token": {
The development environment enables features unsafe for production, such as refe
``` > [!NOTE]
-> By sending a **GET** request to the `/scim/token` endpoint, a token is issued using the configured key and can be used as bearer token for subsequent authorization.
+> When you send a **GET** request to the `/scim/token` endpoint, a token is issued using the configured key. That token can be used as a bearer token for subsequent authorization.
-The default token validation code is configured to use a token issued by Azure Active Directory and requires the issuing tenant be configured using the **Token:TokenIssuer** parameter in the *appsettings.json* file.
+The default token validation code is configured to use an Azure AD token and requires the issuing tenant be configured by using the **Token:TokenIssuer** parameter in the *appsettings.json* file.
``` json "Token": {
The default token validation code is configured to use a token issued by Azure A
### Use Postman to test endpoints
-After the SCIM endpoint is deployed, you can test to ensure it is SCIM RFC compliant. This example provides a set of tests in **Postman** to validate CRUD operations on users and groups, filtering, updates to group membership, and disabling users.
+After you deploy the SCIM endpoint, you can test to ensure that it's compliant with SCIM RFC. This example provides a set of tests in Postman that validate CRUD (create, read, update, and delete) operations on users and groups, filtering, updates to group membership, and disabling users.
-The endpoints are located in the `{host}/scim/` directory and can be interacted with using standard HTTP requests. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
+The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
> [!NOTE]
-> You can only use HTTP endpoints for local tests as the Azure AD provisioning service requires your endpoint support HTTPS.
-
-#### Open Postman and run tests
+> You can only use HTTP endpoints for local tests. The Azure AD provisioning service requires that your endpoint support HTTPS.
-1. Download [Postman](https://www.getpostman.com/downloads/) and start application.
-1. Copy the link [https://aka.ms/ProvisioningPostman](https://aka.ms/ProvisioningPostman) and paste into Postman to import the test collection.
+1. Download [Postman](https://www.getpostman.com/downloads/) and start the application.
+1. Copy and paste this link into Postman to import the test collection: `https://aka.ms/ProvisioningPostman`.
- ![postman collection](media/use-scim-to-build-users-and-groups-endpoints/postman-collection.png)
+ ![Screenshot that shows importing the test collection in Postman.](media/use-scim-to-build-users-and-groups-endpoints/postman-collection.png)
-1. Create a test environment with the variables below:
+1. Create a test environment that has these variables:
|Environment|Variable|Value|
|-|-|-|
- |Run project locally using IIS Express|||
+ |Run the project locally by using IIS Express|||
||**Server**|`localhost`|
- ||**Port**|`:44359` *(don't forget the **:**)*|
+ ||**Port**|`:44359` *(don't forget the **`:`**)*|
||**Api**|`scim`|
- |Run project locally using Kestrel|||
+ |Run the project locally by using Kestrel|||
||**Server**|`localhost`|
- ||**Port**|`:5001` *(don't forget the **:**)*|
+ ||**Port**|`:5001` *(don't forget the **`:`**)*|
||**Api**|`scim`|
- |Hosting the endpoint in Azure|||
+ |Host the endpoint in Azure|||
||**Server**|*(input your SCIM URL)*|
||**Port**|*(leave blank)*|
||**Api**|`scim`|
-1. Use **Get Key** from the Postman Collection to send a **GET** request to the token endpoint and retrieve a security token to be stored in the **token** variable for subsequent requests.
+1. Use **Get Key** from the Postman collection to send a **GET** request to the token endpoint and retrieve a security token to be stored in the **token** variable for subsequent requests.
- ![postman get key](media/use-scim-to-build-users-and-groups-endpoints/postman-get-key.png)
+ ![Screenshot that shows the Postman Get Key folder.](media/use-scim-to-build-users-and-groups-endpoints/postman-get-key.png)
> [!NOTE]
- > To make a SCIM endpoints secure, you need a security token before connecting, and the tutorial uses the `{host}/scim/token` endpoint to generate a self-signed token.
+ > To make a SCIM endpoint secure, you need a security token before you connect. The tutorial uses the `{host}/scim/token` endpoint to generate a self-signed token.
That's it! You can now run the **Postman** collection to test the SCIM endpoint functionality.
-## Next Steps
+## Next steps
-To develop a SCIM-compliant user and group endpoint with interoperability for a client, see the [SCIM client implementation](http://www.simplecloud.info/#Implementations2).
+To develop a SCIM-compliant user and group endpoint with interoperability for a client, see [SCIM client implementation](http://www.simplecloud.info/#Implementations2).
> [!div class="nextstepaction"]
> [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md)
-> [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
+> [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-password-ban-bad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-password-ban-bad.md
Consider the following example:
The next step is to identify all instances of banned passwords in the user's normalized new password. Points are assigned based on the following criteria:
1. Each banned password that's found in a user's password is given one point.
-1. Each remaining unique character is given one point.
+1. Each remaining character that is not part of a banned password is given one point.
1. A password must be at least five (5) points to be accepted.
For the next two example scenarios, Contoso is using Azure AD Password Protection and has "contoso" on their custom banned password list. Let's also assume that "blank" is on the global list.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/fido2-compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/fido2-compatibility.md
The information in the table above was tested for the following operating system
| Operating system | Latest tested version |
| | |
-| Windows | Windows 10 20H2 1904 |
+| Windows | Windows 10 20H2 |
| macOS | OS X 11 Big Sur |
| Linux | Fedora 32 Workstation |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-enterprise-app-role-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-enterprise-app-role-management.md
Previously updated : 12/07/2020 Last updated : 02/15/2021
By using Azure Active Directory (Azure AD), you can customize the claim type for
- An Azure AD subscription with directory setup. - A subscription that has single sign-on (SSO) enabled. You must configure SSO with your application.
+> [!NOTE]
+> This article explains how to create, update, and delete application roles on the service principal by using APIs in Azure AD. If you want to use the new user interface for app roles, see [Add app roles to your application](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps).
+
## When to use this feature
Use this feature if your application expects custom roles in the SAML response returned by Azure AD. You can create as many roles as you need.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-aad-b2c-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-aad-b2c-considerations.md
application = PublicClientApplicationBuilder.Create(ClientID)
Acquiring a token for an Azure AD B2C-protected API in a public client application requires you to use the overrides with an authority: ```csharp
-IEnumerable<IAccount> accounts = await application.GetAccountsAsync();
-AuthenticationResult ar = await application.AcquireTokenInteractive(scopes)
- .WithAccount(GetAccountByPolicy(accounts, policy))
- .WithParentActivityOrWindow(ParentActivityOrWindow)
- .ExecuteAsync();
+AuthenticationResult authResult = null;
+IEnumerable<IAccount> accounts = await application.GetAccountsAsync(policy);
+IAccount account = accounts.FirstOrDefault();
+try
+{
+ authResult = await application.AcquireTokenSilent(scopes, account)
+ .ExecuteAsync();
+}
+catch (MsalUiRequiredException ex)
+{
+ authResult = await application.AcquireTokenInteractive(scopes)
+ .WithAccount(account)
+ .WithParentActivityOrWindow(ParentActivityOrWindow)
+ .ExecuteAsync();
+}
```
In the preceding code snippet:
- `policy` is a string containing the name of your Azure AD B2C user flow or custom policy (for example, `PolicySignUpSignIn`).
- `ParentActivityOrWindow` is required for Android (the Activity) and is optional for other platforms that support a parent UI like windows on Microsoft Windows and UIViewController in iOS. For more information on the UI dialog, see [WithParentActivityOrWindow](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Acquiring-tokens-interactively#withparentactivityorwindow) on the MSAL Wiki.
-- `GetAccountByPolicy(IEnumerable<IAccount>, string)` is a method that finds an account for a given policy. For example:
-
- ```csharp
- private IAccount GetAccountByPolicy(IEnumerable<IAccount> accounts, string policy)
- {
- foreach (var account in accounts)
- {
- string userIdentifier = account.HomeAccountId.ObjectId.Split('.')[0];
- if (userIdentifier.EndsWith(policy.ToLower()))
- return account;
- }
- return null;
- }
- ```
Applying a user flow or custom policy (for example, letting the user edit their profile or reset their password) is currently done by calling `AcquireTokenInteractive`. For these two policies, you don't use the returned token/authentication result.
Do so by calling `AcquireTokenInteractive` with the authority for that policy. B
```csharp private async void EditProfileButton_Click(object sender, RoutedEventArgs e) {
- IEnumerable<IAccount> accounts = await app.GetAccountsAsync();
+ IEnumerable<IAccount> accounts = await application.GetAccountsAsync(PolicyEditProfile);
+ IAccount account = accounts.FirstOrDefault();
try {
- var authResult = await app.AcquireToken(scopes:App.ApiScopes)
- .WithAccount(GetUserByPolicy(accounts, App.PolicyEditProfile)),
+ var authResult = await application.AcquireTokenInteractive(scopes)
.WithPrompt(Prompt.NoPrompt),
- .WithB2CAuthority(App.AuthorityEditProfile)
+ .WithAccount(account)
+ .WithB2CAuthority(AuthorityEditProfile)
.ExecuteAsync();
- DisplayBasicTokenInfo(authResult);
- }
+ }
catch { }
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-configure-app-access-web-apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md
Some permissions, like Microsoft Graph's *Files.Read.All* permission, require ad
### Configure client credentials
-Apps that use application permissions authenticate as themselves by using their own credentials, without requiring any user interaction. Before your application (or API) can access Microsoft Graph, your own web API, or any another API by using application permissions, you must configure that client app's credentials.
+Apps that use application permissions authenticate as themselves by using their own credentials, without requiring any user interaction. Before your application (or API) can access Microsoft Graph, your own web API, or another API by using application permissions, you must configure that client app's credentials.
For more information about configuring an app's credentials, see the [Add credentials](quickstart-register-app.md#add-credentials) section of [Quickstart: Register an application with the Microsoft identity platform](quickstart-register-app.md).
The **Grant admin consent** button is *disabled* if you aren't an admin or if no
Advance to the next quickstart in the series to learn how to configure which account types can access your application. For example, you might want to limit access only to those users in your organization (single-tenant) or allow users in other Azure AD tenants (multi-tenant) and those with personal Microsoft accounts (MSA). > [!div class="nextstepaction"]
-> [Modify the accounts supported by an application](quickstart-modify-supported-accounts.md)
+> [Modify the accounts supported by an application](quickstart-modify-supported-accounts.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-nodejs-desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md
class AuthProvider {
* https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md */ async getAccount() {
- // need to call getAccount here?
const cache = this.clientApplication.getTokenCache(); const currentAccounts = await cache.getAllAccounts();
active-directory https://docs.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-delegate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate.md
# Delegation and roles in Azure AD entitlement management
-By default, Global administrators and User administrators can create and manage all aspects of Azure AD entitlement management. However, the users in these roles may not know all the situations where access packages are required. Typically it is users within the respective departments, teams, or projects who know who they are collaborating with, using what resources, and for how long. Instead of granting unrestricted permissions to non-administrators, you can grant users the least permissions they need to perform their job and avoid creating conflicting or inappropriate access rights.
+By default, Global administrators and User administrators can create and manage all aspects of Azure AD entitlement management. However, the users in these roles may not know all the situations where access packages are required. Typically it's users within the respective departments, teams, or projects who know who they're collaborating with, using what resources, and for how long. Instead of granting unrestricted permissions to non-administrators, you can grant users the least permissions they need to do their job and avoid creating conflicting or inappropriate access rights.
This video provides an overview of how to delegate access governance from IT administrator to users who aren't administrators.
To understand how you might delegate access governance in entitlement management
![Delegate from IT administrator to managers](./media/entitlement-management-delegate/delegate-admin-dept-managers.png)
-As the IT administrator, Hana has contacts in each department -- Mamta in Marketing, Mark in Finance, and Joe in Legal who are responsible for their department's resources and business critical content.
+As the IT administrator, Hana has contacts in each department: Mamta in Marketing, Mark in Finance, and Joe in Legal. They're responsible for their department's resources and business-critical content.
-With entitlement management, you can delegate access governance to these non-administrators because they are the ones who know which users need access, for how long, and to which resources. This ensures the right people are managing access for their departments.
+With entitlement management, you can delegate access governance to these non-administrators because they're the ones who know which users need access, for how long, and to which resources. Delegating to non-administrators ensures the right people are managing access for their departments.
Here is one way that Hana could delegate access governance to the marketing, finance, and legal departments.
Here is one way that Hana could delegate access governance to the marketing, fin
1. Hana adds that group to the catalog creators role.
- Mamta, Mark, and Joe can now create catalogs for their departments, add resources that their departments need, and do further delegation within the catalog.
-
- Note that Mamta, Mark, and Joe cannot see each other's catalogs.
+ Mamta, Mark, and Joe can now create catalogs for their departments, add resources that their departments need, and do further delegation within the catalog. They can't see each other's catalogs.
1. Mamta creates a **Marketing** catalog, which is a container of resources. 1. Mamta adds the resources that her marketing department owns to this catalog.
-1. Mamta can add additional people from her department as catalog owners for this catalog. This helps share the catalog management responsibilities.
+1. Mamta can add other people from her department as catalog owners for this catalog, which helps share the catalog management responsibilities.
1. Mamta can further delegate the creation and management of access packages in the Marketing catalog to project managers in the Marketing department. She can do this by assigning them to the access package manager role. An access package manager can create and manage access packages.
Entitlement management has the following roles that are specific to entitlement
| Entitlement management role | Description | | | |
-| Catalog creator | Create and manage catalogs. Typically an IT administrator who is not a Global administrator, or a resource owner for a collection of resources. The person that creates a catalog automatically becomes the catalog's first catalog owner, and can add additional catalog owners. A catalog creator can't manage or see catalogs that they don't own and can't add resources they don't own to a catalog. If the catalog creator needs to manage another catalog or add resources they don't own, they can request to be a co-owner of that catalog or resource. |
-| Catalog owner | Edit and manage existing catalogs. Typically an IT administrator or resource owners, or a user who the catalog owner has designated. |
+| Catalog creator | Create and manage catalogs. Typically an IT administrator who isn't a Global administrator, or a resource owner for a collection of resources. The person that creates a catalog automatically becomes the catalog's first catalog owner, and can add more catalog owners. A catalog creator can't manage or see catalogs that they don't own and can't add resources they don't own to a catalog. If the catalog creator needs to manage another catalog or add resources they don't own, they can request to be a co-owner of that catalog or resource. |
+| Catalog owner | Edit and manage existing catalogs. Typically an IT administrator or resource owners, or a user who the catalog owner has chosen. |
| Access package manager | Edit and manage all existing access packages within a catalog. | | Access package assignment manager | Edit and manage all existing access packages' assignments. |
-In addition, a designated approver and a requestor of an access package also have rights, although they are not roles.
+Also, the chosen approver and a requestor of an access package have rights, although they're not roles.
| Right | Description | | | |
-| Approver | Authorized by a policy to approve or deny requests to access packages, though they cannot change the access package definitions. |
+| Approver | Authorized by a policy to approve or deny requests to access packages, though they can't change the access package definitions. |
| Requestor | Authorized by a policy of an access package to request that access package. |
-The following table lists the tasks that the entitlement management roles can perform.
+The following table lists the tasks that the entitlement management roles can do.
| Task | Admin | Catalog creator | Catalog owner | Access package manager | Access package assignment manager | | | :: | :: | :: | :: | :: |
The following table lists the tasks that the entitlement management roles can pe
## Required roles to add resources to a catalog
-A Global administrator can add or remove any group (cloud-created security groups or cloud-created Microsoft 365 Groups), application, or SharePoint Online site in a catalog. A User administrator can add or remove any group or application in a catalog, except for a group configured as assignable to a directory role. Note that a user administrator can manage access packages in a catalog that includes groups configured as assignable to a directory role.
+A Global administrator can add or remove any group (cloud-created security groups or cloud-created Microsoft 365 Groups), application, or SharePoint Online site in a catalog. A User administrator can add or remove any group or application in a catalog, except for a group configured as assignable to a directory role. Note that a user administrator can manage access packages in a catalog that includes groups configured as assignable to a directory role. For more information on role-assignable groups, reference [Create a role-assignable group in Azure Active Directory](../roles/groups-create-eligible.md).
-For a user who is not a Global administrator or a User administrator, to add groups, applications, or SharePoint Online sites to a catalog, that user must have *both* the required Azure AD directory role and catalog owner entitlement management role. The following table lists the role combinations that are required to add resources to a catalog. To remove resources from a catalog, you must have the same roles.
+For a user who isn't a Global administrator or a User administrator, to add groups, applications, or SharePoint Online sites to a catalog, that user must have *both* the required Azure AD directory role and catalog owner entitlement management role. The following table lists the role combinations that are required to add resources to a catalog. To remove resources from a catalog, you must have the same roles.
| Azure AD directory role | Entitlement management role | Can add security group | Can add Microsoft 365 Group | Can add app | Can add SharePoint Online site | | | :: | :: | :: | :: | :: |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-install-prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
na ms.devlang: na Previously updated : 11/05/2020 Last updated : 02/16/2021
Prior to version 1.1.614.0, Azure AD Connect by default uses TLS 1.0 for encrypt
``` 1. If you also want to enable TLS 1.2 between the sync engine server and a remote SQL Server, make sure you have the required versions installed for [TLS 1.2 support for Microsoft SQL Server](https://support.microsoft.com/kb/3135244).
+### DCOM prerequisites on the synchronization server
+During the installation of the synchronization service, Azure AD Connect checks for the presence of the following registry key:
+
+- HKEY_LOCAL_MACHINE: Software\Microsoft\Ole
+
+Under this registry key, Azure AD Connect checks whether the following values are present and uncorrupted:
+
+- [MachineAccessRestriction](https://docs.microsoft.com/windows/win32/com/machineaccessrestriction)
+- [MachineLaunchRestriction](https://docs.microsoft.com/windows/win32/com/machinelaunchrestriction)
+- [DefaultLaunchPermission](https://docs.microsoft.com/windows/win32/com/defaultlaunchpermission)
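The following PowerShell sketch is illustrative only and isn't part of Azure AD Connect; it reads the key above and reports whether each of these values is present (it doesn't assess whether a value is corrupted):

```powershell
# Illustrative check of the DCOM registry values that Azure AD Connect inspects.
$olePath = 'HKLM:\SOFTWARE\Microsoft\Ole'
$names   = 'MachineAccessRestriction', 'MachineLaunchRestriction', 'DefaultLaunchPermission'

foreach ($name in $names) {
    # Returns $null (without an error) when the value doesn't exist.
    $property = Get-ItemProperty -Path $olePath -Name $name -ErrorAction SilentlyContinue
    if ($null -ne $property) {
        Write-Output "$name is present."
    } else {
        Write-Output "$name is missing."
    }
}
```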
+ ## Prerequisites for federation installation and configuration ### Windows Remote Management When you use Azure AD Connect to deploy AD FS or the Web Application Proxy (WAP), check these requirements:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-sso-how-it-works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-how-it-works.md
The sign-in flow on a web browser is as follows:
6. Active Directory locates the computer account and returns a Kerberos ticket to the browser encrypted with the computer account's secret. 7. The browser forwards the Kerberos ticket it acquired from Active Directory to Azure AD. 8. Azure AD decrypts the Kerberos ticket, which includes the identity of the user signed into the corporate device, using the previously shared key.-
- >[!NOTE]
- >Azure AD will attempt to match user's UPN from the Kerberos ticket to an Azure AD user object that has a corresponding value in the userPrincipalName attribute. If this is not successful, Azure AD will fall back to matching the samAccountName from the Kerberos ticket to an Azure AD user object that has a corresponding value in the onPremisesSamAccountName attribute.
-
9. After evaluation, Azure AD either returns a token back to the application or asks the user to perform additional proofs, such as Multi-Factor Authentication. 10. If the user sign-in is successful, the user is able to access the application.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/admin-units-manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-manage.md
For more granular administrative control in Azure Active Directory (Azure AD), y
![Screenshot showing the "Grant admin consent for Graph explorer" link.](./media/admin-units-manage/select-graph-explorer.png)
-1. Use the preview version of Azure AD PowerShell.
+1. Use [Azure AD PowerShell](https://www.powershellgallery.com/packages/AzureAD/).
## Add an administrative unit
You can add an administrative unit by using either the Azure portal or PowerShel
### Use PowerShell
-Install Azure AD PowerShell (preview) before you try to run the following commands:
+Install [Azure AD PowerShell](https://www.powershellgallery.com/packages/AzureAD/) before you try to run the following commands:
```powershell Connect-AzureAD
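If the module isn't already on the machine, a minimal sketch might look like the following (the administrative unit's display name and description are hypothetical examples):

```powershell
# Install the AzureAD module from the PowerShell Gallery, then sign in.
Install-Module -Name AzureAD
Connect-AzureAD

# Create an administrative unit (the display name and description are hypothetical).
New-AzureADMSAdministrativeUnit -DisplayName "Seattle District Technical Schools" -Description "Seattle district technical schools administration"
```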
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/logicmonitor-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/logicmonitor-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://<companyname>.logicmonitor.com`-
+
+ c. In the **Reply URL (Assertion Consumer Service URL)** textbox, type a URL using the following pattern:
+ `https://<companyname>.logicmonitor.com/santaba/saml/SSO/`
+
> [!NOTE] > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [LogicMonitor Client support team](https://www.logicmonitor.com/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/samsung-knox-and-business-services-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/samsung-knox-and-business-services-tutorial.md
In this tutorial, you'll learn how to integrate Samsung Knox and Business Servic
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Samsung Knox and Business Services single sign-on (SSO) enabled subscription.
+* A Samsung Knox account.
## Scenario description
To configure the integration of Samsung Knox and Business Services into Azure AD
## Configure and test Azure AD SSO for Samsung Knox and Business Services
-Configure and test Azure AD SSO with Samsung Knox and Business Services using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Samsung Knox and Business Services.
+Configure and test Azure AD SSO with Samsung Knox and Business Services using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in [SamsungKnox.com](https://samsungknox.com/).
To configure and test Azure AD SSO with Samsung Knox and Business Services, perform the following steps:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- In the **Sign on URL** text box, type the URL:
- `https://www.samsungknox.com`
+ * In the **Sign on URL** text box, type the URL:
+ `https://www.samsungknox.com`
+ * In the **Reply URL (assertion consumer service URL)** text box, type the URL:
+ `https://central.samsungknox.com/ams/ad/saml/acs`
+
+ ![Basic SAML Configuration values](https://docs.samsungknox.com/assets/merge/ad-sso/basic-saml-configuration.png)
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy **App Federation Metadata Url**, and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Samsung Knox and Business Services SSO
-1. In a different web browser window, sign in to your Samsung Knox and Business Services company site as an administrator.
+1. In a different web browser window, sign in to [SamsungKnox.com](https://samsungknox.com/) as an administrator.
1. Click on the **Avatar** on the top right corner.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the left sidebar, click **ACTIVE DIRECTORY SETTINGS** and perform the following steps.
- ![ACTIVE DIRECTORY SETTINGS](./media/samsung-knox-and-business-services-tutorial/sso-settings.png)
+ ![ACTIVE DIRECTORY SETTINGS](https://docs.samsungknox.com/assets/merge/ad-sso/ad-5.png)
a. In the **Identifier(entity ID)** textbox, paste the **Identifier** value which you have entered in the Azure portal. b. In the **App federation metadata URL** textbox, paste the **App Federation Metadata Url** value which you have copied from the Azure portal.
- c. click on **CONNECT TO AD SSO**.
+ c. Click on **CONNECT TO AD SSO**.
### Create Samsung Knox and Business Services test user
-In this section, you create a user called Britta Simon in Samsung Knox and Business Services. Work with [Samsung Knox and Business Services support team](mailto:noreplyk.sec@samsung.com) to add the users in the Samsung Knox and Business Services platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Samsung Knox and Business Services. Refer to the [Knox Configure](https://docs.samsungknox.com/admin/knox-configure/Administrators.htm) or [Knox Mobile Enrollment](https://docs.samsungknox.com/admin/knox-mobile-enrollment/kme-add-an-admin.htm) admin guides for instructions on how to invite a sub-administrator, or test user, to your Samsung Knox organization. Users must be created and activated before you use single sign-on.
## Test SSO In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Samsung Knox and Business Services Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to [SamsungKnox.com](https://samsungknox.com/), where you can initiate the login flow.
-* Go to Samsung Knox and Business Services Sign-on URL directly and initiate the login flow from there.
-
-* You can use Microsoft My Apps. When you click the Samsung Knox and Business Services tile in the My Apps, this will redirect to Samsung Knox and Business Services Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* Go to [SamsungKnox.com](https://samsungknox.com/) directly and initiate the login flow from there.
+* You can use Microsoft My Apps. When you click the Samsung Knox and Business Services tile in the My Apps, this will redirect to [SamsungKnox.com](https://samsungknox.com/). For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
## Next steps
aks https://docs.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
export IDENTITY_CLIENT_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n $
export IDENTITY_RESOURCE_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n ${IDENTITY_NAME} --query id -otsv)" ```
+## Assign permissions for the managed identity
+
+The *IDENTITY_CLIENT_ID* managed identity must have Reader permissions in the resource group that contains the virtual machine scale set of your AKS cluster.
+
+```azurecli-interactive
+NODE_GROUP=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)
+NODES_RESOURCE_ID=$(az group show -n $NODE_GROUP -o tsv --query "id")
+az role assignment create --role "Reader" --assignee "$IDENTITY_CLIENT_ID" --scope $NODES_RESOURCE_ID
+```
+ ## Create a pod identity Create a pod identity for the cluster using `az aks pod-identity add`.
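A hedged example of the command, reusing the variables exported above (the pod identity name and namespace are hypothetical):

```azurecli-interactive
az aks pod-identity add --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --namespace my-app \
    --name my-pod-identity \
    --identity-resource-id ${IDENTITY_RESOURCE_ID}
```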
aks https://docs.microsoft.com/en-us/azure/aks/virtual-nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/virtual-nodes.md
description: Overview of how using virtual node with Azure Kubernetes Services (AKS) Previously updated : 09/21/2020 Last updated : 02/17/2021
Virtual Nodes functionality is heavily dependent on ACI's feature set. In additi
* [DaemonSets](concepts-clusters-workloads.md#statefulsets-and-daemonsets) will not deploy pods to the virtual nodes * Virtual nodes support scheduling Linux pods. You can manually install the open source [Virtual Kubelet ACI](https://github.com/virtual-kubelet/azure-aci) provider to schedule Windows Server containers to ACI. * Virtual nodes require AKS clusters with Azure CNI networking.
-* Virtual nodes with Private clusters.
* Using api server authorized ip ranges for AKS. * Volume mounting Azure Files share support [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-volume.md) * Using IPv6 is not supported.
api-management https://docs.microsoft.com/en-us/azure/api-management/diagnose-solve-problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/diagnose-solve-problems.md
+
+ Title: Azure API Management Diagnose and solve problems
+description: Learn how to troubleshoot issues with your API in Azure API Management with the Diagnose and Solve tool in the Azure portal.
+++ Last updated : 02/05/2021+++
+# Azure API Management Diagnostics overview
+
+When you build and manage an API in Azure API Management, you want to be prepared for any issues that may arise, from 404 not found errors to 502 bad gateway errors. API Management Diagnostics is an intelligent and interactive experience that helps you troubleshoot your API published in APIM, with no configuration required. When you do run into issues with your published APIs, API Management Diagnostics points out what's wrong and guides you to the right information to quickly troubleshoot and resolve the issue.
+
+Although this experience is most helpful when you're having issues with your API within the last 24 hours, all the diagnostic graphs are always available for you to analyze.
+
+## Open API Management Diagnostics
+
+To access API Management Diagnostics, navigate to your API Management service instance in the [Azure portal](https://portal.azure.com). In the left navigation, select **Diagnose and solve problems**.
++++
+## Intelligent search
+
+You can search for your issues or problems in the search bar at the top of the page. The search also helps you find tools that may help you troubleshoot or resolve your issues.
+++
+## Troubleshooting categories
+
+You can troubleshoot issues under categories. Common issues that are related to your API performance, gateway, API policies, and service tier can all be analyzed within each category. Each category also provides more specific diagnostics checks.
+++
+### Availability and performance
+
+Check your API availability and performance issues under this category. After you select this category tile, a few recommended checks appear in an interactive interface. Click each check to dive deep into the specifics of each issue. Each check also provides a graph showing your API performance and a summary of performance issues. For example, your API service may have had a 5xx error and a timeout at the backend in the last hour.
+++++
+### API policies
+
+This category detects errors and notifies you of your policy issues.
+
+A similar interactive interface guides you to the data metrics to help you troubleshoot your API policies configuration.
++
+### Gateway performance
+
+For gateway requests or responses, or any 4xx or 5xx errors on your gateway, use this category to monitor and troubleshoot. Similarly, use the interactive interface to dive deep into the specific area that you want to check for your API gateway performance.
++
+### Service upgrade
+
+This category checks which service tier (SKU) you are currently using and reminds you to upgrade to avoid any issues that may be related to that tier. The same interactive interface helps you go deep with more graphics and a summary check result.
++
+## Search documentation
+
+In addition to the Diagnose and solve problems tools, you can search for troubleshooting documentation related to your issue. After running the diagnostics on your service, select **Search Documentation** in the interactive interface.
+
+ :::image type="content" source="media/diagnose-solve-problems/search-documentation.png" alt-text="screenshot 1 of how to use Search Documentation function.":::
++
+ :::image type="content" source="media/diagnose-solve-problems/search-documentation-2.png" alt-text="screenshot 2 of how to use Search Documentation.":::
++
+## Next steps
+
+* Also use [API analytics](howto-use-analytics.md) to analyze the usage and performance of the APIs.
+* Want to troubleshoot Web Apps issues with Diagnostics? Read it [here](../app-service/overview-diagnostics.md)
+* Leverage Diagnostics to check Azure Kubernetes Services issues. See [Diagnostics on AKS](../aks/concepts-diagnostics.md)
+* Post your questions or feedback at [UserVoice](https://feedback.azure.com/forums/248703-api-management) by adding "[Diag]" in the title.
app-service https://docs.microsoft.com/en-us/azure/app-service/app-service-key-vault-references https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-key-vault-references.md
A Key Vault reference is of the form `@Microsoft.KeyVault({referenceString})`, w
For example, a complete reference would look like the following: ```
-@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret)
+@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
``` Alternatively:
attestation https://docs.microsoft.com/en-us/azure/attestation/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
+
+ Title: Built-in policy definitions for Azure Attestation
+description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated : 02/11/2021++++++
+# Azure Policy built-in definitions for Azure Attestation
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
+definitions for Azure Attestation. For additional Azure Policy built-ins for other services, see
+[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Azure Attestation
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
automation https://docs.microsoft.com/en-us/azure/automation/automation-dsc-extension-history https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-extension-history.md
Title: Work with Azure Desired State Configuration extension version history
-description: This article tells how to work with the version history for the Desired State Configuration (DSC) extension in Azure.
Previously updated : 07/22/2020
+description: This article shares version history information for the Desired State Configuration (DSC) extension in Azure.
Last updated : 02/17/2021 keywords: dsc, powershell, azure, extension
# Work with Azure Desired State Configuration extension version history
-The Azure Desired State Configuration (DSC) VM Extension is updated as-needed to support
-enhancements and new capabilities delivered by Azure, Windows Server, and the Windows Management
-Framework (WMF) that includes Windows PowerShell.
+The Azure Desired State Configuration (DSC) VM [extension](../virtual-machines/extensions/dsc-overview.md) is updated as-needed to support enhancements and new capabilities delivered by Azure, Windows Server, and the Windows Management Framework (WMF) that includes Windows PowerShell.
-This article provides information about each version of the Azure DSC VM Extension, what
-environments it supports, and comments and remarks on new features or changes.
+This article provides information about each version of the Azure DSC VM extension, what environments it supports, and comments and remarks on new features or changes.
## Latest version
+### Version 2.83
+
+- **Release date:**
+ - February 2021
+- **OS support:**
+ - Windows Server 2019
+ - Windows Server 2016
+ - Windows Server 2012 R2
+ - Windows Server 2012
+ - Windows Server 2008 R2 SP1
+ - Windows Client 7/8.1/10
+ - Nano Server
+- **WMF support:**
+ - WMF 5.1
+ - WMF 5.0 RTM
+ - WMF 4.0 Update
+ - WMF 4.0
+- **Environment:**
+ - Azure
+ - Azure China 21Vianet
+ - Azure Government
+- **Remarks:** This release includes a fix for unsigned binaries with the Windows VM extension.
+ ### Version 2.80 - **Release date:**
environments it supports, and comments and remarks on new features or changes.
- Azure - Azure China 21Vianet - Azure Government-- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSes, it
- installs the [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/)
- (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
+- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSs, it installs the [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/) (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
- **New features:** - Improvement in extension metadata for substatus and other minor bug fixes. ## Supported versions > [!WARNING]
-> Versions 2.4 through 2.13 use WMF 5.0 Public Preview whose signing certificates expired in August
-> 2016. For more information about this issue, see
-> [blog post](https://devblogs.microsoft.com/powershell/azure-dsc-extension-versions-2-4-up-to-2-13-will-retire-in-august/).
+> Versions 2.4 through 2.13 use WMF 5.0 Public Preview, whose signing certificates expired in August 2016.
+> For more information about this issue, see the following
+> [blog article](https://devblogs.microsoft.com/powershell/azure-dsc-extension-versions-2-4-up-to-2-13-will-retire-in-august/).
### Version 2.75
environments it supports, and comments and remarks on new features or changes.
2008 R2 SP1, Windows Client 7/8.1/10, Nano Server - **WMF support:** WMF 5.1, WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure-- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSes, it
- installs the
- [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/)
- (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
+- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSs, it
+ installs the [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/) (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
- **New features:**
- - After GitHub's recent move to TLS 1.2, you can't onboard a VM to Azure Automation DSC using DIY
- Resource Manager templates available on Azure Marketplace or use DSC extension to get any config
- hosted on GitHub. You will see an error similar to the following while deploying the extension:
+ - After GitHub's enforcement of TLS 1.2, you can't onboard a VM to Azure Automation State Configuration using DIY Resource Manager templates available on Azure Marketplace, or use the DSC extension to retrieve any configurations hosted on GitHub. An error similar to the following is returned while deploying the extension:
```json {
environments it supports, and comments and remarks on new features or changes.
} ```
- - In the new extension version, TLS 1.2 is now enforced. While deploying the extension if you
- already had the AutoUpgradeMinorVersion = true in the Resource Manager template, then the
- extension will get autoupgraded to 2.75. For manual updates, specify `TypeHandlerVersion = 2.75`
+ - In the new extension version, TLS 1.2 is now enforced. While deploying the extension, if you
+ already specified `AutoUpgradeMinorVersion = true` in the Resource Manager template, the
+ extension is autoupgraded to 2.75. For manual updates, specify `TypeHandlerVersion = 2.75`
in your Resource Manager template. ### Version 2.70 - 2.72
environments it supports, and comments and remarks on new features or changes.
2008 R2 SP1, Windows Client 7/8.1/10, Nano Server - **WMF support:** WMF 5.1, WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure-- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSes, it
+- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSs, it
installs the [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/) (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM. - **New features:**
- - Bug fixes & improvements that simplifies using DSC Azure Automation through the portal UI as
- well as Resource Manager template. For more information, see
- [Default Configuration Script](../virtual-machines/extensions/dsc-overview.md) in the DSC
- Extension documentation.
+ - Bug fixes & improvements that simplify using Azure Automation State Configuration in the portal and with a Resource Manager template. For more information, see [Default Configuration Script](../virtual-machines/extensions/dsc-overview.md) in the DSC extension documentation.
### Version 2.26
environments it supports, and comments and remarks on new features or changes.
2008 R2 SP1, Windows Client 7/8.1/10, Nano Server - **WMF support:** WMF 5.1, WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure-- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSes, it
- installs the
- [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/)
- (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
+- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSs, it
+ installs the [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/) (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
- **New features:** - Telemetry improvements.
environments it supports, and comments and remarks on new features or changes.
2008 R2 SP1, Windows Client 7/8.1/10, Nano Server - **WMF support:** WMF 5.1, WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure-- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSes, it
- installs the
- [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/)
- (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
+- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSs, it
+ installs the [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/) (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
- **New features:** - Several bug fixes and other minor improvements were added.
environments it supports, and comments and remarks on new features or changes.
2008 R2 SP1, Nano Server - **WMF support:** WMF 5.1, WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure-- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSes, it
- installs the
- [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/)
- (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
+- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSs, it
+ installs the [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/) (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
- **New features:** - Exposes VM UUID & DSC Agent ID as extension metadata. Other minor improvements were added.
environments it supports, and comments and remarks on new features or changes.
2008 R2 SP1, Nano Server - **WMF support:** WMF 5.1, WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure-- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSes, it
- installs the
- [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/)
- (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
+- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSs, it
+ installs the [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/) (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
- **New features:**
- - Lots of bug fixes and other improvements were added.
+ - Bug fixes and other improvements were added.
### Version 2.22
environments it supports, and comments and remarks on new features or changes.
2008 R2 SP1, Nano Server - **WMF support:** WMF 5.1, WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure-- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSes, it
- installs the
- [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/)
- (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
+- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSs, it
+ installs the [Windows Management Framework 5.1](https://devblogs.microsoft.com/powershell/wmf-5-1-releasing-january-2017/) (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
- **New features:**
- - The DSC Extension now has support for WMF 5.1.
+ - The DSC extension now supports WMF 5.1.
- Minor other improvements were added. ### Version 2.21
environments it supports, and comments and remarks on new features or changes.
2008 R2 SP1, Nano Server - **WMF support:** WMF 5.1 Preview, WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure-- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSes, it
- installs the
- [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/)
- (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
+- **Remarks:** This version uses DSC as included in Windows Server 2016; for other Windows OSs, it
+ installs the [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/) (installing WMF requires a reboot). For Nano Server, DSC role is installed on the VM.
- **New features:**
- - The DSC Extension is now available on Nano Server. This version primarily contains code changes
- for running the Extension on Nano Server.
+ - The DSC extension is now available on Nano Server. This version primarily contains code changes for running the extension on Nano Server.
- Minor other improvements were added. ### Version 2.20
environments it supports, and comments and remarks on new features or changes.
- **WMF support:** WMF 5.1 Preview, WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure - **Remarks:** This version uses DSC as included in Windows Server 2016 Technical Preview; for other
- Windows OSes, it installs the
- [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/)
- (installing WMF requires a reboot).
+ Windows OSs, it installs the [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/) (installing WMF requires a reboot).
- **New features:** - Support for WMF 5.1 Preview. When first published, this version was an optional upgrade and you had to specify Wmfversion = '5.1PP' in Resource Manager templates to install WMF 5.1 preview.
environments it supports, and comments and remarks on new features or changes.
- **WMF support:** WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure, Azure China 21Vianet, Azure Government - **Remarks:** This version uses DSC as included in Windows Server 2016 Technical Preview; for other
- Windows OSes, it installs the
- [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/)
- (installing WMF requires a reboot).
+ Windows OSs, it installs the [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/) (installing WMF requires a reboot).
- **New features:**
- - The DSC Extension is now onboarded to Azure China 21Vianet. This version primarily contains fixes for
- running the Extension on Azure China 21Vianet.
+ - The DSC extension is now available in Azure China 21Vianet. This version contains fixes for running the extension on Azure China 21Vianet.
### Version 2.18
environments it supports, and comments and remarks on new features or changes.
- **WMF support:** WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure - **Remarks:** This version uses DSC as included in Windows Server 2016 Technical Preview; for other
- Windows OSes, it installs the
- [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/)
- (installing WMF requires a reboot).
+ Windows OSs, it installs the [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/) (installing WMF requires a reboot).
- **New features:**
- - Make telemetry non-blocking when an error occurs during telemetry hotfix download (known Azure
- DNS issue) or during install.
- - Fix for the intermittent issue where extension stops processing configuration after a reboot.
- This was causing the DSC Extension to remain in 'transitioning' state.
+ - Make telemetry non-blocking when an error occurs during telemetry hotfix download (known Azure DNS issue) or during install.
+ - Fix for the intermittent issue where extension stops processing configuration after a reboot. This was causing the DSC extension to remain in 'transitioning' state.
- Minor other fixes and improvements were added. ### Version 2.17
environments it supports, and comments and remarks on new features or changes.
- **WMF support:** WMF 5.0 RTM, WMF 4.0 Update, WMF 4.0 - **Environment:** Azure - **Remarks:** This version uses DSC as included in Windows Server 2016 Technical Preview; for other
- Windows OSes, it installs the
- [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/)
- (installing WMF requires a reboot).
+ Windows OSs, it installs the [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/) (installing WMF requires a reboot).
- **New features:**
- - Support for WMF 4.0 Update. For more information on WMF 4.0 Update, see
- [this blog](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-4-0-update-now-available-for-windows-server-2012-windows-server-2008-r2-sp1-and-windows-7-sp1/).
- - Retry logic on errors that occur during the DSC Extension install or while applying a DSC
- configuration post extension install. As a part of this change, the extension will retry the
- installation if a previous install failed or re-enact a DSC configuration that had previously
- failed, for a maximum three times until it reaches the completion state (Success/Error) or if a
- new request comes. If the extension fails due to invalid user settings/user input, it does not
- retry. In this case, the extension needs to be invoked again with a new request and correct user
- settings. Note: The DSC Extension is dependent on the Azure VM agent for the retries. Azure VM
- agent invokes the extension with the last failed request until it reaches a success or error
- state.
+ - Support for WMF 4.0 Update. For more information on WMF 4.0 Update, see [this blog](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-4-0-update-now-available-for-windows-server-2012-windows-server-2008-r2-sp1-and-windows-7-sp1/).
+ - Retry logic on errors that occur during the DSC extension install or while applying a DSC configuration after the extension is installed. As a part of this change, the extension retries the installation if a previous install failed, or re-enacts a DSC configuration that previously failed, a maximum of three times until it reaches a completion state (Success/Error) or a new request arrives. If the extension fails because of invalid user settings or user input, it does not retry. In this case, the extension needs to be invoked again with a new request and correct user settings.
+
+ > [!NOTE]
+ > The DSC extension is dependent on the Azure VM agent for the retries. Azure VM agent invokes the extension with the last failed request until it reaches a success or error state.
### Version 2.16
environments it supports, and comments and remarks on new features or changes.
- **WMF support:** WMF 5.0 RTM, WMF 4.0 - **Environment:** Azure - **Remarks:** This version uses DSC as included in Windows Server 2016 Technical Preview; for other
- Windows OSes, it installs the
- [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/)
- (installing WMF requires a reboot).
+ Windows OSs, it installs the [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/) (installing WMF requires a reboot).
- **New features:** - Improvement in error handling and other minor bug fixes.
- - New property in DSC Extension settings. 'ForcePullAndApply' in AdvancedOptions is added to
- enable the DSC Extension enact DSC configurations when the refresh mode is Pull (as opposed to
- the default Push mode). For more information, please refer to
- [this blog](https://devblogs.microsoft.com/powershell/arm-dsc-extension-settings/)
- to get more information on the DSC Extension settings.
+ - New property in DSC extension settings. `ForcePullAndApply` in AdvancedOptions is added to enable the DSC extension to enact DSC configurations when the refresh mode is Pull (as opposed to the default Push mode). For more information about the DSC extension settings, refer to [this blog](https://devblogs.microsoft.com/powershell/arm-dsc-extension-settings/).
### Version 2.15
environments it supports, and comments and remarks on new features or changes.
- **WMF support:** WMF 5.0 RTM, WMF 4.0 - **Environment:** Azure - **Remarks:** This version uses DSC as included in Windows Server 2016 Technical Preview; for other
- Windows OSes, it installs the
- [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/)
- (installing WMF requires a reboot).
+ Windows OSs, it installs the [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/) (installing WMF requires a reboot).
- **New features:**
- - In extension version 2.14, changes to install WMF RTM were included. While upgrading from
- extension version 2.13.2.0 to 2.14.0.0, you may notice that some DSC cmdlets fail or your
- configuration fails with an error – 'No Instance found with given property values'. For more
- information, see the
- [DSC release notes](/powershell/scripting/wmf/known-issues/known-issues-dsc). The workarounds
- for these issues have been added in 2.15 version.
- - Unfortunately, if you have already installed version 2.14 and are running into one of the above
- two issues, you will need to perform these steps manually. In an elevated PowerShell session:
+ - In extension version 2.14, changes to install WMF RTM were included. While upgrading from extension version 2.13.2.0 to 2.14.0.0, you may notice that some DSC cmdlets fail or your configuration fails with an error – 'No Instance found with given property values'. For more information, see the [DSC release notes](/powershell/scripting/wmf/known-issues/known-issues-dsc). The workarounds for these issues have been added in version 2.15.
+ - If you already installed version 2.14 and are running into one of the above two issues, you need to perform these steps manually. In an elevated PowerShell session, run the following commands:
- `Remove-Item -Path $env:SystemRoot\system32\Configuration\DSCEngineCache.mof` - `mofcomp $env:windir\system32\wbem\DscCoreConfProv.mof`
environments it supports, and comments and remarks on new features or changes.
- **WMF support:** WMF 5.0 RTM, WMF 4.0 - **Environment:** Azure - **Remarks:** This version uses DSC as included in Windows Server 2016 Technical Preview; for other
- Windows OSes, it installs the
- [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/)
- (installing WMF requires a reboot).
+ Windows OSs, it installs the [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/) (installing WMF requires a reboot).
- **New features:** - Uses WMF RTM.
- - Enables data collection in order to improve the quality of the DSC Extension. For more
- information, see
- [the blog](https://devblogs.microsoft.com/powershell/azure-dsc-extension-data-collection-2/).
- - Provides an updated settings format for the extension in a Resource Manager template. For more
- information, see
- [the blog](https://devblogs.microsoft.com/powershell/arm-dsc-extension-settings/).
+ - Enables data collection in order to improve the quality of the DSC extension. For more
+ information, see this [blog article](https://devblogs.microsoft.com/powershell/azure-dsc-extension-data-collection-2/).
+ - Provides an updated settings format for the extension in a Resource Manager template. For more information, see this [blog article](https://devblogs.microsoft.com/powershell/arm-dsc-extension-settings/).
- Bug fixes and other enhancements. ## Next steps - For more information about PowerShell DSC, see [PowerShell documentation center](/powershell/scripting/dsc/overview/overview). - Examine the [Resource Manager template for the DSC extension](../virtual-machines/extensions/dsc-template.md).-- For more functionality and resources that you can manage with PowerShell DSC, browse the [PowerShell gallery](https://www.powershellgallery.com/packages?q=DscResource&x=0&y=0).
+- For other functionality and resources that you can manage with PowerShell DSC, browse the [PowerShell gallery](https://www.powershellgallery.com/packages?q=DscResource&x=0&y=0).
- For details about passing sensitive parameters into configurations, see [Manage credentials securely with the DSC extension handler](../virtual-machines/extensions/dsc-credentials.md).
automation https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-hrw-run-runbooks.md
Use the following procedure to specify a Run As account for a Hybrid Runbook Wor
As part of your automated build process for deploying resources in Azure, you might require access to on-premises systems to support a task or set of steps in your deployment sequence. To provide authentication against Azure using the Run As account, you must install the Run As account certificate. >[!NOTE]
->This PowerShell runbook currently does not run on LInux machines. It runs only on Windows machines.
+>This PowerShell runbook currently does not run on Linux machines. It runs only on Windows machines.
> The following PowerShell runbook, called **Export-RunAsCertificateToHybridWorker**, exports the Run As certificate from your Azure Automation account. The runbook downloads and imports the certificate into the local machine certificate store on a Hybrid Runbook Worker that is connected to the same account. Once it completes that step, the runbook verifies that the worker can successfully authenticate to Azure using the Run As account.
automation https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 01/08/2021 Last updated : 02/17/2021
PowerShell Workflow runbooks are text runbooks based on [Windows PowerShell Work
Python runbooks compile under Python 2 and Python 3. Python 3 runbooks are currently in preview. You can directly edit the code of the runbook using the text editor in the Azure portal. You can also use an offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation.
+Python 3 runbooks are supported in the following Azure global infrastructures:
+
+* Azure global
+* Azure Government
+ ### Advantages * Use the robust Python libraries.
automation https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-windows-hrw-install.md
# Deploy a Windows Hybrid Runbook Worker
-You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the Azure or non-Azure machine, including servers registered with [Azure Arc enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly it and against resources in the environment to manage those local resources.
+You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources.
Azure Automation stores and manages runbooks and then delivers them to one or more designated machines. This article describes how to deploy a user Hybrid Runbook Worker on a Windows machine, how to remove the worker, and how to remove a Hybrid Runbook Worker group.
To remove a Hybrid Runbook Worker group, you first need to remove the Hybrid Run
* To learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
-* To learn how to troubleshoot your Hybrid Runbook Workers, see [Troubleshoot Hybrid Runbook Worker issues](troubleshoot/hybrid-runbook-worker.md#general).
+* To learn how to troubleshoot your Hybrid Runbook Workers, see [Troubleshoot Hybrid Runbook Worker issues](troubleshoot/hybrid-runbook-worker.md#general).
automation https://docs.microsoft.com/en-us/azure/automation/troubleshoot/change-tracking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/change-tracking.md
Title: Troubleshoot Azure Automation Change Tracking and Inventory issues
description: This article tells how to troubleshoot and resolve issues with the Azure Automation Change Tracking and Inventory feature. Previously updated : 01/31/2019 Last updated : 02/15/2021
This article describes how to troubleshoot and resolve Azure Automation Change Tracking and Inventory issues. For general information about Change Tracking and Inventory, see [Change Tracking and Inventory overview](../change-tracking/overview.md).
+## General errors
+
+### <a name="machine-already-registered"></a>Scenario: Machine is already registered to a different account
+
+### Issue
+
+You receive the following error message:
+
+```error
+Unable to Register Machine for Change Tracking, Registration Failed with Exception System.InvalidOperationException: {"Message":"Machine is already registered to a different account."}
+```
+
+### Cause
+
+The machine has already been deployed to another workspace for Change Tracking.
+
+### Resolution
+
+1. Make sure that your machine is reporting to the correct workspace. For guidance on how to verify this, see [Verify agent connectivity to Azure Monitor](../../azure-monitor/platform/agent-windows.md#verify-agent-connectivity-to-azure-monitor). Also make sure that this workspace is linked to your Azure Automation account. To confirm, go to your Automation account and select **Linked workspace** under **Related Resources**.
+
+1. Make sure that the machines show up in the Log Analytics workspace linked to your Automation account. Run the following query in the Log Analytics workspace.
+
+ ```kusto
+ Heartbeat
+ | summarize by Computer, Solutions
+ ```
+
+ If you don't see your machine in the query results, it hasn't checked in recently. There's probably a local configuration issue. You should reinstall the Log Analytics agent.
+
+ If your machine is listed in the query results, verify under the Solutions property that **changeTracking** is listed. This verifies it is registered with Change Tracking and Inventory. If it is not, check for scope configuration problems. The scope configuration determines which machines are configured for Change Tracking and Inventory. To configure the scope configuration for the target machine, see [Enable Change Tracking and Inventory from an Automation account](../change-tracking/enable-from-automation-account.md).
+
+ In your workspace, run this query.
+
+ ```kusto
+ Operation
+ | where OperationCategory == 'Data Collection Status'
+ | sort by TimeGenerated desc
+ ```
+
+1. If you get a ```Data collection stopped due to daily limit of free data reached. Ingestion status = OverQuota``` result, the quota defined on your workspace has been reached, which has stopped data from being saved. In your workspace, go to **Usage and estimated costs**. Either select a new **Pricing tier** that allows you to use more data, or click on **Daily cap**, and remove the cap.
++
+If your issue is still unresolved, follow the steps in [Deploy a Windows Hybrid Runbook Worker](../automation-windows-hrw-install.md) to reinstall the Hybrid Worker for Windows. For Linux, follow the steps in [Deploy a Linux Hybrid Runbook Worker](../automation-linux-hrw-install.md).
+ ## Windows ### <a name="records-not-showing-windows"></a>Scenario: Change Tracking and Inventory records aren't showing for Windows machines
automation https://docs.microsoft.com/en-us/azure/automation/troubleshoot/onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/onboarding.md
Title: Troubleshoot Azure Automation feature deployment issues description: This article tells how to troubleshoot and resolve issues that arise when deploying Azure Automation features. - Previously updated : 06/30/2020+ Last updated : 02/11/2021
automation https://docs.microsoft.com/en-us/azure/automation/update-management/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
The following table lists the supported operating systems for update assessments
|Ubuntu 14.04 LTS, 16.04 LTS, and 18.04 LTS (x64) |Linux agents require access to an update repository. | > [!NOTE]
-> Azure virtual machine scale sets can be managed through Update Management. Update Management works on the instances themselves and not on the base image. You'll need to schedule the updates in an incremental way, so that not all the VM instances are updated at once. You can add nodes for virtual machine scale sets by following the steps under [Add a non-Azure machine to Change Tracking and Inventory](../automation-tutorial-installed-software.md#add-a-non-azure-machine-to-change-tracking-and-inventory).
+> Update Management does not support safely automating update management across all instances in an Azure virtual machine scale set. [Automatic OS image upgrades](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) is the recommended method for managing OS image upgrades on your scale set.
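The following Azure CLI sketch is illustrative only; the scale set and resource group names are hypothetical, and the scale set must meet the prerequisites described in the linked article before automatic OS image upgrades can be enabled:

```azurecli
az vmss update \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --set upgradePolicy.automaticOSUpgradePolicy.enableAutomaticOSUpgrade=true
```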
### Unsupported operating systems
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/upload-metrics-and-logs-to-azure-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upload-metrics-and-logs-to-azure-monitor.md
Periodically, you can export out usage information for billing purposes, monitor
Before you can upload usage data, metrics, or logs you need to: * Install tools
-* [Register the `Microsoft.AzureData` resource provider](#register-the-resource-provider)
+* [Register the `Microsoft.AzureArcData` resource provider](#register-the-resource-provider)
* [Create the service principal](#create-service-principal) ## Install tools
For uploading metrics, Azure monitor only accepts the last 30 minutes of data ([
[Upload billing data to Azure and view it in the Azure portal](view-billing-data-in-azure.md)
-[View Azure Arc data controller resource in Azure portal](view-data-controller-in-azure-portal.md)
+[View Azure Arc data controller resource in Azure portal](view-data-controller-in-azure-portal.md)
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/conceptual-agent-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-agent-architecture.md
Title: "Azure Arc enabled Kubernetes Agent Architecture" Previously updated : 02/15/2021 Last updated : 02/17/2021
keywords: "Kubernetes, Arc, Azure, containers"
# Azure Arc enabled Kubernetes Agent Architecture
-[Kubernetes](https://kubernetes.io/) can be used to deploy containerized workloads on hybrid and multi-cloud environments in a consistent way. Azure Arc enabled Kubernetes can be used as a centralized control plane to consistently manage policy, governance, and security across these heterogenous environments. This article provides:
+On its own, [Kubernetes](https://kubernetes.io/) can deploy containerized workloads consistently across hybrid and multi-cloud environments. Azure Arc enabled Kubernetes, however, works as a centralized, consistent control plane that manages policy, governance, and security across heterogeneous environments. This article provides:
* An architectural overview of connecting a cluster to Azure Arc. * The connectivity pattern followed by agents.
-* A description of the data exchanged between cluster environment and Azure.
+* A description of the data exchanged between the cluster environment and Azure.
## Deploy agents to your cluster
-Most on-prem datacenters enforce strict network rules that prevent inbound communication on the firewall used at the network boundary. Azure Arc enabled Kubernetes works with these restrictions by only enabling selective egress endpoints for outbound communication and not requiring any inbound ports on the firewall. Azure Arc enabled Kubernetes agents initiate the outbound connections.
+Most on-premises datacenters enforce strict network rules that prevent inbound communication on the network boundary firewall. Azure Arc enabled Kubernetes works with these restrictions by not requiring inbound ports on the firewall and only enabling selective egress endpoints for outbound communication. Azure Arc enabled Kubernetes agents initiate this outbound communication.
![Architectural overview](./media/architectural-overview.png)
-Connect a cluster to Azure Arc using the following steps:
+### Connect a cluster to Azure Arc
1. Create a Kubernetes cluster on your choice of infrastructure (VMware vSphere, Amazon Web Services, Google Cloud Platform, etc.). > [!NOTE]
- > Customers are required to create and manage the lifecycle of the Kubernetes cluster themselves as Azure Arc enabled Kubernetes currently only supports attaching existing Kubernetes clusters to Azure Arc.
+ > Since Azure Arc enabled Kubernetes currently only supports attaching existing Kubernetes clusters to Azure Arc, customers are required to create and manage the lifecycle of the Kubernetes cluster themselves.
-1. Initiate the Azure Arc registration for your cluster using Azure CLI.
+1. Start the Azure Arc registration for your cluster using Azure CLI.
* Azure CLI uses Helm to deploy the agent Helm chart on the cluster. * The cluster nodes initiate an outbound communication to the [Microsoft Container Registry](https://github.com/microsoft/containerregistry) and pull the images needed to create the following agents in the `azure-arc` namespace: | Agent | Description | | -- | -- |
- | `deployment.apps/clusteridentityoperator` | Azure Arc enabled Kubernetes currently supports only [system assigned identities](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview). clusteridentityoperator makes the first outbound communication needed to fetch the managed service identity (MSI) certificate used by other agents for communication with Azure. |
- | `deployment.apps/config-agent` | Watches the connected cluster for source control configuration resources applied on the cluster and updates compliance state |
- | `deployment.apps/controller-manager` | An operator of operators that orchestrates interactions between Azure Arc components |
- | `deployment.apps/metrics-agent` | Collects metrics of other Arc agents to ensure that these agents are exhibiting optimal performance |
- | `deployment.apps/cluster-metadata-operator` | Gathers cluster metadata - cluster version, node count, and Azure Arc agent version |
- | `deployment.apps/resource-sync-agent` | Syncs the above mentioned cluster metadata to Azure |
- | `deployment.apps/flux-logs-agent` | Collects logs from the flux operators deployed as a part of source control configuration |
+ | `deployment.apps/clusteridentityoperator` | Azure Arc enabled Kubernetes currently supports only [system assigned identities](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview). `clusteridentityoperator` initiates the first outbound communication. This first communication fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure. |
+ | `deployment.apps/config-agent` | Watches the connected cluster for source control configuration resources applied on the cluster. Updates the compliance state. |
+ | `deployment.apps/controller-manager` | An operator of operators that orchestrates interactions between Azure Arc components. |
+ | `deployment.apps/metrics-agent` | Collects metrics of other Arc agents to verify optimal performance. |
+ | `deployment.apps/cluster-metadata-operator` | Gathers cluster metadata, including cluster version, node count, and Azure Arc agent version. |
+ | `deployment.apps/resource-sync-agent` | Syncs the above-mentioned cluster metadata to Azure. |
+ | `deployment.apps/flux-logs-agent` | Collects logs from the flux operators deployed as a part of source control configuration. |
-1. Once all the Azure Arc enabled Kubernetes agent pods in `Running` state, verify that your cluster connected to Azure Arc. You should see:
- * An Azure Arc enabled Kubernetes resource in [Azure Resource Manager](../../azure-resource-manager/management/overview.md). This resource is tracked in Azure as a projection of the customer-managed Kubernetes cluster, not the actual Kubernetes cluster itself.
- * Cluster metadata, like Kubernetes version, agent version, and number of nodes, appears on the Azure Arc enabled Kubernetes resource as metadata.
+1. Once all the Azure Arc enabled Kubernetes agent pods are in the `Running` state, verify that your cluster is connected to Azure Arc (a minimal command sketch follows this list). You should see:
+ * An Azure Arc enabled Kubernetes resource in [Azure Resource Manager](../../azure-resource-manager/management/overview.md). Azure tracks this resource as a projection of the customer-managed Kubernetes cluster, not the actual Kubernetes cluster itself.
+ * Cluster metadata (like Kubernetes version, agent version, and number of nodes) appears on the Azure Arc enabled Kubernetes resource as metadata.
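As a minimal sketch of the registration and verification steps above, assuming the example names `AzureArcTest1` and `AzureArcTest` used elsewhere in these articles:

```console
# Register the cluster with Azure Arc (deploys the agent Helm chart)
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest

# Confirm the agent pods are running in the azure-arc namespace
kubectl get deployments,pods -n azure-arc
```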
## Data exchange between cluster environment and Azure
Connect a cluster to Azure Arc using the following steps:
| Resource consumption (memory/CPU) by agents | Diagnostics and supportability | Agent pushes to Azure | | Logs of all agent containers | Diagnostics and supportability | Agent pushes to Azure | | Agent upgrade availability | Agent upgrade | Agent pulls from Azure |
-| Desired state of Configuration - Git repo URL, flux operator parameters, private key, known hosts content, HTTPS username, token/password | Configuration | Agent pulls from Azure |
+| Desired state of configuration: Git repository URL, flux operator parameters, private key, known hosts content, HTTPS username, token, or password | Configuration | Agent pulls from Azure |
| Status of flux operator installation | Configuration | Agent pushes to Azure | | Azure Policy assignments that need Gatekeeper enforcement within cluster | Azure Policy | Agent pulls from Azure | | Audit and compliance status of in-cluster policy enforcements | Azure Policy | Agent pushes to Azure |
Connect a cluster to Azure Arc using the following steps:
| Status | Description | | | -- |
-| Connecting | Azure Arc enabled Kubernetes resource created in Azure Resource Manager, but service hasn't received agent heartbeat yet. |
+| Connecting | Azure Arc enabled Kubernetes resource is created in Azure Resource Manager, but service hasn't received the agent heartbeat yet. |
| Connected | Azure Arc enabled Kubernetes service received an agent heartbeat sometime within the previous 15 minutes. | | Offline | Azure Arc enabled Kubernetes resource was previously connected, but the service hasn't received any agent heartbeat for 15 minutes. |
-| Expired | Managed service identity (MSI) certificate has an expiration window of 90 days after it is issued. Once this certificate expires, the resource is considered `Expired` and all features such as configuration, monitoring and policy stop working on this cluster. More information on how to address expired Azure Arc enabled Kubernetes resources can be found [here](./faq.md#how-to-address-expired-azure-arc-enabled-kubernetes-resources) |
+| Expired | MSI certificate has an expiration window of 90 days after it is issued. Once this certificate expires, the resource is considered `Expired` and all features such as configuration, monitoring, and policy stop working on this cluster. More information on how to address expired Azure Arc enabled Kubernetes resources can be found [in the FAQ article](./faq.md#how-to-address-expired-azure-arc-enabled-kubernetes-resources). |
## Understand connectivity modes | Connectivity mode | Description | | -- | -- |
-| Fully connected | Agents are always able to reach out to Azure. Experience is ideal in this case as there is little delay in propagation of configurations (for GitOps), enforcement of policies (in Azure Policy and Gatekeeper) and collection of metrics and logs of workloads (in Azure Monitor) |
-| Semi-connected | MSI certificate pulled down by the `clusteridentityoperator` is valid for 90 days maximum before the certificate expires. Once the certificate expires, the Azure Arc enabled Kubernetes resource stops working. Delete and recreate the Azure Arc enabled Kubernetes resource and agents to get all the Arc features to work on the cluster. During the 90 days, users are recommended to connect the cluster at least once every 30 days. |
-| Disconnected | Kubernetes clusters in disconnected environments without any access to Azure are currently not supported by Azure Arc enabled Kubernetes. If this capability is of interest to you, submit or up-vote an idea on [Azure Arc's UserVoice forum](https://feedback.azure.com/forums/925690-azure-arc).
+| Fully connected | Agents can consistently communicate with Azure with little delay in propagating GitOps configurations, enforcing Azure Policy and Gatekeeper policies, and collecting workload metrics and logs in Azure Monitor. |
+| Semi-connected | The MSI certificate pulled down by the `clusteridentityoperator` is valid for up to 90 days before the certificate expires. Upon expiration, the Azure Arc enabled Kubernetes resource stops working. To reactivate all Azure Arc features on the cluster, delete and recreate the Azure Arc enabled Kubernetes resource and agents. During the 90 days, connect the cluster at least once every 30 days. |
+| Disconnected | Kubernetes clusters in disconnected environments unable to access Azure are currently unsupported by Azure Arc enabled Kubernetes. If this capability is of interest to you, submit or up-vote an idea on [Azure Arc's UserVoice forum](https://feedback.azure.com/forums/925690-azure-arc).
## Next steps
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/conceptual-configurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-configurations.md
Title: "Configurations and GitOps - Azure Arc enabled Kubernetes" Previously updated : 02/15/2021 Last updated : 02/17/2021
In relation to Kubernetes, GitOps is the practice of declaring the desired state
* YAML-format manifests describing any valid Kubernetes resources, including Namespaces, ConfigMaps, Deployments, DaemonSets, etc. * Helm charts for deploying applications.
-[Flux](https://docs.fluxcd.io/), a popular open-source tool in the GitOps space, can be deployed on the Kubernetes cluster to ease the flow of configurations from a Git repo to a Kubernetes cluster. Flux supports the deployment of its operator at both the cluster and namespace scopes. A flux operator deployed with namespace scope can only deploy Kubernetes objects within that specific namespace. The ability to choose between cluster or namespace scope helps you achieve multi-tenant deployment patterns on the same Kubernetes cluster.
+[Flux](https://docs.fluxcd.io/), a popular open-source tool in the GitOps space, can be deployed on the Kubernetes cluster to ease the flow of configurations from a Git repository to a Kubernetes cluster. Flux supports the deployment of its operator at both the cluster and namespace scopes. A flux operator deployed with namespace scope can only deploy Kubernetes objects within that specific namespace. The ability to choose between cluster or namespace scope helps you achieve multi-tenant deployment patterns on the same Kubernetes cluster.
## Configurations
In relation to Kubernetes, GitOps is the practice of declaring the desired state
The connection between your cluster and a Git repository is created as a `Microsoft.KubernetesConfiguration/sourceControlConfigurations` extension resource on top of the Azure Arc enabled Kubernetes resource (represented by `Microsoft.Kubernetes/connectedClusters`) in Azure Resource Manager.
-The `sourceControlConfiguration` resource properties are used to deploy Flux operator on the cluster with the appropriate parameters, such as the Git repo from which to pull manifests and the polling interval at which to pull them. The `sourceControlConfiguration` data is stored encrypted, at rest in an Azure Cosmos DB database to ensure data confidentiality.
+The `sourceControlConfiguration` resource properties are used to deploy the Flux operator on the cluster with the appropriate parameters, such as the Git repository from which to pull manifests and the polling interval at which to pull them. The `sourceControlConfiguration` data is stored encrypted at rest in an Azure Cosmos DB database to ensure data confidentiality.
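For illustration, the following is a hedged sketch of creating such a `sourceControlConfiguration` with the Azure CLI. The repository URL and operator names are example values, and the exact parameter set may vary by CLI extension version:

```azurecli
az k8sconfiguration create \
    --name cluster-config \
    --cluster-name AzureArcTest1 \
    --resource-group AzureArcTest \
    --cluster-type connectedClusters \
    --repository-url https://github.com/<org>/<repository> \
    --operator-instance-name cluster-config \
    --operator-namespace cluster-config \
    --scope cluster
```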
The `config-agent` running in your cluster is responsible for: * Tracking new or updated `sourceControlConfiguration` extension resources on the Azure Arc enabled Kubernetes resource.
The `config-agent` running in your cluster is responsible for:
You can create multiple namespace-scoped `sourceControlConfiguration` resources on the same Azure Arc enabled Kubernetes cluster to achieve multi-tenancy. > [!NOTE]
-> * Since the `config-agent` monitors for new or updated `sourceControlConfiguration` extension resources to be available on Azure Arc enabled Kubernetes resource, agents require connectivity for the desired state to be pulled down to the cluster. Whenever agents aren't able to connect to Azure, the desired state properties declared on the `sourceControlConfiguration` resource in Azure Resource Manager are not applied on the cluster.
-> * Sensitive customer inputs like private key, known hosts content, HTTPS username, and token/password are not stored for more than 48 hours in the Azure Arc enabled Kubernetes services. If you are using sensitive inputs for configurations, be advised to bring the clusters online as regularly as possible.
+> * `config-agent` continually monitors for new or updated `sourceControlConfiguration` extension resources available on the Azure Arc enabled Kubernetes resource. Thus, agents require consistent connectivity to pull the desired state properties to the cluster. If agents are unable to connect to Azure, the desired state is not applied on the cluster.
+> * Sensitive customer inputs like private key, known hosts content, HTTPS username, and token or password are stored for up to 48 hours in the Azure Arc enabled Kubernetes services. If you are using sensitive inputs for configurations, bring the clusters online as regularly as possible.
## Apply configurations at scale
-Since Azure Resource Manager manages your configurations, you can use Azure Policy to automate the creation of the same configuration on all Azure Arc enabled Kubernetes resources within the scope of a subscription or a resource group.
+Since Azure Resource Manager manages your configurations, you can use Azure Policy to automate creating the same configuration across all Azure Arc enabled Kubernetes resources within the scope of a subscription or a resource group.
-This at-scale enforcement ensures that a common baseline configuration (containing configurations like ClusterRoleBindings, RoleBindings, and NetworkPolicy) can be applied across the entire fleet or inventory of Azure Arc enabled Kubernetes clusters.
+This at-scale enforcement ensures a common baseline configuration (containing configurations like ClusterRoleBindings, RoleBindings, and NetworkPolicy) can be applied across an entire fleet or inventory of Azure Arc enabled Kubernetes clusters.
## Next steps
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/connect-cluster.md
Title: "Connect an Azure Arc enabled Kubernetes cluster (Preview)"
# Previously updated : 02/15/2021 Last updated : 02/17/2021
# Connect an Azure Arc enabled Kubernetes cluster (Preview)
-This article provides a walk-through on connecting any existing Kubernetes cluster to Azure Arc. A conceptual overview of the same can be found [here](./conceptual-agent-architecture.md).
+This article walks you through connecting an existing Kubernetes cluster to Azure Arc. For a conceptual take on connecting clusters, see the [Azure Arc enabled Kubernetes Agent Architecture article](./conceptual-agent-architecture.md).
## Before you begin
Verify you have prepared the following prerequisites:
* Create a Kubernetes cluster using [Kubernetes in Docker (kind)](https://kind.sigs.k8s.io/). * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes). * A kubeconfig file to access the cluster and cluster-admin role on the cluster for deployment of Arc enabled Kubernetes agents.
-* The user or service principal used with `az login` and `az connectedk8s connect` commands must have the 'Read' and 'Write' permissions on the 'Microsoft.Kubernetes/connectedclusters' resource type. The "Kubernetes Cluster - Azure Arc Onboarding" role has these permissions and can be used for role assignments on the user or service principal.
+* 'Read' and 'Write' permissions on the `Microsoft.Kubernetes/connectedclusters` resource type for the user or service principal used with the `az login` and `az connectedk8s connect` commands. The "Kubernetes Cluster - Azure Arc Onboarding" role has these permissions and can be used for role assignments on the user or service principal.
* Helm 3 for onboarding the cluster using a `connectedk8s` extension. [Install the latest release of Helm 3](https://helm.sh/docs/intro/install) to meet this requirement. * Azure CLI version 2.15+ for installing the Azure Arc enabled Kubernetes CLI extensions. [Install Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest&preserve-view=true) or update to the latest version.
-* Install the Azure Arc enabled Kubernetes CLI extensions:
+* The Azure Arc enabled Kubernetes CLI extensions:
- * Install the `connectedk8s` extension, which helps you connect Kubernetes clusters to Azure:
+ * Install the `connectedk8s` extension to help you connect Kubernetes clusters to Azure:
```azurecli az extension add --name connectedk8s
Azure Arc agents require the following protocols/ports/outbound URLs to function
| `https://eastus.dp.kubernetesconfiguration.azure.com`, `https://westeurope.dp.kubernetesconfiguration.azure.com` | Data plane endpoint for the agent to push status and fetch configuration information. | | `https://login.microsoftonline.com` | Required to fetch and update Azure Resource Manager tokens. | | `https://mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
-| `https://eus.his.arc.azure.com`, `https://weu.his.arc.azure.com` | Required to pull system-assigned managed identity certificates. |
+| `https://eus.his.arc.azure.com`, `https://weu.his.arc.azure.com` | Required to pull system-assigned Managed Service Identity (MSI) certificates. |
## Register the two providers for Azure Arc enabled Kubernetes
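As a sketch, the provider registration typically looks like the following; these namespaces correspond to the `Microsoft.Kubernetes/connectedClusters` and `Microsoft.KubernetesConfiguration/sourceControlConfigurations` resource types used by Azure Arc enabled Kubernetes, and registration can take several minutes:

```azurecli
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration

# Check the registration state
az provider show --namespace Microsoft.Kubernetes --query registrationState -o table
az provider show --namespace Microsoft.KubernetesConfiguration --query registrationState -o table
```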
Name Location ResourceGroup
AzureArcTest1 eastus AzureArcTest ```
-You can also view this resource on the [Azure portal](https://portal.azure.com/). Open the portal in your browser and navigate to the resource group and the Azure Arc enabled Kubernetes resource, based on the resource name and resource group name inputs used earlier in the `az connectedk8s connect` command.
+You can also view this resource on the [Azure portal](https://portal.azure.com/).
+1. Open the portal in your browser.
+1. Navigate to the resource group and the Azure Arc enabled Kubernetes resource, based on the resource name and resource group name inputs used earlier in the `az connectedk8s connect` command.
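Alternatively, you can confirm the resource from the CLI; a minimal sketch using the same example names:

```azurecli
az connectedk8s show --name AzureArcTest1 --resource-group AzureArcTest --output table
```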
> [!NOTE] > After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes, etc.) to surface on the overview page of the Azure Arc enabled Kubernetes resource in Azure portal.
If your cluster is behind an outbound proxy server, Azure CLI and the Azure Arc
> [!NOTE] > * Specifying `excludedCIDR` under `--proxy-skip-range` is important to ensure in-cluster communication is not broken for the agents. > * While `--proxy-http`, `--proxy-https`, and `--proxy-skip-range` are expected for most outbound proxy environments, `--proxy-cert` is only required if trusted certificates from proxy need to be injected into trusted certificate store of agent pods.
-> * The above proxy specification is currently applied only for Arc agents and not for the flux pods used in sourceControlConfiguration. The Azure Arc enabled Kubernetes team is actively working on this feature and it will be available soon.
+> * The above proxy specification is currently applied only for Azure Arc agents and not for the flux pods used in `sourceControlConfiguration`. The Azure Arc enabled Kubernetes team is actively working on this feature.
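Putting the proxy parameters together, a hedged sketch of onboarding through an outbound proxy looks like the following; all values in angle brackets are placeholders:

```azurecli
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest \
    --proxy-https https://<proxy-server-ip-address>:<port> \
    --proxy-http http://<proxy-server-ip-address>:<port> \
    --proxy-skip-range <excludedIP>,<excludedCIDR> \
    --proxy-cert <path-to-cert-file>
```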
## Azure Arc Agents for Kubernetes
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/faq.md
Title: "Azure Arc enabled Kubernetes frequently asked questions" Previously updated : 02/15/2021 Last updated : 02/17/2021
No. All Azure Arc enabled Kubernetes features, including Azure Monitor and Azure
## Should I connect my AKS-HCI cluster and Kubernetes clusters on Azure Stack Hub and Azure Stack Edge to Azure Arc?
-Yes, connecting your AKS-HCI cluster or Kubernetes clusters on Azure Stack Edge or Azure Stack Hub to Azure Arc provides clusters with resource representation in Azure Resource Manager. This resource representation extends capabilities like Cluster Configuration, Azure Monitor, and Azure Policy (Gatekeeper) to the Kubernetes clusters you connect.
+Yes, connecting your AKS-HCI cluster or Kubernetes clusters on Azure Stack Edge or Azure Stack Hub to Azure Arc provides clusters with resource representation in Azure Resource Manager. This resource representation extends capabilities like Cluster Configuration, Azure Monitor, and Azure Policy (Gatekeeper) to connected Kubernetes clusters.
## How to address expired Azure Arc enabled Kubernetes resources?
-The Managed service identity (MSI) certificate associated with your Azure Arc enabled Kuberenetes has an expiration window of 90 days. Once this certificate expires, the resource is considered `Expired` and all features such as configuration, monitoring and policy stop working on this cluster. Follow these steps to get your Kubernetes cluster working with Azure Arc again:
+The Managed Service Identity (MSI) certificate associated with your Azure Arc enabled Kubernetes resource has an expiration window of 90 days. Once this certificate expires, the resource is considered `Expired` and all features (such as configuration, monitoring, and policy) stop working on this cluster. To get your Kubernetes cluster working with Azure Arc again:
-1. Delete Azure Arc enabled Kubernetes resource and agents on the cluster
+1. Delete the Azure Arc enabled Kubernetes resource and agents on the cluster.
```console az connectedk8s delete -n <name> -g <resource-group> ```
-1. Recreate the Azure Arc enabled Kubernetes resource by deploying agents on the cluster again.
+1. Recreate the Azure Arc enabled Kubernetes resource by deploying agents on the cluster.
```console az connectedk8s connect -n <name> -g <resource-group> ``` > [!NOTE]
-> `az connectedk8s delete` will also delete configurations on top of the cluster. After running `az connectedk8s connect`, create the configurations on the cluster again, either manually or using Azure Policy.
+> `az connectedk8s delete` will also delete configurations on top of the cluster. After running `az connectedk8s connect`, recreate the configurations on the cluster, either manually or using Azure Policy.
## If I am already using CI/CD pipelines, can I still use Azure Arc enabled Kubernetes and configurations?
The CI/CD pipeline applies changes only once during pipeline run. However, the G
**Apply GitOps at scale**
-CI/CD pipelines are good for event driven deployments to your Kubernetes cluster, where the event could be a push to a Git repository. However, deployment of the same configuration to all your Kubernetes clusters requires the CI/CD pipeline to be configured with credentials of each of these Kubernetes clusters manually. On the other hand, in the case of Azure Arc enabled Kubernetes, since Azure Resource Manager manages your configurations, you can use Azure Policy to automate the application of the desired configuration on all Kubernetes clusters under a subscription or resource group scope in one go. This capability is even applicable to Azure Arc enabled Kubernetes resources created after the policy assignment.
+CI/CD pipelines are useful for event-driven deployments to your Kubernetes cluster (for example, a push to a Git repository). However, if you want to deploy the same configuration to all of your Kubernetes clusters, you would need to manually configure the CI/CD pipeline with each Kubernetes cluster's credentials.
-The configurations feature is used to apply baseline configurations like network policies, role bindings, and pod security policies across the entire inventory of Kubernetes clusters for compliance and governance requirements.
+For Azure Arc enabled Kubernetes, since Azure Resource Manager manages your configurations, you can automate creating the same configuration across all Azure Arc enabled Kubernetes resources using Azure Policy, within the scope of a subscription or a resource group. This capability also applies to Azure Arc enabled Kubernetes resources created after the policy assignment.
+
+This feature applies baseline configurations (like network policies, role bindings, and pod security policies) across the entire Kubernetes cluster inventory to meet compliance and governance requirements.
## Next steps
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/overview.md
Title: "Overview of Azure Arc enabled Kubernetes"
# Previously updated : 02/15/2021 Last updated : 02/17/2021
keywords: "Kubernetes, Arc, Azure, containers"
-# What is Azure Arc enabled Kubernetes Preview?
+# What is Azure Arc enabled Kubernetes?
-You can attach and configure Kubernetes clusters inside or outside of Azure by using Azure Arc enabled Kubernetes Preview. When a Kubernetes cluster is attached to Azure Arc, it will appear in the Azure portal. It will have an Azure Resource Manager ID and a managed identity. Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource.
+With Azure Arc enabled Kubernetes, you can attach and configure Kubernetes clusters located either inside or outside of Azure. When you connect a Kubernetes cluster to Azure Arc, it will:
+* Appear in the Azure portal with an Azure Resource Manager ID and a managed identity.
+* Be attached to standard Azure subscriptions.
+* Be placed in a resource group.
+* Receive tags just like any other Azure resource.
-To connect a Kubernetes cluster to Azure, the cluster administrator needs to deploy agents. These agents run in a Kubernetes namespace named `azure-arc` and are standard Kubernetes deployments. The agents are responsible for connectivity to Azure, collecting Azure Arc logs and metrics, and watching for configuration requests.
+To connect a Kubernetes cluster to Azure, the cluster administrator needs to deploy agents. These agents:
+* Run in the `azure-arc` Kubernetes namespace as standard Kubernetes deployments.
+* Handle connectivity to Azure.
+* Collect Azure Arc logs and metrics.
+* Watch for configuration requests.
-Azure Arc enabled Kubernetes supports industry-standard SSL to secure data in transit. Also, data is stored encrypted at rest in an Azure Cosmos DB database to ensure data confidentiality.
+Azure Arc enabled Kubernetes supports industry-standard SSL to secure data in transit. Data is also stored encrypted at rest in an Azure Cosmos DB database to ensure data confidentiality.
-> [!NOTE]
-> Azure Arc enabled Kubernetes is in preview. We don't recommend it for production workloads.
- ## Supported Kubernetes distributions
-Azure Arc enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes cluster such as AKS-engine on Azure, AKS-engine on Azure Stack Hub, GKE, EKS and VMware vSphere cluster.
+Azure Arc enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes cluster, such as:
+* AKS-engine on Azure
+* AKS-engine on Azure Stack Hub
+* GKE
+* EKS
+* VMware vSphere
-Azure Arc enabled Kubernetes features have been tested by the Arc team on following distributions:
+Azure Arc enabled Kubernetes features have been tested by the Arc team on the following distributions:
* RedHat OpenShift 4.3 * Rancher RKE 1.0.8 * Canonical Charmed Kubernetes 1.18
Azure Arc enabled Kubernetes features have been tested by the Arc team on follow
## Supported scenarios
-Azure Arc enabled Kubernetes supports these scenarios:
+Azure Arc enabled Kubernetes supports the following scenarios:
* Connect Kubernetes running outside of Azure for inventory, grouping, and tagging.
-* Deploy applications and apply configuration by using GitOps-based configuration management.
+* Deploy applications and apply configuration using GitOps-based configuration management.
-* Use Azure Monitor for containers to view and monitor your clusters.
+* View and monitor your clusters using Azure Monitor for containers.
-* Apply policies by using Azure Policy for Kubernetes.
+* Apply policies using Azure Policy for Kubernetes.
[!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/use-gitops-connected-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/use-gitops-connected-cluster.md
Title: "Deploy configurations using GitOps on Arc enabled Kubernetes cluster (Pr
# Previously updated : 02/15/2021 Last updated : 02/17/2021
keywords: "GitOps, Kubernetes, K8s, Azure, Arc, Azure Kubernetes Service, AKS, c
# Deploy configurations using GitOps on an Arc enabled Kubernetes cluster (Preview)
-This article demonstrates applying configurations on an Azure Arc enabled Kubernetes cluster. A conceptual overview of the same can be found [here](./conceptual-configurations.md).
+This article demonstrates applying configurations on an Azure Arc enabled Kubernetes cluster. For a conceptual take on this process, see the [Configurations and GitOps - Azure Arc enabled Kubernetes article](./conceptual-configurations.md).
## Before you begin
Command group 'k8sconfiguration' is in preview. It may be changed/removed in a f
"type": "Microsoft.KubernetesConfiguration/sourceControlConfigurations" ```
-#### Use a public Git repo
+#### Use a public Git repository
| Parameter | Format | | - | - | | `--repository-url` | http[s]://server/repo[.git] or git://server/repo[.git]
-#### Use a private Git repo with SSH and Flux-created keys
+#### Use a private Git repository with SSH and Flux-created keys
| Parameter | Format | Notes | - | - | - | | `--repository-url` | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may replace `user@` > [!NOTE]
-> The public key generated by Flux must be added to the user account in your Git service provider. If the key is added to the repo instead of the user account, use `git@` in place of `user@` in the URL. Jump to the [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository) section for more details.
+> The public key generated by Flux must be added to the user account in your Git service provider. If the key is added to the repository instead of the user account, use `git@` in place of `user@` in the URL. Jump to the [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository) section for more details.
-#### Use a private Git repo with SSH and user-provided keys
+#### Use a private Git repository with SSH and user-provided keys
| Parameter | Format | Notes | | - | - | - |
Command group 'k8sconfiguration' is in preview. It may be changed/removed in a f
| `--ssh-private-key-file` | full path to local file | Provide full path to local file that contains the PEM-format key > [!NOTE]
-> Provide your own private key directly or in a file. The key must be in [PEM format](https://aka.ms/PEMformat) and end with newline (\n). The associated public key must be added to the user account in your Git service provider. If the key is added to the repo instead of the user account, use `git@` in place of `user@`. Jump to the [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository) section for more details.
+> Provide your own private key directly or in a file. The key must be in [PEM format](https://aka.ms/PEMformat) and end with newline (\n). The associated public key must be added to the user account in your Git service provider. If the key is added to the repository instead of the user account, use `git@` in place of `user@`. Jump to the [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository) section for more details.
#### Use a private Git host with SSH and user-provided known hosts
Command group 'k8sconfiguration' is in preview. It may be changed/removed in a f
| `--ssh-known-hosts-file` | full path to local file | Provide known hosts content in a local file | > [!NOTE]
-> In order to authenticate the Git repo before establishing the SSH connection, the Flux operator maintains a list of common Git hosts in its known hosts file. If you are using an uncommon Git repo or your own Git host, you may need to supply the host key to ensure that Flux can identify your repo. You can provide your known_hosts content directly or in a file. Use the [known_hosts content format specifications](https://aka.ms/KnownHostsFormat) in conjunction with one of the SSH key scenarios described above when providing your own content.
+> In order to authenticate the Git repository before establishing the SSH connection, the Flux operator maintains a list of common Git hosts in its known hosts file. If you are using an uncommon Git repository or your own Git host, you may need to supply the host key to ensure that Flux can identify your repository. You can provide your known_hosts content directly or in a file. Use the [known_hosts content format specifications](https://aka.ms/KnownHostsFormat) in conjunction with one of the SSH key scenarios described above when providing your own content.
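Combining the user-provided key and known hosts options, a hedged sketch of the create command might look like the following; the host, organization, repository, and file paths are placeholders, and the exact parameter set may vary by CLI extension version:

```azurecli
az k8sconfiguration create \
    --name cluster-config \
    --cluster-name AzureArcTest1 \
    --resource-group AzureArcTest \
    --cluster-type connectedClusters \
    --scope cluster \
    --operator-instance-name cluster-config \
    --operator-namespace cluster-config \
    --repository-url git@<your-git-host>:<org>/<repository>.git \
    --ssh-private-key-file ~/.ssh/id_rsa \
    --ssh-known-hosts-file ~/.ssh/known_hosts
```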
-#### Use a private Git repo with HTTPS
+#### Use a private Git repository with HTTPS
| Parameter | Format | Notes | | - | - | - |
Command group 'k8sconfiguration' is in preview. It may be changed/removed in a f
> [!NOTE] > HTTPS Helm release private auth is supported only with Helm operator chart version 1.2.0+ (default). > HTTPS Helm release private auth is not supported currently for Azure Kubernetes Services-managed clusters.
-> If you need Flux to access the Git repo through your proxy, then you will need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./connect-cluster.md#connect-using-an-outbound-proxy-server).
+> If you need Flux to access the Git repository through your proxy, then you will need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./connect-cluster.md#connect-using-an-outbound-proxy-server).
#### Additional Parameters
Customize the configuration with the following optional parameters:
| Option | Description | | - | - |
-| `--git-branch` | Branch of Git repo to use for Kubernetes manifests. Default is 'master'. Newer repositories have root branch named `main`, in which case you need to set `--git-branch=main`. |
-| `--git-path` | Relative path within the Git repo for Flux to locate Kubernetes manifests. |
-| `--git-readonly` | Git repo will be considered read-only; Flux will not attempt to write to it. |
+| `--git-branch` | Branch of Git repository to use for Kubernetes manifests. Default is 'master'. Newer repositories have root branch named `main`, in which case you need to set `--git-branch=main`. |
+| `--git-path` | Relative path within the Git repository for Flux to locate Kubernetes manifests. |
+| `--git-readonly` | Git repository will be considered read-only; Flux will not attempt to write to it. |
| `--manifest-generation` | If enabled, Flux will look for .flux.yaml and run Kustomize or other manifest generators. |
-| `--git-poll-interval` | Period at which to poll Git repo for new commits. Default is `5m` (5 minutes). |
+| `--git-poll-interval` | Period at which to poll Git repository for new commits. Default is `5m` (5 minutes). |
| `--sync-garbage-collection` | If enabled, Flux will delete resources that it created, but are no longer present in Git. | | `--git-label` | Label to keep track of sync progress. Used to tag the Git branch. Default is `flux-sync`. | | `--git-user` | Username for Git commit. | | `--git-email` | Email to use for Git commit.
-If you don't want Flux to write to the repo and `--git-user` or `--git-email` are not set, then `--git-readonly` will automatically be set.
+If you don't want Flux to write to the repository and `--git-user` or `--git-email` are not set, then `--git-readonly` will automatically be set.
For more information, see the [Flux documentation](https://aka.ms/FluxcdReadme).
While the provisioning process happens, the `sourceControlConfiguration` will mo
## Apply configuration from a private Git repository
-If you are using a private Git repo, you need to configure the SSH public key in your repo. The SSH public key will either be the one that Flux generates or the one you provide. You can configure the public key either on the specific Git repo or on the Git user that has access to the repo.
+If you are using a private Git repository, you need to configure the SSH public key in your repository. The SSH public key will either be the one that Flux generates or the one you provide. You can configure the public key either on the specific Git repository or on the Git user that has access to the repository.
### Get your own public key
The following is useful if Flux generates the keys.
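For example, one way to retrieve a Flux-generated public key is to query the configuration resource; this is only a sketch, and the `repositoryPublicKey` property name is an assumption about how the key is surfaced:

```azurecli
az k8sconfiguration show \
    --name cluster-config \
    --cluster-name AzureArcTest1 \
    --resource-group AzureArcTest \
    --cluster-type connectedClusters \
    --query 'repositoryPublicKey'
```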
Use one of the following options:
-* Option 1: Add the public key to your user account (applies to all repos in your account):
+* Option 1: Add the public key to your user account (applies to all repositories in your account):
1. Open GitHub and click on your profile icon at the top-right corner of the page. 2. Click on **Settings**. 3. Click on **SSH and GPG keys**.
Use one of the following options:
6. Paste the public key without any surrounding quotes. 7. Click on **Add SSH key**.
-* Option 2: Add the public key as a deploy key to the Git repo (applies to only this repo):
- 1. Open GitHub and navigate to your repo.
+* Option 2: Add the public key as a deploy key to the Git repository (applies to only this repository):
+ 1. Open GitHub and navigate to your repository.
1. Click on **Settings**. 1. Click on **Deploy keys**. 1. Click on **Add deploy key**.
Delete a `sourceControlConfiguration` using the Azure CLI or Azure portal. Afte
> [!NOTE] > After a `sourceControlConfiguration` with `namespace` scope is created, users with `edit` role binding on the namespace can deploy workloads on this namespace. When this `sourceControlConfiguration` with namespace scope gets deleted, the namespace is left intact and will not be deleted to avoid breaking these other workloads. If needed, you can delete this namespace manually with `kubectl`.
-> Any changes to the cluster that were the result of deployments from the tracked Git repo are not deleted when the `sourceControlConfiguration` is deleted.
+> Any changes to the cluster that were the result of deployments from the tracked Git repository are not deleted when the `sourceControlConfiguration` is deleted.
```azurecli az k8sconfiguration delete --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest --cluster-type connectedClusters
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-custom-handlers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-custom-handlers.md
For custom handlers, set `FUNCTIONS_WORKER_RUNTIME` to `Custom` in *local.settin
} ```
-> [!NOTE]
-> `Custom` may not be recognized as a valid runtime on the Linux Premium or App Service plans. If that is your deployment target, set `FUNCTIONS_WORKER_RUNTIME` to an empty string.
- ### Function metadata When used with a custom handler, the *function.json* contents are no different from how you would define a function under any other context. The only requirement is that *function.json* files must be in a folder named to match the function name.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-core.md
The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Micro
* **IDE**: Visual Studio, VS Code, or command line. > [!NOTE]
-> ASP.NET Core 3.X requires [Application Insights 2.8.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.8.0) or later.
+> ASP.NET Core 3.1 requires [Application Insights 2.8.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.8.0) or later.
+
+> [!IMPORTANT]
+> The following versions of ASP.NET Core are supported: ASP.NET Core 2.1 and 3.1. Versions 2.0, 2.2, and 3.0 have been retired and are no longer supported.
## Prerequisites
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-dependencies.md
For ASP.NET applications, full SQL query text is collected with the help of byte
In addition to the platform-specific steps above, you **must also explicitly opt in to enable SQL command collection** by modifying the applicationInsights.config file with the following: ```xml
-<Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector">
-<EnableSqlCommandTextInstrumentation>true</EnableSqlCommandTextInstrumentation>
-</Add>
+<TelemetryModules>
+ <Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector">
+ <EnableSqlCommandTextInstrumentation>true</EnableSqlCommandTextInstrumentation>
+ </Add>
``` In the above cases, the correct way to validate that the instrumentation engine is correctly installed is to verify that the SDK version of the collected `DependencyTelemetry` is 'rddp'. 'rdddsd' or 'rddf' indicates that dependencies are collected via DiagnosticSource or EventSource callbacks, and hence the full SQL query won't be captured.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/quick-collect-activity-log-arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/quick-collect-activity-log-arm.md
az deployment sub create --name CreateDiagnosticSetting --location eastus --temp
# [PowerShell](#tab/PowerShell) ```powershell
-New-AzSubscriptionDeployment -Name CreateDiagnosticSetting -location eastus -TemplateFile CreateDiagnosticSetting.json -settingName="Send Activity log to workspace" -workspaceId "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/microsoft.operationalinsights/workspaces/my-workspace-01"
+New-AzSubscriptionDeployment -Name CreateDiagnosticSetting -location eastus -TemplateFile CreateDiagnosticSetting.json -settingName "Send Activity log to workspace" -workspaceId "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/microsoft.operationalinsights/workspaces/my-workspace-01"
```
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-create-volumes-smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na ms.devlang: na Previously updated : 12/01/2020 Last updated : 02/16/2021 # Create an SMB volume for Azure NetApp Files
Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB). A volume's capacity consumption counts against its pool's provisioned capacity. This article shows you how to create an SMB3 volume. ## Before you begin
-You must have already set up a capacity pool.
-[Set up a capacity pool](azure-netapp-files-set-up-capacity-pool.md)
-A subnet must be delegated to Azure NetApp Files.
-[Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md)
-## Requirements for Active Directory connections
+* You must have already set up a capacity pool. See [Set up a capacity pool](azure-netapp-files-set-up-capacity-pool.md).
+* A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
- You need to create Active Directory connections before creating an SMB volume. The requirements for Active Directory connections are as follows:
+## Configure Active Directory connections
-* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you will specify.
-
-* Proper ports must be open on the applicable Windows Active Directory (AD) server.
- The required ports are as follows:
-
- | Service | Port | Protocol |
- |--|--||
- | AD Web Services | 9389 | TCP |
- | DNS | 53 | TCP |
- | DNS | 53 | UDP |
- | ICMPv4 | N/A | Echo Reply |
- | Kerberos | 464 | TCP |
- | Kerberos | 464 | UDP |
- | Kerberos | 88 | TCP |
- | Kerberos | 88 | UDP |
- | LDAP | 389 | TCP |
- | LDAP | 389 | UDP |
- | LDAP | 3268 | TCP |
- | NetBIOS name | 138 | UDP |
- | SAM/LSA | 445 | TCP |
- | SAM/LSA | 445 | UDP |
- | w32time | 123 | UDP |
-
-* The site topology for the targeted Active Directory Domain Services must adhere to the guidelines, in particular the Azure VNet where Azure NetApp Files is deployed.
-
- The address space for the virtual network where Azure NetApp Files is deployed must be added to a new or existing Active Directory site (where a domain controller reachable by Azure NetApp Files is).
-
-* The specified DNS servers must be reachable from the [delegated subnet](./azure-netapp-files-delegate-subnet.md) of Azure NetApp Files.
-
- See [Guidelines for Azure NetApp Files network planning](./azure-netapp-files-network-topologies.md) for supported network topologies.
-
- The Network Security Groups (NSGs) and firewalls must have appropriately configured rules to allow for Active Directory and DNS traffic requests.
-
-* The Azure NetApp Files delegated subnet must be able to reach all Active Directory Domain Services (ADDS) domain controllers in the domain, including all local and remote domain controllers. Otherwise, service interruption can occur.
-
- If you have domain controllers that are unreachable by the Azure NetApp Files delegated subnet, you can specify an Active Directory site during creation of the Active Directory connection. Azure NetApp Files needs to communicate only with domain controllers in the site where the Azure NetApp Files delegated subnet address space is.
-
- See [Designing the site topology](/windows-server/identity/ad-ds/plan/designing-the-site-topology) about AD sites and services.
-
-* You can enable AES encryption for AD Authentication by checking the **AES Encryption** box in the [Join Active Directory](#create-an-active-directory-connection) window. Azure NetApp Files supports DES, Kerberos AES 128, and Kerberos AES 256 encryption types (from the least secure to the most secure). If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled that matches the capabilities enabled for your Active Directory.
-
- For example, if your Active Directory has only the AES-128 capability, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory does not have any Kerberos encryption capability, Azure NetApp Files uses DES by default.
-
- You can enable the account options in the properties of the Active Directory Users and Computers Microsoft Management Console (MMC):
-
- ![Active Directory Users and Computers MMC](../media/azure-netapp-files/ad-users-computers-mmc.png)
-
-* Azure NetApp Files supports [LDAP signing](/troubleshoot/windows-server/identity/enable-ldap-signing-in-windows-server), which enables secure transmission of LDAP traffic between the Azure NetApp Files service and the targeted [Active Directory domain controllers](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview). If you are following the guidance of Microsoft Advisory [ADV190023](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023) for LDAP signing, then you should enable the LDAP signing feature in Azure NetApp Files by checking the **LDAP Signing** box in the [Join Active Directory](#create-an-active-directory-connection) window.
-
- [LDAP channel binding](https://support.microsoft.com/help/4034879/how-to-add-the-ldapenforcechannelbinding-registry-entry) configuration alone has no effect on the Azure NetApp Files service. However, if you use both LDAP channel binding and secure LDAP (for example, LDAPS or `start_tls`), then the SMB volume creation will fail.
-
-See Azure NetApp Files [SMB FAQs](./azure-netapp-files-faqs.md#smb-faqs) about additional AD information.
-
-## Decide which Domain Services to use
-
-Azure NetApp Files supports both [Active Directory Domain Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) (ADDS) and Azure Active Directory Domain Services (AADDS) for AD connections. Before you create an AD connection, you need to decide whether to use ADDS or AADDS.
-
-For more information, see [Compare self-managed Active Directory Domain Services, Azure Active Directory, and managed Azure Active Directory Domain Services](../active-directory-domain-services/compare-identity-solutions.md).
-
-### Active Directory Domain Services
-
-You can use your preferred [Active Directory Sites and Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) scope for Azure NetApp Files. This option enables reads and writes to Active Directory Domain Services (ADDS) domain controllers that are [accessible by Azure NetApp Files](azure-netapp-files-network-topologies.md). It also prevents the service from communicating with domain controllers that are not in the specified Active Directory Sites and Services site.
-
-To find your site name when you use ADDS, you can contact the administrative group in your organization that is responsible for Active Directory Domain Services. The example below shows the Active Directory Sites and Services plugin where the site name is displayed:
-
-![Active Directory Sites and Services](../media/azure-netapp-files/azure-netapp-files-active-directory-sites-services.png)
-
-When you configure an AD connection for Azure NetApp Files, you specify the site name in scope for the **AD Site Name** field.
-
-### Azure Active Directory Domain Services
-
-For Azure Active Directory Domain Services (AADDS) configuration and guidance, see [Azure AD Domain Services documentation](../active-directory-domain-services/index.yml).
-
-Additional AADDS considerations apply for Azure NetApp Files:
-
-* Ensure the VNet or subnet where AADDS is deployed is in the same Azure region as the Azure NetApp Files deployment.
-* If you use another VNet in the region where Azure NetApp Files is deployed, you should create a peering between the two VNets.
-* Azure NetApp Files supports `user` and `resource forest` types.
-* For synchronization type, you can select `All` or `Scoped`.
- If you select `Scoped`, ensure the correct Azure AD group is selected for accessing SMB shares. If you are uncertain, you can use the `All` synchronization type.
-* Use of the Enterprise or Premium SKU is required. The Standard SKU is not supported.
-
-When you create an Active Directory connection, note the following specifics for AADDS:
-
-* You can find information for **Primary DNS**, **Secondary DNS**, and **AD DNS Domain Name** in the AADDS menu.
-For DNS servers, two IP addresses will be used for configuring the Active Directory connection.
-* The **organizational unit path** is `OU=AADDC Computers`.
-This setting is configured in the **Active Directory Connections** under **NetApp Account**:
-
- ![Organizational unit path](../media/azure-netapp-files/azure-netapp-files-org-unit-path.png)
-
-* **Username** credentials can be any user that is a member of the Azure AD group **Azure AD DC Administrators**.
--
-## Create an Active Directory connection
-
-1. From your NetApp account, click **Active Directory connections**, then click **Join**.
-
- ![Active Directory Connections](../media/azure-netapp-files/azure-netapp-files-active-directory-connections.png)
-
-2. In the Join Active Directory window, provide the following information, based on the Domain Services you want to use:
-
- For information specific to the Domain Services you use, see [Decide which Domain Services to use](#decide-which-domain-services-to-use).
-
- * **Primary DNS**
- This is the DNS that is required for the Active Directory domain join and SMB authentication operations.
- * **Secondary DNS**
- This is the secondary DNS server for ensuring redundant name services.
- * **AD DNS Domain Name**
- This is the domain name of your Active Directory Domain Services that you want to join.
- * **AD Site Name**
- This is the site name that the domain controller discovery will be limited to. This should match the site name in Active Directory Sites and Services.
- * **SMB server (computer account) prefix**
- This is the naming prefix for the machine account in Active Directory that Azure NetApp Files will use for creation of new accounts.
-
- For example, if the naming standard that your organization uses for file servers is NAS-01, NAS-02..., NAS-045, then you would enter "NAS" for the prefix.
-
- The service will create additional machine accounts in Active Directory as needed.
-
- > [!IMPORTANT]
- > Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You will need to re-mount existing SMB shares after renaming the SMB server prefix.
-
- * **Organizational unit path**
- This is the LDAP path for the organizational unit (OU) where SMB server machine accounts will be created. That is, OU=second level, OU=first level.
-
- If you are using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account.
-
- ![Join Active Directory](../media/azure-netapp-files/azure-netapp-files-join-active-directory.png)
-
- * **AES Encryption**
- Select this checkbox to enable AES encryption for an SMB volume. See [Requirements for Active Directory connections](#requirements-for-active-directory-connections) for requirements.
-
- ![Active Directory AES encryption](../media/azure-netapp-files/active-directory-aes-encryption.png)
-
- The **AES Encryption** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAesEncryption
- ```
-
- Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAesEncryption
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view=azure-cli-latest) `az feature register` and `az feature show` to register the feature and display the registration status.
-
- * **LDAP Signing**
- Select this checkbox to enable LDAP signing. This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified [Active Directory Domain Services domain controllers](/windows/win32/ad/active-directory-domain-services). For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023).
-
- ![Active Directory LDAP signing](../media/azure-netapp-files/active-directory-ldap-signing.png)
-
- The **LDAP Signing** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapSigning
- ```
-
- Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapSigning
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view=azure-cli-latest) `az feature register` and `az feature show` to register the feature and display the registration status.
-
- * **Backup policy users**
- You can include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. The specified accounts will be allowed to change the NTFS permissions at the file or folder level. For example, you can specify a non-privileged service account used for migrating data to an SMB file share in Azure NetApp Files.
-
- ![Active Directory backup policy users](../media/azure-netapp-files/active-directory-backup-policy-users.png)
-
- The **Backup policy users** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBackupOperator
- ```
-
- Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBackupOperator
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view=azure-cli-latest) `az feature register` and `az feature show` to register the feature and display the registration status.
-
- * Credentials, including your **username** and **password**
-
- ![Active Directory credentials](../media/azure-netapp-files/active-directory-credentials.png)
-
-3. Click **Join**.
-
- The Active Directory connection you created appears.
-
- ![Created Active Directory connections](../media/azure-netapp-files/azure-netapp-files-active-directory-connections-created.png)
+Before creating an SMB volume, you need to create an Active Directory connection. If you haven't configured Active Directory connections for Azure NetApp Files, follow the instructions in [Create and manage Active Directory connections](create-active-directory-connections.md).
## Add an SMB volume
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-solution-architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
* [Azure NetApp Files ΓÇô SAP HANA offloading backup with Cloud Sync](https://blog.netapp.com/azure-netapp-files-sap-hana) * [Speed up your SAP HANA system copies using Azure NetApp Files](https://blog.netapp.com/sap-hana-faster-using-azure-netapp-files/) * [Cloud Volumes ONTAP and Azure NetApp Files: SAP HANA system migration made easy](https://blog.netapp.com/cloud-volumes-ontap-and-azure-netapp-files-sap-hana-system-migration-made-easy/)
-* [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 1 - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2078737)
-* [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 2 - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2117130)
+* [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 1](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2078737)
+* [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 2](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2117130)
## Azure VMware Solutions
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/configure-kerberos-encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-kerberos-encryption.md
The following requirements apply to NFSv4.1 client encryption:
## Configure the Azure portal
-1. Follow the instructions in [Create an Active Directory connection](azure-netapp-files-create-volumes-smb.md#create-an-active-directory-connection).
+1. Follow the instructions in [Create an Active Directory connection](create-active-directory-connections.md).
Kerberos requires that you create at least one machine account in Active Directory. The account information you provide is used for creating the accounts for both SMB *and* NFSv4.1 Kerberos volumes. This machine account is created automatically during volume creation.
Performance impact of krb5p:
* [Troubleshoot NFSv4.1 Kerberos volume issues](troubleshoot-nfsv41-kerberos-volumes.md) * [FAQs About Azure NetApp Files](azure-netapp-files-faqs.md) * [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
-* [Create an Active Directory connection](azure-netapp-files-create-volumes-smb.md#create-an-active-directory-connection)
+* [Create an Active Directory connection](create-active-directory-connections.md)
* [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md)
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/create-active-directory-connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
+
+ Title: Create and manage Active Directory connections for Azure NetApp Files | Microsoft Docs
+description: This article shows you how to create and manage Active Directory connections for Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 02/16/2021++
+# Create and manage Active Directory connections for Azure NetApp Files
+
+Several features of Azure NetApp Files require that you have an Active Directory connection. For example, you need to have an Active Directory connection before you can create an [SMB volume](azure-netapp-files-create-volumes-smb.md) or a [dual-protocol volume](create-volumes-dual-protocol.md). This article shows you how to create and manage Active Directory connections for Azure NetApp Files.
+
+## Before you begin
+
+* You must have already [set up a capacity pool](azure-netapp-files-set-up-capacity-pool.md).
+* A subnet must be [delegated to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
+
+## Requirements for Active Directory connections
+
+ The requirements for Active Directory connections are as follows:
+
+* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you will specify.
+
+* Proper ports must be open on the applicable Windows Active Directory (AD) server.
+    The required ports are as follows (a connectivity-check sketch appears after this list):
+
+ | Service | Port | Protocol |
+    |--|--|--|
+ | AD Web Services | 9389 | TCP |
+ | DNS | 53 | TCP |
+ | DNS | 53 | UDP |
+ | ICMPv4 | N/A | Echo Reply |
+ | Kerberos | 464 | TCP |
+ | Kerberos | 464 | UDP |
+ | Kerberos | 88 | TCP |
+ | Kerberos | 88 | UDP |
+ | LDAP | 389 | TCP |
+ | LDAP | 389 | UDP |
+ | LDAP | 3268 | TCP |
+ | NetBIOS name | 138 | UDP |
+ | SAM/LSA | 445 | TCP |
+ | SAM/LSA | 445 | UDP |
+ | w32time | 123 | UDP |
+
+* The site topology for the targeted Active Directory Domain Services must adhere to the guidelines, in particular for the Azure VNet where Azure NetApp Files is deployed.
+
+    The address space for the virtual network where Azure NetApp Files is deployed must be added to a new or existing Active Directory site (one that contains a domain controller reachable by Azure NetApp Files).
+
+* The specified DNS servers must be reachable from the [delegated subnet](./azure-netapp-files-delegate-subnet.md) of Azure NetApp Files.
+
+ See [Guidelines for Azure NetApp Files network planning](./azure-netapp-files-network-topologies.md) for supported network topologies.
+
+ The Network Security Groups (NSGs) and firewalls must have appropriately configured rules to allow for Active Directory and DNS traffic requests.
+
+* The Azure NetApp Files delegated subnet must be able to reach all Active Directory Domain Services (ADDS) domain controllers in the domain, including all local and remote domain controllers. Otherwise, service interruption can occur.
+
+    If you have domain controllers that are unreachable by the Azure NetApp Files delegated subnet, you can specify an Active Directory site during creation of the Active Directory connection. Azure NetApp Files needs to communicate only with domain controllers in the site that contains the Azure NetApp Files delegated subnet address space.
+
+    See [Designing the site topology](/windows-server/identity/ad-ds/plan/designing-the-site-topology) for details about AD sites and services.
+
+* You can enable AES encryption for AD Authentication by checking the **AES Encryption** box in the [Join Active Directory](#create-an-active-directory-connection) window. Azure NetApp Files supports DES, Kerberos AES 128, and Kerberos AES 256 encryption types (from the least secure to the most secure). If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled that matches the capabilities enabled for your Active Directory.
+
+ For example, if your Active Directory has only the AES-128 capability, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory does not have any Kerberos encryption capability, Azure NetApp Files uses DES by default.
+
+ You can enable the account options in the properties of the Active Directory Users and Computers Microsoft Management Console (MMC):
+
+ ![Active Directory Users and Computers MMC](../media/azure-netapp-files/ad-users-computers-mmc.png)
+
+* Azure NetApp Files supports [LDAP signing](/troubleshoot/windows-server/identity/enable-ldap-signing-in-windows-server), which enables secure transmission of LDAP traffic between the Azure NetApp Files service and the targeted [Active Directory domain controllers](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview). If you are following the guidance of Microsoft Advisory [ADV190023](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023) for LDAP signing, then you should enable the LDAP signing feature in Azure NetApp Files by checking the **LDAP Signing** box in the [Join Active Directory](#create-an-active-directory-connection) window.
+
+ [LDAP channel binding](https://support.microsoft.com/help/4034879/how-to-add-the-ldapenforcechannelbinding-registry-entry) configuration alone has no effect on the Azure NetApp Files service. However, if you use both LDAP channel binding and secure LDAP (for example, LDAPS or `start_tls`), then the SMB volume creation will fail.
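+
+As a follow-up to the port requirements above, the following is a minimal connectivity-check sketch that you can run from a Windows VM on a network that can reach the domain controllers. It verifies TCP reachability only (UDP and ICMP can't be checked this way), and `dc01.contoso.com` is a placeholder for one of your domain controllers:
+
+```powershell
+# Check TCP reachability of a few required Active Directory ports.
+# Replace dc01.contoso.com with a domain controller reachable from the Azure NetApp Files delegated subnet.
+445, 389, 88, 53, 9389 | ForEach-Object {
+    Test-NetConnection -ComputerName 'dc01.contoso.com' -Port $_ | Select-Object ComputerName, RemotePort, TcpTestSucceeded
+}
+```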
+
+## Decide which Domain Services to use
+
+Azure NetApp Files supports both [Active Directory Domain Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) (ADDS) and Azure Active Directory Domain Services (AADDS) for AD connections. Before you create an AD connection, you need to decide whether to use ADDS or AADDS.
+
+For more information, see [Compare self-managed Active Directory Domain Services, Azure Active Directory, and managed Azure Active Directory Domain Services](../active-directory-domain-services/compare-identity-solutions.md).
+
+### Active Directory Domain Services
+
+You can use your preferred [Active Directory Sites and Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) scope for Azure NetApp Files. This option enables reads and writes to Active Directory Domain Services (ADDS) domain controllers that are [accessible by Azure NetApp Files](azure-netapp-files-network-topologies.md). It also prevents the service from communicating with domain controllers that are not in the specified Active Directory Sites and Services site.
+
+To find your site name when you use ADDS, you can contact the administrative group in your organization that is responsible for Active Directory Domain Services. The example below shows the Active Directory Sites and Services plugin where the site name is displayed:
+
+![Active Directory Sites and Services](../media/azure-netapp-files/azure-netapp-files-active-directory-sites-services.png)
+
+When you configure an AD connection for Azure NetApp Files, you specify the site name in scope for the **AD Site Name** field.
+
+### Azure Active Directory Domain Services
+
+For Azure Active Directory Domain Services (AADDS) configuration and guidance, see [Azure AD Domain Services documentation](../active-directory-domain-services/index.yml).
+
+Additional AADDS considerations apply for Azure NetApp Files:
+
+* Ensure the VNet or subnet where AADDS is deployed is in the same Azure region as the Azure NetApp Files deployment.
+* If you use another VNet in the region where Azure NetApp Files is deployed, you should create a peering between the two VNets.
+* Azure NetApp Files supports `user` and `resource forest` types.
+* For synchronization type, you can select `All` or `Scoped`.
+ If you select `Scoped`, ensure the correct Azure AD group is selected for accessing SMB shares. If you are uncertain, you can use the `All` synchronization type.
+* Use of the Enterprise or Premium SKU is required. The Standard SKU is not supported.
+
+When you create an Active Directory connection, note the following specifics for AADDS:
+
+* You can find information for **Primary DNS**, **Secondary DNS**, and **AD DNS Domain Name** in the AADDS menu.
+For DNS servers, two IP addresses will be used for configuring the Active Directory connection.
+* The **organizational unit path** is `OU=AADDC Computers`.
+This setting is configured in the **Active Directory Connections** under **NetApp Account**:
+
+ ![Organizational unit path](../media/azure-netapp-files/azure-netapp-files-org-unit-path.png)
+
+* **Username** credentials can be those of any user that is a member of the Azure AD group **Azure AD DC Administrators**.
++
+## Create an Active Directory connection
+
+1. From your NetApp account, click **Active Directory connections**, then click **Join**.
+
+ ![Active Directory Connections](../media/azure-netapp-files/azure-netapp-files-active-directory-connections.png)
+
+2. In the Join Active Directory window, provide the following information, based on the Domain Services you want to use:
+
+ For information specific to the Domain Services you use, see [Decide which Domain Services to use](#decide-which-domain-services-to-use).
+
+ * **Primary DNS**
+    This is the primary DNS server that is required for Active Directory domain join and SMB authentication operations.
+ * **Secondary DNS**
+ This is the secondary DNS server for ensuring redundant name services.
+ * **AD DNS Domain Name**
+ This is the domain name of your Active Directory Domain Services that you want to join.
+ * **AD Site Name**
+ This is the site name that the domain controller discovery will be limited to. This should match the site name in Active Directory Sites and Services.
+ * **SMB server (computer account) prefix**
+ This is the naming prefix for the machine account in Active Directory that Azure NetApp Files will use for creation of new accounts.
+
+    For example, if the naming standard that your organization uses for file servers is NAS-01, NAS-02, ..., NAS-045, then you would enter "NAS" for the prefix.
+
+ The service will create additional machine accounts in Active Directory as needed.
+
+ > [!IMPORTANT]
+ > Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You will need to re-mount existing SMB shares after renaming the SMB server prefix.
+
+ * **Organizational unit path**
+ This is the LDAP path for the organizational unit (OU) where SMB server machine accounts will be created. That is, OU=second level, OU=first level.
+
+ If you are using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account.
+
+ ![Join Active Directory](../media/azure-netapp-files/azure-netapp-files-join-active-directory.png)
+
+ * **AES Encryption**
+ Select this checkbox if you want to enable AES encryption for an SMB volume. See [Requirements for Active Directory connections](#requirements-for-active-directory-connections) for requirements.
+
+ ![Active Directory AES encryption](../media/azure-netapp-files/active-directory-aes-encryption.png)
+
+ The **AES Encryption** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAesEncryption
+ ```
+
+ Check the status of the feature registration:
+
+ > [!NOTE]
+    > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAesEncryption
+ ```
+
+ You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view=azure-cli-latest) `az feature register` and `az feature show` to register the feature and display the registration status.
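+
+    For example, a minimal Azure CLI equivalent might look like the following sketch (the same pattern applies to the `ANFLdapSigning` and `ANFBackupOperator` features later in this article):
+
+    ```azurecli-interactive
+    az feature register --namespace Microsoft.NetApp --name ANFAesEncryption
+    az feature show --namespace Microsoft.NetApp --name ANFAesEncryption --query properties.state
+    ```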
+
+ * **LDAP Signing**
+ Select this checkbox to enable LDAP signing. This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified [Active Directory Domain Services domain controllers](/windows/win32/ad/active-directory-domain-services). For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023).
+
+ ![Active Directory LDAP signing](../media/azure-netapp-files/active-directory-ldap-signing.png)
+
+ The **LDAP Signing** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapSigning
+ ```
+
+ Check the status of the feature registration:
+
+ > [!NOTE]
+    > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapSigning
+ ```
+
+ You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view=azure-cli-latest) `az feature register` and `az feature show` to register the feature and display the registration status.
+
+ * **Backup policy users**
+ You can include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. The specified accounts will be allowed to change the NTFS permissions at the file or folder level. For example, you can specify a non-privileged service account used for migrating data to an SMB file share in Azure NetApp Files.
+
+ ![Active Directory backup policy users](../media/azure-netapp-files/active-directory-backup-policy-users.png)
+
+ The **Backup policy users** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBackupOperator
+ ```
+
+ Check the status of the feature registration:
+
+ > [!NOTE]
+    > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBackupOperator
+ ```
+
+ You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view=azure-cli-latest) `az feature register` and `az feature show` to register the feature and display the registration status.
+
+ * Credentials, including your **username** and **password**
+
+ ![Active Directory credentials](../media/azure-netapp-files/active-directory-credentials.png)
+
+3. Click **Join**.
+
+ The Active Directory connection you created appears.
+
+ ![Created Active Directory connections](../media/azure-netapp-files/azure-netapp-files-active-directory-connections-created.png)
+
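+If you prefer to script the connection instead of using the portal, the Azure CLI provides an `az netappfiles account ad add` command. The following is only an illustrative sketch with placeholder values; verify the exact parameter names against the current [az netappfiles](/cli/azure/netappfiles) reference:
+
+```azurecli-interactive
+# Add an Active Directory connection to an existing NetApp account (placeholder values).
+az netappfiles account ad add \
+    --resource-group myResourceGroup \
+    --account-name myNetAppAccount \
+    --username anfadmin \
+    --password "<password>" \
+    --domain contoso.com \
+    --dns "10.0.0.4,10.0.0.5" \
+    --smb-server-name ANFSMB
+```
+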
+## Next steps
+
+* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
+* [Create a dual-protocol volume](create-volumes-dual-protocol.md)
+* [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
+* [Install a new Active Directory forest using Azure CLI](/windows-server/identity/ad-ds/deploy/virtual-dc/adds-on-azure-vm)
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/create-volumes-dual-protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
## Considerations
-* Ensure that you meet the [Requirements for Active Directory connections](azure-netapp-files-create-volumes-smb.md#requirements-for-active-directory-connections).
+* Ensure that you meet the [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections).
* Create a reverse lookup zone on the DNS server and then add a pointer (PTR) record of the AD host machine in that reverse lookup zone. Otherwise, the dual-protocol volume creation will fail. * Ensure that the NFS client is up to date and running the latest updates for the operating system. * Ensure that the Active Directory (AD) LDAP server is up and running on the AD. You can do so by installing and configuring the [Active Directory Lightweight Directory Services (AD LDS)](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/hh831593(v=ws.11)) role on the AD machine.
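To create the reverse lookup zone and PTR record mentioned in the first consideration above, one option is the Windows Server DnsServer PowerShell module on the DNS server. This is only an illustrative sketch; the subnet, zone name, host name, and IP address are placeholders:

```powershell
# Create a reverse lookup zone for the AD host's subnet (placeholder: 10.0.0.0/24),
# then add a PTR record for the AD host machine (placeholder: ad1.contoso.com at 10.0.0.4).
Add-DnsServerPrimaryZone -NetworkId "10.0.0.0/24" -ReplicationScope "Domain"
Add-DnsServerResourceRecordPtr -ZoneName "0.0.10.in-addr.arpa" -Name "4" -PtrDomainName "ad1.contoso.com"
```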
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/cross-region-replication-requirements-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
Note the following requirements and considerations about [using the volume cross
* The cross-region replication feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the [Azure NetApp Files cross-region replication waitlist submission page](https://aka.ms/anfcrrpreviewsignup). Wait for an official confirmation email from the Azure NetApp Files team before using the cross-region replication feature. * Azure NetApp Files replication is only available in certain fixed region pairs. See [Supported region pairs](cross-region-replication-introduction.md#supported-region-pairs).
-* SMB volumes are supported along with NFS volumes. Replication of SMB volumes requires an Active Directory connection in the source and destination NetApp accounts. The destination AD connection must have access to the DNS servers or ADDS Domain Controllers that are reachable from the delegated subnet in the destination region. For more information, see [Requirements for Active Directory connections](azure-netapp-files-create-volumes-smb.md#requirements-for-active-directory-connections).
+* SMB volumes are supported along with NFS volumes. Replication of SMB volumes requires an Active Directory connection in the source and destination NetApp accounts. The destination AD connection must have access to the DNS servers or ADDS Domain Controllers that are reachable from the delegated subnet in the destination region. For more information, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections).
* The destination account must be in a different region from the source volume region. You can also select an existing NetApp account in a different region. * The replication destination volume is read-only until you [fail over to the destination region](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume) to enable the destination volume for read and write. * Azure NetApp Files replication does not currently support multiple subscriptions; all replications must be performed under a single subscription.
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/troubleshoot-dual-protocol-volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-dual-protocol-volumes.md
This article describes resolutions to error conditions you might have when creat
| Error conditions | Resolutions | |-|-|
-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if ADDS and the volume are being deployed in same region.</li> <li>Check if ADDS and the volume are using the same VNet. If they are using different VNETs, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](azure-netapp-files-create-volumes-smb.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Azure ADDS. Azure ADDS should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if ADDS and the volume are being deployed in same region.</li> <li>Check if ADDS and the volume are using the same VNet. If they are using different VNETs, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Azure ADDS. Azure ADDS should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine accounts. </li> <li> If you use Azure ADDS, make sure that the user is part of the Azure AD group `Azure AD DC Administrators`. </li></ul> | | The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-A452\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\n [ 567] Loaded the preliminary configuration.\n [ 671] Successfully connected to ip 10.X.X.X, port 88 using TCP\n**[ 1099] FAILURE: Could not authenticate as\n** 'user@contoso.com': CIFS server account password does\n** not match password stored in Active Directory\n** (KRB5KDC_ERR_PREAUTH_FAILED)\n. "}]}` | Make sure that the password entered for joining the AD connection is correct. | | The SMB or dual-protocol volume creation fails with the following error: `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.X.X.X, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Azure ADDS, make sure that the organizational unit path is `OU=AADDC Computers`. |
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated on a regular basis. This article provides a summar
## May 2020
-* [Backup policy users](azure-netapp-files-create-volumes-smb.md#create-an-active-directory-connection) (Preview)
+* [Backup policy users](create-active-directory-connections.md) (Preview)
Azure NetApp Files allows you to include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. The specified accounts will be allowed to change the NTFS permissions at the file or folder level. For example, you can specify a non-privileged service account used for migrating data to an SMB file share in Azure NetApp Files. The Backup policy users feature is currently in preview.
azure-portal https://docs.microsoft.com/en-us/azure/azure-portal/supportability/how-to-create-azure-support-request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Last updated 06/25/2020
# Create an Azure support request
-Azure enables you to create and manage support requests, also known as support tickets. You can create and manage requests in the [Azure portal](https://portal.azure.com), which is covered in this article. You can also create and manage requests programmatically, using the [Azure support ticket REST API](/rest/api/support).
+Azure enables you to create and manage support requests, also known as support tickets. You can create and manage requests in the [Azure portal](https://portal.azure.com), which is covered in this article. You can also create and manage requests programmatically, using the [Azure support ticket REST API](/rest/api/support), or by using [Azure CLI](/cli/azure/azure-cli-support-request).
> [!NOTE] > The Azure portal URL is specific to the Azure cloud where your organization is deployed.
azure-portal https://docs.microsoft.com/en-us/azure/azure-portal/supportability/how-to-manage-azure-support-request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-manage-azure-support-request.md
Last updated 12/14/2020
# Manage an Azure support request
-After you [create an Azure support request](how-to-create-azure-support-request.md), you can manage it in the [Azure portal](https://portal.azure.com), which is covered in this article. You can also create and manage requests programmatically, using the [Azure support ticket REST API](/rest/api/support).
+After you [create an Azure support request](how-to-create-azure-support-request.md), you can manage it in the [Azure portal](https://portal.azure.com), which is covered in this article. You can also create and manage requests programmatically, using the [Azure support ticket REST API](/rest/api/support), or by using [Azure CLI](/cli/azure/azure-cli-support-request).
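+In addition to the portal steps in this article, an illustrative Azure CLI sketch for reviewing existing requests might look like the following (this assumes the `az support` commands are available in your CLI version; verify against the current reference):
+
+```azurecli-interactive
+# List your support tickets, then inspect one by its name.
+az support tickets list --output table
+az support tickets show --ticket-name "my-ticket-name"
+```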
## View support requests
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-applications/publish-notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/publish-notifications.md
POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_paramet
{ "eventType": "PUT",
- "applicationId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
"eventTime": "2019-08-14T19:20:08.1707163Z", "provisioningState": "Succeeded",
- "applicationDefinitionId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>"
+ "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>"
} ```
POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_paramet
"applicationId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>", "eventTime": "2019-08-14T19:20:08.1707163Z", "provisioningState": "Failed",
- "applicationDefinitionId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>",
+ "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>",
"error": { "code": "ErrorCode", "message": "error message",
POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_paramet
{ "eventType": "PUT",
- "applicationId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
"eventTime": "2019-08-14T19:20:08.1707163Z", "provisioningState": "Succeeded", "billingDetails": {
POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_paramet
{ "eventType": "PUT",
- "applicationId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
"eventTime": "2019-08-14T19:20:08.1707163Z", "provisioningState": "Failed", "billingDetails": {
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/tag-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-support.md
Jump to a resource provider namespace:
> | workspaces / models / versions | No | No | > | workspaces / onlineEndpoints | Yes | Yes | > | workspaces / onlineEndpoints / deployments | Yes | Yes |
+
+> [!NOTE]
+> Workspace tags don't propagate to compute clusters and compute instances.
## Microsoft.Maintenance
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-outputs.md
Title: Outputs in templates
-description: Describes how to define output values in an Azure Resource Manager template (ARM template).
+description: Describes how to define output values in an Azure Resource Manager template (ARM template) and Bicep file.
Previously updated : 11/24/2020 Last updated : 02/17/2021 # Outputs in ARM templates
-This article describes how to define output values in your Azure Resource Manager template (ARM template). You use `outputs` when you need to return values from the deployed resources.
+This article describes how to define output values in your Azure Resource Manager template (ARM template) and Bicep file. You use outputs when you need to return values from the deployed resources.
-The format of each output value must match one of the [data types](template-syntax.md#data-types).
+The format of each output value must resolve to one of the [data types](template-syntax.md#data-types).
+ ## Define output values
-The following example shows how to return the resource ID for a public IP address:
+The following example shows how to return a property from a deployed resource.
+
+# [JSON](#tab/json)
+
+For JSON, add the `outputs` section to the template. The output value gets the fully qualified domain name for a public IP address.
```json "outputs": {
- "resourceID": {
- "type": "string",
- "value": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddresses_name'))]"
- }
+ "hostname": {
+ "type": "string",
+ "value": "[reference(resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))).dnsSettings.fqdn]"
+  }
} ```
+If you need to output a property that has a hyphen in the name, use brackets around the name instead of dot notation. For example, use `['property-name']` instead of `.property-name`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "variables": {
+ "user": {
+ "user-name": "Test Person"
+ }
+ },
+ "resources": [
+ ],
+ "outputs": {
+ "nameResult": {
+ "type": "string",
+ "value": "[variables('user')['user-name']]"
+ }
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+For Bicep, use the `output` keyword.
+
+In the following example, `publicIP` is the symbolic name of a public IP address deployed in the Bicep file. The output value gets the fully qualified domain name for the public IP address.
+
+```bicep
+output hostname string = publicIP.properties.dnsSettings.fqdn
+```
+
+If you need to output a property that has a hyphen in the name, use brackets around the name instead of dot notation. For example, use `['property-name']` instead of `.property-name`.
+
+```bicep
+var user = {
+ 'user-name': 'Test Person'
+}
+
+output stringOutput string = user['user-name']
+```
+++ ## Conditional output
-In the `outputs` section, you can conditionally return a value. Typically, you use `condition` in the `outputs` when you've [conditionally deployed](conditional-resource-deployment.md) a resource. The following example shows how to conditionally return the resource ID for a public IP address based on whether a new one was deployed:
+You can conditionally return a value. Typically, you use a conditional output when you've [conditionally deployed](conditional-resource-deployment.md) a resource. The following example shows how to conditionally return the resource ID for a public IP address based on whether a new one was deployed:
+
+# [JSON](#tab/json)
+
+In JSON, add the `condition` element to define whether the output is returned.
```json "outputs": {
In the `outputs` section, you can conditionally return a value. Typically, you u
} ```
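A fuller illustrative sketch of a conditional output (the parameter and resource names are placeholders) might look like this:

```json
"outputs": {
  "resourceID": {
    "condition": "[equals(parameters('publicIpNewOrExisting'), 'new')]",
    "type": "string",
    "value": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddresses_name'))]"
  }
}
```

Here the output is returned only when the `publicIpNewOrExisting` parameter is set to `new`.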
+# [Bicep](#tab/bicep)
+
+Conditional output isn't currently available for Bicep.
+
+However, you can use the `?` operator to return one of two values depending on a condition.
+
+```bicep
+param deployStorage bool = true
+param storageName string
+param location string = resourceGroup().location
+
+resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = if (deployStorage) {
+ name: storageName
+ location: location
+ kind: 'StorageV2'
+ sku:{
+ name:'Standard_LRS'
+ tier: 'Standard'
+ }
+ properties: {
+ accessTier: 'Hot'
+ }
+}
+
+output endpoint string = deployStorage ? sa.properties.primaryEndpoints.blob : ''
+```
+++ For a simple example of conditional output, see [conditional output template](https://github.com/bmoore-msft/AzureRM-Samples/blob/master/conditional-output/azuredeploy.json). ## Dynamic number of outputs
-In some scenarios, you don't know the number of instances of a value you need to return when creating the template. You can return a variable number of values by using the `copy` element.
+In some scenarios, you don't know the number of instances of a value you need to return when creating the template. You can return a variable number of values by using iterative output.
+
+# [JSON](#tab/json)
+
+In JSON, add the `copy` element to iterate an output.
```json "outputs": {
In some scenarios, you don't know the number of instances of a value you need to
} ```
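As an illustrative sketch (the `storageCount` parameter and `baseName` variable are placeholders), an iterative output in JSON looks roughly like this:

```json
"outputs": {
  "storageEndpoints": {
    "type": "array",
    "copy": {
      "count": "[parameters('storageCount')]",
      "input": "[reference(concat(variables('baseName'), copyIndex())).primaryEndpoints.blob]"
    }
  }
}
```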
+# [Bicep](#tab/bicep)
+
+Iterative output isn't currently available for Bicep.
+++ For more information, see [Output iteration in ARM templates](copy-outputs.md). ## Linked templates
-To retrieve the output value from a linked template, use the [reference](template-functions-resource.md#reference) function in the parent template. The syntax in the parent template is:
+In JSON templates, you can deploy related templates by using [linked templates](linked-templates.md). To retrieve the output value from a linked template, use the [reference](template-functions-resource.md#reference) function in the parent template. The syntax in the parent template is:
```json "[reference('<deploymentName>').outputs.<propertyName>.value]" ```
-When getting an output property from a linked template, the property name can't include a dash.
- The following example shows how to set the IP address on a load balancer by retrieving a value from a linked template. ```json
The following example shows how to set the IP address on a load balancer by retr
} ```
+If the property name has a hyphen, use brackets around the name instead of dot notation.
+
+```json
+"publicIPAddress": {
+ "id": "[reference('linkedTemplate').outputs['resource-ID'].value]"
+}
+```
+ You can't use the `reference` function in the outputs section of a [nested template](linked-templates.md#nested-template). To return the values for a deployed resource in a nested template, convert your nested template to a linked template.
+The [Public IP address template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/public-ip.json) creates a public IP address and outputs the resource ID. The [Load balancer template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/public-ip-parentloadbalancer.json) links to the preceding template. It uses the resource ID in the output when creating the load balancer.
+
+## Modules
+
+In Bicep files, you can deploy related templates by using modules. To retrieve an output value from a module, use the following syntax:
+
+```bicep
+<module-name>.outputs.<property-name>
+```
+
+The following example shows how to set the IP address on a load balancer by retrieving a value from a module. The name of the module is `publicIP`.
+
+```bicep
+publicIPAddress: {
+ id: publicIP.outputs.resourceID
+}
+```
+
+## Example template
+
+The following template doesn't deploy any resources. It shows some ways of returning outputs of different types.
+
+# [JSON](#tab/json)
++
+# [Bicep](#tab/bicep)
+
+Bicep doesn't currently support loops.
++++ ## Get output values When the deployment succeeds, the output values are automatically returned in the results of the deployment.
az deployment group show \
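For example, to read a single output value with Azure CLI (the resource group, deployment, and output names are placeholders):

```azurecli-interactive
az deployment group show \
  --resource-group exampleGroup \
  --name exampleDeployment \
  --query properties.outputs.hostname.value
```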
-## Example templates
-
-The following examples demonstrate scenarios for using outputs.
-
-|Template |Description |
-|||
-|[Copy variables](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/multipleinstance/copyvariables.json) | Creates complex variables and outputs those values. Doesn't deploy any resources. |
-|[Public IP address](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/public-ip.json) | Creates a public IP address and outputs the resource ID. |
-|[Load balancer](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/public-ip-parentloadbalancer.json) | Links to the preceding template. Uses the resource ID in the output when creating the load balancer. |
- ## Next steps * To learn about the available properties for outputs, see [Understand the structure and syntax of ARM templates](template-syntax.md).
azure-sql-edge https://docs.microsoft.com/en-us/azure/azure-sql-edge/deploy-onnx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-onnx.md
description: Learn how to train a model, convert it to ONNX, deploy it to Azure
keywords: deploy SQL Edge ms.prod: sql ms.technology: machine-learning-+ Last updated 10/13/2020
FROM PREDICT(MODEL = @model, DATA = predict_input, RUNTIME=ONNX) WITH (variable1
## Next Steps * [Machine Learning and AI with ONNX in SQL Edge](onnx-overview.md)
-* [Machine Learning Services in Azure SQL Managed Instance (preview)](../azure-sql/managed-instance/machine-learning-services-overview.md)
+* [Machine Learning Services in Azure SQL Managed Instance (preview)](../azure-sql/managed-instance/machine-learning-services-overview.md)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/azure-hybrid-benefit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/azure-hybrid-benefit.md
Previously updated : 11/13/2019 Last updated : 02/16/2021 # Azure Hybrid Benefit - Azure SQL Database & SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](includes/appliesto-sqldb-sqlmi.md)]
Azure Hybrid Benefit for SQL Server differs from license mobility in two key are
#### What are the specific rights of the Azure Hybrid Benefit for SQL Server?
-SQL Database customers have the following rights associated with Azure Hybrid Benefit for SQL Server:
+SQL Database and SQL Managed Instance customers have the following rights associated with Azure Hybrid Benefit for SQL Server:
|License footprint|What does Azure Hybrid Benefit for SQL Server get you?| ||| |SQL Server Enterprise Edition core customers with SA|<li>Can pay base rate on Hyperscale, General Purpose, or Business Critical SKU</li><br><li>1 core on-premises = 4 cores in Hyperscale SKU</li><br><li>1 core on-premises = 4 cores in General Purpose SKU</li><br><li>1 core on-premises = 1 core in Business Critical SKU</li>|
-|SQL Server Standard Edition core customers with SA|<li>Can pay base rate on Hyperscale and General Purpose SKU only</li><br><li>1 core on-premises = 1 core in Hyperscale SKU</li><br><li>1 core on-premises = 1 core in General Purpose SKU</li>|
+|SQL Server Standard Edition core customers with SA|<li>Can pay base rate on Hyperscale, General Purpose, or Business Critical SKU</li><br><li>1 core on-premises = 1 core in Hyperscale SKU</li><br><li>1 core on-premises = 1 core in General Purpose SKU</li><br><li>4 cores on-premises = 1 core in Business Critical SKU</li>|
|||
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/active-geo-replication-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/active-geo-replication-overview.md
Active geo-replication is an Azure SQL Database feature that allows you to creat
> [!NOTE] > Active geo-replication is not supported by Azure SQL Managed Instance. For geographic failover of instances of SQL Managed Instance, use [Auto-failover groups](auto-failover-group-overview.md).
+> [!NOTE]
+> To migrate SQL databases from Azure Germany using active geo-replication, see [Migrate SQL Database using active geo-replication](../../germany/germany-migration-databases.md#migrate-sql-database-using-active-geo-replication).
+ Active geo-replication is designed as a business continuity solution that allows the application to perform quick disaster recovery of individual databases in case of a regional disaster or large scale outage. If geo-replication is enabled, the application can initiate failover to a secondary database in a different Azure region. Up to four secondaries are supported in the same or different regions, and the secondaries can also be used for read-only access queries. The failover must be initiated manually by the application or the user. After failover, the new primary has a different connection end point. > [!NOTE]
As discussed previously, active geo-replication can also be managed programmatic
| [Delete Replication Link](/rest/api/sql/replicationlinks/delete) | Deletes a database replication link. Cannot be done during failover. | | | | +++ ## Next steps - For sample scripts, see:
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/data-discovery-and-classification-overview.md
Previously updated : 02/11/2021 Last updated : 02/17/2021 tags: azure-synapse # Data Discovery & Classification
You can use the REST API to programmatically manage classifications and recommen
- Consider configuring [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md) for monitoring and auditing access to your classified sensitive data. - For a presentation that includes data Discovery & Classification, see [Discovering, classifying, labeling & protecting SQL data | Data Exposed](https://www.youtube.com/watch?v=itVi9bkJUNc).
+- To classify your Azure SQL Databases and Azure Synapse Analytics with Azure Purview labels using T-SQL commands, see [Classify your Azure SQL data using Azure Purview labels](https://docs.microsoft.com/azure/sql-database/scripts/sql-database-import-purview-labels).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-pool-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-overview.md
For service tiers and resource limits in each purchasing model, see the [DTU-bas
The following steps can help you estimate whether a pool is more cost-effective than single databases: 1. Estimate the eDTUs or vCores needed for the pool as follows (a worked example appears after these steps):
-For DTU-based purchasing model:
-
-MAX(<*Total number of DBs* X *average DTU utilization per DB*>, <*Number of concurrently peaking DBs* X *Peak DTU utilization per DB*>)
-
-For vCore-based purchasing model:
-
-MAX(<*Total number of DBs* X *average vCore utilization per DB*>, <*Number of concurrently peaking DBs* X *Peak vCore utilization per DB*>)
-
+ - For the DTU-based purchasing model:
+ - MAX(<*Total number of DBs* &times; *Average DTU utilization per DB*>, <*Number of concurrently peaking DBs* &times; *Peak DTU utilization per DB*>)
+ - For the vCore-based purchasing model:
+ - MAX(<*Total number of DBs* &times; *Average vCore utilization per DB*>, <*Number of concurrently peaking DBs* &times; *Peak vCore utilization per DB*>)
2. Estimate the total storage space needed for the pool by adding the data size needed for all the databases in the pool. For the DTU purchasing model, then determine the eDTU pool size that provides this amount of storage. 3. For the DTU-based purchasing model, take the larger of the eDTU estimates from Step 1 and Step 2. For the vCore-based purchasing model, take the vCore estimate from Step 1. 4. See the [SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/) and find the smallest pool size that is greater than the estimate from Step 3.
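As an illustrative example of Step 1 for the DTU-based model: with 20 databases that average 50 DTUs each, and at most 4 databases peaking concurrently at 200 DTUs each, the estimate is MAX(20 &times; 50, 4 &times; 200) = MAX(1000, 800) = 1000 eDTUs. You would then look for the smallest pool size that provides at least 1000 eDTUs and also meets the storage estimate from Step 2.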
You can use the built-in [performance monitoring](./performance-guidance.md) and
- To scale elastic pools, see [Scaling elastic pools](elastic-pool-scale.md) and [Scale an elastic pool - sample code](scripts/monitor-and-scale-pool-powershell.md) - To learn more about design patterns for SaaS applications using elastic pools, see [Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database](saas-tenancy-app-design-patterns.md). - For a SaaS tutorial using elastic pools, see [Introduction to the Wingtip SaaS application](saas-dbpertenant-wingtip-app-overview.md).-- To learn about resource management in elastic pools with many databases, see [Resource management in dense elastic pools](elastic-pool-resource-management.md).
+- To learn about resource management in elastic pools with many databases, see [Resource management in dense elastic pools](elastic-pool-resource-management.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-vcore.md
For more details, check [az sql mi update](/cli/azure/sql/mi#az-sql-mi-update) c
Gen4 hardware is [being phased out](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/) and is no longer available for new deployments. All new databases must be deployed on Gen5 hardware.
-Gen5 is available in most regions worldwide.
+Gen5 is available in all public regions worldwide.
#### Fsv2-series
For details about the specific compute and storage sizes available in the genera
- [vCore-based resource limits for Azure SQL Database](resource-limits-vcore-single-databases.md). - [vCore-based resource limits for pooled Azure SQL Database](resource-limits-vcore-elastic-pools.md).-- [vCore-based resource limits for Azure SQL Managed Instance](../managed-instance/resource-limits.md).
+- [vCore-based resource limits for Azure SQL Managed Instance](../managed-instance/resource-limits.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/log-replay-service-migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/log-replay-service-migrate.md
+
+ Title: Migrate databases to SQL Managed Instance using Log Replay Service
+description: Learn how to migrate databases from SQL Server to SQL Managed Instance using Log Replay Service
+++
+ms.devlang:
++++ Last updated : 02/17/2021++
+# Migrate databases from SQL Server to SQL Managed Instance using Log Replay Service
+
+This article explains how to manually configure database migration from SQL Server 2008-2019 to SQL Managed Instance by using Log Replay Service (LRS). LRS is a cloud service enabled for SQL Managed Instance that's based on SQL Server log shipping technology in no-recovery mode. Use LRS when Azure Database Migration Service (DMS) can't be used, when you need more control, or when there's little tolerance for downtime.
+
+## When to use Log Replay Service
+
+When [Azure DMS](https://docs.microsoft.com/azure/dms/tutorial-sql-server-to-managed-instance) can't be used for migration, you can use the LRS cloud service directly with PowerShell, Azure CLI cmdlets, or APIs to manually build and orchestrate database migrations to SQL Managed Instance.
+
+You might want to consider using the LRS cloud service in the following cases:
+- You need more control over your database migration project.
+- There's little tolerance for downtime during the migration cutover.
+- The DMS executable can't be installed in your environment.
+- The DMS executable doesn't have file access to the database backups.
+- You have no access to the host OS, or you have no administrator privileges.
+
+> [!NOTE]
+> The recommended automated way to migrate databases from SQL Server to SQL Managed Instance is by using Azure DMS. Azure DMS uses the same LRS cloud service at the back end, with log shipping in no-recovery mode. Consider using LRS manually to orchestrate migrations when Azure DMS doesn't fully support your scenario.
+
+## How it works
+
+Building a custom solution that uses LRS to migrate a database to the cloud requires several orchestration steps, which are shown in the diagram and outlined in the table below.
+
+The migration entails making full database backups on SQL Server and copying the backup files to Azure Blob storage. LRS is then used to restore the backup files from Azure Blob storage to SQL Managed Instance. Azure Blob storage serves as intermediary storage between SQL Server and SQL Managed Instance.
+
+LRS monitors Azure Blob storage for any new differential or log backups added after the full backup has been restored, and it automatically restores any new files that are added. You can use the service to monitor the progress of backup files being restored on SQL Managed Instance, and you can abort the process if necessary. Databases being restored during the migration process are in a restoring state and can't be used for reads or writes until the process completes.
+
+LRS can be started in either autocomplete or continuous mode. When started in autocomplete mode, the migration completes automatically when the last specified backup file has been restored. When started in continuous mode, the service continuously restores any new backup files that are added, and the migration completes only on the manual cutover. The final cutover step makes the databases available for read and write use on SQL Managed Instance.
+
+ ![Log Replay Service orchestration steps explained for SQL Managed Instance](./media/log-replay-service-migrate/log-replay-service-conceptual.png)
+
+| Operation | Details |
+| :-- | :- |
+| **1. Copy database backups from SQL Server to Azure Blob storage**. | - Copy full, differential, and log backups from SQL Server to Azure Blob storage by using [AzCopy](https://docs.microsoft.com/azure/storage/common/storage-use-azcopy-v10) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). <br />- If you're migrating several databases, a separate folder is required for each database. |
+| **2. Start the LRS service in the cloud**. | - The service can be started with a choice of cmdlets: <br /> PowerShell [start-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/start-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_start](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_start). <br /><br />- Once started, the service takes backups from Azure Blob storage and starts restoring them on SQL Managed Instance. <br /> - Once all initially uploaded backups are restored, the service watches for any new files uploaded to the folder and continuously applies logs based on the LSN chain, until the service is stopped. |
+| **2.1. Monitor the operation progress**. | - Progress of the restore operation can be monitored with a choice of cmdlets: <br /> PowerShell [get-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/get-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_show](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_show). |
+| **2.2. Stop/abort the operation if needed**. | - If the migration process needs to be aborted, the operation can be stopped with a choice of cmdlets: <br /> PowerShell [stop-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/stop-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_stop](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_stop). <br /><br />- This results in deletion of the database being restored on SQL Managed Instance. <br />- Once stopped, LRS can't be resumed for a database, and the migration process needs to be restarted from scratch. |
+| **3. Cutover to the cloud when ready**. | - Once all backups have been restored to SQL Managed Instance, complete the cutover by initiating the LRS complete operation with an API call or a choice of cmdlets: <br />PowerShell [complete-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/complete-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_complete](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_complete). <br /><br />- This stops the LRS service, and the database on SQL Managed Instance is recovered. <br />- Repoint the application connection string from SQL Server to SQL Managed Instance. <br />- When the operation completes, the database is available for read/write operations in the cloud. |
+
+## Requirements for getting started
+
+### SQL Server side
+- SQL Server 2008-2019
+- Full backup of databases (one or multiple files)
+- Differential backup (one or multiple files)
+- Log backup (not split for a transaction log file)
+- **CHECKSUM must be enabled** for the backups (mandatory)
+
+### Azure side
+- PowerShell Az.Sql module version 2.16.0 or later ([install](https://www.powershellgallery.com/packages/Az.Sql/), or use Azure [Cloud Shell](https://docs.microsoft.com/azure/cloud-shell/)); a quick install sketch follows this list
+- Azure CLI version 2.19.0 or later ([install](https://docs.microsoft.com/cli/azure/install-azure-cli))
+- An Azure Blob storage container provisioned
+- A SAS security token with **Read** and **List** permissions only, generated for the Blob storage container
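+
+As a quick way to satisfy the PowerShell requirement above, here's a minimal sketch; it assumes PowerShellGet is available and installs the module for the current user only.
+
+```powershell
+# Install or update the Az.Sql module to at least the minimum required version.
+Install-Module -Name Az.Sql -MinimumVersion 2.16.0 -Scope CurrentUser -Force
+
+# Confirm the installed version.
+Get-InstalledModule -Name Az.Sql | Select-Object Name, Version
+```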
+
+## Best practices
+
+The following best practices are highly recommended:
+- Run [Data Migration Assistant](https://docs.microsoft.com/sql/dma/dma-overview) to validate that your databases will have no issues being migrated to SQL Managed Instance.
+- Split full and differential backups into multiple files, instead of using a single file.
+- Enable backup compression. (A backup sketch that applies these practices follows the next note.)
+- Use Cloud Shell to execute scripts, because it's always updated with the latest released cmdlets.
+- Plan to complete the migration within 47 hours after the LRS service starts.
+
+> [!IMPORTANT]
+> - A database being restored by using LRS can't be used until the migration process completes. This is because the underlying technology is log shipping in no-recovery mode.
+> - The standby mode for log shipping isn't supported by LRS because of version differences between SQL Managed Instance and the latest in-market SQL Server version.
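+
+As an illustration of the backup-related practices above, here's a minimal sketch that takes a full backup with compression and CHECKSUM by using the SqlServer PowerShell module; the instance, database, and file path are hypothetical, and in practice you'd split full and differential backups across multiple files.
+
+```powershell
+# Assumes the SqlServer PowerShell module is installed on the machine taking the backup.
+Import-Module SqlServer
+
+# Full backup with compression and CHECKSUM enabled (both recommended above).
+Backup-SqlDatabase -ServerInstance "localhost" -Database "WideWorldImporters" `
+    -BackupFile "D:\MigrationBackups\WideWorldImporters\WideWorldImporters_full.bak" `
+    -Checksum -CompressionOption On
+```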
+
+## Steps to execute
+
+## Copy backups from SQL Server to Azure Blob storage
+
+You can use either of the following approaches to copy backups to Blob storage when migrating databases to SQL Managed Instance by using LRS:
+- Use SQL Server native [BACKUP TO URL](https://docs.microsoft.com/sql/relational-databases/backup-restore/sql-server-backup-to-url) functionality.
+- Copy the backups to a Blob container by using [AzCopy](https://docs.microsoft.com/azure/storage/common/storage-use-azcopy-v10) or [Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer); see the sketch after this list.
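+
+Here's a minimal AzCopy sketch run from PowerShell; the local folder, storage account, container, and SAS token are hypothetical. Note that the token used for the upload needs write permissions, unlike the Read/List-only token that LRS itself uses.
+
+```powershell
+# Assumes AzCopy v10 is installed and on the PATH.
+$source      = "D:\MigrationBackups\WideWorldImporters"
+$destination = "https://migrationstorage.blob.core.windows.net/migration-backups/WideWorldImporters?<SAS-token-with-write-permission>"
+
+# Copy the whole backup folder; keep a separate folder per database.
+azcopy copy $source $destination --recursive
+```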
+
+## Create Azure Blob and SAS authentication token
+
+Azure Blob storage is used as intermediary storage for backup files between SQL Server and SQL Managed Instance. Follow these steps to create an Azure Blob storage container:
+
+1. [Create a storage account](https://docs.microsoft.com/azure/storage/common/storage-account-create?tabs=azure-portal)
+2. [Create a blob container](https://docs.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-portal) inside the storage account
+
+After the blob container is created, generate a SAS authentication token with Read and List permissions only by following these steps:
+
+1. Access the storage account by using the Azure portal.
+2. Navigate to **Storage Explorer**.
+3. Expand **Blob Containers**.
+4. Right-click the blob container.
+5. Select **Get Shared Access Signature**.
+6. Select the token expiry timeframe. Ensure that the token is valid for the duration of your migration.
+7. Ensure that only **Read** and **List** permissions are selected.
+8. Select **Create**.
+9. Copy the token starting with "sv=" in the URI for use in your code.
+
+> [!IMPORTANT]
+> Permissions for the SAS token for Azure Blob storage need to be Read and List only. If any other permissions are granted for the SAS authentication token, starting the LRS service will fail. These security requirements are by design.
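+
+As a scripted alternative to the portal steps above, here's a minimal sketch that generates an equivalent Read/List-only SAS token with the Az.Storage module; the resource group, storage account, and container names are hypothetical.
+
+```powershell
+# Retrieve the storage account key and build a storage context.
+$storageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName "ResourceGroup01" `
+    -Name "migrationstorage")[0].Value
+$ctx = New-AzStorageContext -StorageAccountName "migrationstorage" -StorageAccountKey $storageAccountKey
+
+# -Permission rl generates a Read + List only token, as required by LRS.
+New-AzStorageContainerSASToken -Name "migration-backups" -Permission rl `
+    -ExpiryTime (Get-Date).AddDays(30) -Context $ctx
+```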
+
+## Log in to Azure and select subscription
+
+Use the following PowerShell cmdlet to log in to Azure:
+
+```powershell
+Login-AzAccount
+```
+
+Select the appropriate subscription where your SQL Managed Instance resides using the following PowerShell cmdlet:
+
+```powershell
+Select-AzSubscription -SubscriptionId <subscription ID>
+```
+
+## Start the migration
+
+Start the migration by starting the LRS service. The service can be started in either autocomplete or continuous mode. When started in autocomplete mode, the migration completes automatically when the last specified backup file has been restored. This option requires the start command to specify the file name of the last backup file. When LRS is started in continuous mode, the service continuously restores any new backup files that are added, and the migration completes only on the manual cutover.
+
+### Start LRS in autocomplete mode
+
+To start the LRS service in autocomplete mode, use the following PowerShell or CLI commands. Specify the last backup file name by using the -LastBackupName parameter. When the specified last backup file has been restored, the service automatically initiates the cutover.
+
+Start LRS in autocomplete mode - PowerShell example:
+
+```powershell
+Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
+ -InstanceName "ManagedInstance01" `
+ -Name "ManagedDatabaseName" `
+ -Collation "SQL_Latin1_General_CP1_CI_AS" `
+ -StorageContainerUri "https://test.blob.core.windows.net/testing" `
+ -StorageContainerSasToken "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D" `
+ -AutoComplete -LastBackupName "last_backup.bak"
+```
+
+Start LRS in autocomplete mode - CLI example:
+
+```cli
+az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb -a --last-bn "backup.bak"
+ --storage-uri "https://test.blob.core.windows.net/testing"
+ --storage-sas "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D"
+```
+
+### Start LRS in continuous mode
+
+Start LRS in continuous mode - PowerShell example:
+
+```powershell
+Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
+ -InstanceName "ManagedInstance01" `
+ -Name "ManagedDatabaseName" `
+ -Collation "SQL_Latin1_General_CP1_CI_AS" -StorageContainerUri "https://test.blob.core.windows.net/testing" `
+ -StorageContainerSasToken "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D"
+```
+
+Start LRS in continuous mode - CLI example:
+
+```cli
+az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb
+ --storage-uri "https://test.blob.core.windows.net/testing"
+ --storage-sas "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D"
+```
+
+> [!IMPORTANT]
+> After LRS is started, any system-managed software patches are halted for the next 47 hours. After this window passes, the next automated software patch automatically stops the ongoing LRS. If that happens, the migration can't be resumed and needs to be restarted from scratch.
+
+## Monitor the migration progress
+
+To monitor the migration operation progress, use the following PowerShell command:
+
+```powershell
+Get-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
+ -InstanceName "ManagedInstance01" `
+ -Name "ManagedDatabaseName"
+```
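+
+If you'd rather poll the status from a script than rerun the command manually, here's a minimal sketch that reuses the same hypothetical resource names:
+
+```powershell
+# Print the restore status every 60 seconds; stop the loop manually (Ctrl+C) when you're done.
+while ($true) {
+    Get-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
+        -InstanceName "ManagedInstance01" `
+        -Name "ManagedDatabaseName" | Format-List
+    Start-Sleep -Seconds 60
+}
+```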
+
+To monitor the migration operation progress, use the following CLI command:
+
+```cli
+az sql midb log-replay show -g mygroup --mi myinstance -n mymanageddb
+```
+
+## Stop the migration
+
+If you need to stop the migration, use the following cmdlets. Stopping the migration deletes the database being restored on SQL Managed Instance, so it won't be possible to resume the migration.
+
+To stop/abort the migration process, use the following PowerShell command:
+
+```powershell
+Stop-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
+ -InstanceName "ManagedInstance01" `
+ -Name "ManagedDatabaseName"
+```
+
+To stop/abort the migration process, use the following CLI command:
+
+```cli
+az sql midb log-replay stop -g mygroup --mi myinstance -n mymanageddb
+```
+
+## Complete the migration (continuous mode)
+
+If LRS is started in continuous mode, after you've ensured that all backups have been restored, initiate the cutover to complete the migration. After the cutover completes, the database is migrated and ready for read and write access.
+
+To complete the migration process in LRS continuous mode, use the following PowerShell command:
+
+```powershell
+Complete-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
+-InstanceName "ManagedInstance01" `
+-Name "ManagedDatabaseName" -LastBackupName "last_backup.bak"
+```
+
+To complete the migration process in LRS continuous mode, use the following CLI command:
+
+```cli
+az sql midb log-replay complete -g mygroup --mi myinstance -n mymanageddb --last-backup-name "backup.bak"
+```
+
+## Next steps
+- Learn more about [Migrate SQL Server to SQL Managed instance](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
+- Learn more about [Differences between SQL Server and Azure SQL Managed Instance](transact-sql-tsql-differences-sql-server.md).
+- Learn more about [Best practices to cost and size workloads migrated to Azure](https://docs.microsoft.com/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-monitor-repair-private-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-monitor-repair-private-cloud.md
Title: Concepts - Monitor and repair Azure VMware Solution private clouds
description: Learn how Azure VMware Solution monitors and repairs VMware ESXi servers on an Azure VMware Solution private cloud. Previously updated : 02/03/2021 Last updated : 02/16/2021 # Monitor and repair Azure VMware Solution private clouds
Azure VMware Solution continuously monitors the VMware ESXi servers on an Azure
## What Azure VMware Solution monitors
-Azure VMware Solution monitors the following for failure conditions on the host:
+Azure VMware Solution monitors the following conditions on the host:
- Processor status - Memory status
Azure VMware Solution monitors the following for failure conditions on the host:
## Azure VMware Solution host remediation
-When Azure VMware Solution detects a degradation or failure on an Azure VMware Solution node on a tenantΓÇÖs private cloud, it triggers the host remediation process. Host remediation involves replacing the faulty node with a new healthy node.
+When Azure VMware Solution detects a degradation or failure on an Azure VMware Solution node, it triggers the host remediation process. Host remediation involves replacing the faulty node with a new healthy node.
-The host remediation process starts by adding a new healthy node in the cluster. Then, when possible, the faulty host is placed in VMware vSphere maintenance mode. VMware vMotion is used to move the VMs off the faulty host to other available servers in the cluster, potentially allowing for zero downtime live migration of workloads. In scenarios where the faulty host can't be placed in maintenance mode, the host is removed from the cluster.
+Host remediation starts by adding a new healthy node in the cluster. Then, when possible, the faulty host is placed in VMware vSphere maintenance mode. VMware vMotion moves the VMs off the faulty host to other available servers in the cluster, potentially allowing zero downtime for live migration of workloads. If the faulty host can't be placed in maintenance mode, the host is removed from the cluster.
## Next steps
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-upgrades https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-upgrades.md
Title: Concepts - Private cloud updates and upgrades description: Learn about the key upgrade processes and features in Azure VMware Solution. Previously updated : 02/02/2021 Last updated : 02/16/2021 # Azure VMware Solution private cloud updates and upgrades
-One of the key benefits of Azure VMware Solution private clouds is that the platform is maintained for you. Platform maintenance includes automated updates to a VMware validated software bundle, helping to ensure you're using the latest version of the validated Azure VMware Solution private cloud software.
+One benefit of Azure VMware Solution private clouds is the platform is maintained for you. Maintenance includes automated updates to a VMware validated software bundle to help ensure you're using the latest version of Azure VMware Solution private cloud software.
Specifically, an Azure VMware Solution private cloud includes:
Specifically, an Azure VMware Solution private cloud includes:
- VMware vSAN datastore for vSphere workload VMs - VMware HCX for workload mobility
-In addition to these components, an Azure VMware Solution private cloud includes resources in the Azure underlay required for connectivity and to operate the private cloud. Azure VMware Solution continuously monitors the health of both the underlay and the VMware components. When Azure VMware Solution detects a failure, it takes action to repair the failed components.
+An Azure VMware Solution private cloud also includes resources in the Azure underlay required for connectivity and to operate the private cloud. Azure VMware Solution continuously monitors the health of both the underlay and the VMware components. When Azure VMware Solution detects a failure, it takes action to repair the failed components.
## What components get updated?
Azure VMware Solution applies the following types of updates to VMware component
- Updates: Minor version updates of one or more VMware components. - Upgrades: Major version updates of one or more VMware components.
-You will be notified before and after patches are applied to your private clouds. We will also work with you to schedule a maintenance window before applying updates or upgrades to your private cloud.
+You'll be notified before and after patches are applied to your private clouds. We'll also work with you to schedule a maintenance window before applying updates or upgrades to your private cloud.
## VMware appliance backup
-In addition to making updates, Azure VMware Solution takes a configuration backup of these VMware components:
+Azure VMware Solution also takes a configuration backup of the following VMware components:
- vCenter Server - NSX-T Manager
-At times of failure, Azure VMware Solution can restore these from the configuration backup.
+At times of failure, Azure VMware Solution can restore these components from the configuration backup.
For more information on VMware software versions, see the [private clouds and clusters concept article](concepts-private-clouds-clusters.md) and the [FAQ](faq.yml).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/deploy-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-azure-vmware-solution.md
Title: Deploy and configure Azure VMware Solution
-description: Learn how to use the information gathered in the planning stage to deploy the Azure VMware Solution private cloud.
+description: Learn how to use the information gathered in the planning stage to deploy and configure the Azure VMware Solution private cloud.
Previously updated : 12/24/2020+ Last updated : 02/17/2021 # Deploy and configure Azure VMware Solution
-In this article, you'll use the information from the [planning section](production-ready-deployment-steps.md) to deploy Azure VMware Solution.
+In this article, you'll use the information from the [planning section](production-ready-deployment-steps.md) to deploy and configure Azure VMware Solution.
>[!IMPORTANT] >If you haven't defined the information yet, go back to the [planning section](production-ready-deployment-steps.md) before continuing.
-## Register the resource provider
+## Create an Azure VMware Solution private cloud
-
-## Deploy Azure VMware Solution
-
-Use the information you gathered in the [Planning the Azure VMware Solution deployment](production-ready-deployment-steps.md) article:
-
->[!NOTE]
->To deploy Azure VMware Solution, you must be at minimum contributor level in the subscription.
-
+Follow the prerequisites and steps in the [Create an Azure VMware Solution private cloud](tutorial-create-private-cloud.md) tutorial. You can create an Azure VMware Solution private cloud by using the [Azure portal](tutorial-create-private-cloud.md#azure-portal) or by using the [Azure CLI](tutorial-create-private-cloud.md#azure-cli).
>[!NOTE] >For an end-to-end overview of this step, view the [Azure VMware Solution: Deployment](https://www.youtube.com/embed/gng7JjxgayI) video.
If you didn't define a virtual network in the deployment step and your intent is
The jump box is in the virtual network where Azure VMware Solution connects through its ExpressRoute circuit. In Azure, go to the jump box's network interface and [view the effective routes](../virtual-network/manage-route-table.md#view-effective-routes).
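If you prefer scripting to the portal, here's a minimal sketch that lists the same effective routes with the Az.Network module; the NIC and resource group names are hypothetical.

```powershell
# Lists the effective routes applied to the jump box's network interface.
Get-AzEffectiveRouteTable -NetworkInterfaceName "jumpbox-nic" `
    -ResourceGroupName "avs-jumpbox-rg" | Format-Table
```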
-In the effective routes list, you should see the networks created as part of the Azure VMware Solution deployment. You'll see multiple networks that were derived from the [`/22` network you defined](production-ready-deployment-steps.md#ip-address-segment) during the [deployment step](#deploy-azure-vmware-solution) earlier in this article.
+In the effective routes list, you should see the networks created as part of the Azure VMware Solution deployment. You'll see multiple networks that were derived from the [`/22` network you defined](production-ready-deployment-steps.md#ip-address-segment) when you [create a private cloud](#create-an-azure-vmware-solution-private-cloud).
:::image type="content" source="media/pre-deployment/azure-vmware-solution-effective-routes.png" alt-text="Verify network routes advertised from Azure VMware Solution to Azure Virtual Network" lightbox="media/pre-deployment/azure-vmware-solution-effective-routes.png":::
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/enable-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/enable-azure-vmware-solution.md
Title: How to enable your Azure VMware Solution resource
-description: Learn how to submit a support request to enable your Azure VMware Solution resource. You can also request more hosts in your existing Azure VMware Solution private cloud.
+ Title: Request host quota and enable Azure VMware Solution
+description: Learn how to request host quota/capacity and enable the Azure VMware Solution resource provider. You can also request more hosts in an existing Azure VMware Solution private cloud.
Previously updated : 11/12/2020+ Last updated : 02/17/2021
-# How to enable Azure VMware Solution resource
-Learn how to submit a support request to enable your [Azure VMware Solution](introduction.md) resource. You can also request more hosts in your existing Azure VMware Solution private cloud.
+# Request host quota and enable Azure VMware Solution
-## Eligibility criteria
-
-You'll need an Azure account in an Azure subscription. The Azure subscription must comply with one of the following criteria:
-
-* A subscription under an [Azure Enterprise Agreement (EA)](../cost-management-billing/manage/ea-portal-agreements.md) with Microsoft.
-* A Cloud Solution Provider (CSP) managed subscription under an existing CSP Azure offers contract or an Azure plan.
--
-## Enable Azure VMware Solution for EA customers
-Before you create your Azure VMware Solution resource, you'll need to submit a support ticket to have your hosts allocated. Once the support team receives your request, it takes up to five business days to confirm your request and allocate your hosts. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll go through the same process.
--
-1. In your Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
- - **Issue type:** Technical
- - **Subscription:** Select your subscription
- - **Service:** All services > Azure VMware Solution
- - **Resource:** General question
- - **Summary:** Need capacity
- - **Problem type:** Capacity Management Issues
- - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
-
-1. In the **Description** of the support ticket, on the **Details** tab, provide:
-
- - POC or Production
- - Region Name
- - Number of hosts
- - Any other details
+In this how-to, you'll learn how to request host quota/capacity for Azure VMware Solution and how to enable the Microsoft.AVS resource provider, which enables the service. Before you can enable Azure VMware Solution, you'll need to submit a support ticket to have your hosts allocated. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll follow the same process.
- >[!NOTE]
- >Azure VMware Solution recommends a minimum of three hosts to spin up your private cloud and for redundancy N+1 hosts.
-
-1. Select **Review + Create** to submit the request.
-
- It will take up to five business days for a support representative to confirm your request.
-
- >[!IMPORTANT]
- >If you already have an existing Azure VMware Solution, and you are requesting additional hosts, please note that we need five business days to allocate the hosts.
-
-1. Before you can provision your hosts, make sure that you register the **Microsoft.AVS** resource provider in the Azure portal.
-
- ```azurecli-interactive
- az provider register -n Microsoft.AVS --subscription <your subscription ID>
- ```
-
- For additional ways to register the resource provider, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
-
-## Enable Azure VMware Solution for CSP customers
-
-CSPs must use [Microsoft Partner Center](https://partner.microsoft.com) to enable Azure VMware Solution for their customers. This article uses [CSP Azure plan](/partner-center/azure-plan-lp) as an example to illustrate the purchase procedure for partners.
-
- >[!IMPORTANT]
- >Azure VMware Solution service does not provide a multi-tenancy required. Hosting partners requiring it are not supported.
-
-1. In **Partner Center**, select **CSP** to access the **Customers** area.
-
- :::image type="content" source="media/enable-azure-vmware-solution/csp-customers-screen.png" alt-text="Microsoft Partner Center customers area" lightbox="media/enable-azure-vmware-solution/csp-customers-screen.png":::
-
-1. Select your customer and then select **Add products**.
-
- :::image type="content" source="media/enable-azure-vmware-solution/csp-partner-center.png" alt-text="Microsoft Partner Center" lightbox="media/enable-azure-vmware-solution/csp-partner-center.png":::
+>[!IMPORTANT]
+>It can take a few days to allocate the hosts, depending on the number requested. So request what you need for provisioning, so that you don't have to request a quota increase as often.
-1. Select **Azure plan** and then select **Add to cart**.
-1. Review and finish the general set up of the Azure plan subscription for your customer. For more information, see [Microsoft Partner Center documentation](/partner-center/azure-plan-manage).
+The overall process is simple and includes two steps:
+- Request additional host quota/capacity for either [EA customers](#request-host-quota-for-ea-customers) or [CSP customers](#request-host-quota-for-csp-customers)
+- Enable the Microsoft.AVS resource provider
-After configuring the Azure plan and the needed [Azure RBAC permissions](/partner-center/azure-plan-manage) are in place for the subscription, you'll engage Microsoft to enable the quota for an Azure plan subscription. Access Azure portal from [Microsoft Partner Center](https://partner.microsoft.com) using **Admin On Behalf Of** (AOBO) procedure.
+## Eligibility criteria
-1. Sign in to [Partner Center](https://partner.microsoft.com).
+You'll need an Azure account in an Azure subscription. The Azure subscription must meet one of the following criteria:
-1. Select **CSP** to access the **Customers** area.
+- A subscription under an [Azure Enterprise Agreement (EA)](../cost-management-billing/manage/ea-portal-agreements.md) with Microsoft.
+- A Cloud Solution Provider (CSP) managed subscription under an existing CSP Azure offers contract or an Azure plan.
-1. Expand customer details and select **Microsoft Azure Management Portal**.
+## Request host quota for EA customers
-1. In Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
+1. In your Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
- **Issue type:** Technical - **Subscription:** Select your subscription - **Service:** All services > Azure VMware Solution
After configuring the Azure plan and the needed [Azure RBAC permissions](/partne
- Region Name - Number of hosts - Any other details
- - Is intended to host multiple customers?
>[!NOTE] >Azure VMware Solution recommends a minimum of three hosts to spin up your private cloud and for redundancy N+1 hosts. 1. Select **Review + Create** to submit the request.
- It will take up to five business days for a support representative to confirm your request.
-
- >[!IMPORTANT]
- >If you already have an existing Azure VMware Solution, and you are requesting additional hosts, please note that we need five business days to allocate the hosts.
-1. If the subscription is managed by the service provider then their administration team must access Azure portal using again **Admin On Behalf Of** (AOBO) procedure from Partner Center. One in Azure portal launch a [Cloud Shell](../cloud-shell/overview.md) instance and register the **Microsoft.AVS** resource provider and proceed with the deployment of the Azure VMware Solution private cloud.
+## Request host quota for CSP customers
- ```azurecli-interactive
- az provider register -n Microsoft.AVS --subscription <your subscription ID>
- ```
-
- For additional ways to register the resource provider, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
+CSPs must use [Microsoft Partner Center](https://partner.microsoft.com) to enable Azure VMware Solution for their customers. This article uses [CSP Azure plan](/partner-center/azure-plan-lp) as an example to illustrate the purchase procedure for partners.
-1. If the subscription is managed directly by the customer the registration of the **Microsoft.AVS** resource provider must be done by an user with enough permissions in the subscription, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md) for more details and ways to register the resource provider.
+Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from Partner Center.
+
+>[!IMPORTANT]
+>The Azure VMware Solution service doesn't provide multi-tenancy. Hosting partners that require multi-tenancy aren't supported.
+
+1. Configure the CSP Azure plan:
+
+ 1. In **Partner Center**, select **CSP** to access the **Customers** area.
+
+ :::image type="content" source="media/enable-azure-vmware-solution/csp-customers-screen.png" alt-text="Microsoft Partner Center customers area" lightbox="media/enable-azure-vmware-solution/csp-customers-screen.png":::
+
+ 1. Select your customer and then select **Add products**.
+
+ :::image type="content" source="media/enable-azure-vmware-solution/csp-partner-center.png" alt-text="Microsoft Partner Center" lightbox="media/enable-azure-vmware-solution/csp-partner-center.png":::
+
+ 1. Select **Azure plan** and then select **Add to cart**.
+
+ 1. Review and finish the general setup of the Azure plan subscription for your customer. For more information, see [Microsoft Partner Center documentation](/partner-center/azure-plan-manage).
+
+1. After you configure the Azure plan and you have the needed [Azure RBAC permissions](/partner-center/azure-plan-manage) in place for the subscription, you'll request the quota for your Azure plan subscription.
+
+ 1. Access Azure portal from [Microsoft Partner Center](https://partner.microsoft.com) using the **Admin On Behalf Of** (AOBO) procedure.
+
+ 1. Select **CSP** to access the **Customers** area.
+
+ 1. Expand customer details and select **Microsoft Azure Management Portal**.
+
+ 1. In Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
+ - **Issue type:** Technical
+ - **Subscription:** Select your subscription
+ - **Service:** All services > Azure VMware Solution
+ - **Resource:** General question
+ - **Summary:** Need capacity
+ - **Problem type:** Capacity Management Issues
+ - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
+
+ 1. In the **Description** of the support ticket, on the **Details** tab, provide:
+
+ - POC or Production
+ - Region Name
+ - Number of hosts
+ - Any other details
+ - Is intended to host multiple customers?
+
+ >[!NOTE]
+ >Azure VMware Solution recommends a minimum of three hosts to spin up your private cloud and for redundancy N+1 hosts.
+
+ 1. Select **Review + Create** to submit the request.
+
+## Register the **Microsoft.AVS** resource provider
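+
+As a minimal sketch (one possible approach, assuming the Az PowerShell module is installed), the provider can be registered and checked like this:
+
+```powershell
+# Register the Azure VMware Solution resource provider on the selected subscription.
+Register-AzResourceProvider -ProviderNamespace Microsoft.AVS
+
+# Check the registration state.
+Get-AzResourceProvider -ProviderNamespace Microsoft.AVS |
+    Select-Object ProviderNamespace, RegistrationState
+```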
+ ## Next steps
-After you enable your Azure VMware Solution resource, and you have the proper networking in place, you can [create a private cloud](tutorial-create-private-cloud.md).
+After you enable the resource provider and have the proper networking in place, you can [create a private cloud](tutorial-create-private-cloud.md).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/includes/create-private-cloud-azure-portal-steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/includes/create-private-cloud-azure-portal-steps.md
Title: Deploy Azure VMware Solution
-description: Steps to deploy Azure VMware Solution using the Azure portal.
+ Title: Create an Azure VMware Solution private cloud
+description: Steps to create an Azure VMware Solution private cloud using the Azure portal.
Previously updated : 09/28/2020 Last updated : 02/17/2021 <!-- Used in deploy-azure-vmware-solution.md and tutorial-create-private-cloud.md -->
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/includes/register-resource-provider-steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/includes/register-resource-provider-steps.md
Title: Register the Azure VMware Solution resource provider description: Steps to register the Azure VMware Solution resource provider. Previously updated : 12/24/2020 Last updated : 02/17/2021
-<!-- Used in avs-deployment.md and tutorial-create-private-cloud.md -->
+<!-- Used in deploy-azure-vmware-solution.md and tutorial-create-private-cloud.md -->
-To use Azure VMware Solution, you must first register the resource provider with your subscription.
+To use Azure VMware Solution, you must first register the resource provider with your subscription. For more information about resource providers, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types).
### Azure CLI
To use Azure VMware Solution, you must first register the resource provider with
az provider register -n Microsoft.AVS --subscription <your subscription ID> ``` - ### Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com).
az provider register -n Microsoft.AVS --subscription <your subscription ID>
1. Select **Resource providers** and enter **Microsoft.AVS** into the search.
-1. If the resource provider is not registered, select **Register**.
+1. If the resource provider is not registered, select **Register**.
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/netapp-files-with-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
In this article, we'll walk through the steps of integrating Azure NetApp Files
### Features (Services where Azure NetApp Files are used.) -- **Active Directory connections**: Azure NetApp Files supports [Active Directory Domain Services and Azure Active Directory Domain Services](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md#decide-which-domain-services-to-use).
+- **Active Directory connections**: Azure NetApp Files supports [Active Directory Domain Services and Azure Active Directory Domain Services](../azure-netapp-files/create-active-directory-connections.md#decide-which-domain-services-to-use).
- **Share Protocol**: Azure NetApp Files supports Server Message Block (SMB) and Network File System (NFS) protocols. This support means the volumes can be mounted on the Linux client and can be mapped on Windows client.
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/tutorial-create-private-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-create-private-cloud.md
Title: Tutorial - Deploy vSphere Cluster in Azure
-description: Learn how to deploy a vSphere Cluster in Azure using Azure VMware Solution
+ Title: Tutorial - Create and deploy an Azure VMware Solution private cloud
+description: Learn how to create and deploy an Azure VMware Solution private cloud
Last updated 11/19/2020
-# Tutorial: Deploy an Azure VMware Solution private cloud in Azure
+# Tutorial: Create an Azure VMware Solution private cloud
-Azure VMware Solution gives you the ability to deploy a vSphere cluster in Azure. The minimum initial deployment is three hosts. Additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster.
+In this tutorial, you'll learn how to create and deploy an Azure VMware Solution private cloud. The minimum initial deployment of hosts is three. Additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster.
Because Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter at launch, additional configuration is needed. These procedures and related prerequisites are covered in this tutorial.
In this tutorial, you'll learn how to:
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Appropriate administrative rights and permission to create a private cloud.
+- Appropriate administrative rights and permission to create a private cloud. You must be at minimum contributor level in the subscription.
+- Use the information you gathered in the [planning](production-ready-deployment-steps.md) article to deploy Azure VMware Solution.
- Ensure you have the appropriate networking configured as described in [Tutorial: Network checklist](tutorial-network-checklist.md).-
-## Register the resource provider
--
+- Hosts have been provisioned and the Microsoft.AVS resource provider registered as described in [Request hosts and enable the Microsoft.AVS resource provider](enable-azure-vmware-solution.md).
## Create a Private Cloud
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-manage-vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-manage-vms.md
You can run an on-demand backup of a VM after you set up its protection. Keep th
* The retention range for an on-demand backup is the retention value that you specify when you trigger the backup. > [!NOTE]
-> The Azure Backup service supports up to nine on-demand backups per day, but Microsoft recommends no more than four daily on-demand backups to ensure best performance.
+> The Azure Backup service supports up to three on-demand backups per day, and one additional scheduled backup.
To trigger an on-demand backup:
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-automation.md
The template isn't directly accessible since it's under a customer's storage acc
3. Deploy the template to create a new VM as explained [here](../azure-resource-manager/templates/deploy-powershell.md). ```powershell
- New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateUri $templateBlobFullURI -storageAccountType Standard_GRS
+ New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateUri $templateBlobFullURI
``` ### Create a VM using the config file
backup https://docs.microsoft.com/en-us/azure/backup/backup-support-matrix-iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Back up managed disks after enabling resource group lock | Not supported.<br/><b
Modify backup policy for a VM | Supported.<br/><br/> The VM will be backed up by using the schedule and retention settings in new policy. If retention settings are extended, existing recovery points are marked and kept. If they're reduced, existing recovery points will be pruned in the next cleanup job and eventually deleted. Cancel a backup job| Supported during snapshot process.<br/><br/> Not supported when the snapshot is being transferred to the vault. Back up the VM to a different region or subscription |Not supported.<br><br>To successfully back up, virtual machines must be in the same subscription as the vault for backup.
-Backups per day (via the Azure VM extension) | One scheduled backup per day.<br/><br/>The Azure Backup service supports up to nine on-demand backups per day, but Microsoft recommends no more than four daily on-demand backups to ensure best performance.
+Backups per day (via the Azure VM extension) | One scheduled backup per day.<br/><br/>The Azure Backup service supports up to three on-demand backups per day, and one additional scheduled backup.
Backups per day (via the MARS agent) | Three scheduled backups per day. Backups per day (via DPM/MABS) | Two scheduled backups per day. Monthly/yearly backup| Not supported when backing up with Azure VM extension. Only daily and weekly is supported.<br/><br/> You can set up the policy to retain daily/weekly backups for monthly/yearly retention period.
bastion https://docs.microsoft.com/en-us/azure/bastion/bastion-connect-vm-ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-connect-vm-ssh.md
In order to connect to the Linux VM via SSH, you must have the following ports o
1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown. :::image type="content" source="./media/bastion-connect-vm-ssh/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected":::
-1. After you select Bastion, a side bar appears that has three tabs ΓÇô RDP, SSH, and Bastion. If Bastion was provisioned for the virtual network, the Bastion tab is active by default. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./tutorial-create-host-portal.md).
+1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
+1. On the **Connect using Azure Bastion** page, enter the **Username** and **Password**.
- :::image type="content" source="./media/bastion-connect-vm-ssh/bastion.png" alt-text="Screenshot shows the Connect to virtual machine dialog box with BASTION selected":::
-1. Enter the username and password for SSH to your virtual machine.
-1. Select **Connect** button after entering the key.
+ :::image type="content" source="./media/bastion-connect-vm-ssh/password.png" alt-text="Password authentication":::
+1. Select **Connect** to connect to the VM.
## <a name="privatekey"></a>Connect: Manually enter a private key 1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown. :::image type="content" source="./media/bastion-connect-vm-ssh/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected":::
-1. After you select Bastion, a side bar appears that has three tabs ΓÇô RDP, SSH, and Bastion. If Bastion was provisioned for the virtual network, the Bastion tab is active by default. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./tutorial-create-host-portal.md).
+1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
+1. On the **Connect using Azure Bastion** page, enter the **Username** and **SSH Private Key**.
- :::image type="content" source="./media/bastion-connect-vm-ssh/bastion.png" alt-text="Connect to virtual machine dialog box with BASTION selected.":::
-1. Enter the username and select **SSH Private Key**.
+ :::image type="content" source="./media/bastion-connect-vm-ssh/ssh-private-key.png" alt-text="SSH Private Key authentication":::
1. Enter your private key into the text area **SSH Private Key** (or paste it directly).
-1. Select **Connect** button after entering the key.
+1. Select **Connect** to connect to the VM.
## <a name="ssh"></a>Connect: Using a private key file 1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
- :::image type="content" source="./media/bastion-connect-vm-ssh/connect.png" alt-text="Connect selected":::
-1. After you select Bastion, a side bar appears that has three tabs ΓÇô RDP, SSH, and Bastion. If Bastion was provisioned for the virtual network, the Bastion tab is active by default. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./tutorial-create-host-portal.md).
+ :::image type="content" source="./media/bastion-connect-vm-ssh/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected":::
+1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
+1. On the **Connect using Azure Bastion** page, enter the **Username** and **SSH Private Key from Local File**.
+
+ :::image type="content" source="./media/bastion-connect-vm-ssh/private-key-file.png" alt-text="SSH Private Key file":::
- :::image type="content" source="./media/bastion-connect-vm-ssh/bastion.png" alt-text="BASTION selected.":::
-1. Enter the username and select **SSH Private Key from Local File**.
-1. Select the **Browse** button (the folder icon in the local file).
1. Browse for the file, then select **Open**. 1. Select **Connect** to connect to the VM. Once you click Connect, SSH to this virtual machine will directly open in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP of your virtual machine.
In order to connect to the Linux VM via SSH, you must have the following ports o
> 1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
-1. After you select Bastion, a side bar appears that has three tabs ΓÇô RDP, SSH, and Bastion. If Bastion was provisioned for the virtual network, the Bastion tab is active by default. If you didn't provision Bastion for the virtual network, see [Configure Bastion](bastion-create-host-portal.md).
- :::image type="content" source="./media/bastion-connect-vm-ssh/bastion.png" alt-text="Bastion tab":::
-1. Enter the username and select **SSH Private Key from Azure Key Vault**.
+ :::image type="content" source="./media/bastion-connect-vm-ssh/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected":::
+1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
+1. On the **Connect using Azure Bastion** page, enter the **Username** and select **SSH Private Key from Azure Key Vault**.
+
+ :::image type="content" source="./media/bastion-connect-vm-ssh/ssh-key-vault.png" alt-text="SSH Private Key from Azure Key Vault":::
1. Select the **Azure Key Vault** dropdown and select the resource in which you stored your SSH private key. If you didn't set up an Azure Key Vault resource, see [Create a key vault](../key-vault/general/quick-create-portal.md) and store your SSH private key as the value of a new Key Vault secret. :::image type="content" source="./media/bastion-connect-vm-ssh/key-vault.png" alt-text="Azure Key Vault":::
-Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md).
-
+ Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md).
1. Select the **Azure Key Vault Secret** dropdown and select the Key Vault secret containing the value of your SSH private key.
-1. Select **Connect** to connect to the VM. Once you click Connect, SSH to this virtual machine will directly open in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP of your virtual machine.
+1. Select **Connect** to connect to the VM. Once you click **Connect**, SSH to this virtual machine will directly open in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP of your virtual machine.
## Next steps
-Read the [Bastion FAQ](bastion-faq.md)
+For more information about Azure Bastion, see the [Bastion FAQ](bastion-faq.md).
bastion https://docs.microsoft.com/en-us/azure/bastion/tutorial-create-host-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/tutorial-create-host-portal.md
Previously updated : 10/13/2020 Last updated : 02/12/2021
batch https://docs.microsoft.com/en-us/azure/batch/batch-rendering-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-applications.md
Some applications only support Windows, but most are supported on both Windows a
## Applications on latest CentOS 7 rendering image
-The following list applies to the CentOS rendering image, version 1.1.7.
+The following list applies to the CentOS rendering image, version 1.2.0.
* Autodesk Maya I/O 2020 Update 4.6 * Autodesk Arnold for Maya 2020 (Arnold version 6.2.0.0) MtoA-4.2.0-2020
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-guestos-msrc-releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
Title: List of updates applied to the Azure Guest OS | Microsoft Docs
description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to the Guest OS you are using. documentationcenter: na-+ editor: '' ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 na Previously updated : 2/9/2021 Last updated : 2/17/2021
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Anomaly-Detector/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview.md
Previously updated : 01/05/2021 Last updated : 02/16/2021 keywords: anomaly detection, machine learning, algorithms
With the Anomaly Detector, you can automatically detect anomalies throughout you
|Anomaly detection in real-time. | Detect anomalies in your streaming data by using previously seen data points to determine if your latest one is an anomaly. This operation generates a model using the data points you send, and determines if the target point is an anomaly. By calling the API with each new data point you generate, you can monitor your data as it's created. | |Detect anomalies throughout your data set as a batch. | Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. | |Detect change points throughout your data set as a batch. | Use your time series to detect any trend change points that exist in your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
-| Get additional information about your data. | Get useful details about your data and any observed anomalies, including expected values, anomaly boundaries and positions. |
+| Get additional information about your data. | Get useful details about your data and any observed anomalies, including expected values, anomaly boundaries, and positions. |
| Adjust anomaly detection boundaries. | The Anomaly Detector API automatically creates boundaries for anomaly detection. Adjust these boundaries to increase or decrease the API's sensitivity to data anomalies, and better fit your data. |

## Demo
To learn how to call the Anomaly Detector API, try this [Notebook](https://aka.m
To run the Notebook, complete the following steps:

1. Get a valid Anomaly Detector API subscription key and an API endpoint. The section below has instructions for signing up.
-1. Sign in, and click Clone, in the upper right corner.
-1. Un-check the "public" option in the dialog box before completing the clone operation, otherwise your notebook, including any subscription keys, will be public.
-1. Click **Run on free compute**
+1. Sign in, and then select **Clone** in the upper-right corner.
+1. Uncheck the "public" option in the dialog box before completing the clone operation; otherwise, your notebook, including any subscription keys, will be public.
+1. Select **Run on free compute**.
1. Select one of the notebooks.
1. Add your valid Anomaly Detector API subscription key to the `subscription_key` variable.
1. Change the `endpoint` variable to your endpoint. For example: `https://westus2.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/last/detect`
-1. On the top menu bar, click **Cell**, then **Run All**.
+1. On the top menu bar, select **Cell**, then **Run All**.
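Outside of the notebook, you can also sanity-check your subscription key and endpoint with a direct REST call. The sketch below assumes the `last/detect` operation and a toy three-point series; in practice the API expects a longer series, so treat this only as an illustration of the endpoint and headers.

```bash
# Hedged example: call the "detect latest point" operation directly.
# Replace the endpoint and key with the values from your Anomaly Detector resource.
curl -X POST "https://westus2.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/last/detect" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "granularity": "daily",
        "series": [
          { "timestamp": "2021-01-01T00:00:00Z", "value": 32.1 },
          { "timestamp": "2021-01-02T00:00:00Z", "value": 32.3 },
          { "timestamp": "2021-01-03T00:00:00Z", "value": 60.9 }
        ]
      }'
```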
## Workflow
No customer configuration is necessary to enable zone-resiliency. Zone-resilienc
## Deploy on premises using Docker containers
-[Use Anomaly Detector containers](anomaly-detector-container-howto.md) to deploy API features on-premises. Docker containers enable you to bring the service closer to your data for compliance, security or other operational reasons.
+[Use Anomaly Detector containers](anomaly-detector-container-howto.md) to deploy API features on-premises. Docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
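As a rough illustration of what an on-premises deployment involves (the linked how-to is the authoritative source), a `docker run` sketch follows; the image tag, billing endpoint, and key shown here are assumptions you would replace with the values for your own resource.

```bash
# Hedged sketch of running the container locally. Eula, Billing, and ApiKey follow
# the standard Cognitive Services container pattern; the image path is illustrative.
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
  mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest \
  Eula=accept \
  Billing="https://<your-resource-name>.cognitiveservices.azure.com/" \
  ApiKey="<your-subscription-key>"
```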
## Join the Anomaly Detector community
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
Title: Introduction to Computer Vision spatial analysis
+ Title: Overview of Spatial Analysis
description: This document explains the basic concepts and features of a Computer Vision spatial analysis container. -+ -+ Previously updated : 12/14/2020 Last updated : 02/01/2021
-# Introduction to Computer Vision spatial analysis
+# Overview of Computer Vision spatial analysis
Computer Vision spatial analysis is a new feature of Azure Cognitive Services Computer Vision that helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, run AI operations to extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, an AI operation can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines.
-## The basics of spatial analysis
+## The basics of Spatial Analysis
Today the core operations of spatial analysis are all built on a pipeline that ingests video, detects people in the video, tracks the people as they move around over time, and generates events as people interact with regions of interest.
-## Spatial analysis terms
+## Spatial Analysis terms
| Term | Definition |
|--|--|
Today the core operations of spatial analysis are all built on a pipeline that i
| Region of Interest | This is a zone or line defined in the input video as part of configuration. When a person interacts with the region of the video the system generates an event. For example, for the PersonCrossingLine operation, a line is defined in the video. When a person crosses that line an event is generated. |
| Event | An event is the primary output of spatial analysis. Each operation emits a specific event either periodically (ex. once per minute) or when a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the PeopleCount operation can emit an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |
-## Example use cases for spatial analysis
+## Responsible use of Spatial Analysis technology
-The following are example use cases that we had in mind as we designed and tested spatial analysis.
+To learn how to use Spatial Analysis technology responsibly, see the [transparency note](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext). Microsoft's transparency notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment.
-**Social Distancing Compliance** - An office space has several cameras that use spatial analysis to monitor social distancing compliance by measuring the distance between people. The facilities manager can use heatmaps showing aggregate statistics of social distancing compliance over time to adjust the workspace and make social distancing easier.
+## Spatial Analysis gating for public preview
-**Shopper Analysis** - A grocery store uses cameras pointed at product displays to measure the impact of merchandising changes on store traffic. The system allows the store manager to identify which new products drive the most change to engagement.
-
-**Queue Management** - Cameras pointed at checkout queues provide alerts to managers when wait time gets too long, allowing them to open more lines. Historical data on queue abandonment gives insights into consumer behavior.
-
-**Face Mask Compliance** - Retail stores can use cameras pointing at the store fronts to check if customers walking into the store are wearing face masks to maintain safety compliance and analyze aggregate statistics to gain insights on mask usage trends.
-
-**Building Occupancy & Analysis** - An office building uses cameras focused on entrances to key spaces to measure footfall and how people use the workplace. Insights allow the building manager to adjust service and layout to better serve occupants.
-
-**Minimum Staff Detection** - In a data center, cameras monitor activity around servers. When employees are physically fixing sensitive equipment two people are always required to be present during the repair for security reasons. Cameras are used to verify that this guideline is followed.
-
-**Workplace Optimization** - In a fast casual restaurant, cameras in the kitchen are used to generate aggregate information about employee workflow. This is used by managers to improve processes and training for the team.
-
-## Considerations when choosing a use case
-
-**Avoid critical safety alerting** - Spatial analysis was not designed for critical safety real-time alerting. It should not be relied on for scenarios when real-time alerts are needed to trigger intervention to prevent injury, like turning off a piece of heavy machinery when a person is present. It can be used for risk reduction using statistics and intervention to reduce risky behavior, like people entering a restricted/forbidden area.
-
-**Avoid use for employment-related decisions** - Spatial analysis provides probabilistic metrics regarding the location and movement of people within a space. While this data may be useful for aggregate process improvement, the data is not a good indicator of individual worker performance and should not be used for making employment-related decisions.
-
-**Avoid use for health care-related decisions** - Spatial analysis provides probabilistic and partial data related to people's movements. The data is not suitable for making health-related decisions.
-
-**Avoid use in protected spaces** - Protect individuals' privacy by evaluating camera locations and positions, adjusting angles and region of interests so they do not overlook protected areas such as restrooms.
-
-**Carefully consider use in schools or elderly care facilities** - Spatial analysis has not heavily tested with data containing minors
-under the age of 18 or adults over age 65. We would recommend that customers thoroughly evaluate error rates for their scenario in environments where these ages predominate.
-
-**Carefully consider use in public spaces** - Evaluate camera locations and positions, adjusting angles and region of interests to minimize collection from public spaces. Lighting and weather in public spaces such as streets and parks will significantly impact the performance of the spatial analysis system, and it is extremely difficult to provide effective disclosure in public spaces.
-
-## Spatial analysis gating for public preview
-
-To ensure spatial analysis is used for scenarios it was designed for, we are making this technology available to customers through an application process. To get access to spatial analysis, you will need to start by filling out our online intake form. [Begin your
-application here](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRyQZ7B8Cg2FEjpibPziwPcZUNlQ4SEVORFVLTjlBSzNLRlo0UzRRVVNPVy4u).
+To ensure spatial analysis is used for scenarios it was designed for, we are making this technology available to customers through an application process. To get access to spatial analysis, you will need to start by filling out our online intake form. [Begin your application here](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRyQZ7B8Cg2FEjpibPziwPcZUNlQ4SEVORFVLTjlBSzNLRlo0UzRRVVNPVy4u).
Access to the spatial analysis public preview is subject to Microsoft's sole discretion based on our eligibility criteria, vetting process, and availability to support a limited number of customers during this gated preview. In public preview, we are looking for customers who have a significant relationship with Microsoft and are interested in working with us on the recommended use cases and on additional scenarios that are in keeping with our responsible AI commitments.

## Next steps

> [!div class="nextstepaction"]
-> [Characteristics and limitations for spatial analysis](/legal/cognitive-services/computer-vision/accuracy-and-limitations?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext)
+> [Get started with Spatial Analysis Container](spatial-analysis-container.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/spatial-analysis-container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
sudo systemctl --now enable nvidia-mps.service
## Configure Azure IoT Edge on the host computer
-To deploy the spatial analysis container on the host computer, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier. If your host computer is an Azure Stack Edge, use the same subscription and resource group that is used by the Azure Stack Edge resource.
+To deploy the spatial analysis container on the host computer, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters where appropriate. Alternatively, you can create the Azure IoT Hub on the [Azure portal](https://portal.azure.com/).
sudo az iot hub create --name "test-iot-hub-123" --sku S1 --resource-group "test-resource-group"

sudo az iot hub device-identity create --hub-name "test-iot-hub-123" --device-id "my-edge-device" --edge-enabled
```
-If the host computer isn't an Azure Stack Edge device, you will need to install [Azure IoT Edge](../../iot-edge/how-to-install-iot-edge.md) version 1.0.9. Follow these steps to download the correct version:
+You will need to install [Azure IoT Edge](../../iot-edge/how-to-install-iot-edge.md) version 1.0.9. Follow these steps to download the correct version:
Ubuntu Server 18.04:

```bash
sudo apt-get install -y docker-ce nvidia-docker2
sudo systemctl restart docker
```
-Now that you have set up and configured your VM, follow the steps below to deploy the spatial analysis container.
+Now that you have set up and configured your VM, follow the steps below to configure Azure IoT Edge.
+
+## Configure Azure IoT Edge on the VM
+
+To deploy the spatial analysis container on the VM, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
+
+Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters where appropriate. Alternatively, you can create the Azure IoT Hub on the [Azure portal](https://portal.azure.com/).
+
+```bash
+curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+sudo az login
+sudo az account set --subscription <name or ID of Azure Subscription>
+sudo az group create --name "test-resource-group" --location "WestUS"
+
+sudo az iot hub create --name "test-iot-hub-123" --sku S1 --resource-group "test-resource-group"
+
+sudo az iot hub device-identity create --hub-name "test-iot-hub-123" --device-id "my-edge-device" --edge-enabled
+```
+
+You will need to install [Azure IoT Edge](../../iot-edge/how-to-install-iot-edge.md) version 1.0.9. Follow these steps to download the correct version:
+
+Ubuntu Server 18.04:
+```bash
+curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
+```
+
+Copy the generated list.
+```bash
+sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
+```
+
+Install the Microsoft GPG public key.
+
+```bash
+curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
+sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
+```
+
+Update the package lists on your device.
+
+```bash
+sudo apt-get update
+```
+
+Install the 1.0.9 release:
+
+```bash
+sudo apt-get install iotedge=1.0.9* libiothsm-std=1.0.9*
+```
+
+Next, register the VM as an IoT Edge device in your IoT Hub instance, using a [connection string](../../iot-edge/how-to-manual-provision-symmetric-key.md?view=iotedge-2018-06).
+
+Connect the IoT Edge device to your Azure IoT Hub by using the connection string from the IoT Edge device you created earlier. Alternatively, you can run the following command in the Azure CLI to retrieve it.
+
+```bash
+sudo az iot hub device-identity show-connection-string --device-id my-edge-device --hub-name test-iot-hub-123
+```
+
+On the VM, open `/etc/iotedge/config.yaml` for editing. Replace `ADD DEVICE CONNECTION STRING HERE` with the connection string. Save and close the file.
+Run this command to restart the IoT Edge service on the VM.
+
+```bash
+sudo systemctl restart iotedge
+```
+
+Deploy the spatial analysis container as an IoT Module on the VM, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows). If you're using the portal, set the image URI to the location of your Azure Container Registry.
+
+Use the following steps to deploy the container by using the Azure CLI.
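One possible shape for that CLI deployment, assuming you have authored a deployment manifest (here called `deployment.json`) that references the spatial analysis module and your container registry, is sketched below.

```bash
# Hedged sketch: push a deployment manifest to the IoT Edge device with the Azure CLI.
# "deployment.json" is a hypothetical manifest that lists the spatial analysis module.
az extension add --name azure-iot
az iot edge set-modules \
  --hub-name "test-iot-hub-123" \
  --device-id "my-edge-device" \
  --content ./deployment.json
```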
In this article, you learned concepts and workflow for downloading, installing,
* [Configure spatial analysis operations](spatial-analysis-operations.md) * [Logging and troubleshooting](spatial-analysis-logging.md) * [Camera placement guide](spatial-analysis-camera-placement.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/encrypt-data-at-rest.md
Azure Custom Vision automatically encrypts your data when persisted to the cloud.
> [!IMPORTANT]
> Customer-managed keys are only available for resources created after May 11, 2020. To use CMK with Custom Vision, you will need to create a new Custom Vision resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
-## Regional availability
-
-Customer-managed keys are currently available in these regions:
-
-* US South Central
-* West US 2
-* East US
-* US Gov Virginia
- [!INCLUDE [cognitive-services-cmk](../includes/configure-customer-managed-keys.md)] ## Next steps
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Face/APIReference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/APIReference.md
+
+ Title: API Reference - Face
+
+description: API reference provides information about the Person, LargePersonGroup/PersonGroup, LargeFaceList/FaceList, and Face Algorithms APIs.
+++++++ Last updated : 02/17/2021+++
+# Face API reference list
+
+Azure Face is a cloud-based service that provides algorithms for face detection and recognition. The Face APIs comprise the following categories:
+
+- Face Algorithm APIs: Cover core functions such as [Detection](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237), [Verification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), and [Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
+- [FaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b): Used to manage a FaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
+- [LargePersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adcba3a7b9412a4d53f40): Used to manage LargePersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [LargePersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d): Used to manage a LargePersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [LargeFaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc): Used to manage a LargeFaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
+- [PersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c): Used to manage PersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [PersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244): Used to manage a PersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [Snapshot APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-take): Used to manage a Snapshot for data migration across subscriptions.
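To give a concrete sense of the Face Algorithm APIs, here is a hedged curl sketch of a Detection call; the endpoint, key, and image URL are placeholders, and the query parameters you pass depend on which face attributes you want returned.

```bash
# Hypothetical example of calling the Face - Detect operation.
curl -X POST "https://<your-resource-name>.cognitiveservices.azure.com/face/v1.0/detect?returnFaceId=true" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://example.com/photo-with-faces.jpg" }'
```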
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Face/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/encrypt-data-at-rest.md
# Face service encryption of data at rest
-The Face service automatically encrypts your data when persisted it to the cloud. The Face service encryption protects your data and to help you to meet your organizational security and compliance commitments.
+The Face service automatically encrypts your data when persisted to the cloud. The Face service encryption protects your data and helps you to meet your organizational security and compliance commitments.
[!INCLUDE [cognitive-services-about-encryption](../includes/cognitive-services-about-encryption.md)]
The Face service automatically encrypts your data when persisted it to the cloud
* For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
* [What is Azure Key Vault](../../key-vault/general/overview.md)?
-* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
Audio with human-labeled transcripts offers the greatest accuracy improvements i
Consider these details:
-* Custom Speech can only capture word context to reduce substitution errors, not insertion or deletion errors.
+* Training with audio will bring the most benefits if the audio is also hard for humans to understand. In most cases, you should start training by just using related text.
+* If you use one of the most heavily used languages, such as US English, there's a good chance that there's no need to train with audio data. For such languages, the base models already offer very good recognition results in most scenarios; it's probably enough to train with related text.
+* Custom Speech can only capture word context to reduce substitution errors, not insertion or deletion errors.
* Avoid samples that include transcription errors, but do include a diversity of audio quality.
* Avoid sentences that are not related to your problem domain. Unrelated sentences can harm your model.
* When the quality of transcripts varies, you can duplicate exceptionally good sentences (like excellent transcriptions that include key phrases) to increase their weight.
* The Speech service will automatically use the transcripts to improve the recognition of domain-specific words and phrases, as if they were added as related text.
-* Training with audio will bring the most benefits if the audio is also hard to understand for humans. In most cases, you should start training by just using related text.
* It can take several days for a training operation to complete. To improve the speed of training, make sure to create your Speech service subscription in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training. > [!NOTE]
-> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
+> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data. Even if a base model supports training with audio data, the service might use only part of the audio; however, it will still use all of the transcripts.
> [!NOTE]
> In cases when you change the base model used for training, and you have audio in the training dataset, *always* check whether the newly selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model did not support training with audio data, and the training dataset contains audio, training time with the new base model will **drastically** increase, and may easily go from several hours to several days or more. This is especially true if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Files should be grouped by type into a dataset and uploaded as a .zip file. Each
> To quickly get started, consider using sample data. See this GitHub repository for <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">sample Custom Speech data <span class="docon docon-navigate-external x-hidden-focus"></span></a> > [!NOTE]
-> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
+> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data. Even if a base model supports training with audio data, the service might use only part of the audio; however, it will still use all of the transcripts.
> [!NOTE]
> In cases when you change the base model used for training, and you have audio in the training dataset, *always* check whether the newly selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model did not support training with audio data, and the training dataset contains audio, training time with the new base model will **drastically** increase, and may easily go from several hours to several days or more. This is especially true if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/spx-setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/spx-setup.md
Follow these steps to install the Speech CLI on Windows:

1. On Windows, you need the [Microsoft Visual C++ Redistributable for Visual Studio 2019](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads) for your platform. Installing this for the first time may require a restart.
-1. Install [.NET Core 3.1](/dotnet/core/install/linux).
+1. Install [.NET Core 3.1 SDK](/dotnet/core/install/linux).
2. Install the Speech CLI using NuGet by entering this command:
- `dotnet tool install --global Microsoft.CognitiveServices.Speech.CLI --version 1.15.0`
-
+ ```console
+ dotnet tool install --global Microsoft.CognitiveServices.Speech.CLI --version 1.15.0
+ ```
Type `spx` to see help for the Speech CLI. > [!NOTE]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Language | Locale (BCP-47) | Customizations | [Language detection](how-to-automatic-language-detection.md) |
|--|--|--|--|
-| Arabic (Bahrain), modern standard | `ar-BH` | Language model | Yes |
-| Arabic (Egypt) | `ar-EG` | Language model | Yes |
-| Arabic (Iraq) | `ar-IQ` | Language model | |
-| Arabic (Israel) | `ar-IL` | Language model | |
-| Arabic (Jordan) | `ar-JO` | Language model | |
-| Arabic (Kuwait) | `ar-KW` | Language model | |
-| Arabic (Lebanon) | `ar-LB` | Language model | |
-| Arabic (Oman) | `ar-OM` | Language model | |
-| Arabic (Qatar) | `ar-QA` | Language model | |
-| Arabic (Saudi Arabia) | `ar-SA` | Language model | Yes |
-| Arabic (State of Palestine) | `ar-PS` | Language model | |
-| Arabic (Syria) | `ar-SY` | Language model | Yes |
-| Arabic (United Arab Emirates) | `ar-AE` | Language model | |
-| Bulgarian (Bulgaria) | `bg-BG` | Language model | |
-| Catalan (Spain) | `ca-ES` | Language model | Yes |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Acoustic model (20201015)<br>Language model | Yes |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Acoustic model (20200910)<br>Language model | Yes |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Acoustic model (20190701, 20201015)<br>Language model | Yes |
-| Croatian (Croatia) | `hr-HR` | Language model | |
-| Czech (Czech Republic) | `cs-CZ` | Language Model | |
-| Danish (Denmark) | `da-DK` | Language model | Yes |
-| Dutch (Netherlands) | `nl-NL` | Acoustic model (20201015)<br>Language model | Yes |
-| English (Australia) | `en-AU` | Acoustic model (20201019)<br>Language model | Yes |
-| English (Canada) | `en-CA` | Acoustic model (20201019)<br>Language model | Yes |
-| English (Hong Kong) | `en-HK` | Language Model | |
-| English (India) | `en-IN` | Acoustic model (20200923)<br>Language model | Yes |
-| English (Ireland) | `en-IE` | Language Model | |
-| English (New Zealand) | `en-NZ` | Acoustic model (20201019)<br>Language model | Yes |
-| English (Nigeria) | `en-NG` | Language Model | |
-| English (Philippines) | `en-PH` | Language Model | |
-| English (Singapore) | `en-SG` | Language Model | |
-| English (South Africa) | `en-ZA` | Language Model | |
-| English (United Kingdom) | `en-GB` | Acoustic model (20201019)<br>Language model<br>Pronunciation| Yes |
-| English (United States) | `en-US` | Acoustic model (20201019)<br>Language model<br>Pronunciation| Yes |
-| Estonian(Estonia) | `et-EE` | Language Model | |
-| Finnish (Finland) | `fi-FI` | Language model | Yes |
-| French (Canada) | `fr-CA` | Acoustic model (20201015)<br>Language model | Yes |
-| French (France) | `fr-FR` | Acoustic model (20201015)<br>Language model<br>Pronunciation| Yes |
-| German (Germany) | `de-DE` | Acoustic model (20190701, 20200619, 20201127)<br>Language model<br>Pronunciation| Yes |
-| Greek (Greece) | `el-GR` | Language model | |
-| Gujarati (Indian) | `gu-IN` | Language model | |
-| Hindi (India) | `hi-IN` | Acoustic model (20200701)<br>Language model | Yes |
-| Hungarian (Hungary) | `hu-HU` | Language Model | |
-| Irish(Ireland) | `ga-IE` | Language model | |
-| Italian (Italy) | `it-IT` | Acoustic model (20201016)<br>Language model<br>Pronunciation| Yes |
-| Japanese (Japan) | `ja-JP` | Language model | Yes |
-| Korean (Korea) | `ko-KR` | Acoustic model (20201015)<br>Language model | Yes |
-| Latvian (Latvia) | `lv-LV` | Language model | |
-| Lithuanian (Lithuania) | `lt-LT` | Language model | |
-| Maltese(Malta) | `mt-MT` | Language model | |
-| Marathi (India) | `mr-IN` | Language model | |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Language model | Yes |
-| Polish (Poland) | `pl-PL` | Language model | Yes |
-| Portuguese (Brazil) | `pt-BR` | Acoustic model (20190620, 20201015)<br>Language model<br>Pronunciation| Yes |
-| Portuguese (Portugal) | `pt-PT` | Language model | Yes |
-| Romanian (Romania) | `ro-RO` | Language model | |
-| Russian (Russia) | `ru-RU` | Acoustic model (20200907)<br>Language model | Yes |
-| Slovak (Slovakia) | `sk-SK` | Language model | |
-| Slovenian (Slovenia) | `sl-SI` | Language model | |
-| Spanish (Argentina) | `es-AR` | Language Model | |
-| Spanish (Bolivia) | `es-BO` | Language Model | |
-| Spanish (Chile) | `es-CL` | Language Model | |
-| Spanish (Colombia) | `es-CO` | Language Model | |
-| Spanish (Costa Rica) | `es-CR` | Language Model | |
-| Spanish (Cuba) | `es-CU` | Language Model | |
-| Spanish (Dominican Republic) | `es-DO` | Language Model | |
-| Spanish (Ecuador) | `es-EC` | Language Model | |
-| Spanish (El Salvador) | `es-SV` | Language Model | |
-| Spanish (Equatorial Guinea) | `es-GQ` | Language Model | |
-| Spanish (Guatemala) | `es-GT` | Language Model | |
-| Spanish (Honduras) | `es-HN` | Language Model | |
-| Spanish (Mexico) | `es-MX` | Acoustic model (20200907)<br>Language model | Yes |
-| Spanish (Nicaragua) | `es-NI` | Language Model | |
-| Spanish (Panama) | `es-PA` | Language Model | |
-| Spanish (Paraguay) | `es-PY` | Language Model | |
-| Spanish (Peru) | `es-PE` | Language Model | |
-| Spanish (Puerto Rico) | `es-PR` | Language Model | |
-| Spanish (Spain) | `es-ES` | Acoustic model (20201015)<br>Language model | Yes |
-| Spanish (Uruguay) | `es-UY` | Language Model | |
-| Spanish (USA) | `es-US` | Language Model | |
-| Spanish (Venezuela) | `es-VE` | Language Model | |
-| Swedish (Sweden) | `sv-SE` | Language model | Yes |
-| Tamil (India) | `ta-IN` | Language model | |
-| Telugu (India) | `te-IN` | Language model | |
-| Thai (Thailand) | `th-TH` | Language model | Yes |
-| Turkish (Turkey) | `tr-TR` | Language model | |
+| Arabic (Bahrain), modern standard | `ar-BH` | Text | Yes |
+| Arabic (Egypt) | `ar-EG` | Text | Yes |
+| Arabic (Iraq) | `ar-IQ` | Text | |
+| Arabic (Israel) | `ar-IL` | Text | |
+| Arabic (Jordan) | `ar-JO` | Text | |
+| Arabic (Kuwait) | `ar-KW` | Text | |
+| Arabic (Lebanon) | `ar-LB` | Text | |
+| Arabic (Oman) | `ar-OM` | Text | |
+| Arabic (Qatar) | `ar-QA` | Text | |
+| Arabic (Saudi Arabia) | `ar-SA` | Text | Yes |
+| Arabic (State of Palestine) | `ar-PS` | Text | |
+| Arabic (Syria) | `ar-SY` | Text | Yes |
+| Arabic (United Arab Emirates) | `ar-AE` | Text | |
+| Bulgarian (Bulgaria) | `bg-BG` | Text | |
+| Catalan (Spain) | `ca-ES` | Text | Yes |
+| Chinese (Cantonese, Traditional) | `zh-HK` | Audio (20201015)<br>Text | Yes |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Audio (20200910)<br>Text | Yes |
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Audio (20190701, 20201015)<br>Text | Yes |
+| Croatian (Croatia) | `hr-HR` | Text | |
+| Czech (Czech Republic) | `cs-CZ` | Text | |
+| Danish (Denmark) | `da-DK` | Text | Yes |
+| Dutch (Netherlands) | `nl-NL` | Audio (20201015)<br>Text | Yes |
+| English (Australia) | `en-AU` | Audio (20201019)<br>Text | Yes |
+| English (Canada) | `en-CA` | Audio (20201019)<br>Text | Yes |
+| English (Hong Kong) | `en-HK` | Text | |
+| English (India) | `en-IN` | Audio (20200923)<br>Text | Yes |
+| English (Ireland) | `en-IE` | Text | |
+| English (New Zealand) | `en-NZ` | Audio (20201019)<br>Text | Yes |
+| English (Nigeria) | `en-NG` | Text | |
+| English (Philippines) | `en-PH` | Text | |
+| English (Singapore) | `en-SG` | Text | |
+| English (South Africa) | `en-ZA` | Text | |
+| English (United Kingdom) | `en-GB` | Audio (20201019)<br>Text<br>Pronunciation| Yes |
+| English (United States) | `en-US` | Audio (20201019)<br>Text<br>Pronunciation| Yes |
+| Estonian (Estonia) | `et-EE` | Text | |
+| Finnish (Finland) | `fi-FI` | Text | Yes |
+| French (Canada) | `fr-CA` | Audio (20201015)<br>Text | Yes |
+| French (France) | `fr-FR` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
+| German (Germany) | `de-DE` | Audio (20190701, 20200619, 20201127)<br>Text<br>Pronunciation| Yes |
+| Greek (Greece) | `el-GR` | Text | |
+| Gujarati (India) | `gu-IN` | Text | |
+| Hindi (India) | `hi-IN` | Audio (20200701)<br>Text | Yes |
+| Hungarian (Hungary) | `hu-HU` | Text | |
+| Irish (Ireland) | `ga-IE` | Text | |
+| Italian (Italy) | `it-IT` | Audio (20201016)<br>Text<br>Pronunciation| Yes |
+| Japanese (Japan) | `ja-JP` | Text | Yes |
+| Korean (Korea) | `ko-KR` | Audio (20201015)<br>Text | Yes |
+| Latvian (Latvia) | `lv-LV` | Text | |
+| Lithuanian (Lithuania) | `lt-LT` | Text | |
+| Maltese (Malta) | `mt-MT` | Text | |
+| Marathi (India) | `mr-IN` | Text | |
+| Norwegian (Bokmål, Norway) | `nb-NO` | Text | Yes |
+| Polish (Poland) | `pl-PL` | Text | Yes |
+| Portuguese (Brazil) | `pt-BR` | Audio (20190620, 20201015)<br>Text<br>Pronunciation| Yes |
+| Portuguese (Portugal) | `pt-PT` | Text | Yes |
+| Romanian (Romania) | `ro-RO` | Text | |
+| Russian (Russia) | `ru-RU` | Audio (20200907)<br>Text | Yes |
+| Slovak (Slovakia) | `sk-SK` | Text | |
+| Slovenian (Slovenia) | `sl-SI` | Text | |
+| Spanish (Argentina) | `es-AR` | Text | |
+| Spanish (Bolivia) | `es-BO` | Text | |
+| Spanish (Chile) | `es-CL` | Text | |
+| Spanish (Colombia) | `es-CO` | Text | |
+| Spanish (Costa Rica) | `es-CR` | Text | |
+| Spanish (Cuba) | `es-CU` | Text | |
+| Spanish (Dominican Republic) | `es-DO` | Text | |
+| Spanish (Ecuador) | `es-EC` | Text | |
+| Spanish (El Salvador) | `es-SV` | Text | |
+| Spanish (Equatorial Guinea) | `es-GQ` | Text | |
+| Spanish (Guatemala) | `es-GT` | Text | |
+| Spanish (Honduras) | `es-HN` | Text | |
+| Spanish (Mexico) | `es-MX` | Audio (20200907)<br>Text | Yes |
+| Spanish (Nicaragua) | `es-NI` | Text | |
+| Spanish (Panama) | `es-PA` | Text | |
+| Spanish (Paraguay) | `es-PY` | Text | |
+| Spanish (Peru) | `es-PE` | Text | |
+| Spanish (Puerto Rico) | `es-PR` | Text | |
+| Spanish (Spain) | `es-ES` | Audio (20201015)<br>Text | Yes |
+| Spanish (Uruguay) | `es-UY` | Text | |
+| Spanish (USA) | `es-US` | Text | |
+| Spanish (Venezuela) | `es-VE` | Text | |
+| Swedish (Sweden) | `sv-SE` | Text | Yes |
+| Tamil (India) | `ta-IN` | Text | |
+| Telugu (India) | `te-IN` | Text | |
+| Thai (Thailand) | `th-TH` | Text | Yes |
+| Turkish (Turkey) | `tr-TR` | Text | |
## Text-to-speech
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/swagger-documentation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/swagger-documentation.md
Title: Swagger documentation - Speech service
description: The Swagger documentation can be used to auto-generate SDKs for a number of programming languages. All operations in our service are supported by Swagger -+ Previously updated : 07/05/2019- Last updated : 02/16/2021+ # Swagger documentation
-The Speech service offers a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through the Custom Speech portal can be completed programmatically using these APIs.
+Speech service offers a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through [the Custom Speech area of the Speech Studio](https://aka.ms/customspeech) can be completed programmatically using these APIs.
> [!NOTE]
-> Both Speech-to-Text and Text-to-Speech operations are supported available as REST APIs, which are in turn documented in the Swagger specification.
+> Speech service has several REST APIs for [Speech-to-text](rest-speech-to-text.md) and [Text-to-speech](rest-text-to-speech.md).
+>
+> However, only [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) and v2.0 are documented in the Swagger specification. See the documents referenced in the previous paragraph for information on all other Speech service REST APIs.
## Generating code from the Swagger specification

The [Swagger specification](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) has options that allow you to quickly test for various paths. However, sometimes it's desirable to generate code for all paths, creating a single library of calls that you can base future solutions on. Let's take a look at the process to generate a Python library.
-You'll need to set Swagger to the same region as your Speech service subscription. You can confirm your region in the Azure portal under your Speech service resource. For a complete list of supported regions, see [regions](regions.md).
+You'll need to set Swagger to the region of your Speech resource. You can confirm the region in the **Overview** part of your Speech resource settings in the Azure portal. For the complete list of supported regions, see [Speech service regions](regions.md#speech-to-text).
-1. In a browser, go to the Swagger specification for your region:
+1. In a browser, go to the Swagger specification for your [region](regions.md#speech-to-text):
   `https://<your-region>.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0`
1. On that page, click **API definition**, and click **Swagger**. Copy the URL of the page that appears.
-1. In a new browser, go to https://editor.swagger.io
+1. In a new browser, go to [https://editor.swagger.io](https://editor.swagger.io)
1. Click **File**, click **Import URL**, paste the URL, and click **OK**.
1. Click **Generate Client** and select **python**. The client library downloads to your computer in a `.zip` file.
1. Extract everything from the download. You might use `tar -xf` to extract everything.
1. Install the extracted module into your Python environment:
- `pip install path/to/package/python-client`
-1. The installed package is named `swagger_client`. Check that the installation worked:
+ `pip install path/to/package/python-client`
+1. The installed package is named `swagger_client`. Check that the installation has worked:
   `python -c "import swagger_client"`

You can use the Python library that you generated with the [Speech service samples on GitHub](https://aka.ms/csspeech/samples).
-## Reference docs
+## Reference documents
-* [REST (Swagger): Batch transcription and customization](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
-* [REST API: Speech-to-text](rest-speech-to-text.md)
-* [REST API: Text-to-speech](rest-text-to-speech.md)
+* [Swagger: Speech-to-text REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
+* [Speech-to-text REST API](rest-speech-to-text.md)
+* [Text-to-speech REST API](rest-text-to-speech.md)
## Next steps
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/document-translation/create-sas-tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/create-sas-tokens.md
+
+ Title: Create shared access signature (SAS) token for containers and blobs with Microsoft Storage Explorer
+description: How to create a shared access signature (SAS) token for containers and blobs with Microsoft Storage Explorer and the Azure portal
++++ Last updated : 02/11/2021++
+# Create SAS tokens for Document Translation
+
+In this article, you'll learn how to create shared access signature (SAS) tokens using the Azure Storage Explorer or the Azure portal. An SAS token provides secure, delegated access to resources in your Azure storage account.
+
+## Create SAS tokens with Azure Storage Explorer
+
+### Prerequisites
+
+* You'll need the [**Azure Storage Explorer**](/azure/vs-azure-tools-storage-manage-with-storage-explorer) app installed in your Windows, macOS, or Linux development environment. Azure Storage Explorer is a free tool that enables you to easily manage your Azure cloud storage resources.
+* After the Azure Storage Explorer app is installed, [connect it to the storage account](/azure/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows#connect-to-a-storage-account-or-service) you're using for Document Translation.
+
+### Create your tokens
+
+### [SAS tokens for containers](#tab/Containers)
+
+1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**.
+1. Expand the Storage Accounts node and select **Blob Containers**.
+1. Expand the Blob Containers node and right-click a storage **container** node to display the options menu.
+1. Select **Get Shared Access Signature...** from the options menu.
+1. In the **Shared Access Signature** window, make the following selections:
+ * Select your **Access policy** (the default is none).
+ * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, an SAS can't be revoked.
+ * Select the **Time zone** for the Start and Expiry date and time (default is Local).
+ * Define your container **Permissions** by checking and/or clearing the appropriate check box.
+ * Review and select **Create**.
+
+1. A new window will appear with the **Container** name, **URI**, and **Query string** for your container.
+1. **Copy and paste the container, URI, and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.**
+1. To construct an SAS URL, append the SAS token (URI) to the URL for a storage service.
+
+### [SAS tokens for blobs](#tab/blobs)
+
+1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**.
+1. Expand your storage node and select **Blob Containers**.
+1. Expand the Blob Containers node and select a **container** node to display the contents in the main window.
+1. Select the blob where you wish to delegate SAS access and right-click to display the options menu.
+1. Select **Get Shared Access Signature...** from the options menu.
+1. In the **Shared Access Signature** window, make the following selections:
+ * Select your **Access policy** (the default is none).
+ * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, an SAS can't be revoked.
+ * Select the **Time zone** for the Start and Expiry date and time (default is Local).
+ * Define your container **Permissions** by checking and/or clearing the appropriate check box.
+ * Review and select **Create**.
+1. A new window will appear with the **Blob** name, **URI**, and **Query string** for your blob.
+1. **Copy and paste the blob, URI, and query string values in a secure location. They will only be displayed once and cannot be retrieved once the window is closed.**
+1. To construct an SAS URL, append the SAS token (URI) to the URL for a storage service.
+++
+## Create SAS tokens for blobs in the Azure portal
+
+> [!NOTE]
+> Creating SAS tokens for containers directly in the Azure portal is currently not supported. However, you can create an SAS token with [**Azure Storage Explorer**](#create-sas-tokens-with-azure-storage-explorer) or complete the task [programmatically](/azure/storage/blobs/sas-service-create).
+
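As one programmatic route for container-level tokens, the Azure CLI can generate a SAS. The sketch below assumes an existing storage account and container and uses a user delegation key; all names and the expiry date are placeholders.

```bash
# Hedged sketch: generate a read/list SAS token for a container with the Azure CLI.
az storage container generate-sas \
  --account-name "mystorageaccount" \
  --name "source-container" \
  --permissions rl \
  --expiry "2021-03-01T00:00Z" \
  --as-user \
  --auth-mode login \
  --output tsv
```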
+<!-- markdownlint-disable MD024 -->
+### Prerequisites
+
+To get started, you'll need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+* A [**Translator**](https://ms.portal.azure.com/#create/Microsoft) service resource (**not** a Cognitive Services multi-service resource). *See* [Create a new Azure resource](../../cognitive-services-apis-create-account.md#create-a-new-azure-cognitive-services-resource).
+* An [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM). All access to Azure Storage takes place through a storage account.
+
+### Create your tokens
+
+Go to the [Azure portal](https://ms.portal.azure.com/#home) and navigate as follows:
+
+ **Your storage account** → **containers** → **your container** → **your blob**
+
+1. Select **Generate SAS** from the menu near the top of the page.
+
+1. Select **Signing method** → **User delegation key**.
+
+1. Define **Permissions** by checking and/or clearing the appropriate check box.
+
+1. Specify the signed key **Start** and **Expiry** times.
+
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, it won't be authorized.
+
+1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS.
+
+1. Review then select **Generate SAS token and URL**.
+
+1. The **Blob SAS token** query string and **Blob SAS URL** will be displayed in the lower area of the window.
+
+1. **Copy and paste the Blob SAS token and URL values in a secure location. They'll only be displayed once and cannot be retrieved once the window is closed.**
+
+1. To construct an SAS URL, append the SAS token (URI) to the URL for a storage service.
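For example, the appended form looks like the sketch below; the storage account, container, blob, and token values are all illustrative.

```bash
# Illustrative only: an SAS URL is the resource URL plus the SAS token as a query string.
BLOB_URL="https://mystorageaccount.blob.core.windows.net/source-container/document.docx"
SAS_TOKEN="sv=2020-08-04&sr=b&sp=rl&se=2021-03-01T00%3A00%3A00Z&sig=<signature>"

echo "${BLOB_URL}?${SAS_TOKEN}"
```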
+
+## Learn more
+
+* [Create SAS tokens for blobs or containers programmatically](/azure/storage/blobs/sas-service-create)
+* [Permissions for a directory, container, or blob](/rest/api/storageservices/create-service-sas#permissions-for-a-directory-container-or-blob)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get Started with Document Translation](get-started-with-document-translation.md)
+>
+>
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/document-translation/get-started-with-document-translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
+
+ Title: Get started with Document Translation
+description: How to create a Document Translation service using C#, Go, Java, Node.js, or Python programming languages and platforms
++++ Last updated : 02/11/2021++
+# Get started with Document Translation (Preview)
+
+ In this article, you'll learn to use Document Translation with HTTP REST API methods. Document Translation is a cloud-based feature of the [Azure Translator](../translator-info-overview.md) service. The Document Translation API enables the translation of whole documents while preserving source document structure and text formatting.
+
+## Prerequisites
+
+To get started, you'll need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**Translator**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) service resource (**not** a Cognitive Services resource).
+
+* An [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM). All access to Azure Storage takes place through a storage account.
+
+> [!NOTE]
+> Document Translation is currently only supported in the Translator (single-service) resource, **not** the Cognitive Services (multi-service) resource.
+
+## Get your custom domain name and subscription key
+
+> [!IMPORTANT]
+>
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+
+### What is the custom domain endpoint?
+
+The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories:
+
+```http
+https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1
+```
+
+### Find your custom domain name
+
+The **NAME-OF-YOUR-RESOURCE** (also called *custom domain name*) parameter is the value that you entered in the **Name** field when you created your Translator resource.
++
+### Get your subscription key
+
+Requests to the Translator service require a read-only key for authenticating access.
+
+1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+1. Copy and paste your subscription key in a convenient location, such as *Microsoft Notepad*.
+1. You'll paste it into the code below to authenticate your request to the Document Translation service.
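One convenient way to keep these values out of your source files while you experiment is to export them as environment variables; the variable names below are arbitrary, not required by the service.

```bash
# Hypothetical variable names; keep real keys out of code and version control.
export TRANSLATOR_ENDPOINT="https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1"
export TRANSLATOR_KEY="<your-subscription-key>"
```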
++
+## Create your Azure blob storage containers
+
+You'll need to [**create containers**](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) in your [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source, target, and optional glossary files.
+
+* **Source container**. This container is where you upload your files for translation (required).
+* **Target container**. This container is where your translated files will be stored (required).
+* **Glossary container**. This container is where you upload your glossary files (optional).
+
+*See* **Create SAS access tokens for Document Translation**
+
+The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs.
+
+* Your **source** container or blob must have designated **read** and **list** access.
+* Your **target** container or blob must have designated **write** and **list** access.
+* Your **glossary** container or blob must have designated **read** and **list** access.
+
+> [!TIP]
+>
+> * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**.
+> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
+>
+
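If you'd rather script the container setup than click through the portal, one way to create the three containers is with the Azure CLI; the storage account and container names below are placeholders.

```bash
# Hedged sketch: create source, target, and glossary containers in an existing storage account.
az storage container create --account-name "mystorageaccount" --name "source" --auth-mode login
az storage container create --account-name "mystorageaccount" --name "target" --auth-mode login
az storage container create --account-name "mystorageaccount" --name "glossary" --auth-mode login
```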
+## Set up your coding platform
+
+### [C#](#tab/csharp)
+
+* Create a new project.
+* Replace Program.cs with the C# code shown below.
+* Set your endpoint, subscription key, and container URL values in Program.cs.
+* To process JSON data, add [Newtonsoft.Json package using .NET CLI](https://www.nuget.org/packages/Newtonsoft.Json/).
+* Run the program from the project directory.
+
+### [Node.js](#tab/javascript)
+
+* Create a new Node.js project.
+* Install the Axios library with `npm i axios`.
+* Copy and paste the code below into your project.
+* Set your endpoint, subscription key, and container URL values.
+* Run the program.
+
+### [Python](#tab/python)
+
+* Create a new project.
+* Copy and paste the code from one of the samples into your project.
+* Set your endpoint, subscription key, and container URL values.
+* Run the program. For example: `python translate.py`.
+
+### [Java](#tab/java)
+
+* Create a working directory for your project. For example:
+
+```powershell
+mkdir sample-project
+```
+
+* In your project directory, create the following subdirectory structure:
+
+ src</br>
+&emsp; └ main</br>
+&emsp;&emsp;&emsp;└ java
+
+```powershell
+mkdir -p src/main/java/
+```
+
+**NOTE**: Java source files (for example, _sample.java_) live in src/main/**java**.
+
+* In your root directory (for example, *sample-project*), initialize your project with Gradle:
+
+```powershell
+gradle init --type basic
+```
+
+* When prompted to choose a **DSL**, select **Kotlin**.
+
+* Update the `build.gradle.kts` file. Keep in mind that you'll need to update your `mainClassName` depending on the sample:
+
+ ```java
+ plugins {
+ java
+ application
+ }
+ application {
+ mainClassName = "{NAME OF YOUR CLASS}"
+ }
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+ compile("com.squareup.okhttp:okhttp:2.5.0")
+ }
+ ```
+
+* Create a Java file in the **java** directory and copy/paste the code from the provided sample. Don't forget to add your subscription key and endpoint.
+
+* **Build and run the sample from the root directory**:
+
+```powershell
+gradle build
+gradle run
+```
+
+### [Go](#tab/go)
+
+* Create a new Go project.
+* Add the code provided below.
+* Set your endpoint, subscription key, and container URL values.
+* Save the file with a `.go` extension.
+* Open a command prompt on a computer with Go installed.
+* Build the file. For example: `go build example-code.go`.
+* Run the file. For example: `example-code`.
+
+
+
+## Make Document Translation requests
+
+A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service.
+
+### HTTP headers
+
+The following headers are included with each Document Translation API request:
+
+|HTTP header|Description|
+|---|---|
+|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure subscription key for your Translator or Cognitive Services resource.|
+|Content-Type|**Required**: Specifies the content type of the payload. The accepted value is `application/json` (optionally with `charset=UTF-8`).|
+|Content-Length|**Required**: The length of the request body.|
+
+### POST request body properties
+
+* The POST request body is a JSON object named `inputs`.
+* The `inputs` object contains both `sourceUrl` and `targetUrl` container addresses for your source and target language pairs, and can optionally contain a `glossaryUrl` container address.
+* The optional `prefix` and `suffix` fields are used to filter documents in the container, including folders.
+* A value for the `glossaries` field (optional) is applied when the document is being translated.
+* The `targetUrl` for each target language must be unique.
+
+>[!NOTE]
+> If a file with the same name already exists in the destination, it will be overwritten.
+
+### POST a translation request
+
+> [!IMPORTANT]
+>
+> * For the code samples below, you may need to update the following fields, depending on the operation:
+
+>> [!div class="checklist"]
+>>
+>> * `endpoint`
+>> * `subscriptionKey`
+>> * `sourceURL`
+>> * `targetURL`
+>> * `glossaryURL`
+>> * `id` (job ID)
+>>
+> * You can find the job `id` in the POST method's `Operation-Location` response header URL value. The last parameter of the URL is the operation's job **`id`** (see the sketch after this note).
+> * You can also use a GET Jobs request to retrieve the job `id` for a Document Translation operation.
+> * For the samples below, you'll hard-code your key and endpoint where indicated; remember to remove the key from your code when you're done, and never post it publicly.
+>
+> See [Azure Cognitive Services security](/azure/cognitive-services/cognitive-services-security?tabs=command-line%2Ccsharp) for ways to securely store and access your credentials.
+
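+For example, building on the Python *POST Document Translation* sample later in this article, you could capture the job `id` from the `Operation-Location` response header. This is a minimal sketch; it assumes `constructed_url`, `headers`, and `payload` are defined as in that Python sample:
+
+```python
+import requests
+
+# Submit the batch translation request (see the Python POST sample below for the full setup).
+response = requests.post(constructed_url, headers=headers, json=payload)
+
+# The Operation-Location header contains a URL whose last segment is the job id.
+operation_location = response.headers["Operation-Location"]
+job_id = operation_location.rstrip("/").split("/")[-1]
+print(f"job id: {job_id}")
+```
+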
+<!-- markdownlint-disable MD024 -->
+### POST request body without optional glossaryURL
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "<https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS>",
+ "storageSource": "AzureBlob",
+ "filter": {
+ "prefix": "News",
+ "suffix": ".txt"
+ },
+ "language": "en"
+ },
+ "targets": [
+ {
+          "targetUrl": "<https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS>",
+ "storageSource": "AzureBlob",
+ "category": "general",
+ "language": "de"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### POST request body with optional glossaryURL
+
+```json
+{
+ "inputs":[
+ {
+ "source":{
+ "sourceUrl":"<https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS>",
+ "storageSource":"AzureBlob",
+ "filter":{
+ "prefix":"News",
+ "suffix":".txt"
+ },
+ "language":"en"
+ },
+ "targets":[
+ {
+               "targetUrl":"<https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS>",
+ "storageSource":"AzureBlob",
+ "category":"general",
+ "language":"de",
+ "glossaries":[
+ {
+ "glossaryUrl":"<https://YOUR-GLOSSARY-URL-WITH-READ-LIST-ACCESS-SAS>",
+ "format":"xliff",
+ "version":"1.2"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+## _POST Document Translation_ request code samples
+
+Submit a batch Document Translation request to the translation service.
+
+### [C#](#tab/csharp)
+
+```csharp
+
+ using System;
+ using System.Net.Http;
+ using System.Threading.Tasks;
+ using System.Text;
+
+
+ class Program
+ {
+
+ static readonly string route = "/batches";
+
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+
+ private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+
+        static readonly string json = ("{\"inputs\": [{\"source\": {\"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"language\": \"en\",\"filter\":{\"prefix\": \"Demo_1/\"} }, \"targets\": [{\"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"category\": \"general\",\"language\": \"es\"}]}]}");
+
+ static async Task Main(string[] args)
+ {
+ using HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+
+ StringContent content = new StringContent(json, Encoding.UTF8, "application/json");
+
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Content = content;
+
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+ if (response.IsSuccessStatusCode)
+ {
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine();
+ Console.WriteLine($"Response Headers:");
+ Console.WriteLine(response.Headers);
+ }
+ else
+ Console.Write("Error");
+
+ }
+
+ }
+
+ }
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios').default;
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1';
+let route = '/batches';
+let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+
+let data = JSON.stringify({"inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS",
+ "storageSource": "AzureBlob",
+ "language": "en",
+ "filter":{
+ "prefix": "Demo_1/"
+ }
+ },
+ "targets": [
+ {
+ "targetUrl": "https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS",
+ "storageSource": "AzureBlob",
+ "category": "general",
+ "language": "es"}]}]});
+
+let config = {
+ method: 'post',
+ baseURL: endpoint,
+ url: route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Content-Type': 'application/json'
+ },
+ data: data
+};
+
+axios(config)
+.then(function (response) {
+ let result = { statusText: response.statusText, statusCode: response.status, headers: response.headers };
+ console.log()
+ console.log(JSON.stringify(result));
+})
+.catch(function (error) {
+ console.log(error);
+});
+```
+
+### [Python](#tab/python)
+
+```python
+
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1"
+subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+path = '/batches'
+constructed_url = endpoint + path
+
+payload= {
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS",
+ "storageSource": "AzureBlob",
+ "language": "en",
+ "filter":{
+ "prefix": "Demo_1/"
+ }
+ },
+ "targets": [
+ {
+ "targetUrl": "https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS",
+ "storageSource": "AzureBlob",
+ "category": "general",
+ "language": "es"
+ }
+ ]
+ }
+ ]
+}
+headers = {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Content-Type': 'application/json'
+}
+
+response = requests.post(constructed_url, headers=headers, json=payload)
+
+print(f'response status code: {response.status_code}\nresponse status: {response.reason}\nresponse headers: {response.headers}')
+```
+
+### [Java](#tab/java)
+
+```java
+
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class DocumentTranslation {
+    String subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+ String path = endpoint + "/batches";
+
+ OkHttpClient client = new OkHttpClient();
+
+ public void post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType, "{\n \"inputs\": [\n {\n \"source\": {\n \"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\n \"filter\": {\n \"prefix\": \"Demo_1\"\n },\n \"language\": \"en\",\n \"storageSource\": \"AzureBlob\"\n },\n \"targets\": [\n {\n \"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\n \"category\": \"general\",\n\"language\": \"fr\",\n\"storageSource\": \"AzureBlob\"\n }\n ],\n \"storageType\": \"Folder\"\n }\n ]\n}");
+ Request request = new Request.Builder()
+ .url(path).post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.code());
+ System.out.println(response.headers());
+ }
+
+ public static void main(String[] args) {
+ try {
+ DocumentTranslation sampleRequest = new DocumentTranslation();
+ sampleRequest.post();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+ "bytes"
+ "fmt"
+    "net/http"
+)
+
+func main() {
+endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1"
+subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+uri := endpoint + "/batches"
+method := "POST"
+
+var jsonStr = []byte(`{"inputs":[{"source":{"sourceUrl":"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS","storageSource":"AzureBlob","language":"en","filter":{"prefix":"Demo_1/"}},"targets":[{"targetUrl":"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS","storageSource":"AzureBlob","category":"general","language":"es"}]}]}`)
+
+req, err := http.NewRequest(method, uri, bytes.NewBuffer(jsonStr))
+if err != nil {
+    fmt.Println(err)
+    return
+}
+req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+req.Header.Add("Content-Type", "application/json")
+
+client := &http.Client{}
+
+res, err := client.Do(req)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ defer res.Body.Close()
+ fmt.Println("response status:", res.Status)
+ fmt.Println("response headers", res.Header)
+}
+```
+++
+## _GET file formats_ code samples
+
+Retrieve a list of supported file formats. If successful, this method returns a `200 OK` response code.
+
+### [C#](#tab/csharp)
+
+```csharp
+
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
++
+class Program
+{
++
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+
+ static readonly string route = "/documents/formats";
+
+ private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+
+ static async Task Main(string[] args)
+ {
+
+ HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+ request.Method = HttpMethod.Get;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
++
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine($"Response Headers: {response.Headers}");
+ Console.WriteLine();
+ Console.WriteLine(result);
+ }
+    }
+}
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios');
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1';
+let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+let route = '/documents/formats';
+
+let config = {
+ method: 'get',
+ url: endpoint + route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey
+ }
+};
+
+axios(config)
+.then(function (response) {
+ console.log(JSON.stringify(response.data));
+})
+.catch(function (error) {
+ console.log(error);
+});
+
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class GetFileFormats {
+
+ String subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+ String url = endpoint + "/documents/formats";
+ OkHttpClient client = new OkHttpClient();
+
+ public void get() throws IOException {
+ Request request = new Request.Builder().url(
+ url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", subscriptionKey).build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.body().string());
+ }
+
+ public static void main(String[] args) throws IOException {
+ try{
+      GetFileFormats fileFormats = new GetFileFormats();
+      fileFormats.get();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+
+```
+
+### [Python](#tab/python)
+
+```python
+
+import http.client
+
+host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
+parameters = '/translator/text/batch/v1.0-preview.1/documents/formats'
+subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+conn = http.client.HTTPSConnection(host)
+payload = ''
+headers = {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey
+}
+conn.request("GET", parameters , payload, headers)
+res = conn.getresponse()
+data = res.read()
+print(res.status)
+print()
+print(data.decode("utf-8"))
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+ "fmt"
+ "net/http"
+ "io/ioutil"
+)
+
+func main() {
+
+ endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1"
+ subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+ uri := endpoint + "/documents/formats"
+ method := "GET"
+
+ client := &http.Client {
+ }
+ req, err := http.NewRequest(method, uri, nil)
+
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+
+ res, err := client.Do(req)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ defer res.Body.Close()
+
+ body, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ fmt.Println(res.StatusCode)
+ fmt.Println(string(body))
+}
+```
+++
+## _GET job status_ code samples
+
+Get the current status for a single job and a summary of all jobs in a Document Translation request. If successful, this method returns a `200 OK` response code.
+<!-- markdownlint-disable MD024 -->
+
+### [C#](#tab/csharp)
+
+```csharp
+
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
++
+class Program
+{
++
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+
+ static readonly string route = "/batches/{id}";
+
+ private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+
+ static async Task Main(string[] args)
+ {
+
+ HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+ request.Method = HttpMethod.Get;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
++
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine($"Response Headers: {response.Headers}");
+ Console.WriteLine();
+ Console.WriteLine(result);
+ }
+    }
+}
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios');
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1';
+let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+let route = '/batches/{id}';
+
+let config = {
+ method: 'get',
+ url: endpoint + route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey
+ }
+};
+
+axios(config)
+.then(function (response) {
+ console.log(JSON.stringify(response.data));
+})
+.catch(function (error) {
+ console.log(error);
+});
+
+```
+
+### [Java](#tab/java)
+
+```java
+
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class GetJobStatus {
+
+ String subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+ String url = endpoint + "/batches/{id}";
+ OkHttpClient client = new OkHttpClient();
+
+ public void get() throws IOException {
+ Request request = new Request.Builder().url(
+ url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", subscriptionKey).build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.body().string());
+ }
+
+ public static void main(String[] args) throws IOException {
+ try{
+      GetJobStatus jobStatus = new GetJobStatus();
+      jobStatus.get();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+
+```
+
+### [Python](#tab/python)
+
+```python
+
+import http.client
+
+host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
+parameters = '/translator/text/batch/v1.0-preview.1/batches/{id}'
+subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+conn = http.client.HTTPSConnection(host)
+payload = ''
+headers = {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey
+}
+conn.request("GET", parameters , payload, headers)
+res = conn.getresponse()
+data = res.read()
+print(res.status)
+print()
+print(data.decode("utf-8"))
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+ "fmt"
+ "net/http"
+ "io/ioutil"
+)
+
+func main() {
+
+ endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1"
+ subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+ uri := endpoint + "/batches/{id}"
+ method := "GET"
+
+ client := &http.Client {
+ }
+ req, err := http.NewRequest(method, uri, nil)
+
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+
+ res, err := client.Do(req)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ defer res.Body.Close()
+
+ body, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ fmt.Println(res.StatusCode)
+ fmt.Println(string(body))
+}
+```
+++
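+
+If you want to wait for a translation job to finish, you can poll the job status endpoint shown above. The following is a minimal sketch based on the Python sample in this section; the resource name, key, and job `id` placeholders are yours to fill in, and you should check the returned JSON against the API reference for the exact status fields:
+
+```python
+import time
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1"
+subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>"
+jobId = "<YOUR-JOB-ID>"
+
+url = endpoint + "/batches/" + jobId
+headers = {"Ocp-Apim-Subscription-Key": subscriptionKey}
+
+# Poll a few times and print the returned job summary each time.
+for _ in range(10):
+    response = requests.get(url, headers=headers)
+    print(response.status_code)
+    print(response.json())
+    time.sleep(30)  # wait before requesting the status again
+```
+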
+## _GET document status_ code samples
+
+### Brief overview
+
+Retrieve the status of a specific document in a Document Translation request. If successful, this method returns a `200 OK` response code.
+
+### [C#](#tab/csharp)
+
+```csharp
+
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
++
+class Program
+{
++
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+
+ static readonly string route = "/{id}/document/{documentId}";
+
+ private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+
+ static async Task Main(string[] args)
+ {
+
+ HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+ request.Method = HttpMethod.Get;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
++
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine($"Response Headers: {response.Headers}");
+ Console.WriteLine();
+ Console.WriteLine(result);
+ }
+    }
+}
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios');
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1';
+let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+let route = '/{id}/document/{documentId}';
+
+let config = {
+ method: 'get',
+ url: endpoint + route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey
+ }
+};
+
+axios(config)
+.then(function (response) {
+ console.log(JSON.stringify(response.data));
+})
+.catch(function (error) {
+ console.log(error);
+});
+
+```
+
+### [Java](#tab/java)
+
+```java
+
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class GetDocumentStatus {
+
+ String subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+ String url = endpoint + "/{id}/document/{documentId}";
+ OkHttpClient client = new OkHttpClient();
+
+ public void get() throws IOException {
+ Request request = new Request.Builder().url(
+ url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", subscriptionKey).build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.body().string());
+ }
+
+ public static void main(String[] args) throws IOException {
+ try{
+      GetDocumentStatus documentStatus = new GetDocumentStatus();
+      documentStatus.get();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+
+```
+
+### [Python](#tab/python)
+
+```python
+
+import http.client
+
+host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
+parameters = '/translator/text/batch/v1.0-preview.1/{id}/document/{documentId}'
+subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+conn = http.client.HTTPSConnection(host)
+payload = ''
+headers = {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey
+}
+conn.request("GET", parameters , payload, headers)
+res = conn.getresponse()
+data = res.read()
+print(res.status)
+print()
+print(data.decode("utf-8"))
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+ "fmt"
+ "net/http"
+ "io/ioutil"
+)
+
+func main() {
+
+ endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1"
+ subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+ uri := endpoint + "/{id}/document/{documentId}"
+ method := "GET"
+
+ client := &http.Client {
+ }
+ req, err := http.NewRequest(method, uri, nil)
+
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+
+ res, err := client.Do(req)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ defer res.Body.Close()
+
+ body, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ fmt.Println(res.StatusCode)
+ fmt.Println(string(body))
+}
+```
+++
+## _DELETE job_ code samples
+
+### Brief overview
+
+Cancel a currently processing or queued job. Only documents for which translation hasn't started will be canceled.
+
+### [C#](#tab/csharp)
+
+```csharp
+
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
++
+class Program
+{
++
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+
+ static readonly string route = "/batches/{id}";
+
+ private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+
+ static async Task Main(string[] args)
+ {
+
+ HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+ request.Method = HttpMethod.Delete;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
++
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine($"Response Headers: {response.Headers}");
+ Console.WriteLine();
+ Console.WriteLine(result);
+ }
+    }
+}
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios');
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1';
+let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+let route = '/batches/{id}';
+
+let config = {
+ method: 'delete',
+ url: endpoint + route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey
+ }
+};
+
+axios(config)
+.then(function (response) {
+ console.log(JSON.stringify(response.data));
+})
+.catch(function (error) {
+ console.log(error);
+});
+
+```
+
+### [Java](#tab/java)
+
+```java
+
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class DeleteJob {
+
+ String subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
+ String url = endpoint + "/batches/{id}";
+ OkHttpClient client = new OkHttpClient();
+
+ public void get() throws IOException {
+ Request request = new Request.Builder().url(
+ url).method("DELETE", null).addHeader("Ocp-Apim-Subscription-Key", subscriptionKey).build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.body().string());
+ }
+
+ public static void main(String[] args) throws IOException {
+ try{
+      DeleteJob deleteJob = new DeleteJob();
+      deleteJob.get();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+
+```
+
+### [Python](#tab/python)
+
+```python
+
+import http.client
+
+host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
+parameters = '/translator/text/batch/v1.0-preview.1/batches/{id}'
+subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+conn = http.client.HTTPSConnection(host)
+payload = ''
+headers = {
+ 'Ocp-Apim-Subscription-Key': subscriptionKey
+}
+conn.request("DELETE", parameters , payload, headers)
+res = conn.getresponse()
+data = res.read()
+print(res.status)
+print()
+print(data.decode("utf-8"))
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+ "fmt"
+ "net/http"
+ "io/ioutil"
+)
+
+func main() {
+
+ endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1"
+ subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+ uri := endpoint + "/batches/{id}"
+ method := "DELETE"
+
+ client := &http.Client {
+ }
+ req, err := http.NewRequest(method, uri, nil)
+
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+
+ res, err := client.Do(req)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ defer res.Body.Close()
+
+ body, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ fmt.Println(res.StatusCode)
+ fmt.Println(string(body))
+}
+```
+++
+## Content limits
+
+The table below lists the limits for data that you send to Document Translation.
+
+|Attribute | Limit|
+|---|---|
+|Document size| ≤ 40 MB |
+|Total number of files|≤ 1000 |
+|Total content size in a batch | ≤ 250 MB|
+|Number of target languages in a batch| ≤ 10 |
+|Size of translation memory file| ≤ 10 MB|
+
+> [!NOTE]
+> The above content limits are subject to change prior to the public release.
+
+## Learn more
+
+* [Translator v3 API reference](../reference/v3-0-reference.md)
+* [Language support](../language-support.md)
+* [Subscriptions in Azure API Management](/azure/api-management/api-management-subscriptions).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create a customized language system using Custom Translator](../custom-translator/overview.md)
+>
+>
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/document-translation/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/overview.md
+
+ Title: What is Document Translation?
+description: An overview of the cloud-based batch document translation service and process.
++++ Last updated : 02/11/2021++
+# What is Document Translation (Preview)?
+
+Document Translation is a cloud-based feature of the [Azure Translator](../translator-info-overview.md) service and is part of the Azure Cognitive Services family of REST APIs. The Document Translation API translates documents to and from more than 70 languages while preserving document structure and data format.
+
+## Document Translation key features
+
+| Feature | Description |
+| | -|
+| **Translate large files**| Translate whole documents asynchronously.|
+|**Translate numerous files**|Translate multiple files to and from more than 70 languages.|
+|**Preserve source file presentation**| Translate files while preserving the original layout and format.|
+|**Apply custom translation**| Translate documents using general and [custom translation](../customization.md#custom-translator) models.|
+|**Apply custom glossaries**|Translate documents using custom glossaries.|
+
+## How to get started?
+
+In our how-to guide, you'll learn how to quickly get started using Document Translation. To begin, you'll need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).
+
+> [!div class="nextstepaction"]
+> [Get Started](get-started-with-document-translation.md)
+
+## Supported document formats
+
+The following document file types are supported by Document Translation:
+
+| File type| File extension|Description|
+|||--|
+|Adobe PDF|.pdf|Adobe Acrobat portable document format|
+|HTML|.html|Hyper Text Markup Language.|
+|Localization Interchange File Format|.xlf, .xliff| A parallel document format, an export of Translation Memory systems. The languages used are defined inside the file.|
+|Microsoft Excel|.xlsx|A spreadsheet file for data analysis and documentation.|
+|Microsoft Outlook|.msg|An email message created or saved within Microsoft Outlook.|
+|Microsoft PowerPoint|.pptx| A presentation file used to display content in a slideshow format.|
+|Microsoft Word|.docx| A text document file.|
+|Tab Separated Values/TAB|.tsv/.tab| A tab-delimited raw-data file used by spreadsheet programs.|
+|Text|.txt| An unformatted text document.|
+|Translation Memory Exchange|.tmx|An open XML standard used for exchanging translation memory (TM) data created by Computer Aided Translation (CAT) and localization applications.|
+
+## Supported glossary formats
+
+The following glossary file types are supported by Document Translation:
+
+| File type| File extension|Description|
+|||--|
+|Localization Interchange File Format|.xlf, .xliff| A parallel document format, an export of Translation Memory systems. The languages used are defined inside the file.|
+|Tab Separated Values/TAB|.tsv/.tab| A tab-delimited raw-data file used by spreadsheet programs.|
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get Started with Document Translation](get-started-with-document-translation.md)
+>
+>
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/reference/v3-0-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/v3-0-reference.md
Microsoft Translator is served out of multiple datacenter locations. Currently they are:
* **Americas:** East US, South Central US, West Central US, and West US 2
* **Asia Pacific:** Korea South, Japan East, Southeast Asia, and Australia East
-* **Europe:** North Europe and West Europe
+* **Europe:** North Europe, West Europe, Switzerland North<sup>1,2</sup>, and Switzerland West<sup>1,2</sup>
Requests to the Microsoft Translator are in most cases handled by the datacenter that is closest to where the request originated. In case of a datacenter failure, the request may be routed outside of the Azure geography.
To force the request to be handled by a specific Azure geography, change the Global endpoint in the API request to the desired geographical endpoint:
|Azure|Europe| api-eur.cognitive.microsofttranslator.com|
|Azure|Asia Pacific| api-apc.cognitive.microsofttranslator.com|
+<sup>1</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource with the 'Resource region' set to 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example, if you create a Translator resource in the Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-ch-n', then your custom endpoint is `https://my-ch-n.cognitiveservices.azure.com`. A sample translate request is:
+```curl
+// Pass secret key and region using headers to a custom endpoint
+curl -X POST "https://my-ch-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
+-H "Ocp-Apim-Subscription-Key: xxx" \
+-H "Ocp-Apim-Subscription-Region: switzerlandnorth" \
+-H "Content-Type: application/json" \
+-d "[{'Text':'Hello'}]" -v
+```
+<sup>2</sup> Custom Translator is not currently available in Switzerland.
+
## Authentication

Subscribe to Translator or [Cognitive Services multi-service](https://azure.microsoft.com/pricing/details/cognitive-services/) in Azure Cognitive Services, and use your subscription key (available in the Azure portal) to authenticate.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md
Before you use the Text Analytics API, you will need to create an Azure resource
## Change your pricing tier
-If you have an existing Text Analytics resource using the S0 through S4 pricing tier, you can update it to use the Standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/):
+If you have an existing Text Analytics resource using the S0 through S4 pricing tier, you should update it to use the Standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/). The S0 through S4 pricing tiers will be retired. To update your resource's pricing:
1. Navigate to your Text Analytics resource in the [Azure portal](https://portal.azure.com/). 2. Select **Pricing tier** in the left navigation menu. It will be below **RESOURCE MANAGEMENT**.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
Previously updated : 01/27/2021 Last updated : 02/16/2021
The Text Analytics API is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## February 2021
+
+* The S0 through S4 pricing tiers are being retired on March 8th, 2021. If you have an existing Text Analytics resource using the S0 through S4 pricing tier, you should update it to use the Standard (S) [pricing tier](how-tos/text-analytics-how-to-call-api.md#change-your-pricing-tier).
+ ## January 2021 * The `2021-01-15` model version for [Named Entity Recognition](how-tos/text-analytics-how-to-entity-linking.md) v3.x, which provides
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/chat/concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
There are two core parts to chat architecture: 1) Trusted Service and 2) Client
- **Trusted service:** To properly manage a chat session, you need a service that helps you connect to Communication Services using your resource connection string. This service is responsible for creating chat threads, managing thread memberships, and providing access tokens to users. More information about access tokens can be found in our [access tokens](../../quickstarts/access-tokens.md) quickstart. - **Client app:** The client application connects to your trusted service and receives the access tokens that are used to connect directly to Communication Services. After this connection is made, your client app can send and receive messages.+
+We recommend generating access tokens using the trusted service tier. In this scenario, the server side is responsible for creating and managing users and issuing their tokens.
## Message types Communication Services Chat shares user-generated messages as well as system-generated messages called **Thread activities**. Thread activities are generated when a chat thread is updated. When you call `List Messages` or `Get Messages` on a chat thread, the result will contain the user-generated text messages as well as the system messages in chronological order. This helps you identify when a member was added or removed or when the chat thread topic was updated. Supported message types are:
+ - `Text`: A plain text message composed and sent by a user as part of a chat conversation.
- `RichText/HTML`: A formatted text message. Note that Communication Services users currently can't send RichText messages. This message type is supported by messages sent from Teams users to Communication Services users in Teams Interop scenarios.-
-```xml
-
-<addmember>
- <eventtime>1598478187549</eventtime>
- <initiator>8:acs:57b9bac9-df6c-4d39-a73b-26e944adf6ea_0e59221d-0c1d-46ae-9544-c963ce56c10b</initiator>
- <detailedinitiatorinfo>
- <friendlyName>User 1</friendlyName>
- </detailedinitiatorinfo>
- <rosterVersion>1598478184564</rosterVersion>
- <target>8:acs:57b9bac9-df6c-4d39-a73b-26e944adf6ea_0e59221d-0c1d-46ae-9544-c963ce56c10b</target>
- <detailedtargetinfo>
- <id>8:acs:57b9bac9-df6c-4d39-a73b-26e944adf6ea_0e59221d-0c1d-46ae-9544-c963ce56c10b</id>
- <friendlyName>User 1</friendlyName>
- </detailedtargetinfo>
- <target>8:acs:57b9bac9-df6c-4d39-a73b-26e944adf6ea_8540c0de-899f-5cce-acb5-3ec493af3800</target>
- <detailedtargetinfo>
- <id>8:acs:57b9bac9-df6c-4d39-a73b-26e944adf6ea_8540c0de-899f-5cce-acb5-3ec493af3800</id>
- <friendlyName>User 2</friendlyName>
- </detailedtargetinfo>
-</addmember>
-
-```
--- `ThreadActivity/DeleteMember`: System message that indicates a member has been removed from the chat thread. For example:-
-```xml
-
-<deletemember>
- <eventtime>1598478187642</eventtime>
- <initiator>8:acs:57b9bac9-df6c-4d39-a73b-26e944adf6ea_0e59221d-0c1d-46ae-9544-c963ce56c10b</initiator>
- <detailedinitiatorinfo>
- <friendlyName>User 1</friendlyName>
- </detailedinitiatorinfo>
- <rosterVersion>1598478184564</rosterVersion>
- <target>8:acs:57b9bac9-df6c-4d39-a73b-26e944adf6ea_8540c0de-899f-5cce-acb5-3ec493af3800</target>
- <detailedtargetinfo>
- <id>8:acs:57b9bac9-df6c-4d39-a73b-26e944adf6ea_8540c0de-899f-5cce-acb5-3ec493af3800</id>
- <friendlyName>User 2</friendlyName>
- </detailedtargetinfo>
-</deletemember>
-```
+ - `ThreadActivity/ParticipantAdded`: A system message that indicates one or more participants have been added to the chat thread. For example:
-- `ThreadActivity/MemberJoined`: A system message generated when a guest user joins the Teams meeting chat. Communication Services users can join as a guest of Teams meeting chats. For example:
-```xml
-{
-  "id": "1606351443605",
-  "type": "ThreadActivity/MemberJoined",
-  "version": "1606347753409",
-  "priority": "normal",
-  "content": "{\"eventtime\":1606351443080,\"initiator\":\"8:orgid:8a53fd2b5ef150bau8442ad732a6ac6b_0e8deebe7527544aa2e7bdf3ce1b8733\",\"members\":[{\"id\":\"8:acs:9b665d83-8164-4923-ad5d-5e983b07d2d7_00000006-7ef9-3bbe-b274-5a3a0d0002b1\",\"friendlyname\":\"\"}]}",
-  "senderId": " 19:meeting_curGQFTQ8tifs3EK9aTusiszGpkZULzNTTy2dbfI4dCJEaik@thread.v2",
-  "createdOn": "2020-11-29T00:44:03.6950000Z"
-}
```-- `ThreadActivity/MemberLeft`: A system message generated when a guest user leaves the meeting chat. Communication Services users can join as a guest of Teams meeting chats. For example:
-```xml
-{
-  "id": "1606347703429",
-  "type": "ThreadActivity/MemberLeft",
-  "version": "1606340753429",
-  "priority": "normal",
-  "content": "{\"eventtime\":1606340755385,\"initiator\":\"8:orgid:8a53fd2b5u8150ba81442ad732a6ac6b_0e8deebe7527544aa2e7bdf3ce1b8733\",\"members\":[{\"id\":\"8:acs:9b665753-8164-4923-ad5d-5e983b07d2d7_00000006-7ef9-3bbe-b274-5a3a0d0002b1\",\"friendlyname\":\"\"}]}",
-  "senderId": "19:meeting_9u7hBcYiADudn41Djm0n9DTVyAHuMZuh7p0bDsx1rLVGpnMk@thread.v2",
-  "createdOn": "2020-11-29T23:42:33.4290000Z"
-}
+{
+ "id": "1613589626560",
+ "type": "participantAdded",
+ "sequenceId": "7",
+ "version": "1613589626560",
+ "content":
+ {
+ "participants":
+ [
+ {
+ "id": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4df6-f40f-343a0d003226",
+ "displayName": "Jane",
+ "shareHistoryTime": "1970-01-01T00:00:00Z"
+ }
+ ],
+ "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
+ },
+ "createdOn": "2021-02-17T19:20:26Z"
+ }
```-- `ThreadActivity/TopicUpdate`: System message that indicates the topic has been updated. For example:
-```xml
+- `ThreadActivity/ParticipantRemoved`: System message that indicates a participant has been removed from the chat thread. For example:
+
+```
+{
+ "id": "1613589627603",
+ "type": "participantRemoved",
+ "sequenceId": "8",
+ "version": "1613589627603",
+ "content":
+ {
+ "participants":
+ [
+ {
+ "id": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4df6-f40f-343a0d003226",
+ "displayName": "Jane",
+ "shareHistoryTime": "1970-01-01T00:00:00Z"
+ }
+ ],
+ "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
+ },
+ "createdOn": "2021-02-17T19:20:27Z"
+ }
+```
-<topicupdate>
- <eventtime>1598477591811</eventtime>
- <initiator>8:acs:57b9bac9-df6c-4d39-a73b-26e944adf6ea_0e59221d-0c1d-46ae-9544-c963ce56c10b</initiator>
- <value>New topic</value>
-</topicupdate>
+- `ThreadActivity/TopicUpdate`: System message that indicates the thread topic has been updated. For example:
```
+{
+ "id": "1613589623037",
+ "type": "topicUpdated",
+ "sequenceId": "2",
+ "version": "1613589623037",
+ "content":
+ {
+ "topic": "New topic",
+ "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
+ },
+ "createdOn": "2021-02-17T19:20:23Z"
+ }
+```
## Real-time signaling
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/client-and-server-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/client-and-server-architecture.md
Communicating over the phone system can dramatically increase the reach of your
:::image type="content" source="../media/scenarios/archdiagram-pstn.png" alt-text="Diagram showing Communication Services PSTN architecture.":::
-For more information on PSTN and SMS solutions, see [Plan your PSTN and SMS solution](../concepts/telephony-sms/plan-solution.md)
+For more information on PSTN phone numbers, see [Phone number types](../concepts/telephony-sms/plan-solution.md)
## Humans communicating with bots and other services
You may want to exchange arbitrary data between users, for example to synchroniz
For more information, see the following articles: - Learn about [authentication](../concepts/authentication.md)-- Learn about [PSTN and SMS solutions](../concepts/telephony-sms/plan-solution.md)
+- Learn about [Phone number types](../concepts/telephony-sms/plan-solution.md)
- [Add chat to your app](../quickstarts/chat/get-started.md) - [Add voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/event-handling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/event-handling.md
Azure Event Grid is a fully managed event routing service, which uses a publish-
:::image type="content" source="https://docs.microsoft.com/azure/event-grid/media/overview/functional-model.png" alt-text="Diagram showing Azure Event Grid's event model.":::
+> [!NOTE]
+> To learn more about how data residency relates to event handling, visit the [Data Residency conceptual documentation](./privacy.md)
+ ## Events types Event grid uses [event subscriptions](../../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers.
This section contains an example of what that data would look like for each even
* For an introduction to Azure Event Grid, see [What is Event Grid?](../../event-grid/overview.md) * For an introduction to Azure Event Grid Concepts, see [Concepts in Event Grid?](../../event-grid/concepts.md)
-* For an introduction to Azure Event Grid SystemTopics, see [System topics in Azure Event Grid?](../../event-grid/system-topics.md)
+* For an introduction to Azure Event Grid SystemTopics, see [System topics in Azure Event Grid?](../../event-grid/system-topics.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
Azure Communication Services is committed to helping our customers meet their pr
## Data residency
-When creating an Communication Services resource, you specify a **geography** (not an Azure data center). All data stored by Communication Services at rest will be retained in that geography, in a data center selected internally by Communication Services. However data may transit or be processed in other geographies, these global endpoints are necessary to provide a high-performance, low-latency experience to end-users no matter their location.
+When creating a Communication Services resource, you specify a **geography** (not an Azure data center). All data stored by Communication Services at rest will be retained in that geography, in a data center selected internally by Communication Services. Data may transit or be processed in other geographies. These global endpoints are necessary to provide a high-performance, low-latency experience to end-users no matter their location.
+
+## Data residency and events
+
+Any Event Grid system topic configured with Azure Communication Services will be created in a global location. To support reliable delivery, a global Event Grid system topic may store the event data in any Microsoft data center. When you configure Event Grid with Azure Communication Services, you're delivering your event data to Event Grid, which is an Azure resource under your control. While Azure Communication Services may be configured to utilize Azure Event Grid, you're responsible for managing your Event Grid resource and the data stored within it.
## Relating humans to Azure Communication Services identities
Azure Communication Services will feed into Azure Monitor logging data for under
- [Azure Data Subject Requests for the GDPR and CCPA](/microsoft-365/compliance/gdpr-dsr-azure?preserve-view=true&view=o365-worldwide) - [Microsoft Trust Center](https://www.microsoft.com/trust-center/privacy/data-location)-- [Azure Interactive Map - Where is my customer data?](https://azuredatacentermap.azurewebsites.net/)
+- [Azure Interactive Map - Where is my customer data?](https://azuredatacentermap.azurewebsites.net/)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/telephony-sms/concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/concepts.md
# SMS concepts

[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]

Azure Communication Services enables you to send and receive SMS text messages using the Communication Services SMS client libraries. These client libraries can be used to support customer service scenarios, appointment reminders, two-factor authentication, and other real-time communication needs. Communication Services SMS allows you to reliably send messages while exposing deliverability and response rate insights surrounding your campaigns.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/telephony-sms/telephony-concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/telephony-concept.md
# Telephony concepts

[!INCLUDE [Private Preview Notice](../../includes/private-preview-include.md)]

Azure Communication Services Calling client libraries can be used to add telephony and PSTN to your applications. This page summarizes key telephony concepts and capabilities. See the [calling library](../../quickstarts/voice-video-calling/calling-client-samples.md) to learn more about specific client library languages and capabilities.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/about-call-types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/about-call-types.md
During the preview you can use the group ID to join the same conversation. You c
For more information, see the following articles: - Familiarize yourself with general [call flows](../call-flows.md)-- [Plan your PSTN solution](../telephony-sms/plan-solution.md)
+- [Phone number types](../telephony-sms/plan-solution.md)
- Learn about the [calling client library capabilities](../voice-video-calling/calling-sdk-features.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/calling-sdk-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Note that in group scenarios, one mixed audio stream is used to support all audi
For more information, see the following articles: - Familiarize yourself with general [call flows](../call-flows.md) - Learn about [call types](../voice-video-calling/about-call-types.md)-- [Plan your PSTN solution](../telephony-sms/plan-solution.md)
+- Learn about [phone number types](../telephony-sms/plan-solution.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/includes/regional-availability-include https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/includes/regional-availability-include.md
Last updated 11/29/2020
> [!IMPORTANT]
-> Phone number availability is currently restricted to Azure subscriptions that have a billing address in the United States. For more information, visit the [telephony and SMS solution planning](../concepts/telephony-sms/plan-solution.md) documentation.
+> Phone number availability is currently restricted to Azure subscriptions that have a billing address in the United States. For more information, visit the [Phone number types](../concepts/telephony-sms/plan-solution.md) documentation.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/get-started.md
Last updated 09/30/2020
-zone_pivot_groups: acs-js-csharp-java-python-swift
+zone_pivot_groups: acs-js-csharp-java-python-swift-android
# Quickstart: Add Chat to your App
Get started with Azure Communication Services by using the Communication Service
[!INCLUDE [Chat with Java client library](./includes/chat-java.md)] ::: zone-end + ::: zone pivot="programming-language-csharp" [!INCLUDE [Chat with C# client library](./includes/chat-csharp.md)] ::: zone-end
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/includes/chat-android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/includes/chat-android.md
+
+ Title: include file
+description: include file
+++++ Last updated : 2/16/2020+++++
+## Prerequisites
+Before you get started, make sure to:
+
+- Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Install [Android Studio](https://developer.android.com/studio). We'll use Android Studio to create an Android application for the quickstart and to install its dependencies.
+- Create an Azure Communication Services resource. For details, see [Create an Azure Communication Resource](../../create-communication-resource.md). You'll need to **record your resource endpoint** for this quickstart.
+- Create **two** Communication Services users and issue them a [user access token](../../access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string**. In this quickstart, we'll create a thread with an initial participant and then add a second participant to the thread.
+
+## Setting up
+
+### Create a new Android application
+
+1. Open Android Studio and select `Create a new project`.
+2. On the next window, select `Empty Activity` as the project template.
+3. When choosing options enter `ChatQuickstart` as the project name.
+4. Click next and choose the directory where you want the project to be created.
+
+### Install the libraries
+
+We'll use Gradle to install the necessary Communication Services dependencies. From the command line, navigate to the root directory of the `ChatQuickstart` project. Open the app's build.gradle file and add the following dependencies to the `ChatQuickstart` target:
+
+```
+implementation 'com.azure.android:azure-communication-common:1.0.0-beta.5'
+implementation 'com.azure.android:azure-communication-chat:1.0.0-beta.5'
+```
+
+Click 'sync now' in Android Studio.
+
+#### (Alternative) To install libraries through Maven
+To import the library into your project using the [Maven](https://maven.apache.org/) build system, add it to the `dependencies` section of your app's `pom.xml` file, specifying its artifact ID and the version you wish to use:
+
+```xml
+<dependency>
+ <groupId>com.azure.android</groupId>
+ <artifactId>azure-communication-chat</artifactId>
+ <version>1.0.0-beta.5</version>
+</dependency>
+```
++
+### Set up the placeholders
+
+Open and edit the file `MainActivity.java`. In this quickstart, we'll add our code to `MainActivity` and view the output in the console. This quickstart doesn't address building a UI. At the top of the file, import the `Communication common` and `Communication chat` libraries:
+
+```java
+import com.azure.android.communication.chat.*;
+import com.azure.android.communication.common.*;
+```
+
+Copy the following code into the file `MainActivity`:
+
+```
+ @Override
+ protected void onStart() {
+ super.onStart();
+ try {
+ // <CREATE A CHAT CLIENT>
+
+ // <CREATE A CHAT THREAD>
+
+ // <CREATE A CHAT THREAD CLIENT>
+
+ // <SEND A MESSAGE>
+
+ // <ADD A USER>
+
+ // <LIST USERS>
+
+ // <REMOVE A USER>
+ } catch (Exception e){
+ System.out.println("Quickstart failed: " + e.getMessage());
+ }
+ }
+```
+
+In the following steps, we'll replace the placeholders with sample code using the Azure Communication Services Chat library.
++
+### Create a chat client
+
+Replace the comment `<CREATE A CHAT CLIENT>` with the following code (put the import statements at top of the file):
+
+```java
+import com.azure.android.communication.chat.ChatClient;
+import com.azure.android.core.http.HttpHeader;
+
+final String endpoint = "https://<resource>.communication.azure.com";
+final String userAccessToken = "<user_access_token>";
+
+ChatClient client = new ChatClient.Builder()
+    .endpoint(endpoint)
+    .credentialInterceptor(chain -> chain.proceed(chain.request()
+        .newBuilder()
+        .header(HttpHeader.AUTHORIZATION, userAccessToken)
+        .build()))
+    .build();
+
+1. Use the `ChatClient.Builder` to configure and create an instance of `ChatClient`.
+2. Replace `<resource>` with your Communication Services resource.
+3. Replace `<user_access_token>` with a valid Communication Services access token.
+
+## Object model
+The following classes and interfaces handle some of the major features of the Azure Communication Services Chat client library for Android.
+
+| Name | Description |
+| -- | - |
+| ChatClient | This class is needed for the Chat functionality. You instantiate it with your subscription information, and use it to create, get and delete threads. |
+| ChatThreadClient | This class is needed for the Chat Thread functionality. You obtain an instance via the ChatClient, and use it to send/receive/update/delete messages, add/remove/get users, send typing notifications and read receipts, and subscribe to chat events. |
+
+## Start a chat thread
+
+We'll use our `ChatClient` to create a new thread with an initial user.
+
+Replace the comment `<CREATE A CHAT THREAD>` with the following code:
+
+```java
+// The list of ChatParticipant to be added to the thread.
+List<ChatParticipant> participants = new ArrayList<>();
+// The communication user ID you created before, required.
+final String id = "<user_id>";
+// The display name for the thread participant.
+final String displayName = "initial participant";
+participants.add(new ChatParticipant()
+ .setId(id)
+ .setDisplayName(displayName));
+
+// The topic for the thread.
+final String topic = "General";
+// The model to pass to the create method.
+CreateChatThreadRequest thread = new CreateChatThreadRequest()
+ .setTopic(topic)
+ .setParticipants(participants);
+
+// optional, set a repeat request ID
+final String repeatabilityRequestID = "123";
+
+client.createChatThread(thread, repeatabilityRequestID, new Callback<CreateChatThreadResult>() {
+ public void onSuccess(CreateChatThreadResult result, okhttp3.Response response) {
+ // MultiStatusResponse is the result returned from creating a thread.
+ // It has a 'multipleStatus' property which represents a list of IndividualStatusResponse.
+ String threadId;
+ List<IndividualStatusResponse> statusList = result.getMultipleStatus();
+ for (IndividualStatusResponse status : statusList) {
+ if (status.getId().endsWith("@thread.v2")
+ && status.getType().contentEquals("Thread")) {
+ threadId = status.getId();
+ break;
+ }
+ }
+ // Take further action.
+ }
+
+ public void onFailure(Throwable throwable, okhttp3.Response response) {
+ // Handle error.
+ }
+});
+```
+
+Replace `<user_id>` with a valid Communication Services user ID. We'll use the `threadId` from the response returned to the completion handler in later steps.
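+
+One way to keep that thread ID available to the later steps is to store it in a field on the activity. This is just an illustrative pattern (the field name is hypothetical), not part of the Chat SDK:
+
+```java
+// Hypothetical field on MainActivity: set it inside the createChatThread callback above,
+// then read it in the send-message and participant steps that follow.
+private String chatThreadId;
+```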
+
+## Get a chat thread client
+
+Now that we've created a chat thread, we'll obtain a `ChatThreadClient` to perform operations within the thread. Replace the comment `<CREATE A CHAT THREAD CLIENT>` with the following code:
+
+```java
+ChatThreadClient threadClient = new ChatThreadClient.Builder()
+    .endpoint("<endpoint>")
+    .build();
+```
+
+Replace `<endpoint>` with your Communication Services endpoint.
+
+## Send a message to a chat thread
+
+Replace the comment `<SEND A MESSAGE>` with the following code:
+
+```java
+// The chat message content, required.
+final String content = "Test message 1";
+// The display name of the sender, if null (i.e. not specified), an empty name will be set.
+final String senderDisplayName = "An important person";
+SendChatMessageRequest message = new SendChatMessageRequest()
+ .setType(ChatMessageType.TEXT)
+ .setContent(content)
+ .setSenderDisplayName(senderDisplayName);
+
+// The unique ID of the thread.
+final String threadId = "<thread_id>";
+threadClient.sendChatMessage(threadId, message, new Callback<String>() {
+ @Override
+ public void onSuccess(String messageId, Response response) {
+ // A string is the response returned from sending a message, it is an id,
+ // which is the unique ID of the message.
+ final String chatMessageId = messageId;
+ // Take further action.
+ }
+
+ @Override
+ public void onFailure(Throwable throwable, Response response) {
+ // Handle error.
+ }
+});
+```
+
+Replace `<thread_id>` with the ID of the thread you're sending the message to.
+
+## Add a user as a participant to the chat thread
+
+Replace the comment `<ADD A USER>` with the following code:
+
+```java
+// The list of ChatParticipant to be added to the thread.
+List<ChatParticipant> participants = new ArrayList<>();
+// The CommunicationUser.identifier you created before, required.
+final String id = "<user_id>";
+// The display name for the thread participant.
+final String displayName = "a new participant";
+participants.add(new ChatParticipant().setId(id).setDisplayName(displayName));
+// The model to pass to the add method.
+AddChatParticipantsRequest addParticipantsRequest = new AddChatParticipantsRequest()
+    .setParticipants(participants);
+
+// The unique ID of the thread.
+final String threadId = "<thread_id>";
+threadClient.addChatParticipants(threadId, addParticipantsRequest, new Callback<Void>() {
+ @Override
+ public void onSuccess(Void result, Response response) {
+ // Take further action.
+ }
+
+ @Override
+ public void onFailure(Throwable throwable, Response response) {
+ // Handle error.
+ }
+});
+```
+
+1. Replace `<user_id>` with the Communication Services user ID of the user to be added.
+2. Replace `<thread_id>` with the ID of the thread the user is being added to.
+
+## List users in a thread
+
+Replace the `<LIST USERS>` comment with the following code:
+
+```java
+// The unique ID of the thread.
+final String threadId = "<thread_id>";
+
+// The maximum number of participants to be returned per page, optional.
+final int maxPageSize = 10;
+
+// Skips participants up to a specified position in response.
+final int skip = 0;
+
+threadClient.listChatParticipantsPages(threadId,
+    maxPageSize,
+    skip,
+    new Callback<AsyncPagedDataCollection<ChatParticipant, Page<ChatParticipant>>>() {
+        @Override
+        public void onSuccess(AsyncPagedDataCollection<ChatParticipant, Page<ChatParticipant>> pageCollection,
+            Response response) {
+            // pageCollection enables enumerating the list of chat participants page by page.
+            pageCollection.getFirstPage(new Callback<Page<ChatParticipant>>() {
+                @Override
+                public void onSuccess(Page<ChatParticipant> firstPage, Response response) {
+                    for (ChatParticipant participant : firstPage.getItems()) {
+                        // Take further action.
+                    }
+                    retrieveNextParticipantsPages(firstPage.getPageId(), pageCollection);
+                }
+
+                @Override
+                public void onFailure(Throwable throwable, Response response) {
+                    // Handle error.
+                }
+            });
+        }
+
+        @Override
+        public void onFailure(Throwable throwable, Response response) {
+            // Handle error.
+        }
+});
+
+// Declare this helper as a method on MainActivity (outside onStart).
+void retrieveNextParticipantsPages(String nextPageId,
+    AsyncPagedDataCollection<ChatParticipant, Page<ChatParticipant>> pageCollection) {
+    if (nextPageId == null) {
+        return;
+    }
+    // Fetch the page with the given ID from the paged collection, then keep walking it.
+    pageCollection.getPage(nextPageId, new Callback<Page<ChatParticipant>>() {
+        @Override
+        public void onSuccess(Page<ChatParticipant> nextPage, Response response) {
+            for (ChatParticipant participant : nextPage.getItems()) {
+                // Take further action.
+            }
+            retrieveNextParticipantsPages(nextPage.getPageId(), pageCollection);
+        }
+
+        @Override
+        public void onFailure(Throwable throwable, Response response) {
+            // Handle error.
+        }
+    });
+}
+```
+
+Replace `<thread_id>` with the thread ID you're listing users for.
+
+## Remove user from a chat thread
+
+Replace the `<REMOVE A USER>` comment with the following code:
+
+```java
+// The unique ID of the thread.
+final String threadId = "<thread_id>";
+// The unique ID of the participant.
+final String participantId = "<participant_id>";
+threadClient.removeChatParticipant(threadId, participantId, new Callback<Void>() {
+ @Override
+ public void onSuccess(Void result, Response response) {
+ // Take further action.
+ }
+
+ @Override
+ public void onFailure(Throwable throwable, Response response) {
+ // Handle error.
+ }
+});
+```
+
+1. Replace `<thread_id>` with the ID of the thread you're removing the user from.
+1. Replace `<participant_id>` with the Communication Services user ID of the participant being removed.
+
+## Run the code
+
+In Android Studio, select the Run button to build and run the project. In the console, you can view the output from the code and the logger output from the ChatClient.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/includes/chat-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/includes/chat-csharp.md
dotnet build
Install the Azure Communication Chat client library for .NET ```PowerShell
-dotnet add package Azure.Communication.Chat --version 1.0.0-beta.3
+dotnet add package Azure.Communication.Chat --version 1.0.0-beta.4
``` ## Object model
The following classes handle some of the major features of the Azure Communicati
| Name | Description | | - | | | ChatClient | This class is needed for the Chat functionality. You instantiate it with your subscription information, and use it to create, get and delete threads. |
-| ChatThreadClient | This class is needed for the Chat Thread functionality. You obtain an instance via the ChatClient, and use it to send/receive/update/delete messages, add/remove/get users, send typing notifications and read receipts. |
+| ChatThreadClient | This class is needed for the Chat Thread functionality. You obtain an instance via the ChatClient, and use it to send/receive/update/delete messages, add/remove/get participants, send typing notifications and read receipts. |
## Create a chat client
-To create a chat client, you'll use your Communication Services endpoint and the access token that was generated as part of prerequisite steps. You need to use the `CommunicationIdentityClient` class from the `Administration` client library to create a user and issue a token to pass to your chat client. Learn more about [User Access Tokens](../../access-tokens.md).
+To create a chat client, you'll use your Communication Services endpoint and the access token that was generated as part of the prerequisite steps. You need to use the `CommunicationIdentityClient` class from the `Administration` client library to create a user and issue a token to pass to your chat client.
+
+Learn more about [User Access Tokens](../../access-tokens.md).
+
+This quickstart does not cover creating a service tier to manage tokens for your chat application, although it is recommended. Learn more about [Chat Architecture](../../../concepts/chat/concepts.md).
```csharp using Azure.Communication.Identity;
using Azure.Communication.Chat;
// Your unique Azure Communication service endpoint Uri endpoint = new Uri("https://<RESOURCE_NAME>.communication.azure.com");
-CommunicationUserCredential communicationUserCredential = new CommunicationUserCredential(<Access_Token>);
-ChatClient chatClient = new ChatClient(endpoint, communicationUserCredential);
+CommunicationTokenCredential communicationTokenCredential = new CommunicationTokenCredential(<Access_Token>);
+ChatClient chatClient = new ChatClient(endpoint, communicationTokenCredential);
``` ## Start a chat thread
-Use the `createChatThread` method to create a chat thread.
-- Use `topic` to give a topic to this chat; Topic can be updated after the chat thread is created using the `UpdateThread` function.-- Use `members` property to pass a list of `ChatThreadMember` objects to be added to the chat thread. The `ChatThreadMember` object is initialized with a `CommunicationUser` object. To get a `CommunicationUser` object, you will need to pass an Access ID which you
-created by following instruction to [Create a user](../../access-tokens.md#create-an-identity)
+Use the `createChatThread` method on the chat client to create a chat thread.
+- Use `topic` to give a topic to this chat; the topic can be updated after the chat thread is created by using the `UpdateTopic` function.
+- Use the `participants` property to pass a list of `ChatParticipant` objects to be added to the chat thread. The `ChatParticipant` object is initialized with a `CommunicationIdentifier` object. `CommunicationIdentifier` can be of type `CommunicationUserIdentifier`, `MicrosoftTeamsUserIdentifier`, or `PhoneNumberIdentifier`. For example, to get a `CommunicationIdentifier` object, you'll need to pass an access ID that you created by following the instructions in [Create a user](../../access-tokens.md#create-an-identity).
-The response `chatThreadClient` is used to perform operations on the created chat thread: adding members to the chat thread, sending a message, deleting a message, etc.
-It contains the `Id` attribute which is the unique ID of the chat thread.
+The response object from the createChatThread method contains the chat thread details. To perform chat thread operations such as adding participants, sending a message, or deleting a message, a chatThreadClient instance needs to be instantiated by using the GetChatThreadClient method on the ChatClient.
```csharp
-var chatThreadMember = new ChatThreadMember(new CommunicationUser("<Access_ID>"))
+var chatParticipant = new ChatParticipant(communicationIdentifier: new CommunicationUserIdentifier(id: "<Access_ID>"))
{ DisplayName = "UserDisplayName" };
-ChatThreadClient chatThreadClient = await chatClient.CreateChatThreadAsync(topic: "Chat Thread topic C# sdk", members: new[] { chatThreadMember });
+CreateChatThreadResult createChatThreadResult = await chatClient.CreateChatThreadAsync(topic: "Hello world!", participants: new[] { chatParticipant });
+ChatThreadClient chatThreadClient = chatClient.GetChatThreadClient(createChatThreadResult.ChatThread.Id);
string threadId = chatThreadClient.Id; ```
ChatThreadClient chatThreadClient = chatClient.GetChatThreadClient(threadId);
## Send a message to a chat thread
-Use `SendMessage` method to send a message to a thread identified by threadId.
--- Use `content` to provide the chat message content, it is required.-- Use `priority` to specify the message priority level, such as 'Normal' or 'High', if not specified, 'Normal' will be used.-- Use `senderDisplayName` to specify the display name of the sender, if not specified, empty name will be used.
+Use `SendMessage` to send a message to a thread.
-`SendChatMessageResult` is the response returned from sending a message, it contains an id, which is the unique ID of the message.
+- Use `content` to provide the content for the message; it is required.
+- Use `type` for the content type of the message, such as 'Text' or 'Html'. If not specified, 'Text' will be set.
+- Use `senderDisplayName` to specify the display name of the sender. If not specified, an empty string will be set.
```csharp
-var content = "hello world";
-var priority = ChatMessagePriority.Normal;
-var senderDisplayName = "sender name";
+var messageId = await chatThreadClient.SendMessageAsync(content: "hello world", type: ChatMessageType.Text);
+```
+## Get a message
+
+Use `GetMessage` to retrieve a message from the service.
+`messageId` is the unique ID of the message.
-SendChatMessageResult sendChatMessageResult = await chatThreadClient.SendMessageAsync(content, priority, senderDisplayName);
-string messageId = sendChatMessageResult.Id;
+`ChatMessage` is the response returned from getting a message; it contains an ID, which is the unique identifier of the message, among other fields. For more information, see `Azure.Communication.Chat.ChatMessage`.
+
+```csharp
+ChatMessage chatMessage = await chatThreadClient.GetMessageAsync(messageId);
``` ## Receive chat messages from a chat thread
await foreach (ChatMessage message in allMessages)
- `Text`: Regular chat message sent by a thread member. -- `ThreadActivity/TopicUpdate`: System message that indicates the topic has been updated.
+- `Html`: A formatted text message. Note that Communication Services users currently can't send RichText messages. This message type is supported by messages sent from Teams users to Communication Services users in Teams Interop scenarios.
-- `ThreadActivity/AddMember`: System message that indicates one or more members have been added to the chat thread.
+- `TopicUpdated`: System message that indicates the topic has been updated. (readonly)
-- `ThreadActivity/DeleteMember`: System message that indicates a member has been removed from the chat thread.
+- `ParticipantAdded`: System message that indicates one or more participants have been added to the chat thread. (readonly)
+
+- `ParticipantRemoved`: System message that indicates a participant has been removed from the chat thread.
For more details, see [Message Types](../../../concepts/chat/concepts.md#message-types).
string id = "id-of-message-to-delete";
await chatThreadClient.DeleteMessageAsync(id); ```
-## Add a user as member to the chat thread
-
-Once a thread is created, you can then add and remove users from it. By adding users, you give them access to be able to send messages to the thread, and add/remove other members. Before calling `AddMembers`, ensure that you have acquired a new access token and identity for that user. The user will need that access token in order to initialize their chat client.
+## Add a user as a participant to the chat thread
-Use `AddMembers` method to add thread members to the thread identified by threadId.
+Once a thread is created, you can then add and remove users from it. By adding users, you give them access to send messages to the thread and to add or remove other participants. Before calling `AddParticipants`, ensure that you have acquired a new access token and identity for that user. The user will need that access token in order to initialize their chat client.
+Use `AddParticipants` to add one or more participants to the chat thread. The following are the supported attributes for each thread participant:
+- `communicationUser`, required, is the identity of the thread participant.
+- `displayName`, optional, is the display name for the thread participant.
+- `shareHistoryTime`, optional, time from which the chat history is shared with the participant.
```csharp
-ChatThreadMember member = new ChatThreadMember(communicationUser);
-member.DisplayName = "display name member 1";
-member.ShareHistoryTime = DateTime.MinValue; // share all history
-await chatThreadClient.AddMembersAsync(members: new[] {member});
+var josh = new CommunicationUserIdentifier(id: "<Access_ID_For_Josh>");
+var gloria = new CommunicationUserIdentifier(id: "<Access_ID_For_Gloria>");
+var amy = new CommunicationUserIdentifier(id: "<Access_ID_For_Amy>");
+
+var participants = new[]
+{
+ new ChatParticipant(josh) { DisplayName = "Josh" },
+ new ChatParticipant(gloria) { DisplayName = "Gloria" },
+ new ChatParticipant(amy) { DisplayName = "Amy" }
+};
+
+await chatThreadClient.AddParticipantsAsync(participants);
``` ## Remove user from a chat thread
-Similar to adding a user to a thread, you can remove users from a chat thread. To do that, you need to track the identity (CommunicationUser) of the members you have added.
+Similar to adding a user to a thread, you can remove users from a chat thread. To do that, you need to track the identity (`CommunicationUserIdentifier`) of the participants you have added.
+
+```csharp
+var gloria = new CommunicationUserIdentifier(id: "<Access_ID_For_Gloria>");
+await chatThreadClient.RemoveParticipantAsync(gloria);
+```
+
+## Get thread participants
+
+Use `GetParticipants` to retrieve the participants of the chat thread.
```csharp
-await chatThreadClient.RemoveMemberAsync(communicationUser);
+AsyncPageable<ChatParticipant> allParticipants = chatThreadClient.GetParticipantsAsync();
+await foreach (ChatParticipant participant in allParticipants)
+{
+ Console.WriteLine($"{((CommunicationUserIdentifier)participant.User).Id}:{participant.DisplayName}:{participant.ShareHistoryTime}");
+}
+```
+
+## Send typing notification
+
+Use `SendTypingNotification` to indicate that the user is typing a response in the thread.
+
+```csharp
+await chatThreadClient.SendTypingNotificationAsync();
+```
+
+## Send read receipt
+
+Use `SendReadReceipt` to notify other participants that the message is read by the user.
+
+```csharp
+await chatThreadClient.SendReadReceiptAsync(messageId);
```
+## Get read receipts
+
+Use `GetReadReceipts` to check the status of messages to see which ones are read by other participants of a chat thread.
+
+```csharp
+AsyncPageable<ChatMessageReadReceipt> allReadReceipts = chatThreadClient.GetReadReceiptsAsync();
+await foreach (ChatMessageReadReceipt readReceipt in allReadReceipts)
+{
+ Console.WriteLine($"{readReceipt.ChatMessageId}:{((CommunicationUserIdentifier)readReceipt.Sender).Id}:{readReceipt.ReadOn}");
+}
+```
## Run the code Run the application from your application directory with the `dotnet run` command.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/includes/chat-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/includes/chat-java.md
The following classes and interfaces handle some of the major features of the Az
## Create a chat client To create a chat client, you'll use the Communications Service endpoint and the access token that was generated as part of pre-requisite steps. User access tokens enable you to build client applications that directly authenticate to Azure Communication Services. Once you generate these tokens on your server, pass them back to a client device. You need to use the CommunicationTokenCredential class from the Common client library to pass the token to your chat client.
+Learn more about [Chat Architecture](../../../concepts/chat/concepts.md)
+ When adding the import statements, be sure to only add imports from the com.azure.communication.chat and com.azure.communication.chat.models namespaces, and not from the com.azure.communication.chat.implementation namespace. In the App.java file that was generated via Maven, you can use the following code to begin with: ```Java
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/includes/chat-js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/includes/chat-js.md
Create a file in the root directory of your project called **client.js** to cont
To create a chat client in your web app, you'll use the Communications Service **endpoint** and the **access token** that was generated as part of pre-requisite steps.
-User access tokens enable you to build client applications that directly authenticate to Azure Communication Services.
-
-##### Server vs. client side
-
-We recommend generating access tokens using a server-side component that passes them to the client application. In this scenario the server side would be responsible for creating and managing users and issuing their tokens. The client side can then receive access tokens from the service and use them to authenticate the Azure Communication Services client libraries.
-
-Tokens can also be issued on the client side using the Azure Communication Administration library for JavaScript. In this scenario the client side would need to be aware of users in order to issue their tokens.
-
-See the following documentation for more detail [Client and Server Architecture](../../../concepts/client-and-server-architecture.md)
-
-In the diagram below the client side application receives an access token from a trusted service tier. The application then uses the token to authenticate Communication Services libraries. Once authenticated, the application can now use the Communication Services client side libraries to perform operations such as chatting with other users.
--
-##### Instructions
-This demo does not cover creating a service tier for your chat application.
-
-If you have not generated users and their tokens, follow the instructions here to do so: [User Access Token](../../access-tokens.md). Remember to set the scope to "chat" and not "voip".
+User access tokens enable you to build client applications that directly authenticate to Azure Communication Services. This quickstart does not cover creating a service tier to manage tokens for your chat application. See [chat concepts](../../../concepts/chat/concepts.md) for more information about chat architecture, and [user access tokens](../../access-tokens.md) for more information about access tokens.
Inside **client.js** use the endpoint and access token in the code below to add chat capability using the Azure Communication Chat client library for JavaScript.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/includes/chat-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/includes/chat-python.md
The following classes and interfaces handle some of the major features of the Az
To create a chat client, you'll use Communications Service endpoint and the `Access Token` that was generated as part of pre-requisite steps. Learn more about [User Access Tokens](../../access-tokens.md).
+This quickstart does not cover creating a service tier to manage tokens for your chat application, although it is recommended. See [Chat Architecture](../../../concepts/chat/concepts.md) for more detail.
+ ```console pip install azure-communication-administration ``` ```python
-from azure.communication.chat import ChatClient, CommunicationUserCredential
+from azure.communication.chat import ChatClient, CommunicationTokenCredential, CommunicationTokenRefreshOptions
endpoint = "https://<RESOURCE_NAME>.communication.azure.com"
-chat_client = ChatClient(endpoint, CommunicationUserCredential(<Access Token>))
+refresh_options = CommunicationTokenRefreshOptions(<Access Token>)
+chat_client = ChatClient(endpoint, CommunicationTokenCredential(refresh_options))
``` ## Start a chat thread
chat_client = ChatClient(endpoint, CommunicationUserCredential(<Access Token>))
Use the `create_chat_thread` method to create a chat thread. - Use `topic` to give a thread topic; Topic can be updated after the chat thread is created using the `update_thread` function.-- Use `members` to list the `ChatThreadMember` to be added to the chat thread, the `ChatThreadMember` takes `CommunicationUser` type as `user`, which is what you got after you
+- Use `thread_participants` to list the `ChatThreadParticipant` objects to be added to the chat thread; the `ChatThreadParticipant` takes a `CommunicationUserIdentifier` type as `user`, which is the user you
created by [Create a user](../../access-tokens.md#create-an-identity)
+- Use `repeatability_request_id` to direct that the request is repeatable. The client can make the request multiple times with the same Repeatability-Request-ID and get back an appropriate response without the server executing the request multiple times.
+
+The response `chat_thread_client` is used to perform operations on the newly created chat thread like adding participants to the chat thread, send message, delete message, etc. It contains a `thread_id` property which is the unique ID of the chat thread.
+
+#### Without Repeatability-Request-ID
+```python
+from datetime import datetime
+from azure.communication.chat import ChatThreadParticipant
+
+topic="test topic"
+participants = [ChatThreadParticipant(
+ user=user,
+ display_name='name',
+ share_history_time=datetime.utcnow()
+)]
-The response `chat_thread_client` is used to perform operations on the newly created chat thread like adding members to the chat thread, send message, delete message, etc. It contains a `thread_id` property which is the unique ID of the chat thread.
+chat_thread_client = chat_client.create_chat_thread(topic, participants)
+```
+#### With Repeatability-Request-ID
```python from datetime import datetime
-from azure.communication.chat import ChatThreadMember
+from azure.communication.chat import ChatThreadParticipant
topic="test topic"
-thread_members=[ChatThreadMember(
+participants = [ChatThreadParticipant(
user=user, display_name='name', share_history_time=datetime.utcnow() )]
-chat_thread_client = chat_client.create_chat_thread(topic, thread_members)
+
+repeatability_request_id = 'b66d6031-fdcc-41df-8306-e524c9f226b8' # some unique identifier
+chat_thread_client = chat_client.create_chat_thread(topic, participants, repeatability_request_id)
``` ## Get a chat thread client
-The get_chat_thread_client method returns a thread client for a thread that already exists. It can be used for performing operations on the created thread: add members, send message, etc. thread_id is the unique ID of the existing chat thread.
+The `get_chat_thread` method retrieves a thread that already exists. `thread_id` is the unique ID of the existing chat thread.
```python thread_id = 'id'
-chat_thread_client = chat_client.get_chat_thread_client(thread_id)
+chat_thread = chat_client.get_chat_thread(thread_id)
+```
+
+## List all chat threads
+The `list_chat_threads` method returns an iterator of type `ChatThreadInfo`. It can be used to list all chat threads.
+
+- Use `start_time` to specify the earliest point in time to get chat threads up to.
+- Use `results_per_page` to specify the maximum number of chat threads returned per page.
+
+```python
+import pytz
+from datetime import datetime, timedelta
+
+start_time = datetime.utcnow() - timedelta(days=2)
+start_time = start_time.replace(tzinfo=pytz.utc)
+chat_thread_infos = chat_client.list_chat_threads(results_per_page=5, start_time=start_time)
+
+for info in chat_thread_infos:
+ # Iterate over all chat threads
+ print("thread id:", info.id)
+```
+
+## Delete a chat thread
+The `delete_chat_thread` method is used to delete a chat thread.
+
+- Use `thread_id` to specify the thread ID of an existing chat thread that needs to be deleted.
+
+```python
+thread_id='id'
+chat_client.delete_chat_thread(thread_id)
``` ## Send a message to a chat thread
-Use `send_message` method to send a message to a chat thread you just created, identified by threadId.
+Use the `send_message` method to send a message to the chat thread you just created, identified by `thread_id`.
- Use `content` to provide the chat message content;-- Use `priority` to specify the message priority level, such as 'Normal' or 'High' ; this property can be used to have UI indicator for the recipient user in your app to bring attention to the message or execute custom business logic.-- Use `senderDisplayName` to specify the display name of the sender;
+- Use `chat_message_type` to specify the message content type. Possible values are 'text' and 'html'; if not specified default value of 'text' is assigned.
+- Use `sender_display_name` to specify the display name of the sender;
+
+The response is an "id" of type `str`, which is the unique ID of that message.
+
+#### Message type not specified
+```python
+
+content='hello world'
+sender_display_name='sender name'
-The response `SendChatMessageResult` contains an "id", which is the unique ID of that message.
+send_message_result_id = chat_thread_client.send_message(content=content, sender_display_name=sender_display_name)
+```
+#### Message type specified
```python
-from azure.communication.chat import ChatMessagePriority
+from azure.communication.chat import ChatMessageType
content='hello world'
-priority=ChatMessagePriority.NORMAL
sender_display_name='sender name'
-send_message_result = chat_thread_client.send_message(content, priority=priority, sender_display_name=sender_display_name)
+# specify chat message type with pre-built enumerations
+send_message_result_id_w_enum = chat_thread_client.send_message(content=content, sender_display_name=sender_display_name, chat_message_type=ChatMessageType.TEXT)
+
+# specify chat message type as string
+send_message_result_id_w_str = chat_thread_client.send_message(content=content, sender_display_name=sender_display_name, chat_message_type='text')
+```
+
+## Get a specific chat message from a chat thread
+The `get_message` function can be used to retrieve a specific message, identified by a `message_id`.
+
+- Use `message_id` to specify the message ID.
+
+The response of type `ChatMessage` contains all information related to the single message.
+
+```python
+message_id = 'message_id'
+chat_message = chat_thread_client.get_message(message_id)
``` ## Receive chat messages from a chat thread You can retrieve chat messages by polling the `list_messages` method at specified intervals.
+- Use `results_per_page` to specify the maximum number of messages to be returned per page.
+- Use `start_time` to specify the earliest point in time to get messages up to.
+ ```python
-chat_messages = chat_thread_client.list_messages()
+chat_messages = chat_thread_client.list_messages(results_per_page=1, start_time=start_time)
```+ `list_messages` returns the latest version of the message, including any edits or deletes that happened to the message using `update_message` and `delete_message`. For deleted messages `ChatMessage.deleted_on` returns a datetime value indicating when that message was deleted. For edited messages, `ChatMessage.edited_on` returns a datetime indicating when the message was edited. The original time of message creation can be accessed using `ChatMessage.created_on` which can be used for ordering the messages. `list_messages` returns different types of messages which can be identified by `ChatMessage.type`. These types are: -- `Text`: Regular chat message sent by a thread member.
+- `ChatMessageType.TEXT`: Regular chat message sent by a thread participant.
+
+- `ChatMessageType.HTML`: HTML chat message sent by a thread participant.
-- `ThreadActivity/TopicUpdate`: System message that indicates the topic has been updated.
+- `ChatMessageType.TOPIC_UPDATED`: System message that indicates the topic has been updated.
-- `ThreadActivity/AddMember`: System message that indicates one or more members have been added to the chat thread.
+- `ChatMessageType.PARTICIPANT_ADDED`: System message that indicates one or more participants have been added to the chat thread.
-- `ThreadActivity/DeleteMember`: System message that indicates a member has been removed from the chat thread.
+- `ChatMessageType.PARTICIPANT_REMOVED`: System message that indicates a participant has been removed from the chat thread.
For more details, see [Message Types](../../../concepts/chat/concepts.md#message-types).
-## Add a user as member to the chat thread
+## Update topic of a chat thread
+You can update the topic of a chat thread by using the `update_topic` method.
+
+```python
+topic = "updated thread topic"
+chat_thread_client.update_topic(topic=topic)
+```
+
+## Update a message
+You can update the content of an existing message, identified by its `message_id`, by using the `update_message` method.
+
+- Use `message_id` to specify the message_id
+- Use `content` to set the new content of the message
+
+```python
+message_id='id'
+content = 'updated content'
+chat_thread_client.update_message(message_id=message_id, content=content)
+```
+
+## Send read receipt for a message
+The `send_read_receipt` method can be used to post a read receipt event to a thread, on behalf of a user.
+
+- Use `message_id` to specify the ID of the latest message read by the current user.
+
+```python
+message_id='id'
+chat_thread_client.send_read_receipt(message_id=message_id)
+```
+
+## List read receipts for a chat thread
+The `list_read_receipts` method can be used to get read receipts for a thread.
+
+- Use `results_per_page` to specify the maximum number of chat message read receipts to be returned per page.
+- Use `skip` to skip chat message read receipts up to a specified position in the response.
+
+```python
+read_receipts = chat_thread_client.list_read_receipts(results_per_page=2, skip=0)
+
+for page in read_receipts.by_page():
+ for item in page:
+ print(item)
+```
+
+## Send typing notification
+The `send_typing_notification` method can be used to post a typing event to a thread, on behalf of a user.
+
+```python
+chat_thread_client.send_typing_notification()
+```
+
+## Delete message
+The `delete_message` method can be used to delete a message, identified by a `message_id`.
+
+- Use `message_id` to specify the message_id
+
+```python
+message_id='id'
+chat_thread_client.delete_message(message_id=message_id)
+```
-Once a chat thread is created, you can then add and remove users from it. By adding users, you give them access to be able to send messages to the chat thread, and add/remove other members. Before calling `add_members` method, ensure that you have acquired a new access token and identity for that user. The user will need that access token in order to initialize their chat client.
+## Add a user as participant to the chat thread
-Use `add_members` method to add thread members to the thread identified by threadId.
+Once a chat thread is created, you can then add and remove users from it. By adding users, you give them access to send messages to the chat thread and to add or remove other participants. Before calling the `add_participant` method, ensure that you have acquired a new access token and identity for that user. The user will need that access token in order to initialize their chat client.
-- Use `members` to list the members to be added to the chat thread;-- `user`, required, is the `CommunicationUser` you created by `CommunicationIdentityClient` at [create a user](../../access-tokens.md#create-an-identity)-- `display_name`, optional, is the display name for the thread member.-- `share_history_time`, optional, is the time from which the chat history is shared with the member. To share history since the inception of the chat thread, set this property to any date equal to, or less than the thread creation time. To share no history previous to when the member was added, set it to the current date. To share partial history, set it to an intermediary date.
+Use the `add_participant` method to add a thread participant to the thread identified by `thread_id`.
+
+- Use `thread_participant` to specify the participant to be added to the chat thread;
+- `user`, required, is the `CommunicationUserIdentifier` you created by `CommunicationIdentityClient` at [create a user](../../access-tokens.md#create-an-identity)
+- `display_name`, optional, is the display name for the thread participant.
+- `share_history_time`, optional, is the time from which the chat history is shared with the participant. To share history since the inception of the chat thread, set this property to any date equal to, or less than the thread creation time. To share no history previous to when the participant was added, set it to the current date. To share partial history, set it to an intermediary date.
```python new_user = identity_client.create_user()
-from azure.communication.chat import ChatThreadMember
+from azure.communication.chat import ChatThreadParticipant
from datetime import datetime
-member = ChatThreadMember(
+
+new_chat_thread_participant = ChatThreadParticipant(
user=new_user, display_name='name', share_history_time=datetime.utcnow())
-thread_members = [member]
-chat_thread_client.add_members(thread_members)
+
+chat_thread_client.add_participant(new_chat_thread_participant)
+```
+
+Multiple users can also be added to the chat thread by using the `add_participants` method, provided a new access token and identity are available for all users.
+
+```python
+from azure.communication.chat import ChatThreadParticipant
+from datetime import datetime
+
+new_chat_thread_participant = ChatThreadParticipant(
+ user=self.new_user,
+ display_name='name',
+ share_history_time=datetime.utcnow())
+thread_participants = [new_chat_thread_participant] # instead of passing a single participant, you can pass a list of participants
+chat_thread_client.add_participants(thread_participants)
``` + ## Remove user from a chat thread
-Similar to adding a member, you can also remove members from a thread. In order to remove, you'll need to track the IDs of the members you have added.
+Similar to adding a participant, you can also remove participants from a thread. To remove a participant, you'll need to track the IDs of the participants you have added.
-Use `remove_member` method to remove thread member from the thread identified by threadId.
-- `user` is the CommunicationUser to be removed from the thread.
+Use the `remove_participant` method to remove a thread participant from the thread identified by `thread_id`.
+- `user` is the `CommunicationUserIdentifier` to be removed from the thread.
```python
-chat_thread_client.remove_member(user)
+chat_thread_client.remove_participant(user)
``` ## Run the code
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/includes/chat-swift https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/includes/chat-swift.md
let endpoint = "<ACS_RESOURCE_ENDPOINT>"
Replace `<ACS_RESOURCE_ENDPOINT>` with the endpoint of your ACS Resource. Replace `<ACCESS_TOKEN>` with a valid ACS access token.
+This quickstart does not cover creating a service tier to manage tokens for your chat application, although it is recommended. See [Chat Architecture](../../../concepts/chat/concepts.md) for more detail.
+
+Learn more about [User Access Tokens](../../access-tokens.md).
+ ## Object model The following classes and interfaces handle some of the major features of the Azure Communication Services Chat client library for iOS.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/telephony-sms/logic-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/logic-app.md
Although this quickstart focuses on using the connector to respond to a trigger,
- An SMS enabled phone number, or [get a phone number](./get-phone-number.md). + ## Add an SMS action To add the **Send SMS** action as a new step in your workflow by using the Azure Communication Services SMS connector, follow these steps in the [Azure portal](https://portal.azure.com) with your logic app workflow open in the Logic App Designer:
In this quickstart, you learned how to send SMS messages by using Azure Logic Ap
For more information about SMS in Azure Communication Services, see these articles: - [SMS concepts](../../concepts/telephony-sms/concepts.md)-- [Plan your telephony and SMS solution](../../concepts/telephony-sms/plan-solution.md)
+- [Phone number types](../../concepts/telephony-sms/plan-solution.md)
- [SMS SDK](../../concepts/telephony-sms/sdk-features.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/telephony-sms/send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/send.md
zone_pivot_groups: acs-js-csharp-java-python
# Quickstart: Send an SMS message [!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]+ > [!IMPORTANT] > SMS messages can be sent to and received from United States phone numbers. Phone numbers located in other geographies are not yet supported by Communication Services SMS.
-> For more information, see **[Plan your telephony and SMS solution](../../concepts/telephony-sms/plan-solution.md)**.
+> For more information, see **[Phone number types](../../concepts/telephony-sms/plan-solution.md)**.
::: zone pivot="programming-language-csharp" [!INCLUDE [Send SMS with .NET client library](./includes/send-sms-net.md)]
In this quickstart, you learned how to send SMS messages using Azure Communicati
> [Subscribe to SMS Events](./handle-sms-events.md) > [!div class="nextstepaction"]
-> [Plan your PSTN Solution](../../concepts/telephony-sms/plan-solution.md)
+> [Phone number types](../../concepts/telephony-sms/plan-solution.md)
> [!div class="nextstepaction"] > [Learn more about SMS](../../concepts/telephony-sms/concepts.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/tutorials/includes/trusted-service-js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/includes/trusted-service-js.md
We'll now proceed to install Azure Communication Services libraries.
### Install communication services libraries
-We'll use the `Administration` library to generate `User Access Tokens`.
+We'll use the `Identity` library to generate `User Access Tokens`.
Use the `npm install` command to install the Azure Communication Services Administration client library for JavaScript. ```console
-npm install @azure/communication-administration --save
+npm install @azure/communication-identity --save
```
The `--save` option lists the library as a dependency in your **package.json** f
At the top of the `index.js` file, import the interface for the `CommunicationIdentityClient` ```javascript
-const { CommunicationIdentityClient } = require('@azure/communication-administration');
+const { CommunicationIdentityClient } = require('@azure/communication-identity');
``` ## Access token generation
Open the URL on your browser and you should see a response body with the Communi
To deploy your Azure Function, you can follow [step by step instructions](../../../azure-functions/create-first-function-vs-code-csharp.md?pivots=programming-language-javascript#sign-in-to-azure)
-Generally, you will need to:
+In summary, you will need to:
1. Sign in to Azure from Visual Studio 2. Publish your project to your Azure account. Here you will need to choose an existing subscription. 3. Create a new Azure Function resource using the Visual Studio wizard or use an existing resource. For a new resource, you will need to configure it to your desired region, runtime and unique identifier.
confidential-computing https://docs.microsoft.com/en-us/azure/confidential-computing/confidential-containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-containers.md
- Last updated 9/22/2020
+ Last updated 2/11/2020
Confidential containers help protect:
- hardware-based assurances - allow running existing apps - create hardware root of trust
+- remove host administrator, Kubernetes administrator, hypervisor from the trust boundary
A hardware based Trusted Execution Environment (TEE) is an important component that is used to provide strong assurances through hardware and software measurements from trusted computing base (TCB) components. Verifications of these measurements help with validation of the expected computation and verify any tampering of the container apps.
-Confidential containers support custom applications developed with **Python, Java, Node JS, etc. or packaged software applications like NGINX, Redis Cache, MemCache**, and so on, to be run unmodified on AKS.
+Confidential containers support custom applications developed with **Python, Java, Node JS, etc. or packaged container applications like NGINX, Redis Cache, MemCache**, and so on, to be run unmodified on AKS with confidential computing nodes.
-Confidential containers are the fastest path to container confidentiality, including the container protection through encryption, enabling lift and shift with no/minimal changes to your business logic.
-
-![The confidential container conversion](./media/confidential-containers/conf-con-deploy-process.jpg)
+Confidential containers are the fastest path to container confidentiality: they require only repackaging of existing Docker container applications and no application code changes. Confidential containers are Docker container applications that are packaged to run in a TEE. Some confidential container enablers also offer container encryption, which can help protect the container code during storage and transport and while mounted in the host. Container encryption lets you go further and protect the code or model packaged in the container, with its decryption key attached to the TEE.
+Below is the process for confidential containers from development to deployment:
+![The confidential container how to process.](./media/confidential-containers/how-to-confidential-container.png)
## Confidential Container Enablers
+Running an existing Docker container unmodified requires SGX software so that the application's calls can use the special CPU instruction set that's made available to lower the attack surface area and take no dependency on the guest OS. Once wrapped with the SGX runtime software, the containers automatically launch in protected enclaves, removing the guest OS, host OS, and hypervisor from the trust boundary. This isolated execution in a node (virtual machine), with in-memory data encryption backed by the hardware, reduces the overall attack surface and the vulnerabilities associated with operating system or hypervisor layers.
-To run an existing docker container, applications on confidential computing nodes require an abstraction layer or SGX software to leverage the special CPU instruction set. The SGX software also enables your sensitive applications code to be protected and create a direct execution to CPU to remove the Guest OS, Host OS, or Hypervisor. This protection reduces the overall surface attack areas and vulnerabilities with operating system or hypervisor layers.
-
-Confidential containers are fully supported on AKS and enabled through Azure Partners and Open Source Software (OSS) projects. Developers can choose software providers based on the features, integration to Azure services and tooling support.
+Confidential containers are fully supported on AKS and enabled through Azure Partners and Open-Source Software (OSS) projects. Developers can choose software providers based on the features, integration to Azure services and tooling support.
## Partner Enablers > [!NOTE] > The below solutions are offered through Azure Partners and may incur licensing fees. Please verify the partner software terms independently.
+### Anjuna
+
+[Anjuna](https://www.anjuna.io/) provides SGX platform software that enables you to run unmodified containers on AKS. Learn more on the functionality and check out the sample applications [here](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp).
+
+Get started with a sample Redis Cache and Python Custom Application [here](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp)
+
+![Anjuna Process](./media/confidential-containers/anjuna-process-flow.png)
+ ### Fortanix
-[Fortanix](https://www.fortanix.com/) offers developers a choice of a portal and CLI-based experience to bring their containerized applications and covert them to SGX capable confidential containers without any need to modify or recompile the application. Fortanix provides the flexibility to run and manage the broadest set of applications, including existing applications, new enclave-native applications, and pre-packaged applications. Users can start with [Enclave Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/em/) to create confidential containers by following the [Quick Start](https://support.fortanix.com/hc/en-us/articles/360049658291-Fortanix-Confidential-Container-on-Azure-Kubernetes-Service) guide for Azure Kubernetes Service.
+[Fortanix](https://www.fortanix.com/) offers developers a choice of a portal and a CLI-based experience to bring their containerized applications and convert them to SGX-capable confidential containers without any need to modify or recompile the application. Fortanix provides the flexibility to run and manage the broadest set of applications, including existing applications, new enclave-native applications, and pre-packaged applications. Users can start with the [Confidential Computing Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/em/) to create confidential containers by following the [Quick Start](https://support.fortanix.com/hc/en-us/articles/360049658291-Fortanix-Confidential-Container-on-Azure-Kubernetes-Service) guide for Azure Kubernetes Service.
![Fortanix Deployment Process](./media/confidential-containers/fortanix-confidential-containers-flow.png)
Confidential containers are fully supported on AKS and enabled through Azure Par
[SCONE](https://scontain.com/https://docsupdatetracker.net/index.html?lang=en) supports security policies that can generate certificates, keys, and secrets, and ensures they are only visible to attested services of an application. In this way, the services of an application automatically attest each other via TLS - without the need to modify the applications nor TLS. This is explained with the help of a simple Flask application here: https://sconedocs.github.io/flask_demo/
-SCONE can convert existing most binaries into applications that run inside of enclaves without needing to change the application or to recompile that application. SCONE also protects interpreted languages like Python by encrypting both data files as well as Python code files. With the help of a SCONE security policy, one can protect the encrypted files against unauthorized accesses, modifications, and rollbacks. How to "sconify" an existing Python application is explained [here](https://sconedocs.github.io/sconify_image/)
+SCONE can convert most existing binaries into applications that run inside enclaves without needing to change the application or to recompile it. SCONE also protects interpreted languages like Python by **encrypting** both data files as well as Python code files. With the help of a SCONE security policy, one can protect the encrypted files against unauthorized accesses, modifications, and rollbacks. How to "sconify" an existing Python application is explained [here](https://sconedocs.github.io/sconify_image/).
![Scontain Flow](./media/confidential-containers/scone-workflow.png)
-Scone deployments on confidential computing nodes with AKS are fully supported and integrated. Get started with a sample application here https://sconedocs.github.io/aks/
-
-### Anjuna
-
-[Anjuna](https://www.anjuna.io/) provides SGX platform software that enables you to run unmodified containers on AKS. Learn more on the functionality and check out the sample applications [here](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp).
-
-Get started with a sample Redis Cache and Python Custom Application [here](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp)
+Scone deployments on confidential computing nodes with AKS are fully supported and integrated with other Azure services. Get started with a sample application here https://sconedocs.github.io/aks/
-![Anjuna Process](./media/confidential-containers/anjuna-process-flow.png)
## OSS Enablers > [!NOTE]
-> The below solutions are offered through Open Source Projects and are not directly affiliated with Azure Confidential Computing (ACC) or Microsoft.
+> The below solutions are offered through Open-Source Projects and are not directly affiliated with Azure Confidential Computing (ACC) or Microsoft.
### Graphene
Occlum supports AKS deployments. Follow the deployment instructions with various
## Confidential Containers Demo
-View the confidential healthcare demo with confidential containers. Sample is available [here](https://github.com/Azure-Samples/confidential-container-samples/blob/main/confidential-healthcare-scone-confinf-onnx/README.md).
+View the confidential healthcare demo with confidential containers. Sample is available [here](https://docs.microsoft.com/azure/architecture/example-scenario/confidential/healthcare-inference).
> [!VIDEO https://www.youtube.com/embed/PiYCQmOh0EI] ## Get In Touch
-Have questions with your implementation or want to become an enabler? Send an email to acconaks@microsoft.com
+Have questions about your implementation or want to become an enabler? Send an email to the product team at **acconaks@microsoft.com**.
## Reference Links
confidential-computing https://docs.microsoft.com/en-us/azure/confidential-computing/confidential-nodes-aks-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-faq.md
description: Find answers to some of the common questions about Azure Kubernetes
Previously updated : 09/22/2020 Last updated : 02/09/2020 # Frequently asked questions about Confidential Computing Nodes on Azure Kubernetes Service (AKS)
-This article addresses frequent questions about Intel SGX based confidential computing nodes on Azure Kubernetes Service (AKS). If you have any further questions, email acconaks@microsoft.com.
+This article addresses frequent questions about Intel SGX based confidential computing nodes on Azure Kubernetes Service (AKS). If you have any further questions, email **acconaks@microsoft.com**.
-## What Service Level Agreement (SLA) and Azure Support is provided during the preview?
-
-SLA is not provided during the product preview as defined [here](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). However, product support is provided through Azure support.
-
-## What is attestation and how can we do attestation of apps running in enclaves?
+<a name="1"></a>
+### Are the confidential computing nodes on AKS in GA? ###
+Yes.
+<a name="2"></a>
+### What is attestation and how can we do attestation of apps running in enclaves? ###
Attestation is the process of demonstrating and validating that a piece of software has been properly instantiated on the specific hardware platform. It also ensures its evidence is verifiable to provide assurances that it is running in a secure platform and has not been tampered with. [Read more](attestation.md) on how attestation is done for enclave apps.
-## Can I enable Accelerated Networking with Azure confidential computing AKS Clusters?
-
-No. Accelerated Networking isn't supported on confidential computing nodes on AKS. Ensure that Accelerated Networking is disabled in your deployment.
-
-## Can I bring my existing containerized applications and run it on AKS with Azure Confidential Computing?
+<a name="3"></a>
+### Can I enable Accelerated Networking with Azure confidential computing AKS Clusters? ###
+No. Accelerated Networking is not supported on the DCsv2 virtual machines that make up confidential computing nodes on AKS.
+<a name="4"></a>
+### Can I bring my existing containerized applications and run them on AKS with Azure Confidential Computing? ###
Yes, review the [confidential containers page](confidential-containers.md) for more details on platform enablers.
-## What Intel SGX Driver version is installed in the AKS Image?
-
+<a name="5"></a>
+### What Intel SGX driver version is on the AKS image for confidential nodes? ###
Currently, Azure confidential computing DCSv2 VMs are installed with Intel SGX DCAP 1.33.
-## Can I open an Azure Support ticket if I run into issues?
-
-Yes. Azure support is provided during the preview. There is no SLA attached because the product is in preview stage.
-
-## Can I inject post install scripts/customize drivers to the Nodes provisioned by AKS?
-
-No. [AKS-Engine based confidential computing nodes](https://github.com/Azure/aks-engine/blob/master/docs/topics/sgx.md) support confidential computing nodes that allow custom installations.
-
-## Should I be using a Docker base image to get started on enclave applications?
+<a name="6"></a>
+### Can I inject post install scripts/customize drivers to the Nodes provisioned by AKS? ###
+No. [AKS-Engine based confidential computing nodes](https://github.com/Azure/aks-engine/blob/master/docs/topics/sgx.md) allow custom installations and give you full control over your Kubernetes control plane.
+<a name="7"></a>
+### Should I be using a Docker base image to get started on enclave applications? ###
Various enablers (ISVs and OSS projects) provide ways to enable confidential containers. Review the [confidential containers page](confidential-containers.md) for more details and individual references to implementations.
-## Can I run ACC Nodes with other standard AKS SKUs (build a heterogenous node pool cluster)?
+<a name="8"></a>
+### Can I run ACC Nodes with other standard AKS SKUs (build a heterogenous node pool cluster)? ###
Yes, you can run different node pools within the same AKS cluster, including ACC nodes. To target your enclave applications on a specific node pool, consider adding node selectors or applying EPC limits, as shown in the sketch below. For more details, see the quickstart on confidential nodes [here](confidential-nodes-aks-get-started.md).
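As an illustration only, the following is a minimal pod sketch combining both approaches; the pool name `confcompool1`, the `agentpool` node label, and the sample image are assumptions based on the quickstart referenced above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sgx-sample
spec:
  containers:
  - name: sgx-sample
    image: oeciteam/sgx-test:1.0            # sample enclave image; replace with your own
    resources:
      limits:
        kubernetes.azure.com/sgx_epc_mem_in_MiB: 10   # EPC limit steers scheduling to SGX-capable nodes
  nodeSelector:
    agentpool: confcompool1                 # assumed name of the confidential computing node pool
```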
-## Can I run Windows Nodes and windows containers with ACC?
-
-Not at this time. Contact us if you have Windows nodes or container needs.
+<a name="9"></a>
+### Can I run Windows Nodes and windows containers with ACC? ###
+Not at this time. Contact the product team at *acconaks@microsoft.com* if you have Windows node or container needs.
-## What if my container size is more than available EPC memory?
+<a name="10"></a>
+### What if my container size is more than available EPC memory? ###
+The EPC memory applies to the part of your application that is programmed to execute in the enclave. The total size of your container is not the right way to compare it with the maximum available EPC memory. In fact, DCsv2 machines with SGX allow a maximum VM memory of 32 GB, which the untrusted part of your application can utilize. However, if your container consumes more than the available EPC memory, the performance of the portion of the program running in the enclave might be impacted.
-The EPC memory applies to the part of your application that is programmed to execute in the enclave. The total size of your container is not the right way to compare it with the max available EPC memory. In fact, DCSv2 machines with SGX, allow maximum VM memory of 32 GB where your untrusted part of the application would utilize. However, if your container consumes more than available EPC memory, then the performance of the portion of the program running in the enclave might be impacted.
-
-To better manage the EPC memory in the worker nodes, consider the EPC memory-based limits management through Kubernetes. Follow the example below as reference
+To better manage the EPC memory in the worker nodes, consider EPC memory-based limits management through Kubernetes. Use the example below as a reference.
```yaml
apiVersion: batch/v1
spec:
  image: oeciteam/sgx-test:1.0
  resources:
    limits:
- kubernetes.azure.com/sgx_epc_mem_in_MiB: 10 # This limit will automatically place the job into confidential computing node. Alternatively you can target deployment to nodepools
+ kubernetes.azure.com/sgx_epc_mem_in_MiB: 10 # This limit will automatically place the job into confidential computing node. Alternatively, you can target deployment to nodepools
  restartPolicy: Never
  backoffLimit: 0
```
+<a name="11"></a>
+### What happens if my enclave consumes more than maximum available EPC memory? ###
-## What happens if my enclave consumes more than maximum available EPC memory?
-
-Total available EPC memory is shared between the enclave applications in the same VMs or worker nodes. If your application uses EPC memory more than available then the application performance might be impacted. For this reason, we recommend you setting toleration per application in your deployment yaml file to better manage the available EPC memory per worker nodes as shown in the examples above. Alternatively, you can always choose to move up on the worker node pool VM sizes or add more nodes.
+Total available EPC memory is shared between the enclave applications in the same VMs or worker nodes. If your application uses more EPC memory than is available, then the application performance might be impacted. For this reason, we recommend setting tolerations per application in your deployment yaml file to better manage the available EPC memory per worker node, as shown in the examples above. Alternatively, you can always choose to move up on the worker node pool VM sizes or add more nodes.
-## Why can't I do forks () and exec to run multiple processes in my enclave application?
+<a name="12"></a>
+### Why can't I do fork() and exec to run multiple processes in my enclave application? ###
-Currently, Azure confidential computing DCsv2 SKU VMs support a single address space for the program executing in an enclave. Single process is a current limitation designed around high security. However, confidential container enablers may have alternate implementations to overcome this limitation.
+Currently, Azure confidential computing DCsv2 SKU VMs support a single address space for the program executing in an enclave. Single process is a current limitation designed around high security. However, confidential container enablers may have alternate implementations to overcome this limitation.
+<a name="13"></a>
+### Do you automatically install any additional daemonset to expose the SGX drivers? ###
-## Do you automatically install any additional daemonsets to expose the SGX drivers?
+Yes. The name of the daemonset is sgx-device-plugin. Read more on its purpose [here](confidential-nodes-aks-overview.md).
-Yes. The name of the daemonset is sgx-device-plugin and sgx-quote-helper. Read more on their respective purposes [here](confidential-nodes-aks-overview.md).
-
-## What is the VM SKU I should be choosing for confidential computing nodes?
+<a name="14"></a>
+### What is the VM SKU I should be choosing for confidential computing nodes? ###
DCSv2 SKUs. The [DCSv2 SKUs](../virtual-machines/dcv2-series.md) are available in the [supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines&regions=all).
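As a quick aid (a sketch only; the region name is a placeholder), you can list the DC-series sizes offered in a region with the Azure CLI:

```azurecli-interactive
# List DC-series VM sizes available in a region (replace eastus with your region)
az vm list-skus --location eastus --size Standard_DC --all --output table
```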
-## Can I still schedule and run non-enclave containers on confidential computing nodes?
+<a name="15"></a>
+### Can I still schedule and run non-enclave containers on confidential computing nodes? ###
Yes. The VMs also have a regular memory that can run standard container workloads. Consider the security and threat model of your applications before you decide on the deployment models.
+<a name="16"></a>
-## Can I provision AKS with DCSv2 Node Pools through Azure portal?
+### Can I provision AKS with DCSv2 Node Pools through Azure portal? ###
Yes. Azure CLI could also be used as an alternative as documented [here](confidential-nodes-aks-get-started.md).
-## What Ubuntu version and VM generation is supported?
-
+<a name="17"></a>
+### What Ubuntu version and VM generation is supported? ###
18.04 on Gen 2.
-## Can we change the current Intel SGX DCAP diver version on AKS?
+<a name="18"></a>
+### Can we change the current Intel SGX DCAP driver version on AKS? ###
No. To perform any custom installations, we recommend you choose [AKS-Engine Confidential Computing Worker Nodes](https://github.com/Azure/aks-engine/blob/master/docs/topics/sgx.md) deployments.
-## What version of Kubernetes do you support and recommend?
+<a name="19"></a>
-We support and recommend Kubernetes version 1.16 and above
+### What version of Kubernetes do you support and recommend? ###
-## What are the known current limitation or technical limitations of the product in preview?
+We support and recommend Kubernetes version 1.16 and above.
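To check which Kubernetes versions AKS currently offers in your region (a sketch; the region name is a placeholder):

```azurecli-interactive
# List the Kubernetes versions available for AKS in a region
az aks get-versions --location eastus --output table
```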
+
+<a name="20"></a>
+### What are the known current limitations of the product? ###
- Supports Ubuntu 18.04 Gen 2 VM Nodes only
- No Windows Nodes Support or Windows Containers Support
- EPC Memory based Horizontal Pod Autoscaling is not supported. CPU and regular memory-based scaling is supported.
-- Dev Spaces on AKS for confidential apps is not currently supported
+- Dev Spaces on AKS for confidential apps are not currently supported
+
+<a name="21"></a>
+### Will only signed and trusted images be loaded in the enclave for confidential computing? ###
+Not natively during enclave initialization, but yes, the signature can be validated through the attestation process. See [here](../attestation/basic-concepts.md#benefits-of-policy-signing).
-## Next Steps
-Review the [confidential containers page](confidential-containers.md) for more details around confidential containers.
+### Next Steps
+Review the [confidential containers page](confidential-containers.md) for more details around confidential containers.
confidential-computing https://docs.microsoft.com/en-us/azure/confidential-computing/confidential-nodes-aks-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
description: Learn to create an AKS cluster with confidential nodes and deploy a
Previously updated : 2/5/2020 Last updated : 2/8/2020
-# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes (DCsv2) using Azure CLI (preview)
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes (DCsv2) using Azure CLI
-This quickstart is intended for developers or cluster operators who want to quickly create an AKS cluster and deploy an application to monitor applications using the managed Kubernetes service in Azure.
+This quickstart is intended for developers or cluster operators who want to quickly create an AKS cluster and deploy an application to monitor applications using the managed Kubernetes service in Azure. You can also provision the cluster and add confidential computing nodes from the Azure portal.
## Overview
-In this quickstart, you'll learn how to deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes using the Azure CLI and run an hello world application in an enclave. AKS is a managed Kubernetes service that lets you quickly deploy and manage clusters. Read more about AKS [here](../aks/intro-kubernetes.md).
+In this quickstart, you'll learn how to deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes using the Azure CLI and run a simple hello world application in an enclave. AKS is a managed Kubernetes service that lets you quickly deploy and manage clusters. Read more about AKS [here](../aks/intro-kubernetes.md).
> [!NOTE]
> Confidential computing DCsv2 VMs leverage specialized hardware that is subject to higher pricing and region availability. For more information, see the virtual machines page for [available SKUs and supported regions](virtual-machine-solutions.md).
-> DCsv2 leverages Generation 2 Virtual Machines on Azure, this Generation 2 VM is a preview feature with AKS.
-
-### Deployment pre-requisites
-This deployment instructions assumes:
-
-1. Have an active Azure Subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin
-1. Have the Azure CLI version 2.0.64 or later installed and configured on your deployment machine (Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](../container-registry/container-registry-get-started-azure-cli.md)
-1. [aks-preview extension](https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview) minimum version 0.4.62
-1. VM Cores Quota availability. Have a minimum of six **DC<x>s-v2** cores available in your subscription for use. By default, the VM cores quota for the confidential computing per Azure subscription 8 cores. If you plan to provision a cluster that requires more than 8 cores, follow [these](../azure-portal/supportability/per-vm-quota-requests.md) instructions to raise a quota increase ticket
-
### Confidential computing node features (DC<x>s-v2)
1. Linux Worker Nodes supporting Linux Containers Only
This deployment instructions assumes:
1. Intel SGX-based CPU with Encrypted Page Cache Memory (EPC). Read more [here](./faq.md)
1. Supporting Kubernetes version 1.16+
1. Intel SGX DCAP Driver Pre-installed on the AKS Nodes. Read more [here](./faq.md)
-1. Supporting CLI based deployed during preview with portal based provisioning post GA.
+## Deployment prerequisites
+The deployment tutorial requires the following:
+
+1. An active Azure Subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin
+1. Azure CLI version 2.0.64 or later installed and configured on your deployment machine (Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](../container-registry/container-registry-get-started-azure-cli.md))
+1. Azure [aks-preview extension](https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview) minimum version 0.5.0
+1. Minimum of six **DC<x>s-v2** cores available in your subscription for use. By default, the VM cores quota for confidential computing per Azure subscription is 8 cores. If you plan to provision a cluster that requires more than 8 cores, follow [these](../azure-portal/supportability/per-vm-quota-requests.md) instructions to raise a quota increase ticket.
+
+## CLI-based preparation steps (required for add-on in preview - optional but recommended)
+Follow the instructions below to enable the confidential computing add-on on AKS.
-## Installing the CLI pre-requisites
+### Step 1: Installing the CLI prerequisites
-To install the aks-preview 0.4.62 extension or later, use the following Azure CLI commands:
+To install the aks-preview 0.5.0 extension or later, use the following Azure CLI commands:
```azurecli-interactive
az extension add --name aks-preview
```
To update the aks-preview CLI extension, use the following Azure CLI commands:
```azurecli-interactive
az extension update --name aks-preview
```
-### Generation 2 VM's feature registration on Azure
-Registering the Gen2VMPreview on the Azure Subscription. This feature allows you to provision Generation 2 Virtual Machines as AKS Node Pools :
-
-```azurecli-interactive
-az feature register --name Gen2VMPreview --namespace Microsoft.ContainerService
-```
-It might take several minutes for the status to show as Registered. You can check the registration status by using the 'az feature list' command. This feature registration is done only once per subscription. If this was registered previously you can skip the above step:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/Gen2VMPreview')].{Name:name,State:properties.state}"
-```
-When the status shows as registered, refresh the registration of the Microsoft.ContainerService resource provider by using the 'az provider register' command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
-
-### Azure Confidential Computing feature registration on Azure (optional but recommended)
-Registering the AKS-ConfidentialComputingAddon on the Azure Subscription. This feature will add two daemonsets as discussed in details [here](./confidential-nodes-aks-overview.md#aks-provided-daemon-sets-addon):
+### Step 2: Azure Confidential Computing addon feature registration on Azure
+Register the AKS-ConfidentialComputingAddon feature on the Azure subscription. This feature will add the SGX device plugin daemonset, as discussed in detail [here](./confidential-nodes-aks-overview.md#confidential-computing-add-on-for-aks):
1. SGX Device Driver Plugin
-2. SGX Attestation Quote Helper
-
```azurecli-interactive
az feature register --name AKS-ConfidentialComputingAddon --namespace Microsoft.ContainerService
```
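If needed, you can confirm the feature shows as Registered before refreshing the resource provider; this is the same `az feature list` check used elsewhere in these instructions:

```azurecli-interactive
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ConfidentialComputingAddon')].{Name:name,State:properties.state}"
```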
When the status shows as registered, refresh the registration of the Microsoft.C
```azurecli-interactive az provider register --namespace Microsoft.ContainerService ```
+## Creating a new AKS cluster with confidential computing nodes and add-on
+Follow the instructions below to add confidential computing-capable nodes with the add-on.
-## Creating an AKS cluster
+### Step 1: Creating an AKS cluster with system node pool
If you already have an AKS cluster that meets the above requirements, [skip to the existing cluster section](#existing-cluster) to add a new confidential computing node pool.
az group create --name myResourceGroup --location westus2
Now create an AKS cluster using the az aks create command.
```azurecli-interactive
-# Create a new AKS cluster with system node pool with Confidential Computing addon enabled
+# Create a new AKS cluster with system node pool with Confidential Computing addon enabled
az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addons confcom
```
-The above creates a new AKS cluster with system node pool. Now proceed adding a user node of Confidential Computing Nodepool type on AKS (DCsv2)
+The above creates a new AKS cluster with a system node pool and the add-on enabled. Now proceed to add a user node pool of the confidential computing node pool type on AKS (DCsv2).
-The below example adds an user nodepool with 3 nodes of `Standard_DC2s_v2` size. You can choose other supported list of DCsv2 SKUs and regions from [here](../virtual-machines/dcv2-series.md):
+### Step 2: Adding confidential computing node pool to AKS cluster
+
+Run the command below to add a user node pool of `Standard_DC2s_v2` size with 3 nodes. You can choose from the other supported DCsv2 SKUs and regions listed [here](../virtual-machines/dcv2-series.md):
```azurecli-interactive
-az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-vm-size Standard_DC2s_v2 --aks-custom-headers usegen2vm=true
+az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-vm-size Standard_DC2s_v2
```
-The above command should add a new node pool with **DC<x>s-v2** automatically run two daemonsets on this node pool - ([SGX Device Plugin](confidential-nodes-aks-overview.md#sgx-plugin) & [SGX Quote Helper](confidential-nodes-aks-overview.md#sgx-quote))
-
+When the above command completes, a new node pool with **DC<x>s-v2** should be visible with the confidential computing add-on daemonset ([SGX Device Plugin](confidential-nodes-aks-overview.md#sgx-plugin)).
+
+### Step 3: Verify the node pool and add-on
Get the credentials for your AKS cluster using the az aks get-credentials command:
```azurecli-interactive
$ kubectl get pods --all-namespaces
output kube-system sgx-device-plugin-xxxx 1/1 Running
-kube-system sgx-quote-helper-xxxx 1/1 Running
```
If the output matches the above, then your AKS cluster is now ready to run confidential applications.
-Go to [Hello World from Enclave](#hello-world) deployment section to test an app in an enclave. Or, follow the below instructions to add additional node pools on AKS (AKS supports mixing SGX node pools and non-SGX node pools)
+Go to the [Hello World from Enclave](#hello-world) deployment section to test an app in an enclave. Or follow the instructions below to add additional node pools on AKS (AKS supports mixing SGX node pools and non-SGX node pools).
## Adding confidential computing node pool to existing AKS cluster<a id="existing-cluster"></a>
-This section assumes you have an AKS cluster running already that meets the criteria listed in the pre-requisites section.
+This section assumes you have an AKS cluster running already that meets the criteria listed in the prerequisites section (applies to add-on).
-First, lets add the feature to Azure Subscription
+### Step 1: Enabling the confidential computing AKS add-on on the existing cluster
-```azurecli-interactive
-az feature register --name AKS-ConfidentialComputingAddon --namespace Microsoft.ContainerService
-```
-It might take several minutes for the status to show as Registered. You can check the registration status by using the 'az feature list' command. This feature registration is done only once per subscription. If this was registered previously you can skip the above step:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ConfidentialComputingAddon')].{Name:name,State:properties.state}"
-```
-When the status shows as registered, refresh the registration of the Microsoft.ContainerService resource provider by using the 'az provider register' command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
-
-Next, lets enable the confidential computing-related AKS add-ons on the existing cluster:
+Run the command below to enable the confidential computing add-on:
```azurecli-interactive
az aks enable-addons --addons confcom --name MyManagedCluster --resource-group MyResourceGroup
```
-Now add a **DC<x>s-v2** user node pool to the cluster
+### Step 2: Add **DC<x>s-v2** user node pool to the cluster
> [!NOTE]
> To use the confidential computing capability, your existing AKS cluster needs to have at minimum one **DC<x>s-v2** VM SKU based node pool. Learn more about confidential computing DCsv2 VM SKUs here: [available SKUs and supported regions](virtual-machine-solutions.md).
```azurecli-interactive
-az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-count 1 --node-vm-size Standard_DC4s_v2 --aks-custom-headers usegen2vm=true
+az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-count 1 --node-vm-size Standard_DC4s_v2
output node pool added
Verify
az aks nodepool list --cluster-name myAKSCluster --resource-group myResourceGroup
```
+The above command should list the recently added node pool with the name confcompool1.
+
+### Step 3: Verify that daemonsets are running on confidential node pools
+
+Log in to your existing AKS cluster to perform the verification below.
```console
kubectl get nodes
$ kubectl get pods --all-namespaces
output (you may also see other daemonsets along SGX daemonsets as below) kube-system sgx-device-plugin-xxxx 1/1 Running
-kube-system sgx-quote-helper-xxxx 1/1 Running
```
-If the output matches to the above, then your AKS cluster is now ready to run confidential applications.
+If the output matches the above, then your AKS cluster is now ready to run confidential applications. Follow the test application deployment below.
## Hello World from isolated enclave application <a id="hello-world"></a>
Create a file named *hello-world-enclave.yaml* and paste the following YAML manifest. This Open Enclave based sample application code can be found in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). The deployment below assumes you have deployed the addon "confcom".
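For reference, a minimal sketch of what such a manifest can look like, assembled from the EPC limit example earlier in this article; the job name, image tag, and EPC value are assumptions based on the Open Enclave helloworld sample:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sgx-test
spec:
  template:
    spec:
      containers:
      - name: sgx-test
        image: oeciteam/sgx-test:1.0          # Open Enclave helloworld sample image (assumed tag)
        resources:
          limits:
            kubernetes.azure.com/sgx_epc_mem_in_MiB: 10   # schedules the job onto confidential computing nodes
      restartPolicy: Never
  backoffLimit: 0
```

Apply it with `kubectl apply -f hello-world-enclave.yaml` and check the job's pod logs for the enclave output.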
confidential-computing https://docs.microsoft.com/en-us/azure/confidential-computing/confidential-nodes-aks-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-overview.md
- Title: Confidential computing nodes on Azure Kubernetes Service (AKS) public preview
+ Title: Confidential computing nodes on Azure Kubernetes Service (AKS)
description: Confidential computing nodes on AKS
- Last updated 9/22/2020
+ Last updated 2/08/2021
+
-# Confidential computing nodes on Azure Kubernetes Service (public preview)
+# Confidential computing nodes on Azure Kubernetes Service
-[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. The underlying infrastructures protect this data from other applications, administrators, and cloud providers with a hardware backed trusted execution container environments.
+[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. The underlying confidential computing infrastructure protects this data from other applications, administrators, and cloud providers with a hardware-backed trusted execution container environment. Adding confidential computing nodes allows you to target container applications to run in an isolated, hardware-protected, and attestable environment.
## Overview
-Azure Kubernetes Service (AKS) supports adding [DCsv2 confidential computing nodes](confidential-computing-enclaves.md) powered by Intel SGX. These nodes run can run sensitive workloads within a hardware-based trusted execution environment (TEE) by allowing user-level code to allocate private regions of memory. These private memory regions are called enclaves. Enclaves are designed protect code and data from processes running at higher privilege. The SGX execution model removes the intermediate layers of Guest OS, Host OS and Hypervisor. The *hardware based per container isolated execution* model allows applications to directly execute with the CPU, while keeping the special block of memory encrypted. Confidential computing nodes help with the overall security posture of container applications on AKS and a great addition to defense-in-depth container strategy.
+Azure Kubernetes Service (AKS) supports adding [DCsv2 confidential computing nodes](confidential-computing-enclaves.md) powered by Intel SGX. These nodes allow you to run sensitive workloads within a hardware-based trusted execution environment (TEE). TEEs allow user-level code from containers to allocate private regions of memory to execute the code directly with the CPU. These private memory regions that execute directly with the CPU are called enclaves. Enclaves help protect data confidentiality, data integrity, and code integrity from other processes running on the same nodes. The Intel SGX execution model also removes the intermediate layers of the guest OS, host OS, and hypervisor, thus reducing the attack surface area. The *hardware based per container isolated execution* model in a node allows applications to directly execute with the CPU, while keeping the special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero-trust security planning and defense-in-depth container strategy.
![sgx node overview](./media/confidential-nodes-aks-overview/sgxaksnode.jpg)
## AKS Confidential Nodes Features
-- Hardware based and process level container isolation through SGX trusted execution environment (TEE)
+- Hardware based and process level container isolation through Intel SGX trusted execution environment (TEE)
- Heterogenous node pool clusters (mix confidential and non-confidential node pools)
-- Encrypted Page Cache (EPC) memory-based pod scheduling
-- SGX DCAP driver pre-installed
-- Intel FSGS Patch pre-installed
-- Supports CPU consumption based horizontal pod autoscaling and cluster autoscaling
-- Out of proc attestation helper through AKS daemonset
+- Encrypted Page Cache (EPC) memory-based pod scheduling (requires add-on)
+- Intel SGX DCAP driver pre-installed
+- CPU consumption based horizontal pod autoscaling and cluster autoscaling
- Linux Containers support through Ubuntu 18.04 Gen 2 VM worker nodes
-## AKS Provided Daemon Sets (addon)
+## Confidential Computing add-on for AKS
+The add-on feature enables extra capability on AKS when running confidential computing node pools on the cluster. This add-on enables the features below.
-#### SGX Device Plugin <a id="sgx-plugin"></a>
+#### Azure Device Plugin for Intel SGX <a id="sgx-plugin"></a>
-The SGX Device Plugin implements the Kubernetes device plugin interface for EPC memory. Effectively, this plugin makes EPC memory an additional resource type in Kubernetes. Users can specify limits on this resource just as other resources. Apart from the scheduling function, the device plugin helps assign SGX device driver permissions to confidential workload containers. A sample implementation of the EPC memory-based deployment (`kubernetes.azure.com/sgx_epc_mem_in_MiB`) sample is [here](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/helloworld/helm/templates/helloworld.yaml)
+The device plugin implements the Kubernetes device plugin interface for Encrypted Page Cache (EPC) memory and exposes the device drivers from the nodes. Effectively, this plugin makes EPC memory another resource type in Kubernetes. Users can specify limits on this resource just as they would for other resources. Apart from the scheduling function, the device plugin helps assign Intel SGX device driver permissions to confidential workload containers. With this plugin, developers can avoid mounting the Intel SGX driver volumes in the deployment files. A sample implementation of the EPC memory-based deployment (`kubernetes.azure.com/sgx_epc_mem_in_MiB`) is [here](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/helloworld/helm/templates/helloworld.yaml).
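As a quick check (a sketch; the node name is a placeholder), you can confirm the EPC resource the plugin exposes on a confidential node:

```console
# Look for kubernetes.azure.com/sgx_epc_mem_in_MiB under Capacity and Allocatable
kubectl describe node <your-sgx-node-name>
```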
-#### SGX Quote Helper Service <a id="sgx-quote"></a>
-Enclave applications that perform remote attestation need to generate a QUOTE. The QUOTE provides cryptographic proof of the identity and the state of the application, and the environment the enclave is running in. QUOTE generation relies on certain trusted software components from Intel, which are part of the SGX Platform Software Components (PSW/DCAP). This PSW is packaged as a daemon set that runs per node. It can leveraged when requesting attestation QUOTE from enclave apps. Using the AKS provided service will help better maintain the compatibility between the PSW and other SW components in the host. [Read more](confidential-nodes-out-of-proc-attestation.md) on its usage and feature details.
-
-## Programming & application models
+## Programming models
### Confidential Containers
-[Confidential containers](confidential-containers.md) run existing programs and most **common programming language** runtime (Python, Node, Java etc.), along with their existing library dependencies, without any source-code modification or recompilation. This model is the fastest model to confidentiality enabled through Open Source Projects & Azure Partners. The container images that are made ready created to run in the secure enclaves are termed as confidential containers.
+[Confidential containers](confidential-containers.md) help you run existing, unmodified container applications for most **common programming language** runtimes (Python, Node, Java, and so on) confidentially. This packaging model does not need any source-code modifications or recompilation, and it is the fastest path to confidentiality: you package your standard Docker containers with open-source projects or Azure partner solutions. In this packaging and execution model, all parts of the container application are loaded in the trusted boundary (enclave). This model works well for off-the-shelf container applications available in the market or custom apps currently running on general purpose nodes.
### Enclave aware containers
-
-AKS supports applications that are programmed to run on confidential nodes and utilize **special instruction set** made available through the SDKs and frameworks. This application model provides most control to your applications with a lowest Trusted Computing Base (TCB). [Read more](enclave-aware-containers.md) on enclave aware containers.
+Confidential computing nodes on AKS also support containers that are programmed to run in an enclave to utilize the **special instruction set** available from the CPU. This programming model allows tighter control of your execution flow and requires the use of special SDKs and frameworks. It provides the most control of the application flow with the lowest Trusted Computing Base (TCB). Enclave aware container development involves untrusted and trusted parts of the container application, thus allowing you to manage the regular memory and the Encrypted Page Cache (EPC) memory where the enclave executes. [Read more](enclave-aware-containers.md) on enclave aware containers.
## Next Steps
AKS supports applications that are programmed to run on confidential nodes and u
[DCsv2 SKU List](../virtual-machines/dcv2-series.md)
+[Defense-in-depth with confidential containers webinar session](https://www.youtube.com/watch?reload=9&v=FYZxtHI_Or0&feature=youtu.be)
+ <!-- LINKS - external --> [Azure Attestation]: ../attestation/index.yml <!-- LINKS - internal -->
-[DC Virtual Machine]: /confidential-computing/virtual-machine-solutions
+[DC Virtual Machine]: /confidential-computing/virtual-machine-solutions
confidential-computing https://docs.microsoft.com/en-us/azure/confidential-computing/confidential-nodes-out-of-proc-attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-out-of-proc-attestation.md
- Title: Out-of-proc attestation support with Intel SGX quote helper DaemonSet on Azure
+ Title: Out-of-proc attestation support with Intel SGX quote helper Daemonset on Azure (preview)
description: DaemonSet for generating the quote outside of the SGX application process. This article explains how the out-of-proc attestation facility is provided for confidential workloads running inside a container.
- Last updated 9/22/2020
+ Last updated 2/12/2021
-# Platform Software Management with SGX quote helper daemon set
+# Platform Software Management with SGX quote helper daemon set (preview)
[Enclave applications](confidential-computing-enclaves.md) that perform remote attestation require a generated QUOTE. This QUOTE provides cryptographic proof of the identity and the state of the application, as well as the environment the enclave is running in. The generation of the QUOTE requires trusted software components that are part of Intel's Platform Software Components (PSW).
SGX applications built using Open Enclave SDK by default use in-proc attestation
Utilizing this feature is **highly recommended**, as it enhances uptime for your enclave apps during Intel Platform updates or DCAP driver updates.
+To enable this feature on an AKS cluster, add the --enable-sgxquotehelper flag to the CLI command when enabling the confidential computing add-on. Detailed CLI instructions are [here](confidential-nodes-aks-get-started.md):
+
+```azurecli-interactive
+# Create a new AKS cluster with system node pool with Confidential Computing addon enabled and SGX Quote Helper
+az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addons confcom --enable-sgxquotehelper
+```
+
## Why and What are the benefits of out-of-proc?
- No updates are required for quote generation components of PSW for each containerized application:
spec:
<!-- LINKS - internal -->
-[DC Virtual Machine]: /confidential-computing/virtual-machine-solutions
+[DC Virtual Machine]: /confidential-computing/virtual-machine-solutions
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-concepts.md
Title: About repositories & images
-description: Introduction to key concepts of Azure container registries, repositories, and container images.
+ Title: About registries, repositories, images, and artifacts
+description: Introduction to key concepts of Azure container registries, repositories, container images, and other artifacts.
Previously updated : 06/16/2020 Last updated : 01/29/2021
-# About registries, repositories, and images
+# About registries, repositories, and artifacts
This article introduces the key concepts of container registries, repositories, and container images and related artifacts.
-## Registry
-
-A container *registry* is a service that stores and distributes container images. Docker Hub is a public container registry that supports the open source community and serves as a general catalog of images. Azure Container Registry provides users with direct control of their images, with integrated authentication, [geo-replication](container-registry-geo-replication.md) supporting global distribution and reliability for network-close deployments, [virtual network and firewall configuration](container-registry-vnet.md), [tag locking](container-registry-image-lock.md), and many other enhanced features.
-
-In addition to Docker container images, Azure Container Registry supports related [content artifacts](container-registry-image-formats.md) including Open Container Initiative (OCI) image formats.
-
-## Content addressable elements of an artifact
-
-The address of an artifact in an Azure container registry includes the following elements.
-
-`[loginUrl]/[repository:][tag]`
-
-* **loginUrl** - The fully qualified name of the registry host. The registry host in an Azure container registry is in the format *myregistry*.azurecr.io (all lowercase). You must specify the loginUrl when using Docker or other client tools to pull or push artifacts to an Azure container registry.
-* **repository** - Name of a logical grouping of one or more related images or artifacts - for example, the images for an application or a base operating system. May include *namespace* path.
-* **tag** - Identifier of a specific version of an image or artifact stored in a repository.
-
-For example, the full name of an image in an Azure container registry might look like:
-*myregistry.azurecr.io/marketing/campaign10-18/email-sender:v2*
+## Registry
-See the following sections for details about these elements.
+A container *registry* is a service that stores and distributes container images and related artifacts. Docker Hub is an example of a public container registry that serves as a general catalog of Docker container images. Azure Container Registry provides users with direct control of their container content, with integrated authentication, [geo-replication](container-registry-geo-replication.md) supporting global distribution and reliability for network-close deployments, [virtual network configuration with Private Link](container-registry-private-link.md), [tag locking](container-registry-image-lock.md), and many other enhanced features.
-## Repository name
+In addition to Docker-compatible container images, Azure Container Registry supports a range of [content artifacts](container-registry-image-formats.md) including Helm charts and Open Container Initiative (OCI) image formats.
-A *repository* is a collection of container images or other artifacts with the same name, but different tags. For example, the following three images are in the "acr-helloworld" repository:
+## Repository
+A *repository* is a collection of container images or other artifacts in a registry that have the same name, but different tags. For example, the following three images are in the `acr-helloworld` repository:
- *acr-helloworld:latest*
- *acr-helloworld:v1*
Repository names can only include lowercase alphanumeric characters, periods, da
For complete repository naming rules, see the [Open Container Initiative Distribution Specification](https://github.com/docker/distribution/blob/master/docs/spec/api.md#overview).
-## Image
+## Artifact
A container image or other artifact within a registry is associated with one or more tags, has one or more layers, and is identified by a manifest. Understanding how these components relate to each other can help you manage your registry effectively.
For tag naming rules, see the [Docker documentation](https://docs.docker.com/eng
### Layer
-Container images are made up of one or more *layers*, each corresponding to a line in the Dockerfile that defines the image. Images in a registry share common layers, increasing storage efficiency. For example, several images in different repositories might share the same Alpine Linux base layer, but only one copy of that layer is stored in the registry.
+Container images and artifacts are made up of one or more *layers*. Different artifact types define layers differently. For example, in a Docker container image, each layer corresponds to a line in the Dockerfile that defines the image:
+
-Layer sharing also optimizes layer distribution to nodes with multiple images sharing common layers. For example, if an image already on a node includes the Alpine Linux layer as its base, the subsequent pull of a different image referencing the same layer doesn't transfer the layer to the node. Instead, it references the layer already existing on the node.
+Artifacts in a registry share common layers, increasing storage efficiency. For example, several images in different repositories might have a common ASP.NET Core base layer, but only one copy of that layer is stored in the registry. Layer sharing also optimizes layer distribution to nodes, with multiple artifacts sharing common layers. If an image already on a node includes the ASP.NET Core layer as its base, the subsequent pull of a different image referencing the same layer doesn't transfer the layer to the node. Instead, it references the layer already existing on the node.
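For a quick look at the layers that make up an image pulled locally (a sketch; the image name reuses the sample tag shown later in this article):

```console
# Show the layers of a locally pulled image
docker history myregistry.azurecr.io/samples/myimage:20210106
```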
To provide secure isolation and protection from potential layer manipulation, layers are not shared across registries.
### Manifest
-Each container image or artifact pushed to a container registry is associated with a *manifest*. The manifest, generated by the registry when the image is pushed, uniquely identifies the image and specifies its layers.
+Each container image or artifact pushed to a container registry is associated with a *manifest*. The manifest, generated by the registry when the content is pushed, uniquely identifies the artifacts and specifies the layers. You can list the manifests for a repository with the Azure CLI command [az acr repository show-manifests][az-acr-repository-show-manifests].
A basic manifest for a Linux `hello-world` image looks similar to the following:
az acr repository show-manifests --name myregistry --repository acr-helloworld
### Manifest digest
-Manifests are identified by a unique SHA-256 hash, or *manifest digest*. Each image or artifact--whether tagged or not--is identified by its digest. The digest value is unique even if the image's layer data is identical to that of another image. This mechanism is what allows you to repeatedly push identically tagged images to a registry. For example, you can repeatedly push `myimage:latest` to your registry without error because each image is identified by its unique digest.
+Manifests are identified by a unique SHA-256 hash, or *manifest digest*. Each image or artifact--whether tagged or not--is identified by its digest. The digest value is unique even if the artifact's layer data is identical to that of another artifact. This mechanism is what allows you to repeatedly push identically tagged images to a registry. For example, you can repeatedly push `myimage:latest` to your registry without error because each image is identified by its unique digest.
-You can pull an image from a registry by specifying its digest in the pull operation. Some systems may be configured to pull by digest because it guarantees the image version being pulled, even if an identically tagged image is subsequently pushed to the registry.
+You can pull an artifact from a registry by specifying its digest in the pull operation. Some systems may be configured to pull by digest because it guarantees the image version being pulled, even if an identically tagged image is pushed later to the registry.
-For example, pull an image from the "acr-helloworld" repository by manifest digest:
+> [!IMPORTANT]
+> If you repeatedly push modified artifacts with identical tags, you might create "orphans"--artifacts that are untagged, but still consume space in your registry. Untagged images are not shown in the Azure CLI or in the Azure portal when you list or view images by tag. However, their layers still exist and consume space in your registry. Deleting an untagged image frees registry space when the manifest is the only one, or the last one, pointing to a particular layer. For information about freeing space used by untagged images, see [Delete container images in Azure Container Registry](container-registry-delete.md).
+
+## Addressing an artifact
+
+To address a registry artifact for push and pull operations with Docker or other client tools, combine the fully qualified registry name, repository name (including namespace path if applicable), and an artifact tag or manifest digest. See previous sections for explanations of these terms.
+
+ **Address by tag**: `[loginServerUrl]/[repository][:tag]`
+
+ **Address by digest**: `[loginServerUrl]/[repository@sha256][:digest]`
+
+When using Docker or other client tools to pull or push artifacts to an Azure container registry, use the registry's fully qualified URL, also called the *login server* name. In the Azure cloud, the fully qualified URL of an Azure container registry is in the format `myregistry.azurecr.io` (all lowercase).
+
+> [!NOTE]
+> * You can't specify a port number in the registry login server URL, such as `myregistry.azurecr.io:443`.
+> * The tag `latest` is used by default if you don't provide a tag in your command.
+
+
+### Push by tag
+
+Examples:
+
+ `docker push myregistry.azurecr.io/samples/myimage:20210106`
+
+ `docker push myregistry.azurecr.io/marketing/email-sender`
+
+### Pull by tag
+
+Example:
+
+ `docker pull myregistry.azurecr.io/marketing/campaign10-18/email-sender:v2`
+
+### Pull by manifest digest
+
+
+Example:
+
+ `docker pull myregistry.azurecr.io/acr-helloworld@sha256:0a2e01852872580b2c2fea9380ff8d7b637d3928783c55beb3f21a6e58d5d108`
-`docker pull myregistry.azurecr.io/acr-helloworld@sha256:0a2e01852872580b2c2fea9380ff8d7b637d3928783c55beb3f21a6e58d5d108`
-> [!IMPORTANT]
-> If you repeatedly push modified images with identical tags, you might create orphaned images--images that are untagged, but still consume space in your registry. Untagged images are not shown in the Azure CLI or in the Azure portal when you list or view images by tag. However, their layers still exist and consume space in your registry. Deleting an untagged image frees registry space when the manifest is the only one, or the last one, pointing to a particular layer. For information about freeing space used by untagged images, see [Delete container images in Azure Container Registry](container-registry-delete.md).
## Next steps
-Learn more about [image storage](container-registry-storage.md) and [supported content formats](container-registry-image-formats.md) in Azure Container Registry.
+Learn more about [registry storage](container-registry-storage.md) and [supported content formats](container-registry-image-formats.md) in Azure Container Registry.
+
+Learn how to [push and pull images](container-registry-get-started-docker-cli.md) from Azure Container Registry.
<!-- LINKS - Internal --> [az-acr-repository-show-manifests]: /cli/azure/acr/repository#az-acr-repository-show-manifests
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-permissions.md
Following permissions are required to perform the different activities pertainin
|Permission |Impact |Minimum scope |Maximum scope |
|---------|---------|---------|---------|
|`Microsoft.Resources/deployments/validate/action`, `Microsoft.Resources/deployments/write` | These permissions are required for the ARM template deployment to create the restored account. See the sample permission [RestorableAction](#custom-restorable-action) below for how to set this role. | Not applicable | Not applicable |
-|Microsoft.DocumentDB/databaseAccounts/write | This permission is required to restore an account into a resource group | Resource group under which the restored account is created. | Subscription under which the restored account is created |
+|`Microsoft.DocumentDB/databaseAccounts/write` | This permission is required to restore an account into a resource group | Resource group under which the restored account is created. | Subscription under which the restored account is created |
|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action` |This permission is required on the source restorable database account scope to allow restore actions to be performed on it. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>* | The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read` |This permission is required on the source restorable database account scope to list the database accounts that can be restored. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>*| The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` | This permission is required on the source restorable account scope to allow reading of restorable resources such as list of databases and containers for a restorable account. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>*| The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
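As an illustration only, a custom role definition bundling the actions from this table might look like the sketch below; the role name and assignable scope are placeholders, and the article's own [RestorableAction](#custom-restorable-action) sample remains the reference:

```json
{
  "Name": "Cosmos DB Restore Operator (example)",
  "IsCustom": true,
  "Description": "Illustrative role combining the restore-related actions listed above.",
  "Actions": [
    "Microsoft.Resources/deployments/validate/action",
    "Microsoft.Resources/deployments/write",
    "Microsoft.DocumentDB/databaseAccounts/write",
    "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action",
    "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read",
    "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
```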
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/create-mongodb-flask https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-flask.md
- Title: Build a Python Flask web app using Azure Cosmos DB's API for MongoDB
-description: Presents a Python Flask code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.
----- Previously updated : 12/26/2018---
-# Quickstart: Build a Python app using Azure Cosmos DB's API for MongoDB
-
-> [!div class="op_single_selector"]
-> * [.NET](create-mongodb-dotnet.md)
-> * [Java](create-mongodb-java.md)
-> * [Node.js](create-mongodb-nodejs.md)
-> * [Python](create-mongodb-flask.md)
-> * [Xamarin](create-mongodb-xamarin.md)
-> * [Golang](create-mongodb-go.md)
->
-
-In this quickstart, you use an Azure Cosmos DB for Mongo DB API account or the Azure Cosmos DB Emulator to run a Python Flask To-Do web app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. Or, you can use the [Azure Cosmos DB Emulator](local-emulator.md). -- [Python 3.6+](https://www.python.org/downloads/)-- [Visual Studio Code](https://code.visualstudio.com/Download) with the [Python Extension](https://marketplace.visualstudio.com/items?itemName=donjayamanne.python).-
-## Clone the sample application
-
-Now let's clone a Flask-MongoDB app from GitHub, set the connection string, and run it. You see how easy it is to work with data programmatically.
-
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
-
- ```bash
- md "C:\git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/CosmosDB-Flask-Mongo-Sample.git
- ```
-3. Run the following command to install the python modules.
-
- ```bash
- pip install -r .\requirements.txt
- ```
-4. Open the folder in Visual Studio Code.
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the web app](#run-the-web-app).
-
-The following snippets are all taken from the *app.py* file and uses the connection string for the local Azure Cosmos DB Emulator. The password needs to be split up as seen below to accommodate for the forward slashes that cannot be parsed otherwise.
-
-* Initialize the MongoDB client, retrieve the database, and authenticate.
-
- ```python
- client = MongoClient("mongodb://127.0.0.1:10250/?ssl=true") #host uri
- db = client.test #Select the database
- db.authenticate(name="localhost",password='C2y6yDjf5' + r'/R' + '+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw' + r'/Jw==')
- ```
-
-* Retrieve the collection or create it if it does not already exist.
-
- ```python
- todos = db.todo #Select the collection
- ```
-
-* Create the app
-
- ```Python
- app = Flask(__name__)
- title = "TODO with Flask"
- heading = "ToDo Reminder"
- ```
-
-## Run the web app
-
-1. Make sure the Azure Cosmos DB Emulator is running.
-
-2. Open a terminal window and `cd` to the directory that the app is saved in.
-
-3. Then set the environment variable for the Flask app with `set FLASK_APP=app.py`, `$env:FLASK_APP = app.py` for PowerShell editors, or `export FLASK_APP=app.py` if you are using a Mac.
-
-4. Run the app with `flask run` and browse to *http:\//127.0.0.1:5000/*.
-
-5. Add and remove tasks and see them added and changed in the collection.
-
-## Create a database account
-
-If you want to test the code against a live Azure Cosmos DB account, go to the Azure portal to create an account.
--
-## Update your connection string
-
-To test the code against the live Azure Cosmos DB account, get your connection string information. Then copy it into the app.
-
-1. In your Azure Cosmos DB account in the Azure portal, in the left navigation select **Connection String**, and then select **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the username, connection string, and password.
-
-2. Open the *app.py* file in the root directory.
-
-3. Copy your **username** value from the portal (using the copy button) and make it the value of the **name** in your *app.py* file.
-
-4. Then copy your **connection string** value from the portal and make it the value of the **MongoClient** in your *app.py* file.
-
-5. Finally copy your **password** value from the portal and make it the value of the **password** in your *app.py* file.
-
-You've now updated your app with all the info it needs to communicate with Azure Cosmos DB. You can run it the same way as before.
-
-## Deploy to Azure
-
-To deploy this app, you can create a new web app in Azure and enable continuous deployment with a fork of this GitHub repo. Follow this [tutorial](../app-service/deploy-continuous-deployment.md) to set up continuous deployment with GitHub in Azure.
-
-When you deploy to Azure, remove your hard-coded application keys from the code and make sure the section below is not commented out:
-
-```python
- client = MongoClient(os.getenv("MONGOURL"))
- db = client.test #Select the database
- db.authenticate(name=os.getenv("MONGO_USERNAME"),password=os.getenv("MONGO_PASSWORD"))
-```
-
-You then need to add your `MONGOURL`, `MONGO_PASSWORD`, and `MONGO_USERNAME` values to the application settings. You can follow this [tutorial](../app-service/configure-common.md#configure-app-settings) to learn more about application settings in Azure Web Apps.
-
-If you don't want to create a fork of this repo, you can also select the **Deploy to Azure** button below. You should then go into Azure and set up the application settings with your Azure Cosmos DB account info.
-
-<a href="https://deploy.azure.com/?repository=https://github.com/heatherbshapiro/To-Do-ListFlask-MongoDB-Example" target="_blank">
-<img src="https://azuredeploy.net/deploybutton.png" alt="Click to Deploy to Azure">
-</a>
-
-> [!NOTE]
-> If you plan to store your code in GitHub or another source control option, be sure to remove your connection strings from the code. You can set them in the web app's application settings instead.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB for MongoDB API account, and use the Azure Cosmos DB Emulator to run a Python Flask To-Do web app cloned from GitHub. You can now import additional data to your Azure Cosmos DB account.
-
-> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/troubleshoot-sdk-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-sdk-availability.md
If you **don't set a preferred region**, the SDK client defaults to the primary
| Multiple write regions | Primary region | Primary region | > [!NOTE]
-> Primary region refers to the first region in the [Azure Cosmos account region list](distribute-data-globally.md)
+> Primary region refers to the first region in the [Azure Cosmos account region list](distribute-data-globally.md).
+> If the values specified as regional preferences don't match any existing Azure regions, they're ignored. If they match an existing region but the account isn't replicated to that region, the client connects to the next preferred region that matches, or to the primary region.
+
+> [!WARNING]
+> Disabling endpoint rediscovery (that is, setting it to false) on the client configuration disables all the failover and availability logic described in this document.
+> This configuration is controlled by the following parameters in each Azure Cosmos SDK:
+>
+> * The [ConnectionPolicy.EnableEndpointRediscovery](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.enableendpointdiscovery) property in .NET V2 SDK.
+> * The [CosmosClientBuilder.endpointDiscoveryEnabled](/java/api/com.azure.cosmos.cosmosclientbuilder.endpointdiscoveryenabled) method in Java V4 SDK.
+> * The [CosmosClient.enable_endpoint_discovery](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) parameter in Python SDK.
+> * The [CosmosClientOptions.ConnectionPolicy.enableEndpointDiscovery](/javascript/api/@azure/cosmos/connectionpolicy#enableEndpointDiscovery) parameter in JS SDK.
Under normal circumstances, the SDK client will connect to the preferred region (if a regional preference is set) or to the primary region (if no preference is set), and the operations will be limited to that region, unless any of the below scenarios occur.
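
As an illustration, here's a minimal sketch of how a regional preference can be set in the Python SDK. It assumes the `preferred_locations` keyword argument together with the `enable_endpoint_discovery` parameter noted above; the endpoint, key, and region names are placeholders.

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint and key; use your own account values.
client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential="<your-key>",
    # Ordered regional preference; the SDK falls back through this list.
    preferred_locations=["West US", "East US"],
    # Keep endpoint discovery enabled (the default) so the failover logic described here still applies.
    enable_endpoint_discovery=True,
)
```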
For a comprehensive detail on SLA guarantees during these events, see the [SLAs
## <a id="remove-region"></a>Removing a region from the account
-When you remove a region from an Azure Cosmos account, any SDK client that actively uses the account will detect the region removal through a backend response code. The client then marks the regional endpoint as unavailable. The client retries the current operation and all the future operations are permanently routed to the next region in order of preference.
+When you remove a region from an Azure Cosmos account, any SDK client that actively uses the account will detect the region removal through a backend response code. The client then marks the regional endpoint as unavailable. The client retries the current operation, and all future operations are permanently routed to the next region in order of preference. If the preference list has only one entry (or is empty) but the account has other regions available, the client routes to the next region in the account list.
## Adding a region to an account
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/understand-cost-mgt-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/understand-cost-mgt-data.md
Title: Understand Azure Cost Management data
description: This article helps you better understand data that's included in Azure Cost Management and how frequently it's processed, collected, shown, and closed. Previously updated : 01/06/2021 Last updated : 01/17/2021
The following information shows the currently supported [Microsoft Azure offers]
| **Category** | **Offer name** | **Quota ID** | **Offer number** | **Data available from** | | | | | | | | **Azure Government** | Azure Government Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-USGOV-0017P | May 2014<sup>1</sup> |
+| **Azure Government** | Azure Government Pay-As-You-Go | PayAsYouGo_2014-09-01 | MS-AZR-USGOV-0003P | October 2, 2018<sup>2</sup> |
| **Enterprise Agreement (EA)** | Enterprise Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0148P | May 2014<sup>1</sup> | | **Enterprise Agreement (EA)** | Microsoft Azure Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-0017P | May 2014<sup>1</sup> | | **Microsoft Customer Agreement** | Microsoft Azure Plan | EnterpriseAgreement_2014-09-01 | N/A | March 2019<sup>3</sup> |
The following information shows the currently supported [Microsoft Azure offers]
_<sup>**1**</sup> For data before May 2014, visit the [Azure Enterprise portal](https://ea.azure.com)._
-_<sup>**2**</sup> For data before October 2, 2018, visit the [Azure Account Center](https://account.azure.com/subscriptions)._
+_<sup>**2**</sup> For data before October 2, 2018, visit the [Azure Account Center](https://account.azure.com/subscriptions) for global accounts and the [Azure Account Center Gov](https://account.windowsazure.us/subscriptions) for Azure Government accounts._
_<sup>**3**</sup> Microsoft Customer Agreements started in March 2019 and don't have any historical data before this point._
The following offers aren't supported yet:
| Category | **Offer name** | **Quota ID** | **Offer number** | | | | | | | **Azure Germany** | Azure Germany Pay-As-You-Go | PayAsYouGo_2014-09-01 | MS-AZR-DE-0003P |
-| **Azure Government** | Azure Government Pay-As-You-Go | PayAsYouGo_2014-09-01 | MS-AZR-USGOV-0003P |
| **Cloud Solution Provider (CSP)** | Microsoft Azure | CSP_2015-05-01 | MS-AZR-0145P | | **Cloud Solution Provider (CSP)** | Azure Government CSP | CSP_2015-05-01 | MS-AZR-USGOV-0145P | | **Cloud Solution Provider (CSP)** | Azure Germany in CSP for Microsoft Cloud Germany | CSP_2015-05-01 | MS-AZR-DE-0145P |
Once cost and usage data becomes available in Cost Management + Billing, it will
### Rerated data
-Whether you use the Cost Management APIs, Power BI, or the Azure portal to retrieve data, expect the current billing period's charges to get rerated, and as a consequence change, until the invoice is closed.
+Whether you use the Cost Management APIs, Power BI, or the Azure portal to retrieve data, expect the current billing period's charges to get rerated. Charges might change until the invoice is closed.
## Cost rounding
Costs shown in Cost Management are rounded. Costs returned by the Query API aren
## Historical data might not match invoice
-Historical data for credit-based and pay-in-advance offers might not match your invoice. Some Azure pay-as-you-go, MSDN, and Visual Studio offers can have Azure credits and advanced payments applied to the invoice. However, the historical data shown in Cost Management is based on your estimated consumption charges only. Cost Management historical data doesn't include payments and credits. So, historical data shown for the following offers may not match exactly with your invoice.
+Historical data for credit-based and pay-in-advance offers might not match your invoice. Some Azure pay-as-you-go, MSDN, and Visual Studio offers can have Azure credits and advanced payments applied to the invoice. The historical data shown in Cost Management is based on your estimated consumption charges only. Cost Management historical data doesn't include payments and credits. Historical data shown for the following offers may not match exactly with your invoice.
- Azure for Students (MS-AZR-0170P) - Azure in Open (MS-AZR-0111P)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/compute-linked-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-linked-services.md
See following articles if you are new to Azure Batch service:
* [New-AzBatchAccount](/powershell/module/az.batch/New-azBatchAccount) cmdlet to create an Azure Batch account (or) [Azure portal](../batch/batch-account-create-portal.md) to create the Azure Batch account using Azure portal. See [Using PowerShell to manage Azure Batch Account](/archive/blogs/windowshpc/using-azure-powershell-to-manage-azure-batch-account) article for detailed instructions on using the cmdlet. * [New-AzBatchPool](/powershell/module/az.batch/New-AzBatchPool) cmdlet to create an Azure Batch pool.
+> [!IMPORTANT]
+> When you create a new Azure Batch pool, use 'VirtualMachineConfiguration' and NOT 'CloudServiceConfiguration'. For more details, see the [Azure Batch pool migration guidance](https://docs.microsoft.com/azure/batch/batch-pool-cloud-service-to-virtual-machine-configuration).
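+
+For illustration only, the following sketch shows how a pool with 'VirtualMachineConfiguration' might be created by using the azure-batch Python SDK. The account values, image reference, VM size, and client constructor details here are assumptions, not a prescribed configuration.
+
+```python
+from azure.batch import BatchServiceClient
+from azure.batch import models as batchmodels
+from azure.batch.batch_auth import SharedKeyCredentials
+
+# Placeholder account values.
+credentials = SharedKeyCredentials("<batch-account>", "<batch-key>")
+batch_client = BatchServiceClient(credentials, batch_url="https://<batch-account>.<region>.batch.azure.com")
+
+# Define the pool with VirtualMachineConfiguration (not CloudServiceConfiguration).
+pool = batchmodels.PoolAddParameter(
+    id="adf-custom-activity-pool",
+    vm_size="standard_d2_v3",
+    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
+        image_reference=batchmodels.ImageReference(
+            publisher="microsoftwindowsserver",
+            offer="windowsserver",
+            sku="2019-datacenter-core",
+            version="latest",
+        ),
+        node_agent_sku_id="batch.node.windows amd64",
+    ),
+    target_dedicated_nodes=1,
+)
+batch_client.pool.add(pool)
+```
+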
+ ### Example ```json
data-factory https://docs.microsoft.com/en-us/azure/data-factory/store-credentials-in-key-vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/store-credentials-in-key-vault.md
# Store credential in Azure Key Vault You can store credentials for data stores and computes in an [Azure Key Vault](../key-vault/general/overview.md). Azure Data Factory retrieves the credentials when executing an activity that uses the data store/compute.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/transform-data-using-dotnet-custom-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-using-dotnet-custom-activity.md
See following articles if you are new to Azure Batch service:
* [New-AzBatchAccount](/powershell/module/az.batch/New-azBatchAccount) cmdlet to create an Azure Batch account (or) [Azure portal](../batch/batch-account-create-portal.md) to create the Azure Batch account using Azure portal. See [Using PowerShell to manage Azure Batch Account](/archive/blogs/windowshpc/using-azure-powershell-to-manage-azure-batch-account) article for detailed instructions on using the cmdlet. * [New-AzBatchPool](/powershell/module/az.batch/New-AzBatchPool) cmdlet to create an Azure Batch pool.
+> [!IMPORTANT]
+> When you create a new Azure Batch pool, use 'VirtualMachineConfiguration' and NOT 'CloudServiceConfiguration'. For more details, see the [Azure Batch pool migration guidance](https://docs.microsoft.com/azure/batch/batch-pool-cloud-service-to-virtual-machine-configuration).
+ ## Azure Batch linked service The following JSON defines a sample Azure Batch linked service. For details, see [Compute environments supported by Azure Data Factory](compute-linked-services.md)
See the following articles that explain how to transform data in other ways:
* [Hadoop Streaming activity](transform-data-using-hadoop-streaming.md) * [Spark activity](transform-data-using-spark.md) * [Azure Machine Learning Studio (classic) Batch Execution activity](transform-data-using-machine-learning.md)
-* [Stored procedure activity](transform-data-using-stored-procedure.md)
+* [Stored procedure activity](transform-data-using-stored-procedure.md)
databox https://docs.microsoft.com/en-us/azure/databox/data-box-disk-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-faq.md
Previously updated : 12/17/2020 Last updated : 02/17/2021
A. For information on the price of Data Box Disks, go to [Pricing page](https://
A. To get Azure Data Box Disks, log into Azure portal and create a Data Box order for disks. Provide your contact information and notification details. Once you place an order, based on the availability, disks are shipped to you within 10 days. ### Q. What is the maximum amount of data I can transfer with Data Box Disks in one instance?
-A. For 5 disks each of 8 TB (7 TB usable capacity), the maximum usable capacity is 35 TB. Hence, you can transfer 35 TB of data in one instance. To transfer more data, you need to order more disks.
+A. For 5 disks, each with 8 TB capacity (7 TB of usable capacity), the maximum usable capacity is 35 TB. So you can transfer 35 TB of data in one instance. To transfer more data, you need to order more disks.
### Q. How can I check if Data Box Disks are available in my region? A. To see where the Data Box Disks are currently available, go to the [Region availability](data-box-disk-overview.md#region-availability). ### Q. Which regions can I store data in with Data Box Disks?
-A. Data Box Disk is supported for all regions within US, Canada, Australia, West Europe and North Europe, Korea and Japan. Only the Azure public cloud regions are supported. The Azure Government or other sovereign clouds are not supported.
-
-### Q. Which regions can I store data in with Data Box Disks?
-A. Data Box Disk is supported for all regions within US, Canada, Australia, West Europe and North Europe, Korea and Japan. Only the Azure public cloud regions are supported. The Azure Government or other sovereign clouds are not supported.
+A. Data Box Disk is supported for all regions within US, Canada, Australia, West Europe and North Europe, Korea, and Japan. Only the Azure public cloud regions are supported. The Azure Government or other sovereign clouds are not supported.
### Q. How can I import source data present at my location in one country/region to an Azure region in a different country? A. Data Box Disk supports data ingestion only within the same country/region as their destination and will not cross any international borders. The only exception is for orders in the European Union (EU), where Data Box Disks can ship to and from any EU country/region. For example, if you wanted to move data at your location in Canada to an Azure West US storage account, then you could achieve it in the following way:
-### Option 1:
+#### Option 1:
Ship a [supported disk](../import-export/storage-import-export-requirements.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#supported-disks) containing data using the [Azure Import/Export service](../import-export/storage-import-export-service.md) from the source location in Canada to the Azure West US datacenter.
-### Option 2:
+#### Option 2:
1. Order Data Box Disk in Canada, choosing a storage account in, say, Canada Central. The SSD disk(s) are shipped from the Azure datacenter in Canada Central to the shipping address (in Canada) provided during order creation.
Ship a [supported disk](../import-export/storage-import-export-requirements.md?t
3. You can then use a tool like AzCopy to copy the data to a storage account in West US. This step incurs [standard storage](https://azure.microsoft.com/pricing/details/storage/) and [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) that aren't included in the Data Box Disk billing.
+### Q. How can I recover my data if an entire region fails?
+
+A. In extreme circumstances where a region is lost because of a significant disaster, Microsoft may initiate a regional failover. No action on your part is required in this case. Your order will be fulfilled through the failover region if it is within the same country or commerce boundary. However, some Azure regions don't have a paired region in the same geographic or commerce boundary. If there is a disaster in any of those regions, you will need to create the Data Box order again from a different region that is available, and copy the data to Azure in the new region. For more information, see [Business continuity and disaster recovery (BCDR): Azure Paired Regions](../best-practices-availability-paired-regions.md).
+ ### Q. Whom should I contact if I encounter any issues with Data Box Disks? A. If you encounter any issues with Data Box Disks, [contact Microsoft Support](./data-box-disk-contact-microsoft-support.md).
A. Yes. Multiple Data Box Disks can be connected to the same host computer to tr
## Track status ### Q. How do I track the disks from when I placed the order to shipping the disks back?
-A. You can track the status of the Data Box Disk order in the Azure portal. When you create the order, you are also prompted to provide a notification email. If you have provided one, then you are notified via email on all status changes of the order. More information on how to [Configure notification emails](data-box-portal-ui-admin.md#edit-notification-details).
+A. You can track the status of the Data Box Disk order in the Azure portal. When you create the order, you are also prompted to provide a notification email. If you have provided one, then you're notified via email on all status changes of the order. More information on how to [Configure notification emails](data-box-portal-ui-admin.md#edit-notification-details).
### Q. How do I return the disks? A. Microsoft provides a shipping label with the Data Box Disks in the shipping package. Affix the label to the shipping box and drop off the sealed package at your shipping carrier location. If the label is damaged or lost, go to **Overview > Download shipping label** and download a new return shipping label.
A. Microsoft provides a shipping label with the Data Box Disks in the shipping
### Can I pick up my Data Box Disk order myself? Can I return the disks via a carrier that I choose? A. Yes. Microsoft also offers self-managed shipping in the US Gov region only. When placing the Data Box Disk order, you can choose the self-managed shipping option. To pick up your Data Box Disk order, take the following steps:
-1. After you have placed the order, the order is processed and the disks are prepared. You will be notified via an email that your order is ready for pickup.
+1. After you place the order, the order is processed and the disks are prepared. You will be notified via an email that your order is ready for pickup.
2. Once the order is ready for pickup, go to your order in the Azure portal and navigate to the **Overview** blade. 3. You will see a notification with a code in the Azure portal. Email the [Azure Data Box Operations team](mailto:adbops@microsoft.com) and provide them the code. The team will provide the location and schedule a pickup date and time. You must call the team within 5 business days after you receive the email notification.
A. To speed up the copy process:
- Use multiple streams of data copy. For instance, with `Robocopy`, use the multithreaded option. For more information on the exact command used, go to [Tutorial: Copy data to Azure Data Box Disk and verify](data-box-disk-deploy-copy-data.md#copy-data-to-disks). - Use multiple sessions. - Instead of copying over network share (where you could be limited by the network speeds) ensure that you have the data residing locally on the computer to which the disks are connected.-- Ensure that you are using USB 3.0 or later throughout the copy process. Download and use the [`USBView` tool](/windows-hardware/drivers/debugger/usbview) to identify the USB controllers and USB devices connected to the computer.-- Benchmark the performance of the computer used to copy the data. Download and use the [Bluestop `FIO` tool](https://ci.appveyor.com/project/axboe/fio) to benchmark the performance of the server hardware. Select the latest x86 or x64 build, select the **Artifacts** tab, and download the MSI.
+- Ensure that you're using USB 3.0 or later throughout the copy process. Download and use the [`USBView` tool](/windows-hardware/drivers/debugger/usbview) to identify the USB controllers and USB devices connected to the computer.
+- Benchmark the performance of the computer used to copy the data. Download and use the [`Bluestop` `FIO` tool](https://ci.appveyor.com/project/axboe/fio) to benchmark the performance of the server hardware. Select the latest x86 or x64 build, select the **Artifacts** tab, and download the MSI.
### Q. How to speed up the data if the source data has small files (KBs or few MBs)? A. To speed up the copy process:
A. No. Only one storage account, general or classic, is currently supported wit
A. The toolset available with the Data Box Disk contains three tools: - **Data Box Disk Unlock tool**: Use this tool to unlock the encrypted disks that are shipped from Microsoft. When unlocking the disks using the tool, you need to provide a passkey available in the Data Box Disk order in the Azure portal. - **Data Box Disk Validation tool**: Use this tool to validate the size, format, and blob names as per the Azure naming conventions. It also generates checksums for the copied data, which are then used to verify the data uploaded to Azure.
+ - **Data Box Disk Split Copy tool**: Use this tool when you are using multiple disks and have a large dataset that needs to be split and copied across all the disks. This tool is currently available for Windows. This tool is not supported with managed disks. This tool validates the data as it copies it, so you can skip the validation step when using this tool.
The toolset is available both for Windows and Linux. You can download the toolset here: - [Download Data Box Disk toolset for Windows](https://aka.ms/databoxdisktoolswin)
A. Once the order status for Data Copy shows as complete, you should be able to
### Q. Where is my data located in Azure after the upload? A. When you copy the data under *BlockBlob* and *PageBlob* folders on your disk, a container is created in the Azure storage account for each subfolder under the *BlockBlob* and *PageBlob* folder. If you copied the files under the *BlockBlob* and *PageBlob* folders directly, then the files are in a default container *$root* under the Azure Storage account. When you copy the data into a folder under *AzureFile* folder, a fileshare is created.
-### Q. I just noticed that I did not follow the Azure naming requirements for my containers. Will my data fail to upload to Azure?
+### Q. I just noticed that I didn't follow the Azure naming requirements for my containers. Will my data fail to upload to Azure?
A. Any uppercase letters in your container names are automatically converted to lowercase. If the names are not compliant in other ways - for example, they contain special characters or other languages - the upload will fail. For more information, go to [Azure naming conventions](data-box-disk-limits.md#azure-block-blob-page-blob-and-file-naming-conventions). ### Q. How do I verify the data I copied onto multiple Data Box Disks?
A. Yes. If you decide to validate your data (we recommend you do!), you need to
A. You can clone your previous order. Cloning creates the same order as before and allow you to edit order details only without the need to type in address, contact, and notification details. ### Q. I copied data to the ManagedDisk folder. I don't see any managed disks with the resource group specified for managed disks. Was my data uploaded to Azure? How can I locate it?
-A. Yes. Your data was uploaded to Azure, but if you don't see any managed disks with the specified resource groups, it is likely because the data was not valid. If page blobs, block blobs, Azure Files, or managed disks are not valid, they will go to the following folders:
+A. Yes. Your data was uploaded to Azure, but if you don't see any managed disks with the specified resource groups, it's likely because the data was not valid. If page blobs, block blobs, Azure Files, or managed disks are not valid, they will go to the following folders:
- Page blobs will go to a block blob container starting with *databoxdisk-invalid-pb-*. - Azure Files will go to a block blob container starting with *databoxdisk-invalid-af-*. - Managed disks will go to a block blob container starting with *databoxdisk-invalid-md-*.
databox https://docs.microsoft.com/en-us/azure/databox/data-box-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-faq.md
Previously updated : 12/17/2020 Last updated : 02/17/2021 # Azure Data Box: Frequently Asked Questions
For example, in the import scenario, if you had the source data in Canada that y
3. You can then use a tool like AzCopy to copy the data to a storage account in West US. This step incurs [standard storage](https://azure.microsoft.com/pricing/details/storage/) and [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) that aren't included in the Data Box billing.
+### Q. How can I recover my data if an entire region fails?
+
+A. In extreme circumstances where a region is lost because of a significant disaster, Microsoft may initiate a regional failover. No action on your part is required in this case. Your order will be fulfilled through the failover region if it is within the same country or commerce boundary. However, some Azure regions don't have a paired region in the same geographic or commerce boundary. If there is a disaster in any of those regions, you will need to create the Data Box order again from a different region that is available, and copy the data to Azure in the new region. For more information, see [Business continuity and disaster recovery (BCDR): Azure Paired Regions](../best-practices-availability-paired-regions.md).
+ ### Q. Who should I contact if I come across any issues with Data Box? A. If you come across any issues with Data Box, [contact Microsoft Support](data-box-disk-contact-microsoft-support.md).
A. To speed up the copy process:
- Use multiple streams of data copy. For instance, with `Robocopy`, use the multithreaded option. For more information on the exact command used, go to [Tutorial: Copy data to Azure Data Box and verify](data-box-deploy-copy-data.md). - Use multiple sessions. - Instead of copying over a network share (where network speeds can limit copy speed), store the data locally on the computer to which the Data Box is connected.-- Benchmark the performance of the computer used to copy the data. Download and use the [`Bluestop` FIO tool](https://ci.appveyor.com/project/axboe/fio) to benchmark the performance of the server hardware. Select the latest x86 or x64 build, select the **Artifacts** tab, and download the MSI.
+- Benchmark the performance of the computer used to copy the data. Download and use the [`Bluestop` `FIO` tool](https://ci.appveyor.com/project/axboe/fio) to benchmark the performance of the server hardware. Select the latest x86 or x64 build, select the **Artifacts** tab, and download the MSI.
<!--### Q. How to speed up the data copy if the source data has small files (KBs or few MBs)? A. To speed up the copy process:
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/agent-based-recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/agent-based-recommendations.md
+
+ Title: Agent based recommendations
+
+description: Learn about the concept of security recommendations and how they are used for Defender for IoT devices.
++
+documentationcenter: na
++
+editor: ''
+
+ms.devlang: na
+
+ na
+ Last updated : 02/16/2021+++
+# Security recommendations for IoT devices
+
+Defender for IoT scans your Azure resources and IoT devices and provides security recommendations to reduce your attack surface.
+Security recommendations are actionable and aim to aid customers in complying with security best practices.
+
+In this article, you will find a list of recommendations, which can be triggered on your IoT devices.
+
+## Agent based recommendations
+
+Device recommendations provide insights and suggestions to improve device security posture.
+
+| Severity | Name | Data Source | Description |
+|--|--|--|--|
+| Medium | Open Ports on device | Classic security module | A listening endpoint was found on the device. |
+| Medium | Permissive firewall policy found in one of the chains. | Classic security module | Allowed firewall policy found (INPUT/OUTPUT). Firewall policy should deny all traffic by default, and define rules to allow necessary communication to/from the device. |
+| Medium | Permissive firewall rule in the input chain was found | Classic security module | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
+| Medium | Permissive firewall rule in the output chain was found | Classic security module | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
+| Medium | Operation system baseline validation has failed | Classic security module | Device doesn't comply with [CIS Linux benchmarks](https://www.cisecurity.org/cis-benchmarks/). |
+
+### Agent based operational recommendations
+
+Operational recommendations provide insights and suggestions to improve security agent configuration.
+
+| Severity | Name | Data Source | Description |
+|--|--|--|--|
+| Low | Agent sends unutilized messages | Classic security module | 10% or more of security messages were smaller than 4 KB during the last 24 hours. |
+| Low | Security twin configuration not optimal | Classic security module | Security twin configuration is not optimal. |
+| Low | Security twin configuration conflict | Classic security module | Conflicts were identified in the security twin configuration. | |
+
+## Next steps
+
+- Defender for IoT service [Overview](overview.md)
+- Learn how to [Access your security data](how-to-security-data-access.md)
+- Learn more about [Investigating a device](how-to-investigate-device.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/agent-based-security-alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/agent-based-security-alerts.md
+
+ Title: Agent based security alerts
+
+description: Learn about security alerts and recommended remediation using Defender for IoT device's features and service.
++
+documentationcenter: na
++
+editor: ''
+
+ms.devlang: na
+
+ na
+ Last updated : 2/16/2021+++
+# Defender for IoT devices security alerts
+
+Defender for IoT continuously analyzes your IoT solution using advanced analytics and threat intelligence to alert you to malicious activity.
+In addition, you can create custom alerts based on your knowledge of expected device behavior.
+An alert acts as an indicator of potential compromise, and should be investigated and remediated.
+
+In this article, you will find a list of built-in alerts, which can be triggered on your IoT devices.
+In addition to built-in alerts, Defender for IoT allows you to define custom alerts based on expected IoT Hub and/or device behavior.
+For more information, see [customizable alerts](concept-customizable-security-alerts.md).
+
+## Agent based security alerts
+
+| Name | Severity | Data Source | Description | Suggested remediation steps |
+|--|--|--|--|--|
+| **High** severity | | | |
+| Binary Command Line | High | Classic security module | A Linux binary being called/executed from the command line was detected. This process may be legitimate activity, or an indication that your device is compromised. | Review the command with the user that ran it and check if this is something legitimately expected to run on the device. If not, escalate the alert to your information security team. |
+| Disable firewall | High | Classic security module | Possible manipulation of on-host firewall detected. Malicious actors often disable the on-host firewall in an attempt to exfiltrate data. | Review with the user that ran the command to confirm if this was legitimate expected activity on the device. If not, escalate the alert to your information security team. |
+| Port forwarding detection | High | Classic security module | Initiation of port forwarding to an external IP address detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Possible attempt to disable Auditd logging detected | High | Classic security module | Linux Auditd system provides a way to track security-relevant information on the system. The system records as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine who violated the security policy and the actions they performed. Disabling Auditd logging may prevent your ability to discover violations of security policies used on the system. | Check with the device owner if this was legitimate activity with business reasons. If not, this event may be hiding activity by malicious actors. Immediately escalate the incident to your information security team. |
+| Reverse shells | High | Classic security module | Analysis of host data on a device detected a potential reverse shell. Reverse shells are often used to get a compromised machine to call back into a machine controlled by a malicious actor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Successful Bruteforce attempt | High | Classic security module | Multiple unsuccessful login attempts were identified, followed by a successful login. Attempted Brute force attack may have succeeded on the device. | Review SSH Brute force alert and the activity on the devices. <br>If the activity was malicious:<br> Roll out password reset for compromised accounts.<br> Investigate and remediate (if found) devices for malware. |
+| Successful local login | High | Classic security module | Successful local sign in to the device detected | Make sure the signed in user is an authorized party. |
+| Web shell | High | Classic security module | Possible web shell detected. Malicious actors commonly upload a web shell to a compromised machine to gain persistence or for further exploitation. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| **Medium** severity | | | |
+| Behavior similar to common Linux bots detected | Medium | Classic security module | Execution of a process normally associated with common Linux botnets detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Behavior similar to Fairware ransomware detected | Medium | Classic security module | Execution of rm -rf commands applied to suspicious locations detected using analysis of host data. Because rm -rf recursively deletes files, it is normally only used on discrete folders. In this case, it is being used in a location that could remove a large amount of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Behavior similar to ransomware detected | Medium | Classic security module | Execution of files similar to known ransomware that may prevent users from accessing their system, or personal files, and may demand ransom payment to regain access. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Crypto coin miner container image detected | Medium | Classic security module | A container running a known digital currency mining image was detected. | 1. If this behavior is not intended, delete the relevant container image.<br> 2. Make sure that the Docker daemon is not accessible via an unsafe TCP socket.<br> 3. Escalate the alert to the information security team. |
+| Crypto coin miner image | Medium | Classic security module | Execution of a process normally associated with digital currency mining detected. | Verify with the user that ran the command if this was legitimate activity on the device. If not, escalate the alert to the information security team. |
+| Detected suspicious use of the nohup command | Medium | Classic security module | Suspicious use of the nohup command on host detected. Malicious actors commonly run the nohup command from a temporary directory, effectively allowing their executables to run in the background. Seeing this command run on files located in a temporary directory is not expected or usual behavior. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Detected suspicious use of the useradd command | Medium | Classic security module | Suspicious use of the useradd command detected on the device. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Exposed Docker daemon by TCP socket | Medium | Classic security module | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, the Docker configuration does not use encryption or authentication when a TCP socket is enabled. The default Docker configuration enables full access to the Docker daemon by anyone with access to the relevant port. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Failed local login | Medium | Classic security module | A failed local login attempt to the device was detected. | Make sure no unauthorized party has physical access to the device. |
+| File downloads from a known malicious source detected | Medium | Classic security module | Download of a file from a known malware source detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| htaccess file access detected | Medium | Classic security module | Analysis of host data detected possible manipulation of a htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running Apache Web software, including basic redirect functionality, and more advanced functions, such as basic password protection. Malicious actors often modify htaccess files on compromised machines to gain persistence. | Confirm this is legitimate expected activity on the host. If not, escalate the alert to your information security team. |
+| Known attack tool | Medium | Classic security module | A tool often associated with malicious users attacking other machines in some way was detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| IoT agent attempted and failed to parse the module twin configuration | Medium | Classic security module | The Defender for IoT security agent failed to parse the module twin configuration due to type mismatches in the configuration object | Validate your module twin configuration against the IoT agent configuration schema, fix all mismatches. |
+| Local host reconnaissance detected | Medium | Classic security module | Execution of a command normally associated with common Linux bot reconnaissance detected. | Review the suspicious command line to confirm that it was executed by a legitimate user. If not, escalate the alert to your information security team. |
+| Mismatch between script interpreter and file extension | Medium | Classic security module | Mismatch between the script interpreter and the extension of the script file provided as input detected. This type of mismatch is commonly associated with attacker script executions. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Possible backdoor detected | Medium | Classic security module | A suspicious file was downloaded and then run on a host in your subscription. This type of activity is commonly associated with the installation of a backdoor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Potential loss of data detected | Medium | Classic security module | Possible data egress condition detected using analysis of host data. Malicious actors often egress data from compromised machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Potential overriding of common files | Medium | Classic security module | Common executable overwritten on the device. Malicious actors are known to overwrite common files as a way to hide their actions or as a way to gain persistence. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Privileged container detected | Medium | Classic security module | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to host resources. If compromised, a malicious actor can use the privileged container to gain access to the host machine. | If the container doesn't need to run in privileged mode, remove the privileges from the container. |
+| Removal of system logs files detected | Medium | Classic security module | Suspicious removal of log files on the host detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Space after filename | Medium | Classic security module | Execution of a process with a suspicious extension detected using analysis of host data. Suspicious extensions may trick users into thinking files are safe to be opened and can indicate the presence of malware on the system. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspected malicious credentials access tools detected | Medium | Classic security module | Detection usage of a tool commonly associated with malicious attempts to access credentials. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspicious compilation detected | Medium | Classic security module | Suspicious compilation detected. Malicious actors often compile exploits on a compromised machine to escalate privileges. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspicious file download followed by file run activity | Medium | Classic security module | Analysis of host data detected a file that was downloaded and run in the same command. This technique is commonly used by malicious actors to get infected files onto victim machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspicious IP address communication | Medium | Classic security module | Communication with a suspicious IP address detected. | Verify if the connection is legitimate. Consider blocking communication with the suspicious IP. |
+| **LOW** severity | | | |
+| Bash history cleared | Low | Classic security module | Bash history log cleared. Malicious actors commonly erase bash history to hide their own commands from appearing in the logs. | Review the activity in this alert with the user that ran the command, and check whether you recognize it as legitimate administrative activity. If not, escalate the alert to the information security team. |
+| Device silent | Low | Classic security module | Device has not sent any telemetry data in the last 72 hours. | Make sure device is online and sending data. Check that the Azure Security Agent is running on the device. |
+| Failed Bruteforce attempt | Low | Classic security module | Multiple unsuccessful login attempts identified. Potential Brute force attack attempt failed on the device. | Review SSH Brute force alerts and the activity on the device. No further action required. |
+| Local user added to one or more groups | Low | Classic security module | New local user added to a group on this device. Changes to user groups are uncommon, and can indicate a malicious actor may be collecting extra permissions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
+| Local user deleted from one or more groups | Low | Classic security module | A local user was deleted from one or more groups. Malicious actors are known to use this method in an attempt to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
+| Local user deletion detected | Low | Classic security module | Deletion of a local user detected. Local user deletion is uncommon, a malicious actor may be trying to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
+
+## Next steps
+
+- Defender for IoT service [Overview](overview.md)
+- Learn how to [Access your security data](how-to-security-data-access.md)
+- Learn more about [Investigating a device](how-to-investigate-device.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/agent-based-security-custom-alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/agent-based-security-custom-alerts.md
+
+ Title: Agent based security custom alerts
+
+description: Learn about customizable security alerts and recommended remediation using Defender for IoT device's features and service.
++
+documentationcenter: na
++
+editor: ''
+
+ms.devlang: na
+
+ na
+ Last updated : 2/16/2021++++
+# Defender for IoT devices custom security alerts
+
+Defender for IoT continuously analyzes your IoT solution using advanced analytics and threat intelligence to alert you to malicious activity.
+
+We encourage you to create custom alerts based on your knowledge of expected device behavior to ensure alerts act as the most efficient indicators of potential compromise in your unique organizational deployment and landscape.
+
+The following lists of Defender for IoT alerts are definable by you based on your expected IoT device behavior. For more information about how to customize each alert, see [create custom alerts](quickstart-create-custom-alerts.md).
+
+## Agent-based security custom alerts
+
+| Severity | Alert name | Data source | Description | Suggested remediation |
+|--|--|--|--|--|
+| Low | Custom alert - The number of active connections is outside the allowed range | Classic security module, Azure RTOS | Number of active connections within a specific time window is outside the currently configured and allowable range. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed connection list. |
+| Low | Custom alert - The outbound connection created to an IP that isn't allowed | Classic security module, Azure RTOS | An outbound connection was created to an IP that is outside your allowed IP list. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed IP list. |
+| Low | Custom alert - The number of failed local logins is outside the allowed range | Classic security module, Azure RTOS | The number of failed local logins within a specific time window is outside the currently configured and allowable range. | |
+| Low | Custom alert - The sign in of a user that is not on the allowed user list | Classic security module, Azure RTOS | A local user outside your allowed user list logged in to the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
+| Low | Custom alert - A process was executed that is not allowed | Classic security module, Azure RTOS | A process that is not allowed was executed on the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
+|
+
+## Next steps
+
+- Learn how to [customize an alert](quickstart-create-custom-alerts.md)
+- Defender for IoT service [Overview](overview.md)
+- Learn how to [Access your security data](how-to-security-data-access.md)
+- Learn more about [Investigating a device](how-to-investigate-device.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/concept-customizable-security-alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-customizable-security-alerts.md
Title: Custom security alerts
-description: Learn about customizable security alerts and recommended remediation using Defender for IoT features and service.
+ Title: Custom security alerts for IoT Hub
+description: Learn about customizable security alerts and recommended remediation using Defender for IoT Hub's features and service.
documentationcenter: na-+ editor: ''
ms.devlang: na
na Previously updated : 03/04/2020- Last updated : 2/16/2021+
-# Defender for IoT custom security alerts
+# Defender for IoT Hub custom security alerts
Defender for IoT continuously analyzes your IoT solution using advanced analytics and threat intelligence to alert you to malicious activity. We encourage you to create custom alerts based on your knowledge of expected device behavior to ensure alerts act as the most efficient indicators of potential compromise in your unique organizational deployment and landscape.
-The following lists of Defender for IoT alerts are definable by you based on your expected IoT Hub and/or device behavior. For more information about how to customize each alert, see [create custom alerts](quickstart-create-custom-alerts.md).
+The following lists of Defender for IoT alerts are definable by you based on your expected IoT Hub behavior. For more information about how to customize each alert, see [create custom alerts](quickstart-create-custom-alerts.md).
## Built-in custom alerts in the IoT Hub
The following lists of Defender for IoT alerts are definable by you based on you
| Low | Custom alert - The number of module twin updates is outside the allowed range | IoT Hub | The amount of module twin updates within a specific time window is outside the currently configured and allowable range. | | Low | Custom alert - The number of unauthorized operations is outside the allowed range | IoT Hub | The amount of unauthorized operations within a specific time window is outside the currently configured and allowable range. | -
-## Agent-based security custom alerts
-
-| Severity | Alert name | Data source | Description | Suggested remediation |
-|--|--|--|--|--|
-| Low | Custom alert - The number of active connections is outside the allowed range | Classic security module, Azure RTOS | Number of active connections within a specific time window is outside the currently configured and allowable range. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed connection list. |
-| Low | Custom alert - The outbound connection created to an IP that isn't allowed | Classic security module, Azure RTOS | An outbound connection was created to an IP that is outside your allowed IP list. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed IP list. |
-| Low | Custom alert - The number of failed local logins is outside the allowed range | Classic security module, Azure RTOS | The number of failed local logins within a specific time window is outside the currently configured and allowable range. | |
-| Low | Custom alert - The sign in of a user that is not on the allowed user list | Classic security module, Azure RTOS | A local user outside your allowed user list, logged in to the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
-| Low | Custom alert - A process was executed that is not allowed | Classic security module, Azure RTOS | A process that is not allowed was executed on the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
-|
- ## Next steps - Learn how to [customize an alert](quickstart-create-custom-alerts.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/concept-recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-recommendations.md
Title: Security recommendations
-description: Learn about the concept of security recommendations and how they are used in Defender for IoT.
+ Title: Security recommendations for IoT Hub
+description: Learn about the concept of security recommendations and how they are used in the Defender for IoT Hub.
documentationcenter: na
ms.devlang: na
na Previously updated : 01/25/2021 Last updated : 02/16/2021
-# Security recommendations
+# Security recommendations for IoT Hub
Defender for IoT scans your Azure resources and IoT devices and provides security recommendations to reduce your attack surface. Security recommendations are actionable and aim to aid customers in complying with security best practices.
-In this article, you will find a list of recommendations, which can be triggered on your IoT Hub and/or IoT devices.
-
-## Agent-based recommendations
-
-Device recommendations provide insights and suggestions to improve device security posture.
-
-| Severity | Name | Data Source | Description |
-|--|--|--|--|
-| Medium | Open Ports on device | Classic security module | A listening endpoint was found on the device. |
-| Medium | Permissive firewall policy found in one of the chains. | Classic security module | Allowed firewall policy found (INPUT/OUTPUT). Firewall policy should deny all traffic by default, and define rules to allow necessary communication to/from the device. |
-| Medium | Permissive firewall rule in the input chain was found | Classic security module | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
-| Medium | Permissive firewall rule in the output chain was found | Classic security module | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
-| Medium | Operation system baseline validation has failed | Classic security module | Device doesn't comply with [CIS Linux benchmarks](https://www.cisecurity.org/cis-benchmarks/). |
-
-### Agent-based operational recommendations
-
-Operational recommendations provide insights and suggestions to improve security agent configuration.
-
-| Severity | Name | Data Source | Description |
-|--|--|--|--|
-| Low | Agent sends unutilized messages | Classic security module | 10% or more of security messages were smaller than 4 KB during the last 24 hours. |
-| Low | Security twin configuration not optimal | Classic security module | Security twin configuration is not optimal. |
-| Low | Security twin configuration conflict | Classic security module | Conflicts were identified in the security twin configuration. | |
-
+In this article, you will find a list of recommendations, which can be triggered on your IoT Hub.
## Built-in recommendations in IoT Hub
Recommendation alerts provide insight and suggestions for actions to improve the
| Medium | IP filter rule includes large IP range | IoT Hub | An allow IP filter rule source IP range is too large. Overly permissive rules can expose your IoT hub to malicious actors. |
| Low | Enable diagnostics logs in IoT Hub | IoT Hub | Enable logs and retain them for up to a year. Retaining logs enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. |

## Next steps

- Defender for IoT service [Overview](overview.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/concept-security-alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-security-alerts.md
Title: Built-in & custom alerts list
-description: Learn about security alerts and recommended remediation using Defender for IoT features and service.
+description: Learn about security alerts and recommended remediation using Defender for IoT Hub's features and service.
documentationcenter: na
ms.devlang: na
na Previously updated : 1/25/2021 Last updated : 2/16/2021
-# Defender for IoT security alerts
+# Defender for IoT Hub security alerts
Defender for IoT continuously analyzes your IoT solution using advanced analytics and threat intelligence to alert you to malicious activity. In addition, you can create custom alerts based on your knowledge of expected device behavior. An alert acts as an indicator of potential compromise, and should be investigated and remediated.
-In this article, you will find a list of built-in alerts, which can be triggered on your IoT Hub and IoT devices.
+In this article, you will find a list of built-in alerts, which can be triggered on your IoT Hub.
In addition to built-in alerts, Defender for IoT allows you to define custom alerts based on expected IoT Hub and/or device behavior. For more information, see [customizable alerts](concept-customizable-security-alerts.md).
-## Agent based security alerts
-
-| Name | Severity | Data Source | Description | Suggested remediation steps |
-|--|--|--|--|--|
-| **High** severity | | | |
-| Binary Command Line | High | Classic security module | LA Linux binary being called/executed from the command line was detected. This process may be legitimate activity, or an indication that your device is compromised. | Review the command with the user that ran it and check if this is something legitimately expected to run on the device. If not, escalate the alert to your information security team. |
-| Disable firewall | High | Classic security module | Possible manipulation of on-host firewall detected. Malicious actors often disable the on-host firewall in an attempt to exfiltrate data. | Review with the user that ran the command to confirm if this was legitimate expected activity on the device. If not, escalate the alert to your information security team. |
-| Port forwarding detection | High | Classic security module | Initiation of port forwarding to an external IP address detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Possible attempt to disable Auditd logging detected | High | Classic security module | Linux Auditd system provides a way to track security-relevant information on the system. The system records as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine who violated the security policy and the actions they performed. Disabling Auditd logging may prevent your ability to discover violations of security policies used on the system. | Check with the device owner if this was legitimate activity with business reasons. If not, this event may be hiding activity by malicious actors. Immediately escalated the incident to your information security team. |
-| Reverse shells | High | Classic security module | Analysis of host data on a device detected a potential reverse shell. Reverse shells are often used to get a compromised machine to call back into a machine controlled by a malicious actor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Successful Bruteforce attempt | High | Classic security module | Multiple unsuccessful login attempts were identified, followed by a successful login. Attempted Bruteforce attack may have succeeded on the device. | Review SSH Bruteforce alert and the activity on the devices. <br>If the activity was malicious:<br> Roll out password reset for compromised accounts.<br> Investigate and remediate (if found) devices for malware. |
-| Successful local login | High | Classic security module | Successful local sign in to the device detected | Make sure the signed in user is an authorized party. |
-| Web shell | High | Classic security module | Possible web shell detected. Malicious actors commonly upload a web shell to a compromised machine to gain persistence or for further exploitation. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| **Medium** severity | | | |
-| Behavior similar to common Linux bots detected | Medium | Classic security module | Execution of a process normally associated with common Linux botnets detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Behavior similar to Fairware ransomware detected | Medium | Classic security module | Execution of rm -rf commands applied to suspicious locations detected using analysis of host data. Because rm -rf recursively deletes files, it is normally only used on discrete folders. In this case, it is being used in a location that could remove a large amount of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Review with the user that ran the command this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Behavior similar to ransomware detected | Medium | Classic security module | Execution of files similar to known ransomware that may prevent users from accessing their system, or personal files, and may demand ransom payment to regain access. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Crypto coin miner container image detected | Medium | Classic security module | Container detecting running known digital currency mining images. | 1. If this behavior is not intended, delete the relevant container image.<br> 2. Make sure that the Docker daemon is not accessible via an unsafe TCP socket.<br> 3. Escalate the alert to the information security team. |
-| Crypto coin miner image | Medium | Classic security module | Execution of a process normally associated with digital currency mining detected. | Verify with the user that ran the command if this was legitimate activity on the device. If not, escalate the alert to the information security team. |
-| Detected suspicious use of the nohup command | Medium | Classic security module | Suspicious use of the nohup command on host detected. Malicious actors commonly run the nohup command from a temporary directory, effectively allowing their executables to run in the background. Seeing this command run on files located in a temporary directory is not expected or usual behavior. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Detected suspicious use of the useradd command | Medium | Classic security module | Suspicious use of the useradd command detected on the device. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Exposed Docker daemon by TCP socket | Medium | Classic security module | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. Default Docker configuration enables full access to the Docker daemon, by anyone with access to the relevant port. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Failed local login | Medium | Classic security module | A failed local login attempt to the device was detected. | Make sure no unauthorized party has physical access to the device. |
-| File downloads from a known malicious source detected | Medium | Classic security module | Download of a file from a known malware source detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| htaccess file access detected | Medium | Classic security module | Analysis of host data detected possible manipulation of an htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running Apache Web software, including basic redirect functionality, and more advanced functions, such as basic password protection. Malicious actors often modify htaccess files on compromised machines to gain persistence. | Confirm this is legitimate expected activity on the host. If not, escalate the alert to your information security team. |
-| Known attack tool | Medium | Classic security module | A tool often associated with malicious users attacking other machines in some way was detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| IoT agent attempted and failed to parse the module twin configuration | Medium | Classic security module | The Defender for IoT security agent failed to parse the module twin configuration due to type mismatches in the configuration object | Validate your module twin configuration against the IoT agent configuration schema, fix all mismatches. |
-| Local host reconnaissance detected | Medium | Classic security module | Execution of a command normally associated with common Linux bot reconnaissance detected. | Review the suspicious command line to confirm that it was executed by a legitimate user. If not, escalate the alert to your information security team. |
-| Mismatch between script interpreter and file extension | Medium | Classic security module | Mismatch between the script interpreter and the extension of the script file provided as input detected. This type of mismatch is commonly associated with attacker script executions. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Possible backdoor detected | Medium | Classic security module | A suspicious file was downloaded and then run on a host in your subscription. This type of activity is commonly associated with the installation of a backdoor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Potential loss of data detected | Medium | Classic security module | Possible data egress condition detected using analysis of host data. Malicious actors often egress data from compromised machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Potential overriding of common files | Medium | Classic security module | Common executable overwritten on the device. Malicious actors are known to overwrite common files as a way to hide their actions or as a way to gain persistence. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Privileged container detected | Medium | Classic security module | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to host resources. If compromised, a malicious actor can use the privileged container to gain access to the host machine. | If the container doesn't need to run in privileged mode, remove the privileges from the container. |
-| Removal of system logs files detected | Medium | Classic security module | Suspicious removal of log files on the host detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Space after filename | Medium | Classic security module | Execution of a process with a suspicious extension detected using analysis of host data. Suspicious extensions may trick users into thinking files are safe to be opened and can indicate the presence of malware on the system. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspected malicious credentials access tools detected | Medium | Classic security module | Detection usage of a tool commonly associated with malicious attempts to access credentials. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspicious compilation detected | Medium | Classic security module | Suspicious compilation detected. Malicious actors often compile exploits on a compromised machine to escalate privileges. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspicious file download followed by file run activity | Medium | Classic security module | Analysis of host data detected a file that was downloaded and run in the same command. This technique is commonly used by malicious actors to get infected files onto victim machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspicious IP address communication | Medium | Classic security module | Communication with a suspicious IP address detected. | Verify if the connection is legitimate. Consider blocking communication with the suspicious IP. |
-| **LOW** severity | | | |
-| Bash history cleared | Low | Classic security module | Bash history log cleared. Malicious actors commonly erase bash history to hide their own commands from appearing in the logs. | Review with the user that ran the command that the activity in this alert to see if you recognize this as legitimate administrative activity. If not, escalate the alert to the information security team. |
-| Device silent | Low | Classic security module | Device has not sent any telemetry data in the last 72 hours. | Make sure device is online and sending data. Check that the Azure Security Agent is running on the device. |
-| Failed Bruteforce attempt | Low | Classic security module | Multiple unsuccessful login attempts identified. Potential Bruteforce attack attempt failed on the device. | Review SSH Bruteforce alerts and the activity on the device. No further action required. |
-| Local user added to one or more groups | Low | Classic security module | New local user added to a group on this device. Changes to user groups are uncommon, and can indicate a malicious actor may be collecting additional permissions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
-| Local user deleted from one or more groups | Low | Classic security module | A local user was deleted from one or more groups. Malicious actors are known to use this method in an attempt to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
-| Local user deletion detected | Low | Classic security module | Deletion of a local user detected. Local user deletion is uncommon, a malicious actor may be trying to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
## Built-in alerts for IoT Hub

| Severity | Name | Description | Suggested remediation |
For more information, see [customizable alerts](concept-customizable-security-al
| Expired SAS Token | Low | Expired SAS token used by a device | May be a legitimate device with an expired token, or an attempt to impersonate a legitimate device. If the legitimate device is currently communicating correctly, this is likely an impersonation attempt. |
| Invalid SAS token signature | Low | A SAS token used by a device has an invalid signature. The signature does not match either the primary or secondary key. | Review the alerts on the devices. No further action required. |

## Next steps

- Defender for IoT service [Overview](overview.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/references-work-with-defender-for-iot-cli-commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-work-with-defender-for-iot-cli-commands.md
# Work with Defender for IoT CLI commands
-This article describes CLI commands for sensors and on-premises management consoles. The commands are accessible to administrators, cyberx users, and support users.
+This article describes CLI commands for sensors and on-premises management consoles. The commands are accessible to the following users:
-Define exclusion rules when you're planning maintenance activities or an activity that doesn't require an alert.
+- Administrator
+- CyberX
+- Support
+
+To start working in the CLI, connect using a terminal (for example, PuTTY) and sign in with one of these users (for example, the `Support` user).
## Create local alert exclusion rules
-You can create an exclusion rule by entering the following command into the CLI:
+You can create a local alert exclusion rule by entering the following command into the CLI:
```azurecli-interactive
alerts exclusion-rule-create [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]
```
-The attributes that you can define within the alert exclusion rules are as follows:
+The following attributes can be used with the alert exclusion rules:
| Attribute | Description |
|--|--|
The attributes that you can define within the alert exclusion rules are as follo
## Append local alert exclusion rules
-You can add new rules to the current alert exclusion rules by entering the following command in the CLI:
+You can append local alert exclusion rules by entering the following command in the CLI:
```azurecli-interactive
alerts exclusion-rule-append [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]
```
-The attributes used here are similar to attributes described when you're creating local alert exclusion rules. In the usage here, the attributes are applied to existing rules.
+The attributes used here are the same as those explained in the Create local alert exclusion rules section. The difference is that here the attributes are applied to the existing rules.
## Show local alert exclusion rules
-Enter the following command to view all existing exclusion rules:
+Enter the following command to present the existing list of exclusion rules:
```azurecli-interactive
alerts exclusion-rule-list [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
alerts exclusion-rule-remove [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
                             [-dev DEVICES] [-a ALERTS]
```
-You can use the following attribute with the alert exclusion rules:
+The following attribute can be used with the alert exclusion rules:
| Attribute | Description|
| | - |
You can use the following attribute with the alert exclusion rules:
## Sync time from the NTP server
-You can enable and disable a time sync from an NTP server.
+You can enable or disable a time sync from a specified NTP server.
### Enable NTP sync
-Entering the following command will enable a periodic retrieval of the current time from a specified NTP server:
+Enter the following command to periodically retrieve the time from the specified NTP server:
```azurecli-interactive
ntp enable IP
```
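For example, with a placeholder NTP server address:

```azurecli-interactive
# Placeholder NTP server IP address, for illustration only
ntp enable 10.100.100.200
```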
The attribute that you can define within the command is the IP address of the NT
### Disable NTP sync
-Entering the following command will disable the time sync with the specified NTP server:
+Enter the following command to disable the time sync with the specified NTP server:
```azurecli-interactive
ntp disable IP
```
The attribute that you can define within the command is the IP address of the NTP server.
-## Configure the network
+## Network configuration
The following table describes the commands available to configure your network options for Azure Defender for IoT:

|Name|Command|Description|
|--|-|--|
-|Ping|`ping IP `| Pings addresses outside the Defender for IoT platform.|
-|Blink|`network blink`|Enables changing the network configuration parameters.|
-|Reconfigure the network |`network edit-settings`| Enables changing the network configuration parameters. |
+|Ping|`ping IP`| Ping an address outside the Defender for IoT platform.|
+|Blink|`network blink`| Locate a connection by causing the interface lights to blink. |
+|Reconfigure the network |`network edit-settings`| Enable a change in the network configuration parameters. |
|Show network settings |`network list`|Displays the network adapter parameters. |
|Validate the network configuration |`network validate` |Presents the output network settings. <br /> <br />For example: <br /> <br />Current Network Settings: <br /> interface: eth0 <br /> ip: 10.100.100.1 <br />subnet: 255.255.255.0 <br />default gateway: 10.100.100.254 <br />dns: 10.100.100.254 <br />monitor interfaces: eth1|
|Import a certificate |`certificate import FILE` |Imports the HTTPS certificate. You'll need to specify the full path, which leads to a \*.crt file. |
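For example, a certificate import might look like the following; the file path is a hypothetical placeholder, so substitute the full path to your own \*.crt file:

```azurecli-interactive
# Hypothetical certificate path, for illustration only
certificate import /home/support/certificates/sensor.crt
```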
The following table describes the commands available to configure your network o
## Filter network configurations
-The `network capture-filter` command lets administrators eliminate network traffic that doesn't need to be analyzed. Filter traffic by using an include list or an exclude list.
+The `network capture-filter` command allows administrators to eliminate network traffic that doesn't need to be analyzed. You can filter traffic by using an include list, or an exclude list.
```azurecli-interactive network capture-filter
After you enter the command, you'll be prompted with the following question:
>`Would you like to supply devices and subnet masks you wish to include in the capture filter? [Y/N]:`
-Select `Y` to open a nano file where you can add devices, channels, ports, and subsets according to the following syntax:
+Select `Y` to open a nano file where you can add a device, channel, port, and subset according to the following syntax:
| Attribute | Description |
|--|--|
Separate arguments by dropping a row.
When you include a device, channel, or subnet, the sensor processes all the valid traffic for that argument, including ports and traffic that wouldn't usually be processed.
-You'll then be asked the following:
+You'll then be asked the following question:
>`Would you like to supply devices and subnet masks you wish to exclude from the capture filter? [Y/N]:`
-Select `Y` to open a nano file where you can add device, channels, ports, and subsets according to the following syntax:
+Select `Y` to open a nano file where you can add a device, channel, port, and subsets according to the following syntax:
| Attribute | Description |
|--|--|
Include or exclude UDP and TCP ports for all the traffic.
### Components
-You're asked the following:
+You're asked the following question:
>`In which component do you wish to apply this capture filter?`
sudo cyberx-xsense-capture-filter -p all -m all-connected
## Define client and server hosts
-If Defender for IoT did not automatically detect the client and server hosts, enter the following command to set the client and server hosts:
+If Defender for IoT didn't automatically detect the client and server hosts, enter the following command to set the client and server hosts:
```azurecli-interactive
directions [-h] [--identifier IDENTIFIER] [--port PORT] [--remove] [--add]
```
The following table describes the commands available to perform various system a
|Name|Code|Description|
|-|-|--|
+|Show the date|`date`|Returns the current date on the host in GMT format.|
|Reboot the host|`system reboot`|Reboots the host device.|
|Shut down the host|`system shutdown`|Shuts down the host.|
|Back up the system|`system backup`|Initiates an immediate backup (an unscheduled backup).|
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
Here is an example of JSON Patch code. This document replaces the *mass* and *ra
:::code language="json" source="~/digital-twins-docs-samples/models/patch.json":::
-You can create patches using a `JsonPatchDocument` in the [SDK](how-to-use-apis-sdks.md). Here is an example.
+You can create patches using the Azure .NET SDK's [JsonPatchDocument](/dotnet/api/azure.jsonpatchdocument?view=azure-dotnet&preserve-view=true) class. Here is an example.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="UpdateTwin":::
event-grid https://docs.microsoft.com/en-us/azure/event-grid/quotas-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/quotas-limits.md
Title: Quotas and limits - Azure Event Grid | Microsoft Docs description: This article provides limits and quotas for Azure Event Grid. For example, number of subscriptions for topic, number of custom topics per subscription, etc. Previously updated : 07/07/2020 Last updated : 02/17/2021 # Azure Event Grid quotas and limits
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-capture-enable-through-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-capture-enable-through-portal.md
Title: Event Hubs - Capture streaming events using Azure portal description: This article describes how to enable capturing of events streaming through Azure Event Hubs by using the Azure portal.-+ Last updated 06/23/2020
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-capture-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-capture-overview.md
Title: Capture streaming events - Azure Event Hubs | Microsoft Docs description: This article provides an overview of the Capture feature that allows you to capture events streaming through Azure Event Hubs. Previously updated : 06/23/2020 Last updated : 02/16/2021 # Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage
A native support to Azure Blob storage is available, which makes it easy to quer
[Apache Drill: Azure Blob Storage Plugin][Apache Drill: Azure Blob Storage Plugin]
-To easily query captured files, you can create and execute a VM with Apache Drill enabled via a container to access Azure Blob storage:
-
-https://github.com/yorek/apache-drill-azure-blob
-
-A full end-to-end sample is available in the Streaming at Scale repository:
-
-[Streaming at Scale: Event Hubs Capture]
+To easily query captured files, you can create and execute a VM with Apache Drill enabled via a container to access Azure Blob storage. See the following sample: [Streaming at Scale with Event Hubs Capture](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-capture).
### Use Apache Spark
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-get-started-send-legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dotnet-standard-get-started-send-legacy.md
Title: Send and receive events from Azure Event Hubs using .NET (old) description: This article provides a walkthrough for creating a .NET Core app that sends/receives events to/from Azure Event Hubs by using the old Microsoft.Azure.EventHubs package. -+ Last updated 06/23/2020
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md
Title: Create an event hub with capture enabled - Azure Event Hubs | Microsoft Docs description: Create an Azure Event Hubs namespace with one event hub and enable Capture using Azure Resource Manager template-+ Last updated 06/23/2020
firewall-manager https://docs.microsoft.com/en-us/azure/firewall-manager/dns-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/dns-settings.md
Title: Azure Firewall policy DNS settings (preview)
+ Title: Azure Firewall policy DNS settings
description: You can configure Azure Firewall policies with DNS server and DNS proxy settings. Previously updated : 06/30/2020 Last updated : 02/17/2021
-# Azure Firewall policy DNS settings (preview)
-
-> [!IMPORTANT]
-> Azure Firewall DNS settings is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Azure Firewall policy DNS settings
You can configure a custom DNS server and enable DNS proxy for Azure Firewall policies. You can configure these settings when you deploy the firewall or later from the **DNS settings** page.
A DNS server maintains and resolves domain names to IP addresses. By default, Az
4. Select **Save**.
5. The firewall now directs DNS traffic to the specified DNS server(s) for name resolution.
-## DNS proxy (preview)
+## DNS proxy
You can configure Azure Firewall to act as a DNS proxy. A DNS proxy acts as an intermediary for DNS requests from client virtual machines to a DNS server. If you configure a custom DNS server, you should enable DNS proxy to avoid DNS resolution mismatch, and enable FQDN filtering in network rules. If you don't enable DNS proxy, DNS requests from the client may travel to a DNS server at a different time or return a different response compared to that of the firewall. DNS proxy puts Azure Firewall in the path of the client requests to avoid inconsistency. DNS proxy configuration requires three steps:

1. Enable DNS proxy in Azure Firewall DNS settings.
2. Optionally configure your custom DNS server or use the provided default.
3. Finally, you must configure the Azure Firewall's private IP address as a Custom DNS address in your virtual network DNS server settings. This ensures DNS traffic is directed to Azure Firewall.
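If you script these settings rather than using the portal, the following is a minimal Azure CLI sketch. It assumes the azure-firewall CLI extension is installed and that `az network firewall policy update` exposes the `--dns-servers` and `--enable-dns-proxy` parameters; the resource names are placeholders.

```azurecli
# Placeholder names; assumes the azure-firewall CLI extension
az network firewall policy update \
  --resource-group MyResourceGroup \
  --name MyFirewallPolicy \
  --dns-servers 10.0.0.4 10.0.0.5 \
  --enable-dns-proxy true
```

Step 3 above still applies: your virtual network's DNS server setting must point at the firewall's private IP address.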
-### Configure DNS proxy (preview)
+### Configure DNS proxy
To configure DNS proxy, you must configure your virtual network DNS servers setting to use the firewall private IP address. Then, enable DNS Proxy in Azure Firewall policy **DNS settings**.
To configure DNS proxy, you must configure your virtual network DNS servers sett
4. Enter the firewall's private IP address.
5. Select **Save**.
-#### Enable DNS proxy (preview)
+#### Enable DNS proxy
1. Select your Azure Firewall policy. 2. Under **Settings**, select **DNS settings**.
firewall-manager https://docs.microsoft.com/en-us/azure/firewall-manager/quick-firewall-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/quick-firewall-policy.md
Previously updated : 02/16/2021 Last updated : 02/17/2021
If your environment meets the prerequisites and you're familiar with using ARM t
## Review the template
-This template creates a secured virtual hub using Azure Firewall Manager, along with the necessary resources to support the scenario.
+This template creates a hub virtual network, along with the necessary resources to support the scenario.
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-azurefirewall-create-with-firewallpolicy-apprule-netrule-ipgroups/).
firewall-manager https://docs.microsoft.com/en-us/azure/firewall-manager/threat-intelligence-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/threat-intelligence-settings.md
If you've configured threat intelligence-based filtering, the associated rules a
:::image type="content" source="media/threat-intelligence-settings/threat-intelligence-policy.png" alt-text="Threat intelligence policy":::
-## Threat intelligence Mode
+## Threat intelligence mode
-You can choose to log only an alert when a rule is triggered, or you can choose alert and deny mode.
+You can configure threat intelligence in one of the three modes that are described in the following table. By default, threat intelligence-based filtering is enabled in alert mode.
-By default, threat intelligence-based filtering is enabled in alert mode.
+|Mode |Description |
+|||
+|`Off` | The threat intelligence feature is not enabled for your firewall. |
+|`Alert only` | You will receive high-confidence alerts for traffic going through your firewall to or from known malicious IP addresses and domains. |
+|`Alert and deny` | Traffic is blocked and you will receive high-confidence alerts when traffic is detected attempting to go through your firewall to or from known malicious IP addresses and domains. |
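If you manage the policy from the command line, a minimal sketch of setting the mode with the Azure CLI follows. It assumes the azure-firewall CLI extension and its `--threat-intel-mode` parameter; the resource names are placeholders.

```azurecli
# Placeholder names; switches the policy to alert-and-deny mode
az network firewall policy update \
  --resource-group MyResourceGroup \
  --name MyFirewallPolicy \
  --threat-intel-mode Deny
```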
-## Allowed list addresses
+> [!NOTE]
+> Threat intelligence mode is inherited from parent policies to child policies. A child policy must be configured with the same or a stricter mode than the parent policy.
-You can configure a list of allowed IP addresses so that threat intelligence won't filter any of the addresses, ranges, or subnets that you specify.
+## Allowlist addresses
+Threat intelligence might trigger false positives and block traffic that actually is valid. You can configure a list of allowed IP addresses so that threat intelligence won't filter any of the addresses, ranges, or subnets that you specify.
+![Allowlist addresses](media/threat-intelligence-settings/allow-list.png)
+
+You can update the allowlist with multiple entries at once by uploading a CSV file. The CSV file can only contain IP addresses and ranges. The file can't contain headings.
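For example, an uploaded file might contain nothing but entries like the following (illustrative addresses only):

```csv
40.112.0.0/16
20.38.98.100
```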
+
+> [!NOTE]
+> Threat intelligence allowlist addresses are inherited from parent policies to child policies. Any IP address or range added to a parent policy will apply for all child policies as well.
## Logs
-The following log excerpt shows a triggered rule:
+The following log excerpt shows a triggered rule for outbound traffic to a malicious site:
-```
+```json
{ "category": "AzureFirewallNetworkRule", "time": "2018-04-16T23:45:04.8295030Z",
The following log excerpt shows a triggered rule:
## Next steps

-- Review the [Microsoft Security intelligence report](https://www.microsoft.com/en-us/security/operations/security-intelligence-report)
+- Review the [Microsoft Security intelligence report](https://www.microsoft.com/en-us/security/operations/security-intelligence-report)
germany https://docs.microsoft.com/en-us/azure/germany/germany-migration-databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/germany/germany-migration-databases.md
Title: Migrate Azure database resources, Azure Germany to global Azure description: This article provides information about migrating your Azure database resources from Azure Germany to global Azure Previously updated : 10/16/2020 Last updated : 02/16/2021
This article has information that can help you migrate Azure database resources
## SQL Database
-To migrate smaller Azure SQL Database workloads, use the export function to create a BACPAC file. A BACPAC file is a compressed (zipped) file that contains metadata and the data from the SQL Server database. After you create the BACPAC file, you can copy the file to the target environment (for example, by using AzCopy) and use the import function to rebuild the database. Be aware of the following considerations:
+To migrate smaller Azure SQL Database workloads, without keeping the migrated database online, use the export function to create a BACPAC file. A BACPAC file is a compressed (zipped) file that contains metadata and the data from the SQL Server database. After you create the BACPAC file, you can copy the file to the target environment (for example, by using AzCopy) and use the import function to rebuild the database. Be aware of the following considerations:
- For an export to be transactionally consistent, make sure that one of the following conditions is true:
  - No write activity occurs during the export.
For more information:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-## Azure Synapse Analytics
-To migrate Azure Synapse Analytics resources from Azure Germany to global Azure, follow the steps that are described in Azure SQL Database.
+## Migrate SQL Database using active geo-replication
+
+For databases that are too large for BACPAC files, or to migrate from one cloud to another and remain online with minimum downtime, you can configure active geo-replication from Azure Germany to global Azure.
+
+> [!IMPORTANT]
+> Configuring active geo-replication to migrate databases to global Azure is only supported using Transact-SQL (T-SQL), and prior to migrating you must request enablement of your subscription to support migrating to global Azure. To submit a request, you must use [this support request link](#requesting-access).
+
+For details about active geo-replication costs, see the section titled **Active geo-replication** in [Azure SQL Database pricing](https://azure.microsoft.com/pricing/details/sql-database/single/).
+
+Migrating databases with active geo-replication requires an Azure SQL logical server in global Azure. You can create the server using the portal, Azure PowerShell, Azure CLI, etc., but configuring active geo-replication to migrate from Azure Germany to global Azure is only supported using Transact-SQL (T-SQL).
+
+> [!IMPORTANT]
+> When migrating between clouds, the primary (Azure Germany) and secondary (global Azure) server name prefixes must be different. If the server names are the same, running the ALTER DATABASE statement will succeed, but the migration will fail. For example, if the prefix of the primary server name is `myserver` (`myserver.database.cloudapi.de`), the prefix of the secondary server name in global Azure cannot be `myserver`.
++
+The `ALTER DATABASE` statement allows you to specify a target server in global Azure by using its fully qualified DNS server name on the target side.
++
+```sql
+ALTER DATABASE [sourcedb] add secondary on server [public-server.database.windows.net]
+```
++
+- *`sourcedb`* represents the database name in an Azure SQL server in Azure Germany.
+- *`public-server.database.windows.net`* represents the Azure SQL server name that exists in global Azure, where the database should be migrated. The namespace "database.windows.net" is required; replace *public-server* with the name of your logical SQL server in global Azure. The server in global Azure must have a different name than the primary server in Azure Germany.
++
+The command is executed on the master database on the Azure Germany server hosting the local database to be migrated.
+- The T-SQL start-copy API authenticates the logged-in user in the public cloud server by finding a user with the same SQL login/user name in master database of that server. This approach is cloud-agnostic; thus, the T-SQL API is used to start cross-cloud copies. For permissions and more information on this topic see [Creating and using active geo-replication](../azure-sql/database/active-geo-replication-overview.md) and [ALTER DATABASE (Transact-SQL)](/sql/t-sql/statements/alter-database-transact-sql/).
+- Except for the initial T-SQL command extension indicating an Azure SQL logical server in global Azure, the rest of the active geo-replication process is identical to the existing execution in the local cloud. For detailed steps to create active geo-replication, see [Creating and using active geo-replication](../azure-sql/database/active-geo-replication-overview.md) with an exception the secondary database is created in the secondary logical server created in global Azure.
+- Once the secondary database exists in global Azure (as its online copy of the Azure Germany database), customer can initiate a database failover from Azure Germany to global Azure for this database using the ALTER DATABASE T-SQL command (see the table below).
+- After the failover, once the secondary becomes a primary database in global Azure, you can stop the active geo-replication and remove the secondary database on the Azure Germany side at any time (see the table below and the steps present in the diagram).
+- After failover, the secondary database in Azure Germany will continue to incur costs until deleted.
+
+- Using the `ALTER DATABASE` command is the only way to set up active geo-replication to migrate an Azure Germany database to global Azure.
+- No Azure portal, Azure Resource Manager, PowerShell, or CLI is available to configure active geo-replication for this migration.
+
+To migrate a database from Azure Germany to global Azure:
+
+1. Choose the user database in Azure Germany, for example, `azuregermanydb`
+2. Create a logical server in global Azure (the public cloud), for example, `globalazureserver`.
+Its fully qualified domain name (FQDN) is `globalazureserver.database.windows.net`.
+3. Start active geo-replication from Azure Germany to global Azure by executing this T-SQL command on the server in Azure Germany. Note that the fully qualified DNS name is used for the public server `globalazureserver.database.windows.net`. This is to indicate that the target server is in global Azure, and not Azure Germany.
+
+ ```sql
+ ALTER DATABASE [azuregermanydb] ADD SECONDARY ON SERVER [globalazureserver.database.windows.net];
+ ```
+
+4. When the replication is ready to move the read-write workload to the global Azure server, initiate a planned failover to global Azure by executing this T-SQL command on the global Azure server.
+
+ ```sql
+ ALTER DATABASE [azuregermanydb] FAILOVER;
+ ```
+
+5. Use the following T-SQL to stop active geo-replication. If this command is run after the planned failover, it will terminate the geo-link with the database in global Azure being the read-write copy. This will complete the migration process. However, if the command is executed before the planned failover, it will stop the migration process and the database in Azure Germany will remain the read-write copy. This T-SQL command should be run on the current geo-primary database's logical server, for example, on the Azure Germany server before planned failover and the global Azure server after planned failover.
++
+ `ALTER DATABASE [azuregermanydb] REMOVE SECONDARY ON SERVER [azuregermanyserver];`
+ or
+ `ALTER DATABASE [azuregermanydb] REMOVE SECONDARY ON SERVER [globalazureserver];`
++
+These steps to migrate Azure SQL databases from Azure Germany to global Azure can also be followed using active geo-replication.
++
+For more information, the following tables indicate the T-SQL commands for managing failover. The following commands are supported for cross-cloud active geo-replication between Azure Germany and global Azure:
+
+|Command |Description|
+|:--|:--|
+|[ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current&preserve-view=true) |Use ADD SECONDARY ON SERVER argument to create a secondary database for an existing database and starts data replication|
+|[ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current&preserve-view=true) |Use FAILOVER or FORCE_FAILOVER_ALLOW_DATA_LOSS to switch a secondary database to be primary to initiate failover |
+|[ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current&preserve-view=true) |Use REMOVE SECONDARY ON SERVER to terminate a data replication between a SQL Database and the specified secondary database. |
+
+### Active geo-replication monitoring system views
+
+|Command |Description|
+|:--|:--|
+|[sys.geo_replication_links](/sql/relational-databases/system-dynamic-management-views/sys-geo-replication-links-azure-sql-database?view=azuresqldb-current&preserve-view=true)|Returns information about all existing replication links for each database on the Azure SQL Database server. |
+|[sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database?view=azuresqldb-current&preserve-view=true) |Gets the last replication time, last replication lag, and other information about the replication link for a given SQL database. |
+|[sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database?view=azuresqldb-current&preserve-view=true) | Shows the status for all database operations including the status of the replication links. |
+|[sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync?view=azuresqldb-current&preserve-view=true) | Causes the application to wait until all committed transactions are replicated and acknowledged by the active secondary database. |
+
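As a sketch of how you might use two of these views to check the link before the planned failover: `sys.geo_replication_links` is queried in the logical master database of the current primary server, while `sys.dm_geo_replication_link_status` is queried in the user database being migrated (the database name below follows the example above).

```sql
-- In the logical master database of the current primary server:
-- list the geo-replication links for databases on this server.
SELECT partner_server, partner_database, replication_state_desc, role_desc
FROM sys.geo_replication_links;

-- In the user database being migrated (for example, azuregermanydb):
-- check how far behind the secondary is before failing over.
SELECT partner_server, last_replication, replication_lag_sec, replication_state_desc
FROM sys.dm_geo_replication_link_status;
```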
+### Limitations
+
+- Failover Groups are not supported. This means that customers migrating Azure Germany database(s) will need to manage connection strings themselves during failover.
+- No support for Azure portal, Azure Resource Manager APIs, PowerShell, or CLI. This means that each Azure Germany migration will need to manage active geo-replication setup and failover through T-SQL.
+- Customers cannot create multiple geo-secondaries in global Azure for databases in Azure Germany.
+- Creation of a geo secondary must be initiated from the Azure Germany region.
+- Customers can migrate databases out of Azure Germany only to global Azure. Currently no other cross-cloud migration is supported.
+- Azure AD users in Azure Germany user databases are migrated but are not available in the new Azure AD tenant where the migrated database resides. To enable these users, they must be manually dropped and recreated using the current Azure AD users available in the new Azure AD tenant where the newly migrated database resides.
+- [Point-in-time restore (PITR)](../azure-sql/database/recovery-using-backups.md#point-in-time-restore) backups are only taken on the primary database, this is by design. When migrating databases from Azure Germany using Geo-DR, PITR backups will start happening on the new primary after failover. However, the existing PITR backups (on the previous primary in Azure Germany) will not be migrated. If you need PITR backups to support any point-in-time restore scenarios, you need to restore the database from PITR backups in Azure Germany and then migrate the recovered database to global Azure.
+- Long-term retention policies are not migrated with the database. If you have a [long-term retention (LTR)](../azure-sql/database/long-term-retention-overview.md) policy on your database in Azure Germany, you need to manually copy and recreate the LTR policy on the new database after migrating. Functionality to migrate LTR backups from Azure Germany to global Azure are not currently available.
++
+### Requesting access
+
+To migrate a database from Azure Germany to global Azure using geo-replication, your subscription *in Azure Germany* needs to be enabled to successfully configure the cross-cloud migration.
+
+To enable your Azure Germany subscription, you must use the following link to create a migration support request:
+
+1. Browse to the following [migration support request](https://portal.microsoftazure.de/#create/Microsoft.Support/Parameters/%7B%0D%0A++++%22pesId%22%3A+%22f3dc5421-79ef-1efa-41a5-42bf3cbb52c6%22%2C%0D%0A++++%22supportTopicId%22%3A+%229fc72ed5-805f-3894-eb2b-b1f1f6557d2d%22%2C%0D%0A++++%22contextInfo%22%3A+%22Migration+from+cloud+Germany+to+Azure+global+cloud+%28Azure+SQL+Database%29%22%2C%0D%0A++++%22caller%22%3A+%22NoSupportPlanCloudGermanyMigration%22%2C%0D%0A++++%22severity%22%3A+%223%22%0D%0A%7D).
+
+2. On the Basics tab, enter *Geo-DR migration* as the **Summary**, and then select **Next: Solutions**
+
+ :::image type="content" source="media/germany-migration-databases/support-request-basics.png" alt-text="new support request form":::
+
+3. Review the **Recommended Steps**, then select **Next: Details**.
+
+ :::image type="content" source="media/germany-migration-databases/support-request-solutions.png" alt-text="required support request information":::
+
+4. On the details page, provide the following:
+
+ 1. In the Description box, enter the global Azure subscription ID to migrate to. To migrate databases to more than one subscription, add a list of the global Azure IDs you want to migrate databases to.
+ 1. Provide contact information: name, company name, email or phone number.
+ 1. Complete the form, then select **Next: Review + create**.
+
+ :::image type="content" source="media/germany-migration-databases/support-request-details.png" alt-text="support request details":::
++
+5. Review the support request, then select **Create**.
++
+You'll be contacted once the request is processed.
## Azure Cosmos DB
To export from the source instance and import to the destination instance:
### Option 4: Write data to two Azure Cache for Redis instances, read from one instance

For this approach, you must modify your application. The application needs to write data to more than one cache instance while reading from one of the cache instances. This approach makes sense if the data stored in Azure Cache for Redis meets the following criteria:

-- The data is refreshed on a regular basis.
+- The data is refreshed regularly.
- All data is written to the target Azure Cache for Redis instance.
- You have enough time for all data to be refreshed.
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/azure-security-benchmark-foundation/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/deploy.md
The following table provides a list of the blueprint parameters:
|Azure Virtual Network spoke template|Resource Manager template|Subnet address names (optional)|Array of subnet names to deploy to the spoke virtual network; for example, "subnet1","subnet2"|
|Azure Virtual Network spoke template|Resource Manager template|Subnet address prefixes (optional)|Array of IP address prefixes for optional subnets for the spoke virtual network; for example, "10.0.7.0/24","10.0.8.0/24"|
|Azure Virtual Network spoke template|Resource Manager template|Deploy spoke|Enter 'true' or 'false' to specify whether the assignment deploys the spoke components of the architecture|
-|Network Watcher resource group|Resource group|Resource group name|Locked - Uses Network Watcher resource group name|
-|Network Watcher resource group|Resource group|Resource group location|Locked - Uses hub location|
-|Azure Network Watcher template|Resource Manager template|Network Watcher location|Location for the Network Watcher resource|
-|Azure Network Watcher template|Resource Manager template|Network Watcher resource group location|Location of the Network Watcher resource group|
+|Azure Network Watcher template|Resource Manager template|Network Watcher location|If Network Watcher is already enabled, this parameter value **must** match the location of the existing Network Watcher resource group.|
+|Azure Network Watcher template|Resource Manager template|Network Watcher resource group location|If Network Watcher is already enabled, this parameter value **must** match the name of the existing Network Watcher resource group.|
## Next steps
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/fedramp-m/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/control-mapping.md
articles to learn about the blueprint and how to deploy this sample:
> [!div class="nextstepaction"]
> [FedRAMP Moderate blueprint - Overview](./index.md)
-> [FodRAMP Moderate blueprint - Deploy steps](./deploy.md)
+> [FedRAMP Moderate blueprint - Deploy steps](./deploy.md)
Additional articles about blueprints and how to use them:
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
import-export https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-data-from-blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/import-export/storage-import-export-data-from-blobs.md
Previously updated : 01/14/2021 Last updated : 02/16/2021
You must:
Perform the following steps to create an export job in the Azure portal. 1. Log on to <https://portal.azure.com/>.
-2. Go to **All services > Storage > Import/export jobs**.
+2. Search for **import/export jobs**.
- ![Go to Import/export jobs](./media/storage-import-export-data-from-blobs/export-from-blob1.png)
+ ![Search for import/export jobs](./media/storage-import-export-data-to-blobs/import-to-blob-1.png)
-3. Click **Create Import/export Job**.
+3. Select **+ New**.
- ![Click Import/export job](./media/storage-import-export-data-from-blobs/export-from-blob2.png)
+ ![Select + New to create a new import/export job](./media/storage-import-export-data-to-blobs/import-to-blob-2.png)
4. In **Basics**:
Perform the following steps to create an export job in the Azure portal.
- Select a subscription. - Enter or select a resource group.
- ![Basics](./media/storage-import-export-data-from-blobs/export-from-blob3.png)
+ ![Basics](./media/storage-import-export-data-from-blobs/export-from-blob-3.png)
5. In **Job details**:
Perform the following steps to create an export job in the Azure portal.
- Specify the blob data you wish to export from your storage account to your blank drive or drives. - Choose to **Export all** blob data in the storage account.
- ![Export all](./media/storage-import-export-data-from-blobs/export-from-blob4.png)
+ ![Export all](./media/storage-import-export-data-from-blobs/export-from-blob-4.png)
- You can specify which containers and blobs to export.
  - **To specify a blob to export**: Use the **Equal To** selector. Specify the relative path to the blob, beginning with the container name. Use *$root* to specify the root container.
  - **To specify all blobs starting with a prefix**: Use the **Starts With** selector. Specify the prefix, beginning with a forward slash '/'. The prefix may be the prefix of the container name, the complete container name, or the complete container name followed by the prefix of the blob name. You must provide the blob paths in valid format to avoid errors during processing, as shown in this screenshot. For more information, see [Examples of valid blob paths](#examples-of-valid-blob-paths).
- ![Export selected containers and blobs](./media/storage-import-export-data-from-blobs/export-from-blob5.png)
+ ![Export selected containers and blobs](./media/storage-import-export-data-from-blobs/export-from-blob-5.png)
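   For example (illustrative paths, not taken from this article): with **Equal To**, `pictures/animals/koala.jpg` exports that single blob and `$root/logo.bmp` exports a blob from the root container; with **Starts With**, `/pictures` exports blobs from every container whose name begins with *pictures*, and `/pictures/animals` exports blobs in the *pictures* container whose names begin with *animals*.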
- You can export from the blob list file.
- ![Export from blob list file](./media/storage-import-export-data-from-blobs/export-from-blob6.png)
+ ![Export from blob list file](./media/storage-import-export-data-from-blobs/export-from-blob-6.png)
> [!NOTE] > If the blob to be exported is in use during data copy, Azure Import/Export service takes a snapshot of the blob and copies the snapshot.
import-export https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-data-to-blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/import-export/storage-import-export-data-to-blobs.md
Previously updated : 01/14/2021 Last updated : 02/16/2021
Perform the following steps to prepare the drives.
Perform the following steps to create an import job in the Azure portal. 1. Log on to https://portal.azure.com/.
-2. Go to **All services > Storage > Import/export jobs**.
+2. Search for **import/export jobs**.
- ![Go to Import/export jobs](./media/storage-import-export-data-to-blobs/import-to-blob1.png)
+ ![Search on import/export jobs](./media/storage-import-export-data-to-blobs/import-to-blob-1.png)
-3. Click **Create Import/export Job**.
+3. Select **+ New**.
- ![Click Create Import/export job](./media/storage-import-export-data-to-blobs/import-to-blob2.png)
+ ![Select New to create a new import/export job](./media/storage-import-export-data-to-blobs/import-to-blob-2.png)
4. In **Basics**:
Perform the following steps to create an import job in the Azure portal.
* Select a subscription. * Enter or select a resource group.
- ![Create import job - Step 1](./media/storage-import-export-data-to-blobs/import-to-blob3.png)
+ ![Create import job - Step 1](./media/storage-import-export-data-to-blobs/import-to-blob-3.png)
5. In **Job details**:
Perform the following steps to create an import job in the Azure portal.
* Select the destination storage account where data will reside. * The dropoff location is automatically populated based on the region of the storage account selected.
- ![Create import job - Step 2](./media/storage-import-export-data-to-blobs/import-to-blob4.png)
+ ![Create import job - Step 2](./media/storage-import-export-data-to-blobs/import-to-blob-4.png)
6. In **Return shipping info**:
Perform the following steps to create an import job in the Azure portal.
> [!TIP] > Instead of specifying an email address for a single user, provide a group email. This ensures that you receive notifications even if an admin leaves.
- ![Create import job - Step 3](./media/storage-import-export-data-to-blobs/import-to-blob5.png)
+ ![Create import job - Step 3](./media/storage-import-export-data-to-blobs/import-to-blob-5.png)
7. In the **Summary**: * Review the job information provided in the summary. Make a note of the job name and the Azure datacenter shipping address to ship disks back to Azure. This information is used later on the shipping label. * Click **OK** to create the import job.
- ![Create import job - Step 4](./media/storage-import-export-data-to-blobs/import-to-blob6.png)
+ ![Create import job - Step 4](./media/storage-import-export-data-to-blobs/import-to-blob-6.png)
### [Azure CLI](#tab/azure-cli)
import-export https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-data-to-files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/import-export/storage-import-export-data-to-files.md
Previously updated : 01/14/2021 Last updated : 02/16/2021
For additional samples, go to [Samples for journal files](#samples-for-journal-f
Perform the following steps to create an import job in the Azure portal. 1. Log on to https://portal.azure.com/.
-2. Go to **All services > Storage > Import/export jobs**.
+2. Search for **import/export jobs**.
- ![Go to Import/export](./media/storage-import-export-data-to-blobs/import-to-blob1.png)
+ ![Search on import/export jobs](./media/storage-import-export-data-to-blobs/import-to-blob-1.png)
-3. Click **Create Import/export Job**.
+3. Select **+ New**.
- ![Click Import/export job](./media/storage-import-export-data-to-blobs/import-to-blob2.png)
+ ![Select New to create a new import/export job](./media/storage-import-export-data-to-blobs/import-to-blob-2.png)
4. In **Basics**:
Perform the following steps to create an import job in the Azure portal.
- Select a subscription. - Select a resource group.
- ![Create import job - Step 1](./media/storage-import-export-data-to-blobs/import-to-blob3.png)
+ ![Create import job - Step 1](./media/storage-import-export-data-to-blobs/import-to-blob-3.png)
3. In **Job details**:
Perform the following steps to create an import job in the Azure portal.
- Select the storage account that the data will be imported into. - The dropoff location is automatically populated based on the region of the storage account selected.
- ![Create import job - Step 2](./media/storage-import-export-data-to-blobs/import-to-blob4.png)
+ ![Create import job - Step 2](./media/storage-import-export-data-to-blobs/import-to-blob-4.png)
4. In **Return shipping info**:
Perform the following steps to create an import job in the Azure portal.
> [!TIP] > Instead of specifying an email address for a single user, provide a group email. This ensures that you receive notifications even if an admin leaves.
- ![Create import job - Step 3](./media/storage-import-export-data-to-blobs/import-to-blob5.png)
+ ![Create import job - Step 3](./media/storage-import-export-data-to-blobs/import-to-blob-5.png)
5. In the **Summary**:
Perform the following steps to create an import job in the Azure portal.
- Provide the Azure datacenter shipping address for shipping disks back to Azure. Ensure that the job name and the full address are mentioned on the shipping label. - Click **OK** to complete import job creation.
- ![Create import job - Step 4](./media/storage-import-export-data-to-blobs/import-to-blob6.png)
+ ![Create import job - Step 4](./media/storage-import-export-data-to-blobs/import-to-blob-6.png)
### [Azure CLI](#tab/azure-cli)
iot-develop https://docs.microsoft.com/en-us/azure/iot-develop/about-iot-develop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/about-iot-develop.md
+
+ Title: Introduction to Azure IoT device and application development
+description: Learn how to use Azure IoT to do embedded device development and build device-enabled cloud applications.
++++ Last updated : 01/11/2021++
+# What is Azure IoT device and application development?
+
+Azure IoT is a collection of managed and platform services that connect, monitor, and control your IoT devices. Azure IoT offers developers a comprehensive set of options. Your options include device platforms, supporting cloud services, SDKs, and tools for building device-enabled cloud applications.
+
+This article overviews several key considerations for developers who are getting started with Azure IoT. These concepts will orient you, as an IoT device developer, to your Azure IoT options and how to begin. Specifically, the article covers these concepts:
+- [Understanding device development roles](#device-development-roles)
+- [Choosing your hardware](#choosing-your-hardware)
+- [Choosing an SDK](#choosing-an-sdk)
+- [Selecting connection options](#selecting-connection-options)
+
+## Device development roles
+This article discusses two common roles that you can observe among device developers. As used here, a role is a collection of related development tasks. It's useful to understand what type of development role you're currently working in. Your role impacts many development choices you make.
+
+* **Device application development:** Aligns with modern development practices, targets many of the higher-order languages, and executes on a general-purpose operating system such as Windows or Linux.
+
+* **Embedded device development:** Describes development targeting resource constrained devices. A resource constrained device will often be used to reduce per unit costs, power consumption, or device size. These devices have direct control over the hardware platform they execute on.
+
+### Device application development
+Device application developers are adapting existing devices to connect to the cloud and integrate into their IoT solutions. These devices can support higher-order languages, such as C# or Python, and often support a robust general-purpose operating system such as Windows or Linux. Common target devices include PCs, containers, Raspberry Pis, and mobile devices.
+
+Rather than develop constrained devices at scale, these developers focus on enabling a specific IoT scenario required by their cloud solution. Some of these developers will also work on constrained devices for their cloud solution. For developers working with constrained devices, see the [Embedded device development](#embedded-device-development) path below.
+
+> [!TIP]
+> See the [Unconstrained Device SDKs](about-iot-sdks.md#unconstrained-device-sdks) to get started.
+
+### Embedded device development
+Embedded development targets constrained devices that have limited memory and processing. Constrained devices restrict what can be achieved compared to a traditional development platform.
+
+Embedded devices typically use a real-time operating system (RTOS), or no operating system at all. Embedded devices have full control over their hardware, due to the lack of a general purpose operating system. That fact makes embedded devices a good choice for real-time systems.
+
+The current embedded SDKs target the **C** language. The embedded SDKs provide either no operating system, or Azure RTOS support. They are designed with embedded targets in mind. The design considerations include the need for a minimal footprint, and a non-memory allocating design.
+
+If your device is able to run a general-purpose operating system, we recommend following the [Device Application Development](#device-application-development) path. It provides a richer set of development options.
+
+> [!TIP]
+> See the [Constrained Device SDKs](about-iot-sdks.md#constrained-device-sdks) to get started.
+
+## Choosing your hardware
+Azure IoT devices are the basic building blocks of an IoT solution and are responsible for observing and interacting with their environment. There are many different types of IoT devices, and it's helpful to understand the kinds of devices that exist and how these can impact your development process.
+
+For more information on the differences between device types covered in this article, read [About IoT Device Types](concepts-iot-device-types.md).
+
+## Choosing an SDK
+As an Azure IoT device developer, you have a diverse set of device SDKs, and Azure service SDKs, to help you build device-enabled cloud applications. The SDKs will streamline your development effort and simplify much of the complexity of connecting and managing devices.
+
+As indicated in the [Device development roles](#device-development-roles) section, there are three kinds of IoT SDKs for device development:
+- Embedded device SDKs (for constrained devices)
+- Device SDKs (for using higher order languages to connect existing devices to IoT applications)
+- Service SDKs (for building Azure IoT solutions that connect devices to services)
+
+To learn more about choosing an Azure IoT device or service SDK, see [Overview of Azure IoT Device SDKs](about-iot-sdks.md).
+
+## Selecting connection options
+An important step in the development process is choosing the set of options you will use to connect and manage your devices. There are two critical aspects to consider:
+- Choosing an IoT application platform to host your devices. For Azure IoT, this means choosing IoT Hub or IoT Central.
+- Choosing developer tools to help you connect, manage, and monitor devices.
+
+To learn more about selecting an application platform and tools, see [Overview: Connection options for Azure IoT device developers](concepts-overview-connection-options.md).
+
+## Next steps
+Select one of the following quickstart series that is most relevant to your development role. These articles demonstrate the basics of creating an Azure IoT application to host devices, using an SDK, connecting a device, and sending telemetry.
+- For device application development: [Quickstart: Send telemetry from a device to Azure IoT Central](quickstart-send-telemetry-python.md)
+- For embedded device development: [Getting started with Azure IoT embedded device development](quickstart-device-development.md)
iot-develop https://docs.microsoft.com/en-us/azure/iot-develop/about-iot-sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/about-iot-sdks.md
+
+ Title: Overview of Azure IoT device SDK options
+description: Learn which Azure IoT device SDK to use based on your development role and tasks.
++++ Last updated : 02/11/2021++
+# Overview of Azure IoT Device SDKs
+
+The Azure IoT device SDKs are a set of device client libraries, developer guides, samples, and documentation. The device SDKs help you to programmatically connect devices to Azure IoT services.
++
+As the diagram shows, there are several device SDKs available to fit your device and programming language needs. Guidance on selecting the appropriate device SDK is available in [Which SDK should I use](#which-sdk-should-i-use). There are also Azure IoT service SDKs available to connect your cloud-based application with Azure IoT services on the backend. This article focuses on the device SDKs, but you can learn more about Azure service SDKs [here](#service-sdks).
+
+## Why should I use the Azure IoT Device SDKs?
+
+To connect devices to Azure IoT, you can build a custom connection layer or use Azure IoT Device SDKs. There are several advantages to using Azure IoT Device SDKs:
+
+| Development cost &nbsp; &nbsp; &nbsp; &nbsp; | Custom connection layer | Azure IoT Device SDKs |
+| :-- | :-- | :-- |
+| Support | Need to support and document whatever you build | Have access to Microsoft support (GitHub, Microsoft Q&A, Microsoft Docs, Customer Support teams) |
+| New Features | Need to add new Azure features to custom middleware | Can immediately take advantage of new features that Microsoft constantly adds to the IoT SDKs |
+| Investment | Invest hundreds of hours of embedded development to design, build, test, and maintain a custom version | Can take advantage of free, open-source tools. The only cost associated with the SDKs is the learning curve. |
+
+## Which SDK should I use?
+
+Azure IoT Device SDKs are available in popular programming languages including C, C#, Java, Node.js, and Python. There are two primary considerations when you choose an SDK: device capabilities, and your team's familiarity with the programming language.
+
+### Device capabilities
+
+When you're choosing an SDK, you'll need to consider the limits of the devices you're using. A constrained device is one that has a single micro-controller (MCU) and limited memory. If you're using a constrained device, we recommend that you use the [Embedded C SDK](#embedded-c-sdk). This SDK is designed to provide the bare minimum set of capabilities to connect to Azure IoT. You can also select components (MQTT client, TLS, and socket libraries) that are most optimized for your embedded device. If your constrained device also runs Azure RTOS, you can use the Azure RTOS middleware to connect to Azure IoT. The Azure RTOS middleware wraps the Embedded C SDK with extra functionality to simplify connecting your Azure RTOS device to the cloud.
+
+An unconstrained device is one that has a more robust CPU, which is capable of running an operating system to support a language runtime such as .NET or Python. If you're using an unconstrained device, the main consideration is familiarity with the language.
+
+### Your team's familiarity with the programming language
+
+Azure IoT device SDKs are implemented in multiple languages, so you can choose the language that you prefer. The device SDKs also integrate with other familiar, language-specific tools. Being able to work with a familiar development language and tools enables your team to optimize the development cycle of research, prototyping, product development, and ongoing maintenance.
+
+Whenever possible, select an SDK that feels familiar to your development team. All Azure IoT SDKs are open source and have several samples available for your team to evaluate and test before committing to a specific SDK.
+
+## How can I get started?
+
+The place to start is to explore the GitHub repositories of the Azure Device SDKs. You can also try a [quickstart](quickstart-send-telemetry-python.md) that shows how to quickly use an SDK to send telemetry to Azure IoT.
+
+Your options to get started depend on what kind of device you have:
+- For constrained devices, use the [Embedded C SDK](#embedded-c-sdk).
+- For devices that run on Azure RTOS, you can develop with the [Azure RTOS middleware](#azure-rtos-middleware).
+- For unconstrained devices, you can [choose an SDK](#unconstrained-device-sdks) in a language of your choice.
+
+### Constrained Device SDKs
+These SDKs are specialized to run on devices with limited compute or memory resources. To learn more about common device types, see [Overview of Azure IoT device types](concepts-iot-device-types.md).
+
+#### Embedded C SDK
+* [GitHub Repository](https://github.com/Azure/azure-sdk-for-c/tree/1.0.0/sdk/docs/iot)
+* [Samples](https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/samples/iot/README.md)
+* [Reference Documentation](https://azure.github.io/azure-sdk-for-c/)
+* [How to build the Embedded C SDK](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot#build)
+* [Size chart for constrained devices](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot#size-chart)
+
+#### Azure RTOS Middleware
+
+* [GitHub Repository](https://github.com/azure-rtos/threadx)
+* [Getting Started Guides](https://github.com/azure-rtos/getting-started) and [more samples](https://github.com/azure-rtos/samples)
+* [Reference Documentation](https://docs.microsoft.com/azure/rtos/threadx/)
+
+### Unconstrained Device SDKs
+These SDKs can run on any device that can support a higher-order language runtime. This includes devices such as PCs, Raspberry Pis, and smartphones. They're differentiated primarily by language, so you can choose whichever library best suits your team and scenario.
+
+#### C Device SDK
+* [GitHub Repository](https://github.com/Azure/azure-iot-sdk-c)
+* [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples)
+* [Packages](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)
+* [Reference Documentation](/azure/iot-hub/iot-c-sdk-ref/)
+* [Edge Module Reference Documentation](/azure/iot-hub/iot-c-sdk-ref/iothub-module-client-h)
+* [Compile the C Device SDK](https://github.com/Azure/azure-iot-sdk-c/blob/master/iothub_client/readme.md#compiling-the-c-device-sdk)
+* [Porting the C SDK to other platforms](https://github.com/Azure/azure-c-shared-utility/blob/master/devdoc/porting_guide.md)
+* [Developer documentation](https://github.com/Azure/azure-iot-sdk-c/tree/master/doc) for information on cross-compiling and getting started on different platforms
+* [Azure IoT Hub C SDK resource consumption information](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/c_sdk_resource_information.md)
+
+#### C# Device SDK
+
+* [GitHub Repository](https://github.com/Azure/azure-iot-sdk-csharp)
+* [Samples](https://github.com/Azure/azure-iot-sdk-csharp#samples)
+* [Package](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client/)
+* [Reference Documentation](/dotnet/api/microsoft.azure.devices?view=azure-dotnet&preserve-view=true)
+* [Edge Module Reference Documentation](/dotnet/api/microsoft.azure.devices.client.moduleclient?view=azure-dotnet&preserve-view=true)
+
+#### Java Device SDK
+
+* [GitHub Repository](https://github.com/Azure/azure-iot-sdk-java)
+* [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples)
+* [Package](https://github.com/Azure/azure-iot-sdk-jav#for-the-device-sdk)
+* [Reference Documentation](/java/api/com.microsoft.azure.sdk.iot.device)
+* [Edge Module Reference Documentation](/java/api/com.microsoft.azure.sdk.iot.device.moduleclient?view=azure-java-stable&preserve-view=true)
+
+#### Node.js Device SDK
+
+* [GitHub Repository](https://github.com/Azure/azure-iot-sdk-node)
+* [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/master/device/samples)
+* [Package](https://www.npmjs.com/package/azure-iot-device)
+* [Reference Documentation](/javascript/api/azure-iot-device/?view=azure-iot-typescript-latest&preserve-view=true)
+* [Edge Module Reference Documentation](/javascript/api/azure-iot-device/moduleclient?view=azure-node-latest&preserve-view=true)
+
+#### Python Device SDK
+
+* [GitHub Repository](https://github.com/Azure/azure-iot-sdk-python)
+* [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples)
+* [Package](https://pypi.org/project/azure-iot-device/)
+* [Reference Documentation](/python/api/azure-iot-device)
+* [Edge Module Reference Documentation](/python/api/azure-iot-device/azure.iot.device.iothubmoduleclient?view=azure-python&preserve-view=true)
+
+### Service SDKs
+Azure IoT also offers service SDKs that enable you to build solution-side applications to manage devices, gain insights, visualize data, and more. These SDKs are specific to each Azure IoT service and are available in C#, Java, JavaScript, and Python to simplify your development experience.
+
+#### IoT Hub
+
+The IoT Hub service SDKs allow you to build applications that easily interact with your IoT Hub to manage devices and security. You can use these SDKs to send cloud-to-device messages, invoke direct methods on your devices, update device properties, and more.
+
+[**Learn more about IoT Hub**](https://azure.microsoft.com/services/iot-hub/) | [**Try controlling a device**](/azure/iot-hub/quickstart-control-device-python)
+
+**C# IoT Hub Service SDK**: [GitHub Repository](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/iothub/service) | [Package](https://www.nuget.org/packages/Microsoft.Azure.Devices/) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/iothub/service/samples) | [Reference Documentation](/dotnet/api/microsoft.azure.devices)
+
+**Java IoT Hub Service SDK**: [GitHub Repository](https://github.com/Azure/azure-iot-sdk-jav#for-the-service-sdk) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/service/iot-service-samples) | [Reference Documentation](/java/api/com.microsoft.azure.sdk.iot.service)
+
+**JavaScript IoT Hub Service SDK**: [GitHub Repository](https://github.com/Azure/azure-iot-sdk-node/tree/master/service) | [Package](https://www.npmjs.com/package/azure-iothub) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/master/service/samples) | [Reference Documentation](/javascript/api/azure-iothub/?view=azure-iot-typescript-latest&preserve-view=true)
+
+**Python IoT Hub Service SDK**: [GitHub Repository](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-hub) | [Package](https://pypi.python.org/pypi/azure-iot-hub/) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-hub/samples) | [Reference Documentation](/python/api/azure-iot-hub)
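+As a brief, illustrative example of what the service SDKs enable, the following Python sketch uses the `azure-iot-hub` package to send a cloud-to-device message. The connection string, device ID, and message content are placeholders, and the exact API surface can vary between SDK versions.
+
+```python
+from azure.iot.hub import IoTHubRegistryManager
+
+# Placeholders: your IoT hub's service connection string and a registered device ID.
+IOTHUB_CONNECTION_STRING = "<iothub-service-connection-string>"
+DEVICE_ID = "<device-id>"
+
+# The registry manager wraps the IoT Hub service APIs for device management and messaging.
+registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
+
+# Send a cloud-to-device (C2D) message with a custom property to the device.
+registry_manager.send_c2d_message(DEVICE_ID, "Reboot requested", properties={"priority": "high"})
+```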
+
+#### Azure Digital Twins
+
+Azure Digital Twins is a platform as a service (PaaS) offering that enables the creation of knowledge graphs based on digital models of entire environments. These environments could be buildings, factories, farms, energy networks, railways, stadiums, and more, even entire cities. These digital models can be used to gain insights that drive better products, optimized operations, reduced costs, and breakthrough customer experiences. Azure IoT offers service SDKs to make it easy to build applications that use the power of Azure Digital Twins.
+
+[**Learn more about Azure Digital Twins**](https://azure.microsoft.com/services/digital-twins/) | [**Code an ADT application**](/azure/digital-twins/tutorial-code)
+
+**C# ADT Service SDK**: [GitHub Repository](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/digitaltwins/Azure.DigitalTwins.Core) | [Package](https://www.nuget.org/packages/Azure.DigitalTwins.Core) | [Samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/digitaltwins/Azure.DigitalTwins.Core/samples) | [Reference Documentation](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true)
+
+**Java ADT Service SDK**: [GitHub Repository](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/digitaltwins/azure-digitaltwins-core) | [Package](https://search.maven.org/artifact/com.azure/azure-digitaltwins-core/1.0.0/jar) | [Samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/digitaltwins/azure-digitaltwins-core/src/samples) | [Reference Documentation](/java/api/overview/azure/digitaltwins/client?preserve-view=true&view=azure-java-stable)
+
+**Node.js ADT Service SDK**: [GitHub Repository](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/digitaltwins/digital-twins-core) | [Package](https://www.npmjs.com/package/@azure/digital-twins-core) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/digitaltwins/digital-twins-core/samples) | [Reference Documentation](/javascript/api/@azure/digital-twins-core/?branch=master&view=azure-node-latest&preserve-view=true)
+
+**Python ADT Service SDK**: [GitHub Repository](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/digitaltwins/azure-digitaltwins-core) | [Package](https://pypi.org/project/azure-digitaltwins-core/) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/digitaltwins/azure-digitaltwins-core/samples) | [Reference Documentation](/python/api/azure-digitaltwins-core/azure.digitaltwins.core?view=azure-python&preserve-view=true)
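+For a sense of the developer experience, here is a minimal Python sketch that queries twins by using the `azure-digitaltwins-core` and `azure-identity` packages. The instance URL and query are illustrative, and the sketch assumes you already have an Azure Digital Twins instance and permission to read it.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.digitaltwins.core import DigitalTwinsClient
+
+# Placeholder host name for your Azure Digital Twins instance.
+ADT_INSTANCE_URL = "https://<your-instance>.api.<region>.digitaltwins.azure.net"
+
+# DefaultAzureCredential resolves Azure CLI, environment, or managed identity credentials.
+credential = DefaultAzureCredential()
+client = DigitalTwinsClient(ADT_INSTANCE_URL, credential)
+
+# Run a simple query that returns every twin in the graph and print each twin's ID.
+for twin in client.query_twins("SELECT * FROM digitaltwins"):
+    print(twin["$dtId"])
+```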
+
+#### Device Provisioning Service
+
+The IoT Hub Device Provisioning Service (DPS) is a helper service for IoT Hub that enables zero-touch, just-in-time provisioning to the right IoT hub without requiring human intervention. DPS enables the provisioning of millions of devices in a secure and scalable way. The DPS Service SDKs allow you to build applications that can securely manage your devices by creating enrollment groups and doing bulk operations.
+
+[**Learn more about the Device Provisioning Service**](/azure/iot-dps/) | [**Try creating a group enrollment for X.509 Devices**](/azure/iot-dps/quick-enroll-device-x509-csharp)
+
+**C# Device Provisioning Service SDK**: [GitHub Repository](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/provisioning/service) | [Package](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/provisioning/service/samples) | [Reference Documentation](/dotnet/api/microsoft.azure.devices.provisioning.service?view=azure-dotnet&preserve-view=true)
+
+**Java Device Provisioning Service SDK**: [GitHub Repository](https://github.com/Azure/azure-iot-sdk-java/tree/master/provisioning/provisioning-service-client/src) | [Package](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-service-client) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/provisioning/provisioning-samples#provisioning-service-client) | [Reference Documentation](/java/api/com.microsoft.azure.sdk.iot.provisioning.service?view=azure-java-stable&preserve-view=true)
+
+**Node.js Device Provisioning Service SDK**: [GitHub Repository](https://github.com/Azure/azure-iot-sdk-node/tree/master/provisioning/service) | [Package](https://www.npmjs.com/package/azure-iot-provisioning-service) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/master/provisioning/service/samples) | [Reference Documentation](/javascript/api/azure-iot-provisioning-service)
+
+## Next Steps
+
+* [Quickstart: Connect a device to IoT Central (Python)](quickstart-send-telemetry-python.md)
+* [Quickstart: Connect a device to IoT Hub (Python)](quickstart-send-telemetry-cli-python.md)
+* [Get started with embedded development](quickstart-device-development.md)
+* Learn more about the [benefits of developing using Azure IoT SDKs](https://azure.microsoft.com/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/)
iot-develop https://docs.microsoft.com/en-us/azure/iot-develop/concepts-iot-device-types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-iot-device-types.md
+
+ Title: Overview of Azure IoT device types
+description: Learn the different device types supported by Azure IoT and the tools available.
++++ Last updated : 01/11/2021++
+# Overview of Azure IoT device types
+IoT devices exist across a broad selection of hardware platforms, from small 8-bit MCUs all the way up to the latest x86 CPUs found in desktop computers. Many variables factor into the decision of which hardware to choose for an IoT device, and this article outlines some of the key differences.
+
+## Key hardware differentiators
+Some important factors when choosing your hardware are cost, power consumption, networking, and available inputs and outputs.
+
+* **Cost:** Smaller, cheaper devices are typically used when mass-producing the final product. However, the trade-off is that developing for a highly constrained device can be more expensive. The development cost can be spread across all produced devices, so the per-unit development cost will be low.
+
+* **Power:** How much power a device consumes is important if the device runs on batteries and isn't connected to the power grid. MCUs are often designed for low-power scenarios and can be a better choice for extending battery life.
+
+* **Network Access:** There are many ways to connect a device to a cloud service. Ethernet, Wi-Fi, and cellular are some of the available options. The connection type you choose depends on where the device is deployed and how it's used. For example, cellular can be an attractive option given its high coverage, but it can be expensive for high-traffic devices. Hardwired Ethernet provides cheaper data costs, but with the downside of being less portable.
+
+* **Inputs and Outputs:** The inputs and outputs available on the device directly affect the device's operating capabilities. A microcontroller typically has many I/O functions built directly into the chip, which provides a wide choice of sensors to connect directly.
+
+## Microcontrollers vs Microprocessors
+IoT devices can be separated into two broad categories, microcontrollers (MCUs) and microprocessors (MPUs).
+
+**MCUs** are less expensive and simpler to operate than MPUs. An MCU will contain many of the functions, such as memory, interfaces, and I/O within the chip itself. An MPU will draw this functionality from components in supporting chips. An MCU will often use a real-time OS (RTOS) or run bare-metal (No OS) and provide real-time response and highly deterministic reactions to external events.
+
+**MPUs** will generally run a general-purpose OS, such as Windows, Linux, or macOS, that provides a non-deterministic real-time response. There's typically no guarantee of when a task will be completed.
++
+The following table shows some of the defining differences between an MCU-based and an MPU-based system:
+
+||Microcontroller (MCU)|Microprocessor (MPU)|
+|-|-|-|
+|**CPU**| Less | More |
+|**RAM**| Less | More |
+|**Flash**| Less | More |
+|**OS**| No or RTOS | General Purpose |
+|**Development Difficulty**| Harder | Easier |
+|**Power Consumption**| Lower | Higher |
+|**Cost**| Lower | Higher |
+|**Deterministic**| Yes | No - with exceptions|
+|**Device Size**| Smaller | Larger |
iot-develop https://docs.microsoft.com/en-us/azure/iot-develop/concepts-overview-connection-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-overview-connection-options.md
+
+ Title: Learn about connection options for Azure IoT device developers
+description: Learn about commonly used device connection options and tools for Azure IoT device developers.
++++ Last updated : 02/11/2021++
+# Overview: Connection options for Azure IoT device developers
+As a developer who works with devices, you have several options for connecting and managing Azure IoT devices. This article overviews the most commonly used options and tools to help you connect and manage devices.
+
+As you evaluate Azure IoT connection options for your devices, it's helpful to compare the following items:
+- Azure IoT device application platforms
+- Tools for connecting and managing devices
+
+## Application platforms: IoT Central and IoT Hub
+Azure IoT contains two services that are platforms for device-enabled cloud applications: Azure IoT Central, and Azure IoT Hub. You can use either one to host an IoT application and connect devices.
+- [IoT Central](../iot-central/core/overview-iot-central.md) is designed to reduce the complexity and cost of working with IoT solutions. It's a software-as-a-service (SaaS) application that provides a complete platform for hosting IoT applications. You can use the IoT Central web UI to streamline the entire lifecycle of creating and managing IoT applications. The web UI simplifies the tasks of creating applications, and connecting and managing from a few up to millions of devices. IoT Central uses IoT Hub to create and manage applications, but keeps the details transparent to the user. In general, IoT Central provides reduced complexity and development effort, and simplified device management through the web UI.
+- [IoT Hub](../iot-hub/about-iot-hub.md) acts as a central message hub for bi-directional communication between IoT applications and connected devices. It's a platform-as-a-service (PaaS) application that also provides a platform for hosting IoT applications. Like IoT Central, it can scale to support millions of devices. In general, IoT Hub offers greater control and customization over your application design, and more developer tool options for working with the service, at the cost of some increase in development and management complexity.
+
+## Tools to connect and manage devices
+After you select IoT Hub or IoT Central to host your IoT application, you have several options of developer tools. You can use these tools to connect to your selected IoT application platform, and to create and manage applications and devices. The following table summarizes common tool options.
+
+> [!NOTE]
+> In addition to the following tools, you can programmatically create and manage IoT applications by using REST APIs, Azure SDKs, or Azure Resource Manager templates. You can learn more in the [IoT Hub](../iot-hub/about-iot-hub.md) and [IoT Central](../iot-central/core/overview-iot-central.md) service documentation.
+
+|Tool |Supports IoT platform &nbsp; &nbsp; &nbsp; &nbsp; |Documentation |Description |
+|---|---|---|---|
+|Central web UI | Central | [Central quickstart](../iot-central/core/quick-deploy-iot-central.md) | Browser-based portal for IoT Central. |
+|Azure portal | Hub, Central | [Create an IoT hub with Azure portal](../iot-hub/iot-hub-create-through-portal.md), [Manage IoT Central from the Azure portal](../iot-central/core/howto-manage-iot-central-from-portal.md)| Browser-based portal for IoT Hub and devices. Also works with other Azure resources including IoT Central. |
+|Azure CLI | Hub, Central | [Create an IoT hub with CLI](../iot-hub/iot-hub-create-using-cli.md), [Manage IoT Central from Azure CLI](../iot-central/core/howto-manage-iot-central-from-cli.md) | Command-line interface for creating and managing IoT applications. |
+|Azure PowerShell | Hub, Central | [Create an IoT hub with PowerShell](../iot-hub/iot-hub-create-using-powershell.md), [Manage IoT Central from Azure PowerShell](../iot-central/core/howto-manage-iot-central-from-powershell.md) | PowerShell interface for creating and managing IoT applications |
+|Azure IoT Tools for VS Code | Hub | [Create an IoT hub with Tools for VS Code](../iot-hub/iot-hub-create-use-iot-toolkit.md) | VS Code extension for IoT Hub applications. |
+|Azure IoT Explorer | Hub | [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer) | Cannot create IoT hubs. Connects to an existing IoT hub to manage devices. Often used with CLI or Portal.|
+
+## Next steps
+To learn more about your options for connecting devices to Azure IoT, explore the following quickstarts:
+- IoT Central: [Create an Azure IoT Central application](../iot-central/core/quick-deploy-iot-central.md)
+- IoT Hub: [Send telemetry from a device to an IoT hub and monitor it with the Azure CLI](../iot-hub/quickstart-send-telemetry-cli.md)
iot-develop https://docs.microsoft.com/en-us/azure/iot-develop/quickstart-device-development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-device-development.md
+
+ Title: Azure IoT embedded device development quickstart
+description: A quickstart guide that shows how to do embedded device development using Azure RTOS and Azure IoT.
++++ Last updated : 02/15/2021++
+# Getting started with Azure IoT embedded device development
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)
+
+This getting started guide contains a set of quickstarts that shows you how to start working with embedded devices and Azure IoT.
+
+In each quickstart, you complete the following basic tasks:
+* Install a set of embedded development tools for programming a specific device in C
+* Build an image that includes Azure RTOS components and samples, and then flash a device
+* Securely connect a device to Azure IoT
+* View device telemetry, view properties, and invoke cloud-to-device methods
+
+## Quickstarts
+The following tutorials are included in the getting started guide:
+
+|Quickstart|Device|
+||--|
+|[Getting started with the ST Microelectronics B-L475E-IOT01 Discovery kit](https://go.microsoft.com/fwlink/p/?linkid=2129536) |[ST Microelectronics B-L475E-IOT01](https://www.st.com/content/st_com/en/products/evaluation-tools/product-evaluation-tools/mcu-mpu-eval-tools/stm32-mcu-mpu-eval-tools/stm32-discovery-kits/b-l475e-iot01a.html)|
+|[Getting started with the ST Microelectronics B-L4S5I-IOT01 Discovery kit](https://github.com/azure-rtos/getting-started/tree/master/STMicroelectronics/STM32L4_L4+) |[ST Microelectronics B-L4S5I-IOT01](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html)|
+|[Getting started with the NXP MIMXRT1060-EVK Evaluation kit](https://go.microsoft.com/fwlink/p/?linkid=2129821) |[NXP MIMXRT1060-EVK](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/mimxrt1060-evk-i-mx-rt1060-evaluation-kit:MIMXRT1060-EVK)|
+|[Getting started with the NXP MIMXRT1050-EVKB Evaluation kit](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1050-EVKB) |[NXP MIMXRT1050-EVKB](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/i-mx-rt1050-evaluation-kit:MIMXRT1050-EVK)|
+|[Getting started with the Microchip ATSAME54-XPRO Evaluation kit](https://go.microsoft.com/fwlink/p/?linkid=2129537) |[Microchip ATSAME54-XPRO](https://www.microchip.com/developmenttools/productdetails/atsame54-xpro)|
+|[Getting started with the MXChip AZ3166 IoT DevKit](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166) |[MXChip AZ3166 IoT DevKit](https://microsoft.github.io/azure-iot-developer-kit/)|
+|[Getting started with the Renesas Starter Kit+ for RX65N-2MB](https://github.com/azure-rtos/getting-started/tree/master/Renesas/RSK_RX65N_2MB) |[Renesas Starter Kit+ for RX65N-2MB](https://www.renesas.com/us/en/products/microcontrollers-microprocessors/rx-32-bit-performance-efficiency-mcus/rx65n-2mb-starter-kit-plus-renesas-starter-kit-rx65n-2mb)|
+
+## Next steps
+After you complete a device-specific quickstart in this guide, explore the other device-specific articles and samples in the Azure RTOS getting started repo:
+* [Getting started with Azure RTOS and Azure IoT](https://github.com/azure-rtos/getting-started)
iot-develop https://docs.microsoft.com/en-us/azure/iot-develop/quickstart-send-telemetry-cli-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-cli-node.md
+
+ Title: Send device telemetry to Azure IoT Hub quickstart (Node.js)
+description: In this quickstart, you use the Azure IoT Hub Device SDK for Node.js to send telemetry from a device to an IoT hub.
+++
+ms.devlang: node
+ Last updated : 01/11/2021++
+# Quickstart: Send telemetry from a device to an IoT hub (Node.js)
+
+**Applies to**: [Device application development](about-iot-develop.md#device-application-development)
+
+In this quickstart, you learn a basic IoT device application development workflow. You use the Azure CLI to create an Azure IoT hub and a simulated device, then you use the Azure IoT Node.js SDK to access the device and send telemetry to the hub.
+
+## Prerequisites
+- If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Azure CLI. You can run all commands in this quickstart using the Azure Cloud Shell, an interactive CLI shell that runs in your browser. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this quickstart requires Azure CLI version 2.0.76 or later. Run az --version to find the version. To install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+- [Node.js 10+](https://nodejs.org). If you are using the Azure Cloud Shell, do not update the installed version of Node.js. The Azure Cloud Shell already has the latest Node.js version.
+
+ Verify the current version of Node.js on your development machine by using the following command:
+
+ ```cmd/sh
+ node --version
+ ```
++
+## Use the Node.js SDK to send messages
+In this section, you will use the Node.js SDK to send messages from your simulated device to your IoT hub.
+
+1. Open a new terminal window. You will use this terminal to install the Node.js SDK and work with Node.js sample code. You should now have two terminals open: the one you just opened to work with Node.js, and the CLI shell that you used in previous sections to enter Azure CLI commands.
+
+1. Copy the [Azure IoT Node.js SDK device samples](https://github.com/Azure/azure-iot-sdk-node/tree/master/device/samples) to your local machine:
+
+ ```console
+ git clone https://github.com/Azure/azure-iot-sdk-node
+ ```
+
+1. Navigate to the *azure-iot-sdk-node/device/samples* directory:
+
+ ```console
+ cd azure-iot-sdk-node/device/samples
+ ```
+1. Install the Azure IoT Node.js SDK and necessary dependencies:
+
+ ```console
+ npm install
+ ```
+ This command installs the proper dependencies as specified in the *package.json* file in the device samples directory.
+
+1. Set the Device Connection String as an environment variable called `DEVICE_CONNECTION_STRING`. The string value to use is the string you obtained in the previous section after creating your simulated Node.js device.
+
+ **Windows (cmd)**
+
+ ```console
+ set DEVICE_CONNECTION_STRING=<your connection string here>
+ ```
+
+ > [!NOTE]
+ > For Windows CMD there are no quotation marks surrounding the connection string.
+
+ **Linux (bash)**
+
+ ```bash
+ export DEVICE_CONNECTION_STRING="<your connection string here>"
+ ```
+
+1. In your open CLI shell, run the [az iot hub monitor-events](https://docs.microsoft.com/cli/azure/ext/azure-iot/iot/hub?view=azure-cli-latest#ext-azure-iot-az-iot-hub-monitor-events&preserve-view=true) command to begin monitoring for events on your simulated IoT device. Event messages will be printed in the terminal as they arrive.
+
+ ```azurecli
+ az iot hub monitor-events --output table --hub-name {YourIoTHubName}
+ ```
+
+1. In your Node.js terminal, run the code for the installed sample file *simple_sample_device.js* . This code accesses the simulated IoT device and sends a message to the IoT hub.
+
+ To run the Node.js sample from the terminal:
+ ```console
+ node ./simple_sample_device.js
+ ```
+
+ Optionally, you can run the Node.js code from the sample in your JavaScript IDE:
+ ```javascript
+ 'use strict';
+
+ const Protocol = require('azure-iot-device-mqtt').Mqtt;
+ // Uncomment one of these transports and then change it in fromConnectionString to test other transports
+ // const Protocol = require('azure-iot-device-amqp').AmqpWs;
+ // const Protocol = require('azure-iot-device-http').Http;
+ // const Protocol = require('azure-iot-device-amqp').Amqp;
+ // const Protocol = require('azure-iot-device-mqtt').MqttWs;
+ const Client = require('azure-iot-device').Client;
+ const Message = require('azure-iot-device').Message;
+
+ // String containing Hostname, Device Id & Device Key in the following formats:
+ // "HostName=<iothub_host_name>;DeviceId=<device_id>;SharedAccessKey=<device_key>"
+ const deviceConnectionString = process.env.DEVICE_CONNECTION_STRING;
+ let sendInterval;
+
+ function disconnectHandler () {
+ clearInterval(sendInterval);
+ client.open().catch((err) => {
+ console.error(err.message);
+ });
+ }
+
+ // The AMQP and HTTP transports have the notion of completing, rejecting or abandoning the message.
+ // For example, this is only functional in AMQP and HTTP:
+ // client.complete(msg, printResultFor('completed'));
+ // If using MQTT calls to complete, reject, or abandon are no-ops.
+ // When completing a message, the service that sent the C2D message is notified that the message has been processed.
+ // When rejecting a message, the service that sent the C2D message is notified that the message won't be processed by the device. the method to use is client.reject(msg, callback).
+ // When abandoning the message, IoT Hub will immediately try to resend it. The method to use is client.abandon(msg, callback).
+ // MQTT is simpler: it accepts the message by default, and doesn't support rejecting or abandoning a message.
+ function messageHandler (msg) {
+ console.log('Id: ' + msg.messageId + ' Body: ' + msg.data);
+ client.complete(msg, printResultFor('completed'));
+ }
+
+ function generateMessage () {
+ const windSpeed = 10 + (Math.random() * 4); // range: [10, 14]
+ const temperature = 20 + (Math.random() * 10); // range: [20, 30]
+ const humidity = 60 + (Math.random() * 20); // range: [60, 80]
+ const data = JSON.stringify({ deviceId: 'myFirstDevice', windSpeed: windSpeed, temperature: temperature, humidity: humidity });
+ const message = new Message(data);
+ message.properties.add('temperatureAlert', (temperature > 28) ? 'true' : 'false');
+ return message;
+ }
+
+ function errorCallback (err) {
+ console.error(err.message);
+ }
+
+ function connectCallback () {
+ console.log('Client connected');
+ // Create a message and send it to the IoT Hub every two seconds
+ sendInterval = setInterval(() => {
+ const message = generateMessage();
+ console.log('Sending message: ' + message.getData());
+ client.sendEvent(message, printResultFor('send'));
+ }, 2000);
+
+ }
+
+ // fromConnectionString must specify a transport constructor, coming from any transport package.
+ let client = Client.fromConnectionString(deviceConnectionString, Protocol);
+
+ client.on('connect', connectCallback);
+ client.on('error', errorCallback);
+ client.on('disconnect', disconnectHandler);
+ client.on('message', messageHandler);
+
+ client.open()
+ .catch(err => {
+ console.error('Could not connect: ' + err.message);
+ });
+
+ // Helper function to print results in the console
+ function printResultFor(op) {
+ return function printResult(err, res) {
+ if (err) console.log(op + ' error: ' + err.toString());
+ if (res) console.log(op + ' status: ' + res.constructor.name);
+ };
+ }
+ ```
+
+As the Node.js code sends a simulated telemetry message from your device to the IoT hub, the message appears in your CLI shell that is monitoring events:
+
+```output
+event:
+ component: ''
+ interface: ''
+ module: ''
+ origin: <your device name>
+ payload: '{"deviceId":"myFirstDevice","windSpeed":11.853592092144627,"temperature":22.62484121157508,"humidity":66.17960805575937}'
+```
+
+Your device is now securely connected and sending telemetry to Azure IoT Hub.
+
+## Clean up resources
+If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete them.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+
+To delete a resource group by name:
+1. Run the [az group delete](https://docs.microsoft.com/cli/azure/group?view=azure-cli-latest#az-group-delete&preserve-view=true) command. This command removes the resource group, the IoT Hub, and the device registration you created.
+
+ ```azurecli
+ az group delete --name MyResourceGroup
+ ```
+1. Run the [az group list](https://docs.microsoft.com/cli/azure/group?view=azure-cli-latest#az-group-list&preserve-view=true) command to confirm the resource group is deleted.
+
+ ```azurecli
+ az group list
+ ```
+
+## Next steps
+
+In this quickstart, you learned a basic Azure IoT application workflow for securely connecting a device to the cloud and sending device-to-cloud telemetry. You used the Azure CLI to create an IoT hub and a simulated device, then you used the Azure IoT Node.js SDK to access the device and send telemetry to the hub.
+
+As a next step, explore the Azure IoT Node.js SDK through application samples.
+
+- [More Node.js Samples](https://github.com/Azure/azure-iot-sdk-node/tree/master/device/samples): This directory contains more samples from the Node.js SDK repository to showcase IoT Hub scenarios.
iot-develop https://docs.microsoft.com/en-us/azure/iot-develop/quickstart-send-telemetry-cli-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-cli-python.md
+
+ Title: Send device telemetry to Azure IoT Hub quickstart (Python)
+description: In this quickstart, you use the Azure IoT Hub Device SDK for Python to send telemetry from a device to an IoT hub.
+++
+ms.devlang: python
+ Last updated : 01/11/2021++
+# Quickstart: Send telemetry from a device to an Azure IoT hub (Python)
+
+**Applies to**: [Device application development](about-iot-develop.md#device-application-development)
+
+In this quickstart, you learn a basic IoT device application development workflow. You use the Azure CLI to create an Azure IoT hub and a device, then you use the Azure IoT Python SDK to build a simulated client device and send telemetry to the hub.
+
+## Prerequisites
+- If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Azure CLI. You can run all commands in this quickstart using the Azure Cloud Shell, an interactive CLI shell that runs in your browser. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this quickstart requires Azure CLI version 2.0.76 or later. Run az --version to find the version. To install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+- [Python 3.7+](https://www.python.org/downloads/). For other versions of Python supported, see [Azure IoT Device Features](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device#azure-iot-device-features).
+
+ To ensure that your Python version is up to date, run `python --version`. If you have both Python 2 and Python 3 installed, and are using a Python 3 environment, install all libraries using `pip3`. This ensures that the libraries are installed to your Python 3 runtime.
+ > [!IMPORTANT]
+ > In the Python installer, select the option to **Add Python to PATH**. If you already have Python 3.7 or higher installed, confirm that you've added the Python installation folder to the `PATH` environment variable.
++
+## Use the Python SDK to send messages
+In this section, you will use the Python SDK to send messages from your simulated device to your IoT hub.
+
+1. Open a new terminal window. You will use this terminal to install the Python SDK and work with Python sample code. You should now have two terminals open: the one you just opened to work with Python, and the CLI shell that you used in previous sections to enter Azure CLI commands.
+
+1. Copy the [Azure IoT Python SDK device samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples) to your local machine:
+
+ ```console
+ git clone https://github.com/Azure/azure-iot-sdk-python
+ ```
+
+ Then navigate to the *azure-iot-sdk-python/azure-iot-device/samples* directory:
+
+ ```console
+ cd azure-iot-sdk-python/azure-iot-device/samples
+ ```
+1. Install the Azure IoT Python SDK:
+
+ ```console
+ pip install azure-iot-device
+ ```
+1. Set the Device Connection String as an environment variable called `IOTHUB_DEVICE_CONNECTION_STRING`. This is the string you obtained in the previous section after creating your simulated Python device.
+
+ **Windows (cmd)**
+
+ ```console
+ set IOTHUB_DEVICE_CONNECTION_STRING=<your connection string here>
+ ```
+
+ > [!NOTE]
+ > For Windows CMD there are no quotation marks surrounding the connection string.
+
+ **Linux (bash)**
+
+ ```bash
+ export IOTHUB_DEVICE_CONNECTION_STRING="<your connection string here>"
+ ```
+
+1. In your open CLI shell, run the [az iot hub monitor-events](https://docs.microsoft.com/cli/azure/ext/azure-iot/iot/hub?view=azure-cli-latest#ext-azure-iot-az-iot-hub-monitor-events&preserve-view=true) command to begin monitoring for events on your simulated IoT device. Event messages will be printed in the terminal as they arrive.
+
+ ```azurecli
+ az iot hub monitor-events --output table --hub-name {YourIoTHubName}
+ ```
+
+1. In your Python terminal, run the code for the installed sample file *simple_send_message.py* . This code accesses the simulated IoT device and sends a message to the IoT hub.
+
+ To run the Python sample from the terminal:
+ ```console
+ python ./simple_send_message.py
+ ```
+
+ Optionally, you can run the Python code from the sample in your Python IDE:
+ ```python
+ import os
+ import asyncio
+ from azure.iot.device.aio import IoTHubDeviceClient
+
+ async def main():
+ # Fetch the connection string from an environment variable
+ conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
+
+ # Create instance of the device client using the authentication provider
+ device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)
+
+ # Connect the device client.
+ await device_client.connect()
+
+ # Send a single message
+ print("Sending message...")
+ await device_client.send_message("This is a message that is being sent")
+ print("Message successfully sent!")
+
+ # finally, disconnect
+ await device_client.disconnect()
+
+ if __name__ == "__main__":
+ asyncio.run(main())
+ ```
+
+As the Python code sends a message from your device to the IoT hub, the message appears in your CLI shell that is monitoring events:
+
+```output
+Starting event monitor, use ctrl-c to stop...
+event:
+origin: <your Device name>
+payload: This is a message that is being sent
+```
+
+Your device is now securely connected and sending telemetry to Azure IoT Hub.
+
+## Clean up resources
+If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete them.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+
+To delete a resource group by name:
+1. Run the [az group delete](https://docs.microsoft.com/cli/azure/group?view=azure-cli-latest#az-group-delete&preserve-view=true) command. This removes the resource group, the IoT Hub, and the device registration you created.
+
+ ```azurecli
+ az group delete --name MyResourceGroup
+ ```
+1. Run the [az group list](https://docs.microsoft.com/cli/azure/group?view=azure-cli-latest#az-group-list&preserve-view=true) command to confirm the resource group is deleted.
+
+ ```azurecli
+ az group list
+ ```
+
+## Next steps
+In this quickstart, you learned a basic Azure IoT application workflow for securely connecting a device to the cloud and sending device-to-cloud telemetry. You used the Azure CLI to create an IoT hub and a device, then you used the Azure IoT Python SDK to build a simulated device and send telemetry to the hub.
+
+As a next step, explore the Azure IoT Python SDK through application samples.
+
+- [Asynchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-hub-scenarios): This directory contains asynchronous Python samples for additional IoT Hub scenarios.
+- [Synchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/sync-samples): This directory contains Python samples for use with Python 2.7 or synchronous compatibility scenarios for Python 3.5+.
+- [IoT Edge samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-edge-scenarios): This directory contains Python samples for working with Edge modules and downstream devices.
iot-develop https://docs.microsoft.com/en-us/azure/iot-develop/quickstart-send-telemetry-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-python.md
+
+ Title: Send device telemetry to Azure IoT Central quickstart (Python)
+description: In this quickstart, you use the Azure IoT Hub Device SDK for Python to send telemetry from a device to IoT Central.
+++
+ms.devlang: python
+ Last updated : 01/11/2021
+# Quickstart: Send telemetry from a device to Azure IoT Central (Python)
+
+**Applies to**: [Device application development](about-iot-develop.md#device-application-development)
+
+In this quickstart, you learn a basic IoT device application development workflow. First you use Azure IoT Central to create a cloud application. Then you use the Azure IoT Python SDK to build a simulated device, connect to IoT Central, and send device-to-cloud telemetry.
+
+## Prerequisites
+- [Python 3.7+](https://www.python.org/downloads/). For other supported versions of Python, see [Azure IoT Device Features](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device#azure-iot-device-features).
+
+ To ensure that your Python version is up to date, run `python --version`. If you have both Python 2 and Python 3 installed, and are using a Python 3 environment, install all libraries using `pip3`. Running `pip3` ensures that the libraries are installed to your Python 3 runtime.
+ > [!IMPORTANT]
+ > In the Python installer, select the option to **Add Python to PATH**. If you already have Python 3.7 or higher installed, confirm that you've added the Python installation folder to the `PATH` environment variable.
+
+## Create an application
+In this section, you create an IoT Central application. IoT Central is a portal-based IoT application platform that helps reduce the complexity and cost of developing and managing IoT solutions.
+
+To create an Azure IoT Central application:
+1. Browse to [Azure IoT Central](https://apps.azureiotcentral.com/) and sign in with a Microsoft personal, work, or school account.
+1. Navigate to **Build** and select **Custom apps**.
+ :::image type="content" source="media/quickstart-send-telemetry-python/iot-central-build.png" alt-text="IoT Central start page":::
+1. In **Application name**, enter a unique name or use the generated name.
+1. In **URL**, enter a memorable application URL prefix or use the generated URL prefix.
+1. Leave **Application template** set to *Custom application*. The dropdown might show other options, if any templates already exist in your account.
+1. Select a **Pricing plan** option.
+ - To use the application for free for seven days, select **Free**. You can convert a free application to standard pricing before it expires.
+ - Optionally, you can select a standard pricing plan. If you select standard pricing, more options appear and you'll need to set a **Directory**, an **Azure subscription**, and a **Location**. To learn about pricing, see [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/).
+ - **Directory** is the Azure Active Directory in which you create your application. An Azure Active Directory contains user identities, credentials, and other organizational information. If you don't have an Azure Active Directory, one is created when you create an Azure subscription.
+ - An **Azure subscription** enables you to create instances of Azure services. IoT Central provisions resources in your subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/). After you create the subscription, return to the IoT Central **New application** page. Your new subscription appears in the **Azure subscription** drop-down.
+ - **Location** is the [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) in which you create an application. Select a location that's physically closest to your devices to get optimal performance. After you choose a location, you can't move the application to a different location.
+
+ :::image type="content" source="media/quickstart-send-telemetry-python/iot-central-pricing.png" alt-text="IoT Central new application dialog":::
+1. Select **Create**.
+
+ After IoT Central creates the application, it redirects you to the application dashboard.
+ :::image type="content" source="media/quickstart-send-telemetry-python/iot-central-created.png" alt-text="IoT Central new application dashboard":::
+
+## Add a device
+In this section, you add a new device to your IoT Central application. The device is an instance of a device template that represents a real or simulated device that you'll connect to the application.
+
+To create a new device:
+1. In the left pane, select **Devices**, and then select **+New** to open the new device dialog.
+1. Leave **Device template** set to *Unassigned*.
+
+ > [!NOTE]
+ > In this quickstart for simplicity, you connect a simulated device that uses an unassigned template. If you continue using IoT Central to manage devices, you'll learn about using device templates. For an overview of working with device templates, see [Quickstart: Add a simulated device to your IoT Central application](../iot-central/core/quick-create-simulated-device.md).
+1. Set a friendly **Device name** and **Device ID**. Optionally, use the generated values.
+ :::image type="content" source="media/quickstart-send-telemetry-python/iot-central-create-device.png" alt-text="IoT Central new device dialog":::
+1. Select **Create**.
+
+ The created device appears in the **All devices** list.
+ :::image type="content" source="media/quickstart-send-telemetry-python/iot-central-devices-list.png" alt-text="IoT Central all devices list":::
+
+To retrieve connection details for the new device:
+1. In the **All devices** list, double-click the linked device name to display details.
+1. In the top menu, select **Connect**.
+
+ The **Device connection** dialog displays the connection details:
+ :::image type="content" source="media/quickstart-send-telemetry-python/iot-central-device-connect.png" alt-text="IoT Central device connection details":::
+1. Copy the following values from the **Device connection** dialog to a safe location. You'll use these in the next section to connect your device to IoT Central.
+ * `ID scope`
+ * `Device ID`
+ * `Primary key`
+
+## Send messages and monitor telemetry
+In this section, you use the Python SDK to build a simulated device and send telemetry to your IoT Central application.
+
+1. Open a terminal in Windows CMD, PowerShell, or Bash (on Windows or Linux). You'll use this terminal to install the Python SDK, set environment variables, and run the Python code sample.
+
+1. Copy the [Azure IoT Python SDK device samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples) to your local machine.
+
+ ```console
+ git clone https://github.com/Azure/azure-iot-sdk-python
+ ```
+
+1. Navigate to the *azure-iot-sdk-python/azure-iot-device/samples* directory.
+
+ ```console
+ cd azure-iot-sdk-python/azure-iot-device/samples
+ ```
+1. Install the Azure IoT Python SDK.
+
+ ```console
+ pip install azure-iot-device
+ ```
+
+1. Set each of the following environment variables to enable your simulated device to connect to IoT Central. For `ID_SCOPE`, `DEVICE_ID`, and `DEVICE_KEY`, use the values that you saved from the IoT Central **Device connection** dialog.
+
+ **Windows CMD**
+
+ ```console
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ ```
+ ```console
+ set ID_SCOPE=<your ID scope>
+ ```
+ ```console
+ set DEVICE_ID=<your device ID>
+ ```
+ ```console
+ set DEVICE_KEY=<your device's primary key>
+ ```
+
+ > [!NOTE]
+ > In Windows CMD, don't surround the connection string or other variable values with quotation marks.
+
+ **PowerShell**
+
+ ```azurepowershell
+ $env:PROVISIONING_HOST='global.azure-devices-provisioning.net'
+ ```
+ ```azurepowershell
+ $env:ID_SCOPE='<your ID scope>'
+ ```
+ ```azurepowershell
+ $env:DEVICE_ID='<your device ID>'
+ ```
+ ```azurepowershell
+ $env:DEVICE_KEY='<your device primary key>'
+ ```
+
+ **Bash (Linux or Windows)**
+
+ ```bash
+ export PROVISIONING_HOST='global.azure-devices-provisioning.net'
+ ```
+ ```bash
+ export ID_SCOPE='<your ID scope>'
+ ```
+ ```bash
+ export DEVICE_ID='<your device ID>'
+ ```
+ ```bash
+ export DEVICE_KEY='<your device primary key>'
+ ```
+
+1. In your terminal, run the sample file *simple_send_temperature.py*. This code accesses the simulated IoT device and sends a message to IoT Central.
+
+ To run the Python sample from the terminal:
+ ```console
+ python ./simple_send_temperature.py
+ ```
+
+ Optionally, you can run the Python code from the sample in your Python IDE:
+ ```python
+ import asyncio
+ import os
+ from azure.iot.device.aio import ProvisioningDeviceClient
+ from azure.iot.device.aio import IoTHubDeviceClient
+ from azure.iot.device import Message
+ import uuid
+ import json
+ import random
+
+ # ensure environment variables are set for your device and IoT Central application credentials
+ provisioning_host = os.getenv("PROVISIONING_HOST")
+ id_scope = os.getenv("ID_SCOPE")
+ registration_id = os.getenv("DEVICE_ID")
+ symmetric_key = os.getenv("DEVICE_KEY")
+
+ # allows the user to quit the program from the terminal
+ def stdin_listener():
+ """
+ Listener for quitting the sample
+ """
+ while True:
+ selection = input("Press Q to quit\n")
+ if selection == "Q" or selection == "q":
+ print("Quitting...")
+ break
+
+ async def main():
+
+ # provisions the device to IoT Central-- this uses the Device Provisioning Service behind the scenes
+ provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
+ provisioning_host=provisioning_host,
+ registration_id=registration_id,
+ id_scope=id_scope,
+ symmetric_key=symmetric_key,
+ )
+
+ registration_result = await provisioning_device_client.register()
+
+ print("The complete registration result is")
+ print(registration_result.registration_state)
+
+ if registration_result.status == "assigned":
+ print("Your device has been provisioned. It will now begin sending telemetry.")
+ device_client = IoTHubDeviceClient.create_from_symmetric_key(
+ symmetric_key=symmetric_key,
+ hostname=registration_result.registration_state.assigned_hub,
+ device_id=registration_result.registration_state.device_id,
+ )
+
+ # Connect the client.
+ await device_client.connect()
+
+ # Send the current temperature as a telemetry message
+ async def send_telemetry():
+ print("Sending telemetry for temperature")
+
+ while True:
+ current_temp = random.randrange(10, 50) # Current temperature in Celsius (randomly generated)
+ # Send a single temperature report message
+ temperature_msg = {"temperature": current_temp}
+
+ msg = Message(json.dumps(temperature_msg))
+ msg.content_encoding = "utf-8"
+ msg.content_type = "application/json"
+ print("Sent message")
+ await device_client.send_message(msg)
+ await asyncio.sleep(8)
+
+ send_telemetry_task = asyncio.create_task(send_telemetry())
+
+ # Run the stdin listener in the event loop
+ loop = asyncio.get_running_loop()
+ user_finished = loop.run_in_executor(None, stdin_listener)
+ # Wait for user to indicate they are done listening for method calls
+ await user_finished
+
+ send_telemetry_task.cancel()
+ # Finally, shut down the client
+ await device_client.disconnect()
+
+ if __name__ == "__main__":
+ asyncio.run(main())
+
+ # If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
+ # loop = asyncio.get_event_loop()
+ # loop.run_until_complete(main())
+ # loop.close()
+ ```
+
+As the Python code sends a message from your device to your IoT Central application, the messages appear in the **Raw data** tab of your device in IoT Central. You might need to refresh the page to show recent messages.
+
+ :::image type="content" source="media/quickstart-send-telemetry-python/iot-central-telemetry-output.png" alt-text="Screen shot of IoT Central raw data output":::
+
+Your device is now securely connected and sending telemetry to Azure IoT.
+
+## Clean up resources
+If you no longer need the IoT Central resources created in this tutorial, you can delete them from the IoT Central portal. Optionally, if you plan to continue following the documentation in this guide, you can keep the application you created and reuse it for other samples.
+
+To remove the Azure IoT Central sample application and all its devices and resources:
+1. Select **Administration** > **Your application**.
+1. Select **Delete**.
+
+## Next steps
+
+In this quickstart, you learned a basic Azure IoT application workflow for securely connecting a device to the cloud and sending device-to-cloud telemetry. You used Azure IoT Central to create an application and a device, then you used the Azure IoT Python SDK to create a simulated device and send telemetry. You also used IoT Central to monitor the telemetry.
+
+As a next step, explore the Azure IoT Python SDK through application samples.
+
+- [Asynchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-hub-scenarios): This directory contains asynchronous Python samples for additional IoT Hub scenarios.
+- [Synchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/sync-samples): This directory contains Python samples for use with Python 2.7 or synchronous compatibility scenarios for Python 3.5+.
+- [IoT Edge samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-edge-scenarios): This directory contains Python samples for working with Edge modules and downstream devices.
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-access-host-storage-from-module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-access-host-storage-from-module.md
You can find more details about create options from [docker docs](https://docs.d
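As an illustration only (the host and module directory paths below are placeholders, not values from this article), create options that bind a host directory into a module typically take this shape:

```json
{
  "HostConfig": {
    "Binds": [
      "/srv/moduleStorage:/tmp/moduleStorage"
    ]
  }
}
```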
## Encrypted data in module storage
-When modules invoke the IoT Edge daemon's workload API to encrypt data, the encryption key is derived using the module ID and module's generation ID. A generation ID is used to protect secrets if a module is removed from the deployment and then another module with the same module ID is later deployed to the same device. You can view a module's generation ID using the Azure CLI command [az iot hub module-identity show](/cli/azure/ext/azure-cli-iot-ext/iot/hub/module-identity#ext-azure-cli-iot-ext-az-iot-hub-module-identity-show).
+When modules invoke the IoT Edge daemon's workload API to encrypt data, the encryption key is derived using the module ID and the module's generation ID. A generation ID is used to protect secrets if a module is removed from the deployment and then another module with the same module ID is later deployed to the same device. You can view a module's generation ID using the Azure CLI command [az iot hub module-identity show](/cli/azure/ext/azure-iot/iot/hub/module-identity).
If you want to share files between modules across generations, they must not contain any secrets or they will fail to be decrypted.
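For example, a lookup might look like the following sketch (the hub, device, and module names are placeholders, and `generationId` is assumed to be the property name in the command output):

```azurecli
az iot hub module-identity show --hub-name {YourIoTHubName} --device-id {YourDeviceId} --module-id {YourModuleName} --query generationId
```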
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-continuous-integration-continuous-deployment-classic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-continuous-integration-continuous-deployment-classic.md
This pipeline is now configured to run automatically when you push new code to y
>[!NOTE]
>If you wish to use **layered deployments** in your pipeline, note that layered deployments are not yet supported in Azure IoT Edge tasks in Azure DevOps.
>
->However, you can use an [Azure CLI task in Azure DevOps](/azure/devops/pipelines/tasks/deploy/azure-cli) to create your deployment as a layered deployment. For the **Inline Script** value, you can use the [az iot edge deployment create command](/cli/azure/ext/azure-cli-iot-ext/iot/edge/deployment):
+>However, you can use an [Azure CLI task in Azure DevOps](/azure/devops/pipelines/tasks/deploy/azure-cli) to create your deployment as a layered deployment. For the **Inline Script** value, you can use the [az iot edge deployment create command](/cli/azure/ext/azure-iot/iot/edge/deployment):
>
> ```azurecli-interactive
> az iot edge deployment create -d {deployment_name} -n {hub_name} --content modules_content.json --layered true
> ```
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-monitor-module-twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-monitor-module-twins.md
If you make changes, select **Update Module Twin** above the code in the editor
To see if IoT Edge is running, use the [az iot hub invoke-module-method](how-to-edgeagent-direct-method.md#ping) to ping the IoT Edge agent.
-The [az iot hub module-twin](/cli/azure/ext/azure-cli-iot-ext/iot/hub/module-twin) structure provides these commands:
+The [az iot hub module-twin](/cli/azure/ext/azure-iot/iot/hub/module-twin) structure provides these commands:
* **az iot hub module-twin show** - Show a module twin definition.
* **az iot hub module-twin update** - Update a module twin definition.
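For example, a sketch of showing the IoT Edge agent's module twin (the hub and device names are placeholders; `$edgeAgent` is the module ID of the IoT Edge agent):

```azurecli
az iot hub module-twin show --hub-name {YourIoTHubName} --device-id {YourDeviceId} --module-id '$edgeAgent'
```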
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-publish-subscribe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-publish-subscribe.md
The [Azure IoT Device SDKs](https://github.com/Azure/azure-iot-sdks) already let
Sending telemetry data to IoT Hub is similar to publishing on a user-defined topic, but using a specific IoT Hub topic:
-- For a device, telemetry is sent on topic: `devices/<device_name>/messages/events`
-- For a module, telemetry is sent on topic: `devices/<device_name>/<module_name>/messages/events`
+- For a device, telemetry is sent on topic: `devices/<device_name>/messages/events/`
+- For a module, telemetry is sent on topic: `devices/<device_name>/<module_name>/messages/events/`
Additionally, create a route such as `FROM /messages/* INTO $upstream` to send telemetry from the IoT Edge MQTT broker to IoT hub. To learn more about routing, see [Declare routes](module-composition.md#declare-routes).
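As a minimal sketch (the route name `telemetryToUpstream` is arbitrary, and other required manifest properties such as the schema version are omitted), such a route would appear in the `$edgeHub` desired properties of a deployment manifest like this:

```json
{
  "$edgeHub": {
    "properties.desired": {
      "routes": {
        "telemetryToUpstream": "FROM /messages/* INTO $upstream"
      }
    }
  }
}
```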
Other notes on the IoT Edge hub MQTT bridge:
## Next steps
-[Understand the IoT Edge hub](iot-edge-runtime.md#iot-edge-hub)
+[Understand the IoT Edge hub](iot-edge-runtime.md#iot-edge-hub)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/managed-hsm/access-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/access-control.md
tags: azure-resource-manager
Previously updated : 09/15/2020 Last updated : 02/17/2021 # Customer intent: As the admin for managed HSMs, I want to set access policies and configure the Managed HSM, so that I can ensure it's secure and auditors can properly monitor all activities for these managed HSMs.
The following table shows the endpoints for the management and data planes.
| Access&nbsp;plane | Access endpoints | Operations | Access control mechanism |
| --- | --- | --- | --- |
| Management plane | **Global:**<br> management.azure.com:443<br> | Create, read, update, delete, and move managed HSMs<br>Set managed HSM tags | Azure RBAC |
-| Data plane | **Global:**<br> &lt;hsm-name&gt;.vault.azure.net:443<br> | **Keys**: decrypt, encrypt,<br> unwrap, wrap, verify, sign, get, list, update, create, import, delete, backup, restore, purge<br/><br/> **Data plane role-management (Managed HSM local RBAC)***: list role definitions, assign roles, delete role assignments, define custom roles<br/><br/>**Backup/restore**: backup, restore, check status backup/restore operations <br/><br/>**Security domain**: download and upload security domain | Managed HSM local RBAC |
+| Data plane | **Global:**<br> &lt;hsm-name&gt;.managedhsm.azure.net:443<br> | **Keys**: decrypt, encrypt,<br> unwrap, wrap, verify, sign, get, list, update, create, import, delete, backup, restore, purge<br/><br/> **Data plane role-management (Managed HSM local RBAC)***: list role definitions, assign roles, delete role assignments, define custom roles<br/><br/>**Backup/restore**: backup, restore, check status backup/restore operations <br/><br/>**Security domain**: download and upload security domain | Managed HSM local RBAC |
|||||

## Management plane and Azure RBAC
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-overview.md
Load balancer supports both inbound and outbound scenarios. Load balancer provid
Key scenarios that you can accomplish using Standard Load Balancer include:
-- Load balance **[internal](./quickstart-load-balancer-standard-internal-portal.md)** and **[external](./tutorial-load-balancer-standard-manage-portal.md)** traffic to Azure virtual machines.
+- Load balance **[internal](./quickstart-load-balancer-standard-internal-portal.md)** and **[external](./quickstart-load-balancer-standard-public-portal.md)** traffic to Azure virtual machines.
- Increase availability by distributing resources **[within](./tutorial-load-balancer-standard-public-zonal-portal.md)** and **[across](./tutorial-load-balancer-standard-public-zone-redundant-portal.md)** zones.
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/tutorial-load-balancer-standard-manage-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-load-balancer-standard-manage-portal.md
- Title: 'Tutorial: Load balance internet traffic to VMs - Azure portal'-
-description: This tutorial shows how to create and manage a Standard Load Balancer by using the Azure portal.
---
-Customer intent: I want to create a Standard Load Balancer so that I can load balance internet traffic to VMs and add and remove VMs from the load-balanced set.
--- Previously updated : 03/11/2019----
-# Tutorial: Load balance internet traffic to VMs using the Azure portal
-
-Load balancing provides a higher level of availability and scale by spreading incoming requests across multiple virtual machines. In this tutorial, you learn about the different components of the Azure Standard Load Balancer that distribute internet traffic to VMs and provide high availability. You learn how to:
--
-> [!div class="checklist"]
-> * Create an Azure Load Balancer
-> * Create Load Balancer resources
-> * Create virtual machines and install IIS server
-> * View Load Balancer in action
-> * Add and remove VMs from a Load Balancer
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Sign in to the Azure portal
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Create a Standard Load Balancer
-
-In this section, you create a Standard Load Balancer that helps load balance virtual machines. Standard Load Balancer only supports a Standard Public IP address. When you create a Standard Load Balancer, you must also create a new Standard Public IP address that is configured as the frontend (named as *LoadBalancerFrontend* by default) for the Standard Load Balancer.
-
-1. On the top left-hand side of the screen, click **Create a resource** > **Networking** > **Load Balancer**.
-2. In the **Basics** tab of the **Create load balancer** page, enter or select the following information, accept the defaults for the remaining settings, and then select **Review + create**:
-
- | Setting | Value |
- | | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new** and type *myResourceGroupSLB* in the text box.|
- | Name | *myLoadBalancer* |
- | Region | Select **West Europe**. |
- | Type | Select **Public**. |
- | SKU | Select **Standard**. |
- | Public IP address | Select **Create new**. |
- | Public IP address name | Type *myPublicIP* in the text box. |
- |Availability zone| Select **Zone redundant**. |
-
-3. In the **Review + create** tab, click **Create**.
-
- ![Create a Standard Load Balancer](./media/quickstart-load-balancer-standard-public-portal/create-standard-load-balancer.png)
-
-## Create Load Balancer resources
-
-In this section, you configure Load Balancer settings for a backend address pool, a health probe, and specify a balancer rule.
-
-### Create a backend address pool
-
-To distribute traffic to the VMs, a backend address pool contains the IP addresses of the virtual network interfaces (NICs) connected to the Load Balancer. Create the backend address pool *myBackendPool* to include virtual machines for load-balancing internet traffic.
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then click **myLoadBalancer** from the resources list.
-2. Under **Settings**, click **Backend pools**, then click **Add**.
-3. On the **Add a backend pool** page, type *myBackendPool* as the name for your backend pool, and then select **Add**.
-
-### Create a health probe
-
-To allow the Load Balancer to monitor the status of your app, you use a health probe. The health probe dynamically adds or removes VMs from the Load Balancer rotation based on their response to health checks. Create a health probe *myHealthProbe* to monitor the health of the VMs.
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then click **myLoadBalancer** from the resources list.
-2. Under **Settings**, click **Health probes**, then click **Add**.
-3. Use these values to create the health probe:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter *myHealthProbe*. |
- | Protocol | Select **HTTP**. |
- | Port | Enter *80*.|
- | Interval | Enter *15* for the number of seconds between probe attempts. |
- | Unhealthy threshold | Select *2* for the number of consecutive probe failures that must occur before a VM is considered unhealthy. |
-
-4. Select **OK**.
-
-### Create a Load Balancer rule
-
-A Load Balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic, along with the required source and destination port. Create a Load Balancer rule *myHTTPRule* that listens to port 80 on the frontend *LoadBalancerFrontend* and sends load-balanced network traffic to the backend address pool *myBackendPool*, also on port 80.
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then click **myLoadBalancer** from the resources list.
-2. Under **Settings**, click **Load balancing rules**, then click **Add**.
-3. Use these values to configure the load-balancing rule:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter *myHTTPRule*. |
- | Protocol | Select **TCP**. |
- | Port | Enter *80*.|
- | Backend port | Enter *80*. |
- | Backend pool | Select *myBackendPool*.|
- | Health probe | Select *myHealthProbe*. |
-
-4. Leave the rest of the defaults and select **OK**.
-
-## Create backend servers
-
-In this section, you create a virtual network, create three virtual machines for the backend pool of the Load Balancer, and then install IIS on the virtual machines to help test the Load Balancer.
-
-## Virtual network and parameters
-
-In this section you'll need to replace the following parameters in the steps with the information below:
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroupSLB (Select existing resource group) |
-| **\<virtual-network-name>** | myVNet |
-| **\<region-name>** | West Europe |
-| **\<IPv4-address-space>** | 10.1.0.0/16 |
-| **\<subnet-name>** | mySubnet |
-| **\<subnet-address-range>** | 10.1.0.0/24 |
--
-### Create virtual machines
-
-Standard Load Balancer only supports VMs with Standard IP addresses in the backend pool. In this section, you will create three VMs (*myVM1*, *myVM2*, and *myVM3*) with a Standard public IP address in three different zones (*Zone 1*, *Zone 2*, and *Zone 3*) that are added to the backend pool of the Standard Load Balancer that was created earlier.
-
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Windows Server 2016 Datacenter**.
-
-1. In **Create a virtual machine**, type or select the following values in the **Basics** tab:
- - **Subscription** > **Resource Group**: Select **myResourceGroupSLB**.
- - **Instance Details** > **Virtual machine name**: Type *myVM1*.
- - **Instance Details** > **Region** > select **West Europe**.
- - **Instance Details** > **Availability Options** > Select **Availability zones**.
- - **Instance Details** > **Availability zone** > Select **1**.
-
-1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
- - Make sure the following are selected:
- - **Virtual network**: **myVnet**
- - **Subnet**: **myBackendSubnet**
- - **Public IP** > select **Create new**, and in the **Create public IP address** window, for **SKU**, select **Standard**, and for **Availability zone**, select **Zone-redundant**
-
- - To create a new network security group (NSG), a type of firewall, under **Network Security Group**, select **Advanced**.
- 1. In the **Configure network security group** field, select **Create new**.
- 1. Type *myNetworkSecurityGroup*, and select **OK**.
-
- - To make the VM a part of the Load Balancer's backend pool, complete the following steps:
- - In **Load Balancing**, for **Place this virtual machine behind an existing load balancing solution?**, select **Yes**.
- - In **Load balancing settings**, for **Load balancing options**, select **Azure load balancer**.
- - For **Select a load balancer**, select *myLoadBalancer*.
-1. Select the **Management** tab, or select **Next** > **Management**. Under **Monitoring**, set **Boot diagnostics** to **Off**.
-1. Select **Review + create**.
-1. Review the settings, and then select **Create**.
-1. Follow the steps to create two additional VMs - *myVM2* and *myVM3*, with a Standard SKU public IP address in **Availability zone** **2** and **3** respectively, and all the other settings the same as *myVM1*.
-
-### Create network security group rule
-
-In this section, you create a network security group rule to allow inbound connections using HTTP.
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list click **myNetworkSecurityGroup** that is located in the **myResourceGroupSLB** resource group.
-2. Under **Settings**, click **Inbound security rules**, and then click **Add**.
-3. Enter these values for the inbound security rule named *myHTTPRule* to allow inbound HTTP connections on port 80:
- - *Service Tag* - for **Source**.
- - *Internet* - for **Source service tag**
- - *80* - for **Destination port ranges**
- - *TCP* - for **Protocol**
- - *Allow* - for **Action**