Updates from: 01/05/2021 04:04:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/add-identity-provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-identity-provider.md
@@ -6,7 +6,7 @@ author: msmimart
manager: celestedg ms.author: mimart
-ms.date: 12/07/2020
+ms.date: 01/04/2021
ms.custom: mvc ms.topic: how-to ms.service: active-directory
@@ -41,7 +41,7 @@ You typically use only one identity provider in your applications, but you have
* [LinkedIn](identity-provider-linkedin.md) * [Microsoft Account](identity-provider-microsoft-account.md) * [QQ](identity-provider-qq.md)
-* [Salesforce](identity-provider-salesforce.md)
+* [Salesforce](identity-provider-salesforce-saml.md)
* [Twitter](identity-provider-twitter.md) * [WeChat](identity-provider-wechat.md) * [Weibo](identity-provider-weibo.md)
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-multi-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/04/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -20,18 +20,19 @@ zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-::: zone pivot="b2c-custom-policy"
+::: zone pivot="b2c-user-flow"
-[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
+[!INCLUDE [active-directory-b2c-limited-to-custom-policy](../../includes/active-directory-b2c-limited-to-custom-policy.md)]
::: zone-end
-## Prerequisites
-
-[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
+::: zone pivot="b2c-custom-policy"
This article shows you how to enable sign-in for users using the multi-tenant endpoint for Azure Active Directory (Azure AD). This allows users from multiple Azure AD tenants to sign in using Azure AD B2C, without you having to configure an identity provider for each tenant. However, guest members in any of these tenants **will not** be able to sign in. For that, you need to [individually configure each tenant](identity-provider-azure-ad-single-tenant.md).
+## Prerequisites
+
+[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
## Register an application
@@ -68,41 +69,6 @@ If you want to get the `family_name` and `given_name` claims from Azure AD, you
1. Select the optional claims to add, `family_name` and `given_name`. 1. Click **Add**.
-::: zone pivot="b2c-user-flow"
-
-## Configure Azure AD as an identity provider
-
-1. Make sure you're using the directory that contains Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your Azure AD B2C tenant.
-1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
-1. Select **Identity providers**, and then select **New OpenID Connect provider**.
-1. Enter a **Name**. For example, enter *Contoso Azure AD*.
-1. For **Metadata url**, enter the following URL replacing `{tenant}` with the domain name of your Azure AD tenant:
-
- ```
- https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
- ```
-
- For example, `https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration`.
- For example, `https://login.microsoftonline.com/contoso.com/v2.0/.well-known/openid-configuration`.
-
-1. For **Client ID**, enter the application ID that you previously recorded.
-1. For **Client secret**, enter the client secret that you previously recorded.
-1. For the **Scope**, enter the `openid profile`.
-1. Leave the default values for **Response type**, **Response mode**, and **Domain hint**.
-1. Under **Identity provider claims mapping**, select the following claims:
-
- - **User ID**: *oid*
- - **Display name**: *name*
- - **Given name**: *given_name*
- - **Surname**: *family_name*
- - **Email**: *preferred_username*
-
-1. Select **Save**.
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
- ## Create a policy key You need to store the application key that you created in your Azure AD B2C tenant.
@@ -240,24 +206,6 @@ Now that you have a button in place, you need to link it to an action. The actio
3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add Azure AD identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the Azure AD identity provider.
-1. Under the **Social identity providers**, select **Contoso Azure AD**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
- ## Update and test the relying party file Update the relying party (RP) file that initiates the user journey that you created:
@@ -282,4 +230,4 @@ When working with custom policies, you might sometimes need additional informati
To help diagnose issues, you can temporarily put the policy into "developer mode" and collect logs with Azure Application Insights. Find out how in [Azure Active Directory B2C: Collecting Logs](troubleshoot-with-application-insights.md).
-::: zone-end
\ No newline at end of file
+::: zone-end
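As a quick sanity check for the metadata URL described in the steps above, you can request the OpenID Connect configuration document directly. Here's a minimal PowerShell sketch, assuming a hypothetical tenant domain of `contoso.onmicrosoft.com`:

```powershell
# Fetch the Azure AD OpenID Connect metadata document for a tenant.
# contoso.onmicrosoft.com is a placeholder - use your own Azure AD tenant domain.
$metadata = Invoke-RestMethod -Uri "https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration"

# The issuer and endpoints that Azure AD B2C consumes are returned as JSON.
$metadata.issuer
$metadata.authorization_endpoint
$metadata.token_endpoint
```

If the request fails, the tenant segment of the URL is usually wrong; the same check works with the tenant GUID in place of the domain name.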
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/one-time-password-technical-profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/one-time-password-technical-profile.md
@@ -41,7 +41,7 @@ The following example shows a one-time password technical profile:
## Generate code
-The first mode of this technical profile is to generate a code. Below are the options that can be configured for this mode.
+The first mode of this technical profile is to generate a code. Below are the options that can be configured for this mode. Generated codes and verification attempts are tracked within the session.
### Input claims
@@ -69,7 +69,7 @@ The following settings can be used to configure code generation mode:
| Attribute | Required | Description | | --------- | -------- | ----------- |
-| CodeExpirationInSeconds | No | Time in seconds until code expiration. Minimum: `60`; Maximum: `1200`; Default: `600`. Every time a code is provided (same code using `ReuseSameCode`, or a new code), the code expiration is extended. |
+| CodeExpirationInSeconds | No | Time in seconds until code expiration. Minimum: `60`; Maximum: `1200`; Default: `600`. Every time a code is provided (same code using `ReuseSameCode`, or a new code), the code expiration is extended. This time is also used to set the retry timeout: once the maximum number of attempts is reached, the user is locked out from obtaining new codes until this time expires. |
| CodeLength | No | Length of the code. The default value is `6`. | | CharacterSet | No | The character set for the code, formatted for use in a regular expression. For example, `a-z0-9A-Z`. The default value is `0-9`. The character set must include a minimum of 10 different characters in the set specified. | | NumRetryAttempts | No | The number of verification attempts before the code is considered invalid. The default value is `5`. |
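To build intuition for the `CodeLength` and `CharacterSet` settings above, here's an illustrative PowerShell sketch. It is not part of the B2C runtime; it only mimics the documented defaults (a six-character code drawn from `0-9`, a set that meets the ten-character minimum):

```powershell
# Illustrative only: mimic the documented defaults (CodeLength = 6, CharacterSet = 0-9).
$characterSet = '0123456789'.ToCharArray()
$codeLength   = 6

# Draw one random character per position; characters may repeat.
$code = -join (1..$codeLength | ForEach-Object { Get-Random -InputObject $characterSet })
Write-Host "Sample one-time code: $code"
```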
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/relyingparty.md
@@ -227,6 +227,7 @@ When the protocol is `SAML`, a metadata element contains the following elements.
| KeyEncryptionMethod| No | Indicates the method that Azure AD B2C uses to encrypt the copy of the key that was used to encrypt the data. The metadata controls the value of the `<EncryptedKey>` element in the SAML response. Possible values: ` Rsa15` (default) - RSA Public Key Cryptography Standard (PKCS) Version 1.5 algorithm, ` RsaOaep` - RSA Optimal Asymmetric Encryption Padding (OAEP) encryption algorithm. | | UseDetachedKeys | No | Possible values: `true`, or `false` (default). When the value is set to `true`, Azure AD B2C changes the format of the encrypted assertions. Using detached keys adds the encrypted assertion as a child of the EncrytedAssertion as opposed to the EncryptedData. | | WantsSignedResponses| No | Indicates whether Azure AD B2C signs the `Response` section of the SAML response. Possible values: `true` (default) or `false`. |
+| RemoveMillisecondsFromDateTime| No | Indicates whether milliseconds are removed from datetime values within the SAML response (these include IssueInstant, NotBefore, NotOnOrAfter, and AuthnInstant). Possible values: `false` (default) or `true`. |
### OutputClaims
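For reference, the formatting difference that `RemoveMillisecondsFromDateTime` controls can be seen in a short PowerShell sketch of the two UTC timestamp shapes:

```powershell
# SAML assertion timestamps (IssueInstant, NotBefore, NotOnOrAfter, AuthnInstant) are UTC.
$now = [DateTime]::UtcNow

# Default behavior: milliseconds are present.
$now.ToString("yyyy-MM-dd'T'HH:mm:ss.fff'Z'")   # e.g. 2021-01-04T18:30:15.123Z

# With RemoveMillisecondsFromDateTime set to true, milliseconds are dropped.
$now.ToString("yyyy-MM-dd'T'HH:mm:ss'Z'")       # e.g. 2021-01-04T18:30:15Z
```

Strict SAML consumers that reject fractional seconds in timestamps are the likely audience for a setting like this.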
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/technicalprofiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/technicalprofiles.md
@@ -48,7 +48,7 @@ All types of technical profiles share the same concept. You send input claims, r
![Diagram illustrating the technical profile flow](./media/technical-profiles/technical-profile-flow.png) 1. **Single sign-on (SSO) session management** - Restores technical profile's session state, using [SSO session management](custom-policy-reference-sso.md).
-1. **Input claims transformation** - Before the technical profile is started, Azure AD B2C runs input [claims transformation].(claimstransformations.md).
+1. **Input claims transformation** - Before the technical profile is started, Azure AD B2C runs input [claims transformation](claimstransformations.md).
1. **Input claims** - Claims are picked up from the claims bag that are used for the technical profile. 1. **Technical profile execution** - The technical profile exchanges the claims with the configured party. For example: - Redirect the user to the identity provider to complete the sign-in. After successful sign-in, the user returns back and the technical profile execution continues.
active-directory-domain-services https://docs.microsoft.com/en-us/azure/active-directory-domain-services/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/overview.md
@@ -40,7 +40,7 @@ In a hybrid environment with an on-premises AD DS environment, [Azure AD Connect
![Synchronization in Azure AD Domain Services with Azure AD and on-premises AD DS using AD Connect](./media/active-directory-domain-services-design-guide/sync-topology.png)
-Azure AD DS replicates identity information from Azure AD, so it works with Azure AD tenants that are cloud-only, or synchronized with an on-premises (AD DS environment. The same set of Azure AD DS features exists for both environments.
+Azure AD DS replicates identity information from Azure AD, so it works with Azure AD tenants that are cloud-only, or synchronized with an on-premises AD DS environment. The same set of Azure AD DS features exists for both environments.
* If you have an existing on-premises AD DS environment, you can synchronize user account information to provide a consistent identity for users. To learn more, see [How objects and credentials are synchronized in a managed domain][synchronization]. * For cloud-only environments, you don't need a traditional on-premises AD DS environment to use the centralized identity services of Azure AD DS.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md
@@ -104,7 +104,7 @@ To configure or review the *Remain signed-in* option, complete the following ste
1. Select **Company Branding**, then for each locale, choose **Show option to remain signed in**. 1. Choose *Yes*, then select **Save**.
-To remember Multi-factor authentication settings, complete the following steps:
+To remember Multi-factor authentication settings on trusted devices, complete the following steps:
1. In the Azure AD portal, search for and select *Azure Active Directory*. 1. Select **Security**, then **MFA**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/tutorial-enable-azure-mfa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-azure-mfa.md
@@ -20,7 +20,7 @@ ms.collection: M365-identity-device-management
Multi-factor authentication (MFA) is a process where a user is prompted during a sign-in event for additional forms of identification. This prompt could be to enter a code on their cellphone or to provide a fingerprint scan. When you require a second form of authentication, security is increased as this additional factor isn't something that's easy for an attacker to obtain or duplicate.
-Azure AD Multi-Factor Authentication and Conditional Access policies give the flexibility to enable MFA for users during specific sign-in events.
+Azure AD Multi-Factor Authentication and Conditional Access policies give you the flexibility to enable MFA for users during specific sign-in events. Here's a recommended [video on how to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM).
> [!IMPORTANT] > This tutorial shows an administrator how to enable Azure AD Multi-Factor Authentication.
@@ -132,4 +132,4 @@ In this tutorial, you enabled Azure AD Multi-Factor Authentication using Conditi
> * Test the MFA process as a user > [!div class="nextstepaction"]
-> [Enable password writeback for self-service password reset (SSPR)](./tutorial-enable-sspr-writeback.md)
\ No newline at end of file
+> [Enable password writeback for self-service password reset (SSPR)](./tutorial-enable-sspr-writeback.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/app-protection-based-conditional-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/app-protection-based-conditional-access.md
@@ -57,8 +57,8 @@ Organizations must complete the following steps in order to require the use of a
1. Select **Mobile apps and desktop clients** and deselect everything else. 1. Under **Access controls** > **Grant**, select the following options: - **Require approved client app**
- - **Require app protection policy (preview)**
- - **Require all the selected controls**
+ - **Require app protection policy**
+ - **Require one of the selected controls**
1. Confirm your settings and set **Enable policy** to **On**. 1. Select **Create** to create and enable your policy.
@@ -108,8 +108,8 @@ Organizations must complete the following steps in order to require the use of a
1. Select **Browser** and deselect everything else. 1. Under **Access controls** > **Grant**, select the following options: - **Require approved client app**
- - **Require app protection policy (preview)**
- - **Require all the selected controls**
+ - **Require app protection policy**
+ - **Require one of the selected controls**
1. Confirm your settings and set **Enable policy** to **On**. 1. Select **Create** to create and enable your policy.
@@ -141,7 +141,7 @@ Organizations must complete the following three steps in order to require the us
1. Select **Mobile apps and desktop clients** and deselect everything else. 1. Under **Access controls** > **Grant**, select the following options: - **Require approved client app**
- - **Require app protection policy (preview)**
+ - **Require app protection policy**
- **Require one of the selected controls** 1. Confirm your settings and set **Enable policy** to **On**. 1. Select **Create** to create and enable your policy.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/groups-assign-sensitivity-labels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
@@ -43,7 +43,7 @@ To apply published labels to groups, you must first enable the feature. These st
``` > [!NOTE]
- > If no group settings have been created for this Azure AD organization, you must first create the settings. Follow the steps in [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md) to create group settings for this Azure AD organization.
+ > If no group settings have been created for this Azure AD organization, the above cmdlet fails with the error "Cannot bind argument to parameter 'Id' because it is null". In this case, you must first create the settings. Follow the steps in [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md) to create group settings for this Azure AD organization.
1. Next, display the current group settings.
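A defensive version of that lookup makes the failure mode explicit. This sketch uses the same AzureAD cmdlets as the linked article; the variable name is illustrative:

```powershell
# Look up the existing "Group.Unified" directory settings object, if any.
$grpUnifiedSetting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified" }

if ($null -eq $grpUnifiedSetting) {
    # Passing a $null Id to Set-AzureADDirectorySetting is what produces
    # "Cannot bind argument to parameter 'Id' because it is null".
    Write-Warning "No group settings exist yet. Create them from the Group.Unified template first."
}
else {
    $grpUnifiedSetting["EnableMIPLabels"] = "True"
    Set-AzureADDirectorySetting -Id $grpUnifiedSetting.Id -DirectorySetting $grpUnifiedSetting
}
```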
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/groups-settings-cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
@@ -78,10 +78,11 @@ These steps create settings at directory level, which apply to all Microsoft 365
```powershell $Setting = $Template.CreateDirectorySetting() ```
-4. Then update the usage guideline value:
+4. Then update the settings object with a new value. The two examples below change the usage guideline value and enable sensitivity labels. Set these or any other setting in the template as required:
```powershell $Setting["UsageGuidelinesUrl"] = "https://guideline.example.com"
+ $Setting["EnableMIPLabels"] = "True"
``` 5. Then apply the setting:
@@ -112,7 +113,7 @@ To update the value for UsageGuideLinesUrl in the setting template, read the cur
```powershell Name Value ---- -----
- EnableMIPLabels false
+ EnableMIPLabels True
CustomBlockedWordsList EnableMSStandardBlockedWords False ClassificationDescriptions
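Putting the numbered steps together, the end-to-end sequence looks roughly like the sketch below (AzureAD PowerShell module, after `Connect-AzureAD`); treat it as an outline rather than a definitive script:

```powershell
# 1. Get the Group.Unified settings template.
$Template = Get-AzureADDirectorySettingTemplate | Where-Object { $_.DisplayName -eq "Group.Unified" }

# 2. Create a settings object from the template.
$Setting = $Template.CreateDirectorySetting()

# 3. Update the values you need.
$Setting["UsageGuidelinesUrl"] = "https://guideline.example.com"
$Setting["EnableMIPLabels"]    = "True"

# 4. Apply the settings at the directory level.
New-AzureADDirectorySetting -DirectorySetting $Setting

# 5. Verify the applied values.
(Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified" }).Values
```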
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/users-bulk-download https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-bulk-download.md
@@ -6,7 +6,7 @@ services: active-directory
author: curtand ms.author: curtand manager: daveba
-ms.date: 12/02/2020
+ms.date: 01/04/2021
ms.topic: how-to ms.service: active-directory ms.subservice: enterprise-users
@@ -56,7 +56,6 @@ To download the list of users from the Azure AD admin center, you must be signed
- postalCode - telephoneNumber - mobile
- - authenticationPhoneNumber
- authenticationAlternativePhoneNumber - authenticationEmail - alternateEmailAddress
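Once the download completes, the CSV can be inspected locally. A minimal sketch, assuming a hypothetical file name of `exportUsers.csv` (the exact column set depends on your export):

```powershell
# Load the bulk-download CSV (file name is a placeholder).
$users = Import-Csv -Path .\exportUsers.csv

# Count the rows and peek at a few of the exported properties.
$users.Count
$users | Select-Object -First 5 userPrincipalName, displayName, mobile
```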
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/whats-new-archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
@@ -138,7 +138,7 @@ The Azure AD provisioning service currently operates on a cyclic basis. The serv
**Service category:** Other **Product capability:** Entitlement Management
-A new delegated permission EntitlementManagement.Read.All is now available for use with the Entitlement Management API in Microsoft Graph beta. To find out more about the available APIs, see [Working with the Azure AD entitlement management API](/graph/api/resources/entitlementmanagement-root?view=graph-rest-beta).
+A new delegated permission EntitlementManagement.Read.All is now available for use with the Entitlement Management API in Microsoft Graph beta. To find out more about the available APIs, see [Working with the Azure AD entitlement management API](/graph/api/resources/entitlementmanagement-root?view=graph-rest-beta&preserve-view=true).
---
@@ -336,7 +336,7 @@ For listing your application in the Azure AD app gallery, please read the detail
**Service category:** Conditional Access **Product capability:** Identity Security & Protection
-[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their impact before enabling them, making deployment safer and easier. Over the past few months, we've seen strong adoption of report-only mode; over 26M users are already in scope of a report-only policy. With the announcement today, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the impact of your policies from the moment they're created. And for those of you who use the MS Graph APIs, you can [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-beta) as well.
+[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their impact before enabling them, making deployment safer and easier. Over the past few months, we've seen strong adoption of report-only mode; over 26M users are already in scope of a report-only policy. With the announcement today, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the impact of your policies from the moment they're created. And for those of you who use the MS Graph APIs, you can [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-beta&preserve-view=true) as well.
---
@@ -405,7 +405,7 @@ You can now automate creating, updating, and deleting user accounts for these ne
* [Juno Journey](../saas-apps/juno-journey-provisioning-tutorial.md) * [MediusFlow](../saas-apps/mediusflow-provisioning-tutorial.md) * [New Relic by Organization](../saas-apps/new-relic-by-organization-provisioning-tutorial.md)
-* [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-infratstructure-console-provisioning-tutorial.md)
+* [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md)
For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
@@ -548,7 +548,7 @@ We're expanding B2B invitation capability to allow existing internal accounts to
**Product capability:** Identity Security & Protection
-[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their impact before enabling them, making deployment safer and easier. Over the past few months, we've seen strong adoption of report-only mode, with over 26M users already in scope of a report-only policy. With this announcement, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the impact of your policies from the moment they're created. And for those of you who use the MS Graph APIs, you can also [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-beta).
+[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their impact before enabling them, making deployment safer and easier. Over the past few months, we've seen strong adoption of report-only mode, with over 26M users already in scope of a report-only policy. With this announcement, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the impact of your policies from the moment they're created. And for those of you who use the MS Graph APIs, you can also [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-beta&preserve-view=true).
---
@@ -600,7 +600,7 @@ For more information about the apps, see [SaaS application integration with Azur
**Product capability:** Developer Experience
-Delta query for oAuth2PermissionGrant is available for public preview! You can now track changes without having to continuously poll Microsoft Graph. [Learn more.](/graph/api/oAuth2PermissionGrant-delta?tabs=http&view=graph-rest-beta)
+Delta query for oAuth2PermissionGrant is available for public preview! You can now track changes without having to continuously poll Microsoft Graph. [Learn more.](/graph/api/oAuth2PermissionGrant-delta?tabs=http&view=graph-rest-beta&preserve-view=true)
---
@@ -635,7 +635,7 @@ Delta query for applications is generally available! You can now track changes i
**Service category:** MS Graph **Product capability:** Developer Experience
-Delta query for administrative units is available for public preview! You can now track changes without having to continuously poll Microsoft Graph. [Learn more.](/graph/api/administrativeunit-delta?tabs=http&view=graph-rest-beta)
+Delta query for administrative units is available for public preview! You can now track changes without having to continuously poll Microsoft Graph. [Learn more.](/graph/api/administrativeunit-delta?tabs=http&view=graph-rest-beta&preserve-view=true)
---
@@ -653,7 +653,7 @@ These APIs are a key tool for managing your users' authentication methods. Now
- Reset a user's password - Turn on and off SMS-sign-in
-For more information, see [Azure AD authentication methods API overview](/graph/api/resources/authenticationmethods-overview?view=graph-rest-beta).
+For more information, see [Azure AD authentication methods API overview](/graph/api/resources/authenticationmethods-overview?view=graph-rest-beta&preserve-view=true).
---
@@ -1459,7 +1459,7 @@ For more information about using application-specific role definitions, see [Add
**Service category:** Identity Protection **Product capability:** Identity Security & Protection
-In response to developer feedback, Azure AD Premium P2 subscribers can now perform complex queries on Azure AD Identity Protection's risk detection data by using the new riskDetection API for Microsoft Graph. The existing [identityRiskEvent](/graph/api/resources/identityriskevent?view=graph-rest-beta) API beta version will stop returning data around **January 10, 2020**. If your organization is using the identityRiskEvent API, you should transition to the new riskDetection API.
+In response to developer feedback, Azure AD Premium P2 subscribers can now perform complex queries on Azure AD Identity Protection's risk detection data by using the new riskDetection API for Microsoft Graph. The existing [identityRiskEvent](/graph/api/resources/identityriskevent?view=graph-rest-beta&preserve-view=true) API beta version will stop returning data around **January 10, 2020**. If your organization is using the identityRiskEvent API, you should transition to the new riskDetection API.
For more information about the new riskDetection API, see the [Risk detection API reference documentation](/graph/api/resources/riskdetection).
@@ -2291,7 +2291,7 @@ For more information about these updates, see [Filter audit logs](../reports-mon
We're pleased to announce the new riskDetections API for Microsoft Graph is now in public preview. You can use this new API to view a list of your organization's Identity Protection-related user and sign-in risk detections. You can also use this API to more efficiently query your risk detections, including details about the detection type, status, level, and more.
-For more information, see the [Risk detection API reference documentation](/graph/api/resources/riskdetection?view=graph-rest-beta).
+For more information, see the [Risk detection API reference documentation](/graph/api/resources/riskdetection?view=graph-rest-beta&preserve-view=true).
---
@@ -2461,7 +2461,7 @@ For more information, see [Microsoft identity platform](../develop/index.yml) an
We're pleased to announce that you can now use the Risky Users API to retrieve users' risk history, dismiss risky users, and to confirm users as compromised. This change helps you to more efficiently update the risk status of your users and understand their risk history.
-For more information, see the [Risky Users API reference documentation](/graph/api/resources/riskyuser?view=graph-rest-beta).
+For more information, see the [Risky Users API reference documentation](/graph/api/resources/riskyuser?view=graph-rest-beta&preserve-view=true).
---
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
@@ -500,7 +500,7 @@ Risk-based Conditional Access and risk detection features of Identity Protection
In October 2020 we added the following 27 new applications in our App gallery with Federation support:
-[Sentry](../saas-apps/sentry-tutorial.md), [Bumblebee - Productivity Superapp](https://app.yellowmessenger.com/user/login), [ABBYY FlexiCapture Cloud](../saas-apps/abbyy-flexicapture-cloud-tutorial.md), [EAComposer](../saas-apps/eacomposer-tutorial.md), [Genesys Cloud Integration for Azure](https://apps.mypurecloud.com/msteams-integration/), [Zone Technologies Portal](https://portail.zonetechnologie.com/signin), [Beautiful.ai](../saas-apps/beautiful.ai-tutorial.md), [Datawiza Access Broker](https://console.datawiza.com/), [ZOKRI](https://app.zokri.com/), [CheckProof](../saas-apps/checkproof-tutorial.md), [Ecochallenge.org](https://events.ecochallenge.org/users/login), [atSpoke](http://atspoke.com/login), [Appointment Reminder](https://app.appointmentreminder.co.nz/account/login), [Cloud.Market](https://cloud.market/), [TravelPerk](../saas-apps/travelperk-tutorial.md), [Greetly](https://app.greetly.com/), [OrgVitality SSO}(../saas-apps/orgvitality-sso-tutorial.md), [Web Cargo Air](../saas-apps/web-cargo-air-tutorial.md), [Loop Flow CRM](../saas-apps/loop-flow-crm-tutorial.md), [Starmind](../saas-apps/starmind-tutorial.md), [Workstem](https://hrm.workstem.com/login), [Retail Zipline](../saas-apps/retail-zipline-tutorial.md), [Hoxhunt](../saas-apps/hoxhunt-tutorial.md), [MEVISIO](../saas-apps/mevisio-tutorial.md), [Samsara](../saas-apps/samsara-tutorial.md), [Nimbus](../saas-apps/nimbus-tutorial.md), [Pulse Secure virtual Traffic Manager](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md)
+[Sentry](../saas-apps/sentry-tutorial.md), [Bumblebee - Productivity Superapp](https://app.yellowmessenger.com/user/login), [ABBYY FlexiCapture Cloud](../saas-apps/abbyy-flexicapture-cloud-tutorial.md), [EAComposer](../saas-apps/eacomposer-tutorial.md), [Genesys Cloud Integration for Azure](https://apps.mypurecloud.com/msteams-integration/), [Zone Technologies Portal](https://portail.zonetechnologie.com/signin), [Beautiful.ai](../saas-apps/beautiful.ai-tutorial.md), [Datawiza Access Broker](https://console.datawiza.com/), [ZOKRI](https://app.zokri.com/), [CheckProof](../saas-apps/checkproof-tutorial.md), [Ecochallenge.org](https://events.ecochallenge.org/users/login), [atSpoke](http://atspoke.com/login), [Appointment Reminder](https://app.appointmentreminder.co.nz/account/login), [Cloud.Market](https://cloud.market/), [TravelPerk](../saas-apps/travelperk-tutorial.md), [Greetly](https://app.greetly.com/), [OrgVitality SSO](../saas-apps/orgvitality-sso-tutorial.md), [Web Cargo Air](../saas-apps/web-cargo-air-tutorial.md), [Loop Flow CRM](../saas-apps/loop-flow-crm-tutorial.md), [Starmind](../saas-apps/starmind-tutorial.md), [Workstem](https://hrm.workstem.com/login), [Retail Zipline](../saas-apps/retail-zipline-tutorial.md), [Hoxhunt](../saas-apps/hoxhunt-tutorial.md), [MEVISIO](../saas-apps/mevisio-tutorial.md), [Samsara](../saas-apps/samsara-tutorial.md), [Nimbus](../saas-apps/nimbus-tutorial.md), [Pulse Secure virtual Traffic Manager](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md)
You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/reference-connect-accounts-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-accounts-permissions.md
@@ -13,7 +13,7 @@ ms.workload: identity
ms.tgt_pltfrm: na ms.devlang: na ms.topic: reference
-ms.date: 05/18/2020
+ms.date: 01/04/2021
ms.subservice: hybrid ms.author: billmath
@@ -39,7 +39,7 @@ In addition to these three accounts used to run Azure AD Connect, you will also
- **AD DS Enterprise Administrator account**: Optionally used to create the "AD DS Connector account" above. -- **Azure AD Global Administrator account**: used to create the Azure AD Connector account and configure Azure AD.
+- **Azure AD Global Administrator account**: used to create the Azure AD Connector account and configure Azure AD. You can view global administrator accounts in the Azure portal. See [View Roles](../../active-directory/roles/manage-roles-portal.md#view-all-roles).
- **SQL SA account (optional)**: used to create the ADSync database when using the full version of SQL Server. This SQL Server may be local or remote to the Azure AD Connect installation. This account may be the same account as the Enterprise Administrator. Provisioning the database can now be performed out of band by the SQL administrator and then installed by the Azure AD Connect administrator with database owner rights. For information on this see [Install Azure AD Connect using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-integrate-with-remote-desktop-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-remote-desktop-services.md
@@ -37,18 +37,13 @@ In an RDS deployment, the RD Web role and the RD Gateway role run on Internet-fa
## Requirements

- Both the RD Web and RD Gateway endpoints must be located on the same machine, and with a common root. RD Web and RD Gateway are published as a single application with Application Proxy so that you can have a single sign-on experience between the two applications.

-- You should already have [deployed RDS](/windows-server/remote/remote-desktop-services/rds-in-azure), and [enabled Application Proxy](application-proxy-add-on-premises-application.md).
+- You should already have [deployed RDS](/windows-server/remote/remote-desktop-services/rds-in-azure), and [enabled Application Proxy](application-proxy-add-on-premises-application.md). Ensure you have satisfied the prerequisites to enable Application Proxy, such as installing the connector, opening required ports and URLs, and enabling TLS 1.2 on the server.
- Your end users must use a compatible browser to connect to RD Web or the RD Web client. For more details see [Support for client configurations](#support-for-other-client-configurations).
- When publishing RD Web, it is recommended to use the same internal and external FQDN. If the internal and external FQDNs are different then you should disable Request Header Translation to avoid the client receiving invalid links.
- If you are using RD Web on Internet Explorer, you will need to enable the RDS ActiveX add-on.
- If you are using the RD Web client, you will need to use the Application Proxy [connector version 1.5.1975 or later](./application-proxy-release-version-history.md).
- For the Azure AD pre-authentication flow, users can only connect to resources published to them in the **RemoteApp and Desktops** pane. Users can't connect to a desktop using the **Connect to a remote PC** pane.
+- If you are using Windows Server 2019, you may need to disable the HTTP2 protocol. For more information, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md).
## Deploy the joint RDS and Application Proxy scenario
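On the TLS 1.2 prerequisite called out above, enabling the protocol on the connector server is typically done through the SChannel registry keys. The sketch below is one common approach (run elevated; a reboot is required; verify the exact keys against current Windows TLS guidance):

```powershell
# Enable TLS 1.2 for server-side SChannel connections (sketch; reboot required).
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server'

New-Item -Path $base -Force | Out-Null
New-ItemProperty -Path $base -Name 'Enabled' -Value 1 -PropertyType 'DWord' -Force | Out-Null
New-ItemProperty -Path $base -Name 'DisabledByDefault' -Value 0 -PropertyType 'DWord' -Force | Out-Null
```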
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/imperva-data-security-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/imperva-data-security-tutorial.md
@@ -33,7 +33,7 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Imperva Data Security supports **IDP** initiated SSO
+* Imperva Data Security supports **SP** initiated SSO
## Adding Imperva Data Security from the gallery
@@ -72,11 +72,17 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<IMPERVA_DNS_NAME>:8443`
+ a. In the **Identifier** text box, type an identifier using the following pattern:
+ `application-name`
b. In the **Reply URL** text box, type a URL using the following pattern: `https://<IMPERVA_DNS_NAME>:8443`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<IMPERVA_DNS_NAME>:8443`
+
+ d. In the **Logout URL** text box, type a URL using the following pattern:
+ `https://<IMPERVA_DNS_NAME>:8443`
> [!NOTE] > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Imperva Data Security Client support team](mailto:support@jsonar.imperva.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-input.md
@@ -80,6 +80,62 @@ public static void Run(string myQueueItem, string myInputBlob, out string myOutp
} ```
+# [Java](#tab/java)
+
+This section contains the following examples:
+
+* [HTTP trigger, look up blob name from query string](#http-trigger-look-up-blob-name-from-query-string)
+* [Queue trigger, receive blob name from queue message](#queue-trigger-receive-blob-name-from-queue-message)
+
+#### HTTP trigger, look up blob name from query string
+
+ The following example shows a Java function that uses the `HttpTrigger` annotation to receive a parameter containing the name of a file in a blob storage container. The `BlobInput` annotation then reads the file and passes its contents to the function as a `byte[]`.
+
+```java
+ @FunctionName("getBlobSizeHttp")
+ @StorageAccount("Storage_Account_Connection_String")
+ public HttpResponseMessage blobSize(
+ @HttpTrigger(name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @BlobInput(
+ name = "file",
+ dataType = "binary",
+ path = "samples-workitems/{Query.file}")
+ byte[] content,
+ final ExecutionContext context) {
+ // build HTTP response with size of requested blob
+ return request.createResponseBuilder(HttpStatus.OK)
+ .body("The size of \"" + request.getQueryParameters().get("file") + "\" is: " + content.length + " bytes")
+ .build();
+ }
+```
+
+#### Queue trigger, receive blob name from queue message
+
+ The following example shows a Java function that uses the `QueueTrigger` annotation to receive a message containing the name of a file in a blob storage container. The `BlobInput` annotation then reads the file and passes its contents to the function as a `byte[]`.
+
+```java
+ @FunctionName("getBlobSize")
+ @StorageAccount("Storage_Account_Connection_String")
+ public void blobSize(
+ @QueueTrigger(
+ name = "filename",
+ queueName = "myqueue-items-sample")
+ String filename,
+ @BlobInput(
+ name = "file",
+ dataType = "binary",
+ path = "samples-workitems/{queueTrigger}")
+ byte[] content,
+ final ExecutionContext context) {
+ context.getLogger().info("The size of \"" + filename + "\" is: " + content.length + " bytes");
+ }
+```
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@BlobInput` annotation on parameters whose value would come from a blob. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+ # [JavaScript](#tab/javascript) <!--Same example for input and output. -->
@@ -129,6 +185,35 @@ module.exports = function(context) {
}; ```
+# [PowerShell](#tab/powershell)
+
+The following example shows a blob input binding, defined in the _function.json_ file, which makes the incoming blob data available to the [PowerShell](functions-reference-powershell.md) function.
+
+Here's the json configuration:
+
+```json
+{
+ "bindings": [
+ {
+ "name": "InputBlob",
+ "type": "blobTrigger",
+ "direction": "in",
+ "path": "source/{name}",
+ "connection": "AzureWebJobsStorage"
+ }
+ ]
+}
+```
+
+Here's the function code:
+
+```powershell
+# Input bindings are passed in via param block.
+param([byte[]] $InputBlob, $TriggerMetadata)
+
+Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($InputBlob.Length) bytes"
+```
+ # [Python](#tab/python) <!--Same example for input and output. -->
@@ -178,7 +263,6 @@ The `dataType` property determines which binding is used. The following values a
| `string` | N | Uses generic binding and casts the input type as a `string` | `def main(input: str)` | | `binary` | N | Uses generic binding and casts the input blob as `bytes` Python object | `def main(input: bytes)` | - Here's the Python code: ```python
@@ -191,62 +275,6 @@ def main(queuemsg: func.QueueMessage, inputblob: func.InputStream) -> func.Input
return inputblob ```
-# [Java](#tab/java)
-
-This section contains the following examples:
-
-* [HTTP trigger, look up blob name from query string](#http-trigger-look-up-blob-name-from-query-string)
-* [Queue trigger, receive blob name from queue message](#queue-trigger-receive-blob-name-from-queue-message)
-
-#### HTTP trigger, look up blob name from query string
-
- The following example shows a Java function that uses the `HttpTrigger` annotation to receive a parameter containing the name of a file in a blob storage container. The `BlobInput` annotation then reads the file and passes its contents to the function as a `byte[]`.
-
-```java
- @FunctionName("getBlobSizeHttp")
- @StorageAccount("Storage_Account_Connection_String")
- public HttpResponseMessage blobSize(
- @HttpTrigger(name = "req",
- methods = {HttpMethod.GET},
- authLevel = AuthorizationLevel.ANONYMOUS)
- HttpRequestMessage<Optional<String>> request,
- @BlobInput(
- name = "file",
- dataType = "binary",
- path = "samples-workitems/{Query.file}")
- byte[] content,
- final ExecutionContext context) {
- // build HTTP response with size of requested blob
- return request.createResponseBuilder(HttpStatus.OK)
- .body("The size of \"" + request.getQueryParameters().get("file") + "\" is: " + content.length + " bytes")
- .build();
- }
-```
-
-#### Queue trigger, receive blob name from queue message
-
- The following example shows a Java function that uses the `QueueTrigger` annotation to receive a message containing the name of a file in a blob storage container. The `BlobInput` annotation then reads the file and passes its contents to the function as a `byte[]`.
-
-```java
- @FunctionName("getBlobSize")
- @StorageAccount("Storage_Account_Connection_String")
- public void blobSize(
- @QueueTrigger(
- name = "filename",
- queueName = "myqueue-items-sample")
- String filename,
- @BlobInput(
- name = "file",
- dataType = "binary",
- path = "samples-workitems/{queueTrigger}")
- byte[] content,
- final ExecutionContext context) {
- context.getLogger().info("The size of \"" + filename + "\" is: " + content.length + " bytes");
- }
-```
-
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@BlobInput` annotation on parameters whose value would come from a blob. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
- --- ## Attributes and annotations
@@ -288,17 +316,21 @@ You can use the `StorageAccount` attribute to specify the storage account at cla
Attributes are not supported by C# Script.
+# [Java](#tab/java)
+
+The `@BlobInput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [input example](#example) for details.
+ # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Attributes are not supported by Python.
+Attributes are not supported by PowerShell.
-# [Java](#tab/java)
+# [Python](#tab/python)
-The `@BlobInput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [input example](#example) for details.
+Attributes are not supported by Python.
---
@@ -328,17 +360,21 @@ The following table explains the binding configuration properties that you set i
[!INCLUDE [functions-bindings-blob-storage-input-usage.md](../../includes/functions-bindings-blob-storage-input-usage.md)]
+# [Java](#tab/java)
+
+The `@BlobInput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [input example](#example) for details.
+ # [JavaScript](#tab/javascript) Access blob data using `context.bindings.<NAME>` where `<NAME>` matches the value defined in *function.json*.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Access blob data via the parameter typed as [InputStream](/python/api/azure-functions/azure.functions.inputstream?view=azure-python). Refer to the [input example](#example) for details.
+Access the blob data via a parameter that matches the name designated by the binding's `name` property in the _function.json_ file.
-# [Java](#tab/java)
+# [Python](#tab/python)
-The `@BlobInput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [input example](#example) for details.
+Access blob data via the parameter typed as [InputStream](/python/api/azure-functions/azure.functions.inputstream?view=azure-python&preserve-view=true). Refer to the [input example](#example) for details.
---
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-output.md
@@ -118,6 +118,73 @@ public static void Run(string myQueueItem, string myInputBlob, out string myOutp
} ```
+# [Java](#tab/java)
+
+This section contains the following examples:
+
+* [HTTP trigger, using OutputBinding](#http-trigger-using-outputbinding-java)
+* [Queue trigger, using function return value](#queue-trigger-using-function-return-value-java)
+
+#### HTTP trigger, using OutputBinding (Java)
+
+ The following example shows a Java function that uses the `HttpTrigger` annotation to receive a parameter containing the name of a file in a blob storage container. The `BlobInput` annotation then reads the file and passes its contents to the function as a `byte[]`. The `BlobOutput` annotation binds to `OutputBinding outputItem`, which is then used by the function to write the contents of the input blob to the configured storage container.
+
+```java
+ @FunctionName("copyBlobHttp")
+ @StorageAccount("Storage_Account_Connection_String")
+ public HttpResponseMessage copyBlobHttp(
+ @HttpTrigger(name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @BlobInput(
+ name = "file",
+ dataType = "binary",
+ path = "samples-workitems/{Query.file}")
+ byte[] content,
+ @BlobOutput(
+ name = "target",
+ path = "myblob/{Query.file}-CopyViaHttp")
+ OutputBinding<String> outputItem,
+ final ExecutionContext context) {
+ // Save blob to outputItem
+ outputItem.setValue(new String(content, StandardCharsets.UTF_8));
+
+ // build HTTP response with size of requested blob
+ return request.createResponseBuilder(HttpStatus.OK)
+ .body("The size of \"" + request.getQueryParameters().get("file") + "\" is: " + content.length + " bytes")
+ .build();
+ }
+```
+
+#### Queue trigger, using function return value (Java)
+
+ The following example shows a Java function that uses the `QueueTrigger` annotation to receive a message containing the name of a file in a blob storage container. The `BlobInput` annotation then reads the file and passes its contents to the function as a `byte[]`. The `BlobOutput` annotation binds to the function return value, which is then used by the runtime to write the contents of the input blob to the configured storage container.
+
+```java
+ @FunctionName("copyBlobQueueTrigger")
+ @StorageAccount("Storage_Account_Connection_String")
+ @BlobOutput(
+ name = "target",
+ path = "myblob/{queueTrigger}-Copy")
+ public String copyBlobQueue(
+ @QueueTrigger(
+ name = "filename",
+ dataType = "string",
+ queueName = "myqueue-items")
+ String filename,
+ @BlobInput(
+ name = "file",
+ path = "samples-workitems/{queueTrigger}")
+ String content,
+ final ExecutionContext context) {
+ context.getLogger().info("The content of \"" + filename + "\" is: " + content);
+ return content;
+ }
+```
+
+ In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@BlobOutput` annotation on function parameters whose value would be written to an object in blob storage. The parameter type should be `OutputBinding<T>`, where T is any native Java type or a POJO.
+ # [JavaScript](#tab/javascript) <!--Same example for input and output. -->
@@ -167,6 +234,46 @@ module.exports = function(context) {
}; ```
+# [PowerShell](#tab/powershell)
+
+The following example demonstrates how to create a copy of an incoming blob as the output from a [PowerShell function](functions-reference-powershell.md).
+
+In the function's configuration file (*function.json*), the `trigger` metadata property is used to specify the output blob name in the `path` properties.
+
+> [!NOTE]
+> To avoid infinite loops, make sure your input and output paths are different.
+
+```json
+{
+ "bindings": [
+ {
+ "name": "myInputBlob",
+ "path": "data/{trigger}",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "in",
+ "type": "blobTrigger"
+ },
+ {
+ "name": "myOutputBlob",
+ "type": "blob",
+ "path": "data/copy/{trigger}",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the PowerShell code:
+
+```powershell
+# Input bindings are passed in via param block.
+param([byte[]] $myInputBlob, $TriggerMetadata)
+Write-Host "PowerShell Blob trigger function Processed blob Name: $($TriggerMetadata.Name)"
+Push-OutputBinding -Name myOutputBlob -Value $myInputBlob
+```
+ # [Python](#tab/python) <!--Same example for input and output. -->
@@ -220,73 +327,6 @@ def main(queuemsg: func.QueueMessage, inputblob: func.InputStream,
outputblob.set(inputblob) ```
-# [Java](#tab/java)
-
-This section contains the following examples:
-
-* [HTTP trigger, using OutputBinding](#http-trigger-using-outputbinding-java)
-* [Queue trigger, using function return value](#queue-trigger-using-function-return-value-java)
-
-#### HTTP trigger, using OutputBinding (Java)
-
- The following example shows a Java function that uses the `HttpTrigger` annotation to receive a parameter containing the name of a file in a blob storage container. The `BlobInput` annotation then reads the file and passes its contents to the function as a `byte[]`. The `BlobOutput` annotation binds to `OutputBinding outputItem`, which is then used by the function to write the contents of the input blob to the configured storage container.
-
-```java
- @FunctionName("copyBlobHttp")
- @StorageAccount("Storage_Account_Connection_String")
- public HttpResponseMessage copyBlobHttp(
- @HttpTrigger(name = "req",
- methods = {HttpMethod.GET},
- authLevel = AuthorizationLevel.ANONYMOUS)
- HttpRequestMessage<Optional<String>> request,
- @BlobInput(
- name = "file",
- dataType = "binary",
- path = "samples-workitems/{Query.file}")
- byte[] content,
- @BlobOutput(
- name = "target",
- path = "myblob/{Query.file}-CopyViaHttp")
- OutputBinding<String> outputItem,
- final ExecutionContext context) {
- // Save blob to outputItem
- outputItem.setValue(new String(content, StandardCharsets.UTF_8));
-
- // build HTTP response with size of requested blob
- return request.createResponseBuilder(HttpStatus.OK)
- .body("The size of \"" + request.getQueryParameters().get("file") + "\" is: " + content.length + " bytes")
- .build();
- }
-```
-
-#### Queue trigger, using function return value (Java)
-
- The following example shows a Java function that uses the `QueueTrigger` annotation to receive a message containing the name of a file in a blob storage container. The `BlobInput` annotation then reads the file and passes its contents to the function as a `byte[]`. The `BlobOutput` annotation binds to the function return value, which is then used by the runtime to write the contents of the input blob to the configured storage container.
-
-```java
- @FunctionName("copyBlobQueueTrigger")
- @StorageAccount("Storage_Account_Connection_String")
- @BlobOutput(
- name = "target",
- path = "myblob/{queueTrigger}-Copy")
- public String copyBlobQueue(
- @QueueTrigger(
- name = "filename",
- dataType = "string",
- queueName = "myqueue-items")
- String filename,
- @BlobInput(
- name = "file",
- path = "samples-workitems/{queueTrigger}")
- String content,
- final ExecutionContext context) {
- context.getLogger().info("The content of \"" + filename + "\" is: " + content);
- return content;
- }
-```
-
- In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@BlobOutput` annotation on function parameters whose value would be written to an object in blob storage. The parameter type should be `OutputBinding<T>`, where T is any native Java type or a POJO.
- --- ## Attributes and annotations
@@ -323,17 +363,21 @@ public static void Run(
Attributes are not supported by C# Script.
+# [Java](#tab/java)
+
+The `@BlobOutput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [output example](#example) for details.
+ # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Attributes are not supported by Python.
+Attributes are not supported by PowerShell.
-# [Java](#tab/java)
+# [Python](#tab/python)
-The `@BlobOutput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [output example](#example) for details.
+Attributes are not supported by Python.
---
@@ -366,9 +410,17 @@ The following table explains the binding configuration properties that you set i
[!INCLUDE [functions-bindings-blob-storage-output-usage.md](../../includes/functions-bindings-blob-storage-output-usage.md)]
+# [Java](#tab/java)
+
+The `@BlobOutput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [output example](#example) for details.
+ # [JavaScript](#tab/javascript)
-In JavaScript, access the blob data using `context.bindings.<name from function.json>`.
+Access the blob data using `context.bindings.<BINDING_NAME>`, where the binding name is defined in the _function.json_ file.
+
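For instance, a minimal sketch of writing to an output binding named `outputBlob` (a hypothetical name you'd declare in _function.json_) could look like this:

```javascript
module.exports = async function (context) {
    // "outputBlob" must match the output binding's name in function.json
    context.bindings.outputBlob = "Content written to the output blob";
};
```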
+# [PowerShell](#tab/powershell)
+
+Write to the blob by using the `Push-OutputBinding` cmdlet, passing the name designated by the binding's `name` parameter in the _function.json_ file.
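A minimal sketch, assuming an output binding named `target` in _function.json_ and a trigger input named `InputBlob`:

```powershell
param($InputBlob, $TriggerMetadata)

# Write the input straight through to the blob output binding named "target"
Push-OutputBinding -Name target -Value $InputBlob
```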
# [Python](#tab/python)
@@ -379,10 +431,6 @@ You can declare function parameters as the following types to write out to blob
Refer to the [output example](#example) for details.
-# [Java](#tab/java)
-
-The `@BlobOutput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [output example](#example) for details.
- --- ## Exceptions and return codes
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-trigger.md
@@ -12,7 +12,7 @@ ms.custom: "devx-track-csharp, devx-track-python"
The Blob storage trigger starts a function when a new or updated blob is detected. The blob contents are provided as [input to the function](./functions-bindings-storage-blob-input.md).
-The Azure Blob storage trigger requires a general-purpose storage account. Storage V2 accounts with [hierarchal namespaces](../storage/blobs/data-lake-storage-namespace.md) are also supported. To use a blob-only account, or if your application has specialized needs, review the alternatives to using this trigger.
+The Azure Blob storage trigger requires a general-purpose storage account. Storage V2 accounts with [hierarchical namespaces](../storage/blobs/data-lake-storage-namespace.md) are also supported. To use a blob-only account, or if your application has specialized needs, review the alternatives to using this trigger.
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
@@ -109,6 +109,24 @@ public static void Run(CloudBlockBlob myBlob, string name, ILogger log)
} ```
+# [Java](#tab/java)
+
+This function writes a log when a blob is added or updated in the `myblob` container.
+
+```java
+@FunctionName("blobprocessor")
+public void run(
+ @BlobTrigger(name = "file",
+ dataType = "binary",
+ path = "myblob/{name}",
+ connection = "MyStorageAccountAppSetting") byte[] content,
+ @BindingName("name") String filename,
+ final ExecutionContext context
+) {
+ context.getLogger().info("Name: " + filename + " Size: " + content.length + " bytes");
+}
+```
+ # [JavaScript](#tab/javascript) The following example shows a blob trigger binding in a *function.json* file and [JavaScript code](functions-reference-node.md) that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` container.
@@ -143,6 +161,34 @@ module.exports = function(context) {
}; ```
+# [PowerShell](#tab/powershell)
+
+The following example demonstrates how to create a function that runs when a file is added to the `source` blob storage container.
+
+The function configuration file (_function.json_) includes a binding with the `type` of `blobTrigger` and `direction` set to `in`.
+
+```json
+{
+ "bindings": [
+ {
+ "name": "InputBlob",
+ "type": "blobTrigger",
+ "direction": "in",
+ "path": "source/{name}",
+ "connection": "MyStorageAccountConnectionString"
+ }
+ ]
+}
+```
+
+Here's the associated code for the _run.ps1_ file.
+
+```powershell
+param([byte[]] $InputBlob, $TriggerMetadata)
+
+Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($InputBlob.Length) bytes"
+```
+ # [Python](#tab/python) The following example shows a blob trigger binding in a *function.json* file and [Python code](functions-reference-python.md) that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
@@ -180,24 +226,6 @@ def main(myblob: func.InputStream):
logging.info('Python Blob trigger function processed %s', myblob.name) ```
-# [Java](#tab/java)
-
-This function writes a log when a blob is added or updated in the `myblob` container.
-
-```java
-@FunctionName("blobprocessor")
-public void run(
- @BlobTrigger(name = "file",
- dataType = "binary",
- path = "myblob/{name}",
- connection = "MyStorageAccountAppSetting") byte[] content,
- @BindingName("name") String filename,
- final ExecutionContext context
-) {
- context.getLogger().info("Name: " + filename + " Size: " + content.length + " bytes");
-}
-```
- --- ## Attributes and annotations
@@ -262,17 +290,21 @@ The storage account to use is determined in the following order:
Attributes are not supported by C# Script.
+# [Java](#tab/java)
+
+The `@BlobTrigger` attribute gives you access to the blob that triggered the function. Refer to the [trigger example](#example) for details.
+ # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Attributes are not supported by Python.
+Attributes are not supported by PowerShell.
-# [Java](#tab/java)
+# [Python](#tab/python)
-The `@BlobTrigger` attribute is used to give you access to the blob that triggered the function. Refer to the [trigger example](#example) for details.
+Attributes are not supported by Python.
---
@@ -300,17 +332,21 @@ The following table explains the binding configuration properties that you set i
[!INCLUDE [functions-bindings-blob-storage-trigger](../../includes/functions-bindings-blob-storage-trigger.md)]
+# [Java](#tab/java)
+
+The `@BlobTrigger` attribute gives you access to the blob that triggered the function. Refer to the [trigger example](#example) for details.
+ # [JavaScript](#tab/javascript) Access blob data using `context.bindings.<NAME>` where `<NAME>` matches the value defined in *function.json*.
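A brief illustrative sketch, assuming a trigger binding named `myBlob` in *function.json*:

```javascript
module.exports = async function (context) {
    // "myBlob" matches the trigger binding's name in function.json
    const blob = context.bindings.myBlob;
    context.log(`Blob received: ${blob.length} bytes`);
};
```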
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Access blob data via the parameter typed as [InputStream](/python/api/azure-functions/azure.functions.inputstream?view=azure-python). Refer to the [trigger example](#example) for details.
+Access the blob data via a parameter that matches the name designated by the binding's `name` parameter in the _function.json_ file.
-# [Java](#tab/java)
+# [Python](#tab/python)
-The `@BlobTrigger` attribute is used to give you access to the blob that triggered the function. Refer to the [trigger example](#example) for details.
+Access blob data via the parameter typed as [InputStream](/python/api/azure-functions/azure.functions.inputstream?view=azure-python&preserve-view=true). Refer to the [trigger example](#example) for details.
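For example, a minimal handler (the binding name `myblob` is assumed to match *function.json*):

```python
import logging
import azure.functions as func

def main(myblob: func.InputStream):
    # InputStream exposes name, length, and read()
    logging.info('Processing %s (%d bytes)', myblob.name, myblob.length)
```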
---
@@ -369,6 +405,10 @@ If the blob is named *{20140101}-soundfile.mp3*, the `name` variable value in th
[!INCLUDE [functions-bindings-blob-storage-trigger](../../includes/functions-bindings-blob-storage-metadata.md)]
+# [Java](#tab/java)
+
+Metadata is not available in Java.
+ # [JavaScript](#tab/javascript) ```javascript
@@ -378,13 +418,13 @@ module.exports = function (context, myBlob) {
}; ```
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Metadata is not available in Python.
+Metadata is available through the `$TriggerMetadata` parameter.
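For example, a sketch reading common blob trigger metadata (property names per the metadata table above):

```powershell
param([byte[]] $InputBlob, $TriggerMetadata)

# $TriggerMetadata exposes properties such as Name and Uri for blob triggers
Write-Host "Name: $($TriggerMetadata.Name)"
Write-Host "Uri: $($TriggerMetadata.Uri)"
```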
-# [Java](#tab/java)
+# [Python](#tab/python)
-Metadata is not available in Java.
+Metadata is not available in Python.
---
@@ -394,11 +434,11 @@ The Azure Functions runtime ensures that no blob trigger function gets called mo
Azure Functions stores blob receipts in a container named *azure-webjobs-hosts* in the Azure storage account for your function app (defined by the app setting `AzureWebJobsStorage`). A blob receipt has the following information:
-* The triggered function ("*&lt;function app name>*.Functions.*&lt;function name>*", for example: "MyFunctionApp.Functions.CopyBlob")
+* The triggered function (`<FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>`, for example: `MyFunctionApp.Functions.CopyBlob`)
* The container name
-* The blob type ("BlockBlob" or "PageBlob")
+* The blob type (`BlockBlob` or `PageBlob`)
* The blob name
-* The ETag (a blob version identifier, for example: "0x8D1DC6E70A277EF")
+* The ETag (a blob version identifier, for example: `0x8D1DC6E70A277EF`)
To force reprocessing of a blob, delete the blob receipt for that blob from the *azure-webjobs-hosts* container manually. While reprocessing might not occur immediately, it's guaranteed to occur at a later point in time. To reprocess immediately, the *scaninfo* blob in *azure-webjobs-hosts/blobscaninfo* can be updated. Any blobs with a last modified timestamp after the `LatestScan` property will be scanned again.
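As a hedged sketch, receipts can be inspected and removed with the Azure CLI; the receipt path layout below is an assumption, so list first and confirm against your own container:

```azurecli
# List candidate receipt blobs (prefix layout is an assumption; verify in your account)
az storage blob list --container-name azure-webjobs-hosts \
    --prefix "blobreceipts/" --connection-string "$AzureWebJobsStorage" --output table

# Delete one receipt to force the corresponding blob to be reprocessed
az storage blob delete --container-name azure-webjobs-hosts \
    --name "blobreceipts/<host-id>/MyFunctionApp.Functions.CopyBlob/<etag>/<container>/<blob-name>" \
    --connection-string "$AzureWebJobsStorage"
```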
@@ -408,11 +448,11 @@ When a blob trigger function fails for a given blob, Azure Functions retries tha
If all 5 tries fail, Azure Functions adds a message to a Storage queue named *webjobs-blobtrigger-poison*. The maximum number of retries is configurable. The same MaxDequeueCount setting is used for poison blob handling and poison queue message handling. The queue message for poison blobs is a JSON object that contains the following properties:
-* FunctionId (in the format *&lt;function app name>*.Functions.*&lt;function name>*)
-* BlobType ("BlockBlob" or "PageBlob")
+* FunctionId (in the format `<FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>`)
+* BlobType (`BlockBlob` or `PageBlob`)
* ContainerName * BlobName
-* ETag (a blob version identifier, for example: "0x8D1DC6E70A277EF")
+* ETag (a blob version identifier, for example: `0x8D1DC6E70A277EF`)
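Putting those properties together, a poison queue message looks roughly like this (container, blob name, and ETag values are illustrative):

```json
{
  "FunctionId": "MyFunctionApp.Functions.CopyBlob",
  "BlobType": "BlockBlob",
  "ContainerName": "samples-workitems",
  "BlobName": "workitem-1.txt",
  "ETag": "0x8D1DC6E70A277EF"
}
```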
## Concurrency and memory usage
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-overview.md
@@ -11,7 +11,7 @@ ms.custom: contperf-fy21q2
# Introduction to Azure Functions
-Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date servers needed to keep your applications running.
+Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running.
You focus on the pieces of code that matter most to you, and Azure Functions handles the rest.<br /><br />
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/change-analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis.md
@@ -180,7 +180,6 @@ If it's the first time you view Change history after its integration with Applic
``` - **Failed to register Microsoft.ChangeAnalysis resource provider**. This message means something failed immediately as the UI sent request to register the resource provider, and it's not related to permission issue. Likely it might be a temporary internet connectivity issue. Try refreshing the page and checking your internet connection. If the error persists, contact changeanalysishelp@microsoft.com-- **Failed to query Microsoft.ChangeAnalysis resource provider** with message *Azure lighthouse subscription is not supported, the changes are only available in the subscription's home tenant*. There is a limitation right now for Change Analysis resource provider to be registered through Azure Lighthouse subscription for users not in home tenant. We expect this limitation to be addressed in the near future. If this is a blocking issue for you, there is a workaround that involves creating a service principal and explicitly assigning the role to allow the access. Contact changeanalysishelp@microsoft.com to learn more about it. - **This is taking longer than expected**. This message means the registration is taking longer than 2 minutes. This is unusual but does not necessarily mean something went wrong. You can go to **Subscriptions | Resource provider** to check for **Microsoft.ChangeAnalysis** resource provider registration status. You can try to use the UI to unregister, re-register or refresh to see if it helps. If issue persists, contact changeanalysishelp@microsoft.com for support. ![Troubleshoot RP registration taking too long](./media/change-analysis/troubleshoot-registration-taking-too-long.png)
@@ -189,6 +188,10 @@ If it's the first time you view Change history after its integration with Applic
![Screenshot of the tile for the Analyze recent changes troubleshooting tool for a Virtual Machine.](./media/change-analysis/analyze-recent-changes.png)
+### Azure Lighthouse subscription is not supported
+
+- **Failed to query Microsoft.ChangeAnalysis resource provider** with message *Azure lighthouse subscription is not supported, the changes are only available in the subscription's home tenant*. Currently, the Change Analysis resource provider can't be registered through an Azure Lighthouse subscription by users outside the home tenant. We expect this limitation to be addressed in the near future. If this is a blocking issue for you, there is a workaround: create a service principal and explicitly assign the role that allows access. Contact changeanalysishelp@microsoft.com to learn more.
+ ## Next steps - Enable Application Insights for [Azure App Services apps](azure-web-apps.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript.md
@@ -335,7 +335,7 @@ Breaking changes in the SDK V2 version:
- To allow for better API signatures, some of the API calls, such as trackPageView and trackException, have been updated. Running in Internet Explorer 8 and earlier versions of the browser is not supported. - The telemetry envelope has field name and structure changes due to data schema updates. - Moved `context.operation` to `context.telemetryTrace`. Some fields were also changed (`operation.id` --> `telemetryTrace.traceID`).
- - To manually refresh the current pageview ID (for example, in SPA apps), use `appInsights.properties.context.telemetryTrace.traceID = Util.generateW3CId()`.
+ - To manually refresh the current pageview ID (for example, in SPA apps), use `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`.
> [!NOTE] > To keep the trace ID unique, where you previously used `Util.newId()`, now use `Util.generateW3CId()`. Both ultimately end up being the operation ID.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/learn/tutorial-metrics-explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/learn/tutorial-metrics-explorer.md
@@ -35,7 +35,7 @@ You can open metrics explorer either from the Azure Monitor menu or from a resou
1. Select **Metrics** from the **Azure Monitor** menu or from the **Monitoring** section of a resource's menu.
-1. Select the **Scope**, which is the resource you want to see metrics for. The scope is already populated if you opened metrics explorer from a resource's menu.
+1. Select the **Scope**, which is the resource you want to see metrics for. The scope is already populated if you opened metrics explorer from a resource's menu. To learn more about the capabilities of the resource scope picker, see the [resource scope picker](../platform/metrics-charts.md#resource-scope-picker) documentation.
![Select a scope](media/tutorial-metrics-explorer/scope-picker.png)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-dynamic-thresholds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-dynamic-thresholds.md
@@ -4,7 +4,7 @@ description: Create Alerts with machine learning based Dynamic Thresholds
author: yanivlavi ms.author: yalavi ms.topic: conceptual
-ms.date: 02/16/2020
+ms.date: 01/04/2021
--- # Metric Alerts with Dynamic Thresholds in Azure Monitor
@@ -34,7 +34,7 @@ Dynamic Thresholds continuously learns the data of the metric series and tries t
The thresholds are selected in such a way that a deviation from these thresholds indicates an anomaly in the metric behavior. > [!NOTE]
-> Seasonal pattern detection is set to a hour, day, or week interval. This means other patterns like bihourly pattern or semiweekly might not be detected.
+> Dynamic Thresholds can detect seasonality for hourly, daily, or weekly patterns. Other patterns like bi-hourly or semi-weekly seasonality might not be detected. To detect weekly seasonality, at least three weeks of historical data are required.
## What does 'Sensitivity' setting in Dynamic Thresholds mean?
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/diagnostic-settings.md
@@ -29,7 +29,7 @@ The following video walks you through routing platform logs with diagnostic sett
> [Platform metrics](metrics-supported.md) are sent automatically to [Azure Monitor Metrics](data-platform-metrics.md). Diagnostic settings can be used to send metrics for certain Azure services into Azure Monitor Logs for analysis with other monitoring data using [log queries](../log-query/log-query-overview.md) with certain limitations. > >
-> Sending multi-dimensional metrics via diagnostic settings is not currently supported. Metrics with dimensions are exported as flattened single dimensional metrics, aggregated across dimension values. *For example*: The 'IOReadBytes' metric on an Blockchain can be explored and charted on a per node level. However, when exported via diagnostic settings, the metric exported represents as all read bytes for all nodes. In addition, due to internal limitations not all metrics are exportable to Azure Monitor Logs / Log Analytics. For more information, see the [list of exportable metrics](metrics-supported-export-diagnostic-settings.md).
+> Sending multi-dimensional metrics via diagnostic settings is not currently supported. Metrics with dimensions are exported as flattened single dimensional metrics, aggregated across dimension values. *For example*: The 'IOReadBytes' metric on a Blockchain can be explored and charted on a per node level. However, when exported via diagnostic settings, the exported metric represents all read bytes for all nodes. In addition, due to internal limitations not all metrics are exportable to Azure Monitor Logs / Log Analytics. For more information, see the [list of exportable metrics](metrics-supported-export-diagnostic-settings.md).
> > > To get around these limitations for specific metrics, we suggest you manually extract them using the [Metrics REST API](/rest/api/monitor/metrics/list) and import them into Azure Monitor Logs using the [Azure Monitor Data collector API](data-collector-api.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-definition.md
@@ -125,7 +125,22 @@ Use the following procedure to create action groups:
9. If you select **Create individual work items for each Configuration Item**, every configuration item will have its own work item. Meaning there will be one work item per configuration item.
- * In a case you select in the work item dropdown "Incident" or "Alert": If you clear the **Create individual work items for each Configuration Item** check box, every alert will create a new work item. There can be more than one alert per configuration item.
+ * In case you select "Incident" or "Alert" in the work item dropdown:
+ * If you check the **Create individual work items for each Configuration Item** check box, every configuration item of every alert will create its own work item. There can be more than one work item per configuration item in the ITSM system.
+
+ For example:
+ 1) Alert 1 with 3 Configuration Items: A, B, C will create 3 work items.
+ 2) Alert 2 with 1 Configuration Item: D will create 1 work item.
+
+ **By the end of this flow there will be 4 work items**
+ * If you clear the **Create individual work items for each Configuration Item** check box, some alerts won't create a new work item; work items are merged according to the alert rule.
+
+ For example:
+ 1) Alert 1 with 3 Configuration Items: A, B, C will create 1 work item.
+ 2) Alert 2 for the same alert rule as step 1 with 1 Configuration Item: D will be merged into the work item from step 1.
+ 3) Alert 3 for a different alert rule with 1 Configuration Item: E will create 1 work item.
+
+ **By the end of this flow there will be 2 work items**
![Screenshot that shows the ITSM Incident window.](media/itsmc-overview/itsm-action-configuration.png)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/manage-cost-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/manage-cost-storage.md
@@ -11,7 +11,7 @@ ms.service: azure-monitor
ms.workload: na ms.tgt_pltfrm: na ms.topic: conceptual
-ms.date: 12/16/2020
+ms.date: 12/24/2020
ms.author: bwren ms.subservice: ---
@@ -128,9 +128,9 @@ None of the legacy pricing tiers has regional-based pricing.
## Change the data retention period
-The following steps describe how to configure how long log data is kept by in your workspace. Data retention can be configured from 30 to 730 days (2 years) for all workspaces unless they are using the legacy Free pricing tier.[Learn more](https://azure.microsoft.com/pricing/details/monitor/) about pricing for longer data retention.
+The following steps describe how to configure how long log data is kept in your workspace. Data retention at the workspace level can be configured from 30 to 730 days (2 years) for all workspaces unless they are using the legacy Free pricing tier. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about pricing for longer data retention. Retention for individual data types can be set as low as 4 days.
-### Default retention
+### Workspace level default retention
To set the default retention for your workspace,
@@ -154,7 +154,7 @@ Note that the Log Analytics [purge API](/rest/api/loganalytics/workspacepurge/pu
### Retention by data type
-It is also possible to specify different retention settings for individual data types from 30 to 730 days (except for workspaces in the legacy Free pricing tier). Each data type is a sub-resource of the workspace. For instance the SecurityEvent table can be addressed in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
+It is also possible to specify different retention settings for individual data types from 4 to 730 days (except for workspaces in the legacy Free pricing tier) that override the workspace-level default retention. Each data type is a sub-resource of the workspace. For instance, the SecurityEvent table can be addressed in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
``` /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent
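As a sketch, a PUT against that sub-resource sets the per-table retention; the `api-version` below is an assumption, so check the current Azure Resource Manager reference:

```azurecli
az rest --method put \
  --url "https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview" \
  --body '{"properties": {"retentionInDays": 90}}'
```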
@@ -347,7 +347,8 @@ Usage
| where TimeGenerated > ago(32d) | where StartTime >= startofday(ago(31d)) and EndTime < startofday(now()) | where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution | render barchart
+| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution
+| render columnchart
``` The clause with `TimeGenerated` is only to ensure that the query experience in the Azure portal will look back beyond the default 24 hours. When using the Usage data type, `StartTime` and `EndTime` represent the time buckets for which results are presented.
@@ -361,7 +362,8 @@ Usage
| where TimeGenerated > ago(32d) | where StartTime >= startofday(ago(31d)) and EndTime < startofday(now()) | where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), DataType | render barchart
+| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), DataType
+| render columnchart
``` Or to see a table by solution and type for the last month,
@@ -661,4 +663,5 @@ There are some additional Log Analytics limits, some of which depend on the Log
- To configure an effective event collection policy, review [Azure Security Center filtering policy](../../security-center/security-center-enable-data-collection.md). - Change [performance counter configuration](data-sources-performance-counters.md). - To modify your event collection settings, review [event log configuration](data-sources-windows-events.md).
+- To modify your syslog collection settings, review [syslog configuration](data-sources-syslog.md).
- To modify your syslog collection settings, review [syslog configuration](data-sources-syslog.md).\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metrics-supported.md
@@ -4,7 +4,7 @@ description: List of metrics available for each resource type with Azure Monitor
author: rboucher services: azure-monitor ms.topic: reference
-ms.date: 12/09/2020
+ms.date: 01/04/2021
ms.author: robb ms.subservice: metrics ---
@@ -612,8 +612,8 @@ For important additional information, see [Monitoring Agents Overview](agents-ov
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |---|---|---|---|---|---|---|
-|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine|No Dimensions|
-|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst|No Dimensions|
+|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine. Only available on [B-series burstable VMs](../../virtual-machines/sizes-b-series-burstable.md).|No Dimensions|
+|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst. Only available on [B-series burstable VMs](../../virtual-machines/sizes-b-series-burstable.md).|No Dimensions|
|Data Disk Bandwidth Consumed Percentage|Yes|Data Disk Bandwidth Consumed Percentage|Percent|Average|Percentage of data disk bandwidth consumed per minute|LUN| |Data Disk IOPS Consumed Percentage|Yes|Data Disk IOPS Consumed Percentage|Percent|Average|Percentage of data disk I/Os consumed per minute|LUN| |Data Disk Queue Depth|Yes|Data Disk Queue Depth (Preview)|Count|Average|Data Disk Queue Depth(or Queue Length)|LUN|
@@ -665,8 +665,8 @@ For important additional information, see [Monitoring Agents Overview](agents-ov
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |---|---|---|---|---|---|---|
-|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine|No Dimensions|
-|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst|No Dimensions|
+|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine. Only available on [B-series burstable VMs](../../virtual-machines/sizes-b-series-burstable.md).|No Dimensions|
+|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst. Only available on [B-series burstable VMs](../../virtual-machines/sizes-b-series-burstable.md).|No Dimensions|
|Data Disk Queue Depth|Yes|Data Disk Queue Depth (Preview)|Count|Average|Data Disk Queue Depth(or Queue Length)|LUN, VMName| |Data Disk Read Bytes/sec|Yes|Data Disk Read Bytes/Sec (Preview)|CountPerSecond|Average|Bytes/Sec read from a single disk during monitoring period|LUN, VMName| |Data Disk Read Operations/Sec|Yes|Data Disk Read Operations/Sec (Preview)|CountPerSecond|Average|Read IOPS from a single disk during monitoring period|LUN, VMName|
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-solution-architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: conceptual
-ms.date: 12/15/2020
+ms.date: 01/04/2021
ms.author: b-juche --- # Solution architectures using Azure NetApp Files
@@ -94,6 +94,7 @@ This section provides references for Virtual Desktop infrastructure solutions.
* [Create an FSLogix profile container for a host pool using Azure NetApp Files](../virtual-desktop/create-fslogix-profile-container.md) * [Windows Virtual Desktop at enterprise scale](/azure/architecture/example-scenario/wvd/windows-virtual-desktop) * [Microsoft FSLogix for the enterprise - Azure NetApp Files best practices](/azure/architecture/example-scenario/wvd/windows-virtual-desktop-fslogix#azure-netapp-files-best-practices)
+* [Setting up Azure NetApp Files for MSIX App Attach](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/setting-up-azure-netapp-files-for-msix-app-attach-step-by-step/m-p/1990021)
## HPC solutions
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/create-volumes-dual-protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: how-to
-ms.date: 12/15/2020
+ms.date: 01/04/2021
ms.author: b-juche --- # Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
@@ -34,7 +34,7 @@ Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
* Create a reverse lookup zone on the DNS server and then add a pointer (PTR) record of the AD host machine in that reverse lookup zone. Otherwise, the dual-protocol volume creation will fail. * Ensure that the NFS client is up to date and running the latest updates for the operating system. * Ensure that the Active Directory (AD) LDAP server is up and running on the AD. You can do so by installing and configuring the [Active Directory Lightweight Directory Services (AD LDS)](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/hh831593(v=ws.11)) role on the AD machine.
-* Ensure that a certificate authority (CA) is created on the AD using the [Active Directory Certificate Services (AD CS)](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) role to generate and export the self-signed root CA certificate.
+* Ensure that a certificate authority (CA) is created for the AD using the [Active Directory Certificate Services (AD CS)](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) role to generate and export the self-signed root CA certificate.
* Dual-protocol volumes do not currently support Azure Active Directory Domain Services (AADDS). * The NFS version used by a dual-protocol volume is NFSv3. As such, the following considerations apply: * Dual protocol does not support the Windows ACLS extended attributes `set/get` from NFS clients.
@@ -127,7 +127,8 @@ Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
* A Windows-based client that has joined the domain and has the root certificate installed * Another machine in the domain containing the root certificate
-3. Export the root certificate.
+3. Export the root CA certificate.
+ Root CA certificates can be exported from the Personal or Trusted Root Certification Authorities stores.
Ensure that the certificate is exported in the Base-64 encoded X.509 (.CER) format: ![Certificate Export Wizard](../media/azure-netapp-files/certificate-export-wizard.png)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/overview.md
@@ -97,7 +97,7 @@ The Azure Resource Manager service is designed for resiliency and continuous ava
* Distributed across regions. Some services are regional.
-* Distributed across Availability Zones (as well regions) in locations that have multiple Availability Zones.
+* Distributed across Availability Zones (as well as regions) in locations that have multiple Availability Zones.
* Not dependent on a single logical data center.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/tag-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
@@ -2,7 +2,7 @@
title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. ms.topic: conceptual
-ms.date: 12/03/2020
+ms.date: 01/04/2021
ms.custom: devx-track-azurecli --- # Use tags to organize your Azure resources and management hierarchy
@@ -432,9 +432,12 @@ If your tag names or values include spaces, enclose them in double quotes.
az tag update --resource-id $group --operation Merge --tags "Cost Center"=Finance-1222 Location="West US" ```
-## Templates
+## ARM templates
-You can tag resources, resource groups, and subscriptions during deployment with a Resource Manager template.
+You can tag resources, resource groups, and subscriptions during deployment with an Azure Resource Manager template (ARM template).
+
+> [!NOTE]
+> The tags you apply through the ARM template overwrite any existing tags.
### Apply values
@@ -442,7 +445,7 @@ The following example deploys a storage account with three tags. Two of the tags
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "utcShort": {
@@ -481,7 +484,7 @@ You can define an object parameter that stores several tags, and apply that obje
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "location": {
@@ -519,7 +522,7 @@ To store many values in a single tag, apply a JSON string that represents the va
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "location": {
@@ -552,7 +555,7 @@ To apply tags from a resource group to a resource, use the [resourceGroup()](../
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "location": {
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
@@ -2,7 +2,7 @@
title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. ms.topic: conceptual
-ms.date: 11/18/2020
+ms.date: 01/04/2021
--- # Resource functions for ARM templates
@@ -169,7 +169,7 @@ Built-in policy definitions are tenant level resources. For an example of deploy
`list{Value}(resourceName or resourceIdentifier, apiVersion, functionValues)`
-The syntax for this function varies by name of the list operations. Each implementation returns values for the resource type that supports a list operation. The operation name must start with `list`. Some common usages are `listKeys`, `listKeyValue`, and `listSecrets`.
+The syntax for this function varies by name of the list operations. Each implementation returns values for the resource type that supports a list operation. The operation name must start with `list` and may have a suffix. Some common usages are `list`, `listKeys`, `listKeyValue`, and `listSecrets`.
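For example, a common pattern retrieves a storage account key in a template's outputs section (the parameter name `storageName` is illustrative):

```json
"outputs": {
  "storageKey": {
    "type": "string",
    "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2019-06-01').keys[0].value]"
  }
}
```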
### Parameters
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/signalr-concept-serverless-development-config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-concept-serverless-development-config.md
@@ -44,11 +44,11 @@ To learn about how to create an authenticated token, refer to [Using App Service
### Handle messages sent from SignalR Service
-Use the *SignalR Trigger* binding to handle messages sent from SignalR Service. You can be triggered when clients send messages or clients get connected or disconnected.
+Use the *SignalR Trigger* binding to handle messages sent from SignalR Service. You can get notified when clients send messages or clients get connected or disconnected.
For more information, see the [*SignalR trigger* binding reference](../azure-functions/functions-bindings-signalr-service-trigger.md).
-You also need to configure your function endpoint as an upstream so that service will trigger the function where there is message from client. For more information about how to configure upstream, please refer to this [doc](concept-upstream.md).
+You also need to configure your function endpoint as an upstream so that the service will trigger the function when there is a message from a client. For more information about how to configure an upstream, see the [upstream settings article](concept-upstream.md).
### Sending messages and managing group membership
@@ -322,4 +322,4 @@ For information on other languages, see the [Azure SignalR Service bindings](../
## Next steps
-In this article, you have learned how to develop and configure serverless SignalR Service applications using Azure Functions. Try creating an application yourself using one of the quick starts or tutorials on the [SignalR Service overview page](index.yml).
\ No newline at end of file
+In this article, you have learned how to develop and configure serverless SignalR Service applications using Azure Functions. Try creating an application yourself using one of the quick starts or tutorials on the [SignalR Service overview page](index.yml).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-vcore-elastic-pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
@@ -120,7 +120,7 @@ You can set the service tier, compute size (service objective), and storage amou
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1, 2|0, 0.25, 0.5, 1...4|0, 0.25, 0.5, 1...6|0, 0.25, 0.5, 1...8|0, 0.25, 0.5, 1...10|0, 0.25, 0.5, 1...12|0, 0.25, 0.5, 1...14| |Number of replicas|1|1|1|1|1|1|1|
-|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
+|Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|N/A|N/A| |Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
@@ -152,7 +152,7 @@ You can set the service tier, compute size (service objective), and storage amou
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1...16|0, 0.25, 0.5, 1...18|0, 0.25, 0.5, 1...20|0, 0.25, 0.5, 1...20, 24|0, 0.25, 0.5, 1...20, 24, 32|0, 0.25, 0.5, 1...16, 24, 32, 40|0, 0.25, 0.5, 1...16, 24, 32, 40, 80| |Number of replicas|1|1|1|1|1|1|1|
-|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
+|Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|N/A|N/A| |Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-vcore-single-databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
@@ -301,7 +301,7 @@ The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max concurrent workers (requests)|200|400|600|800|1000|1200|1400| |Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1|1|1|
-|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
+|Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|N/A|N/A| |Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
@@ -326,7 +326,7 @@ The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max concurrent workers (requests)|1600|1800|2000|2400|3200|4000|8000| |Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1|1|1|
-|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
+|Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|N/A|N/A| |Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/faq.md
@@ -2,7 +2,7 @@
title: Frequently asked questions description: Provides answers to some of the common questions about Azure VMware Solution. ms.topic: conceptual
-ms.date: 12/22/2020
+ms.date: 1/4/2021
--- # Frequently asked questions about Azure VMware Solution
@@ -196,7 +196,7 @@ Azure Virtual WAN doesn't provide transitive routing between two connected Expre
#### Could I use HCX through public Internet communications as a workaround for the non-supportability of HCX when using VPN S2S with vWAN for on-premises communications?
-Currently, the only supported method for HCX is through ExpressRoute.
+Currently, the only supported method for VMware HCX is through ExpressRoute.
## Accounts and privileges
backup https://docs.microsoft.com/en-us/azure/backup/backup-center-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-faq.md
@@ -29,7 +29,9 @@ No. Backup Center comes ready out of the box. However, to view [Backup Reports](
### Do I need to have any special permissions to use Backup Center?
-Backup Center as such doesn't need any new permissions. As long as you have the right level of Azure RBAC access for the resources you're managing, you can use Backup Center for these resources. For example, to view information about your backups, you'll need **Reader** access to your vaults. To configure backup and perform other backup-related actions, you'll need **Backup Contributor** or **Backup Operator** roles. Learn more about [Azure roles for Azure Backup](./backup-rbac-rs-vault.md).
+Backup Center as such doesn't need any new permissions. As long as you have the right level of Azure RBAC access for the resources you're managing, you can use Backup Center for these resources. For example, to view information about your backups, you'll need **Reader** access to your vaults. To configure backup and perform other backup-related actions, you'll need **Backup Contributor** or **Backup Operator** roles. Learn more about [Azure roles for Azure Backup](./backup-rbac-rs-vault.md).
+
+If you're using [Backup Reports](./configure-reports.md) under Backup Center, you'll also need access to the Log Analytics workspaces that your vaults send data to in order to view reports for those vaults.
## Pricing
backup https://docs.microsoft.com/en-us/azure/backup/manage-monitor-sql-database-backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-monitor-sql-database-backup.md
@@ -15,7 +15,7 @@ If you haven't yet configured backups for your SQL Server databases, see [Back u
Azure Backup shows all scheduled and on-demand operations under **Backup jobs** in the portal, except the scheduled log backups since they can be very frequent. The jobs you see in this portal include database discovery and registration, configure backup, and backup and restore operations.
-![The Backup jobs portal](./media/backup-azure-sql-database/jobs-list.png)
+![The Backup jobs portal](./media/backup-azure-sql-database/sql-backup-jobs-list.png)
For details on Monitoring scenarios, go to [Monitoring in the Azure portal](backup-azure-monitoring-built-in-monitor.md) and [Monitoring using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md).
@@ -31,13 +31,9 @@ To monitor database backup alerts:
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. On the vault dashboard, select **Alerts and Events**.
+2. On the vault dashboard, select **Backup Alerts**.
- ![Select Alerts and Events](./media/backup-azure-sql-database/vault-menu-alerts-events.png)
-
-3. In **Alerts and Events**, select **Backup Alerts**.
-
- ![Select Backup Alerts](./media/backup-azure-sql-database/backup-alerts-dashboard.png)
+ ![Select Backup Alerts](./media/backup-azure-sql-database/sql-backup-alerts-list.png)
## Stop protection for a SQL Server database
backup https://docs.microsoft.com/en-us/azure/backup/modify-vm-policy-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/modify-vm-policy-cli.md new file mode 100644
@@ -0,0 +1,110 @@
+---
+title: Update the existing VM backup policy using CLI
+description: Learn how to update the existing VM backup policy using Azure CLI.
+ms.topic: conceptual
+ms.date: 12/31/2020
+---
+# Update the existing VM backup policy using CLI
+
+You can use Azure CLI to update an existing VM backup policy. This article explains how to export the existing policy to a JSON file, modify the file, and then use Azure CLI to update the policy with the modified policy.
+
+## Modify an existing policy
+
+To modify an existing VM backup policy, follow these steps:
+
+1. Execute the [az backup policy show](https://docs.microsoft.com/cli/azure/backup/policy#az_backup_policy_show) command to retrieve the details of the policy you want to update.
+
+ Example:
+
+ ```azurecli
+ az backup policy show --name testing123 --resource-group rg1234 --vault-name testvault
+ ```
+
+ The example above shows the details for a VM policy with the name *testing123*.
+
+ Output:
+
+ ```json
+ {
+ "eTag": null,
+ "id": "/Subscriptions/efgsf-123-test-subscription/resourceGroups/rg1234/providers/Microsoft.RecoveryServices/vaults/testvault/backupPolicies/testing123",
+ "location": null,
+ "name": "testing123",
+ "properties": {
+ "backupManagementType": "AzureIaasVM",
+ "instantRpDetails": {
+ "azureBackupRgNamePrefix": null,
+ "azureBackupRgNameSuffix": null
+ },
+ "instantRpRetentionRangeInDays": 2,
+ "protectedItemsCount": 0,
+ "retentionPolicy": {
+ "dailySchedule": {
+ "retentionDuration": {
+ "count": 180,
+ "durationType": "Days"
+ },
+ "retentionTimes": [
+ "2020-08-03T04:30:00+00:00"
+ ]
+ },
+ "monthlySchedule": null,
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Weeks"
+ },
+ "retentionTimes": [
+ "2020-08-03T04:30:00+00:00"
+ ]
+ },
+ "yearlySchedule": null
+ },
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunDays": null,
+ "scheduleRunFrequency": "Daily",
+ "scheduleRunTimes": [
+ "2020-08-03T04:30:00+00:00"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "timeZone": "UTC"
+ },
+ "resourceGroup": "azurefiles",
+ "tags": null,
+ "type": "Microsoft.RecoveryServices/vaults/backupPolicies"
+ }
+ ```
+
+1. Save the above output in a .json file. For example, let's save it as *Policy.json*.
+1. Update the JSON file based on your requirements and save the changes.
+
+ Example:
+ To update the weekly retention to 60 days, update the following section of the JSON file by changing the count to 60.
+
+ ```json
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Weeks"
+ }
+
+ ```
+
+1. Save the changes.
+1. Execute the [az backup policy set](https://docs.microsoft.com/cli/azure/backup/policy#az_backup_policy_set) command and pass the complete path of the updated JSON file as the value for the **--policy** parameter.
+
+ ```azurecli
+ az backup policy set --resource-group rg1234 --vault-name testvault --policy C:\temp2\Policy.json --name testing123
+ ```
+
+>[!NOTE]
+>You can also retrieve the sample JSON policy by executing the [az backup policy get-default-for-vm](https://docs.microsoft.com/cli/azure/backup/policy#az_backup_policy_get_default_for_vm) command.
+
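For example, using the resource group and vault names from this article:

```azurecli
az backup policy get-default-for-vm --resource-group rg1234 --vault-name testvault
```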
+## Next steps
+
+- [Manage Azure VM backups with the Azure Backup service](backup-azure-manage-vms.md)
baremetal-infrastructure https://docs.microsoft.com/en-us/azure/baremetal-infrastructure/know-baremetal-terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/know-baremetal-terms.md
@@ -2,24 +2,20 @@
title: Know the terms of Azure BareMetal Infrastructure description: Know the terms of Azure BareMetal Infrastructure. ms.topic: conceptual
-ms.date: 12/31/2020
+ms.date: 1/4/2021
--- # Know the terms for BareMetal Infrastructure In this article, we'll cover some important BareMetal terms. -- **Revision**: There are two different stamp revisions for BareMetal Instance stamps. Each version differs in architecture and proximity to Azure virtual machine hosts:
- - **Revision 3** (Rev 3): is the original design.
- - **Revision 4** (Rev 4): is a new design that provides closer proximity to the Azure virtual machine (VM) hosts and lowers the latency between Azure VMs and BareMetal Instance units.
- - **Revision 4.2** (Rev 4.2): is the latest rebranded BareMetal Infrastructure that uses the existing Rev 4 architecture. You can access and manage your BareMetal instances through the Azure portal.
+- **Revision**: There's an original stamp revision known as Revision 3 (Rev 3), and two newer stamp revisions for BareMetal Instance stamps. Each revision differs in architecture and proximity to Azure virtual machine hosts:
+ - **Revision 4** (Rev 4): a newer design that provides closer proximity to the Azure virtual machine (VM) hosts and lowers the latency between Azure VMs and BareMetal Instance units.
+ - **Revision 4.2** (Rev 4.2): the latest rebranded BareMetal Infrastructure using the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts. It has significant improvements in network latency between Azure VMs and BareMetal instance units deployed in Rev 4 stamps or rows. You can access and manage your BareMetal instances through the Azure portal.
-- **Stamp**: Defines the Microsoft internal deployment size of BareMetal Instances. Before instance units can get deployed, a BareMetal Instance stamp consisting out of compute, network, and storage racks must be deployed in a datacenter location. Such a deployment is called a BareMetal Instance stamp or from Revision 4.2.
+- **Stamp**: Defines the Microsoft internal deployment size of BareMetal Instances. Before instance units can get deployed, a BareMetal Instance stamp consisting of compute, network, and storage racks must be deployed in a datacenter location. Such a deployment is called a BareMetal Instance stamp or, from Revision 4.2, a BareMetal Instance row.
- **Tenant**: A customer deployed in BareMetal Instance stamp gets isolated into a *tenant.* A tenant is isolated in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to the different tenants can't see each other or communicate with each other on the BareMetal Instance stamp level. A customer can choose to have deployments into different tenants. Even then, there's no communication between tenants on the BareMetal Instance stamp level. ## Next steps
-Learn how to identify and interact with BareMetal Instance units through the [Azure portal](workloads/sap/baremetal-infrastructure-portal.md).
--
-
\ No newline at end of file
+Learn more about the [BareMetal Infrastructure](workloads/sap/baremetal-overview-architecture.md) or how to [identify and interact with BareMetal Instance units](workloads/sap/baremetal-infrastructure-portal.md).
\ No newline at end of file
baremetal-infrastructure https://docs.microsoft.com/en-us/azure/baremetal-infrastructure/workloads/sap/baremetal-infrastructure-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/sap/baremetal-infrastructure-portal.md
@@ -2,7 +2,7 @@
title: BareMetal Instance units in Azure description: Learn how to identify and interact with BareMetal Instance units through the Azure portal. ms.topic: how-to
-ms.date: 12/31/2020
+ms.date: 1/4/2021
--- # Manage BareMetal Instances through the Azure portal
@@ -10,7 +10,9 @@ ms.date: 12/31/2020
This article shows how the [Azure portal](https://portal.azure.com/) displays [BareMetal Instances](baremetal-overview-architecture.md). This article also shows you the activities you can do in the Azure portal with your deployed BareMetal Instance units. ## Register the resource provider
-An Azure resource provider for BareMetal Instances provides visibility of the instances in the Azure portal, currently in public preview. By default, the Azure subscription you use for BareMetal Instance deployments registers the *BareMetalInfrastructure* resource provider. If you don't see your deployed BareMetal Instance units, you must register the resource provider with your subscription. There are two ways to register the BareMetal Instance resource provider:
+An Azure resource provider for BareMetal Instances provides visibility of the instances in the Azure portal, currently in public preview. By default, the Azure subscription you use for BareMetal Instance deployments registers the *BareMetalInfrastructure* resource provider. If you don't see your deployed BareMetal Instance units, you must register the resource provider with your subscription.
+
+There are two ways to register the BareMetal Instance resource provider:
* [Azure CLI](#azure-cli)
@@ -80,15 +82,15 @@ The attributes in the image don't look much different than the Azure virtual mac
On the right, you'll see the unit's name, operating system (OS), IP address, and SKU that shows the number of CPU threads and memory. You'll also see the power state and hardware version (revision of the BareMetal Instance stamp). The power state indicates if the hardware unit is powered on or off. The operating system details, however, don't indicate whether it's up and running. The possible hardware revisions are:
+
+* Revision 3 (Rev 3)
+
+* Revision 4 (Rev 4)
-* Revision 3
-
-* Revision 4
-
-* Revision 4.2
+* Revision 4.2 (Rev 4.2)
>[!NOTE]
->Revision 4.2 is the latest rebranded BareMetal Infrastructure using the Revision 4 architecture. It has significant improvements in network latency between Azure VMs and BareMetal instance units deployed in Revision 4 stamps or rows. For more information about the different revisions, see [BareMetal Infrastructure on Azure](baremetal-overview-architecture.md).
+>Rev 4.2 is the latest rebranded BareMetal Infrastructure, built on the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts and significantly improves network latency between Azure VMs and BareMetal Instance units deployed in Rev 4 stamps or rows. You can access and manage your BareMetal instances through the Azure portal. For more information, see [BareMetal Infrastructure on Azure](baremetal-overview-architecture.md).
Also, on the right side, you'll find the [Azure Proximity Placement Group's](../../../virtual-machines/linux/co-location.md) name, which is created automatically for each deployed BareMetal Instance unit. Reference the Proximity Placement Group when you deploy the Azure VMs that host the application layer. When you use the Proximity Placement Group associated with the BareMetal Instance unit, you ensure that the Azure VMs get deployed close to the BareMetal Instance unit.
baremetal-infrastructure https://docs.microsoft.com/en-us/azure/baremetal-infrastructure/workloads/sap/baremetal-overview-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/sap/baremetal-overview-architecture.md
@@ -1,9 +1,9 @@
---
title: Overview of BareMetal Infrastructure Preview in Azure
-description: Overview of how to deploy BareMetal Infrastructure in Azure.
+description: Overview of the BareMetal Infrastructure in Azure.
ms.custom: references_regions
ms.topic: conceptual
-ms.date: 12/31/2020
+ms.date: 01/04/2021
---

# What is BareMetal Infrastructure Preview on Azure?
@@ -20,7 +20,7 @@ BareMetal Infrastructure for specialized and general-purpose workloads is availa
- South Central US

>[!NOTE]
->**Rev 4.2** is the latest rebranded BareMetal Infrastructure that uses the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts and lowers the latency between Azure VMs and BareMetal Instance units. You can access and manage your BareMetal instances through the Azure portal.
+>**Rev 4.2** is the latest rebranded BareMetal Infrastructure, built on the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts and significantly improves network latency between Azure VMs and BareMetal Instance units deployed in Rev 4 stamps or rows. You can access and manage your BareMetal instances through the Azure portal.
## Support

BareMetal Infrastructure is ISO 27001, ISO 27017, SOC 1, and SOC 2 compliant. It also uses a bring-your-own-license (BYOL) model: OS, specialized workload, and third-party applications.
@@ -30,13 +30,13 @@ As soon as you receive root access and full control, you assume responsibility f
- Licensing, security, and support for OS and third-party software

Microsoft is responsible for:

-- Providing certified hardware for specialized workloads
+- Providing the hardware for specialized workloads
- Provisioning the OS

:::image type="content" source="media/baremetal-support-model.png" alt-text="BareMetal Infrastructure support model" border="false":::

## Compute
-BareMetal Infrastructure offers multiple SKUs certified for specialized workloads. Available SKUs available range from the smaller two-socket system to the 24-socket system. Use the workload-specific certified SKUs for your specialized workload.
+BareMetal Infrastructure offers multiple SKUs for specialized workloads. Available SKUs range from the smaller two-socket system to the 24-socket system. Use the workload-specific SKUs for your specialized workload.
The BareMetal instance stamp itself combines the following components:
@@ -67,13 +67,13 @@ The available Linux OS versions are:
- SLES 15 SP1

## Storage
-BareMetal instances based on specific SKU type come with predefined NFS storage based on specific workload type. When you provision BareMetal, you can provision additional storage based on your estimated growth by submitting a support request. All storage comes with an all-flash disk in Revision 4.2 with support for NFSv3 and NFSv4. The newer Revision 4.5 NVMe SSD will be available. For more information on storage sizing, see the [BareMetal workload type](../../../virtual-machines/workloads/sap/get-started.md) section.
+BareMetal instances based on a specific SKU type come with predefined NFS storage for the specific workload type. When you provision BareMetal, you can provision more storage based on your estimated growth by submitting a support request. All storage comes with an all-flash disk in Revision 4.2 with support for NFSv3 and NFSv4. NVMe SSD storage will be available in the newer Revision 4.5. For more information on storage sizing, see the [BareMetal workload type](../../../virtual-machines/workloads/sap/get-started.md) section.
>[!NOTE]
->The storage used for BareMetal meets FIPS 140-2 security requirements offering Encryption at Rest by default. The data is stored securely on the disks.
+>The storage used for BareMetal meets [Federal Information Processing Standard (FIPS) Publication 140-2](/microsoft-365/compliance/offering-fips-140-2) requirements, offering encryption at rest by default. The data is stored securely on the disks.
## Networking
-The architecture of Azure network services is a key component for a successful deployment of specialized workloads in BareMetal instances. It is likely that not all IT systems are located in Azure already. Azure offers you network technology to make Azure look like a virtual data center to your on-premises software deployments. The Azure network functionality required for BareMetal instances is:
+The architecture of Azure network services is a key component for a successful deployment of specialized workloads in BareMetal instances. It's likely that not all IT systems are located in Azure already. Azure offers you network technology to make Azure look like a virtual data center to your on-premises software deployments. The Azure network functionality required for BareMetal instances is:
- Azure virtual networks are connected to the ExpressRoute circuit that connects to your on-premises network assets.
- An ExpressRoute circuit that connects on-premises to Azure should have a minimum bandwidth of 1 Gbps.
@@ -86,10 +86,10 @@ BareMetal instances are provisioned within your Azure VNET server IP address ran
:::image type="content" source="media/baremetal-infrastructure-portal/baremetal-infrastructure-diagram.png" alt-text="Azure BareMetal Infrastructure diagram" lightbox="media/baremetal-infrastructure-portal/baremetal-infrastructure-diagram.png" border="false":::

The architecture shown is divided into three sections:

-- **Left:** Shows the customer on-premise infrastructure that runs different applications, connecting through the partner or local Edge router like Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../../../expressroute/expressroute-locations.md).
-- **Center:** Shows [ExpressRoute](../../../expressroute/expressroute-introduction.md) provisioned using your Azure subscription offering connectivity to Azure edge network.
-- **Right:** Shows Azure IaaS, and in this case use of VMs to host your applications, which are provisioned within your Azure virtual network.
-- **Bottom:** Shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../../../expressroute/about-fastpath.md) for BareMetal connectivity offering low latency.
+- **Left:** shows the customer on-premises infrastructure that runs different applications, connecting through the partner or local edge router like Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../../../expressroute/expressroute-locations.md).
+- **Center:** shows [ExpressRoute](../../../expressroute/expressroute-introduction.md) provisioned using your Azure subscription, offering connectivity to the Azure edge network.
+- **Right:** shows Azure IaaS, and in this case the use of VMs to host your applications, which are provisioned within your Azure virtual network.
+- **Bottom:** shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../../../expressroute/about-fastpath.md) for BareMetal connectivity, offering low latency.
>[!TIP]
>To support this, your ExpressRoute Gateway should be UltraPerformance. For more information, see [About ExpressRoute virtual network gateways](../../../expressroute/expressroute-about-virtual-network-gateways.md).
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-guestos-update-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-update-matrix.md
@@ -10,7 +10,7 @@ ms.service: cloud-services
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: tbd
-ms.date: 12/21/2020
+ms.date: 01/04/2021
ms.author: yohaddad
---

# Azure Guest OS releases and SDK compatibility matrix
@@ -181,7 +181,7 @@ The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
-| WA-GUEST-OS-5.49_202011-02 | December 19, 2020 | Post 5.51 |
+| WA-GUEST-OS-5.49_202011-01 | December 19, 2020 | Post 5.51 |
| WA-GUEST-OS-5.48_202010-02 | November 17, 2020 | Post 5.50 |
|~~WA-GUEST-OS-5.47_202009-01~~| October 10, 2020 | December 19, 2020 |
|~~WA-GUEST-OS-5.46_202008-02~~| September 5, 2020 | November 17, 2020 |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Anomaly-Detector/anomaly-detector-container-howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/anomaly-detector-container-howto.md
@@ -170,10 +170,6 @@ The Anomaly Detector containers send billing information to Azure, using an _Ano
For more information about these options, see [Configure containers](anomaly-detector-container-configuration.md).
-<!--blogs/samples/video coures -->
-
-[!INCLUDE [Discoverability of more container information](../../../includes/cognitive-services-containers-discoverability.md)]
-
## Summary

In this article, you learned concepts and workflow for downloading, installing, and running Anomaly Detector containers. In summary:
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/computer-vision-how-to-install-containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
@@ -385,10 +385,6 @@ The Cognitive Services containers send billing information to Azure, using the c
For more information about these options, see [Configure containers](./computer-vision-resource-container-config.md).
-<!--blogs/samples/video course -->
-
-[!INCLUDE [Discoverability of more container information](../../../includes/cognitive-services-containers-discoverability.md)]
-
## Summary

In this article, you learned concepts and workflow for downloading, installing, and running Computer Vision containers. In summary:
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Face/face-how-to-install-containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/face-how-to-install-containers.md
@@ -133,10 +133,6 @@ The Face service containers send billing information to Azure by using a Face re
For more information about these options, see [Configure containers](./face-resource-container-config.md).
-<!--blogs/samples/video coures -->
-
-[!INCLUDE [Discoverability of more container information](../../../includes/cognitive-services-containers-discoverability.md)]
-
## Summary

In this article, you learned concepts and workflow for how to download, install, and run Face service containers. In summary:
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-container-howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-container-howto.md
@@ -369,9 +369,6 @@ The LUIS container sends billing information to Azure, using a _Cognitive Servic
For more information about these options, see [Configure containers](luis-container-configuration.md).
-<!--blogs/samples/video courses -->
-[!INCLUDE [Discoverability of more container information](../../../includes/cognitive-services-containers-discoverability.md)]
-
## Summary

In this article, you learned concepts and workflow for downloading, installing, and running Language Understanding (LUIS) containers. In summary:
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/keyword-recognition/keyword-basics-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/keyword-recognition/keyword-basics-csharp.md
@@ -2,7 +2,7 @@
author: trevorbye
ms.service: cognitive-services
ms.topic: include
-ms.date: 11/03/2020
+ms.date: 01/04/2021
ms.author: trbye
---
@@ -25,4 +25,12 @@ KeywordRecognitionResult result = await keywordRecognizer.RecognizeOnceAsync(key
> [!NOTE]
> The example shown here uses local keyword recognition, since it does not require a `SpeechConfig`
-object for authentication context, and does not contact the back-end. However, you can run both keyword recognition and verification [utilizing a continuous back-end connection](../../../tutorial-voice-enable-your-bot-speech-sdk.md#view-the-source-code-that-enables-keyword).
\ No newline at end of file
+object for authentication context, and does not contact the back-end. However, you can run both keyword recognition and verification [utilizing a direct back-end connection](../../../tutorial-voice-enable-your-bot-speech-sdk.md#view-the-source-code-that-enables-keyword).
+
+### Continuous recognition
+
+Other classes in the Speech SDK support continuous recognition (for both speech and intent recognition) with keyword recognition. This allows you to use the same code you would normally use for continuous recognition, with the ability to reference a `.table` file for your keyword model.
+
+For speech-to-text, follow the same design pattern shown in the [quickstart](https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started-speech-to-text?tabs=script%2Cbrowser%2Cwindowsinstall&pivots=programming-language-csharp#continuous-recognition) to set up continuous recognition. Then, replace the call to `recognizer.StartContinuousRecognitionAsync()` with `recognizer.StartKeywordRecognitionAsync(KeywordRecognitionModel)`, and pass your `KeywordRecognitionModel` object. To stop continuous recognition with keyword spotting, use `recognizer.StopKeywordRecognitionAsync()` instead of `recognizer.StopContinuousRecognitionAsync()`.
+
+Intent recognition uses an identical pattern with the [`StartKeywordRecognitionAsync`](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.intent.intentrecognizer.startkeywordrecognitionasync?view=azure-dotnet#Microsoft_CognitiveServices_Speech_Intent_IntentRecognizer_StartKeywordRecognitionAsync_Microsoft_CognitiveServices_Speech_KeywordRecognitionModel_) and [`StopKeywordRecognitionAsync`](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.intent.intentrecognizer.stopkeywordrecognitionasync?view=azure-dotnet#Microsoft_CognitiveServices_Speech_Intent_IntentRecognizer_StopKeywordRecognitionAsync) functions.
\ No newline at end of file
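For reference, here's a minimal sketch of keyword-triggered continuous speech recognition in C# that puts the pattern above together. It assumes the Speech SDK (`Microsoft.CognitiveServices.Speech` NuGet package); the subscription key, region, and `.table` file path are placeholders, not values from the article:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class KeywordContinuousDemo
{
    static async Task Main()
    {
        // Placeholder key and region; substitute your own Speech resource values.
        var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
        using var audioInput = AudioConfig.FromDefaultMicrophoneInput();
        using var recognizer = new SpeechRecognizer(config, audioInput);

        // Load the keyword model from your .table file (placeholder path).
        var keywordModel = KeywordRecognitionModel.FromFile("keyword.table");

        recognizer.Recognized += (s, e) =>
        {
            if (e.Result.Reason == ResultReason.RecognizedKeyword)
                Console.WriteLine($"Keyword recognized: {e.Result.Text}");
            else if (e.Result.Reason == ResultReason.RecognizedSpeech)
                Console.WriteLine($"Recognized: {e.Result.Text}");
        };

        // Keyword-triggered continuous recognition instead of
        // StartContinuousRecognitionAsync().
        await recognizer.StartKeywordRecognitionAsync(keywordModel);
        Console.WriteLine("Listening for the keyword; press Enter to stop.");
        Console.ReadLine();
        await recognizer.StopKeywordRecognitionAsync();
    }
}
```

Swapping `SpeechRecognizer` for `IntentRecognizer` gives the intent-recognition variant described above, since both expose the same start/stop keyword methods.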
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/speech-container-howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto.md
@@ -745,10 +745,6 @@ The Speech containers send billing information to Azure, using a *Speech* resour
For more information about these options, see [Configure containers](speech-container-configuration.md).
-<!--blogs/samples/video courses -->
-
-[!INCLUDE [Discoverability of more container information](../../../includes/cognitive-services-containers-discoverability.md)]
-
## Summary

In this article, you learned concepts and workflow for downloading, installing, and running Speech containers. In summary:
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/language-support.md
@@ -250,7 +250,7 @@ View reference documentation for the [Dictionary Lookup](reference/v3-0-dictiona
| Norwegian | `nb` |
| Persian | `fa` |
| Polish | `pl` |
-| Portuguese (Brazil) | `pt-br` |
+| Portuguese (Brazil) | `pt` |
| Romanian | `ro` |
| Russian | `ru` |
| Serbian (Latin) | `sr-Latn` |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-container-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-container-support.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.custom: seodec18, cog-serv-seo-aug-2020
ms.service: cognitive-services
ms.topic: article
-ms.date: 10/22/2020
+ms.date: 12/16/2020
ms.author: aahi
keywords: on-premises, Docker, container, Kubernetes
#As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
@@ -16,29 +16,12 @@ keywords: on-premises, Docker, container, Kubernetes
# Azure Cognitive Services containers
-> [!WARNING]
-> On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States.
-
-Azure Cognitive Services provides several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security or other operational reasons.
-
-Container support is currently available for a subset of Azure Cognitive Services, including parts of:
-
-> [!div class="checklist"]
-> * [Anomaly Detector][ad-containers]
-> * [Read OCR (Optical Character Recognition) ][cv-containers]
-> * [Spatial analysis][spa-containers]
-> * [Face][fa-containers]
-> * [Form Recognizer][fr-containers]
-> * [Language Understanding (LUIS)][lu-containers]
-> * [Speech Service API][sp-containers]
-> * [Text Analytics][ta-containers]
+Azure Cognitive Services provides several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security, or other operational reasons. Container support is currently available for a subset of Azure Cognitive Services.
> [!VIDEO https://www.youtube.com/embed/hdfbn4Q8jbo]

Containerization is an approach to software distribution in which an application or service, including its dependencies & configuration, is packaged together as a container image. With little or no modification, a container image can be deployed on a container host. Containers are isolated from each other and the underlying operating system, with a smaller footprint than a virtual machine. Containers can be instantiated from container images for short-term tasks, and removed when no longer needed.
-Cognitive Services resources are available on [Microsoft Azure](https://azure.microsoft.com). Sign into the [Azure portal](https://portal.azure.com/) to create and explore Azure resources for these services.
## Features and benefits

- **Immutable infrastructure**: Enable DevOps teams to leverage a consistent and reliable set of known system parameters, while being able to adapt to change. Containers provide the flexibility to pivot within a predictable ecosystem and avoid configuration drift.
@@ -50,43 +33,62 @@ Cognitive Services resources are available on [Microsoft Azure](https://azure.mi
## Containers in Azure Cognitive Services
-Azure Cognitive Services containers provide the following set of Docker containers, each of which contains a subset of functionality from services in Azure Cognitive Services:
+Azure Cognitive Services containers provide the following set of Docker containers, each of which contains a subset of functionality from services in Azure Cognitive Services. You can find instructions and image locations in the tables below. A list of [container images](containers/container-image-tags.md) is also available.
+
+### Decision containers
+
+| Service | Container | Description | Availability |
+|--|--|--|--|
+| [Anomaly detector][ad-containers] | **Anomaly Detector** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-decision-anomaly-detector)) | The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data with machine learning. | Generally available |
+
+### Language containers
+
+| Service | Container | Description | Availability |
+|--|--|--|--|
+| [LUIS][lu-containers] | **LUIS** ([image](https://go.microsoft.com/fwlink/?linkid=2043204&clcid=0x409)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. | Generally available |
+| [Text Analytics][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://go.microsoft.com/fwlink/?linkid=2018757&clcid=0x409)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Preview |
+| [Text Analytics][ta-containers-language] | **Text Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the detection. | Preview |
+| [Text Analytics][ta-containers-sentiment] | **Sentiment Analysis v3** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available |
+| [Text Analytics][ta-containers-health] | **Text Analytics for health** | Extract and label medical information from unstructured clinical text. | Gated preview. [Request access][request-access]. |
+
+### Speech containers
+
+> [!NOTE]
+> To use Speech containers, you will need to complete an [online request form](https://aka.ms/csgate).
+
+| Service | Container | Description | Availability |
+|--|--|--|--|
+| [Speech Service API][sp-containers-stt] | **Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-speech-to-text)) | Transcribes continuous real-time speech into text. | Generally available |
+| [Speech Service API][sp-containers-cstt] | **Custom Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text using a custom model. | Generally available |
+| [Speech Service API][sp-containers-tts] | **Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-text-to-speech)) | Converts text to natural-sounding speech. | Generally available |
+| [Speech Service API][sp-containers-ctts] | **Custom Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-text-to-speech)) | Converts text to natural-sounding speech using a custom model. | Gated preview |
+| [Speech Service API][sp-containers-ntts] | **Neural Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available |
+| [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-language-detection)) | Determines the language of spoken audio. | Gated preview |
+
+### Vision containers
+
+> [!WARNING]
+> On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States.
-| Service | Supported Pricing Tier | Container | Description |
+| Service | Container | Description | Availability |
|--|--|--|--|
-| [Anomaly detector][ad-containers] | F0, S0 | **Anomaly-Detector** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-decision-anomaly-detector)) | The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data with machine learning.<br>[Request access][request-access] |
-| [Computer Vision][cv-containers] | F0, S1 | **Read** OCR ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/concept-recognizing-text.md).<br>[Request access][request-access] |
-| [Face][fa-containers] | F0, S0 | **Face** | Detects human faces in images, and identifies attributes, including face landmarks (such as noses and eyes), gender, age, and other machine-predicted facial features. In addition to detection, Face can check if two faces in the same image or different images are the same by using a confidence score, or compare faces against a database to see if a similar-looking or identical face already exists. It can also organize similar faces into groups, using shared visual traits. |
-| [Form recognizer][fr-containers] | F0, S0 | **Form Recognizer** | Form Understanding applies machine learning technology to identify and extract key-value pairs and tables from forms. |
-| [LUIS][lu-containers] | F0, S0 | **LUIS** ([image](https://go.microsoft.com/fwlink/?linkid=2043204&clcid=0x409)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. |
-| [Speech Service API][sp-containers-stt] | F0, S0 | **Speech-to-text** ([image](https://hub.docker.com/_/azure-cognitive-services-speechservices-speech-to-text)) | Transcribes continuous real-time speech into text. |
-| [Speech Service API][sp-containers-cstt] | F0, S0 | **Custom Speech-to-text** ([image](https://hub.docker.com/_/azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text using a custom model. |
-| [Speech Service API][sp-containers-tts] | F0, S0 | **Text-to-speech** ([image](https://hub.docker.com/_/azure-cognitive-services-speechservices-text-to-speech)) | Converts text to natural-sounding speech. |
-| [Speech Service API][sp-containers-ctts] | F0, S0 | **Custom Text-to-speech** ([image](https://hub.docker.com/_/azure-cognitive-services-speechservices-custom-text-to-speech)) | Converts text to natural-sounding speech using a custom model. |
-| [Speech Service API][sp-containers-ntts] | F0, S0 | **Neural Text-to-speech** ([image](https://hub.docker.com/_/azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. |
-| [Text Analytics][ta-containers-keyphrase] | F0, S | **Key Phrase Extraction** ([image](https://go.microsoft.com/fwlink/?linkid=2018757&clcid=0x409)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". |
-| [Text Analytics][ta-containers-language] | F0, S | **Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and report a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. |
-| [Text Analytics][ta-containers-sentiment] | F0, S | **Sentiment Analysis v3** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. |
-| [Text Analytics][ta-containers-health] | F0, S | **Text Analytics for health** | Extract and label medical information from unstructured clinical text. |
-| [Spatial Analysis][spa-containers] | S0 | **Spatial analysis** | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. |
+| [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/concept-recognizing-text.md). | Gated preview. [Request access][request-access]. |
+| [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-spatial-analysis)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Gated preview. [Request access][request-access]. |
+| [Face][fa-containers] | **Face** | Detects human faces in images, and identifies attributes, including face landmarks (such as noses and eyes), gender, age, and other machine-predicted facial features. In addition to detection, Face can check if two faces in the same image or different images are the same by using a confidence score, or compare faces against a database to see if a similar-looking or identical face already exists. It can also organize similar faces into groups, using shared visual traits. | Unavailable |
+| [Form recognizer][fr-containers] | **Form Recognizer** | Form Understanding applies machine learning technology to identify and extract key-value pairs and tables from forms. | Unavailable |
+ <!-- |[Personalizer](./personalizer/what-is-personalizer.md) |F0, S0|**Personalizer** ([image](https://go.microsoft.com/fwlink/?linkid=2083928&clcid=0x409))|Azure Personalizer is a cloud-based API service that allows you to choose the best experience to show to your users, learning from their real-time behavior.| -->
-In addition, some containers are supported in Cognitive Services [**All-In-One offering**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource keys. You can create one single Cognitive Services All-In-One resource and use the same billing key across supported services for the following services:
+Additionally, some containers are supported in the Cognitive Services [multi-service resource](cognitive-services-apis-create-account.md) offering. You can create a single Cognitive Services All-In-One resource and use the same billing key across the following supported services:
* Computer Vision
* Face
* LUIS
* Text Analytics
-## Container availability in Azure Cognitive Services
-
-Azure Cognitive Services containers are publicly available through your Azure subscription, and Docker container images can be pulled from either the Microsoft Container Registry or Docker Hub. You can use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from the appropriate registry.
-
-[!INCLUDE [Container repositories and images](containers/includes/cognitive-services-container-images.md)]
-
## Prerequisites

You must satisfy the following prerequisites before using Azure Cognitive Services containers:
@@ -103,7 +105,9 @@ Individual containers can have their own requirements, as well, including server
[!INCLUDE [Cognitive Services container security](containers/includes/cognitive-services-container-security.md)]
-[!INCLUDE [Discoverability of more container information](../../includes/cognitive-services-containers-discoverability.md)]
+## Developer samples
+
+Developer samples are available at our [GitHub repository](https://github.com/Azure-Samples/cognitive-services-containers-samples).
## Next steps
@@ -129,6 +133,7 @@ Install and explore the functionality provided by containers in Azure Cognitive
[lu-containers]: luis/luis-container-howto.md
[sp-containers]: speech-service/speech-container-howto.md
[spa-containers]: ./computer-vision/spatial-analysis-container.md
+[sp-containers-lid]: speech-service/speech-container-howto.md?tabs=lid
[sp-containers-stt]: speech-service/speech-container-howto.md?tabs=stt
[sp-containers-cstt]: speech-service/speech-container-howto.md?tabs=cstt
[sp-containers-tts]: speech-service/speech-container-howto.md?tabs=tts
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/containers/container-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/container-faq.md
@@ -17,7 +17,7 @@ ms.author: aahi
**Q: What is available?**
-**A:** Azure Cognitive Services containers allow developers to use the same intelligent APIs that are available in Azure, but with the [benefits](../cognitive-services-container-support.md#features-and-benefits) of containerization. Some containers are available as a gated preview which may require an application to access. Other containers are publicly available as an ungated preview, or are generally available. You can find a full list of containers and their availability in the [Container support in Azure Cognitive Services](../cognitive-services-container-support.md#container-availability-in-azure-cognitive-services) article. You can also view the containers in the [Docker Hub](https://hub.docker.com/_/microsoft-azure-cognitive-services).
+**A:** Azure Cognitive Services containers allow developers to use the same intelligent APIs that are available in Azure, but with the [benefits](../cognitive-services-container-support.md#features-and-benefits) of containerization. Some containers are available as a gated preview, which may require an application for access. Other containers are publicly available as an ungated preview, or are generally available. You can find a full list of containers and their availability in the [Container support in Azure Cognitive Services](../cognitive-services-container-support.md) article. You can also view the containers in the [Docker Hub](https://hub.docker.com/_/microsoft-azure-cognitive-services).
**Q: Is there any difference between the Cognitive Services cloud and the containers?**
@@ -159,7 +159,7 @@ Explore the following tags for potential questions and answers that align with y
**Q: How do I discover the containers?**
-**A:** Cognitive Services containers are available in various locations, such as the Azure portal, Docker hub, and Azure container registries. For the most recent container locations, refer to [container repositories and images](../cognitive-services-container-support.md#container-repositories-and-images).
+**A:** Cognitive Services containers are available in various locations, such as the Azure portal, Docker hub, and Azure container registries. For the most recent container locations, refer to [container images](container-image-tags.md).
**Q: How do Cognitive Services containers compare to AWS and Google offerings?**
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/containers/includes/cognitive-services-container-images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/includes/cognitive-services-container-images.md deleted file mode 100644
@@ -1,91 +0,0 @@
-title: Container repositories and images
-services: cognitive-services
-author: aahill
-manager: nitinme
-description: Two tables representing the container registries, repositories and image names for all Cognitive Service offerings.
-ms.service: cognitive-services
-ms.topic: include
-ms.date: 09/03/2020
-ms.author: aahi
-
-### Container repositories and images
-
-The tables below are a listing of the available container images offered by Azure Cognitive Services. For a complete list of all the available container image names and their available tags, see [Cognitive Services container image tags](../container-image-tags.md).
-
-#### Generally available
-
-The Microsoft Container Registry (MCR) syndicates all of the generally available containers for Cognitive Services. The containers are also available directly from the [Docker hub](https://hub.docker.com/_/microsoft-azure-cognitive-services).
-
-**LUIS**
-
-| Container | Container Registry / Repository / Image Name |
-|--|--|
-| LUIS | `mcr.microsoft.com/azure-cognitive-services/language/luis` |
-
-See [How to run and install LUIS containers](../../LUIS/luis-container-howto.md) for more information.
-
-**Text Analytics**
-
-| Container | Container Registry / Repository / Image Name |
-|--|--|
-| Sentiment Analysis v3 (English) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-en` |
-| Sentiment Analysis v3 (Spanish) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-es` |
-| Sentiment Analysis v3 (French) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-fr` |
-| Sentiment Analysis v3 (Italian) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-it` |
-| Sentiment Analysis v3 (German) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-de` |
-| Sentiment Analysis v3 (Chinese - simplified) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-zh` |
-| Sentiment Analysis v3 (Chinese - traditional) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-zht` |
-| Sentiment Analysis v3 (Japanese) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-ja` |
-| Sentiment Analysis v3 (Portuguese) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-pt` |
-| Sentiment Analysis v3 (Dutch) | `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-nl` |
-
-See [How to run and install Text Analytics containers](../../text-analytics/how-tos/text-analytics-how-to-install-containers.md) for more information.
-
-**Anomaly Detector**
-
-| Container | Container Registry / Repository / Image Name |
-|--|--|
-| Anomaly detector | `mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector` |
-
-See [How to run and install Anomaly detector containers](../../anomaly-detector/anomaly-detector-container-howto.md) for more information.
-
-**Speech Service**
-
-> [!NOTE]
-> To use Speech containers, you will need to complete an [online request form](https://aka.ms/csgate).
-
-| Container | Container Registry / Repository / Image Name |
-|--|--|
-| [Speech-to-text](../../speech-service/speech-container-howto.md?tab=stt) | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text` |
-| [Custom Speech-to-text](../../speech-service/speech-container-howto.md?tab=cstt) | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text` |
-| [Text-to-speech](../../speech-service/speech-container-howto.md?tab=tts) | `mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech` |
-
-#### "Ungated" preview
-
-The following preview containers are available publicly. The Microsoft Container Registry (MCR) syndicates all of the publicly available ungated containers for Cognitive Services. The containers are also available directly from the [Docker hub](https://hub.docker.com/_/microsoft-azure-cognitive-services).
-
-| Service | Container | Container Registry / Repository / Image Name |
-|--|--|--|
-| [Text Analytics](../../text-analytics/how-tos/text-analytics-how-to-install-containers.md) | Key Phrase Extraction | `mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase` |
-| [Text Analytics](../../text-analytics/how-tos/text-analytics-how-to-install-containers.md) | Language Detection | `mcr.microsoft.com/azure-cognitive-services/textanalytics/language` |
--
-#### "Gated" preview
-
-Previously, gated preview containers were hosted on the `containerpreview.azurecr.io` repository. Starting September 22nd 2020, these containers (except Text Analytics for health) are hosted on the Microsoft Container Registry (MCR), and downloading them doesn't require using the docker login command. To use the container you will need to:
-
-1. Complete a [request form](https://aka.ms/csgate) with your Azure Subscription ID and user scenario.
-2. Upon approval, download the container from the MCR.
-3. Use the key and endpoint from an appropriate Azure resource to authenticate the container at runtime.
-
-| Service | Container | Container Registry / Repository / Image Name |
-|--|--|--|
-| [Computer Vision](../../Computer-vision/computer-vision-how-to-install-containers.md) | Read v2.0 | `mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview` |
-| [Computer Vision](../../Computer-vision/computer-vision-how-to-install-containers.md) | Read v3.1 | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.1-preview` |
-| [Computer Vision](../../computer-vision/spatial-analysis-container.md) | Spatial Analysis | `mcr.microsoft.com/azure-cognitive-services/vision/spatial-analysis` |
-| [Speech Service API](../../speech-service/speech-container-howto.md?tab=ctts) | Custom Text-to-speech | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-text-to-speech` |
-| [Speech Service API](../../speech-service/speech-container-howto.md?tab=lid) | Language Detection | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection` |
-| [Speech Service API](../../speech-service/speech-container-howto.md?tab=ntts) | Neural Text-to-speech | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech` |
-| [Text Analytics for health](../../text-analytics/how-tos/text-analytics-how-to-install-containers.md?tabs=health) | Text Analytics for health | `containerpreview.azurecr.io/microsoft/cognitive-services-healthcare` |
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/containers/includes/cognitive-services-container-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/includes/cognitive-services-container-security.md
@@ -7,7 +7,7 @@ author: aahill
manager: nitinme
ms.service: cognitive-services
ms.topic: include
-ms.date: 11/11/2020
+ms.date: 12/17/2020
ms.author: aahi
---
@@ -39,7 +39,7 @@ The host should allow list **port 443** and the following domains:
#### Disable deep packet inspection
-> [Deep packet inspection](https://en.wikipedia.org/wiki/Deep_packet_inspection) (DPI) is a type of data processing that inspects in detail the data being sent over a computer network, and usually takes action by blocking, re-routing, or logging it accordingly.
+[Deep packet inspection](https://en.wikipedia.org/wiki/Deep_packet_inspection) (DPI) is a type of data processing that inspects in detail the data being sent over a computer network, and usually takes action by blocking, re-routing, or logging it accordingly.
Disable DPI on the secure channels that the Cognitive Services containers create to Microsoft servers. Failure to do so will prevent the container from functioning correctly.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/containers/includes/create-container-instances-resource-from-azure-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/includes/create-container-instances-resource-from-azure-cli.md
@@ -75,6 +75,6 @@ The output of the command is `Running...` if valid, after sometime the output ch
[azure-container-create]: /cli/azure/container#az-container-create
[template-format]: /azure/templates/Microsoft.ContainerInstance/2018-10-01/containerGroups#template-format
[aci-yaml-ref]: ../../../container-instances/container-instances-reference-yaml.md
-[repositories-and-images]: ../../cognitive-services-container-support.md#container-repositories-and-images
+[repositories-and-images]: ../container-image-tags.md
[location-to-resource]: ../../../container-instances/container-instances-region-availability.md
[secure-values]: ../../../container-instances/container-instances-environment-variables.md#secure-values
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/containers/includes/create-container-instances-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/includes/create-container-instances-resource.md
@@ -23,8 +23,8 @@ ms.author: aahi
|Resource group|Select the available resource group or create a new one such as `cognitive-services`.|
|Container name|Enter a name such as `cognitive-container-instance`. The name must be in lowercase.|
|Location|Select a region for deployment.|
- |Image type|If your container image is stored in a container registry that doesn't require credentials, choose `Public`. If accessing your container image requires credentials, choose `Private`. Refer to [container repositories and images](../../cognitive-services-container-support.md#container-repositories-and-images) for details on whether or not the container image is `Public` or `Private` ("Public Preview"). |
- |Image name|Enter the Cognitive Services container location. The location is what's used as an argument to the `docker pull` command. Refer to the [container repositories and images](../../cognitive-services-container-support.md#container-repositories-and-images) for the available image names and their corresponding repository.<br><br>The image name must be fully qualified specifying three parts. First, the container registry, then the repository, finally the image name: `<container-registry>/<repository>/<image-name>`.<br><br>Here is an example, `mcr.microsoft.com/azure-cognitive-services/keyphrase` would represent the Key Phrase Extraction image in the Microsoft Container Registry under the Azure Cognitive Services repository. Another example is, `containerpreview.azurecr.io/microsoft/cognitive-services-speech-to-text` which would represent the Speech to Text image in the Microsoft repository of the Container Preview container registry. |
+ |Image type|If your container image is stored in a container registry that doesn't require credentials, choose `Public`. If accessing your container image requires credentials, choose `Private`. Refer to [container repositories and images](../container-image-tags.md) for details on whether or not the container image is `Public` or `Private` ("Public Preview"). |
+ |Image name|Enter the Cognitive Services container location. The location is what's used as an argument to the `docker pull` command. Refer to the [container repositories and images](../container-image-tags.md) for the available image names and their corresponding repository.<br><br>The image name must be fully qualified, specifying three parts: first the container registry, then the repository, and finally the image name: `<container-registry>/<repository>/<image-name>`.<br><br>For example, `mcr.microsoft.com/azure-cognitive-services/keyphrase` represents the Key Phrase Extraction image in the Microsoft Container Registry under the Azure Cognitive Services repository. Another example is `containerpreview.azurecr.io/microsoft/cognitive-services-speech-to-text`, which represents the Speech to Text image in the Microsoft repository of the Container Preview container registry. |
|OS type|`Linux`|
|Size|Change size to the suggested recommendations for your specific Cognitive Service container:<br>2 CPU cores<br>4 GB
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/form-recognizer-container-howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/form-recognizer-container-howto.md
@@ -311,10 +311,6 @@ The Form Recognizer containers send billing information to Azure by using a _For
For more information about these options, see [Configure containers](form-recognizer-container-configuration.md).
-<!--blogs/samples/video courses -->
-
-[!INCLUDE [Discoverability of more container information](../../../includes/cognitive-services-containers-discoverability.md)]
-
## Summary

In this article, you learned concepts and workflow for downloading, installing, and running Form Recognizer containers. In summary:
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md
@@ -150,10 +150,6 @@ The Text Analytics containers send billing information to Azure, using a _Text A
For more information about these options, see [Configure containers](../text-analytics-resource-container-config.md).
-<!--blogs/samples/video course -->
-
-[!INCLUDE [Discoverability of more container information](../../../../includes/cognitive-services-containers-discoverability.md)]
-
## Summary

In this article, you learned concepts and workflow for downloading, installing, and running Text Analytics containers. In summary:
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-virtual-network-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-virtual-network-concepts.md
@@ -36,6 +36,7 @@ Container groups deployed into an Azure virtual network enable scenarios like:
* You can't use a [managed identity](container-instances-managed-identity.md) in a container group deployed to a virtual network.
* You can't enable a [liveness probe](container-instances-liveness-probe.md) or [readiness probe](container-instances-readiness-probe.md) in a container group deployed to a virtual network.
* Due to the additional networking resources involved, deployments to a virtual network are typically slower than deploying a standard container instance.
+* If you are connecting your container group to an Azure Storage Account, you must add a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to that resource.
[!INCLUDE [container-instances-restart-ip](../../includes/container-instances-restart-ip.md)]
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-volume-azure-files.md
@@ -16,6 +16,9 @@ By default, Azure Container Instances are stateless. If the container crashes or
> Mounting an Azure Files share to a container instance is similar to a Docker [bind mount](https://docs.docker.com/storage/bind-mounts/). Be aware that if you mount a share into a container directory in which files or directories exist, these files or directories are obscured by the mount and are not accessible while the container runs.
>
+> [!IMPORTANT]
+> If you are deploying container groups into an Azure Virtual Network, you must add a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to your Azure Storage Account.
+
## Create an Azure file share

Before using an Azure file share with Azure Container Instances, you must create it. Run the following script to create a storage account to host the file share, and the share itself. The storage account name must be globally unique, so the script adds a random value to the base string.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed-pull-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed-pull-model.md
@@ -7,7 +7,7 @@ ms.service: cosmos-db
ms.subservice: cosmosdb-sql
ms.devlang: dotnet
ms.topic: conceptual
-ms.date: 12/04/2020
+ms.date: 01/04/2021
ms.reviewer: sngun
---
@@ -110,7 +110,8 @@ while (iteratorForThePartitionKey.HasMoreResults)
Console.WriteLine($"Detected change for user with id {user.id}"); } }
- catch {
+ catch (CosmosException exception) when (exception.StatusCode == System.Net.HttpStatusCode.NotModified)
+ {
Console.WriteLine($"No new changes"); Thread.Sleep(5000); }
@@ -152,7 +153,8 @@ while (iteratorA.HasMoreResults)
Console.WriteLine($"Detected change for user with id {user.id}"); } }
- catch {
+ catch (CosmosException exception) when (exception.StatusCode == System.Net.HttpStatusCode.NotModified)
+ {
Console.WriteLine($"No new changes"); Thread.Sleep(5000); }
@@ -173,7 +175,8 @@ while (iteratorB.HasMoreResults)
Console.WriteLine($"Detected change for user with id {user.id}"); } }
- catch {
+ catch (CosmosException exception) when (exception.StatusCode == System.Net.HttpStatusCode.NotModified)
+ {
Console.WriteLine($"No new changes"); Thread.Sleep(5000); }
@@ -200,7 +203,8 @@ while (iterator.HasMoreResults)
Console.WriteLine($"Detected change for user with id {user.id}"); } }
- catch {
+ catch (CosmosException exception) when (exception.StatusCode == System.Net.HttpStatusCode.NotModified)
+ {
Console.WriteLine($"No new changes"); Thread.Sleep(5000); }
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-introduction.md
@@ -18,7 +18,7 @@ ms.author: sivethe
## Wire protocol compatibility
-Azure Cosmos DB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with native MongoDB client SDKs, drivers, and tools. Azure Cosmos DB does host the MongoDB database engine. The details of the supported features by MongoDB can be found here:
+Azure Cosmos DB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with native MongoDB client SDKs, drivers, and tools. Azure Cosmos DB does not host the MongoDB database engine. The details of the supported features by MongoDB can be found here:
- [Azure Cosmos DB's API for Mongo DB engine version 3.6](mongodb-feature-support-36.md) - [Azure Cosmos DB's API for Mongo DB engine version 3.2](mongodb-feature-support.md)
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/quick-acm-cost-analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
@@ -3,7 +3,7 @@ title: Quickstart - Explore Azure costs with cost analysis
description: This quickstart helps you use cost analysis to explore and analyze your Azure organizational costs. author: bandersmsft ms.author: banders
-ms.date: 11/20/2020
+ms.date: 01/04/2021
ms.topic: quickstart ms.service: cost-management-billing ms.subservice: cost-management
@@ -66,6 +66,8 @@ Cost forecast shows a projection of your estimated costs for the selected time p
The model uses a maximum of six months of training data to project costs for a year. At a minimum, it needs seven days of training data to change its prediction. The prediction is based on dramatic changes, such as spikes and dips, in cost and usage patterns. Forecast doesn't generate individual projections for each item in **Group by** properties. It only provides a forecast for total accumulated costs. If you use multiple currencies, the model provides forecast for costs only in USD.
+Because of the model's reliance on data dips and spikes, large purchases like reserved instances can cause your forecast to become artificially inflated. The forecast time period and the size of the purchase determine how long the inflation lasts. The forecast returns to normal when spending stabilizes.
+ ## Customize cost views Cost analysis has four built-in views, optimized for the most common goals:
@@ -82,7 +84,7 @@ Invoice details | What charges did I have on my last invoice?
However, there are many cases where you need deeper analysis. Customization starts at the top of the page, with the date selection.
-Cost analysis shows data for the current month by default. Use the date selector to switch to common date ranges quickly. Examples include the last seven days, the last month, the current year, or a custom date range. Pay-as-you-go subscriptions also include date ranges based on your billing period, which isn't bound to the calendar month, like the current billing period or last invoice. Use the **<PREVIOUS** and **NEXT>** links at the top of the menu to jump to the previous or next period, respectively. For example, **<PREVIOUS** will switch from the **Last 7 days** to **8-14 days ago** or **15-21 days ago**. When selecting a custom date range, keep in mind that you can select up to a full year (e.g. January 1-December 31).
+Cost analysis shows data for the current month by default. Use the date selector to switch to common date ranges quickly. Examples include the last seven days, the last month, the current year, or a custom date range. Pay-as-you-go subscriptions also include date ranges based on your billing period, which isn't bound to the calendar month, like the current billing period or last invoice. Use the **<PREVIOUS** and **NEXT>** links at the top of the menu to jump to the previous or next period, respectively. For example, **<PREVIOUS** will switch from the **Last 7 days** to **8-14 days ago** or **15-21 days ago**. When selecting a custom date range, keep in mind that you can select up to a full year (for example, January 1-December 31).
![Date selector showing an example selection for this month](./media/quick-acm-cost-analysis/date-selector.png)
@@ -113,13 +115,13 @@ Here's a view of Azure service costs for the current month.
![Grouped daily accumulated view showing example Azure service costs for last month](./media/quick-acm-cost-analysis/grouped-daily-accum-view.png)
-By default, cost analysis shows all usage and purchase costs as they are accrued and will show on your invoice, also known as **Actual cost**. Viewing actual cost is ideal for reconciling your invoice. However, purchase spikes in cost can be alarming when you're keeping an eye out for spending anomalies and other changes in cost. To flatten out spikes caused by reservation purchase costs, switch to **Amortized cost**.
+By default, cost analysis shows all usage and purchase costs as they're accrued and will show on your invoice, also known as **Actual cost**. Viewing actual cost is ideal for reconciling your invoice. However, purchase spikes in cost can be alarming when you're keeping an eye out for spending anomalies and other changes in cost. To flatten out spikes caused by reservation purchase costs, switch to **Amortized cost**.
![Change between actual and amortized cost to see reservation purchases spread across the term and allocated to the resources that used the reservation](./media/quick-acm-cost-analysis/metric-picker.png)
-Amortized cost breaks down reservation purchases into daily chunks and spreads them over the duration of the reservation term. For example, instead of seeing a $365 purchase on January 1, you'll see a $1.00 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated by using the specific resources that used the reservation. For example, if that $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of `UnusedReservation`. Note that unused reservation costs can be seen only when viewing amortized cost.
+Amortized cost breaks down reservation purchases into daily chunks and spreads them over the duration of the reservation term. For example, instead of seeing a $365 purchase on January 1, you'll see a $1.00 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated by using the specific resources that used the reservation. For example, if that $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of `UnusedReservation`. Unused reservation costs can be seen only when viewing amortized cost.
-Due to the change in how costs are represented, it's important to note that actual cost and amortized cost views will show different total numbers. In general, the total cost of months with a reservation purchase will decrease when viewing amortized costs, and months following a reservation purchase will increase. Amortization is available only for reservation purchases and doesn't apply to Azure Marketplace purchases at this time.
+Because of the change in how costs are represented, it's important to note that actual cost and amortized cost views will show different total numbers. In general, the total cost of months with a reservation purchase will decrease when viewing amortized costs, and months following a reservation purchase will increase. Amortization is available only for reservation purchases and doesn't apply to Azure Marketplace purchases at this time.
The following image shows resource group names. You can group by tag to view total costs per tag or use the **Cost by resource** view to see all tags for a particular resource.
@@ -145,17 +147,17 @@ Watch the video [Sharing and saving views in Azure Cost Management](https://www.
>[!VIDEO https://www.youtube.com/embed/kQkXXj-SmvQ]
-To pin cost analysis, select the pin icon in the upper-right corner or just after the "<Subscription Name> | Cost analysis". Pinning cost analysis will save only the main chart or table view. Share the dashboard to give others access to the tile. Note that this shares only the dashboard configuration and doesn't grant others access to the underlying data. If you don't have access to costs but do have access to a shared dashboard, you'll see an "access denied" message.
+To pin cost analysis, select the pin icon in the upper-right corner or just after the "<Subscription Name> | Cost analysis". Pinning cost analysis will save only the main chart or table view. Share the dashboard to give others access to the tile. Sharing only shares the dashboard configuration and doesn't grant others access to the underlying data. If you don't have access to costs but do have access to a shared dashboard, you'll see an "access denied" message.
-To share a link to cost analysis, select **Share** at the top of the blade. A custom URL will show, which opens this specific view for this specific scope. If you don't have cost access and get this URL, you'll see an "access denied" message.
+To share a link to cost analysis, select **Share** at the top of the window. A custom URL will show, which opens this specific view for this specific scope. If you don't have cost access and get this URL, you'll see an "access denied" message.
## Download usage data ### [Portal](#tab/azure-portal)
-There are times when you need to download the data for further analysis, merge it with your own data, or integrate it into your own systems. Cost Management offers a few different options. As a starting point, if you need an ad hoc high-level summary, like what you get within cost analysis, build the view you need. Then download it by selecting **Export** and selecting **Download data to CSV** or **Download data to Excel**. The Excel download provides additional context on the view you used to generate the download, like scope, query configuration, total, and date generated.
+There are times when you need to download the data for further analysis, merge it with your own data, or integrate it into your own systems. Cost Management offers a few different options. As a starting point, if you need a quick high-level summary, like what you get within cost analysis, build the view you need. Then download it by selecting **Export** and selecting **Download data to CSV** or **Download data to Excel**. The Excel download provides more context on the view you used to generate the download, like scope, query configuration, total, and date generated.
-If you need the full, unaggregated dataset, download it from the billing account. Then, from the list of services in the portal's left navigation pane, go to **Cost Management + Billing**. Select your billing account, if applicable. Go to **Usage + charges**, and then select the **Download** icon for the desired billing period.
+If you need the full, unaggregated dataset, download it from the billing account. Then, from the list of services in the portal's left navigation pane, go to **Cost Management + Billing**. Select your billing account, if applicable. Go to **Usage + charges**, and then select the **Download** icon for a billing period.
### [Azure CLI](#tab/azure-cli)
@@ -206,7 +208,7 @@ You also have the option of using the [az costmanagement export](/cli/azure/ext/
## Clean up resources -- If you pinned a customized view for cost analysis and you no longer need it, go to the dashboard where you pinned it and and delete the pinned view.
+- If you pinned a customized view for cost analysis and you no longer need it, go to the dashboard where you pinned it and delete the pinned view.
- If you downloaded usage data files and you no longer need them, be sure to delete them. ## Next steps
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/understand-usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/understand-usage.md
@@ -7,7 +7,7 @@ tags: billing
ms.service: cost-management-billing ms.subservice: billing ms.topic: conceptual
-ms.date: 08/20/2020
+ms.date: 01/04/2021
ms.author: banders ---
@@ -122,12 +122,24 @@ UsageDate | Date
UsageEnd | Date UsageStart | Date - ## Ensure charges are correct
-To learn more about detailed usage and charges, read about how to understand your
-[pay-as-you-go](review-individual-bill.md)
-or [Microsoft Customer Agreement](review-customer-agreement-bill.md) invoice.
+To learn more about detailed usage and charges, read about how to understand your [pay-as-you-go](review-individual-bill.md) or [Microsoft Customer Agreement](review-customer-agreement-bill.md) invoice.
+
+## Unexpected usage or charges
+
+If you have usage or charges that you don't recognize, there are several things you can do to help understand why:
+
+- Review the invoice that has charges for the resource
+- Review your invoiced charges in Cost analysis
+- Find people responsible for the resource and engage with them
+- Analyze the audit logs
+- Analyze user permissions to the resource's parent scope
+- Create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to help identify the charges
+
+For more information, see [Analyze unexpected charges](analyze-unexpected-charges.md).
+
+Azure doesn't log most user actions. Instead, Microsoft logs resource usage for billing. If you notice a past usage spike and you didn't have logging enabled, Microsoft can't pinpoint the cause. Enable logging for the service where you see increased usage so that the appropriate technical team can assist you with the issue.
## Need help? Contact us.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-data-flow-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-performance.md
@@ -308,7 +308,11 @@ If you put all of your logic inside of a single data flow, ADF will execute the
### Execute sinks in parallel
-On the pipeline execute data flow activity under the "Sink Properties" section is an option to turn on parallel sink loading. When you enable "run in parallel", you are instructing data flows write to connected sinks at the same time rather than in a sequential manner. The default behavior is that sinks write data one by one. In order to utilize the parallel option, the sinks must be group together and connected to the same stream via a New Branch or Conditional Split.
+The default behavior of data flow sinks is to execute each sink sequentially and to fail the data flow when an error is encountered in a sink. Additionally, all sinks default to the same group unless you go into the data flow properties and set different priorities for the sinks.
+
+Data flows let you group sinks together from the data flow properties tab in the UI designer. You can both set the order of execution of your sinks and group sinks together by assigning them the same group number. To help manage groups, you can ask ADF to run sinks in the same group in parallel.
+
+Under the "Sink Properties" section of the pipeline Execute Data Flow activity is an option to turn on parallel sink loading. When you enable "run in parallel", you instruct data flows to write to connected sinks at the same time rather than sequentially. To utilize the parallel option, the sinks must be grouped together and connected to the same stream via a New Branch or Conditional Split.
## Next steps
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-execute-data-flow-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-data-flow-activity.md
@@ -8,7 +8,7 @@ ms.service: data-factory
ms.workload: data-services ms.topic: conceptual ms.author: makromer
-ms.date: 11/24/2020
+ms.date: 01/03/2021
--- # Data Flow activity in Azure Data Factory
@@ -33,6 +33,8 @@ Use the Data Flow activity to transform and move data via mapping data flows. If
"computeType": "General" }, "traceLevel": "Fine",
+ "runConcurrently": true,
+ "continueOnError": true,
"staging": { "linkedService": { "referenceName": "MyStagingLinkedService",
@@ -91,6 +93,14 @@ If you do not require every pipeline execution of your data flow activities to f
![Logging level](media/data-flow/logging.png "Set logging level")
+## Sink properties
+
+The grouping feature in data flows allows you to both set the order of execution of your sinks and group sinks together using the same group number. To help manage groups, you can ask ADF to run sinks in the same group in parallel. You can also set the sink group to continue even after one of the sinks encounters an error.
+
+The default behavior of data flow sinks is to execute each sink sequentially and to fail the data flow when an error is encountered in a sink. Additionally, all sinks default to the same group unless you go into the data flow properties and set different priorities for the sinks.
+
+![Sink properties](media/data-flow/sink-properties.png "Set sink properties")
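Read together with the JSON fragment earlier in this entry, a hedged sketch of how the two new settings might sit in the Execute Data Flow activity's `typeProperties` (the data flow reference name is a placeholder):

```json
"typeProperties": {
    "dataflow": {
        "referenceName": "MyDataFlow",
        "type": "DataFlowReference"
    },
    "traceLevel": "Fine",
    "runConcurrently": true,
    "continueOnError": true
}
```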
+ ## Parameterizing Data Flows ### Parameterized datasets
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-script.md
@@ -243,7 +243,7 @@ derive(each(match(type=='string'), $$ = 'string'),
``` ### Fill down
-Here is how to implement the common "Fill Down" problem with data sets when you want to replace NULL values with the value from the previous non-NULL value in the sequence. Note that this operation can be negative performance implications because you must creat a synthetic window across your entire data set with a "dummy" category value. Additional, you must sort by a value to create the proper data sequence to find the previous non-NULL value. This snippet below creates the synthetic category as "dummy" and sorts by a surrogate key. You can remove the surrogate key and use your own data-specific sort key. This code snippet assumes you've already added a Source transformation called ```source1```
+Here is how to implement the common "Fill Down" problem with data sets when you want to replace NULL values with the value from the previous non-NULL value in the sequence. Note that this operation can have negative performance implications because you must create a synthetic window across your entire data set with a "dummy" category value. Additionally, you must sort by a value to create the proper data sequence to find the previous non-NULL value. This snippet below creates the synthetic category as "dummy" and sorts by a surrogate key. You can remove the surrogate key and use your own data-specific sort key. This code snippet assumes you've already added a Source transformation called ```source1```
``` source1 derive(dummy = 1) ~> DerivedColumn
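The digest truncates the snippet after its first line. A plausible completion, following the steps the paragraph describes (synthetic `dummy` category, surrogate sort key, then a window that carries the last non-NULL value forward); the column `Rating` and the transformation names are illustrative assumptions:

```
source1 derive(dummy = 1) ~> DerivedColumn
DerivedColumn keyGenerate(output(sk as long),
    startAt: 1L) ~> SurrogateKey
SurrogateKey window(over(dummy),
    asc(sk, true),
    Rating2 = coalesce(Rating, last(Rating, true()))) ~> Window1
```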
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-stateless-application-git-ops-guestbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateless-application-git-ops-guestbook.md
@@ -23,6 +23,8 @@ The deployment is done using GitOps on the Arc enabled Kubernetes cluster on you
This procedure is intended for those who have reviewed the [Kubernetes workloads on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-workload-management.md) and are familiar with the concepts of [What is Azure Arc enabled Kubernetes (Preview)](../azure-arc/kubernetes/overview.md).
+> [!NOTE]
+> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
## Prerequisites
@@ -105,7 +107,6 @@ Follow these steps to configure the Azure Arc resource to deploy a GitOps config
![Screenshot shows the Azure Arc enabled Kubernetes cluster in an installed state.](media/azure-stack-edge-gpu-connect-powershell-interface/view-configurations-2.png) - ## Verify deployment The deployment via the GitOps configuration creates a `demotestguestbook` namespace as specified in the deployment `yaml` files located in the git repo.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-troubleshoot-activation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot-activation.md
@@ -26,9 +26,9 @@ The following table summarizes the errors related to device activation and the c
| If the Azure Key Vault used for activation is deleted before the device is activated with the activation key, then you receive this error. <br> ![Key vault error 1](./media/azure-stack-edge-gpu-troubleshoot-activation/key-vault-error-1.png) | If the key vault has been deleted, you can recover the key vault if the vault is in purge-protection duration. Follow the steps in [Recover a key vault](../key-vault/general/key-vault-recovery.md#list-recover-or-purge-soft-deleted-secrets-keys-and-certificates). <br>If the purge-protection duration has elapsed, then the key vault cannot be recovered. Contact Microsoft Support for next steps. | | If the Azure Key Vault is deleted after the device is activated, and you then try to perform any operation that involves encryption, for example: **Add User**, **Add Share**, **Configure Compute**, then you receive this error. <br> ![Key vault error 2](./media/azure-stack-edge-gpu-troubleshoot-activation/key-vault-error-2.png) | If the key vault has been deleted, you can recover the key vault if the vault is in purge-protection duration. Follow the steps in Recover a key vault. <br>If the purge-protection duration has elapsed, then the key vault cannot be recovered. Contact Microsoft Support for next steps. | | If the Channel Integrity Key in the Azure Key Vault is deleted and you then try to perform any operations that involve encryption, for example: **Add User**, **Add Share**, **Configure Compute** - then you will receive this error. <br> ![Key vault error 3](./media/azure-stack-edge-gpu-troubleshoot-activation/key-vault-error-3.png) | If the Channel Integrity Key in the key vault is deleted, but it is still within the purge duration, follow the steps in [Undo Key vault key removal](/powershell/module/az.keyvault/undo-azkeyvaultkeyremoval). <br>If the purge protection duration has elapsed, and if you have the key backed up, you can restore from the backup; otherwise you can't recover the key. Contact Microsoft Support for next steps. |
-| If the activation key generation fails due to any error, then you receive this error. Additional details are present in the notification. <br> ![Key vault error 4](./media/azure-stack-edge-gpu-troubleshoot-activation/key-vault-error-4.png) | Wait a few minutes and retry the operation. If the problem persists, contact Microsoft Support. |
+| If the activation key generation fails due to any error, then you receive this error. Additional details are present in the notification. <br> ![Key vault error 4](./media/azure-stack-edge-gpu-troubleshoot-activation/key-vault-error-4.png) | Ensure that the ports and URLs specified in [Access Azure Key Vault behind a firewall](../key-vault/general/access-behind-firewall.md) are open on your firewall in order to access the Key Vault. Wait a few minutes and retry the operation. If the problem persists, contact Microsoft Support. |
| If the user has read-only permissions, then the user is not allowed to generate an activation key and this error is presented. <br> ![Key vault error 5](./media/azure-stack-edge-gpu-troubleshoot-activation/key-vault-error-5.png) | This could be because you don't have the right access or *Microsoft.KeyVault* is not registered.<li>Make sure that you have owner or contributor access at the resource group level used for your Azure Stack Edge resource.</li><li>Make sure that the Microsoft.KeyVault resource provider is registered. To register a resource provider, go to the subscription used for the Azure Stack Edge resource. Go to **Resource providers**, search for *Microsoft.KeyVault*, select it, and then select **Register**.</li> | ## Next steps -- Learn more on how to [Troubleshoot device issues](azure-stack-edge-gpu-troubleshoot.md).\ No newline at end of file
+- Learn more on how to [Troubleshoot device issues](azure-stack-edge-gpu-troubleshoot.md).
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-the-on-premises-management-console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-the-on-premises-management-console.md
@@ -294,6 +294,26 @@ To reset your password:
> [!NOTE] > The sensor is linked to the subscription that it was originally connected to. You can recover the password only by using the same subscription that it's attached to.
+## Update the software version
+
+The following procedure describes how to update the on-premises management console software version. The update process takes about 30 minutes.
+
+1. Go to the [Azure portal](https://portal.azure.com/).
+
+1. Go to Defender for IoT.
+
+1. Go to the **Updates** page.
+
+1. Select a version from the on-premises management console section.
+
+1. Select **Download** and save the file.
+
+1. Log into the on-premises management console and select **System Settings** from the side menu.
+
+1. On the **Version Update** pane, select **Update**.
+
+1. Select the file that you downloaded from the Defender for IoT **Updates** page.
+ ## See also [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-set-up-your-network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-set-up-your-network.md
@@ -4,7 +4,7 @@ description: Learn about solution architecture, network preparation, prerequisit
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 12/06/2020
+ms.date: 01/03/2021
ms.topic: how-to ms.service: azure ---
@@ -49,7 +49,7 @@ Record site information such as:
- Configuration workstation. -- SSL certificates (optional).
+- SSL certificates (optional but recommended).
- SMTP authentication (optional). To use the SMTP server with authentication, prepare the credentials required for your server.
@@ -564,7 +564,7 @@ An overview of the industrial network diagram will allow you to define the prope
> [!NOTE] > The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
-2. Provide the approximate number of devices in the networks (optional).
+2. Provide the approximate number of network devices that will be monitored. You will need this information when onboarding your subscription to the Azure Defender for IoT portal. During the onboarding process, you will be prompted to enter the number of devices in increments of 1000.
3. Provide a subnet list for the production networks and a description (optional).
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-use-apis-sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-apis-sdks.md
@@ -30,7 +30,7 @@ The control plane APIs are [ARM](../azure-resource-manager/management/overview.m
The most current control plane API version is _**2020-12-01**_. To use the control plane APIs:
-* You can call the APIs directly by referencing the latest Swagger in the [control plane Swagger folder](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins). This repo also includes a folder of examples that show the usage.
+* You can call the APIs directly by referencing the latest Swagger folder in the [control plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/stable). This folder also includes a folder of examples that show the usage.
* You can currently access SDKs for control APIs in... - [**.NET (C#)**](https://www.nuget.org/packages/Microsoft.Azure.Management.DigitalTwins/) ([reference [auto-generated]](/dotnet/api/overview/azure/digitaltwins/management?view=azure-dotnet&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/digitaltwins/Microsoft.Azure.Management.DigitalTwins)) - [**Java**](https://search.maven.org/artifact/com.microsoft.azure.digitaltwins.v2020_10_31/azure-mgmt-digitaltwins/1.0.0/jar) ([reference [auto-generated]](/java/api/overview/azure/digitaltwins?view=azure-java-stable&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/digitaltwins/mgmt-v2020_10_31))
@@ -52,7 +52,7 @@ The most current data plane API version is _**2020-10-31**_.
To use the data plane APIs: * You can call the APIs directly, by...
- - referencing the latest Swagger in the [data plane Swagger folder](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This repo also includes a folder of examples that show the usage.
+ - referencing the latest Swagger folder in the [data plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This folder also includes a folder of examples that show the usage.
- viewing the [API reference documentation](/rest/api/azure-digitaltwins/). * You can use the **.NET (C#) SDK**. To use the .NET SDK... - you can view and add the package from NuGet: [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core).
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/troubleshoot-known-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-known-issues.md
@@ -32,13 +32,21 @@ This article provides information about known issues associated with Azure Digit
| --- | --- | --- | | To determine whether your role assignment was successfully set up after running the script, follow the instructions in the [*Verify user role assignment*](how-to-set-up-instance-scripted.md#verify-user-role-assignment) section of the setup article. If your user is not shown with this role, this issue affects you. | For users logged in with a personal [Microsoft account (MSA)](https://account.microsoft.com/account), your user's Principal ID that identifies you in commands like this may be different from your user's sign-in email, making it difficult for the script to discover it and use it to assign the role properly. | To resolve, you can set up your role assignment manually using either the [CLI instructions](how-to-set-up-instance-cli.md#set-up-user-access-permissions) or [Azure portal instructions](how-to-set-up-instance-portal.md#set-up-user-access-permissions). |
-## Issue with interactive browser authentication
+## Issue with interactive browser authentication on Azure.Identity 1.2.0
**Issue description:** When writing authentication code in your Azure Digital Twins applications using version **1.2.0** of the **[Azure.Identity](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true) library**, you may experience issues with the [InteractiveBrowserCredential](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true) method. This presents as an error response of "Azure.Identity.AuthenticationFailedException" when trying to authenticate in a browser window. The browser window may fail to start up completely, or appear to authenticate the user successfully, while the client application still fails with the error. | Does this affect me? | Cause | Resolution | | --- | --- | --- |
-| The&nbsp;affected&nbsp;method&nbsp;is&nbsp;used&nbsp;in&nbsp;the&nbsp;following articles:<br><br>[*Tutorial: Code a client app*](tutorial-code.md)<br><br>[*How-to: Write app authentication code*](how-to-authenticate-client.md)<br><br>[*How-to: Use the Azure Digital Twins APIs and SDKs*](how-to-use-apis-sdks.md) | Some users have had this issue with version **1.2.0** of the `Azure.Identity` library. | To resolve, update your applications to use the [latest version](https://www.nuget.org/packages/Azure.Identity) of `Azure.Identity`. After updating the library version, the browser should load and authenticate as expected. |
+| The&nbsp;affected&nbsp;method&nbsp;is&nbsp;used&nbsp;in&nbsp;the&nbsp;following articles:<br><br>[*Tutorial: Code a client app*](tutorial-code.md)<br><br>[*How-to: Write app authentication code*](how-to-authenticate-client.md)<br><br>[*How-to: Use the Azure Digital Twins APIs and SDKs*](how-to-use-apis-sdks.md) | Some users have had this issue with version **1.2.0** of the `Azure.Identity` library. | To resolve, update your applications to use a [later version](https://www.nuget.org/packages/Azure.Identity) of `Azure.Identity`. After updating the library version, the browser should load and authenticate as expected. |
+
+## Issue with default Azure credential authentication on Azure.Identity 1.3.0
+
+**Issue description:** When writing authentication code in your Azure Digital Twins applications using version **1.3.0** of the **[Azure.Identity](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true) library**, you may experience issues with the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) method used in many samples throughout these docs. This presents as an error response of "Azure.Identity.AuthenticationFailedException: SharedTokenCacheCredential authentication failed" when the code tries to authenticate.
+
+| Does this affect me? | Cause | Resolution |
+| --- | --- | --- |
+| DefaultAzureCredential is used in most of the documentation examples that include authentication. If you are writing authentication code using DefaultAzureCredential and using version 1.3.0 of the `Azure.Identity` library, this is likely to affect you. | This issue presents when using DefaultAzureCredential with version **1.3.0** of the `Azure.Identity` library. | To resolve, switch your application to use [version 1.2.2](https://www.nuget.org/packages/Azure.Identity/1.2.2) of `Azure.Identity`. After changing the library version, authentication should succeed as expected. |
## Next steps
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/tutorial-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-code.md
@@ -58,9 +58,12 @@ This will create several files inside your directory, including one called *Prog
Keep the command window open, as you'll continue to use it throughout the tutorial.
-Next, **add two dependencies to your project** that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add the latest version of each to your project.
-* [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
-* [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
+Next, **add two dependencies to your project** that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add each one to your project.
+* [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true). Add the latest version.
+* [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure. Add version 1.2.2.
+
+>[!NOTE]
+> There is currently a [known issue](troubleshoot-known-issues.md#issue-with-default-azure-credential-authentication-on-azureidentity-130) affecting the ability to use Azure.Identity version 1.3.0 with this tutorial. Please use version 1.2.2 while this issue persists.
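With both packages added, a minimal sketch of the client setup that these dependencies enable; the instance URL is a placeholder you'd replace with your own host name:

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Placeholder -- replace with the host name of your Azure Digital Twins instance.
string adtInstanceUrl = "https://<your-instance-hostname>";

// DefaultAzureCredential picks up your local Azure CLI or Visual Studio sign-in.
var credential = new DefaultAzureCredential();
var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential);
Console.WriteLine("Service client created - ready to go");
```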
## Get started with project code
event-grid https://docs.microsoft.com/en-us/azure/event-grid/cloudevents-schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/cloudevents-schema.md
@@ -175,8 +175,12 @@ module.exports = function (context, req) {
// If the request is for subscription validation, send back the validation code context.log('Validate request received');
- context.res = { status: 200 };
- context.res.headers.append('Webhook-Allowed-Origin', 'eventgrid.azure.net');
+ context.res = {
+ status: 200,
+ headers: {
+ 'Webhook-Allowed-Origin': 'eventgrid.azure.net',
+ },
+ };
} else {
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-capture-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-capture-python.md
@@ -2,7 +2,7 @@
title: Read Azure Event Hubs captured data from a Python app (latest) description: This article shows you how to write Python code to capture data that's sent to an event hub and read the captured event data from an Azure storage account. ms.topic: quickstart
-ms.date: 06/23/2020
+ms.date: 01/04/2021
--- # Capture Event Hubs data in Azure Storage and read it by using Python (azure-eventhub)
@@ -22,7 +22,11 @@ In this quickstart, you:
## Prerequisites -- Python 2.7, and 3.5 or later, with PIP installed and updated.
+- Python with PIP and the following packages installed. The code in this article has been tested against these versions.
+ - Python 3.7
+ - azure-eventhub 5.2.0
+ - azure-storage-blob 12.6.0
+ - avro-python3 1.10.1
- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. - An active Event Hubs namespace and event hub. [Create an Event Hubs namespace and an event hub in the namespace](event-hubs-create.md). Record the name of the Event Hubs namespace, the name of the event hub, and the primary access key for the namespace. To get the access key, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). The default key name is *RootManageSharedAccessKey*. For this quickstart, you need only the primary key. You don't need the connection string.
@@ -150,6 +154,13 @@ In this example, the captured data is stored in Azure Blob storage. The script i
pip install azure-eventhub pip install avro-python3 ```+
+ > [!NOTE]
+ > The code in this article has been tested against these versions.
+ > - Python 3.7
+ > - azure-eventhub 5.2.0
+ > - azure-storage-blob 12.6.0
+ > - avro-python3 1.10.1
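To match the tested versions exactly, you can pin them at install time (equivalent to the unpinned `pip install` commands shown in the article):

```
pip install azure-eventhub==5.2.0 azure-storage-blob==12.6.0 avro-python3==1.10.1
```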
2. Change your directory to the directory where you saved *sender.py* and *capturereader.py*, and run this command: ```
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-federation-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-federation-overview.md
@@ -401,6 +401,7 @@ between Event Hubs and various other eventing and messaging systems:
- [Event replicator applications in Azure Functions][1] - [Replicating events between Event Hubs][2] - [Replicating events to Azure Service Bus][3]
+- [Use Apache Kafka MirrorMaker with Event Hubs][11]
[1]: event-hubs-federation-replicator-functions.md [2]: https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/EventHubCopy
@@ -411,4 +412,5 @@ between Event Hubs and various other eventing and messaging systems:
[7]: event-hubs-federation-patterns.md#routing [8]: event-hubs-federation-patterns.md#log-projection [9]: process-data-azure-stream-analytics.md
-[10]: event-hubs-federation-patterns.md#replication
\ No newline at end of file
+[10]: event-hubs-federation-patterns.md#replication
+[11]: event-hubs-kafka-mirror-maker-tutorial.md
\ No newline at end of file
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-federation-patterns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-federation-patterns.md
@@ -27,7 +27,7 @@ forwarded without making any modifications to the event payload.
The implementation of this pattern is covered by the [Event replication between Event Hubs](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/EventHubCopy) and [Event replication between Event Hubs and Service Bus](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/EventHubCopyToServiceBus)
-samples.
+samples and the [Use Apache Kafka MirrorMaker with Event Hubs](event-hubs-kafka-mirror-maker-tutorial.md) tutorial for the specific case of replicating data from an Apache Kafka broker into Event Hubs.
### Streams and order preservation
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-for-kafka-ecosystem-overview.md
@@ -112,9 +112,7 @@ The payload of any Event Hub event is a byte stream and the content can be compr
### Log Compaction
-Apache Kafka log compaction is a feature that allows evicting all but the last record of each key from a partition, which effectively turns an Apache Kafka topic into a key-value store where the last value added overrides the previous one. The key-value store pattern, even with frequent updates, is far better supported by database services like [Azure Cosmos DB](../cosmos-db/introduction.md).
-
-The log compaction feature is used by the Kafka Connect and Kafka Streams client frameworks.
+Apache Kafka log compaction is a feature that allows evicting all but the last record of each key from a partition, which effectively turns an Apache Kafka topic into a key-value store where the last value added overrides the previous one. This feature is presently not implemented by Azure Event Hubs. The key-value store pattern, even with frequent updates, is far better supported by database services like [Azure Cosmos DB](../cosmos-db/introduction.md). Please refer to the [Log Projection](event-hubs-federation-overview.md#log-projections) topic in the Event Hubs federation guidance for more details.
### Kafka Streams
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-geo-dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-geo-dr.md
@@ -6,10 +6,19 @@ ms.date: 06/23/2020
--- # Azure Event Hubs - Geo-disaster recovery
-When entire Azure regions or datacenters (if no [availability zones](../availability-zones/az-overview.md) are used) experience downtime, it's critical for data processing to continue to operate in a different region or datacenter. As such, *Geo-disaster recovery* and *Geo-replication* are important features for any enterprise. Azure Event Hubs supports both geo-disaster recovery and geo-replication, at the namespace level. 
-> [!NOTE]
-> The Geo-disaster recovery feature is only available for the [standard and dedicated SKUs](https://azure.microsoft.com/pricing/details/event-hubs/).
+Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in some cases even required by industry regulations.
+
+Azure Event Hubs already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter, and it implements transparent failure detection and failover mechanisms so that the service continues to operate within the assured service levels, typically without noticeable interruptions, when such failures occur. If an Event Hubs namespace has been created with the [availability zones](../availability-zones/az-overview.md) option enabled, the outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of an entire facility.
+
+The all-active Azure Event Hubs cluster model with availability zone support provides resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures cannot sufficiently defend against.
+
+The Event Hubs Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this magnitude and abandon a failed Azure region for good, without having to change your application configurations. Abandoning an Azure region typically involves several services, and this feature primarily aims at helping preserve the integrity of the composite application configuration.
+
+The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Event Hubs, Consumer Groups and settings) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move will re-point the chosen alias name for the namespace to the secondary namespace and then break the pairing. The failover is nearly instantaneous once initiated.
+
+> [!IMPORTANT]
+> The feature enables instantaneous continuity of operations with the same configuration, but **does not replicate the event data**. Unless the disaster caused the loss of all zones, the event data that is preserved in the primary Event Hub will be recoverable after failover, and the historic events can be obtained from there once access is restored. For replicating event data and operating corresponding namespaces in active/active configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the [replication guidance](event-hubs-federation-overview.md).
## Outages and disasters
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-kafka-mirror-maker-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-kafka-mirror-maker-tutorial.md
@@ -2,12 +2,14 @@
title: Use Apache Kafka MirrorMaker - Azure Event Hubs | Microsoft Docs description: This article provides information on how to use Kafka MirrorMaker to mirror a Kafka cluster in AzureEvent Hubs. ms.topic: how-to
-ms.date: 06/23/2020
+ms.date: 01/04/2021
---
-# Use Kafka MirrorMaker with Event Hubs for Apache Kafka
+# Use Apache Kafka MirrorMaker with Event Hubs
-This tutorial shows how to mirror a Kafka broker in an event hub using Kafka MirrorMaker.
+This tutorial shows how to mirror a Kafka broker into an Azure Event Hub using Kafka MirrorMaker. If you are hosting Apache Kafka on
+Kubernetes using the CNCF Strimzi operator, you can refer to the tutorial in [this blog post](https://strimzi.io/blog/2020/06/09/mirror-maker-2-eventhub/)
+to learn how to set up Kafka with Strimzi and MirrorMaker 2.
![Kafka MirrorMaker with Event Hubs](./media/event-hubs-kafka-mirror-maker-tutorial/evnent-hubs-mirror-maker1.png)
@@ -26,9 +28,12 @@ In this tutorial, you learn how to:
> * Run Kafka MirrorMaker ## Introduction
-One major consideration for modern cloud scale apps is the ability to update, improve, and change infrastructure without interrupting service. This tutorial shows how an event hub and Kafka MirrorMaker can integrate an existing Kafka pipeline into Azure by "mirroring" the Kafka input stream in the Event Hubs service.
+This tutorial shows how an event hub and Kafka MirrorMaker can integrate an existing Kafka pipeline into Azure by "mirroring" the Kafka input stream in the Event Hubs service, which allows for
+integration of Apache Kafka streams using several [federation patterns](event-hubs-federation-overview.md).
-An Azure Event Hubs Kafka endpoint enables you to connect to Azure Event Hubs using the Kafka protocol (that is, Kafka clients). By making minimal changes to a Kafka application, you can connect to Azure Event Hubs and enjoy the benefits of the Azure ecosystem. Event Hubs currently supports Kafka versions 1.0 and later.
+An Azure Event Hubs Kafka endpoint enables you to connect to Azure Event Hubs using the Kafka protocol (that is, Kafka clients). By making minimal changes to a Kafka application, you can connect to Azure Event Hubs and enjoy the benefits of the Azure ecosystem. Event Hubs currently supports the protocol of Apache Kafka versions 1.0 and later.
+
+You can use Apache Kafka's MirrorMaker 1 unidirectionally from Apache Kafka to Event Hubs. MirrorMaker 2 can be used in both directions, but the [`MirrorCheckpointConnector` and `MirrorHeartbeatConnector` that are configurable in MirrorMaker 2](https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0) must both be configured to point to the Apache Kafka broker and not to Event Hubs. This tutorial shows configuring MirrorMaker 1.
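For the MirrorMaker 1 direction, the producer side points at the Event Hubs Kafka endpoint over SASL/PLAIN. A hedged sketch of that producer configuration file (the namespace name and connection string are placeholders; the file name is an assumption):

```
# mirror-eventhub.config (assumed file name)
bootstrap.servers=<your-namespace>.servicebus.windows.net:9093
client.id=mirror_maker_producer
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<your-event-hubs-connection-string>";
```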
## Prerequisites
firewall https://docs.microsoft.com/en-us/azure/firewall/dns-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/dns-settings.md
@@ -5,7 +5,7 @@ services: firewall
author: vhorne ms.service: firewall ms.topic: how-to
-ms.date: 11/06/2020
+ms.date: 01/04/2021
ms.author: victorh ---
@@ -60,6 +60,9 @@ $azFw | Set-AzFirewall
You can configure Azure Firewall to act as a DNS proxy. A DNS proxy is an intermediary for DNS requests from client virtual machines to a DNS server. If you configure a custom DNS server, then enable DNS proxy to avoid a DNS resolution mismatch, and enable FQDN (fully qualified domain name) filtering in the network rules.
+:::image type="content" source="media/dns-settings/dns-proxy-2.png" alt-text="D N S proxy configuration using a custom D N S server.":::
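A hedged Azure PowerShell sketch of turning the proxy on with a custom DNS server; the `DNSServer` and `DNSEnableProxy` property names reflect the Az.Network module of this era and should be treated as assumptions:

```azurepowershell
# Assumed firewall and resource group names.
$azFw = Get-AzFirewall -Name "fwName" -ResourceGroupName "rgName"

$azFw.DNSServer = @("10.0.0.4")   # assumed property name for the custom DNS server list
$azFw.DNSEnableProxy = $true      # assumed property name for the proxy toggle
$azFw | Set-AzFirewall
```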
++ If you don't enable DNS proxy, then DNS requests from the client might travel to a DNS server at a different time or return a different response compared to that of the firewall. DNS proxy puts Azure Firewall in the path of the client requests to avoid inconsistency. When Azure Firewall is a DNS proxy, two caching function types are possible:
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/definition-structure.md
@@ -823,25 +823,6 @@ Policy, use one of the following methods:
:::image type="content" source="../media/extension-for-vscode/extension-hover-shows-property-alias.png" alt-text="Screenshot of the Azure Policy extension for Visual Studio Code hovering a property to display the alias names." border="false"::: -- Azure Resource Graph-
- Use the `project` operator to display the **alias** of a resource.
-
- ```kusto
- Resources
- | where type=~'microsoft.storage/storageaccounts'
- | limit 1
- | project aliases
- ```
-
- ```azurecli-interactive
- az graph query -q "Resources | where type=~'microsoft.storage/storageaccounts' | limit 1 | project aliases"
- ```
-
- ```azurepowershell-interactive
- Search-AzGraph -Query "Resources | where type=~'microsoft.storage/storageaccounts' | limit 1 | project aliases"
- ```
- - Azure PowerShell ```azurepowershell-interactive
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/effects.md
@@ -54,6 +54,8 @@ After the Resource Provider returns a success code on a Resource Manager mode re
**AuditIfNotExists** and **DeployIfNotExists** evaluate to determine if additional compliance logging or action is required.
+Additionally, `PATCH` requests that only modify `tags`-related fields restrict policy evaluation to policies containing conditions that inspect `tags`-related fields.
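As an illustration, a policy rule whose condition inspects a tag field, and that would therefore still be evaluated on a tag-only `PATCH`, might look like this (the tag name and effect are arbitrary examples):

```json
{
    "if": {
        "field": "tags['environment']",
        "exists": "false"
    },
    "then": {
        "effect": "audit"
    }
}
```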
+ ## Append Append is used to add additional fields to the requested resource during creation or update. A
@@ -852,4 +854,4 @@ to validate the right policy assignments are affecting the right scopes.
- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).\ No newline at end of file
+- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/guest-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
@@ -1,6 +1,6 @@
--- title: Learn to audit the contents of virtual machines
-description: Learn how Azure Policy uses the Guest Configuration agent to audit settings inside virtual machines.
+description: Learn how Azure Policy uses the Guest Configuration client to audit settings inside virtual machines.
ms.date: 10/14/2020 ms.topic: conceptual ---
@@ -82,7 +82,7 @@ of the configuration within the machine.
## Supported client types Guest Configuration policy definitions are inclusive of new versions. Older versions of operating
-systems available in Azure Marketplace are excluded if the Guest Configuration agent isn't
+systems available in Azure Marketplace are excluded if the Guest Configuration client isn't
compatible. The following table shows a list of supported operating systems on Azure images: |Publisher|Name|Versions|
governance https://docs.microsoft.com/en-us/azure/governance/policy/how-to/guest-configuration-create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create.md
@@ -167,12 +167,41 @@ class ResourceName : OMI_BaseResource
}; ```
+If the resource has required properties, those must also be returned by `Get-TargetResource`
+in parallel with the `reasons` class. If `reasons` isn't included, the service includes a
+"catch-all" behavior that compares the values input to `Get-TargetResource` and the values
+returned by `Get-TargetResource`, and provides a detailed comparison as `reasons`.
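A hedged PowerShell sketch of a `Get-TargetResource` that returns a required property in parallel with `reasons`; the resource name, property, and reason-code format are illustrative assumptions:

```powershell
function Get-TargetResource {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true)]
        [string] $Path
    )

    $reasons = @()
    if (-not (Test-Path -Path $Path)) {
        $reasons += @{
            Code   = 'MyResource:MyResource:PathNotFound'   # assumed identifier format
            Phrase = "The path '$Path' was not found on the machine."
        }
    }

    # Required properties are returned alongside the reasons list.
    return @{
        Path    = $Path
        Reasons = $reasons
    }
}
```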
+ ### Configuration requirements The name of the custom configuration must be consistent everywhere. The name of the .zip file for the content package, the configuration name in the MOF file, and the guest assignment name in the Azure Resource Manager template (ARM template), must be the same.
+### Policy requirements
+
+The policy definition `metadata` section must include two properties for the Guest Configuration
+service to automate provisioning and reporting of Guest Configuration assignments. The `category` property must
+be set to "Guest Configuration" and a section named `Guest Configuration` must contain information about the
+Guest Configuration assignment. The `New-GuestConfigurationPolicy` cmdlet creates this text automatically.
+See the step-by-step instructions on this page.
+
+The following example demonstrates the `metadata` section.
+
+```json
+ "metadata": {
+ "category": "Guest Configuration",
+ "guestConfiguration": {
+ "name": "test",
+ "version": "1.0.0",
+ "contentType": "Custom",
+ "contentUri": "CUSTOM-URI-HERE",
+ "contentHash": "CUSTOM-HASH-VALUE-HERE",
+ "configurationParameter": {}
+ }
+ },
+```
+ ### Scaffolding a Guest Configuration project Developers who would like to accelerate the process of getting started and work from sample code can
governance https://docs.microsoft.com/en-us/azure/governance/policy/troubleshoot/general https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/troubleshoot/general.md
@@ -37,10 +37,9 @@ An incorrect or non-existent alias is used in a policy definition.
#### Resolution First, validate that the Resource Manager property has an alias. Use
-[Azure Policy extension for Visual Studio Code](../how-to/extension-for-vscode.md),
-[Azure Resource Graph](../../resource-graph/samples/starter.md#distinct-alias-values), or SDK to
-look up available aliases. If the alias for a Resource Manager property doesn't exist, create a
-support ticket.
+[Azure Policy extension for Visual Studio Code](../how-to/extension-for-vscode.md) or SDK to look up
+available aliases. If the alias for a Resource Manager property doesn't exist, create a support
+ticket.
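+
+For example, a hedged sketch with Azure PowerShell (assuming the `Az.Resources` module), where `Get-AzPolicyAlias` returns alias details for a provider namespace:
+
+```azurepowershell-interactive
+# List aliases available under the Microsoft.Storage namespace
+Get-AzPolicyAlias -NamespaceMatch 'Microsoft.Storage'
+```
+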
### Scenario: Evaluation details not up-to-date
governance https://docs.microsoft.com/en-us/azure/governance/policy/tutorials/create-custom-policy-definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/create-custom-policy-definition.md
@@ -211,7 +211,6 @@ tutorial:
- Azure Policy extension for VS Code - Azure CLI - Azure PowerShell-- Azure Resource Graph ### Get aliases in VS Code extension
@@ -255,130 +254,6 @@ earlier.
Like Azure CLI, the results show an alias supported by the storage accounts named **supportsHttpsTrafficOnly**.
-### Azure Resource Graph
-
-[Azure Resource Graph](../../resource-graph/overview.md) is a service that provides another method
-to find properties of Azure resources. Here is a sample query for looking at a single storage
-account with Resource Graph:
-
-```kusto
-Resources
-| where type=~'microsoft.storage/storageaccounts'
-| limit 1
-```
-
-```azurecli-interactive
-az graph query -q "Resources | where type=~'microsoft.storage/storageaccounts' | limit 1"
-```
-
-```azurepowershell-interactive
-Search-AzGraph -Query "Resources | where type=~'microsoft.storage/storageaccounts' | limit 1"
-```
-
-The results look similar to what we see in the ARM templates and through the Azure Resource
-Explorer. However, Azure Resource Graph results can also include
-[alias](../concepts/definition-structure.md#aliases) details by _projecting_ the _aliases_ array:
-
-```kusto
-Resources
-| where type=~'microsoft.storage/storageaccounts'
-| limit 1
-| project aliases
-```
-
-```azurecli-interactive
-az graph query -q "Resources | where type=~'microsoft.storage/storageaccounts' | limit 1 | project aliases"
-```
-
-```azurepowershell-interactive
-Search-AzGraph -Query "Resources | where type=~'microsoft.storage/storageaccounts' | limit 1 | project aliases"
-```
-
-Here is example output from a storage account for aliases:
-
-```json
-"aliases": {
- "Microsoft.Storage/storageAccounts/accessTier": null,
- "Microsoft.Storage/storageAccounts/accountType": "Standard_LRS",
- "Microsoft.Storage/storageAccounts/enableBlobEncryption": true,
- "Microsoft.Storage/storageAccounts/enableFileEncryption": true,
- "Microsoft.Storage/storageAccounts/encryption": {
- "keySource": "Microsoft.Storage",
- "services": {
- "blob": {
- "enabled": true,
- "lastEnabledTime": "2018-06-04T17:59:14.4970000Z"
- },
- "file": {
- "enabled": true,
- "lastEnabledTime": "2018-06-04T17:59:14.4970000Z"
- }
- }
- },
- "Microsoft.Storage/storageAccounts/encryption.keySource": "Microsoft.Storage",
- "Microsoft.Storage/storageAccounts/encryption.keyvaultproperties.keyname": null,
- "Microsoft.Storage/storageAccounts/encryption.keyvaultproperties.keyvaulturi": null,
- "Microsoft.Storage/storageAccounts/encryption.keyvaultproperties.keyversion": null,
- "Microsoft.Storage/storageAccounts/encryption.services": {
- "blob": {
- "enabled": true,
- "lastEnabledTime": "2018-06-04T17:59:14.4970000Z"
- },
- "file": {
- "enabled": true,
- "lastEnabledTime": "2018-06-04T17:59:14.4970000Z"
- }
- },
- "Microsoft.Storage/storageAccounts/encryption.services.blob": {
- "enabled": true,
- "lastEnabledTime": "2018-06-04T17:59:14.4970000Z"
- },
- "Microsoft.Storage/storageAccounts/encryption.services.blob.enabled": true,
- "Microsoft.Storage/storageAccounts/encryption.services.file": {
- "enabled": true,
- "lastEnabledTime": "2018-06-04T17:59:14.4970000Z"
- },
- "Microsoft.Storage/storageAccounts/encryption.services.file.enabled": true,
- "Microsoft.Storage/storageAccounts/networkAcls": {
- "bypass": "AzureServices",
- "defaultAction": "Allow",
- "ipRules": [],
- "virtualNetworkRules": []
- },
- "Microsoft.Storage/storageAccounts/networkAcls.bypass": "AzureServices",
- "Microsoft.Storage/storageAccounts/networkAcls.defaultAction": "Allow",
- "Microsoft.Storage/storageAccounts/networkAcls.ipRules": [],
- "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*]": [],
- "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].action": [],
- "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].value": [],
- "Microsoft.Storage/storageAccounts/networkAcls.virtualNetworkRules": [],
- "Microsoft.Storage/storageAccounts/networkAcls.virtualNetworkRules[*]": [],
- "Microsoft.Storage/storageAccounts/networkAcls.virtualNetworkRules[*].action": [],
- "Microsoft.Storage/storageAccounts/networkAcls.virtualNetworkRules[*].id": [],
- "Microsoft.Storage/storageAccounts/networkAcls.virtualNetworkRules[*].state": [],
- "Microsoft.Storage/storageAccounts/primaryEndpoints": {
- "blob": "https://mystorageaccount.blob.core.windows.net/",
- "file": "https://mystorageaccount.file.core.windows.net/",
- "queue": "https://mystorageaccount.queue.core.windows.net/",
- "table": "https://mystorageaccount.table.core.windows.net/"
- },
- "Microsoft.Storage/storageAccounts/primaryEndpoints.blob": "https://mystorageaccount.blob.core.windows.net/",
- "Microsoft.Storage/storageAccounts/primaryEndpoints.file": "https://mystorageaccount.file.core.windows.net/",
- "Microsoft.Storage/storageAccounts/primaryEndpoints.queue": "https://mystorageaccount.queue.core.windows.net/",
- "Microsoft.Storage/storageAccounts/primaryEndpoints.table": "https://mystorageaccount.table.core.windows.net/",
- "Microsoft.Storage/storageAccounts/primaryEndpoints.web": null,
- "Microsoft.Storage/storageAccounts/primaryLocation": "eastus2",
- "Microsoft.Storage/storageAccounts/provisioningState": "Succeeded",
- "Microsoft.Storage/storageAccounts/sku.name": "Standard_LRS",
- "Microsoft.Storage/storageAccounts/sku.tier": "Standard",
- "Microsoft.Storage/storageAccounts/statusOfPrimary": "available",
- "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly": false
-}
-```
-
-Azure Resource Graph can be used through [Cloud Shell](https://shell.azure.com), making it a fast
-and easy way to explore the properties of your resources.
- ## Determine the effect to use Deciding what to do with your non-compliant resources is nearly as important as deciding what to
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/explore-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/explore-resources.md
@@ -184,12 +184,6 @@ Resources
| project disk.id ```
-> [!NOTE]
-> Another way to get the SKU would have been by using the **aliases** property
-> **Microsoft.Compute/virtualMachines/sku.name**. See the
-> [Show aliases](../samples/starter.md#show-aliases) and
-> [Show distinct alias values](../samples/starter.md#distinct-alias-values) examples.
- ```azurecli-interactive az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualmachines' and properties.hardwareProfile.vmSize == 'Standard_B2s' | extend disk = properties.storageProfile.osDisk.managedDisk | where disk.storageAccountType == 'Premium_LRS' | project disk.id" ```
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/query-language.md
@@ -152,7 +152,7 @@ Here is the list of KQL tabular operators supported by Resource Graph with speci
|KQL |Resource Graph sample query |Notes | |---|---|---| |[count](/azure/kusto/query/countoperator) |[Count key vaults](../samples/starter.md#count-keyvaults) | |
-|[distinct](/azure/kusto/query/distinctoperator) |[Show distinct values for a specific alias](../samples/starter.md#distinct-alias-values) | |
+|[distinct](/azure/kusto/query/distinctoperator) |[Show resources that contain storage](../samples/starter.md#show-storage) | |
|[extend](/azure/kusto/query/extendoperator) |[Count virtual machines by OS type](../samples/starter.md#count-os) | | |[join](/azure/kusto/query/joinoperator) |[Key vault with subscription name](../samples/advanced.md#join) |Join flavors supported: [innerunique](/azure/kusto/query/joinoperator#default-join-flavor), [inner](/azure/kusto/query/joinoperator#inner-join), [leftouter](/azure/kusto/query/joinoperator#left-outer-join). Limit of 3 `join` in a single query. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use `join`, see [Resource Graph tables](#resource-graph-tables). | |[limit](/azure/kusto/query/limitoperator) |[List all public IP addresses](../samples/starter.md#list-publicip) |Synonym of `take`. Doesn't work with [Skip](./work-with-data.md#skipping-records). |
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/samples/starter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/starter.md
@@ -25,8 +25,6 @@ We'll walk through the following starter queries:
- [Count resources that have IP addresses configured by subscription](#count-resources-by-ip) - [List resources with a specific tag value](#list-tag) - [List all storage accounts with specific tag value](#list-specific-tag)-- [Show aliases for a virtual machine resource](#show-aliases)-- [Show distinct values for a specific alias](#distinct-alias-values) - [Show unassociated network security groups](#unassociated-nsgs) - [Get cost savings summary from Azure Advisor](#advisor-savings) - [Count machines in scope of Guest Configuration policies](#count-gcmachines)
@@ -490,78 +488,6 @@ Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Storage/storageAccou
> [!NOTE] > This example uses `==` for matching instead of the `=~` conditional. `==` is a case sensitive match.
-## <a name="show-aliases"></a>Show aliases for a virtual machine resource
-
-[Azure Policy aliases](../../policy/concepts/definition-structure.md#aliases) are used by Azure
-Policy to manage resource compliance. Azure Resource Graph can return the _aliases_ of a resource
-type. These values are useful for comparing the current value of aliases when creating a custom
-policy definition. The _aliases_ array isn't provided by default in the results of a query. Use
-`project aliases` to explicitly add it to the results.
-
-```kusto
-Resources
-| where type =~ 'Microsoft.Compute/virtualMachines'
-| limit 1
-| project aliases
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | limit 1 | project aliases"
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell-interactive
-Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | limit 1 | project aliases" | ConvertTo-Json
-```
-
-# [Portal](#tab/azure-portal)
-
-:::image type="icon" source="../media/resource-graph-small.png"::: Try this query in Azure Resource Graph Explorer:
--- Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20limit%201%0D%0A%7C%20project%20aliases" target="_blank">portal.azure.com <span class="docon docon-navigate-external x-hidden-focus"></span></a>-- Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20limit%201%0D%0A%7C%20project%20aliases" target="_blank">portal.azure.us <span class="docon docon-navigate-external x-hidden-focus"></span></a>-- Azure China 21Vianet portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20limit%201%0D%0A%7C%20project%20aliases" target="_blank">portal.azure.cn <span class="docon docon-navigate-external x-hidden-focus"></span></a>--
-## <a name="distinct-alias-values"></a>Show distinct values for a specific alias
-
-Seeing the value of aliases on a single resource is helpful, but it doesn't show the true value of
-using Azure Resource Graph to query across subscriptions. This example looks at all values of a
-specific alias and returns the distinct values.
-
-```kusto
-Resources
-| where type=~'Microsoft.Compute/virtualMachines'
-| extend alias = aliases['Microsoft.Compute/virtualMachines/storageProfile.osDisk.managedDisk.storageAccountType']
-| distinct tostring(alias)
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az graph query -q "Resources | where type=~'Microsoft.Compute/virtualMachines' | extend alias = aliases['Microsoft.Compute/virtualMachines/storageProfile.osDisk.managedDisk.storageAccountType'] | distinct tostring(alias)"
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell-interactive
-Search-AzGraph -Query "Resources | where type=~'Microsoft.Compute/virtualMachines' | extend alias = aliases['Microsoft.Compute/virtualMachines/storageProfile.osDisk.managedDisk.storageAccountType'] | distinct tostring(alias)"
-```
-
-# [Portal](#tab/azure-portal)
-
-:::image type="icon" source="../media/resource-graph-small.png"::: Try this query in Azure Resource Graph Explorer:
--- Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%3D~%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20extend%20alias%20%3D%20aliases%5B%27Microsoft.Compute%2FvirtualMachines%2FstorageProfile.osDisk.managedDisk.storageAccountType%27%5D%0D%0A%7C%20distinct%20tostring%28alias%29" target="_blank">portal.azure.com <span class="docon docon-navigate-external x-hidden-focus"></span></a>-- Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%3D~%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20extend%20alias%20%3D%20aliases%5B%27Microsoft.Compute%2FvirtualMachines%2FstorageProfile.osDisk.managedDisk.storageAccountType%27%5D%0D%0A%7C%20distinct%20tostring%28alias%29" target="_blank">portal.azure.us <span class="docon docon-navigate-external x-hidden-focus"></span></a>-- Azure China 21Vianet portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%3D~%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20extend%20alias%20%3D%20aliases%5B%27Microsoft.Compute%2FvirtualMachines%2FstorageProfile.osDisk.managedDisk.storageAccountType%27%5D%0D%0A%7C%20distinct%20tostring%28alias%29" target="_blank">portal.azure.cn <span class="docon docon-navigate-external x-hidden-focus"></span></a>-- ## <a name="unassociated-nsgs"></a>Show unassociated network security groups This query returns Network Security Groups (NSGs) that aren't associated to a network interface or
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hbase/apache-hbase-advisor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hbase/apache-hbase-advisor.md new file mode 100644
@@ -0,0 +1,92 @@
+---
+title: Optimize for cluster advisor recommendations
+titleSuffix: Azure HDInsight
+description: Optimize Apache HBase for cluster advisor recommendations in Azure HDInsight.
+author: ramkrish86
+ms.author: ramvasu
+ms.reviewer: jasonh
+ms.service: hdinsight
+ms.topic: conceptual
+ms.date: 01/03/2021
+#Customer intent: The azure advisories help to tune the cluster/query. This doc gives a much deeper understanding of the various advisories including the recommended configuration tunings.
+---
+# Apache HBase advisories in Azure HDInsight
+
+This article describes several advisories that help you optimize Apache HBase performance in Azure HDInsight.
+
+## Optimize HBase to read most recently written data
+
+When you use Apache HBase in Azure HDInsight, you can optimize the configuration of HBase for the scenario where your application reads the most recently written data. For high performance, it's optimal for HBase reads to be served from the memstore instead of from remote storage.
+
+The query advisory indicates that, for a given column family in a table, more than 75% of reads are being served from the memstore. This indicator suggests that even when a flush happens on the memstore, the most recent file needs to be accessed, and that file needs to be in the cache. Data is first written to the memstore, and the system accesses recent data there. However, the internal HBase flusher threads can detect that a given region has reached the default size of 128 MB and trigger a flush. This scenario happens even to the most recent data, written while the memstore was around 128 MB in size. Therefore, a later read of those recent records may require a file read rather than being served from the memstore. Hence, it's best to ensure that even recently flushed data can reside in the cache.
+
+To keep recent data in the cache, consider the following configuration settings (a consolidated sketch follows this list):
+
+1. Set `hbase.rs.cacheblocksonwrite` to `true`. The default in HDInsight HBase is `true`, so check that it hasn't been reset to `false`.
+
+2. Increase the `hbase.hstore.compactionThreshold` value to prevent compaction from kicking in. By default, this value is `3`. You can increase it to a higher value, like `10`.
+
+3. If you follow step 2 and set `compactionThreshold`, then change `hbase.hstore.compaction.max` to a higher value, for example `100`, and also increase `hbase.hstore.blockingStoreFiles` to a higher value, for example `300`.
+
+4. If you're sure that you need to read only the recent data, set the `hbase.rs.cachecompactedblocksonwrite` configuration to **ON**. This configuration tells the system to keep the data in the cache even when compaction happens. The configurations can also be set at the family level.
+
+ In the HBase Shell, run the following command:
+
+ ```
+ alter '<TableName>', {NAME => '<FamilyName>', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '300'}}
+ ```
+
+5. Block cache can be turned off for a given family in a table. Ensure that it's turned **ON** for families that serve the most recent data reads. By default, block cache is turned on for all families in a table. If you have disabled the block cache for a family and need to turn it on, use the `alter` command from the HBase shell.
+
+ These configurations help ensure that the data stays in the cache and that the recent data doesn't undergo compaction. If a TTL is possible in your scenario, consider using date-tiered compaction. For more information, see [Apache HBase Reference Guide: Date Tiered Compaction](https://hbase.apache.org/book.html#ops.date.tiered).
+
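+A consolidated, hedged sketch of steps 1 through 4 as `hbase-site.xml` properties (the values are the example values suggested above, not universal recommendations):
+
+```
+<property>
+  <name>hbase.rs.cacheblocksonwrite</name>
+  <value>true</value>
+</property>
+<property>
+  <name>hbase.hstore.compactionThreshold</name>
+  <value>10</value>
+</property>
+<property>
+  <name>hbase.hstore.compaction.max</name>
+  <value>100</value>
+</property>
+<property>
+  <name>hbase.hstore.blockingStoreFiles</name>
+  <value>300</value>
+</property>
+<property>
+  <name>hbase.rs.cachecompactedblocksonwrite</name>
+  <value>true</value>
+</property>
+```
+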
+## Optimize the flush queue
+
+The flush queue advisory indicates that HBase flushes may need tuning. The configured number of flush handlers might not be high enough to keep up with the write load.
+
+In the region server UI, check whether the flush queue grows beyond 100. This threshold indicates that flushes are slow, and you may have to tune the `hbase.hstore.flusher.count` configuration. By default, the value is 2. Ensure that the number of flusher threads doesn't increase beyond 6.
+
+Additionally, see if you have a recommendation for region count tuning. If so, first try the region tuning to see whether it helps achieve faster flushes. Otherwise, tuning the flusher threads might help, as sketched below.
+
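+A hedged sketch of the flush handler setting as an `hbase-site.xml` property (the value `6` is the upper limit suggested above; the default is `2`):
+
+```
+<property>
+  <name>hbase.hstore.flusher.count</name>
+  <value>6</value>
+</property>
+```
+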
+## Region count tuning
+
+The region count tuning advisory indicates that HBase has blocked updates, and that the region count may be more than the heap size can optimally support. You can tune the heap size, the memstore size, and the region count.
+
+As an example scenario:
+
+- Assume the heap size for the region server is 10 GB. By default, `hbase.hregion.memstore.flush.size` is `128M`, and the default value for `hbase.regionserver.global.memstore.size` is `0.4`. This means that out of the 10 GB, 4 GB is allocated for the memstore (globally).
+
+- Assume there's an even distribution of the write load on all the regions, and that every region grows up to only 128 MB. Then the maximum number of regions in this setup is `32`. If a given region server is configured to have 32 regions, the system is better able to avoid blocking updates.
+
+- With these settings in place, suppose the number of regions is instead 100. The 4-GB global memstore is now split across 100 regions, so effectively each region gets only 40 MB for its memstore. When the writes are uniform, the system does frequent flushes of a smaller size, on the order of less than 40 MB. Having more flusher threads (`hbase.hstore.flusher.count`) might increase the flush speed.
+
+The advisory means that it's good to reconsider the number of regions per server, the heap size, and the global memstore size configuration, along with tuning the flush threads, so that blocked updates can be avoided.
+
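+A quick sketch of the arithmetic in the example above:
+
+```
+max regions per server = (heap size * hbase.regionserver.global.memstore.size)
+                         / hbase.hregion.memstore.flush.size
+                       = (10 GB * 0.4) / 128 MB
+                       = 32
+```
+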
+## Compaction queue tuning
+
+If the HBase compaction queue grows to more than 2,000 and this happens periodically, you can increase the compaction threads to a larger value.
+
+When there's an excessive number of files for compaction, heap usage can increase, related to how the files interact with the Azure file system. So it's better to complete the compaction as quickly as possible. Sometimes, in older clusters, the compaction configurations related to throttling might lead to a slower compaction rate.
+
+Check the configurations `hbase.hstore.compaction.throughput.lower.bound` and `hbase.hstore.compaction.throughput.higher.bound`. If they're already set to 50M and 100M, leave them as they are. However, if you configured those settings to lower values (which was the case with older clusters), change the limits to 50M and 100M, respectively.
+
+The compaction thread configurations are `hbase.regionserver.thread.compaction.small` and `hbase.regionserver.thread.compaction.large` (the default for each is 1).
+Cap the maximum value for these configurations at less than 3, as sketched below.
+
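+A hedged `hbase-site.xml` sketch of the limits described above (the throughput bounds are expressed in bytes per second; the thread counts stay below the suggested cap of 3):
+
+```
+<property>
+  <name>hbase.hstore.compaction.throughput.lower.bound</name>
+  <value>52428800</value>   <!-- 50M -->
+</property>
+<property>
+  <name>hbase.hstore.compaction.throughput.higher.bound</name>
+  <value>104857600</value>  <!-- 100M -->
+</property>
+<property>
+  <name>hbase.regionserver.thread.compaction.small</name>
+  <value>2</value>
+</property>
+<property>
+  <name>hbase.regionserver.thread.compaction.large</name>
+  <value>2</value>
+</property>
+```
+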
+## Full table scan
+
+The full table scan advisory indicates that over 75% of the scans issued are full table/region scans. You can revisit the way your code calls the scans to improve query performance. Consider the following practices:
+
+* Set the proper start and stop row for each scan.
+
+* Use the **MultiRowRangeFilter** API so that you can query different ranges in one scan call. For more information, see [MultiRowRangeFilter API documentation](https://hbase.apache.org/2.1/apidocs/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html).
+
+* In cases where you need a full table or region scan, check whether it's possible to avoid cache usage for those queries, so that other queries that use the cache don't have their hot blocks evicted. To ensure such scans don't use the cache, use the **Scan** API with the **setCacheBlocks(false)** option in your code:
+
+ ```
+ scan.setCacheBlocks(false)
+ ```
+
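+A hedged Java sketch of these practices (the row keys are hypothetical; assumes the standard HBase 2.x client API):
+
+```java
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+
+// Bound the scan with explicit start and stop rows, and skip the block
+// cache so a wide scan doesn't evict blocks that are hot.
+Scan scan = new Scan()
+    .withStartRow(Bytes.toBytes("row-0001"))
+    .withStopRow(Bytes.toBytes("row-0999"))
+    .setCacheBlocks(false);
+```
+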
+## Next steps
+
+[Optimize Apache HBase using Ambari](../optimize-hbase-ambari.md)
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-faq.md
@@ -193,7 +193,8 @@ In scenarios in which you must control the schedule, you can use the following s
1. Disable automatic execution using the following command:
- `/usr/local/vbin/azsecd config -s clamav -d Disabled`
+ `sudo /usr/local/bin/azsecd config -s clamav -d Disabled`
+ `sudo service azsecd restart`
1. Add a Cron job that runs the following command as root:
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes.md
@@ -35,7 +35,7 @@ HDInsight now uses Azure virtual machines to provision the cluster. Starting fro
## Deprecation ### Deprecation of HDInsight 3.6 ML Services cluster
-HDInsight 3.6 ML Services cluster type will be end of support by December 31 2020. Customers won't create new 3.6 ML Services clusters after December 31 2020. Existing clusters will run as is without the support from Microsoft. Check the support expiration for HDInsight versions and cluster types [here](./hdinsight-component-versioning.md#available-versions).
+The HDInsight 3.6 ML Services cluster type will reach end of support on December 31 2020. Customers won't be able to create new 3.6 ML Services clusters after December 31 2020. Existing clusters will run as is without support from Microsoft. Check the support expiration for HDInsight versions and cluster types [here](./hdinsight-component-versioning.md#available-versions).
### Disabled VM sizes Starting from November 16 2020, HDInsight will block new customers creating clusters using standard_A8, standard_A9, standard_A10 and standard_A11 VM sizes. Existing customers who have used these VM sizes in the past three months won't be affected. Starting from January 9 2021, HDInsight will block all customers creating clusters using standard_A8, standard_A9, standard_A10 and standard_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
@@ -47,9 +47,15 @@ HDInsight added network security groups (NSGs) and user-defined routes (UDRs) ch
## Upcoming changes The following changes will happen in upcoming releases.
+### Default cluster VM size will be changed to Ev3 family
+Starting from next release (around end of January), default cluster VM sizes will be changed from D family to Ev3 family. This change applies to head nodes and worker nodes. To avoid this change, specify the VM sizes that you want to use in the ARM template.
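+
+A hedged ARM template fragment (the role layout follows the `Microsoft.HDInsight/clusters` schema; the sizes shown are placeholders) that pins VM sizes so the default change doesn't affect your deployments:
+
+```json
+"computeProfile": {
+  "roles": [
+    {
+      "name": "headnode",
+      "targetInstanceCount": 2,
+      "hardwareProfile": { "vmSize": "Standard_D12_v2" }
+    },
+    {
+      "name": "workernode",
+      "targetInstanceCount": 4,
+      "hardwareProfile": { "vmSize": "Standard_D13_v2" }
+    }
+  ]
+}
+```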
+ ### Default cluster version will be changed to 4.0 Starting February 2021, the default version of HDInsight clusters will change from 3.6 to 4.0. For more information about available versions, see [available versions](./hdinsight-component-versioning.md#available-versions). Learn more about what's new in [HDInsight 4.0](./hdinsight-version-release.md)
+### OS version upgrade
+HDInsight is upgrading the OS version from Ubuntu 16.04 to 18.04. The upgrade will complete before April 2021.
### HDInsight 3.6 end of support on June 30 2021 HDInsight 3.6 will reach end of support. Starting from June 30 2021, customers can't create new HDInsight 3.6 clusters. Existing clusters will run as is without support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-create-and-manage-applications-csp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-and-manage-applications-csp.md
@@ -15,7 +15,7 @@ manager: philmea
The Microsoft Cloud Solution Provider (CSP) program is a Microsoft Reseller program. Its intent is to provide our channel partners with a one-stop program to resell all Microsoft Commercial Online Services. Learn more about the [Cloud Solution Provider program](https://partner.microsoft.com/cloud-solution-provider).
-[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+[!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
As a CSP, you can create and manage Microsoft Azure IoT Central applications on behalf of your customers through the [Microsoft Partner Center](https://partnercenter.microsoft.com/partner/home). When Azure IoT Central applications are created on behalf of customers by CSPs, just like with other CSP managed Azure services, CSPs manage billing for customers. A charge for Azure IoT Central will appear in your total bill in the Microsoft Partner Center.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-manage-iot-central-from-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-cli.md
@@ -25,7 +25,7 @@ Instead of creating and managing IoT Central applications on the [Azure IoT Cent
## Create an application
-[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+[!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
Use the [az iot central app create](/cli/azure/iot/central/app?view=azure-cli-latest#az-iot-central-app-create&preserve-view=true) command to create an IoT Central application in your Azure subscription. For example:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-manage-iot-central-from-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-portal.md
@@ -18,7 +18,8 @@ Instead of creating and managing IoT Central applications on the [Azure IoT Cent
## Create IoT Central applications
-[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+[!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
+ To create an application, navigate to the [Azure portal](https://ms.portal.azure.com) and select **Create a resource**.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-manage-iot-central-from-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-powershell.md
@@ -23,7 +23,7 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+[!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
If you prefer to run Azure PowerShell on your local machine, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps). When you run Azure PowerShell locally, use the **Connect-AzAccount** cmdlet to sign in to Azure before you try the cmdlets in this article.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-manage-iot-central-programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-programmatically.md
@@ -31,7 +31,7 @@ The following table lists the SDK repositories and package installation commands
The [Azure IoT Central ARM SDK samples](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/) repository has code samples for multiple programming languages that show you how to create, update, list, and delete Azure IoT Central applications.
-[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+[!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
## Next steps
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/quick-deploy-iot-central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-deploy-iot-central.md
@@ -15,7 +15,7 @@ manager: corywink
This quickstart shows you how to create an Azure IoT Central application.
-[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+[!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
## Create an application
iot-central https://docs.microsoft.com/en-us/azure/iot-central/energy/tutorial-solar-panel-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/tutorial-solar-panel-app.md
@@ -1,6 +1,6 @@
---
-title: 'Tutorial: Create a solar panel monitoring app with IoT Central'
-description: 'Tutorial: Learn how to create a solar panel application using Azure IoT Central application templates.'
+title: 'Tutorial: Create a solar panel monitoring app with Azure IoT Central'
+description: 'Tutorial: Learn how to create a solar panel application by using Azure IoT Central application templates.'
author: op-ravi ms.author: omravi ms.date: 12/11/2020
@@ -10,99 +10,95 @@ services: iot-central
manager: abjork ---
-# Tutorial: Create and walk-through the solar panel monitoring app template
+# Tutorial: Create and explore the solar panel monitoring app template
-This tutorial guides you through the process of creating the solar panel monitoring application, which includes a sample device model with simulated data. In this tutorial, you'll learn:
+This tutorial guides you through the process of creating a solar panel monitoring application, which includes a sample device model with simulated data. In this tutorial, you'll learn how to:
> [!div class="checklist"]
-> * Create the solar panel app for free
-> * Application walk-through
+> * Create a solar panel app for free
+> * Walk through the application
> * Clean up resources
-If you don't have a subscription, [create a free trial account](https://azure.microsoft.com/free)
+If you don't have a subscription, [create a free trial account](https://azure.microsoft.com/free).
## Prerequisites
-* None
-* Azure subscription is recommended, but not required to try out
+
+There are no prerequisites for completing this tutorial. A subscription to Azure is recommended, but not required.
## Create a solar panel monitoring app You can create this application in three simple steps:
-1. Open [Azure IoT Central home page](https://apps.azureiotcentral.com) and click **Build** to create a new application.
+1. Go to [Azure IoT Central](https://apps.azureiotcentral.com). To create a new application, select **Build**.
-1. Select **Energy** tab and click **Create app** under **Solar panel monitoring** application tile.
+1. Select the **Energy** tab. Under **Solar panel monitoring**, select **Create app**.
> [!div class="mx-imgBorder"]
- > ![Build App](media/tutorial-iot-central-solar-panel/solar-panel-build.png)
+ > ![Screenshot of Azure IoT Central Build options.](media/tutorial-iot-central-solar-panel/solar-panel-build.png)
-1. **Create app** will open **New application** form. Fill in the requested details as shown in the figure below:
- * **Application name**: Pick a name for your IoT Central application.
- * **URL**: Pick an IoT Central URL, the platform will verify its uniqueness.
- * **7-day free trial**: If you already have an Azure subscription, default setting is recommended. If you don't have an Azure subscription, start with free trial.
- * **Billing Info**: The application itself is free. The Directory, Azure subscription, and Region details are required to provision the resources for your app.
- * Click **Create** button at the bottom of the page and your app will be created in a minute or so.
- ![New application form](media/tutorial-iot-central-solar-panel/solar-panel-create-app.png)
+1. In the **New application** dialog box, fill in the requested details, and then select **Create**:
+ * **Application name**: Pick a name for your Azure IoT Central application.
+ * **URL**: Pick an Azure IoT Central URL. The platform verifies its uniqueness.
+ * **Pricing plan**: If you already have an Azure subscription, the default setting is recommended. If you don't have an Azure subscription, start with the free trial.
+ * **Billing info**: The application itself is free. The directory, Azure subscription, and region details are required to provision the resources for your app.
+ ![Screenshot of New application.](media/tutorial-iot-central-solar-panel/solar-panel-create-app.png)
- ![New application form billing info](media/tutorial-iot-central-solar-panel/solar-panel-create-app-billinginfo.png)
+ ![Screenshot of Billing info.](media/tutorial-iot-central-solar-panel/solar-panel-create-app-billinginfo.png)
### Verify the application and simulated data
-The newly created solar panel app is your app and you can modify it anytime. Let's ensure the app is deployed and working as expected before you modify it.
+You can modify your new solar panel app at any time. For now, ensure that the app is deployed and working as expected before you modify it.
-To verify the app creation and data simulation, go to the **Dashboard**. If you can see the tiles with some data, then your app deployment was successful. The data simulation may take a few minutes to generate the data, so give it 1-2 minutes.
+To verify the app creation and data simulation, go to the **Dashboard**. If you can see the tiles with some data, then your app deployment was successful. The data simulation can take a few minutes to generate the data.
## Application walk-through
-After you successfully deploy the app template, it comes with sample smart meter device, device model, and dashboard.
+After you successfully deploy the app template, you'll want to explore the app a bit more. Notice that it comes with a sample solar panel device, a device model, and a dashboard.
-Adatum is a fictitious energy company, who monitors and manages solar panels. On the solar panel monitoring dashboard, you see solar panel properties, data, and sample commands. It enables operators and support teams to proactively perform the following activities before it turns into support incidents:
-* Review the latest panel info and its installed location on the map
-* Proactively check the panel status and connection status
-* Review the energy generation and temperature trends to catch any anomalous patterns
-* Track the total energy generation for planning and billing purposes
-* Command and control operations such as activate panel and update firmware version. In the template, the command buttons show the possible functionalities and don't send real commands.
+Adatum is a fictitious energy company that monitors and manages solar panels. On the solar panel monitoring dashboard, you see solar panel properties, data, and sample commands. This dashboard allows you or your support team to perform the following activities proactively, before any problems require additional support:
+* Review the latest panel info and its installed location on the map.
+* Check the panel status and connection status.
+* Review the energy generation and temperature trends to catch any anomalous patterns.
+* Track the total energy generation for planning and billing purposes.
+* Activate a panel and update the firmware version, if necessary. In the template, the command buttons show the possible functionalities, and don't send real commands.
> [!div class="mx-imgBorder"]
-> ![Solar panel monitoring dashboard](media/tutorial-iot-central-solar-panel/solar-panel-dashboard.png)
+> ![Screenshot of Solar Panel Monitoring Template Dashboard.](media/tutorial-iot-central-solar-panel/solar-panel-dashboard.png)
### Devices
-The app comes with a sample solar panel device. You can see the device details by clicking on the **Devices** tab.
+The app comes with a sample solar panel device. To see device details, select **Devices**.
> [!div class="mx-imgBorder"]
-> ![Solar panel devices](media/tutorial-iot-central-solar-panel/solar-panel-device.png)
-
+> ![Screenshot of Solar Panel Monitoring Template Devices.](media/tutorial-iot-central-solar-panel/solar-panel-device.png)
-Click on the sample device **SP0123456789** link to see the device details. On the **Update Properties** page, you can update the writable properties of the device and visualize the updated values on the dashboard.
+Select the sample device, **SP0123456789**. From the **Update Properties** tab, you can update the writable properties of the device and see a visual of the updated values on the dashboard.
> [!div class="mx-imgBorder"]
-> ![Solar panel properties](media/tutorial-iot-central-solar-panel/solar-panel-device-properties.png)
+> ![Screenshot of Solar Panel Monitoring Template Update Properties tab.](media/tutorial-iot-central-solar-panel/solar-panel-device-properties.png)
-### Device Template
-Click on the **Device templates** tab to see the solar panel device model. The model has pre-define interface for Data, Property, Commands, and Views.
+### Device template
+To see the solar panel device model, select the **Device templates** tab. The model has predefined interfaces for data, properties, commands, and views.
> [!div class="mx-imgBorder"]
-> ![Solar panel devices template](media/tutorial-iot-central-solar-panel/solar-panel-device-templates.png)
+> ![Screenshot of Solar Panel Monitoring Template Device templates.](media/tutorial-iot-central-solar-panel/solar-panel-device-templates.png)
## Clean up resources
-If you decide to not continue using this application, delete your application with the following these steps:
+If you decide not to continue using this application, delete your application with the following steps:
-1. From the left pane, open Administration tab
-1. Select Application settings and click Delete button at the bottom of the page.
+1. From the left pane, select **Administration**.
+1. Select **Application settings** > **Delete**.
> [!div class="mx-imgBorder"]
- > ![Delete application](media/tutorial-iot-central-solar-panel/solar-panel-delete-app.png)
+ > ![Screenshot of Solar Panel Monitoring Template Administration.](media/tutorial-iot-central-solar-panel/solar-panel-delete-app.png)
## Next steps
-* Learn about solar panel app architecture refer to
+
> [!div class="nextstepaction"]
-> [the concept article](./concept-iot-central-solar-panel-app.md)
-* Create solar panel application templates for free:
-[solar panel app](https://apps.azureiotcentral.com/build/new/solar-panel-monitoring)
-* Learn more about IoT Central, see
-[IoT Central overview](../index.yml)
+> [Azure IoT Central - solar panel app architecture](./concept-iot-central-solar-panel-app.md)
+* [Create solar panel application templates for free](https://apps.azureiotcentral.com/build/new/solar-panel-monitoring)
+* [Azure IoT Central overview](../index.yml)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/government/tutorial-connected-waste-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-connected-waste-management.md
@@ -1,6 +1,6 @@
--- title: 'Tutorial: Create a connected waste management app with Azure IoT Central'
-description: 'Tutorial: Learn to build Create a connected waste management application using Azure IoT Central application templates.'
+description: Learn to build a connected waste management application by using Azure IoT Central application templates.
author: miriambrus ms.author: miriamb ms.date: 12/11/2020
@@ -8,256 +8,248 @@ ms.topic: tutorial
ms.service: iot-central services: iot-central ---
-# Tutorial: Create a connected waste management application in IoT Central
+# Tutorial: Create a connected waste management app
-This tutorial guides you to create an Azure IoT Central connected waste management application from the IoT Central **Connected waste management** application template.
+This tutorial shows you how to use Azure IoT Central to create a connected waste management application.
-In this tutorial, you will learn how to:
+Specifically, you learn how to:
-* Use the Azure IoT Central **Connected waste management** template to create your connected waste management application
-* Explore and customize operator dashboard
-* Explore connected waste bin device template
-* Explore simulated devices
-* Explore and configure rules
-* Configure jobs
-* Customize your application branding using white labeling
+* Use the Azure IoT Central *Connected waste management* template to create your app.
+* Explore and customize the operator dashboard.
+* Explore the connected waste bin device template.
+* Explore simulated devices.
+* Explore and configure rules.
+* Configure jobs.
+* Customize your application branding.
## Prerequisites
-To complete this tutorial, you need:
-* An Azure subscription is recommended. You can optionally use a free 7-day trial. If you don't have an Azure subscription, you can create one on the [Azure sign-up page](https://aka.ms/createazuresubscription).
+An Azure subscription is recommended. Alternatively, you can use a free, 7-day trial. If you don't have an Azure subscription, you can create one on the [Azure sign-up page](https://aka.ms/createazuresubscription).
-## Create Connected Waste Management app in IoT Central
+## Create your app in Azure IoT Central
-In this section, you use the Azure IoT Central **Connected waste management template** to create your connected waste management application in IoT Central.
+In this section, you use the Connected waste management template to create your app in Azure IoT Central. Here's how:
-To create a new Azure IoT Central connected waste management application:
+1. Go to [Azure IoT Central](https://aka.ms/iotcentral).
-1. Navigate to the [Azure IoT Central Home page](https://aka.ms/iotcentral) website.
+ If you have an Azure subscription, sign in with the credentials you use to access it. Otherwise, sign in by using a Microsoft account:
- If you have an Azure subscription, sign in with the credentials you use to access it, otherwise sign in using a Microsoft account:
+ ![Screenshot of Microsoft Sign in.](./media/tutorial-connectedwastemanagement/sign-in.png)
- ![Enter your organization account](./media/tutorial-connectedwastemanagement/sign-in.png)
+1. From the left pane, select **Build**. Then select the **Government** tab. The government page displays several government application templates.
-1. Click on **Build** from the left pane and select the **Government** tab. The government page displays several government application templates.
+ ![Screenshot of Azure IoT Central Build page.](./media/tutorial-connectedwastemanagement/iotcentral-government-tab-overview.png)
- ![Build Government App templates](./media/tutorial-connectedwastemanagement/iotcentral-government-tab-overview.png)
+1. Select the **Connected waste management** application template.
+This template includes a sample connected waste bin device template, a simulated device, an operator dashboard, and preconfigured monitoring rules.
-1. Select the **Connected Waste Management** application template.
-This template includes sample connected waste bin device template, simulated device, operator dashboard, and pre-configured monitoring rules.
+1. Select **Create app**, which opens the **New application** dialog box. Fill in the information for the following fields:
+ * **Application name**. By default, the application uses **Connected waste management**, followed by a unique ID string that Azure IoT Central generates. Optionally, you can choose a friendly application name. You can change the application name later, too.
+ * **URL**. Optionally, you can choose your desired URL. You can change the URL later.
+ * **Pricing plan**. If you have an Azure subscription, enter your directory, Azure subscription, and region in the appropriate fields of the **Billing info** dialog box. If you don't have a subscription, select **Free** to enable a 7-day trial subscription, and complete the required contact information.
-1. Click **Create app**, which will open **New application** creation form with the following fields:
- * **Application name**. By default the application uses *Connected waste management* followed by a unique ID string that IoT Central generates. Optionally, choose a friendly application name. You can change the application name later too.
- * **URL** ΓÇô Optionally, you can choose to your desired URL. You can change the URL later too.
- * If you have an Azure subscription, enter your *Directory, Azure subscription, and Region*. If you don't have a subscription, you can enable **7-day free trial** and complete the required contact information.
+ For more information about directories and subscriptions, see [Quickstart - Create an Azure IoT Central application](../core/quick-deploy-iot-central.md).
- For more information about directories and subscriptions, see the [create an application quickstart](../core/quick-deploy-iot-central.md).
+1. At the bottom of the page, select **Create**.
-1. Click **Create** button at the bottom of the page.
-
- ![Azure IoT Central Create Connected Waste Application page](./media/tutorial-connectedwastemanagement/new-application-connectedwastemanagement.png)
+ ![Screenshot of Azure IoT Central Create New application dialog box.](./media/tutorial-connectedwastemanagement/new-application-connectedwastemanagement.png)
- ![Azure IoT Central Create Connected Billing info](./media/tutorial-connectedwastemanagement/new-application-connectedwastemanagement-billinginfo.png)
+ ![Screenshot of Azure IoT Central Billing info dialog box.](./media/tutorial-connectedwastemanagement/new-application-connectedwastemanagement-billinginfo.png)
-1. You now have created a connected waste management app using the Azure IoT Central **Connected waste management template**.
+Your newly created application comes preconfigured with:
+* Sample operator dashboards.
+* Sample predefined connected waste bin device templates.
+* Simulated connected waste bin devices.
+* Rules and jobs.
+* Sample branding.
-Congratulations! Your newly created application comes with pre-configured:
-* Sample operator dashboards
-* Sample pre-defined connected waste bin device templates
-* Simulated connected waste bin devices
-* Pre-configured rules and jobs
-* Sample Branding using white labeling
+It's your application, and you can modify it anytime. Let's now explore the application and make some customizations.
-It is your application and you can modify it anytime. Let's now explore the application and make some customizations.
+## Explore and customize the operator dashboard
-## Explore and customize operator dashboard
-After creating the application you land in the **Wide Waste connected waste management dashboard**.
+Take a look at the **Wide World waste management dashboard**, which you see after creating your app.
- ![Connected Waste Management dashboard](./media/tutorial-connectedwastemanagement/connectedwastemanagement-dashboard1.png)
+ ![Screenshot of Wide World waste management dashboard.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-dashboard1.png)
-As a builder, you can create and customize views on the dashboard for operators. Before you try to customize, let's explore the dashboard.
+As a builder, you can create and customize views on the dashboard for operators. First, let's explore the dashboard.
->>[!NOTE]
->> All data displayed in the dashboard is based on simulated device data, which will be explored in the next section.
+>[!NOTE]
+>All data shown in the dashboard is based on simulated device data, which you'll see more of in the next section.
-The dashboard consists of different kinds of tiles:
+The dashboard consists of different tiles:
-* ***Wide World Waste utility image tile***: the first tile in the dashboard is an image tile of a fictitious Waste utility "Wide World Waste". You can customize the tile and put your own image or remove it.
+* **Wide World Waste utility image tile**: The first tile in the dashboard is an image tile of a fictitious waste utility, "Wide World Waste." You can customize the tile and put in your own image, or you can remove it.
-* ***Waste bin image tile***: you can use image and content tiles to create a visual representation of the device that is being monitored along with a descriptive text.
+* **Waste bin image tile**: You can use image and content tiles to create a visual representation of the device that's being monitored, along with a description.
-* ***Fill level KPI tile***: the tile displays a value reported by a *fill level* sensor in a waste bin. *Fill level* and other sensors like *odor meter* or *weight* in a waste bin can be remotely monitored. An operator can take action, like dispatching trash collection truck.
+* **Fill level KPI tile**: This tile displays a value reported by a *fill level* sensor in a waste bin. Fill level and other sensors, like *odor meter* or *weight* in a waste bin, can be remotely monitored. An operator can take action, like dispatching a trash collection truck.
-* ***Waste monitoring area map***: the map is using Azure Maps, which you can configure directly in Azure IoT Central. The map tile is displaying device location. Try to hover over the map and try the controls over the map, like zoom-in, zoom-out or expand.
+* **Waste monitoring area map**: This tile uses Azure Maps, which you can configure directly in Azure IoT Central. The map tile displays device location. Hover over the map and try the controls, like zoom in, zoom out, or expand.
- ![Connected Waste Management dashboard map](./media/tutorial-connectedwastemanagement/connectedwastemanagement-dashboard-map.png)
+ ![Screenshot of Connected Waste Management Template Dashboard map.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-dashboard-map.png)
-* ***Fill, odor, weight level bar chart**: you can visualize one or multiple device telemetry data in a bar chart. You can also expand the bar chart.
+* **Fill, odor, weight level bar chart**: You can visualize one or multiple kinds of device telemetry data in a bar chart. You can also expand the bar chart.
- ![Connected Waste Management dashboard bar chart](./media/tutorial-connectedwastemanagement/connectedwastemanagement-dashboard-barchart.png)
+ ![Screenshot of Connected Waste Management Template Dashboard bar chart.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-dashboard-barchart.png)
-* **Field Services content tile**: the dashboard includes link to how to integrate with Dynamics 365 Field Services from your Azure IoT Central application. As an example, you can use Field Services to create tickets for dispatching trash collection services.
+* **Field Services**: The dashboard includes a link to information about integrating with Dynamics 365 Field Services from your Azure IoT Central application. For example, you can use Field Services to create tickets for dispatching trash collection services.
-### Customize dashboard
+### Customize the dashboard
-As a builder, you can customize views in dashboard for operators. You can try:
-1. Click on **Edit** to customize the **Wide World connected waste management dashboard**. You can customize the dashboard by clicking on the **Edit** menu. Once the dashboard is in **edit** mode, you can add new tiles, or you can configure
+You can customize the dashboard by selecting the **Edit** menu. Then you can add new tiles or configure existing ones. Here's what the dashboard looks like in editing mode:
- ![Edit Dashboard](./media/tutorial-connectedwastemanagement/edit-dashboard.png)
+![Screenshot of Connected Waste Management Template Dashboard in editing mode.](./media/tutorial-connectedwastemanagement/edit-dashboard.png)
-1. You can also click on **+ New** to create new dashboard and configure from scratch. You can have multiple dashboards and you can navigate between your dashboards from the dashboard menu.
+You can also select **+ New** to create a new dashboard and configure from scratch. You can have multiple dashboards, and you can switch between your dashboards from the dashboard menu.
-## Explore connected waste bin device template
+## Explore the device template
-A device template in Azure IoT Central defines the capability of a device, which can be telemetry, properties, or command. As a builder, you can define device templates that represent the capability of the devices you will connect.
+A device template in Azure IoT Central defines the capabilities of a device, which can include telemetry, properties, or commands. As a builder, you can define device templates that represent the capabilities of the devices you will connect.
-The **Connected waste management** application comes with a sample connected waste bin device template.
+The Connected waste management application comes with a sample template for a connected waste bin device.
To view the device template:
-1. Click on **Device templates** from the left pane of your application in IoT Central.
+1. In Azure IoT Central, from the left pane of your app, select **Device templates**.
- ![Screenshot showing the list of device templates in the application](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devicetemplate.png)
+ ![Screenshot showing the list of device templates in the application.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devicetemplate.png)
-1. In the Device templates list, you will see **Connected Waste Bin**. Open by clicking on the name.
+1. In the **Device templates** list, select **Connected Waste Bin**.
-1. Familiarize with the device template capabilities. You can see it defines sensors like *Fill level*, *Odor meter*, *weight*, *location*, and others.
+1. Examine the device template capabilities. You can see that it defines sensors like **Fill level**, **Odor meter**, **Weight**, and **Location**.
- ![Screenshot showing the details of the connected waste bin device template](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devicetemplate-connectedbin.png)
+ ![Screenshot showing the details of the Connected Waste Bin device template.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devicetemplate-connectedbin.png)
-### Customizing the device template
+### Customize the device template
Try to customize the following:
-1. Navigate to **Customize** from the device template menu
-1. Find the `Odor meter` telemetry type
-1. Update the **Display name** of `Odor meter` to `Odor level`
-1. You can also try update unit of measurement, or set *Min value* and *Max value*
-1. **Save** any changes
+1. From the device template menu, select **Customize**.
+1. Find the **Odor meter** telemetry type.
+1. Update the **Display name** of **Odor meter** to **Odor level**.
+1. Try to update the unit of measurement, or set **Min value** and **Max value**.
+1. Select **Save**.
### Add a cloud property
-1. Navigate to **Cloud property** from the device template menu
-1. Add a new cloud property by clicking **+ Add Cloud Property**. In IoT Central, you can add a property that is relevant to the device but not expected to be sent by a device. As an example, a cloud property could be an alerting threshold specific to installation area, asset information, or maintenance information, and other information.
-1. **Save** any changes
+Here's how:
+1. From the device template menu, select **Cloud property**.
+1. Select **+ Add Cloud Property**. In Azure IoT Central, you can add a property that is relevant to the device but isn't expected to be sent by a device. For example, a cloud property might be an alerting threshold specific to installation area, asset information, or maintenance information.
+1. Select **Save**.
### Views
-* The connected waste bin device template comes with pre-defined views. Explore the views and you can make updates. The views define how operators will see the device data but also inputting cloud properties.
+The connected waste bin device template comes with predefined views. Explore the views, and update them if you want to. The views define how operators see the device data and input cloud properties.
- ![Device Template Views](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devicetemplate-views.png)
+ ![Screenshot of Connected Waste Management Template Device templates views.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devicetemplate-views.png)
### Publish
-* If you made any changes make sure to **Publish** the device template.
+If you made any changes, remember to publish the device template.
### Create a new device template
-* Select **+ New** to create a new device template and follow the creation process.
-You will be able to create a custom device template from scratch or you can choose a device template from the Azure Device Catalog.
+To create a new device template, select **+ New**, and follow the steps. You can create a custom device template from scratch, or you can choose a device template from the Azure device catalog.
## Explore simulated devices
-In IoT Central, you can create simulated devices to test your device template and application.
+In Azure IoT Central, you can create simulated devices to test your device template and application.
-The **Connected waste management** application has two simulated devices mapped to the connected waste bin device template.
+The Connected waste management application has two simulated devices associated with the connected waste bin device template.
-### To view the devices:
+### View the devices
-1. Navigate to **Device** from IoT Central left pane.
+1. From the left pane of Azure IoT Central, select **Devices**.
- ![Devices](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devices.png)
+ ![Screenshot of Connected Waste Management Template devices.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devices.png)
-1. Select and click on Connected Waste Bin device.
+1. Select the **Connected Waste Bin** device.
- ![Device 1](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devices-bin1.png)
+ ![Screenshot of Connected Waste Management Template Device Properties.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-devices-bin1.png)
-1. Navigate to the **Cloud Properties** tab try updating the `Bin full alert threshold` value from `95` to `100`.
-* Explore the **Device Properties** tab and **Device Dashboard** tab.
+1. Go to the **Cloud Properties** tab. Update the value of **Bin full alert threshold** from **95** to **100**.
->> [!NOTE]
->> All the tabs have been configured from the **Device template Views**.
+Explore the **Device Properties** and **Device Dashboard** tabs.
+
+> [!NOTE]
+> All the tabs have been configured from the device template views.
### Add new devices
-* You can add new devices by clicking on **+ New** on the **Devices** tab.
+You can add new devices by selecting **+ New** on the **Devices** tab.
## Explore and configure rules
-In Azure IoT Central, you can create rules to automatically monitor on device telemetry, and trigger actions when one or more conditions are met. The actions may include sending email notifications, triggering a Power Automate action, or a webhook action to send data to other services.
+In Azure IoT Central, you can create rules to automatically monitor device telemetry, and to trigger actions when one or more conditions are met. The actions might include sending email notifications, triggering an action in Power Automate, or starting a webhook action to send data to other services.
-The **Connected waste management** application has four sample rules.
+The Connected waste management application has four sample rules.
-### To view rules:
-1. Navigate to **Rules** from IoT Central left pane
+### View rules
+1. From the left pane of Azure IoT Central, select **Rules**.
- ![Rules](./media/tutorial-connectedwastemanagement/connectedwastemanagement-rules.png)
+ ![Screenshot of Connected Waste Management Template Rules.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-rules.png)
-1. Select the **Bin full alert**
+1. Select **Bin full alert**.
- ![Bin full alert](./media/tutorial-connectedwastemanagement/connectedwastemanagement-binfullalert.png)
+ ![Screenshot of Bin full alert.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-binfullalert.png)
- 1. The `Bin full alert` checks when **Condition** `Fill level is greater than or equal to Bin full alert threshold`.
+ 1. The **Bin full alert** checks the following condition: **Fill level is greater than or equal to Bin full alert threshold**.
- The `Bin full alert threshold` is a *cloud property* defined in the `Connected waste bin` device template.
+ The **Bin full alert threshold** is a cloud property that's defined in the connected waste bin device template.
Now let's create an email action.

### Create an email action
-To configure an email action in the Rule's actions list:
+
+In the **Actions** list of the rule, you can configure an email action:
1. Select **+ Email**.
-1. Enter *High pH alert* as the friendly **Display name** for the action.
-1. Enter the email address associated with your IoT Central account in **To**.
-1. Optionally, enter a note to include in text of the email.
-1. Select **Done** to complete the action.
-1. Select **Save** to save and activate the new rule.
+1. For **Display name**, enter **High pH alert**.
+1. For **To**, enter the email address associated with your Azure IoT Central account.
+1. Optionally, enter a note to include in the text of the email.
+1. Select **Done** > **Save**.
-You should receive email when the configured **condition** is met.
+You'll now receive an email when the configured condition is met.
>[!NOTE]
->The application will send email each time a condition is met. **Disable** the rule to stop receiving email from the automated rule.
+>The application sends email each time a condition is met. Disable the rule to stop receiving email from the automated rule.
-To create a new rule:
-1. Select **+New** on the **Rules** from the left pane.
+To create a new rule, from the left pane of **Rules**, select **+New**.
-## Configure Jobs
+## Configure jobs
-In IoT Central, jobs allow you to trigger device or cloud properties updates on multiple devices. In addition to properties, you can also use jobs to trigger device commands on multiple devices. IoT Central will automate the workflow for you.
-
-1. Go to **Jobs** from the left pane.
-1. Click **+New** and configure one or more jobs.
+In Azure IoT Central, jobs allow you to trigger device or cloud property updates on multiple devices. You can also use jobs to trigger device commands on multiple devices. Azure IoT Central automates the workflow for you.
+1. From the left pane of Azure IoT Central, select **Jobs**.
+1. Select **+New**, and configure one or more jobs.
## Customize your application

As a builder, you can change several settings to customize the user experience in your application.
-### To change the application theme:
+### Change the application theme
-1. Go to **Administration > Customize your application**.
-1. Use the **Change** button to choose an image to upload as the **Application logo**.
-1. Use the **Change** button to choose a **Browser icon** image that will appear on browser tabs.
-1. You can also replace the default **Browser colors** by adding HTML hexadecimal color codes.
+Here's how:
+1. Go to **Administration** > **Customize your application**.
+1. Select **Change** to choose an image to upload for the **Application logo**.
+1. Select **Change** to choose an image to upload for the **Browser icon** (an image that will appear on browser tabs).
+1. You can also replace the default browser colors by adding HTML hexadecimal color codes. Use the **Header** and **Accent** fields for this purpose.
- ![Azure IoT Central customize your application](./media/tutorial-connectedwastemanagement/connectedwastemanagement-customize-your-application.png)
+ ![Screenshot of Connected Waste Management Template Customize your application.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-customize-your-application.png)
-1. You can also change application images by going to the **Administration > Application settings** and **Select image** button to choose an image to upload as the application image.
-1. Finally, you can also change the **Theme** by clicking **Settings** on the masthead of the application.
+1. You can also change application images. Select **Administration** > **Application settings** > **Select image** to choose an image to upload as the application image.
+1. Finally, you can also change the theme by selecting **Settings** on the masthead of the application.
-
## Clean up resources

If you're not going to continue to use this application, delete your application with the following steps:
-1. Open the Administration tab from the left pane of your IoT Central application.
-1. Select Application settings and click Delete button at the bottom of the page.
+1. From the left pane of your Azure IoT Central app, select **Administration**.
+1. Select **Application settings** > **Delete**.
## Next steps
-* Learn about more about
-
> [!div class="nextstepaction"]
> [Connected waste management concepts](./concepts-connectedwastemanagement-architecture.md)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/logging.md
@@ -146,7 +146,6 @@ The following table lists the **operationName** values and corresponding REST AP
| operationName | REST API command |
| --- | --- |
-| **CertificateGet** |[Get information about a certificate](/rest/api/keyvault/getcertificate) |
| **CertificateCreate** |[Create a certificate](/rest/api/keyvault/createcertificate) |
| **CertificateImport** |[Import a certificate into a vault](/rest/api/keyvault/importcertificate) |
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/troubleshooting-access-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/troubleshooting-access-issues.md
@@ -27,10 +27,10 @@ As you start to scale your service, the number of requests sent to your key vaul
### I am not able to modify access policy, how can it be enabled?

The user needs to have sufficient AAD permissions to modify access policy. In this case, the user would need to have a higher role, such as Contributor.
-### I am seeing 'Unkwown Policy' error. What does that mean?
+### I am seeing 'Unknown Policy' error. What does that mean?
There are two different possibilities for seeing an access policy in the Unknown section:

* There might be a previous user who had access, and for some reason that user no longer exists.
-* If access policy is added via powershell and the access policy is added for the application objectid instead of the service priciple
+* The access policy was added via PowerShell for the application object ID instead of the service principal.
### How can I assign access control per key vault object?
key-vault https://docs.microsoft.com/en-us/azure/key-vault/secrets/tutorial-rotation-dual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/tutorial-rotation-dual.md
@@ -218,8 +218,15 @@ Notice that `value` of the key is same as secret in key vault:
## Key Vault rotation functions for two sets of credentials

-- [Storage account](https://github.com/jlichwa/KeyVault-Rotation-StorageAccountKey-PowerShell)
-- [Redis cache](https://github.com/jlichwa/KeyVault-Rotation-RedisCacheKey-PowerShell)
+A rotation functions template for two sets of credentials, and several ready-to-use functions:
+
+- [Function Template in PowerShell](https://github.com/Azure/KeyVault-Secrets-Rotation-Template-PowerShell)
+- [Redis cache](https://github.com/Azure/KeyVault-Secrets-Rotation-Redis-PowerShell)
+- [Storage account](https://github.com/Azure/KeyVault-Secrets-Rotation-StorageAccount-PowerShell)
+- [Cosmos DB](https://github.com/Azure/KeyVault-Secrets-Rotation-CosmosDB-PowerShell)
+
+> [!NOTE]
+> The above rotation functions were created by a member of the community, not by Microsoft. Community Azure Functions are not supported under any Microsoft support program or service, and are made available AS IS without warranty of any kind.
## Next steps
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-overview.md
@@ -63,7 +63,7 @@ Key scenarios that you can accomplish using Standard Load Balancer include:
### <a name="securebydefault"></a>Secure by default
-Standard Load Balancer is built on the zero trust network security model at its core. Standard Load Balancer secure by default and is part of your virtual network. The virtual network is a private and isolated network. This means Standard Load Balancers and Standard Public IP addresses are closed to inbound flows unless opened by Network Security Groups. NSGs are used to explicitly permit allowed traffic. If you do not have an NSG on a subnet or NIC of your virtual machine resource, traffic is not allowed to reach this resource. To learn more about NSGs and how to apply them for your scenario, see [Network Security Groups](../virtual-network/network-security-groups-overview.md).
+Standard Load Balancer is built on the zero trust network security model at its core. Standard Load Balancer is secure by default and part of your virtual network. The virtual network is a private and isolated network. This means Standard Load Balancers and Standard Public IP addresses are closed to inbound flows unless opened by Network Security Groups. NSGs are used to explicitly permit allowed traffic. If you do not have an NSG on a subnet or NIC of your virtual machine resource, traffic is not allowed to reach this resource. To learn more about NSGs and how to apply them for your scenario, see [Network Security Groups](../virtual-network/network-security-groups-overview.md).
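For illustration, here's a minimal CLI sketch of explicitly permitting inbound traffic with an NSG rule; the resource group, NSG, and rule names are placeholders, not names from this article:

```azurecli-interactive
# Hypothetical names; explicitly allow inbound TCP/80 through the NSG
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name AllowHttpInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80
```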
Basic Load Balancer is open to the internet by default. In addition, Load Balancer does not store customer data.

## Pricing and SLA
@@ -80,4 +80,4 @@ Subscribe to the RSS feed and view the latest Azure Load Balancer feature update
See [Create a public standard load balancer](quickstart-load-balancer-standard-public-portal.md) to get started with using a load balancer.
-For more information on Azure Load Balancer limitations and components see [Azure Load Balancer components](./components.md) and [Azure Load Balancer concepts](./concepts.md)
\ No newline at end of file
+For more information on Azure Load Balancer limitations and components see [Azure Load Balancer components](./components.md) and [Azure Load Balancer concepts](./concepts.md)
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-virtual-machine-scale-sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-standard-virtual-machine-scale-sets.md
@@ -21,26 +21,14 @@ When working with virtual machine scale sets and load balancer, the following gu
## Port Forwarding and inbound NAT rules:

* After the scale set has been created, the backend port cannot be modified for a load balancing rule used by a health probe of the load balancer. To change the port, you can remove the health probe by updating the Azure virtual machine scale set, update the port and then configure the health probe again.
* When using the virtual machine scale set in the backend pool of the load balancer, the default inbound NAT rules get created automatically.
+
## Inbound NAT pool:

* Each virtual machine scale set must have at least one inbound NAT pool.
* An inbound NAT pool is a collection of inbound NAT rules. One inbound NAT pool cannot support multiple virtual machine scale sets.
- * In order to delete a NAT pool from an existing virtual machine scale set, you need to first remove the NAT pool from the scale set. A full example using CLI is shown below:
-```azurecli-interactive
- az vmss update
- --resource-group MyResourceGroup
- --name MyVMSS
- --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools
- az vmss update-instances
- -ΓÇôinstance-ids *
- --resource-group MyResourceGroup
- --name MyVMSS
- az network lb inbound-nat-pool delete
- --resource-group MyResourceGroup
- -ΓÇôlb-name MyLoadBalancer
- --name MyNatPool
-```
+
## Load balancing rules:

* When using the virtual machine scale set in the backend pool of the load balancer, the default load balancing rule gets created automatically.
+
## Outbound rules:

* To create an outbound rule for a backend pool that is already referenced by a load balancing rule, you need to first mark **"Create implicit outbound rules"** as **No** in the portal when the inbound load balancing rule is created.
@@ -50,4 +38,5 @@ The following methods can be used to deploy a virtual machine scale set with an
* [Configure a virtual machine scale set with an existing Azure Load Balancer using the Azure portal](./configure-vm-scale-set-portal.md).
* [Configure a virtual machine scale set with an existing Azure Load Balancer using Azure PowerShell](./configure-vm-scale-set-powershell.md).
-* [Configure a virtual machine scale set with an existing Azure Load Balancer using the Azure CLI](./configure-vm-scale-set-cli.md).
\ No newline at end of file
+* [Configure a virtual machine scale set with an existing Azure Load Balancer using the Azure CLI](./configure-vm-scale-set-cli.md).
+* [Update or delete existing Azure Load Balancer used by Virtual Machine Scale Set](./update-load-balancer-with-vm-scale-set.md)
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/update-load-balancer-with-vm-scale-set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/update-load-balancer-with-vm-scale-set.md new file mode 100644
@@ -0,0 +1,121 @@
+---
+title: Update or delete existing Azure Load Balancer used by Virtual Machine Scale Set
+titleSuffix: Update or delete existing Azure Load Balancer used by Virtual Machine Scale Set
+description: With this how-to article, get started with Azure Standard Load Balancer and Virtual Machine Scale Sets.
+services: load-balancer
+documentationcenter: na
+author: irenehua
+ms.custom: seodec18
+ms.service: load-balancer
+ms.devlang: na
+ms.topic: article
+ms.tgt_pltfrm: na
+ms.workload: infrastructure-services
+ms.date: 12/30/2020
+ms.author: irenehua
+---
+# How to update/delete Azure Load Balancer used by Virtual Machine Scale Sets
+
+## How to set up Azure Load Balancer for scaling out Virtual Machine Scale Sets
+ * Make sure that the Load Balancer has an [inbound NAT pool](https://docs.microsoft.com/cli/azure/network/lb/inbound-nat-pool?view=azure-cli-latest) set up and that the Virtual Machine Scale Set is in the backend pool of the Load Balancer. Azure Load Balancer will automatically create new inbound NAT rules in the inbound NAT pool when new Virtual Machine instances are added to the Virtual Machine Scale Set.
+ * To check whether the inbound NAT pool is properly set up:
+ 1. Sign in to the Azure portal at https://portal.azure.com.
+
+ 1. Select **All resources** on the left menu, and then select **MyLoadBalancer** from the resource list.
+
+ 1. Under **Settings**, select **Inbound NAT Rules**.
+If the right pane shows a list of rules created for each individual instance in the Virtual Machine Scale Set, you're all set to scale out at any time. You can also check from the CLI, as sketched below.
+
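As a quick check from the CLI (a sketch reusing the placeholder names from the examples in this article), you can list the inbound NAT pools and the per-instance rules generated from them:

```azurecli-interactive
# List the inbound NAT pools configured on the load balancer
az network lb inbound-nat-pool list \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --output table

# List the per-instance inbound NAT rules generated from the pool
az network lb inbound-nat-rule list \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --output table
```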
+## How to add inbound NAT rules?
+ * An individual inbound NAT rule cannot be added. However, you can add a set of inbound NAT rules with a defined frontend port range and backend port for all instances in the Virtual Machine Scale Set.
+ * In order to add a whole set of inbound NAT rules for the Virtual Machine Scale Set, you need to first create an inbound NAT pool in the Load Balancer, and then reference the inbound NAT pool from the network profile of the Virtual Machine Scale Set. A full example using CLI is shown below.
+ * The new inbound NAT pool should not have a frontend port range that overlaps with existing inbound NAT pools. To view the existing inbound NAT pools, you can use this [CLI command](https://docs.microsoft.com/cli/azure/network/lb/inbound-nat-pool?view=azure-cli-latest#az_network_lb_inbound_nat_pool_list).
+```azurecli-interactive
+az network lb inbound-nat-pool create
+ -g MyResourceGroup
+ --lb-name MyLb
+ -n MyNatPool
+ --protocol Tcp
+ --frontend-port-range-start 80
+ --frontend-port-range-end 89
+ --backend-port 80
+ --frontend-ip-name MyFrontendIp
+az vmss update
+ -g MyResourceGroup
+ -n myVMSS
+ --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools "{'id':'/subscriptions/mySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLb/inboundNatPools/MyNatPool'}"
+
+az vmss update-instances
+ --instance-ids "*"
+ --resource-group MyResourceGroup
+ --name MyVMSS
+```
+## How to update inbound NAT rules?
+ * An individual inbound NAT rule cannot be updated. However, you can update a set of inbound NAT rules with a defined frontend port range and backend port for all instances in the Virtual Machine Scale Set.
+ * In order to update a whole set of inbound NAT rules for the Virtual Machine Scale Set, you need to update the inbound NAT pool in the Load Balancer.
+```azurecli-interactive
+az network lb inbound-nat-pool update
+ -g MyResourceGroup
+ --lb-name MyLb
+ -n MyNatPool
+ --protocol Tcp
+ --backend-port 8080
+```
+
+## How to delete inbound NAT rules?
+* An individual inbound NAT rule cannot be deleted. However, you can delete the entire set of inbound NAT rules.
+* In order to delete the whole set of inbound NAT rules used by the Scale Set, you need to first remove the NAT pool from the scale set. A full example using CLI is shown below:
+```azurecli-interactive
+ az vmss update
+ --resource-group MyResourceGroup
+ --name MyVMSS
+ --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools
+ az vmss update-instances
+ --instance-ids "*"
+ --resource-group MyResourceGroup
+ --name MyVMSS
+ az network lb inbound-nat-pool delete
+ --resource-group MyResourceGroup
+ --lb-name MyLoadBalancer
+ --name MyNatPool
+```
+
+## How to add multiple IP Configurations:
+1. Select **All resources** on the left menu, and then select **MyLoadBalancer** from the resource list.
+
+1. Under **Settings**, select **Frontend IP Configurations**, and then select **Add**.
+
+1. On the **Add frontend IP address** page, type in the values and select **OK**.
+
+1. Follow [Step 5](https://docs.microsoft.com/azure/load-balancer/load-balancer-multiple-ip#step-5-configure-the-health-probe) and [Step 6](https://docs.microsoft.com/azure/load-balancer/load-balancer-multiple-ip#step-5-configure-the-health-probe) in this tutorial if new load balancing rules are needed.
+
+1. Create a new set of inbound NAT rules using the newly created frontend IP configurations, if needed. An example can be found in the previous section, and a CLI sketch for adding a frontend IP configuration follows this list.
+
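Here's a hedged CLI sketch of adding a public frontend IP configuration, equivalent to the portal steps above; `MyNewPublicIp` and `MyNewFrontendIp` are hypothetical names:

```azurecli-interactive
# Create a new public IP (Standard SKU to match a Standard load balancer)
az network public-ip create \
  --resource-group MyResourceGroup \
  --name MyNewPublicIp \
  --sku Standard

# Add it to the load balancer as an additional frontend IP configuration
az network lb frontend-ip create \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyNewFrontendIp \
  --public-ip-address MyNewPublicIp
```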
+## How to delete Frontend IP Configuration used by Virtual Machine Scale Set:
+ 1. To delete the Frontend IP Configuration in use by the Scale Set, you need to first delete the inbound NAT pool (set of inbound NAT rules) referencing the frontend IP configuration. Instructions on how to delete the inbound rules can be found in the previous section.
+ 1. Delete the Load Balancing rule referencing the Frontend IP Configuration.
+ 1. Delete the Frontend IP Configuration (see the CLI sketch below).
+
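Once the referencing inbound NAT pool and load balancing rule are gone, the frontend IP configuration itself can be removed from the CLI; a sketch with placeholder names:

```azurecli-interactive
# Remove the now-unreferenced frontend IP configuration
az network lb frontend-ip delete \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyFrontendIp
```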
+
+## How to delete Azure Load Balancer used by Virtual Machine Scale Set:
+ 1. To delete the Frontend IP Configuration in use by the Scale Set, you need to first delete the inbound NAT pool (set of inbound NAT rules) referencing the frontend IP configuration. Instructions on how to delete the inbound rules can be found in the previous section.
+
+ 1. Delete the Load Balancing rule referencing backend pool containing the Virtual Machine Scale Set.
+
+ 1. Remove the loadBalancerBackendAddressPool reference from the network profile of the Virtual Machine Scale Set. A full example using CLI is shown below:
+ ```azurecli-interactive
+ az vmss update
+ --resource-group MyResourceGroup
+ --name MyVMSS
+ --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerBackendAddressPools
+ az vmss update-instances
+ --instance-ids "*"
+ --resource-group MyResourceGroup
+ --name MyVMSS
+```
+Finally, delete the Load Balancer resource.
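For example, a sketch using the same placeholder names as above:

```azurecli-interactive
# Delete the load balancer once nothing references it
az network lb delete \
  --resource-group MyResourceGroup \
  --name MyLoadBalancer
```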
+
+## Next steps
+
+To learn more about Azure Load Balancer and Virtual Machine Scale Set, read more about the concepts.
+
+> [Azure Load Balancer with Azure virtual machine scale sets](load-balancer-standard-virtual-machine-scale-sets.md)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/azure-machine-learning-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
@@ -15,6 +15,18 @@ ms.date: 09/10/2020
In this article, learn about Azure Machine Learning releases. For the full SDK reference content, visit the Azure Machine Learning's [**main SDK for Python**](/python/api/overview/azure/ml/intro?preserve-view=true&view=azure-ml-py) reference page.
+## 2020-12-31
+### Azure Machine Learning Studio Notebooks Experience (December Update)
++ **New features**
+ + User Filename search. Users are now able to search all the files saved in a workspace.
+ + Markdown Side by Side support per Notebook Cell. In a notebook cell, users can now have the option to view rendered markdown and markdown syntax side-by-side.
+ + Cell Status Bar. The status bar indicates what state a code cell is in, whether a cell run was successful, and how long it took to run.
+
++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+
## 2020-12-07

### Azure Machine Learning SDK for Python v1.19.0
@@ -78,7 +90,19 @@ In this article, learn about Azure Machine Learning releases. For the full SDK
+ Deprecated the use of Nccl and Gloo as a valid type of input for Estimator classes in favor of using PyTorchConfiguration with ScriptRunConfig.
+ Deprecated the use of Mpi as a valid type of input for Estimator classes in favor of using MpiConfiguration with ScriptRunConfig.
+## 2020-11-30
+### Azure Machine Learning Studio Notebooks Experience (November Update)
++ **New features**
+ + Native Terminal. Users will now have access to an integrated terminal as well as Git operations via the [integrated terminal](https://docs.microsoft.com/azure/machine-learning/how-to-run-jupyter-notebooks#terminal).
+ + Duplicate Folder
+ + Costing for Compute Drop Down
+ + Offline Compute Pylance
++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+ + Large File Upload. You can now upload files larger than 95 MB.
## 2020-11-09
@@ -2238,4 +2262,4 @@ The [`PipelineEndpoint`](/python/api/azureml-pipeline-core/azureml.pipeline.core
## Next steps
-Read the overview for [Azure Machine Learning](overview-what-is-azure-ml.md).
\ No newline at end of file
+Read the overview for [Azure Machine Learning](overview-what-is-azure-ml.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
@@ -127,7 +127,7 @@ These actions can be controlled by Azure RBAC:
* *Microsoft.MachineLearningServices/workspaces/computes/stop/action*
* *Microsoft.MachineLearningServices/workspaces/computes/restart/action*
-Please note to create a compute instance user needs to have permissions for the following actions:
+To create a compute instance, you need to have permissions for the following actions:
* *Microsoft.MachineLearningServices/workspaces/computes/write*
* *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action*
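As an illustration, a custom role granting just these two actions might look like the following sketch; the role name, description, and subscription ID are hypothetical:

```azurecli-interactive
# Hypothetical custom role limited to the actions listed above
cat > compute-instance-creator.json << 'EOF'
{
  "Name": "Compute Instance Creator",
  "IsCustom": true,
  "Description": "Can create compute instances in a workspace.",
  "Actions": [
    "Microsoft.MachineLearningServices/workspaces/computes/write",
    "Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF
az role definition create --role-definition @compute-instance-creator.json
```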
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-models-with-mlflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-models-with-mlflow.md
@@ -15,18 +15,18 @@ ms.custom: how-to, devx-track-python
# Deploy MLflow models with Azure Machine Learning (preview)
-In this article, learn how to deploy your MLflow model as an Azure Machine Learning web service, so you can leverage and apply Azure Machine Learning's model management and data drift detection capabilities to your production models.
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model as an Azure Machine Learning web service, so you can leverage and apply Azure Machine Learning's model management and data drift detection capabilities to your production models.
Azure Machine Learning offers deployment configurations for:
* Azure Container Instance (ACI) which is a suitable choice for a quick dev-test deployment.
* Azure Kubernetes Service (AKS) which is recommended for scalable production deployments.
-[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. Its integration with Azure Machine Learning allows for you to extend this management beyond the model training phase, to the deployment phase of your production model.
+MLflow is an open-source library for managing the life cycle of your machine learning experiments. Its integration with Azure Machine Learning allows for you to extend this management beyond the model training phase to the deployment phase of your production model.
>[!NOTE]
> As an open source library, MLflow changes frequently. As such, the functionality made available via the Azure Machine Learning and MLflow integration should be considered as a preview, and not fully supported by Microsoft.
-The following diagram demonstrates that with the MLflow deploy API you can deploy your existing MLflow model as an Azure Machine Learning web service, despite their frameworks--PyTorch, Tensorflow, scikit-learn, ONNX, etc., and manage your production models in your workspace.
+The following diagram demonstrates that with the MLflow deploy API and Azure Machine Learning, you can deploy models created with popular frameworks, like PyTorch, Tensorflow, scikit-learn, etc., as Azure Machine Learning web services and manage them in your workspace.
![ deploy mlflow models with azure machine learning](./media/how-to-use-mlflow/mlflow-diagram-deploy.png)
@@ -35,9 +35,11 @@ The following diagram demonstrates that with the MLflow deploy API you can deplo
## Prerequisites
-* [Set up the MLflow Tracking URI to connect Azure Machine Learning](how-to-use-mlflow.md).
+* A machine learning model. If you don't have a trained model, find the notebook example that best fits your compute scenario in [this repo](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/using-mlflow) and follow its instructions.
+* [Set up the MLflow Tracking URI to connect Azure Machine Learning](how-to-use-mlflow.md#track-local-runs).
* Install the `azureml-mlflow` package.
 * This package automatically brings in `azureml-core` of [the Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install?preserve-view=true&view=azure-ml-py), which provides the connectivity for MLflow to access your workspace.
+* See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
## Deploy to ACI
@@ -135,7 +137,7 @@ If you don't plan to use your deployed web service, use `service.delete()` to de
## Example notebooks
-The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow) demonstrate and expand upon concepts presented in this article.
+The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/using-mlflow) demonstrate and expand upon concepts presented in this article.
> [!NOTE]
> A community-driven repository of examples using mlflow can be found at https://github.com/Azure/azureml-examples.
@@ -145,3 +147,4 @@ The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNot
* [Manage your models](concept-model-management-and-deployment.md).
* Monitor your production models for [data drift](./how-to-enable-data-collection.md).
* [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md).
+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-secure-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-web-service.md
@@ -8,7 +8,7 @@ ms.subservice: core
ms.reviewer: jmartens
ms.author: aashishb
author: aashishb
-ms.date: 11/18/2020
+ms.date: 01/04/2021
ms.topic: conceptual
ms.custom: how-to
---
@@ -163,7 +163,7 @@ TLS/SSL certificates expire and must be renewed. Typically this happens every ye
### Update a Microsoft generated certificate
-If the certificate was originally generated by Microsoft (when using the *leaf_domain_label* to create the service), use one of the following examples to update the certificate:
+If the certificate was originally generated by Microsoft (when using the *leaf_domain_label* to create the service), **it will automatically renew** when needed. If you want to manually renew it, use one of the following examples to update the certificate:
> [!IMPORTANT]
> * If the existing certificate is still valid, use `renew=True` (SDK) or `--ssl-renew` (CLI) to force the configuration to renew it. For example, if the existing certificate is still valid for 10 days and you don't use `renew=True`, the certificate may not be renewed.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow-azure-databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
@@ -33,6 +33,7 @@ See [Track experiment runs and create endpoints with MLflow and Azure Machine Le
 * This package automatically brings in `azureml-core` of [the Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install?preserve-view=true&view=azure-ml-py), which provides the connectivity for MLflow to access your workspace.
* An [Azure Databricks workspace and cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal).
* [Create an Azure Machine Learning Workspace](how-to-manage-workspace.md).
+ * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
## Track Azure Databricks runs
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow.md
@@ -60,6 +60,7 @@ The following diagram illustrates that with MLflow Tracking, you track an experi
* Install the `azureml-mlflow` package.
 * This package automatically brings in `azureml-core` of [the Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install?preserve-view=true&view=azure-ml-py), which provides the connectivity for MLflow to access your workspace.
* [Create an Azure Machine Learning Workspace](how-to-manage-workspace.md).
+ * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
## Track local runs
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-setup-local.md
@@ -62,8 +62,10 @@ tutorial
- `.azureml`: Hidden subdirectory for storing Azure Machine Learning configuration files.

> [!TIP]
-> If you're on a Mac, in a Finder window use **Command + Shift + .** to toggle the ability to see and create directories that begin with a dot. Or use the command terminal to create the directory.
-
+> You can create the hidden .azureml subdirectory in a terminal window. Or use the following:
+> * In a Mac Finder window use **Command + Shift + .** to toggle the ability to see and create directories that begin with a dot.
+> * In Windows 10, see [how to view hidden files and folders](https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-10-97fbc472-c603-9d90-91d0-1166d1d9f4b5).
+> * In the Linux Graphical Interface, use **Ctrl + h** or the **View** menu and check the box to **Show hidden files**.
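For example, from a terminal in the `tutorial` directory:

```bash
# Create the hidden configuration subdirectory
mkdir .azureml
```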
> [!div class="nextstepaction"]
> [I created a directory](?success=create-dir#workspace) [I ran into an issue](https://www.research.net/r/7C8Z3DN?issue=create-dir)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-labeling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-labeling.md
@@ -101,7 +101,8 @@ Now that you have access to the data you want to have labeled, create your label
1. Use the following input for the **Create dataset from datastore** form:
 1. On the **Basic info** form, add a name, here we'll use **images-for-tutorial**. Add a description if you wish. Then select **Next**.
- 1. On the **Datastore selection** form, use the dropdown to select your **Previously created datastore**, for example **tutorial_images (Azure Blob Storage)**
+ 1. On the **Datastore selection** form, select **Previously created datastore**, then click on the datastore name and select **Select datastore**.
+ 1. On the next page, verify that the currently selected datastore is correct. If not, select **Previously created datastore** and repeat the prior step.
 1. Next, still on the **Datastore selection** form, select **Browse** and then select **MultiClass - DogsCats**. Select **Save** to use **/MultiClass - DogsCats** as the path.
1. Select **Next** to confirm details and then **Create** to create the dataset.
1. Select the circle next to the dataset name in the list, for example **images-for-tutorial**.
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/release-notes.md
@@ -35,16 +35,23 @@ To stay up-to-date with the most recent developments, this article provides you
> > For details, see: [the Azure portal limitations for Media Services v3](frequently-asked-questions.md#what-are-the-azure-portal-limitations-for-media-services-v3).
+## December 2020
+
+### Regional availability
+
+Azure Media Services is now available in the Norway East region in the Azure portal. There is no restV2 in this region.
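For example, a sketch of creating a Media Services account in the new region from the CLI; the account, resource group, and storage account names are placeholders:

```azurecli-interactive
# Hypothetical names; create a Media Services account in Norway East
az ams account create \
  --name myamsaccount \
  --resource-group myResourceGroup \
  --storage-account mystorageaccount \
  --location norwayeast
```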
## October 2020

### Basic Audio Analysis
+
The Audio Analysis preset now includes a Basic mode pricing tier. The new Basic Audio Analyzer mode provides a low-cost option to extract speech transcription, and format output captions and subtitles. This mode performs speech-to-text transcription and generation of a VTT subtitle/caption file. The output of this mode includes an Insights JSON file including only the keywords, transcription, and timing information. Automatic language detection and speaker diarization are not included in this mode. See the list of [supported languages](analyzing-video-audio-files-concept.md#built-in-presets). Customers using Indexer v1 and Indexer v2 should migrate to the Basic Audio Analysis preset. For more information about the Basic Audio Analyzer mode, see [Analyzing Video and Audio files](analyzing-video-audio-files-concept.md). To learn to use the Basic Audio Analyzer mode with the REST API, see [How to Create a Basic Audio Transform](how-to-create-basic-audio-transform.md).
-## Live Events
+### Live Events
Updates to most properties are now allowed when live events are stopped. In addition, users are allowed to specify a prefix for the static hostname for the live event's input and preview URLs. VanityUrl is now called `useStaticHostName` to better reflect the intent of the property.
@@ -54,7 +61,7 @@ A live event supports receiving various input aspect ratios. Stretch mode allows
Live encoding now adds the capability of outputting fixed key frame interval fragments between 0.5 to 20 seconds.
-## Accounts
+### Accounts
> [!WARNING]
> If you create a Media Services account with the 2020-05-01 API version it won't work with RESTv2
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/streaming-endpoint-concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/streaming-endpoint-concept.md
@@ -20,7 +20,7 @@ ms.author: inhenkel
In Microsoft Azure Media Services, a [Streaming Endpoint](/rest/api/media/streamingendpoints) represents a dynamic (just-in-time) packaging and origin service that can deliver your live and on-demand content directly to a client player app using one of the common streaming media protocols (HLS or DASH). In addition, the **Streaming Endpoint** provides dynamic (just-in-time) encryption to industry-leading DRMs.
-When you create a Media Services account, a **default** Streaming Endpoint is created for you in a stopped state. You can't delete the **default** Streaming Endpoint. More Streaming Endpoints can be created under the account (see [Quotas and limits](limits-quotas-constraints.md)).
+When you create a Media Services account, a **default** Streaming Endpoint is created for you in a stopped state. More Streaming Endpoints can be created under the account (see [Quotas and limits](limits-quotas-constraints.md)).
> [!NOTE]
> To start streaming videos, you need to start the **Streaming Endpoint** from which you want to stream the video.
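For example, a sketch of starting the default Streaming Endpoint from the CLI; the account and resource group names are placeholders:

```azurecli-interactive
# Start the default streaming endpoint of a Media Services account
az ams streaming-endpoint start \
  --account-name myamsaccount \
  --resource-group myResourceGroup \
  --name default
```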
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/deploy-iot-edge-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/deploy-iot-edge-device.md
@@ -103,7 +103,7 @@ A deployment manifest is a JSON document that describes which modules to deploy,
Examples:

* **IoT Edge Module Name**: lvaEdge
- * **Image URI**: mcr.microsoft.com/media/live-video-analytics:1.0
+ * **Image URI**: mcr.microsoft.com/media/live-video-analytics:2.0
![Screenshot shows the Module Settings tab.](./media/deploy-iot-edge-device/add.png)
migrate https://docs.microsoft.com/en-us/azure/migrate/agent-based-migration-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/agent-based-migration-architecture.md
@@ -89,14 +89,14 @@ If you're replicating VMware VMs, you can use the [Site Recovery Deployment Plan
Use the values in this table to figure out whether you need an additional process server in your deployment.

-- If your daily change rate (churn rate) is over 2 TB, deploy an additional process server.
+- If the daily change rate (churn rate) is over 2 TB, deploy an additional process server.
- If you're replicating more than 200 machines, deploy an additional replication appliance.

**CPU** | **Memory** | **Free space-data caching** | **Churn rate** | **Replication limits**
--- | --- | --- | --- | ---
8 vCPUs (2 sockets * 4 cores \@ 2.5 GHz) | 16 GB | 300 GB | 500 GB or less | < 100 machines
12 vCPUs (2 sockets * 6 cores \@ 2.5 GHz) | 18 GB | 600 GB | 501 GB to 1 TB | 100-150 machines.
-16 vCPUs (2 sockets * 8 cores \@ 2.5 GHz) | 32 G1 | 1 TB | 1 TB to 2 TB | 151-200 machines.
+16 vCPUs (2 sockets * 8 cores \@ 2.5 GHz) | 32 GB | 1 TB | 1 TB to 2 TB | 151-200 machines.
### Sizing scale-out process servers
@@ -105,7 +105,7 @@ If you need to deploy a scale-out process server, use this table to figure out s
**Process server** | **Free space for data caching** | **Churn rate** | **Replication limits**
--- | --- | --- | ---
4 vCPUs (2 sockets * 2 cores \@ 2.5 GHz), 8-GB memory | 300 GB | 250 GB or less | Up to 85 machines
-8 vCPUs (2 sockets * 4 cores \@ 2.5 GHz), 12-GB memory | 600 GB | 251 GB to 1 TB | 86-150 machines.
+8 vCPUs (2 sockets * 4 cores \@ 2.5 GHz), 12-GB memory | 600 GB | 251 GB to 1 TB | 86-150 machines.
12 vCPUs (2 sockets * 6 cores \@ 2.5 GHz), 24-GB memory | 1 TB | 1-2 TB | 151-225 machines.

## Throttle upload bandwidth.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-discover-hyper-v https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-hyper-v.md
@@ -38,7 +38,7 @@ Before you start this tutorial, check you have these prerequisites in place.
**Requirement** | **Details**
--- | ---
**Hyper-V host** | Hyper-V hosts on which VMs are located can be standalone, or in a cluster.<br/><br/> The host must be running Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2.<br/><br/> Verify inbound connections are allowed on WinRM port 5985 (HTTP), so that the appliance can connect to pull VM metadata and performance data, using a Common Information Model (CIM) session.
-**Appliance deployment** | Hyper-v host needs resources to allocate a VM for the appliance:<br/><br/> - Windows Server 2016<br/><br/> -16 GB of RAM<br/><br/> - Eight vCPUs<br/><br/> - Around 80 GB of disk storage.<br/><br/> - An external virtual switch.<br/><br/> - Internet access on for the VM, directly or via a proxy.
+**Appliance deployment** | Hyper-V host needs resources to allocate a VM for the appliance:<br/><br/> - Windows Server 2016<br/><br/> - 16 GB of RAM<br/><br/> - Eight vCPUs<br/><br/> - Around 80 GB of disk storage.<br/><br/> - An external virtual switch.<br/><br/> - Internet access for the VM, directly or via a proxy.
**VMs** | VMs can be running any Windows or Linux operating system.

Before you start, you can [review the data](migrate-appliance.md#collected-data---hyper-v) that the appliance collects during discovery.
@@ -84,7 +84,7 @@ Set up an account with Administrator access on the Hyper-V hosts. The appliance
## Set up a project
-et up a new Azure Migrate project.
+Set up a new Azure Migrate project.
1. In the Azure portal > **All services**, search for **Azure Migrate**.
2. Under **Services**, select **Azure Migrate**.
@@ -264,4 +264,4 @@ After discovery finishes, you can verify that the VMs appear in the portal.
## Next steps

- [Assess Hyper-V VMs](tutorial-assess-hyper-v.md) for migration to Azure VMs.
-- [Review the data](migrate-appliance.md#collected-data---hyper-v) that the appliance collects during discovery.
\ No newline at end of file
+- [Review the data](migrate-appliance.md#collected-data---hyper-v) that the appliance collects during discovery.
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
@@ -352,6 +352,7 @@ https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecurity
**Storage account considerations**:

- Location: The storage account used must be in the same region as the NSG.
+- Performance Tier: Currently, only standard tier storage accounts are supported.
- Self-manage key rotation: If you change/rotate the access keys to your storage account, NSG Flow Logs will stop working. To fix this issue, you must disable and then re-enable NSG Flow Logs.

**Flow Logging Costs**: NSG flow logging is billed on the volume of logs produced. High traffic volume can result in large flow log volume and the associated costs. NSG Flow log pricing does not include the underlying costs of storage. Using the retention policy feature with NSG Flow Logging means incurring separate storage costs for extended periods of time. If you do not require the retention policy feature, we recommend that you set this value to 0. For more information, see [Network Watcher Pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/).
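For example, a sketch of creating a standard-tier storage account in the same region as the NSG; the names and region are placeholder values:

```azurecli-interactive
# Standard performance tier, same region as the NSG (placeholder values)
az storage account create \
  --name mynsgflowlogstorage \
  --resource-group myResourceGroup \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2
```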
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-troubleshoot-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-troubleshoot-overview.md
@@ -203,10 +203,13 @@ Elapsed Time 330 sec
| 12 ikeext ike_sa_management_c3307 7857a320-42ee-6e90-d5d9-3f414e3ea2d3|
```
+## Considerations
+* CLI Bug: If you are using Azure CLI to run the command, the VPN Gateway and the Storage account need to be in the same resource group. Customers with the resources in different resource groups can use PowerShell or the Azure portal instead.
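For example, a sketch of running the troubleshooting command with the gateway and the storage account in the same resource group; every name and the container URL below are placeholders:

```azurecli-interactive
# Gateway and storage account both live in MyResourceGroup (placeholders)
az network watcher troubleshooting start \
  --resource-group MyResourceGroup \
  --resource MyVpnGateway \
  --resource-type vnetGateway \
  --storage-account MyStorageAccount \
  --storage-path https://mystorageaccount.blob.core.windows.net/troubleshooting
```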
## Next steps

To learn how to diagnose a problem with a gateway or gateway connection, see [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md).

<!--Image references-->
[1]: ./media/network-watcher-troubleshoot-overview/GatewayTenantWorkerLogs.png
-[2]: ./media/network-watcher-troubleshoot-overview/portal.png
\ No newline at end of file
+[2]: ./media/network-watcher-troubleshoot-overview/portal.png
networking https://docs.microsoft.com/en-us/azure/networking/azure-for-network-engineers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/azure-for-network-engineers.md
@@ -62,7 +62,7 @@ When you have competing entries in a routing table, Azure selects the next hop b
## Security
-You can filter network traffic to and from resources in a virtual network using network security groups. You cane also use network virtual appliances (NVA) such as Azure Firewall or firewalls from other vendors. You can control how Azure routes traffic from subnets. You can also limit who in your organization can work with resources in virtual networks.
+You can filter network traffic to and from resources in a virtual network using network security groups. You can also use network virtual appliances (NVA) such as Azure Firewall or firewalls from other vendors. You can control how Azure routes traffic from subnets. You can also limit who in your organization can work with resources in virtual networks.
A network security group (NSG) contains a list of Access Control List (ACL) rules that allow or deny network traffic to subnets, NICs, or both. NSGs can be associated with either subnets or individual NICs connected to a subnet. When an NSG is associated with a subnet, the ACL rules apply to all the VMs in that subnet. In addition, traffic to an individual NIC can be restricted by associating an NSG directly to a NIC.
openshift https://docs.microsoft.com/en-us/azure/openshift/howto-create-a-storageclass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-create-a-storageclass.md
@@ -55,7 +55,7 @@ ARO_RESOURCE_GROUP=aro-rg
CLUSTER=cluster
ARO_SERVICE_PRINCIPAL_ID=$(az aro show -g $ARO_RESOURCE_GROUP -n $CLUSTER --query servicePrincipalProfile.clientId -o tsv)
-az role assignment create --role Contributor -ΓÇôassignee $ARO_SERVICE_PRINCIPAL_ID -g $AZURE_FILES_RESOURCE_GROUP
+az role assignment create --role Contributor --assignee $ARO_SERVICE_PRINCIPAL_ID -g $AZURE_FILES_RESOURCE_GROUP
```

### Set ARO cluster permissions
@@ -77,6 +77,8 @@ oc adm policy add-cluster-role-to-user azure-secret-reader system:serviceaccount
This step will create a StorageClass with an Azure Files provisioner. Within the StorageClass manifest, the details of the storage account are required so that the ARO cluster knows to look at a storage account outside of the current resource group.
+During storage provisioning, a secret named by `secretName` is created for the mounting credentials. In a multi-tenancy context, it is strongly recommended to set the value for `secretNamespace` explicitly; otherwise, the storage account credentials might be read by other users.
+
```bash
cat << EOF >> azure-storageclass-azure-file.yaml
kind: StorageClass
@@ -86,6 +88,7 @@ metadata:
provisioner: kubernetes.io/azure-file
parameters:
  location: $LOCATION
+ secretNamespace: kube-system
  skuName: Standard_LRS
  storageAccount: $AZURE_STORAGE_ACCOUNT_NAME
  resourceGroup: $AZURE_FILES_RESOURCE_GROUP
@@ -112,7 +115,7 @@ Create a new application and assign storage to it.
```bash
oc new-project azfiletest
-oc new-app ΓÇôtemplate httpd-example
+oc new-app --template httpd-example
#Wait for the pod to become Ready
curl $(oc get route httpd-example -n azfiletest -o jsonpath={.spec.host})
postgresql https://docs.microsoft.com/en-us/azure/postgresql/howto-restart-server-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-restart-server-portal.md
@@ -1,8 +1,8 @@
---
title: Restart server - Azure portal - Azure Database for PostgreSQL - Single Server
description: This article describes how you can restart an Azure Database for PostgreSQL - Single Server using the Azure portal.
-author: ajlam
-ms.author: andrela
+author: lfittl-msft
+ms.author: lufittl
ms.service: postgresql
ms.topic: how-to
ms.date: 12/20/2020
@@ -44,4 +44,4 @@ The following steps restart the PostgreSQL server:
## Next steps
-Learn about [how to set parameters in Azure Database for PostgreSQL](howto-configure-server-parameters-using-portal.md)
\ No newline at end of file
+Learn about [how to set parameters in Azure Database for PostgreSQL](howto-configure-server-parameters-using-portal.md)
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/rbac-and-directory-admin-roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/rbac-and-directory-admin-roles.md
@@ -12,7 +12,7 @@ ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: overview
-ms.date: 07/07/2020
+ms.date: 01/04/2021
ms.author: rolyon
ms.reviewer: bagovind
ms.custom: it-pro;
@@ -40,7 +40,7 @@ Account Administrator, Service Administrator, and Co-Administrator are the three
| Classic subscription administrator | Limit | Permissions | Notes |
| --- | --- | --- | --- |
-| Account Administrator | 1 per Azure account | <ul><li>Manage billing in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade)</li><li>Manage all subscriptions in an account</li><li>Create new subscriptions</li><li>Cancel subscriptions</li><li>Change the billing for a subscription</li><li>Change the Service Administrator</li></ul> | Conceptually, the billing owner of the subscription.<br>The Account Administrator has no access to the Azure portal. |
+| Account Administrator | 1 per Azure account | <ul><li>Manage billing in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade)</li><li>Manage all subscriptions in an account</li><li>Create new subscriptions</li><li>Cancel subscriptions</li><li>Change the billing for a subscription</li><li>Change the Service Administrator</li></ul> | Conceptually, the billing owner of the subscription. |
| Service Administrator | 1 per Azure subscription | <ul><li>Manage services in the [Azure portal](https://portal.azure.com)</li><li>Cancel the subscription</li><li>Assign users to the Co-Administrator role</li></ul> | By default, for a new subscription, the Account Administrator is also the Service Administrator.<br>The Service Administrator has the equivalent access of a user who is assigned the Owner role at the subscription scope.<br>The Service Administrator has full access to the Azure portal. | | Co-Administrator | 200 per subscription | <ul><li>Same access privileges as the Service Administrator, but can't change the association of subscriptions to Azure directories</li><li>Assign users to the Co-Administrator role, but cannot change the Service Administrator</li></ul> | The Co-Administrator has the equivalent access of a user who is assigned the Owner role at the subscription scope. |
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-powershell.md
@@ -23,6 +23,7 @@ To add or remove role assignments, you must have:
- `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner) - [PowerShell in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- The account you use to run the PowerShell command must have the Microsoft Graph `Directory.Read.All` permission.
## Steps to add a role assignment
@@ -412,4 +413,4 @@ If you get the error message: "The provided information does not map to a role a
- [List Azure role assignments using Azure PowerShell](role-assignments-list-powershell.md) - [Tutorial: Grant a group access to Azure resources using Azure PowerShell](tutorial-role-assignments-group-powershell.md)-- [Manage resources with Azure PowerShell](../azure-resource-manager/management/manage-resources-powershell.md)\ No newline at end of file
+- [Manage resources with Azure PowerShell](../azure-resource-manager/management/manage-resources-powershell.md)
search https://docs.microsoft.com/en-us/azure/search/search-howto-dotnet-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-dotnet-sdk.md
@@ -26,7 +26,7 @@ As with previous versions, you can use this library to:
+ Load and manage search documents in an index + Execute queries, all without having to deal with the details of HTTP and JSON
-The library is distributed as a single [Azure.Search.Document NuGet package](https://www.nuget.org/packages/Azure.Search.Documents/), which includes all APIs used for programmatic access to a search service.
+The library is distributed as a single [Azure.Search.Documents NuGet package](https://www.nuget.org/packages/Azure.Search.Documents/), which includes all APIs used for programmatic access to a search service.
The client library defines classes like `SearchIndex`, `SearchField`, and `SearchDocument`, as well as operations like `SearchIndexClient.CreateIndex` and `SearchClient.Search` on the `SearchIndexClient` and `SearchClient` classes. These classes are organized into the following namespaces:
@@ -641,4 +641,4 @@ This section concludes this introduction to the .NET SDK, but don't stop here. T
+ Review [naming conventions](/rest/api/searchservice/Naming-rules) to learn the rules for naming various objects
-+ Review [supported data types](/rest/api/searchservice/Supported-data-types)
\ No newline at end of file++ Review [supported data types](/rest/api/searchservice/Supported-data-types)
security-center https://docs.microsoft.com/en-us/azure/security-center/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
@@ -149,7 +149,7 @@ The recommendation "Web apps should request an SSL certificate for all incoming
Ensuring your web apps request a certificate certainly makes them more secure. However, for public-facing web apps it's irrelevant. If you access your site over HTTP and not HTTPS, you will not receive any client certificate. So if your application requires client certificates, you should not allow requests to your application over HTTP. Learn more in [Configure TLS mutual authentication for Azure App Service](../app-service/app-service-web-configure-tls-mutual-auth.md).
-Wish this change, the recommendation is now a recommended best practice which does not impact your score.
+With this change, the recommendation is now a recommended best practice which does not impact your score.
Learn which recommendations are in each security control in [Security controls and their recommendations](secure-score-security-controls.md#security-controls-and-their-recommendations).
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-geo-dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-geo-dr.md
@@ -2,17 +2,26 @@
title: Azure Service Bus Geo-disaster recovery | Microsoft Docs description: How to use geographical regions to failover and perform disaster recovery in Azure Service Bus ms.topic: article
-ms.date: 06/23/2020
+ms.date: 01/04/2021
--- # Azure Service Bus Geo-disaster recovery
-When entire Azure regions or datacenters (if no [availability zones](../availability-zones/az-overview.md) are used) experience downtime, it is critical for data processing to continue to operate in a different region or datacenter. As such, *Geo-disaster recovery* is an important feature for any enterprise. Azure Service Bus supports geo-disaster recovery at the namespace level.
+Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in some cases even required by industry regulations.
-The Geo-disaster recovery feature is globally available for the Service Bus Premium SKU.
+Azure Service Bus already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter, and it implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service levels, typically without noticeable interruptions, when such failures occur. If a Service Bus namespace has been created with the option for [availability zones](../availability-zones/az-overview.md) enabled, the outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of an entire facility.
->[!NOTE]
-> Geo-Disaster recovery currently only ensures that the metadata (Queues, Topics, Subscriptions, Filters) are copied over from the primary namespace to secondary namespace when paired.
+The all-active Azure Service Bus cluster model with availability zone support is superior to any on-premises message broker product in terms of resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures cannot sufficiently defend against.
+
+The Service Bus Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this magnitude and to abandon a failed Azure region for good, without having to change your application configurations. Abandoning an Azure region will typically involve several services, and this feature primarily aims at helping to preserve the integrity of the composite application configuration. The feature is globally available for the Service Bus Premium SKU.
+
+The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Queues, Topics, Subscriptions, Filters) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move will re-point the chosen alias name for the namespace to the secondary namespace and then break the pairing. The failover is nearly instantaneous once initiated.
+
+> [!IMPORTANT]
+> The feature enables instant continuity of operations with the same configuration, but **does not replicate the messages held in queues or topic subscriptions or dead-letter queues**. To preserve queue semantics, such a replication will require not only the replication of message data, but of every state change in the broker. For most Service Bus namespaces, the required replication traffic would far exceed the application traffic and with high-throughput queues, most messages would still replicate to the secondary while they are already being deleted from the primary, causing excessively wasteful traffic. For high-latency replication routes, which applies to many pairings you would choose for Geo-disaster recovery, it might also be impossible for the replication traffic to sustainably keep up with the application traffic due to latency-induced throttling effects.
+
+> [!TIP]
+> For replicating the contents of queues and topic subscriptions and operating corresponding namespaces in active/active configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the [replication guidance](service-bus-federation-overview.md).
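+To make the pairing and failover flow concrete, here's a hedged sketch using the `az servicebus georecovery-alias` commands; the resource group, namespace, and alias names are placeholders.
+
+```bash
+# Illustrative only: pair a primary Premium namespace with a secondary one.
+az servicebus georecovery-alias set \
+  --resource-group myPrimaryRG --namespace-name myPrimaryNS \
+  --alias myGeoDRAlias \
+  --partner-namespace $(az servicebus namespace show \
+      --resource-group mySecondaryRG --name mySecondaryNS --query id -o tsv)
+
+# Initiate the once-only failover from the secondary side; this re-points the
+# alias to the secondary namespace and breaks the pairing.
+az servicebus georecovery-alias fail-over \
+  --resource-group mySecondaryRG --namespace-name mySecondaryNS --alias myGeoDRAlias
+```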
## Outages and disasters
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-to-event-grid-integration-concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-to-event-grid-integration-concept.md
@@ -109,7 +109,7 @@ The schema for the event is as follows.
"id": "dede87b0-3656-419c-acaf-70c95ddc60f5", "data": { "namespaceName": "YOUR SERVICE BUS NAMESPACE WILL SHOW HERE",
- "requestUri": "https://YOUR-SERVICE-BUS-NAMESPACE-WILL-SHOW-HERE.servicebus.windows.net/TOPIC-NAME/subscriptions/SUBSCRIPTIONNAME/$deadletterqueue/messages/head",
+ "requestUri": "https://YOUR-SERVICE-BUS-NAMESPACE-WILL-SHOW-HERE.servicebus.windows.net/TOPIC-NAME/subscriptions/SUBSCRIPTIONNAME/messages/head",
"entityType": "subscriber", "queueName": "QUEUE NAME IF QUEUE", "topicName": "TOPIC NAME IF TOPIC",
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-azure-clusters-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-azure-clusters-overview.md
@@ -89,16 +89,17 @@ For more information, read [Upgrading clusters](service-fabric-cluster-upgrade.m
## Supported operating systems You are able to create clusters on virtual machines running these operating systems:
-| Operating system | Earliest supported Service Fabric version |
-| --- | --- |
-| Windows Server 2012 R2 | All versions |
-| Windows Server 2016 | All versions |
-| Windows Server 1709 | 6.0 |
-| Windows Server 1803 | 6.4 |
-| Windows Server 1809 | 6.4.654.9590 |
-| Windows Server 2019 | 6.4.654.9590 |
-| Linux Ubuntu 16.04 | 6.0 |
-| Linux Ubuntu 18.04 | 7.1 |
+| Operating system | Earliest supported Service Fabric version | Last supported Service Fabric version |
+| --- | --- | --- |
+| Windows Server 2019 | 6.4.654.9590 | N/A |
+| Windows Server 2016 | All versions | N/A |
+| Windows Server 20H2 | 7.2.445.9590 | N/A |
+| Windows Server 1809 | 6.4.654.9590 | 7.2.445.9590 |
+| Windows Server 1803 | 6.4 | 7.2.445.9590 |
+| Windows Server 1709 | 6.0 | 7.2.445.9590 |
+| Windows Server 2012 R2 | All versions | N/A |
+| Linux Ubuntu 16.04 | 6.0 | N/A |
+| Linux Ubuntu 18.04 | 7.1 | N/A |
For additional information see [Supported Cluster Versions in Azure](./service-fabric-versions.md#supported-operating-systems)
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-versions.md
@@ -16,8 +16,31 @@ Refer to the following documents for details on how to keep your cluster running
- [Upgrade an Azure Service Fabric cluster](service-fabric-cluster-upgrade.md) - [Upgrade the Service Fabric version that runs on your standalone Windows Server cluster](service-fabric-cluster-upgrade-windows-server.md)
-## Supported versions
+## Unsupported Versions
+
+### Upgrade alert for versions from 5.7 up to (but not including) 6.3.63.*
+
+***All Service Fabric clusters that are on unsupported versions from 5.7 to 6.3.63.* will be impacted by a security breaking change that will be rolled out in Azure on January 7th, 2021***.
+
 To avoid serious service disruptions (including clusters not coming up), you must upgrade your clusters as soon as possible to one of the supported versions of the Service Fabric runtime below that include the fix for the security issue. We have reached out to the impacted customers with guidance. If you have a support plan and you need technical help, please reach out to us via [Azure support channels](https://docs.microsoft.com/azure/azure-portal/supportability/how-to-create-azure-support-request) by opening a support request, and mention this context in the ticket.
+
+ #### Supported Service Fabric Runtime versions including the fix for the security breaking change
 Upgrade your Service Fabric clusters that are running on older unsupported versions impacted by the security breaking change to one of the supported versions below.
+
+ | OS | Current Service Fabric runtime in the cluster | CU/Patch release |
+ | --- | --- |--- |
+ | Windows | 7.0.* | 7.0.478.9590 |
+ | Windows | 7.1.* | 7.1.503.9590 |
+ | Windows | 7.2.* | 7.2.445.9590 |
+ | Ubuntu 16 | 7.0.* | 7.0.472.1 |
+ | Ubuntu 16 | 7.1.* | 7.1.455.1 |
+ | Ubuntu 1804 | 7.1.* | 7.1.455.1804 |
+ | Ubuntu 16 | 7.2.* | 7.2.447.1 |
+ | Ubuntu 1804 | 7.2.* | 7.2.447.1804 |
+
+
+## Supported versions
The following table lists the versions of Service Fabric and their support end dates. | Service Fabric runtime in the cluster | Can upgrade directly from cluster version |Compatible SDK or NuGet package version | End of support |
service-health https://docs.microsoft.com/en-us/azure/service-health/resource-health-checks-resource-types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-health/resource-health-checks-resource-types.md
@@ -164,6 +164,11 @@ Below is a complete list of all the checks executed through resource health by r
|---| |<ul><li>Is performance of the Application Gateway degraded?</li><li>Is the Application Gateway available?</li></ul>|
+## Microsoft.network/bastionhosts
+|Executed Checks|
+|---|
+|<ul><li>Is the Bastion Host up and running?</li></ul>|
+ ## Microsoft.network/connections |Executed Checks| |---|
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
@@ -119,17 +119,17 @@ Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
--- | --- | --- | 14.04 LTS | [9.35](https://support.microsoft.com/help/4573888/), [9.36](https://support.microsoft.com/help/4578241/), [9.37](https://support.microsoft.com/help/4582666/), [9.38](https://support.microsoft.com/help/4590304/), [9.39](https://support.microsoft.com/help/4597409/)| 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure | |||
-16.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.4.0-21-generic to 4.4.0-194-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-123-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1098-azure </br> 4.4.0-197-generic, 4.15.0-1100-azure, 4.15.0-126-generic through 9.39 hot fix patch**||
+16.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.4.0-21-generic to 4.4.0-194-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-123-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1098-azure </br> 4.4.0-197-generic, 4.15.0-126-generic, 4.15.0-128-generic, 4.15.0-1100-azure, 4.15.0-1102-azure through 9.39 hot fix patch**||
16.04 LTS | [9.38](https://support.microsoft.com/help/4590304/) | 4.4.0-21-generic to 4.4.0-190-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-118-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1096-azure </br> 4.4.0-193-generic, 4.15.0-120-generic, 4.15.0-122-generic, 4.15.0-1098-azure through 9.38 hot fix patch**| 16.04 LTS | [9.37](https://support.microsoft.com/help/4582666/) | 4.4.0-21-generic to 4.4.0-189-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-115-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1093-azure </br> 4.4.0-190-generic, 4.15.0-117-generic, 4.15.0-118-generic, 4.15.0-1095-azure, 4.15.0-1096-azure through 9.37 hot fix patch**| 16.04 LTS | [9.36](https://support.microsoft.com/help/4578241/)| 4.4.0-21-generic to 4.4.0-187-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-112-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1092-azure | |||
-18.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.15.0-20-generic to 4.15.0-123-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-63-generic </br> 5.3.0-19-generic to 5.3.0-69-generic </br> 5.4.0-37-generic to 5.4.0-53-generic</br> 4.15.0-1009-azure to 4.15.0-1099-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1031-azure </br> 4.15.0-124-generic, 5.4.0-54-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 4.15.0-1100-azure, 4.15.0-126-generic through 9.39 hot fix patch**|
+18.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.15.0-20-generic to 4.15.0-123-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-63-generic </br> 5.3.0-19-generic to 5.3.0-69-generic </br> 5.4.0-37-generic to 5.4.0-53-generic</br> 4.15.0-1009-azure to 4.15.0-1099-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1031-azure </br> 4.15.0-124-generic, 5.4.0-54-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 4.15.0-1100-azure, 4.15.0-126-generic, 4.15.0-128-generic, 5.4.0-58-generic, 4.15.0-1102-azure, 5.4.0-1034-azure through 9.39 hot fix patch**|
18.04 LTS | [9.38](https://support.microsoft.com/help/4590304/) | 4.15.0-20-generic to 4.15.0-118-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-61-generic </br> 5.3.0-19-generic to 5.3.0-67-generic </br> 5.4.0-37-generic to 5.4.0-48-generic</br> 4.15.0-1009-azure to 4.15.0-1096-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1026-azure </br> 4.15.0-121-generic, 4.15.0-122-generic, 5.0.0-62-generic, 5.3.0-68-generic, 5.4.0-51-generic, 5.4.0-52-generic, 4.15.0-1099-azure, 5.4.0-1031-azure through 9.38 hot fix patch**| 18.04 LTS | [9.37](https://support.microsoft.com/help/4582666/) | 4.15.0-20-generic to 4.15.0-115-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-60-generic </br> 5.3.0-19-generic to 5.3.0-66-generic </br> 5.4.0-37-generic to 5.4.0-45-generic</br> 4.15.0-1009-azure to 4.15.0-1093-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1023-azure</br> 4.15.0-117-generic, 4.15.0-118-generic, 5.0.0-61-generic, 5.3.0-67-generic, 5.4.0-47-generic, 5.4.0-48-generic, 4.15.0-1095-azure, 4.15.0-1096-azure, 5.4.0-1025-azure, 5.4.0-1026-azure through 9.37 hot fix patch**| 18.04 LTS | [9.36](https://support.microsoft.com/help/4578241/) | 4.15.0-20-generic to 4.15.0-112-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-58-generic </br> 5.3.0-19-generic to 5.3.0-65-generic </br> 5.4.0-37-generic to 5.4.0-42-generic</br> 4.15.0-1009-azure to 4.15.0-1092-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1032-azure </br> 5.4.0-1020-azure to 5.4.0-1022-azure </br> 5.0.0-60-generic & 5.3.0-1035-azure through 9.36 hot fix patch**| |||
-20.04 LTS |[9.39](https://support.microsoft.com/help/4597409/) | 5.4.0-26-generic to 5.4.0-53 </br> -generic 5.4.0-1010-azure to 5.4.0-1031-azure </br> 5.4.0-54-generic, 5.8.0-29-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 5.8.0-31-generic through 9.39 hot fix patch**
+20.04 LTS |[9.39](https://support.microsoft.com/help/4597409/) | 5.4.0-26-generic to 5.4.0-53-generic </br> 5.4.0-1010-azure to 5.4.0-1031-azure </br> 5.4.0-54-generic, 5.8.0-29-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 5.8.0-31-generic, 5.8.0-33-generic, 5.4.0-58-generic, 5.4.0-1034-azure through 9.39 hot fix patch**
20.04 LTS |[9.38](https://support.microsoft.com/help/4590304/) | 5.4.0-26-generic to 5.4.0-48-generic </br> 5.4.0-1010-azure to 5.4.0-1026-azure </br> 5.4.0-51-generic, 5.4.0-52-generic, 5.8.0-23-generic, 5.8.0-25-generic, 5.4.0-1031-azure through 9.38 hot fix patch** 20.04 LTS |[9.37](https://support.microsoft.com/help/4582666/) | 5.4.0-26-generic to 5.4.0-45-generic </br> 5.4.0-1010-azure to 5.4.0-1023-azure </br> 5.4.0-47-generic, 5.4.0-48-generic, 5.4.0-1025-azure, 5.4.0-1026-azure through 9.37 hot fix patch** 20.04 LTS |[9.36](https://support.microsoft.com/help/4578241/) | 5.4.0-26-generic to 5.4.0-42-generic </br> 5.4.0-1010-azure to 5.4.0-1022-azure
@@ -145,7 +145,7 @@ Debian 7 | [9.35](https://support.microsoft.com/help/4573888/), [9.36](https://s
Debian 8 | [9.35](https://support.microsoft.com/help/4573888/, ), [9.36](https://support.microsoft.com/help/4578241/), [9.37](https://support.microsoft.com/help/4582666/), [9.38](https://support.microsoft.com/help/4590304/), [9.39](https://support.microsoft.com/help/4597409/) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 | Debian 8 | [9.34](https://support.microsoft.com/help/4570609) | 3.16.0-4-amd64 to 3.16.0-10-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 | |||
-Debian 9.1 | [9.39](https://support.microsoft.com/help/4597409/) | 4.9.0-1-amd64 to 4.9.0-14-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.12-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.12-cloud-amd64 </br>
+Debian 9.1 | [9.39](https://support.microsoft.com/help/4597409/) | 4.9.0-1-amd64 to 4.9.0-14-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.12-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.12-cloud-amd64 </br> 4.19.0-0.bpo.13-amd64, 4.19.0-0.bpo.13-cloud-amd64 through 9.39 hot fix patch**</br>
Debian 9.1 | [9.38](https://support.microsoft.com/help/4590304/) | 4.9.0-1-amd64 to 4.9.0-13-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.11-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.11-cloud-amd64 </br> 4.9.0-14-amd64, 4.19.0-0.bpo.12-amd64, 4.19.0-0.bpo.12-cloud-amd64 through 9.38 hot fix patch** Debian 9.1 | [9.37](https://support.microsoft.com/help/4582666/) | 4.9.0-3-amd64 to 4.9.0-13-amd64, 4.19.0-0.bpo.6-amd64 to 4.19.0-0.bpo.10-amd64, 4.19.0-0.bpo.6-cloud-amd64 to 4.19.0-0.bpo.10-cloud-amd64
@@ -153,7 +153,7 @@ Debian 9.1 | [9.37](https://support.microsoft.com/help/4582666/) | 4.9.0-3-amd64
**Release** | **Mobility service version** | **Kernel version** | --- | --- | --- |
-SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.39](https://support.microsoft.com/help/4597409/) | All [stock SUSE 12 SP1,SP2,SP3,SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.34-azure |
+SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.39](https://support.microsoft.com/help/4597409/) | All [stock SUSE 12 SP1,SP2,SP3,SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.34-azure </br> 4.12.14-16.38-azure through 9.39 hot fix patch**|
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.38](https://support.microsoft.com/help/4590304/) | All [stock SUSE 12 SP1,SP2,SP3,SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.28-azure | SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.36](https://support.microsoft.com/help/4578241/), [9.37](https://support.microsoft.com/help/4582666/) | All [stock SUSE 12 SP1,SP2,SP3,SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.22-azure </br> 4.12.14-16.25-azure, 4.12.14-16.28-azure through 9.37 hot fix patch**| SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.35](https://support.microsoft.com/help/4573888/) | All [stock SUSE 12 SP1,SP2,SP3,SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.19-azure |
@@ -162,7 +162,7 @@ SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.35](https://support.
**Release** | **Mobility service version** | **Kernel version** | --- | --- | --- |
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.39](https://support.microsoft.com/help/4597409/) | By default, all [stock SUSE 15 and 15 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.47-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.21-azure </br> 4.12.14-8.52-azure, 5.3.18-18.24-azure through 9.39 hot fix patch**
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.39](https://support.microsoft.com/help/4597409/) | By default, all [stock SUSE 15 and 15 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.47-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.21-azure </br> 4.12.14-8.52-azure, 5.3.18-18.24-azure, 4.12.14-8.55-azure, 5.3.18-18.29-azure through 9.39 hot fix patch**
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.38](https://support.microsoft.com/help/4590304/) | By default, all [stock SUSE 15 and 15 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.44-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.18-azure </br> 4.12.14-8.47-azure, 5.3.18-18.21-azure through 9.38 hot fix patch** SUSE Linux Enterprise Server 15 and 15 SP1 | [9.36](https://support.microsoft.com/help/4578241/), [9.37](https://support.microsoft.com/help/4582666/) | By default, all [stock SUSE 15 and 15 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.38-azure </br> 4.12.14-8.41-azure, 4.12.14-8.44-azure through 9.37 hot fix patch** SUSE Linux Enterprise Server 15 and 15 SP1 | [9.35](https://support.microsoft.com/help/4573888/) | By default, all [stock SUSE 15 and 15 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.33-azure
spatial-anchors https://docs.microsoft.com/en-us/azure/spatial-anchors/how-tos/create-locate-anchors-unity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/how-tos/create-locate-anchors-unity.md
@@ -188,13 +188,14 @@ Learn more about the [CloudSpatialAnchor](/dotnet/api/microsoft.azure.spatialanc
Quaternion rotation = Quaternion.AngleAxis(0, Vector3.up); this.localAnchor = GameObject.Instantiate(/* some prefab */, hitPosition, rotation);
- this.localAnchor.CreateNativeAnchor();
+ this.localAnchor.AddComponent<CloudNativeAnchor>();
// If the user is placing some application content in their environment, // you might show content at this anchor for a while, then save when // the user confirms placement.
- CloudSpatialAnchor cloudAnchor = new CloudSpatialAnchor();
- cloudAnchor.LocalAnchor = this.localAnchor.GetNativeSpatialAnchorPtr();
+ CloudNativeAnchor cloudNativeAnchor = this.localAnchor.GetComponent<CloudNativeAnchor>();
+ if (cloudNativeAnchor.CloudAnchor == null) { cloudNativeAnchor.NativeToCloud(); }
+ CloudSpatialAnchor cloudAnchor = cloudNativeAnchor.CloudAnchor;
await this.cloudSession.CreateAnchorAsync(cloudAnchor); this.feedback = $"Created a cloud anchor with ID={cloudAnchor.Identifier}"; ```
static-web-apps https://docs.microsoft.com/en-us/azure/static-web-apps/publish-jekyll https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/publish-jekyll.md
@@ -145,7 +145,7 @@ Next, you add configuration settings that the build process uses to build your a
```yml - name: Set up Ruby
- uses: ruby/setup-ruby@ec106b438a1ff6ff109590de34ddc62c540232e0
+ uses: ruby/setup-ruby@v1.59.1
with: ruby-version: 2.6 - name: Install dependencies
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-redundancy.md
@@ -60,8 +60,8 @@ The following table shows which types of storage accounts support ZRS in which r
| Storage account type | Supported regions | Supported services | |--|--|--| | General-purpose v2<sup>1</sup> | Asia Southeast<br /> Australia East<br /> Europe North<br /> Europe West<br /> France Central<br /> Japan East<br /> South Africa North<br /> UK South<br /> US Central<br /> US East<br /> US East 2<br /> US West 2 | Block blobs<br /> Page blobs<sup>2</sup><br /> File shares (standard)<br /> Tables<br /> Queues<br /> |
-| BlockBlobStorage<sup>1</sup> | Asia Southeast<br /> Australia East<br /> Europe North<br /> Europe West<br /> US East <br /> US East 2 <br /> US West 2| Premium block blobs only |
-| FileStorage | Asia Southeast<br /> Australia East<br /> Europe North<br /> Europe West<br /> US East <br /> US East 2 <br /> US West 2 | Premium files shares only |
+| BlockBlobStorage<sup>1</sup> | Asia Southeast<br /> Australia East<br /> Europe North<br /> Europe West<br /> Japan East<br /> US East <br /> US East 2 <br /> US West 2| Premium block blobs only |
+| FileStorage | Asia Southeast<br /> Australia East<br /> Europe North<br /> Europe West<br /> Japan East<br /> US East <br /> US East 2 <br /> US West 2 | Premium file shares only |
<sup>1</sup> The archive tier is not currently supported for ZRS accounts.<br /> <sup>2</sup> Storage accounts that contain Azure managed disks for virtual machines always use LRS. Azure unmanaged disks should also use LRS. It is possible to create a storage account for Azure unmanaged disks that uses GRS, but it is not recommended due to potential issues with consistency over asynchronous geo-replication. Neither managed nor unmanaged disks support ZRS or GZRS. For more information on managed disks, see [Pricing for Azure managed disks](https://azure.microsoft.com/pricing/details/managed-disks/).
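+For illustration, creating a ZRS account in one of the listed regions with the Azure CLI might look like the following sketch; the account and resource group names are placeholders.
+
+```bash
+# Illustrative: a general-purpose v2 account with ZRS in Japan East.
+az storage account create \
+  --name mystoragezrs \
+  --resource-group myResourceGroup \
+  --location japaneast \
+  --kind StorageV2 \
+  --sku Standard_ZRS
+```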
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-auth-active-directory-domain-service-enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
@@ -5,7 +5,7 @@ author: roygara
ms.service: storage ms.topic: how-to
-ms.date: 04/21/2020
+ms.date: 01/03/2021
ms.author: rogarana ms.subservice: files ms.custom: contperf-fy21q1, devx-track-azurecli
@@ -18,7 +18,7 @@ ms.custom: contperf-fy21q1, devx-track-azurecli
If you are new to Azure file shares, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles. > [!NOTE]
-> Azure Files supports Kerberos authentication with Azure AD DS with RC4-HMAC encryption. AES Kerberos encryption is not yet supported.
+> Azure Files supports Kerberos authentication with Azure AD DS with RC4-HMAC only. AES Kerberos encryption is not yet supported.
> Azure Files supports authentication for Azure AD DS with full synchronization with Azure AD. If you have enabled scoped synchronization in Azure AD DS which only sync a limited set of identities from Azure AD, authentication and authorization is not supported. ## Prerequisites
@@ -55,7 +55,7 @@ Before you enable Azure AD over SMB for Azure file shares, make sure you have co
## Regional availability
-Azure Files authentication with Azure AD DS is available in [all Azure Public and Gov regions](https://azure.microsoft.com/global-infrastructure/locations/).
+Azure Files authentication with Azure AD DS is available in [all Azure Public, Gov, and China regions](https://azure.microsoft.com/global-infrastructure/locations/).
## Overview of the workflow
@@ -145,4 +145,4 @@ You have now successfully enabled Azure AD DS authentication over SMB and assign
For more information about Azure Files and how to use Azure AD over SMB, see these resources: - [Overview of Azure Files identity-based authentication support for SMB access](storage-files-active-directory-overview.md)-- [FAQ](storage-files-faq.md)\ No newline at end of file
+- [FAQ](storage-files-faq.md)
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-deployment-guide.md
@@ -519,7 +519,7 @@ The recommended steps to onboard on Azure File Sync for the first with zero down
If you don't have extra storage for initial onboarding and would like to attach to the existing shares, you can pre-seed the data in the Azure files shares. This approach is suggested, if and only if you can accept downtime and absolutely guarantee no data changes on the server shares during the initial onboarding process. 1. Ensure that data on any of the servers can't change during the onboarding process.
-1. Pre-seed Azure file shares with the server data using any data transfer tool over the SMB. Robocopy, for example. YOu can also use AzCopy over REST. Be sure to use AzCopy with the appropriate switches to preserve ACLs timestamps and attributes.
+1. Pre-seed the Azure file shares with the server data using any data transfer tool that works over SMB, such as Robocopy. You can also use AzCopy over REST. Be sure to use AzCopy with the appropriate switches to preserve ACLs, timestamps, and attributes (see the sketch after this list).
1. Create Azure File Sync topology with the desired server endpoints pointing to the existing shares. 1. Let sync finish reconciliation process on all endpoints. 1. Once reconciliation is complete, you can open shares for changes.
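+As a sketch of the pre-seeding step, an AzCopy v10 invocation that preserves ACLs, timestamps, and attributes might look like the following; the source path, storage account, share name, and SAS token are placeholders, and both preserve flags assume AzCopy is run on Windows against an Azure file share.
+
+```bash
+# Illustrative pre-seeding copy with metadata-preserving switches.
+azcopy copy "C:\ServerShare\*" \
+  "https://mystorageaccount.file.core.windows.net/myshare?<SAS>" \
+  --recursive=true \
+  --preserve-smb-permissions=true \
+  --preserve-smb-info=true
+```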
@@ -625,4 +625,4 @@ For more information, see [Azure File Sync interop with Distributed File System
## Next steps - [Add or remove an Azure File Sync Server Endpoint](storage-sync-files-server-endpoint.md) - [Register or unregister a server with Azure File Sync](storage-sync-files-server-registration.md)-- [Monitor Azure File Sync](storage-sync-files-monitoring.md)\ No newline at end of file
+- [Monitor Azure File Sync](storage-sync-files-monitoring.md)
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-troubleshoot-windows-file-connection-problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
@@ -400,6 +400,8 @@ The cmdlet performs these checks below in sequence and provides guidance for fai
5. CheckSidHasAadUser: Check that the logged on AD user is synced to Azure AD. If you want to look up whether a specific AD user is synchronized to Azure AD, you can specify the -UserName and -Domain in the input parameters. 6. CheckGetKerberosTicket: Attempt to get a Kerberos ticket to connect to the storage account. If there isn't a valid Kerberos token, run the klist get cifs/storage-account-name.file.core.windows.net cmdlet and examine the error code to root-cause the ticket retrieval failure. 7. CheckStorageAccountDomainJoined: Check if the AD authentication has been enabled and the account's AD properties are populated. If not, refer to the instruction [here](./storage-files-identity-ad-ds-enable.md) to enable AD DS authentication on Azure Files.
+8. CheckUserRbacAssignment: Check if the AD user has the proper RBAC role assignment to provide share-level permissions to access Azure Files. If not, refer to the instructions [here](https://docs.microsoft.com/azure/storage/files/storage-files-identity-ad-ds-assign-permissions) to configure the share-level permissions. (Supported on AzFilesHybrid v0.2.3+)
+9. CheckUserFileAccess: Check if the AD user has the proper directory/file permissions (Windows ACLs) to access Azure Files. If not, refer to the instructions [here](https://docs.microsoft.com/azure/storage/files/storage-files-identity-ad-ds-configure-permissions) to configure the directory/file-level permissions. (Supported on AzFilesHybrid v0.2.3+)
## Unable to configure directory/file level permissions (Windows ACLs) with Windows File Explorer
@@ -436,4 +438,4 @@ Update-AzStorageAccountAuthForAES256 -ResourceGroupName $ResourceGroupName -Stor
## Need help? Contact support.
-If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
\ No newline at end of file
+If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/fslogix-containers-azure-files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/fslogix-containers-azure-files.md
@@ -3,7 +3,7 @@ title: Windows Virtual Desktop FSLogix profile containers files - Azure
description: This article describes FSLogix profile containers within Windows Virtual Desktop and Azure files. author: Heidilohr ms.topic: conceptual
-ms.date: 08/07/2019
+ms.date: 01/04/2021
ms.author: helohr manager: lizross ---
@@ -65,7 +65,7 @@ S2D clusters require an operating system that is patched, updated, and maintaine
On November 19, 2018, [Microsoft acquired FSLogix](https://blogs.microsoft.com/blog/2018/11/19/microsoft-acquires-fslogix-to-enhance-the-office-365-virtualization-experience/). FSLogix addresses many profile container challenges. Key among them are: - **Performance:** The [FSLogix profile containers](/fslogix/configure-profile-container-tutorial/) are high performance and resolve performance issues that have historically blocked cached exchange mode.-- **OneDrive:** Without FSLogix profile containers, OneDrive for Business is not supported in non-persistent RDSH or VDI environments. [OneDrive for Business and FSLogix best practices](/fslogix/overview/) describes how they interact. For more information, see [Use the sync client on virtual desktops](/deployoffice/rds-onedrive-business-vdi/).
+- **OneDrive:** Without FSLogix profile containers, OneDrive for Business is not supported in non-persistent RDSH or VDI environments. The [OneDrive VDI support page](/onedrive/sync-vdi-support) will tell you how they interact. For more information, see [Use the sync client on virtual desktops](/deployoffice/rds-onedrive-business-vdi/).
- **Additional folders:** FSLogix provides the ability to extend user profiles to include additional folders. Since the acquisition, Microsoft started replacing existing user profile solutions, like UPD, with FSLogix profile containers.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/custom-script-windows.md
@@ -56,7 +56,7 @@ If your script is on a local server, then you may still need additional firewall
* There's 90 minutes allowed for the script to run, anything longer will result in a failed provision of the extension. * Don't put reboots inside the script, this action will cause issues with other extensions that are being installed. Post reboot, the extension won't continue after the restart. * If you have a script that will cause a reboot, then install applications and run scripts, you can schedule the reboot using a Windows Scheduled Task, or use tools such as DSC, Chef, or Puppet extensions.
-* It is not recommended to run a script that will cause a stop or update of the VM Agent. This can let the extension in a Transitioning state, leading to a timeout.
+* It is not recommended to run a script that will cause a stop or update of the VM Agent. This can leave the extension in a Transitioning state, leading to a timeout.
* The extension will only run a script once, if you want to run a script on every boot, then you need to use the extension to create a Windows Scheduled Task. * If you want to schedule when a script will run, you should use the extension to create a Windows Scheduled Task. * When the script is running, you will only see a 'transitioning' extension status from the Azure portal or CLI. If you want more frequent status updates of a running script, you'll need to create your own solution.
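+Putting these tips together, a hedged example of deploying the extension with the Azure CLI follows; the resource names and script URI are placeholders.
+
+```bash
+# Illustrative deployment of the Custom Script Extension for Windows.
+az vm extension set \
+  --resource-group myResourceGroup \
+  --vm-name myVM \
+  --name CustomScriptExtension \
+  --publisher Microsoft.Compute \
+  --settings '{"fileUris":["https://example.com/scripts/configure.ps1"],"commandToExecute":"powershell -ExecutionPolicy Unrestricted -File configure.ps1"}'
+```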
@@ -360,4 +360,4 @@ Path information after the first URI segment is kept for files downloaded via th
### Support
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). You can also file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
\ No newline at end of file
+If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). You can also file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/azure-hybrid-benefit-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
@@ -205,6 +205,10 @@ A: No, you can't. Reserved instances aren't currently in the scope of Azure Hybr
*Q: Can I use Azure Hybrid Benefit on a virtual machine deployed for SQL Server on RHEL images?* A: No, you can't. There is no plan for supporting these virtual machines.+
+*Q: Can I use Azure Hybrid Benefit on my RHEL Virtual Data Center subscription?*
+
+A: No, you cannot. The RHEL Virtual Data Center subscription is not supported on Azure at all, including through Azure Hybrid Benefit.
## Common problems
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cloudinit-prepare-custom-image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/cloudinit-prepare-custom-image.md
@@ -21,7 +21,7 @@ You need to SSH into your Linux VM and run the following commands in order to in
```bash sudo yum makecache fast sudo yum install -y gdisk cloud-utils-growpart
-sudo yum install - y cloud-init
+sudo yum install -y cloud-init
``` Update the `cloud_init_modules` section in `/etc/cloud/cloud.cfg` to include the following modules:
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/instance-metadata-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/instance-metadata-service.md
@@ -11,27 +11,20 @@ ms.workload: infrastructure-services
ms.date: 04/29/2020 ms.author: sukumari ms.reviewer: azmetadatadev
+ms.custom: references_regions
--- # Azure Instance Metadata Service (IMDS) The Azure Instance Metadata Service (IMDS) provides information about currently running virtual machine instances. You can use it to manage and configure your virtual machines.
-This information includes the SKU, storage, network configurations, and upcoming maintenance events. For a complete list of the data available, see [metadata APIs](#metadata-apis).
+This information includes the SKU, storage, network configurations, and upcoming maintenance events. For a complete list of the data available, see the [Endpoint Categories Summary](#endpoint-categories).
-IMDS is available for running instances of virtual machines (VMs) and virtual machine scale set instances. All APIs support VMs created and managed by using [Azure Resource Manager](/rest/api/resources/). Only
-the attested and network endpoints support VMs created by using the classic deployment model. The attested endpoint does so only to a limited extent.
+IMDS is available for running instances of virtual machines (VMs) and virtual machine scale set instances. All endpoints support VMs created and managed by using [Azure Resource Manager](/rest/api/resources/). Only
+the Attested category and Network portion of the Instance category support VMs created by using the classic deployment model. The Attested endpoint does so only to a limited extent.
-IMDS is a REST endpoint that's available at a well-known, non-routable IP address (`169.254.169.254`). You access it only from within the VM. Communication between the VM and IMDS never leaves the host.
+IMDS is a REST API that's available at a well-known, non-routable IP address (`169.254.169.254`). You can only access it from within the VM. Communication between the VM and IMDS never leaves the host.
Have your HTTP clients bypass web proxies within the VM when querying IMDS, and treat `169.254.169.254` the same as [`168.63.129.16`](../../virtual-network/what-is-ip-address-168-63-129-16.md).
-## Security
-
-The IMDS endpoint is accessible only from within the running virtual machine instance on a non-routable IP address. In addition, any request with an `X-Forwarded-For` header is rejected by the service.
-Requests must also contain a `Metadata: true` header, to ensure that the actual request was directly intended and not a part of unintentional redirection.
-
-> [!IMPORTANT]
-> IMDS isn't a channel for sensitive data. The endpoint is open to all processes on the VM. Consider information exposed through this service as shared information to all applications running inside the VM.
- ## Usage ### Access Azure Instance Metadata Service
@@ -39,10 +32,13 @@ Requests must also contain a `Metadata: true` header, to ensure that the actual
To access IMDS, create a VM from [Azure Resource Manager](/rest/api/resources/) or the [Azure portal](https://portal.azure.com), and use the following samples. For more examples, see [Azure Instance Metadata Samples](https://github.com/microsoft/azureimds).
-Here's the sample code to retrieve all metadata for an instance. To access a specific data source, see the [Metadata API](#metadata-apis) section.
+Here's sample code to retrieve all metadata for an instance. To access a specific data source, see [Endpoint Categories](#endpoint-categories) for an overview of all available features.
**Request**
+> [!IMPORTANT]
+> This example bypasses proxies. You **must** bypass proxies when querying IMDS. See [Proxies](#proxies) for additional information.
+ ```bash curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2020-09-01" ```
@@ -173,17 +169,136 @@ curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?ap
} ```
-### Data output
+## Security and authentication
+
+The Instance Metadata Service is only accessible from within a running virtual machine instance on a non-routable IP address. VMs are limited to interacting with metadata/functionality that pertains to themselves. The API is HTTP only and never leaves the host.
+
+In order to ensure that requests are directly intended for IMDS and prevent unintended or unwanted redirection of requests, requests:
+- **Must** contain the header `Metadata: true`
+- Must **not** contain an `X-Forwarded-For` header
+
+Any request that does not meet **both** of these requirements will be rejected by the service.
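+To make the requirements concrete, the first request below satisfies both rules while the second omits the mandatory header and is rejected (a sketch, assuming a current `api-version`):
+
+```bash
+# Accepted: Metadata header present, no X-Forwarded-For header.
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2020-09-01"
+
+# Rejected: the mandatory Metadata header is missing.
+curl --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2020-09-01"
+```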
+
+> [!IMPORTANT]
+> IMDS is **not** a channel for sensitive data. The API is unauthenticated and open to all processes on the VM. Information exposed through this service should be considered as shared information to all applications running inside the VM.
+
+## Proxies
+
+IMDS is **not** intended to be used behind a proxy, and doing so is unsupported. Most HTTP clients provide an option to disable proxies on your requests; you must use that option when communicating with IMDS. Consult your client's documentation for details.
+
+> [!IMPORTANT]
+> Even if you don't know of any proxy configuration in your environment, **you still must override any default client proxy settings**. Proxy configurations can be automatically discovered, and failing to bypass such configurations exposes you to outage risks should the machine's configuration be changed in the future.
+
+## Rate limiting
+
+In general, requests to IMDS are limited to 5 requests per second. Requests exceeding this threshold will be rejected with 429 responses. Requests to the [Managed Identity](#managed-identity) category are limited to 20 requests per second and 5 concurrent requests.
+
+## HTTP verbs
+
+The following HTTP verbs are currently supported:
+
+| Verb | Description |
+|------|-------------|
+| `GET` | Retrieve the requested resource
+
+## Parameters
-By default, IMDS returns data in JSON format (`Content-Type: application/json`). However, some APIs can return data in different formats, if requested.
-The following table lists other data formats that APIs might support.
+Endpoints may support required and/or optional parameters. See [Schema](#schema) and the documentation for the specific endpoint in question for details.
-API | Default data format | Other formats
-/attested | json | none
-/identity | json | none
-/instance | json | text
-/scheduledevents | json | none
+### Query parameters
+
+IMDS endpoints support HTTP query string parameters. For example:
+
+```bash
+http://169.254.169.254/metadata/instance/compute?api-version=2019-06-04&format=json
+```
+
+This specifies the parameters:
+
+| Name | Value |
+|------|-------|
+| `api-version` | `2019-06-04`
+| `format` | `json`
+
+Requests with duplicate query parameter names will be rejected.
+
+### Route parameters
+
+For some endpoints that return larger json blobs, we support appending route parameters to the request endpoint to filter down to a subset of the response:
+
+```bash
+http://169.254.169.254/metadata/<endpoint>/[<filter parameter>/...]?<query parameters>
+```
+The parameters correspond to the indexes/keys that you would use to walk down the JSON object if you were interacting with a parsed representation.
+
+For example, `/metadata/instance` returns the JSON object:
+```json
+{
+ "compute": { ... },
+ "network": {
+ "interface": [
+ {
+ "ipv4": {
+ "ipAddress": [{
+ "privateIpAddress": "10.144.133.132",
+ "publicIpAddress": ""
+ }],
+ "subnet": [{
+ "address": "10.144.133.128",
+ "prefix": "26"
+ }]
+ },
+ "ipv6": {
+ "ipAddress": [
+ ]
+ },
+ "macAddress": "0011AAFFBB22"
+ },
+ ...
+ ]
+ }
+}
+```
+
+If we want to filter the response down to just the compute property, we would send the request:
+```bash
+http://169.254.169.254/metadata/instance/compute?api-version=<version>
+```
+
+Similarly, if we want to filter to a nested property or specific array element we keep appending keys:
+```bash
+http://169.254.169.254/metadata/instance/network/interface/0?api-version=<version>
+```
+would filter to the first element of the `network.interface` property and return:
+
+```json
+{
+ "ipv4": {
+ "ipAddress": [{
+ "privateIpAddress": "10.144.133.132",
+ "publicIpAddress": ""
+ }],
+ "subnet": [{
+ "address": "10.144.133.128",
+ "prefix": "26"
+ }]
+ },
+ "ipv6": {
+ "ipAddress": [
+ ]
+ },
+ "macAddress": "0011AAFFBB22"
+}
+```
+
+> [!NOTE]
+> When filtering to a leaf node, `format=json` doesn't work. For these queries `format=text` needs to be explicitly specified since the default format is json.
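+For example, a leaf-node query with the explicit text format might look like this (assuming an `api-version` that's available in your region):
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/network/interface/0/macAddress?api-version=2020-09-01&format=text"
+```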
+
+## Schema
+
+### Data format
+
+By default, IMDS returns data in JSON format (`Content-Type: application/json`). However, endpoints that support response filtering (see [Route Parameters](#route-parameters)) also support the format `text`.
To access a non-default response format, specify the requested format as a query string parameter in the request. For example:
@@ -191,14 +306,27 @@ To access a non-default response format, specify the requested format as a query
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2017-08-01&format=text" ```
-> [!NOTE]
-> For leaf nodes in `/metadata/instance`, the `format=json` doesn't work. For these queries, `format=text` needs to be explicitly specified because the default format is JSON.
+In json responses, all primitives will be of type `string`, and missing or inapplicable values are always included but will be set to an empty string.
+
+### Versioning
+
+IMDS is versioned and specifying the API version in the HTTP request is mandatory. The only exception to this requirement is the [versions](#versions) endpoint, which can be used to dynamically retrieve the available API versions.
-### Version
+As newer versions are added, older versions can still be accessed for compatibility if your scripts have dependencies on specific data formats.
-IMDS is versioned, and specifying the API version in the HTTP request is mandatory.
+When you don't specify a version, you get an error with a list of the newest supported versions:
+```json
+{
+ "error": "Bad request. api-version was not specified in the request. For more information refer to aka.ms/azureimds",
+ "newest-versions": [
+ "2020-10-01",
+ "2020-09-01",
+ "2020-07-15"
+ ]
+}
+```
-The supported API versions are:
+#### Supported API versions
- 2017-03-01 - 2017-04-02 - 2017-08-01
@@ -221,88 +349,309 @@ The supported API versions are:
- 2020-10-01 > [!NOTE]
-> Version 2020-10-01 might not yet be available in every region.
+> Version 2020-10-01 is currently being rolled out and may not yet be available in every region.
-As newer versions are added, you can still access older versions for compatibility if your scripts have dependencies on specific data formats.
+### Swagger
-When you don't specify a version, you get an error, with a list of the newest supported versions.
+A full Swagger definition for IMDS is available at: https://github.com/Azure/azure-rest-api-specs/blob/master/specification/imds/data-plane/readme.md
+
+## Regional availability
+
+The service is **generally available** in all Azure clouds.
+
+## Root endpoint
+
+The root endpoint is `http://169.254.169.254/metadata`.
+
+## Endpoint categories
+
+The IMDS API contains multiple endpoint categories representing different data sources, each of which contains one or more endpoints. See each category for details.
+
+| Category root | Description | Version introduced |
+|---------------|-------------|--------------------|
+| `/metadata/attested` | See [Attested Data](#attested-data) | 2018-10-01
+| `/metadata/identity` | See [Managed Identity via IMDS](#managed-identity) | 2018-02-01
+| `/metadata/instance` | See [Instance Metadata](#instance-metadata) | 2017-04-02
+| `/metadata/scheduledevents` | See [Scheduled Events via IMDS](#scheduled-events) | 2017-08-01
+| `/metadata/versions` | See [Versions](#versions) | N/A
+
+## Versions
> [!NOTE]
-> The response is a JSON string. The following example indicates the error condition when the version isn't specified. The response is pretty-printed for readability.
+> This feature was released alongside version 2020-10-01, which is currently being rolled out and may not yet be available in every region.
-**Request**
+### List API versions
+
+Returns the set of supported API versions.
```bash
-curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance"
+GET /metadata/versions
```
-**Response**
+#### Parameters
+
+None (this endpoint is unversioned).
+
+#### Response
```json
{
- "error": "Bad request. api-version was not specified in the request. For more information refer to aka.ms/azureimds",
- "newest-versions": [
- "2020-10-01",
- "2020-09-01",
- "2020-07-15"
- ]
+ "apiVersions": [
+ "2017-03-01",
+ "2017-04-02",
+ ...
+ ]
+}
+```
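+
+As a usage sketch (assuming the `Metadata: true` header is required here, as it is for the other endpoints):
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/versions"
+```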
+
+## Instance metadata
+
+### Get VM metadata
+
+Exposes the important metadata for the VM instance, including compute, network, and storage.
+
+```bash
+GET /metadata/instance
+```
+
+#### Parameters
+
+| Name | Required/Optional | Description |
+|------|-------------------|-------------|
+| `api-version` | Required | The version used to service the request.
+| `format` | Optional* | The format (`json` or `text`) of the response. *Note: May be required when using request parameters
+
+This endpoint supports response filtering via [route parameters](#route-parameters).
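+
+A minimal request sketch (the API version is one example from the supported list):
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2020-09-01"
+```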
+
+#### Response
+
+```json
+{
+ "compute": {
+ "azEnvironment": "AZUREPUBLICCLOUD",
+ "isHostCompatibilityLayerVm": "true",
+ "licenseType": "Windows_Client",
+ "location": "westus",
+ "name": "examplevmname",
+ "offer": "Windows",
+ "osProfile": {
+ "adminUsername": "admin",
+ "computerName": "examplevmname",
+ "disablePasswordAuthentication": "true"
+ },
+ "osType": "linux",
+ "placementGroupId": "f67c14ab-e92c-408c-ae2d-da15866ec79a",
+ "plan": {
+ "name": "planName",
+ "product": "planProduct",
+ "publisher": "planPublisher"
+ },
+ "platformFaultDomain": "36",
+ "platformUpdateDomain": "42",
+ "publicKeys": [{
+ "keyData": "ssh-rsa 0",
+ "path": "/home/user/.ssh/authorized_keys0"
+ },
+ {
+ "keyData": "ssh-rsa 1",
+ "path": "/home/user/.ssh/authorized_keys1"
+ }
+ ],
+ "publisher": "RDFE-Test-Microsoft-Windows-Server-Group",
+ "resourceGroupName": "macikgo-test-may-23",
+ "resourceId": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/virtualMachines/examplevmname",
+ "securityProfile": {
+ "secureBootEnabled": "true",
+ "virtualTpmEnabled": "false"
+ },
+ "sku": "Windows-Server-2012-R2-Datacenter",
+ "storageProfile": {
+ "dataDisks": [{
+ "caching": "None",
+ "createOption": "Empty",
+ "diskSizeGB": "1024",
+ "image": {
+ "uri": ""
+ },
+ "lun": "0",
+ "managedDisk": {
+ "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/disks/exampledatadiskname",
+ "storageAccountType": "Standard_LRS"
+ },
+ "name": "exampledatadiskname",
+ "vhd": {
+ "uri": ""
+ },
+ "writeAcceleratorEnabled": "false"
+ }],
+ "imageReference": {
+ "id": "",
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "16.04.0-LTS",
+ "version": "latest"
+ },
+ "osDisk": {
+ "caching": "ReadWrite",
+ "createOption": "FromImage",
+ "diskSizeGB": "30",
+ "diffDiskSettings": {
+ "option": "Local"
+ },
+ "encryptionSettings": {
+ "enabled": "false"
+ },
+ "image": {
+ "uri": ""
+ },
+ "managedDisk": {
+ "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/disks/exampleosdiskname",
+ "storageAccountType": "Standard_LRS"
+ },
+ "name": "exampleosdiskname",
+ "osType": "Linux",
+ "vhd": {
+ "uri": ""
+ },
+ "writeAcceleratorEnabled": "false"
+ }
+ },
+ "subscriptionId": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
+ "tags": "baz:bash;foo:bar",
+ "version": "15.05.22",
+ "vmId": "02aab8a4-74ef-476e-8182-f6d2ba4166a6",
+ "vmScaleSetName": "crpteste9vflji9",
+ "vmSize": "Standard_A3",
+ "zone": ""
+ },
+ "network": {
+ "interface": [{
+ "ipv4": {
+ "ipAddress": [{
+ "privateIpAddress": "10.144.133.132",
+ "publicIpAddress": ""
+ }],
+ "subnet": [{
+ "address": "10.144.133.128",
+ "prefix": "26"
+ }]
+ },
+ "ipv6": {
+ "ipAddress": [
+ ]
+ },
+ "macAddress": "0011AAFFBB22"
+ }]
+ }
}
```
-## Metadata APIs
-
-IMDS contains multiple APIs representing different data sources.
-
-API | Description | Version introduced
-/attested | See [Attested data](#attested-data) | 2018-10-01
-/identity | See [Acquire an access token](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) | 2018-02-01
-/instance | See [Instance API](#instance-api) | 2017-04-02
-/scheduledevents | See [Scheduled events](scheduled-events.md) | 2017-08-01
-
-## Instance API
-
-Instance API exposes the important metadata for the VM instances, including the VM, network, and storage.
-You can access the following categories through `instance/compute`:
-
-Data | Description | Version introduced
-azEnvironment | The Azure environment in which the VM is running. | 2018-10-01
-customData | This feature is currently disabled. | 2019-02-01
-isHostCompatibilityLayerVm | Identifies if the VM runs on the Host Compatibility Layer. | 2020-06-01
-licenseType | The type of license for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit). Note that this is only present for AHB-enabled VMs. | 2020-09-01
-location | The Azure region in which the VM is running. | 2017-04-02
-name | The name of the VM. | 2017-04-02
-offer | Offer information for the VM image. This is only present for images deployed from the Azure image gallery. | 2017-04-02
-osProfile.adminUsername | Specifies the name of the admin account. | 2020-07-15
-osProfile.computerName | Specifies the name of the computer. | 2020-07-15
-osProfile.disablePasswordAuthentication | Specifies if password authentication is disabled. Note that this is only present for Linux VMs. | 2020-10-01
-osType | Linux or Windows. | 2017-04-02
-placementGroupId | [Placement group](../../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md) of your virtual machine scale set. | 2017-08-01
-plan | [Plan](/rest/api/compute/virtualmachines/createorupdate#plan) containing the name, product, and publisher for a VM if it is an Azure Marketplace image. | 2018-04-02
-platformUpdateDomain | [Update domain](../manage-availability.md) in which the VM is running. | 2017-04-02
-platformFaultDomain | [Fault domain](../manage-availability.md) in which the VM is running. | 2017-04-02
-provider | The provider of the VM. | 2018-10-01
-publicKeys | [Collection of public keys](/rest/api/compute/virtualmachines/createorupdate#sshpublickey) assigned to the VM and paths. | 2018-04-02
-publisher | Publisher of the VM image. | 2017-04-02
-resourceGroupName | [Resource group](../../azure-resource-manager/management/overview.md) for your VM. | 2017-08-01
-resourceId | The [fully qualified](/rest/api/resources/resources/getbyid) ID of the resource. | 2019-03-11
-sku | The specific SKU for the VM image. | 2017-04-02
-securityProfile.secureBootEnabled | Identifies if UEFI secure boot is enabled on the VM. | 2020-06-01
-securityProfile.virtualTpmEnabled | Identifies if the virtual Trusted Platform Module (TPM) is enabled on the VM. | 2020-06-01
-storageProfile | See [Storage profile](#storage-metadata). | 2019-06-01
-subscriptionId | Azure subscription for the VM. | 2017-08-01
-tags | [Tags](../../azure-resource-manager/management/tag-resources.md) for your VM. | 2017-08-01
-tagsList | Tags formatted as a JSON array for easier programmatic parsing. | 2019-06-04
-version | The version of the VM image. | 2017-04-02
-vmId | [Unique identifier](https://azure.microsoft.com/blog/accessing-and-using-azure-vm-unique-id/) for the VM. | 2017-04-02
-vmScaleSetName | [Virtual machine scale set name](../../virtual-machine-scale-sets/overview.md) of your virtual machine scale set. | 2017-12-01
-vmSize | See [VM size](../sizes.md). | 2017-04-02
-zone | [Availability Zone](../../availability-zones/az-overview.md) of your VM. | 2017-12-01
-
-### Sample 1: Track a VM running on Azure
-
-As a service provider, you might need to track the number of VMs running your software, or have agents that need to track uniqueness of the VM. To be able to get a unique ID for a VM, use the `vmId` field from IMDS.
+Schema breakdown:
+
+**Compute**
+
+| Data | Description | Version introduced |
+|------|-------------|--------------------|
+| `azEnvironment` | Azure environment in which the VM is running | 2018-10-01
+| `customData` | This feature is currently disabled. We will update this documentation when it becomes available | 2019-02-01
+| `isHostCompatibilityLayerVm` | Identifies if the VM runs on the Host Compatibility Layer | 2020-06-01
+| `licenseType` | Type of license for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit). This is only present for AHB-enabled VMs | 2020-09-01
+| `location` | Azure Region the VM is running in | 2017-04-02
+| `name` | Name of the VM | 2017-04-02
+| `offer` | Offer information for the VM image; only present for images deployed from the Azure image gallery | 2017-04-02
+| `osProfile.adminUsername` | Specifies the name of the admin account | 2020-07-15
+| `osProfile.computerName` | Specifies the name of the computer | 2020-07-15
+| `osProfile.disablePasswordAuthentication` | Specifies if password authentication is disabled. This is only present for Linux VMs | 2020-10-01
+| `osType` | Linux or Windows | 2017-04-02
+| `placementGroupId` | [Placement Group](../../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md) of your virtual machine scale set | 2017-08-01
+| `plan` | [Plan](/rest/api/compute/virtualmachines/createorupdate#plan) containing name, product, and publisher for a VM if it is an Azure Marketplace Image | 2018-04-02
+| `platformUpdateDomain` | [Update domain](../manage-availability.md) the VM is running in | 2017-04-02
+| `platformFaultDomain` | [Fault domain](../manage-availability.md) the VM is running in | 2017-04-02
+| `provider` | Provider of the VM | 2018-10-01
+| `publicKeys` | [Collection of Public Keys](/rest/api/compute/virtualmachines/createorupdate#sshpublickey) assigned to the VM and paths | 2018-04-02
+| `publisher` | Publisher of the VM image | 2017-04-02
+| `resourceGroupName` | [Resource group](../../azure-resource-manager/management/overview.md) for your Virtual Machine | 2017-08-01
+| `resourceId` | The [fully qualified](/rest/api/resources/resources/getbyid) ID of the resource | 2019-03-11
+| `sku` | Specific SKU for the VM image | 2017-04-02
+| `securityProfile.secureBootEnabled` | Identifies if UEFI secure boot is enabled on the VM | 2020-06-01
+| `securityProfile.virtualTpmEnabled` | Identifies if the virtual Trusted Platform Module (TPM) is enabled on the VM | 2020-06-01
+| `storageProfile` | See Storage Profile below | 2019-06-01
+| `subscriptionId` | Azure subscription for the Virtual Machine | 2017-08-01
+| `tags` | [Tags](../../azure-resource-manager/management/tag-resources.md) for your Virtual Machine | 2017-08-01
+| `tagsList` | Tags formatted as a JSON array for easier programmatic parsing | 2019-06-04
+| `version` | Version of the VM image | 2017-04-02
+| `vmId` | [Unique identifier](https://azure.microsoft.com/blog/accessing-and-using-azure-vm-unique-id/) for the VM | 2017-04-02
+| `vmScaleSetName` | [Virtual machine scale set Name](../../virtual-machine-scale-sets/overview.md) of your virtual machine scale set | 2017-12-01
+| `vmSize` | [VM size](../sizes.md) | 2017-04-02
+| `zone` | [Availability Zone](../../availability-zones/az-overview.md) of your virtual machine | 2017-12-01
+
+**Storage profile**
+
+The storage profile of a VM is divided into three categories: image reference, OS disk, and data disks.
+
+The image reference object contains the following information about the OS image:
+
+| Data | Description |
+|------|-------------|
+| `id` | Resource ID
+| `offer` | Offer of the platform or marketplace image
+| `publisher` | Image publisher
+| `sku` | Image SKU
+| `version` | Version of the platform or marketplace image
+
+The OS disk object contains the following information about the OS disk used by the VM:
+
+| Data | Description |
+|------|-------------|
+| `caching` | Caching requirements
+| `createOption` | Information about how the VM was created
+| `diffDiskSettings` | Ephemeral disk settings
+| `diskSizeGB` | Size of the disk in GB
+| `image` | Source user image virtual hard disk
+| `lun` | Logical unit number of the disk
+| `managedDisk` | Managed disk parameters
+| `name` | Disk name
+| `vhd` | Virtual hard disk
+| `writeAcceleratorEnabled` | Whether or not writeAccelerator is enabled on the disk
+
+The data disks array contains a list of data disks attached to the VM. Each data disk object contains the following information:
+
+| Data | Description |
+|------|-------------|
+| `caching` | Caching requirements
+| `createOption` | Information about how the VM was created
+| `diffDiskSettings` | Ephemeral disk settings
+| `diskSizeGB` | Size of the disk in GB
+| `encryptionSettings` | Encryption settings for the disk
+| `image` | Source user image virtual hard disk
+| `managedDisk` | Managed disk parameters
+| `name` | Disk name
+| `osType` | Type of OS included in the disk
+| `vhd` | Virtual hard disk
+| `writeAcceleratorEnabled` | Whether or not writeAccelerator is enabled on the disk
+
+**Network**
+
+| Data | Description | Version introduced |
+|------|-------------|--------------------|
+| `ipv4.privateIpAddress` | Local IPv4 address of the VM | 2017-04-02
+| `ipv4.publicIpAddress` | Public IPv4 address of the VM | 2017-04-02
+| `subnet.address` | Subnet address of the VM | 2017-04-02
+| `subnet.prefix` | Subnet prefix. Example: 24 | 2017-04-02
+| `ipv6.ipAddress` | Local IPv6 address of the VM | 2017-04-02
+| `macAddress` | MAC address of the VM | 2017-04-02
+
+**VM tags**
+
+VM tags are included in the instance API, under the `instance/compute/tags` endpoint.
+Tags may have been applied to your Azure VMs to logically organize them into a taxonomy. You can retrieve the tags assigned to a VM by using the requests shown below.
+
+The `tags` field is a string with the tags delimited by semicolons. This output can be a problem if semicolons are used in the tags themselves. If a parser is written to programmatically extract the tags, you should rely on the `tagsList` field instead. The `tagsList` field is a JSON array with no delimiters, and is consequently easier to parse.
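+
+For example, the requests below retrieve both forms (the API versions match the table above):
+
+```bash
+# Tags as a single semicolon-delimited string
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/tags?api-version=2018-10-01&format=text"
+
+# Tags as a JSON array of name/value pairs
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/tagsList?api-version=2019-06-04"
+```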
+
+#### Sample 1: Tracking a VM running on Azure
+
+As a service provider, you may need to track the number of VMs running your software, or have agents that need to track uniqueness of the VM. To get a unique ID for a VM, use the `vmId` field from the Instance Metadata Service.
**Request**
@@ -316,7 +665,7 @@ curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
5c08b38e-4d57-4c23-ac45-aca61037f084
```
-### Sample 2: Placement of different data replicas
+#### Sample 2: Placement of different data replicas
For certain scenarios, placement of different data replicas is of prime importance. For example, [HDFS replica placement](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html#Replica_Placement:_The_First_Baby_Steps) or container placement via an [orchestrator](https://kubernetes.io/docs/user-guide/node-selection/) might require you to know the `platformFaultDomain` and `platformUpdateDomain` the VM is running on.
@@ -335,9 +684,9 @@ curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
0
```
-### Sample 3: Get more information about the VM during support case
+#### Sample 3: Get more information about the VM during support case
-As a service provider, you might get a support call where you want to know more information about the VM. Asking the customer to share the compute metadata can be useful in this case.
+As a service provider, you may get a support call where you would like to know more about the VM. Asking the customer to share the compute metadata gives the support professional basic information about the kind of VM running on Azure.
**Request**
@@ -450,9 +799,9 @@ curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
}
```
-### Sample 4: Get the Azure environment where the VM is running
+#### Sample 4: Get the Azure Environment where the VM is running
-Azure has various sovereign clouds, like [Azure Government](https://azure.microsoft.com/overview/clouds/government/). Sometimes you need the Azure environment to make some runtime decisions. The following sample shows you how you can achieve this behavior.
+Azure has various sovereign clouds like [Azure Government](https://azure.microsoft.com/overview/clouds/government/). Sometimes you need the Azure Environment to make some runtime decisions. The following sample shows you how you can achieve this behavior.
**Request**
@@ -468,30 +817,15 @@ AzurePublicCloud
The cloud and the values of the Azure environment are listed here.
- Cloud | Azure environment
-[All generally available global Azure regions](https://azure.microsoft.com/regions/) | AzurePublicCloud
-[Azure Government](https://azure.microsoft.com/overview/clouds/government/) | AzureUSGovernmentCloud
-[Azure China 21Vianet](https://azure.microsoft.com/global-infrastructure/china/) | AzureChinaCloud
-[Azure Germany](https://azure.microsoft.com/overview/clouds/germany/) | AzureGermanCloud
+| Cloud | Azure environment |
+|-------|-------------------|
+| [All generally available global Azure regions](https://azure.microsoft.com/regions/) | AzurePublicCloud
+| [Azure Government](https://azure.microsoft.com/overview/clouds/government/) | AzureUSGovernmentCloud
+| [Azure China 21Vianet](https://azure.microsoft.com/global-infrastructure/china/) | AzureChinaCloud
+| [Azure Germany](https://azure.microsoft.com/overview/clouds/germany/) | AzureGermanCloud
-## Network metadata
-Network metadata is part of the instance API. The following network categories are available through the `instance/network` endpoint.
-
-Data | Description | Version introduced
-ipv4/privateIpAddress | The local IPv4 address of the VM. | 2017-04-02
-ipv4/publicIpAddress | The public IPv4 address of the VM. | 2017-04-02
-subnet/address | The subnet address of the VM. | 2017-04-02
-subnet/prefix | The subnet prefix. Example: 24 | 2017-04-02
-ipv6/ipAddress | The local IPv6 address of the VM. | 2017-04-02
-macAddress | The VM mac address. | 2017-04-02
-
-> [!NOTE]
-> All API responses are JSON strings. All following example responses are pretty-printed for readability.
-
-#### Sample 1: Retrieve network information
+#### Sample 5: Retrieve network information
**Request**
@@ -526,232 +860,93 @@ curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/ne
    }
  ]
}
```
-#### Sample 2: Retrieve public IP address
+#### Sample 6: Retrieve public IP address
```bash
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text"
```
-## Storage metadata
-
-Storage metadata is part of the instance API, under `instance/compute/storageProfile` endpoint.
-It provides details about the storage disks associated with the VM.
-
-The storage profile of a VM is divided into three categories: image reference, operating system disk, and data disks.
-
-The image reference object contains the following information about the operating system image:
-
-Data | Description
-id | Resource ID
-offer | Offer of the platform or image
-publisher | Image publisher
-sku | Image SKU
-version | Version of the platform or image
-
-The operating system disk object contains the following information about the operating system disk used by the VM:
-
-Data | Description
-caching | Caching requirements
-createOption | Information about how the VM was created
-diffDiskSettings | Ephemeral disk settings
-diskSizeGB | Size of the disk in GB
-image | Source user image virtual hard disk
-lun | Logical unit number of the disk
-managedDisk | Managed disk parameters
-name | Disk name
-vhd | Virtual hard disk
-writeAcceleratorEnabled | Whether or not `writeAccelerator` is enabled on the disk
-
-The data disks array contains a list of data disks attached to the VM. Each data disk object contains the following information:
+## Attested data
-Data | Description
-caching | Caching requirements
-createOption | Information about how the VM was created
-diffDiskSettings | Ephemeral disk settings
-diskSizeGB | Size of the disk in GB
-encryptionSettings | Encryption settings for the disk
-image | Source user image virtual hard disk
-managedDisk | Managed disk parameters
-name | Disk name
-osType | Type of operating system included in the disk
-vhd | Virtual hard disk
-writeAcceleratorEnabled | Whether or not `writeAccelerator` is enabled on the disk
-
-The following example shows how to query the VM's storage information.
+### Get Attested data
-**Request**
+IMDS helps provide guarantees that the data it serves is coming from Azure. Microsoft signs part of this information, so you can confirm that an image in Azure Marketplace is the one you are running on Azure.
```bash
-curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/storageProfile?api-version=2019-06-01"
+GET /metadata/attested/document
```
-**Response**
+#### Parameters
-> [!NOTE]
-> The response is a JSON string. The following example response is pretty-printed for readability.
+| Name | Required/Optional | Description |
+|------|-------------------|-------------|
+| `api-version` | Required | The version used to service the request.
+| `nonce` | Optional | A 10-digit string that serves as a cryptographic nonce. If no value is provided, IMDS uses the current UTC timestamp.
+
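+An example request with an explicit `nonce`:
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/attested/document?api-version=2018-10-01&nonce=1234567890"
+```
+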
+#### Response
```json
{
- "dataDisks": [
- {
- "caching": "None",
- "createOption": "Empty",
- "diskSizeGB": "1024",
- "image": {
- "uri": ""
- },
- "lun": "0",
- "managedDisk": {
- "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/disks/exampledatadiskname",
- "storageAccountType": "Standard_LRS"
- },
- "name": "exampledatadiskname",
- "vhd": {
- "uri": ""
- },
- "writeAcceleratorEnabled": "false"
- }
- ],
- "imageReference": {
- "id": "",
- "offer": "UbuntuServer",
- "publisher": "Canonical",
- "sku": "16.04.0-LTS",
- "version": "latest"
- },
- "osDisk": {
- "caching": "ReadWrite",
- "createOption": "FromImage",
- "diskSizeGB": "30",
- "diffDiskSettings": {
- "option": "Local"
- },
- "encryptionSettings": {
- "enabled": "false"
- },
- "image": {
- "uri": ""
- },
- "managedDisk": {
- "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/disks/exampleosdiskname",
- "storageAccountType": "Standard_LRS"
- },
- "name": "exampleosdiskname",
- "osType": "Linux",
- "vhd": {
- "uri": ""
- },
- "writeAcceleratorEnabled": "false"
- }
+ "encoding":"pkcs7",
+ "signature":"MIIEEgYJKoZIhvcNAQcCoIIEAzCCA/8CAQExDzANBgkqhkiG9w0BAQsFADCBugYJKoZIhvcNAQcBoIGsBIGpeyJub25jZSI6IjEyMzQ1NjY3NjYiLCJwbGFuIjp7Im5hbWUiOiIiLCJwcm9kdWN0IjoiIiwicHVibGlzaGVyIjoiIn0sInRpbWVTdGFtcCI6eyJjcmVhdGVkT24iOiIxMS8yMC8xOCAyMjowNzozOSAtMDAwMCIsImV4cGlyZXNPbiI6IjExLzIwLzE4IDIyOjA4OjI0IC0wMDAwIn0sInZtSWQiOiIifaCCAj8wggI7MIIBpKADAgECAhBnxW5Kh8dslEBA0E2mIBJ0MA0GCSqGSIb3DQEBBAUAMCsxKTAnBgNVBAMTIHRlc3RzdWJkb21haW4ubWV0YWRhdGEuYXp1cmUuY29tMB4XDTE4MTEyMDIxNTc1N1oXDTE4MTIyMDIxNTc1NlowKzEpMCcGA1UEAxMgdGVzdHN1YmRvbWFpbi5tZXRhZGF0YS5henVyZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAML/tBo86ENWPzmXZ0kPkX5dY5QZ150mA8lommszE71x2sCLonzv4/UWk4H+jMMWRRwIea2CuQ5RhdWAHvKq6if4okKNt66fxm+YTVz9z0CTfCLmLT+nsdfOAsG1xZppEapC0Cd9vD6NCKyE8aYI1pliaeOnFjG0WvMY04uWz2MdAgMBAAGjYDBeMFwGA1UdAQRVMFOAENnYkHLa04Ut4Mpt7TkJFfyhLTArMSkwJwYDVQQDEyB0ZXN0c3ViZG9tYWluLm1ldGFkYXRhLmF6dXJlLmNvbYIQZ8VuSofHbJRAQNBNpiASdDANBgkqhkiG9w0BAQQFAAOBgQCLSM6aX5Bs1KHCJp4VQtxZPzXF71rVKCocHy3N9PTJQ9Fpnd+bYw2vSpQHg/AiG82WuDFpPReJvr7Pa938mZqW9HUOGjQKK2FYDTg6fXD8pkPdyghlX5boGWAMMrf7bFkup+lsT+n2tRw2wbNknO1tQ0wICtqy2VqzWwLi45RBwTGB6DCB5QIBATA/MCsxKTAnBgNVBAMTIHRlc3RzdWJkb21haW4ubWV0YWRhdGEuYXp1cmUuY29tAhBnxW5Kh8dslEBA0E2mIBJ0MA0GCSqGSIb3DQEBCwUAMA0GCSqGSIb3DQEBAQUABIGAld1BM/yYIqqv8SDE4kjQo3Ul/IKAVR8ETKcve5BAdGSNkTUooUGVniTXeuvDj5NkmazOaKZp9fEtByqqPOyw/nlXaZgOO44HDGiPUJ90xVYmfeK6p9RpJBu6kiKhnnYTelUk5u75phe5ZbMZfBhuPhXmYAdjc7Nmw97nx8NnprQ="
}
```
-## VM tags
-
-VM tags are included the instance API, under the `instance/compute/tags` endpoint.
-Tags might have been applied to your Azure VM to logically organize them into a taxonomy. You can retrieve the tags assigned to a VM by using the following request.
-
-**Request**
-
-```bash
-curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/tags?api-version=2018-10-01&format=text"
-```
-
-**Response**
-
-```text
-Department:IT;Environment:Test;Role:WebRole
-```
-
-The `tags` field is a string with the tags delimited by semicolons. This output can be a problem if semicolons are used in the tags themselves. If a parser is written to programmatically extract the tags, you should rely on the `tagsList` field. The `tagsList` field is a JSON array with no delimiters, and consequently it's easier to parse.
-
-**Request**
-
-```bash
-curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/tagsList?api-version=2019-06-04"
-```
-
-**Response**
-
-```json
-[
- {
- "name": "Department",
- "value": "IT"
- },
- {
- "name": "Environment",
- "value": "Test"
- },
- {
- "name": "Role",
- "value": "WebRole"
- }
-]
-```
-
-## Attested data
-
-IMDS helps to provide guarantees that the data provided is coming from Azure. Microsoft signs part of this information, so you can confirm that an image in Azure Marketplace is the one you are running on Azure.
+> [!NOTE]
+> Due to IMDS's caching mechanism, a previously cached `nonce` value may be returned.
-### Sample 1: Get attested data
+The signature blob is a [pkcs7](https://aka.ms/pkcs7)-signed version of the document. It contains the certificate used for signing, along with certain VM-specific details.
-> [!NOTE]
-> All API responses are JSON strings. The following example responses are pretty-printed for readability.
+For VMs created by using Azure Resource Manager, the document includes `vmId`, `sku`, `nonce`, `subscriptionId`, `timeStamp` for creation and expiry of the document, and the plan information about the image. The plan information is only populated for Azure Marketplace images.
-**Request**
+For VMs created by using the classic deployment model, only the `vmId` is guaranteed to be populated. You can extract the certificate from the response, and use it to confirm that the response is valid and is coming from Azure.
-```bash
-curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/attested/document?api-version=2018-10-01&nonce=1234567890"
-```
+The decoded document contains the following fields:
-`Api-version` is a mandatory field. Refer to the [usage section](#usage) for supported API versions.
-`Nonce` is an optional, 10-digit string. If it's not provided, IMDS returns the current Coordinated Universal Time timestamp in its place.
+| Data | Description | Version introduced |
+|------|-------------|--------------------|
+| `licenseType` | Type of license for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit). This is only present for AHB-enabled VMs. | 2020-09-01
+| `nonce` | A string that can be optionally provided with the request. If no `nonce` was supplied, the current Coordinated Universal Time timestamp is used. | 2018-10-01
+| `plan` | The [Azure Marketplace Image plan](/rest/api/compute/virtualmachines/createorupdate#plan). Contains the plan ID (name), product image or offer (product), and publisher ID (publisher). | 2018-10-01
+| `timestamp.createdOn` | The UTC timestamp for when the signed document was created | 2018-10-01
+| `timestamp.expiresOn` | The UTC timestamp for when the signed document expires | 2018-10-01
+| `vmId` | [Unique identifier](https://azure.microsoft.com/blog/accessing-and-using-azure-vm-unique-id/) for the VM | 2018-10-01
+| `subscriptionId` | Azure subscription for the Virtual Machine | 2019-04-30
+| `sku` | Specific SKU for the VM image | 2019-11-01
> [!NOTE]
-> Due to IMDS's caching mechanism, a previously cached `nonce` value might be returned.
-
-**Response**
+> For Classic (non-Azure Resource Manager) VMs, only the `vmId` is guaranteed to be populated.
+Example document:
```json
{
- "encoding":"pkcs7","signature":"MIIEEgYJKoZIhvcNAQcCoIIEAzCCA/8CAQExDzANBgkqhkiG9w0BAQsFADCBugYJKoZIhvcNAQcBoIGsBIGpeyJub25jZSI6IjEyMzQ1NjY3NjYiLCJwbGFuIjp7Im5hbWUiOiIiLCJwcm9kdWN0IjoiIiwicHVibGlzaGVyIjoiIn0sInRpbWVTdGFtcCI6eyJjcmVhdGVkT24iOiIxMS8yMC8xOCAyMjowNzozOSAtMDAwMCIsImV4cGlyZXNPbiI6IjExLzIwLzE4IDIyOjA4OjI0IC0wMDAwIn0sInZtSWQiOiIifaCCAj8wggI7MIIBpKADAgECAhBnxW5Kh8dslEBA0E2mIBJ0MA0GCSqGSIb3DQEBBAUAMCsxKTAnBgNVBAMTIHRlc3RzdWJkb21haW4ubWV0YWRhdGEuYXp1cmUuY29tMB4XDTE4MTEyMDIxNTc1N1oXDTE4MTIyMDIxNTc1NlowKzEpMCcGA1UEAxMgdGVzdHN1YmRvbWFpbi5tZXRhZGF0YS5henVyZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAML/tBo86ENWPzmXZ0kPkX5dY5QZ150mA8lommszE71x2sCLonzv4/UWk4H+jMMWRRwIea2CuQ5RhdWAHvKq6if4okKNt66fxm+YTVz9z0CTfCLmLT+nsdfOAsG1xZppEapC0Cd9vD6NCKyE8aYI1pliaeOnFjG0WvMY04uWz2MdAgMBAAGjYDBeMFwGA1UdAQRVMFOAENnYkHLa04Ut4Mpt7TkJFfyhLTArMSkwJwYDVQQDEyB0ZXN0c3ViZG9tYWluLm1ldGFkYXRhLmF6dXJlLmNvbYIQZ8VuSofHbJRAQNBNpiASdDANBgkqhkiG9w0BAQQFAAOBgQCLSM6aX5Bs1KHCJp4VQtxZPzXF71rVKCocHy3N9PTJQ9Fpnd+bYw2vSpQHg/AiG82WuDFpPReJvr7Pa938mZqW9HUOGjQKK2FYDTg6fXD8pkPdyghlX5boGWAMMrf7bFkup+lsT+n2tRw2wbNknO1tQ0wICtqy2VqzWwLi45RBwTGB6DCB5QIBATA/MCsxKTAnBgNVBAMTIHRlc3RzdWJkb21haW4ubWV0YWRhdGEuYXp1cmUuY29tAhBnxW5Kh8dslEBA0E2mIBJ0MA0GCSqGSIb3DQEBCwUAMA0GCSqGSIb3DQEBAQUABIGAld1BM/yYIqqv8SDE4kjQo3Ul/IKAVR8ETKcve5BAdGSNkTUooUGVniTXeuvDj5NkmazOaKZp9fEtByqqPOyw/nlXaZgOO44HDGiPUJ90xVYmfeK6p9RpJBu6kiKhnnYTelUk5u75phe5ZbMZfBhuPhXmYAdjc7Nmw97nx8NnprQ="
+ "nonce":"20201130-211924",
+ "plan":{
+ "name":"planName",
+ "product":"planProduct",
+ "publisher":"planPublisher"
+ },
+ "sku":"Windows-Server-2012-R2-Datacenter",
+ "subscriptionId":"8d10da13-8125-4ba9-a717-bf7490507b3d",
+ "timeStamp":{
+ "createdOn":"11/30/20 21:19:19 -0000",
+ "expiresOn":"11/30/20 21:19:24 -0000"
+ },
+ "vmId":"02aab8a4-74ef-476e-8182-f6d2ba4166a6"
}
```
-The signature blob is a [pkcs7](https://aka.ms/pkcs7)-signed version of the document. It contains the certificate used for signing, along with certain VM-specific details.
-For VMs created by using Azure Resource Manager, this includes `vmId`, `sku`, `nonce`, `subscriptionId`, `timeStamp` for creation and expiry of the document, and the plan information about the image. The plan information is only populated for Azure Marketplace images.
-
-For VMs created by using the classic deployment model, only `vmId` is guaranteed to be populated. You can extract the certificate from the response, and use it to confirm that the response is valid and is coming from Azure.
-
-The document contains the following fields:
-
-Data | Description | Version introduced
-licenseType | Type of license for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit). Note that this is only present for AHB-enabled VMs. | 2020-09-01
-nonce | A string that can be optionally provided with the request. If no `nonce` was supplied, the current Coordinated Universal Time timestamp is used. | 2018-10-01
-plan | The [Azure Marketplace Image plan](/rest/api/compute/virtualmachines/createorupdate#plan). Contains the plan ID (name), product image or offer (product), and publisher ID (publisher). | 2018-10-01
-timestamp/createdOn | The Coordinated Universal Time timestamp for when the signed document was created. | 2018-20-01
-timestamp/expiresOn | The Coordinated Universal Time timestamp for when the signed document expires. | 2018-10-01
-vmId | [Unique identifier](https://azure.microsoft.com/blog/accessing-and-using-azure-vm-unique-id/) for the VM. | 2018-10-01
-subscriptionId | The Azure subscription for the VM. | 2019-04-30
-sku | The specific SKU for the VM image. | 2019-11-01
-
-### Sample 2: Validate that the VM is running in Azure
+#### Sample 1: Validate that the VM is running in Azure
Vendors in Azure Marketplace want to ensure that their software is licensed to run only in Azure. If someone copies the VHD to an on-premises environment, the vendor needs to be able to detect that. Through IMDS, these vendors can get signed data that guarantees the response comes only from Azure.

> [!NOTE]
> This sample requires the jq utility to be installed.
-**Request**
+**Validation**
```bash
# Get the signature
@@ -769,7 +964,7 @@ openssl x509 -inform der -in intermediate.cer -out intermediate.pem
openssl smime -verify -in sign.pk7 -inform pem -noverify
```
-**Response**
+**Results**
```json
Verification successful
@@ -816,17 +1011,17 @@ The `nonce` in the signed document can be compared if you provided a `nonce` par
> [!NOTE]
> The certificate for the public cloud and each sovereign cloud will be different.
-Cloud | Certificate
-[All generally available global Azure regions](https://azure.microsoft.com/regions/) | *.metadata.azure.com
-[Azure Government](https://azure.microsoft.com/overview/clouds/government/) | *.metadata.azure.us
-[Azure China 21Vianet](https://azure.microsoft.com/global-infrastructure/china/) | *.metadata.azure.cn
-[Azure Germany](https://azure.microsoft.com/overview/clouds/germany/) | *.metadata.microsoftazure.de
+| Cloud | Certificate |
+|-------|-------------|
+| [All generally available global Azure regions](https://azure.microsoft.com/regions/) | *.metadata.azure.com
+| [Azure Government](https://azure.microsoft.com/overview/clouds/government/) | *.metadata.azure.us
+| [Azure China 21Vianet](https://azure.microsoft.com/global-infrastructure/china/) | *.metadata.azure.cn
+| [Azure Germany](https://azure.microsoft.com/overview/clouds/germany/) | *.metadata.microsoftazure.de
> [!NOTE]
> The certificates might not have an exact match of `metadata.azure.com` for the public cloud. For this reason, the certificate validation should allow a common name from any `.metadata.azure.com` subdomain.
-In cases where the intermediate certificate can't be downloaded due to network constraints during validation, you can pin the intermediate certificate. Note that Azure rolls over the certificates, which is standard PKI practice. You need to update the pinned certificates when rollover happens. Whenever a change to update the intermediate certificate is planned, the Azure blog is updated, and Azure customers are notified.
+In cases where the intermediate certificate can't be downloaded due to network constraints during validation, you can pin the intermediate certificate. Azure rolls over the certificates, which is standard PKI practice. You must update the pinned certificates when rollover happens. Whenever a change to update the intermediate certificate is planned, the Azure blog is updated, and Azure customers are notified.
You can find the intermediate certificates in the [PKI repository](https://www.microsoft.com/pki/mscorp/cps/default.htm). The intermediate certificates for each of the regions can be different.
@@ -834,6 +1029,7 @@ You can find the intermediate certificates in the [PKI repository](https://www.m
> The intermediate certificate for Azure China 21Vianet will be from DigiCert Global Root CA, instead of Baltimore. If you pinned the intermediate certificates for Azure China as part of a root chain authority change, the intermediate certificates must be updated.
+
## Managed identity

A managed identity, assigned by the system, can be enabled on the VM. You can also assign one or more user-assigned managed identities to the VM.
@@ -844,42 +1040,38 @@ For detailed steps to enable this feature, see [Acquire an access token](../../a
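
As a sketch, an access token request for Azure Resource Manager might look like the following (the `resource` value is an example; see the linked article for the full flow):

```bash
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
```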
## Scheduled events

You can obtain the status of the scheduled events by using IMDS. Then the user can specify a set of actions to run upon these events. For more information, see [Scheduled events](scheduled-events.md).
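
A minimal polling sketch, using the version the endpoint table above lists for `/metadata/scheduledevents`:

```bash
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01"
```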
-## Regional availability
-
-The service is generally available in all Azure clouds.
-
## Sample code in different languages

The following table lists samples of calling IMDS by using different languages inside the VM:
-Language | Example
-Bash | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.sh
-C# | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.cs
-Go | https://github.com/Microsoft/azureimds/blob/master/imdssample.go
-Java | https://github.com/Microsoft/azureimds/blob/master/imdssample.java
-NodeJS | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.js
-Perl | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.pl
-PowerShell | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.ps1
-Puppet | https://github.com/keirans/azuremetadata
-Python | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.py
-Ruby | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.rb
+| Language | Example |
+|----------|---------|
+| Bash | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.sh
+| C# | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.cs
+| Go | https://github.com/Microsoft/azureimds/blob/master/imdssample.go
+| Java | https://github.com/Microsoft/azureimds/blob/master/imdssample.java
+| NodeJS | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.js
+| Perl | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.pl
+| PowerShell | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.ps1
+| Puppet | https://github.com/keirans/azuremetadata
+| Python | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.py
+| Ruby | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.rb
## Errors and debugging

If a data element isn't found or a request is malformed, the Instance Metadata Service returns standard HTTP errors. For example:
-HTTP status code | Reason
-200 OK |
-400 Bad Request | Missing `Metadata: true` header, or missing parameter `format=json` when querying a leaf node.
-404 Not Found | The requested element doesn't exist.
-405 Method Not Allowed | Only `GET` requests are supported.
-410 Gone | Retry after some time for a maximum of 70 seconds.
-429 Too Many Requests | The API currently supports a maximum of 5 queries per second.
-500 Service Error | Retry after some time.
+| HTTP status code | Reason |
+|------------------|--------|
+| `200 OK` | The request was successful
+| `400 Bad Request` | Missing `Metadata: true` header, or missing parameter `format=json` when querying a leaf node
+| `404 Not Found` | The requested element doesn't exist
+| `405 Method Not Allowed` | The HTTP method (verb) isn't supported on the endpoint
+| `410 Gone` | Retry after some time, for a maximum of 70 seconds
+| `429 Too Many Requests` | API [rate limits](#rate-limiting) have been exceeded
+| `500 Service Error` | Retry after some time
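+
+As a hedged sketch of handling the transient codes (`410`, `429`, and `500`) with a simple retry loop:
+
+```bash
+# Retry the request a few times, pausing briefly between attempts
+for attempt in 1 2 3 4 5; do
+  code=$(curl -s -o /dev/null -w "%{http_code}" -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2020-09-01")
+  [ "$code" = "200" ] && break
+  sleep 5
+done
+```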
-### Frequently asked questions
+## Frequently asked questions
**I am getting the error `400 Bad Request, Required metadata header not specified`. What does this mean?**
@@ -913,62 +1105,59 @@ Currently tags for virtual machine scale sets only show to the VM on a reboot, r
Metadata calls must be made from the primary IP address assigned to the primary network card of the VM. Additionally, if you've changed your routes, there must be a route for the 169.254.169.254/32 address in your VM's local routing table.
- * <details>
- <summary>Verifying your routing table</summary>
-
- 1. Dump your local routing table and look for the IMDS entry. For example:
- ```console
- > route print
- IPv4 Route Table
- ===========================================================================
- Active Routes:
- Network Destination Netmask Gateway Interface Metric
- 0.0.0.0 0.0.0.0 172.16.69.1 172.16.69.7 10
- 127.0.0.0 255.0.0.0 On-link 127.0.0.1 331
- 127.0.0.1 255.255.255.255 On-link 127.0.0.1 331
- 127.255.255.255 255.255.255.255 On-link 127.0.0.1 331
- 168.63.129.16 255.255.255.255 172.16.69.1 172.16.69.7 11
- 169.254.169.254 255.255.255.255 172.16.69.1 172.16.69.7 11
- ... (continues) ...
- ```
- 1. Verify that a route exists for `169.254.169.254`, and note the corresponding network interface (for example, `172.16.69.7`).
- 1. Dump the interface configuration and find the interface that corresponds to the one referenced in the routing table, noting the MAC (physical) address.
- ```console
- > ipconfig /all
- ... (continues) ...
- Ethernet adapter Ethernet:
-
- Connection-specific DNS Suffix . : xic3mnxjiefupcwr1mcs1rjiqa.cx.internal.cloudapp.net
- Description . . . . . . . . . . . : Microsoft Hyper-V Network Adapter
- Physical Address. . . . . . . . . : 00-0D-3A-E5-1C-C0
- DHCP Enabled. . . . . . . . . . . : Yes
- Autoconfiguration Enabled . . . . : Yes
- Link-local IPv6 Address . . . . . : fe80::3166:ce5a:2bd5:a6d1%3(Preferred)
- IPv4 Address. . . . . . . . . . . : 172.16.69.7(Preferred)
- Subnet Mask . . . . . . . . . . . : 255.255.255.0
- ... (continues) ...
- ```
- 1. Confirm that the interface corresponds to the VM's primary NIC and primary IP. You can find the primary NIC and IP by looking at the network configuration in the Azure portal, or by looking it up with the Azure CLI. Note the public and private IPs (and the MAC address if you're using the CLI). Here's a PowerShell CLI example:
- ```powershell
- $ResourceGroup = '<Resource_Group>'
- $VmName = '<VM_Name>'
- $NicNames = az vm nic list --resource-group $ResourceGroup --vm-name $VmName | ConvertFrom-Json | Foreach-Object { $_.id.Split('/')[-1] }
- foreach($NicName in $NicNames)
- {
- $Nic = az vm nic show --resource-group $ResourceGroup --vm-name $VmName --nic $NicName | ConvertFrom-Json
- Write-Host $NicName, $Nic.primary, $Nic.macAddress
- }
- # Output: wintest767 True 00-0D-3A-E5-1C-C0
- ```
- 1. If they don't match, update the routing table so that the primary NIC and IP are targeted.
- </details>
+1. Dump your local routing table and look for the IMDS entry. For example:
+ ```console
+ > route print
+ IPv4 Route Table
+ ===========================================================================
+ Active Routes:
+ Network Destination Netmask Gateway Interface Metric
+ 0.0.0.0 0.0.0.0 172.16.69.1 172.16.69.7 10
+ 127.0.0.0 255.0.0.0 On-link 127.0.0.1 331
+ 127.0.0.1 255.255.255.255 On-link 127.0.0.1 331
+ 127.255.255.255 255.255.255.255 On-link 127.0.0.1 331
+ 168.63.129.16 255.255.255.255 172.16.69.1 172.16.69.7 11
+ 169.254.169.254 255.255.255.255 172.16.69.1 172.16.69.7 11
+ ... (continues) ...
+ ```
+1. Verify that a route exists for `169.254.169.254`, and note the corresponding network interface (for example, `172.16.69.7`).
+1. Dump the interface configuration and find the interface that corresponds to the one referenced in the routing table, noting the MAC (physical) address.
+ ```console
+ > ipconfig /all
+ ... (continues) ...
+ Ethernet adapter Ethernet:
+
+ Connection-specific DNS Suffix . : xic3mnxjiefupcwr1mcs1rjiqa.cx.internal.cloudapp.net
+ Description . . . . . . . . . . . : Microsoft Hyper-V Network Adapter
+ Physical Address. . . . . . . . . : 00-0D-3A-E5-1C-C0
+ DHCP Enabled. . . . . . . . . . . : Yes
+ Autoconfiguration Enabled . . . . : Yes
+ Link-local IPv6 Address . . . . . : fe80::3166:ce5a:2bd5:a6d1%3(Preferred)
+ IPv4 Address. . . . . . . . . . . : 172.16.69.7(Preferred)
+ Subnet Mask . . . . . . . . . . . : 255.255.255.0
+ ... (continues) ...
+ ```
+1. Confirm that the interface corresponds to the VM's primary NIC and primary IP. You can find the primary NIC and IP by looking at the network configuration in the Azure portal, or by looking it up with the Azure CLI. Note the public and private IPs (and the MAC address if you're using the CLI). Here's a PowerShell CLI example:
+ ```powershell
+ $ResourceGroup = '<Resource_Group>'
+ $VmName = '<VM_Name>'
+   $NicNames = az vm nic list --resource-group $ResourceGroup --vm-name $VmName | ConvertFrom-Json | ForEach-Object { $_.id.Split('/')[-1] }
+ foreach($NicName in $NicNames)
+ {
+       $Nic = az vm nic show --resource-group $ResourceGroup --vm-name $VmName --nic $NicName | ConvertFrom-Json
+ Write-Host $NicName, $Nic.primary, $Nic.macAddress
+ }
+ # Output: wintest767 True 00-0D-3A-E5-1C-C0
+ ```
+1. If they don't match, update the routing table so that the primary NIC and IP are targeted.
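+
+   As a hedged example on Windows (the gateway address is a placeholder; take the real gateway of the primary NIC from your own `route print` output):
+
+   ```console
+   > route add 169.254.169.254 mask 255.255.255.255 172.16.69.1
+   ```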
## Support

If you aren't able to get a metadata response after multiple attempts, you can create a support issue in the Azure portal.
-For **Problem Type**, select **Management**. For **Category**, select **Instance Metadata Service**.
-![Screenshot of Instance Metadata Service support](./media/instance-metadata-service/InstanceMetadata-support.png)
+## Product feedback
+
+You can provide product feedback and ideas to our user feedback channel under Virtual Machines > Instance Metadata Service here: https://feedback.azure.com/forums/216843-virtual-machines?category_id=394627
## Next steps
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/redhat/redhat-extended-lifecycle-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/redhat-extended-lifecycle-support.md
@@ -35,6 +35,12 @@ Starting on 30 November 2020, Red Hat Enterprise Linux 6 will reach end of maint
#### What is the additional charge for using Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-On?

The costs related to Extended Lifecycle support can be found with the [ELS form](https://aka.ms/els-form).
+#### I've deployed a VM by using a custom image. How can I add Extended Lifecycle support to this VM?
+You need to contact Red Hat and get support directly from them.
+
+#### I've deployed a VM by using a custom image. Can I convert this VM to a PAYG VM?
+No, you cannot. This conversion isn't currently supported on Azure.
+
## Next steps
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/get-started.md
@@ -14,7 +14,7 @@ ms.subservice: workloads
ms.topic: article
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure-services
-ms.date: 12/21/2020
+ms.date: 01/04/2021
ms.author: juergent
ms.custom: H1Hack27Feb2017
@@ -80,6 +80,8 @@ In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- 01/04/2021: Add new Azure regions supported by HLI into [What is SAP HANA on Azure (Large Instances)](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-overview-architecture)
+- 12/29/2020: Add architecture recommendations for specific Azure regions in [SAP workload configurations with Azure Availability Zones](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-ha-availability-zones)
- 12/21/2020: Add new certifications to SKUs of HANA Large Instances in [Available SKUs for HLI](./hana-available-skus.md)
- 12/12/2020: Added pointer to SAP note clarifying details on Oracle Enterprise Linux support by SAP to [What SAP software is supported for Azure deployments](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-supported-product-on-azure#oracle-dbms-support)
- 11/26/2020: Adapt [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) and [Azure Storage types for SAP workload](./planning-guide-storage.md) to changed single [VM SLAs](https://azure.microsoft.com/support/legal/sla/virtual-machines)
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-overview-architecture.md
@@ -11,7 +11,7 @@ ms.subservice: workloads
ms.topic: article
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure
-ms.date: 07/12/2019
+ms.date: 01/04/2021
ms.author: juergent
ms.custom: H1Hack27Feb2017
@@ -30,12 +30,14 @@ The customer isolation within the infrastructure stamp is performed in tenants,
These bare-metal server units are supported to run SAP HANA only. The SAP application layer or workload middle-ware layer runs in virtual machines. The infrastructure stamps that run the SAP HANA on Azure (Large Instances) units are connected to the Azure network services backbones. In this way, low-latency connectivity between SAP HANA on Azure (Large Instances) units and virtual machines is provided.
-As of July 2019, we differentiate between two different revisions of HANA Large Instance stamps and location of deployments:
+As of January 2021, we differentiate between two different revisions of HANA Large Instance stamps and location of deployments:
- "Revision 3" (Rev 3): Are the stamps that were made available for customer to deploy before July 2019
- "Revision 4" (Rev 4): New stamp design that is deployed in close proximity to Azure VM hosts and which so far are released in the Azure regions of:
  - West US2
- - East US
+ - East US
+ - East US2 (across two Availability Zones)
+ - South Central US (across two Availability Zones)
  - West Europe
  - North Europe
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/troubleshoot-outbound-smtp-connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/troubleshoot-outbound-smtp-connectivity.md
@@ -11,7 +11,7 @@ ms.devlang: na
ms.topic: troubleshooting
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
-ms.date: 11/20/2018
+ms.date: 01/04/2021
ms.author: genli
---
@@ -74,10 +74,6 @@ If you want to be able to send email from Azure VMs directly to external email p
After a subscription is exempted and the VMs have been stopped and restarted in the Azure portal, all VMs in that subscription are exempted going forward. The exemption applies only to the subscription requested and only to VM traffic that's routed directly to the internet.
-## Restrictions and limitations
-
-Routing port 25 traffic via Azure PaaS services like [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) isn't supported.
## Need help? Contact support

If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly. Use this issue type: **Technical** > **Virtual Network** > **Connectivity** > **Cannot send email (SMTP/Port 25)**.
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/vnet-integration-for-azure-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/vnet-integration-for-azure-services.md
@@ -19,7 +19,7 @@ ms.author: kumud
Virtual Network (VNet) integration for an Azure service enables you to lock down access to the service to only your virtual network infrastructure. The VNet infrastructure also includes peered virtual networks and on-premises networks. VNet integration provides Azure services the benefits of network isolation and can be accomplished by one or more of the following methods:

-- [Deploying dedicated instances of the service into a virtual network](virtual-network-service-endpoints-overview.md). The services can then be privately accessed within the virtual network and from on-premises networks.
+- [Deploying dedicated instances of the service into a virtual network](virtual-network-for-azure-services.md). The services can then be privately accessed within the virtual network and from on-premises networks.
- Using [Private Endpoint](../private-link/private-endpoint-overview.md) that connects you privately and securely to a service powered by [Azure Private Link](../private-link/private-link-overview.md). Private Endpoint uses a private IP address from your VNet, effectively bringing the service into your virtual network.
- Accessing the service using public endpoints by extending a virtual network to the service, through [service endpoints](virtual-network-service-endpoints-overview.md). Service endpoints allow service resources to be secured to the virtual network.
- Using [service tags](service-tags-overview.md) to allow or deny traffic to your Azure resources to and from public IP endpoints.