Updates from: 02/15/2021 04:06:11
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-temporary-access-pass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
@@ -0,0 +1,148 @@
+
+ Title: Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods
+description: Learn how to configure and enable users to register Passwordless authentication methods by using a Temporary Access Pass (TAP)
+Last updated: 02/12/2021
+# Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods (Preview)
+
+Passwordless authentication methods, such as FIDO2 and Passwordless Phone Sign-in through the Microsoft Authenticator app, enable users to sign in securely without a password.
+Users can bootstrap Passwordless methods in one of two ways:
+
+- Using existing Azure AD multi-factor authentication methods
+- Using a Temporary Access Pass (TAP)
+
+TAP is a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including Passwordless ones.
+TAP also makes recovery easier when a user has lost or forgotten their strong authentication factor like a FIDO2 security key or Microsoft Authenticator app, but needs to sign in to register new strong authentication methods.
++
+This article shows you how to enable and use a TAP in Azure AD using the Azure portal.
+You can also perform these actions using the REST APIs.
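As a sketch of the REST route, the Microsoft Graph beta endpoint for temporary access pass methods can be called directly. The snippet below is a minimal illustration, assuming an access token with the `UserAuthenticationMethod.ReadWrite.All` permission; the field names follow the beta `temporaryAccessPassAuthenticationMethod` resource and may change while the feature is in preview.

```python
import json
from urllib import request

GRAPH_BASE = "https://graph.microsoft.com/beta"  # TAP methods are beta-only during the preview

def build_tap_create_request(user_upn, lifetime_minutes=60, one_time=False):
    """Return the URL and JSON body for creating a Temporary Access Pass."""
    url = f"{GRAPH_BASE}/users/{user_upn}/authentication/temporaryAccessPassMethods"
    body = {
        "lifetimeInMinutes": lifetime_minutes,  # must fall within the policy's min/max
        "isUsableOnce": one_time,               # True forces single use
    }
    return url, body

def create_tap(user_upn, token, **kwargs):
    """POST the request; the response includes the passcode, visible only at creation time."""
    url, body = build_tap_create_request(user_upn, **kwargs)
    req = request.Request(url, data=json.dumps(body).encode(),
                          headers={"Authorization": f"Bearer {token}",
                                   "Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)
```

As in the portal, the generated passcode in the response can't be retrieved again later.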
+
+>[!NOTE]
+>Temporary Access Pass is currently in public preview. Some features might not be supported or have limited capabilities.
+
+## Enable the TAP policy
+
+A TAP policy defines settings, such as the lifetime of passes created in the tenant, or the users and groups who can use a TAP to sign in.
+Before anyone can sign in with a TAP, you need to enable the authentication method policy and choose which users and groups can sign in by using a TAP.
+Although you can create a TAP for any user, only those included in the policy can sign in with it.
+
+Users with the Global administrator or Authentication policy administrator role can update the TAP authentication method policy.
+To configure the TAP authentication method policy:
+
+1. Sign in to the Azure portal as a Global admin and click **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**.
+1. Click **Yes** to enable the policy, select which users have the policy applied, and any **General** settings.
+
+ ![Screenshot of how to enable the TAP authentication method policy](./media/how-to-authentication-temporary-access-pass/policy.png)
+
+ The default value and the range of allowed values are described in the following table.
++
+ | Setting | Default value | Allowed values | Comments |
+ |---------|---------------|----------------|----------|
+ | Minimum lifetime | 1 hour | 10 – 43,200 minutes (30 days) | Minimum number of minutes that the TAP is valid. |
+ | Maximum lifetime | 24 hours | 10 – 43,200 minutes (30 days) | Maximum number of minutes that the TAP is valid. |
+ | Default lifetime | 1 hour | 10 – 43,200 minutes (30 days) | The default value can be overridden by individual passes, within the minimum and maximum lifetimes configured by the policy. |
+ | One-time use | False | True / False | When the policy is set to false, passes in the tenant can be used one or more times during their validity period (maximum lifetime). Enforcing one-time use in the TAP policy means all passes created in the tenant are one-time use. |
+ | Length | 8 | 8–48 characters | Defines the length of the passcode. |
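The three lifetime settings interact: an individual pass can override the default, but only within the policy's minimum and maximum. A minimal sketch of that validation, using the default values from the table above (the function and variable names are illustrative, not an Azure AD API):

```python
# Documented default policy values from the table above, in minutes.
POLICY = {"min_minutes": 60, "max_minutes": 24 * 60, "default_minutes": 60}

def effective_lifetime(requested_minutes=None, policy=POLICY):
    """Resolve a pass lifetime: fall back to the policy default when
    unspecified; otherwise the request must sit inside the policy bounds."""
    if requested_minutes is None:
        return policy["default_minutes"]
    if not policy["min_minutes"] <= requested_minutes <= policy["max_minutes"]:
        raise ValueError("requested lifetime is outside the policy's min/max")
    return requested_minutes
```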
+
+## Create a TAP in the Azure AD Portal
+
+After you enable a TAP policy, you can create a TAP for a user in Azure AD.
+These roles can perform the following actions related to a TAP:
+
+- Global administrators can create, delete, and view a TAP for any user (except themselves).
+- Privileged Authentication administrators can create, delete, and view a TAP for admins and members (except themselves).
+- Authentication administrators can create, delete, and view a TAP for members (except themselves).
+- Global administrators can view the TAP details for a user (without reading the code itself).
+
+To create a TAP:
+
+1. Sign in to the portal as either a Global administrator, Privileged Authentication administrator, or Authentication administrator.
+1. Click **Azure Active Directory**, browse to **Users**, select a user, such as *Chris Green*, then choose **Authentication methods**.
+1. If needed, select the option to **Try the new user authentication methods experience**.
+1. Select the option to **Add authentication methods**.
+1. Below **Choose method**, click **Temporary Access Pass (Preview)**.
+1. Define a custom activation time or duration and click **Add**.
+
+ ![Screenshot of how to create a TAP](./media/how-to-authentication-temporary-access-pass/create.png)
+
+1. Once added, the details of the TAP are shown. Make a note of the actual TAP value. You provide this value to the user. You can't view this value after you click **Ok**.
+
+ ![Screenshot of TAP details](./media/how-to-authentication-temporary-access-pass/details.png)
+
+## Use a TAP
+
+The most common use for a TAP is for a user to register authentication details during the first sign-in, without the need to complete additional security prompts. Authentication methods are registered at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). Users can also update existing authentication methods here.
+
+1. Open a web browser to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo).
+1. Enter the UPN of the account you created the TAP for, such as *tapuser@contoso.com*.
+1. If the user is included in the TAP policy, they will see a screen to enter their TAP.
+1. Enter the TAP that was displayed in the Azure portal.
+
+ ![Screenshot of how to enter a TAP](./media/how-to-authentication-temporary-access-pass/enter.png)
+
+>[!NOTE]
+>For federated domains, a TAP is preferred over federation. A user with a TAP will complete the authentication in Azure AD and will not get redirected to the federated Identity Provider (IdP).
+
+The user is now signed in and can update or register a method such as a FIDO2 security key.
+Users who update their authentication methods due to losing their credentials or device should make sure they remove the old authentication methods.
+
+Users can also use their TAP to register for Passwordless phone sign-in directly from the Authenticator app. For more information, see [Add your work or school account to the Microsoft Authenticator app](../user-help/user-help-auth-app-add-work-school-account.md).
+
+![Screenshot of how to enter a TAP using work or school account](./media/how-to-authentication-temporary-access-pass/enter-work-school.png)
+
+## Delete a TAP
+
+An expired TAP can't be used. Under **Authentication methods** for a user, the **Detail** column shows when the TAP expired. You can delete an expired TAP using the following steps:
+
+1. In the Azure AD portal, browse to **Users**, select a user, such as *Tap User*, then choose **Authentication methods**.
+1. On the right-hand side of the **Temporary Access Pass (Preview)** authentication method shown in the list, select **Delete**.
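Deletion is also possible outside the portal. Assuming the same beta Graph resource used for creation, a pass is removed by issuing an HTTP DELETE against its method ID (a sketch; the helper name is illustrative):

```python
GRAPH_BASE = "https://graph.microsoft.com/beta"

def tap_method_url(user_upn, method_id):
    """URL of a specific temporaryAccessPassMethods entry for a user;
    send an HTTP DELETE to this URL to remove the pass."""
    return (f"{GRAPH_BASE}/users/{user_upn}"
            f"/authentication/temporaryAccessPassMethods/{method_id}")
```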
+
+## Replace a TAP
+
+- A user can have only one TAP. The passcode can be used between the start and end time of the TAP.
+- If the user requires a new TAP:
+ - If the existing TAP is valid, the admin must delete the existing TAP and create a new pass for the user. Deleting a valid TAP revokes the user's sessions.
+ - If the existing TAP has expired, a new TAP overrides the existing TAP and does not revoke the user's sessions.
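The replacement rules above amount to a small decision table; a sketch of that logic (illustrative only; Azure AD enforces this behavior server-side):

```python
def plan_tap_replacement(existing_tap_expired):
    """Steps needed to issue a new TAP, per the replacement rules above."""
    if existing_tap_expired:
        # A new pass simply overrides an expired one; sessions are untouched.
        return {"delete_existing_first": False, "revokes_sessions": False}
    # A still-valid pass must be deleted first, which revokes the user's sessions.
    return {"delete_existing_first": True, "revokes_sessions": True}
```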
+
+For more information about NIST standards for onboarding and recovery, see [NIST Special Publication 800-63A](https://pages.nist.gov/800-63-3/sp800-63a.html#sec4).
+
+## Limitations
+
+Keep these limitations in mind:
+
+- When using a one-time TAP to register a Passwordless method such as FIDO2 or Phone sign-in, the user must complete the registration within 10 minutes of sign-in with the one-time TAP. This limitation does not apply to a TAP that can be used more than once.
+- Guest users can't sign in with a TAP.
+- Users in scope for the Self-Service Password Reset (SSPR) registration policy are required to register one of the SSPR methods after they sign in with a TAP. If the user is only going to use a FIDO2 key, exclude them from the SSPR policy or disable the SSPR registration policy.
+- TAP cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter.
+- When Seamless SSO is enabled on the tenant, users are prompted to enter a password. The **Use your Temporary Access Pass instead** link is available for the user to sign in with a TAP.
+
+![Screenshot of Use a TAP instead](./media/how-to-authentication-temporary-access-pass/alternative.png)
+
+## Troubleshooting
+
+- If TAP is not offered to a user during sign-in, check the following:
+ - The user is in scope for the TAP authentication method policy.
+ - The user has a valid TAP and, if it is one-time use, it hasn't been used yet.
+- If **Temporary Access Pass sign in was blocked due to User Credential Policy** appears during sign-in with TAP, check the following:
+ - The user has a multi-use TAP while the authentication method policy requires a one-time TAP.
+ - A one-time TAP was already used.
+
+## Next steps
+
+- [Plan a passwordless authentication deployment in Azure Active Directory](howto-authentication-passwordless-deployment.md)
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/identity-videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/identity-videos.md
@@ -27,16 +27,16 @@ ___
:::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=tkQJSHFsduY" target="_blank">The basics of modern authentication - Microsoft identity platform</a>(12:28)
+ <a href="https://www.youtube.com/watch?v=uDU1QTSw7Ps" target="_blank">What is the Microsoft identity platform?</a>(14:54)
:::column-end::: :::column:::
- > [!Video https://www.youtube.com/embed/tkQJSHFsduY]
+ > [!Video https://www.youtube.com/embed/uDU1QTSw7Ps]
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=7_vxnHiUA1M" target="_blank">Modern authentication: how we got here – Microsoft identity platform</a>(15:47)
+ <a href="https://www.youtube.com/watch?v=tkQJSHFsduY" target="_blank">The basics of modern authentication - Microsoft identity platform</a>(12:28)
:::column-end::: :::column:::
- > [!Video https://www.youtube.com/embed/7_vxnHiUA1M]
+ > [!Video https://www.youtube.com/embed/tkQJSHFsduY]
:::column-end::: :::row-end::: :::row:::
@@ -47,8 +47,10 @@ ___
>[!Video https://www.youtube.com/embed/JpeMeTjQJ04] :::column-end::: :::column:::
+ <a href="https://www.youtube.com/watch?v=7_vxnHiUA1M" target="_blank">Modern authentication: how we got here – Microsoft identity platform</a>(15:47)
:::column-end::: :::column:::
+ > [!Video https://www.youtube.com/embed/7_vxnHiUA1M]
:::column-end::: :::row-end:::
active-directory https://docs.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-delegate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate.md
@@ -133,9 +133,6 @@ For a user who is not a Global administrator or a User administrator, to add gro
| [Cloud application administrator](../roles/permissions-reference.md) | Catalog owner | | | :heavy_check_mark: | | | User | Catalog owner | Only if group owner | Only if group owner | Only if app owner | |
-> [!NOTE]
-> If a user adds a security group or Microsoft 365 group, then the group can't be role-assignable. If the user adds a group that is role-assignable when they create the access package, then they must also be the owner of that role-assignable group. For more information, reference [Create a role-assignable group in Azure Active Directory](../roles/groups-create-eligible.md).
- To determine the least privileged role for a task, you can also reference [Administrator roles by admin task in Azure Active Directory](../roles/delegate-by-task.md#entitlement-management). ## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-troubleshoot.md
@@ -3,7 +3,7 @@ Title: Troubleshoot entitlement management - Azure AD
description: Learn about some items you should check to help you troubleshoot Azure Active Directory entitlement management. documentationCenter: ''-+ editor: markwahl-msft
@@ -13,7 +13,7 @@ ms.devlang: na
Last updated 12/23/2020-+
@@ -45,7 +45,6 @@ This article describes some items you should check to help you troubleshoot Azur
* When you remove a member of a team, they are removed from the Microsoft 365 Group as well. Removal from the team's chat functionality might be delayed. For more information, see [Group membership](/microsoftteams/office-365-groups#group-membership).
-* Ensure your directory is not configured for multi-geo. Entitlement management currently does not support multi-geo locations for SharePoint Online. SharePoint Online sites must be in the default geo-location to be governed with entitlement management. For more information, see [Multi-Geo Capabilities in OneDrive and SharePoint Online](/Microsoft 365/Enterprise/multi-geo-capabilities-in-onedrive-and-sharepoint-online-in-office-365).
## Access packages
@@ -151,4 +150,4 @@ You can only cancel a pending request that has not yet been delivered or whose d
## Next steps - [Govern access for external users](entitlement-management-external-users.md)-- [View reports of how users got access in entitlement management](entitlement-management-reports.md)
+- [View reports of how users got access in entitlement management](entitlement-management-reports.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
@@ -78,11 +78,11 @@ The [Privileged authentication administrator](#privileged-authentication-adminis
The [Authentication policy administrator](#authentication-policy-administrator) role has permissions to set the tenant's authentication method policy that determines which methods each user can register and use.
-| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
+| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
| - | - | - | - | - | - | | Authentication administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No |
-| Privileged authentication administrator| Yes for all users | Yes for all users |No | No |No |
-| Authentication policy administrator | No |No | Yes | Yes | Yes |
+| Privileged authentication administrator| Yes for all users | Yes for all users | No | No | No |
+| Authentication policy administrator | No |No | Yes | Yes | Yes |
> [!IMPORTANT] > Users with this role can change credentials for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the credentials of a user may mean the ability to assume that user's identity and permissions. For example:
@@ -102,11 +102,11 @@ Users with this role can configure the authentication methods policy, tenant-wid
The [Authentication administrator](#authentication-administrator) and [Privileged authentication administrator](#privileged-authentication-administrator) roles have permission to manage registered authentication methods on users and can force re-registration and multi-factor authentication for all users.
-| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
+| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
| - | - | - | - | - | - | | Authentication administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No |
-| Privileged authentication administrator| Yes for all users | Yes for all users |No | No |No |
-| Authentication policy administrator | No |No | Yes | Yes | Yes |
+| Privileged authentication administrator| Yes for all users | Yes for all users | No | No | No |
+| Authentication policy administrator | No | No | Yes | Yes | Yes |
> [!IMPORTANT] > This role is not currently capable of managing MFA settings in the legacy MFA management portal.
@@ -237,7 +237,7 @@ Users with this role add or delete custom attributes available to all user flows
This administrator manages federation between Azure AD organizations and external identity providers. With this role, users can add new identity providers and configure all available settings (e.g. authentication path, service ID, assigned key containers). This user can enable the Azure AD organization to trust authentications from external identity providers. The resulting impact on end-user experiences depends on the type of organization:
-* Azure AD organizations for employees and partners: The addition  of a federation (e.g. with Gmail) will immediately impact all guest invitations not yet redeemed. See [Adding Google as an identity provider for B2B guest users](../external-identities/google-federation.md).
+* Azure AD organizations for employees and partners: The addition of a federation (e.g. with Gmail) will immediately impact all guest invitations not yet redeemed. See [Adding Google as an identity provider for B2B guest users](../external-identities/google-federation.md).
* Azure Active Directory B2C organizations: The addition of a federation (for example, with Facebook, or with another Azure AD organization) does not immediately impact end-user flows until the identity provider is added as an option in a user flow (also called a built-in policy). See [Configuring a Microsoft account as an identity provider](../../active-directory-b2c/identity-provider-microsoft-account.md) for an example. To change user flows, the limited role of "B2C User Flow Administrator" is required. ### [Global Administrator](#global-administrator-permissions)
@@ -288,7 +288,7 @@ This role was previously called "Password Administrator" in the [Azure portal](h
### [Hybrid Identity Administrator](#hybrid-identity-administrator-permissions)
-Users in this role can create, manage and deploy provisioning configuration setup from AD to Azure AD using Cloud Provisioning as well as manage federation settings. Users can also troubleshoot and monitor logs using this role.
+Users in this role can create, manage and deploy provisioning configuration setup from AD to Azure AD using Cloud Provisioning as well as manage federation settings. Users can also troubleshoot and monitor logs using this role.
### [Insights Administrator](#insights-administrator-permissions) Users in this role can access the full set of administrative capabilities in the [M365 Insights application](https://go.microsoft.com/fwlink/?linkid=2129521). This role has the ability to read directory information, monitor service health, file support tickets, and access the Insights admin settings aspects.
@@ -331,10 +331,10 @@ Users with the Modern Commerce User role typically have administrative permissio
**When is the Modern Commerce User role assigned?**
-* **Self-service purchase in Microsoft 365 admin center** ΓÇô Self-service purchase gives users a chance to try out new products by buying or signing up for them on their own. These products are managed in the admin center. Users who make a self-service purchase are assigned a role in the commerce system, and the Modern Commerce User role so they can manage their purchases in admin center. Admins can block self-service purchases (for Power BI, Power Apps, Power automate) through [PowerShell](/microsoft-365/commerce/subscriptions/allowselfservicepurchase-powershell). For more information, see [Self-service purchase FAQ](/microsoft-365/commerce/subscriptions/self-service-purchase-faq).
-* **Purchases from Microsoft commercial marketplace** ΓÇô Similar to self-service purchase, when a user buys a product or service from Microsoft AppSource or Azure Marketplace, the Modern Commerce User role is assigned if they donΓÇÖt have the Global Administrator or Billing admin role. In some cases, users might be blocked from making these purchases. For more information, see [Microsoft commercial marketplace](../../marketplace/marketplace-faq-publisher-guide.md#what-could-block-a-customer-from-completing-a-purchase).
-* **Proposals from Microsoft** ΓÇô A proposal is a formal offer from Microsoft for your organization to buy Microsoft products and services. When the person who is accepting the proposal doesnΓÇÖt have a Global Administrator or Billing admin role in Azure AD, they are assigned both a commerce-specific role to complete the proposal and the Modern Commerce User role to access admin center. When they access the admin center they can only use features that are authorized by their commerce-specific role.
-* **Commerce-specific roles** ΓÇô Some users are assigned commerce-specific roles. If a user isn't a Global or Billing admin, they get the Modern Commerce User role so they can access the admin center.
+* **Self-service purchase in Microsoft 365 admin center** ΓÇô Self-service purchase gives users a chance to try out new products by buying or signing up for them on their own. These products are managed in the admin center. Users who make a self-service purchase are assigned a role in the commerce system, and the Modern Commerce User role so they can manage their purchases in admin center. Admins can block self-service purchases (for Power BI, Power Apps, Power automate) through [PowerShell](/microsoft-365/commerce/subscriptions/allowselfservicepurchase-powershell). For more information, see [Self-service purchase FAQ](/microsoft-365/commerce/subscriptions/self-service-purchase-faq).
+* **Purchases from Microsoft commercial marketplace** ΓÇô Similar to self-service purchase, when a user buys a product or service from Microsoft AppSource or Azure Marketplace, the Modern Commerce User role is assigned if they donΓÇÖt have the Global Administrator or Billing admin role. In some cases, users might be blocked from making these purchases. For more information, see [Microsoft commercial marketplace](../../marketplace/marketplace-faq-publisher-guide.md#what-could-block-a-customer-from-completing-a-purchase).
+* **Proposals from Microsoft** ΓÇô A proposal is a formal offer from Microsoft for your organization to buy Microsoft products and services. When the person who is accepting the proposal doesnΓÇÖt have a Global Administrator or Billing admin role in Azure AD, they are assigned both a commerce-specific role to complete the proposal and the Modern Commerce User role to access admin center. When they access the admin center they can only use features that are authorized by their commerce-specific role.
+* **Commerce-specific roles** ΓÇô Some users are assigned commerce-specific roles. If a user isn't a Global or Billing admin, they get the Modern Commerce User role so they can access the admin center.
If the Modern Commerce User role is unassigned from a user, they lose access to Microsoft 365 admin center. If they were managing any products, either for themselves or for your organization, they wonΓÇÖt be able to manage them. This might include assigning licenses, changing payment methods, paying bills, or other tasks for managing subscriptions.
@@ -384,11 +384,11 @@ The [Authentication administrator](#authentication-administrator) role has permi
The [Authentication policy administrator](#authentication-policy-administrator) role has permissions to set the tenant's authentication method policy that determines which methods each user can register and use.
-| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
+| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
| - | - | - | - | - | - | | Authentication administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No |
-| Privileged authentication administrator| Yes for all users | Yes for all users |No | No |No |
-| Authentication policy administrator | No |No | Yes | Yes | Yes |
+| Privileged authentication administrator| Yes for all users | Yes for all users | No | No | No |
+| Authentication policy administrator | No | No | Yes | Yes | Yes |
> [!IMPORTANT] > Users with this role can change credentials for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the credentials of a user may mean the ability to assume that user's identity and permissions. For example:
@@ -713,7 +713,7 @@ Manage secrets for federation and encryption in the Identity Experience Framewor
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.aad.b2c/trustFramework/keySets/allTasks | Read and configure key sets in  Azure Active Directory B2C. |
+> | microsoft.aad.b2c/trustFramework/keySets/allTasks | Read and configure key sets in Azure Active Directory B2C. |
### B2C IEF Policy Administrator permissions
@@ -722,7 +722,7 @@ Create and manage trust framework policies in the Identity Experience Framework.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.aad.b2c/trustFramework/policies/allTasks | Read and configure custom policies in  Azure Active Directory B2C. |
+> | microsoft.aad.b2c/trustFramework/policies/allTasks | Read and configure custom policies in Azure Active Directory B2C. |
### Billing Administrator permissions
@@ -987,7 +987,7 @@ Can read & write basic directory information. For granting access to application
> | | | > | microsoft.directory/groups/appRoleAssignments/update | Update groups.appRoleAssignments property in Azure Active Directory. | > | microsoft.directory/groups/assignLicense | Manage licenses on groups in Azure Active Directory. |
-> | microsoft.directory/groups/basic/update | Update basic properties on groups in Azure Active Directory.  |
+> | microsoft.directory/groups/basic/update | Update basic properties on groups in Azure Active Directory. |
> | microsoft.directory/groups/classification/update | Update classification property of the group in Azure Active Directory. | > | microsoft.directory/groups/create | Create groups in Azure Active Directory. | > | microsoft.directory/groups/groupType/update | Update the groupType property of a group in Azure Active Directory. |
@@ -1076,7 +1076,7 @@ Create and manage all aspects of user flows.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.aad.b2c/userFlows/allTasks | Read and configure user flows in  Azure Active Directory B2C. |
+> | microsoft.aad.b2c/userFlows/allTasks | Read and configure user flows in Azure Active Directory B2C. |
### External ID User Flow Attribute Administrator permissions
@@ -1085,7 +1085,7 @@ Create and manage the attribute schema available to all user flows.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.aad.b2c/userAttributes/allTasks | Read and configure user attributes in  Azure Active Directory B2C. |
+> | microsoft.aad.b2c/userAttributes/allTasks | Read and configure user attributes in Azure Active Directory B2C. |
### External Identity Provider Administrator permissions
@@ -1094,7 +1094,7 @@ Configure identity providers for use in direct federation.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.aad.b2c/identityProviders/allTasks | Read and configure identity providers in  Azure Active Directory B2C. |
+> | microsoft.aad.b2c/identityProviders/allTasks | Read and configure identity providers in Azure Active Directory B2C. |
### Global Administrator permissions
@@ -1178,69 +1178,69 @@ Can read everything that a Global Administrator can, but not edit anything.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.commerce.billing/allEntities/read | Read all aspects of billing. |
-> | microsoft.directory/administrativeUnits/basic/read | Read basic properties on administrativeUnits in Azure Active Directory. |
-> | microsoft.directory/administrativeUnits/members/read | Read administrativeUnits.members property in Azure Active Directory. |
-> | microsoft.directory/applications/basic/read | Read basic properties on applications in Azure Active Directory. |
-> | microsoft.directory/applications/owners/read | Read applications.owners property in Azure Active Directory. |
-> | microsoft.directory/applications/policies/read | Read applications.policies property in Azure Active Directory. |
+> | microsoft.commerce.billing/allEntities/read | Read all aspects of billing. |
+> | microsoft.directory/administrativeUnits/basic/read | Read basic properties on administrativeUnits in Azure Active Directory. |
+> | microsoft.directory/administrativeUnits/members/read | Read administrativeUnits.members property in Azure Active Directory. |
+> | microsoft.directory/applications/basic/read | Read basic properties on applications in Azure Active Directory. |
+> | microsoft.directory/applications/owners/read | Read applications.owners property in Azure Active Directory. |
+> | microsoft.directory/applications/policies/read | Read applications.policies property in Azure Active Directory. |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker key objects and properties (including recovery key) in Azure Active Directory. |
-> | microsoft.directory/contacts/basic/read | Read basic properties on contacts in Azure Active Directory. |
-> | microsoft.directory/contacts/memberOf/read | Read contacts.memberOf property in Azure Active Directory. |
-> | microsoft.directory/contracts/basic/read | Read basic properties on contracts in Azure Active Directory. |
-> | microsoft.directory/devices/basic/read | Read basic properties on devices in Azure Active Directory. |
-> | microsoft.directory/devices/memberOf/read | Read devices.memberOf property in Azure Active Directory. |
-> | microsoft.directory/devices/registeredOwners/read | Read devices.registeredOwners property in Azure Active Directory. |
-> | microsoft.directory/devices/registeredUsers/read | Read devices.registeredUsers property in Azure Active Directory. |
-> | microsoft.directory/directoryRoles/basic/read | Read basic properties on directoryRoles in Azure Active Directory. |
-> | microsoft.directory/directoryRoles/eligibleMembers/read | Read directoryRoles.eligibleMembers property in Azure Active Directory. |
-> | microsoft.directory/directoryRoles/members/read | Read directoryRoles.members property in Azure Active Directory. |
-> | microsoft.directory/domains/basic/read | Read basic properties on domains in Azure Active Directory. |
+> | microsoft.directory/contacts/basic/read | Read basic properties on contacts in Azure Active Directory. |
+> | microsoft.directory/contacts/memberOf/read | Read contacts.memberOf property in Azure Active Directory. |
+> | microsoft.directory/contracts/basic/read | Read basic properties on contracts in Azure Active Directory. |
+> | microsoft.directory/devices/basic/read | Read basic properties on devices in Azure Active Directory. |
+> | microsoft.directory/devices/memberOf/read | Read devices.memberOf property in Azure Active Directory. |
+> | microsoft.directory/devices/registeredOwners/read | Read devices.registeredOwners property in Azure Active Directory. |
+> | microsoft.directory/devices/registeredUsers/read | Read devices.registeredUsers property in Azure Active Directory. |
+> | microsoft.directory/directoryRoles/basic/read | Read basic properties on directoryRoles in Azure Active Directory. |
+> | microsoft.directory/directoryRoles/eligibleMembers/read | Read directoryRoles.eligibleMembers property in Azure Active Directory. |
+> | microsoft.directory/directoryRoles/members/read | Read directoryRoles.members property in Azure Active Directory. |
+> | microsoft.directory/domains/basic/read | Read basic properties on domains in Azure Active Directory. |
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management. |
-> | microsoft.directory/groups/appRoleAssignments/read | Read groups.appRoleAssignments property in Azure Active Directory. |
-> | microsoft.directory/groups/basic/read | Read basic properties on groups in Azure Active Directory. |
-> | microsoft.directory/groups/hiddenMembers/read | Read groups.hiddenMembers property in Azure Active Directory. |
-> | microsoft.directory/groups/memberOf/read | Read groups.memberOf property in Azure Active Directory. |
-> | microsoft.directory/groups/members/read | Read groups.members property in Azure Active Directory. |
-> | microsoft.directory/groups/owners/read | Read groups.owners property in Azure Active Directory. |
-> | microsoft.directory/groups/settings/read | Read groups.settings property in Azure Active Directory. |
-> | microsoft.directory/groupSettings/basic/read | Read basic properties on groupSettings in Azure Active Directory. |
-> | microsoft.directory/groupSettingTemplates/basic/read | Read basic properties on groupSettingTemplates in Azure Active Directory. |
-> | microsoft.directory/oAuth2PermissionGrants/basic/read | Read basic properties on oAuth2PermissionGrants in Azure Active Directory. |
-> | microsoft.directory/organization/basic/read | Read basic properties on organization in Azure Active Directory. |
-> | microsoft.directory/organization/trustedCAsForPasswordlessAuth/read | Read organization.trustedCAsForPasswordlessAuth property in Azure Active Directory. |
-> | microsoft.directory/policies/standard/read | Read standard policies in Azure Active Directory. |
+> | microsoft.directory/groups/appRoleAssignments/read | Read groups.appRoleAssignments property in Azure Active Directory. |
+> | microsoft.directory/groups/basic/read | Read basic properties on groups in Azure Active Directory. |
+> | microsoft.directory/groups/hiddenMembers/read | Read groups.hiddenMembers property in Azure Active Directory. |
+> | microsoft.directory/groups/memberOf/read | Read groups.memberOf property in Azure Active Directory. |
+> | microsoft.directory/groups/members/read | Read groups.members property in Azure Active Directory. |
+> | microsoft.directory/groups/owners/read | Read groups.owners property in Azure Active Directory. |
+> | microsoft.directory/groups/settings/read | Read groups.settings property in Azure Active Directory. |
+> | microsoft.directory/groupSettings/basic/read | Read basic properties on groupSettings in Azure Active Directory. |
+> | microsoft.directory/groupSettingTemplates/basic/read | Read basic properties on groupSettingTemplates in Azure Active Directory. |
+> | microsoft.directory/oAuth2PermissionGrants/basic/read | Read basic properties on oAuth2PermissionGrants in Azure Active Directory. |
+> | microsoft.directory/organization/basic/read | Read basic properties on organization in Azure Active Directory. |
+> | microsoft.directory/organization/trustedCAsForPasswordlessAuth/read | Read organization.trustedCAsForPasswordlessAuth property in Azure Active Directory. |
+> | microsoft.directory/policies/standard/read | Read standard policies in Azure Active Directory. |
> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs. |
-> | microsoft.directory/roleAssignments/basic/read | Read basic properties on roleAssignments in Azure Active Directory. |
-> | microsoft.directory/roleDefinitions/basic/read | Read basic properties on roleDefinitions in Azure Active Directory. |
-> | microsoft.directory/servicePrincipals/appRoleAssignedTo/read | Read servicePrincipals.appRoleAssignedTo property in Azure Active Directory. |
-> | microsoft.directory/servicePrincipals/appRoleAssignments/read | Read servicePrincipals.appRoleAssignments property in Azure Active Directory. |
-> | microsoft.directory/servicePrincipals/basic/read | Read basic properties on servicePrincipals in Azure Active Directory. |
-> | microsoft.directory/servicePrincipals/memberOf/read | Read servicePrincipals.memberOf property in Azure Active Directory. |
-> | microsoft.directory/servicePrincipals/oAuth2PermissionGrants/basic/read | Read servicePrincipals.oAuth2PermissionGrants property in Azure Active Directory. |
-> | microsoft.directory/servicePrincipals/ownedObjects/read | Read servicePrincipals.ownedObjects property in Azure Active Directory. |
-> | microsoft.directory/servicePrincipals/owners/read | Read servicePrincipals.owners property in Azure Active Directory. |
-> | microsoft.directory/servicePrincipals/policies/read | Read servicePrincipals.policies property in Azure Active Directory. |
-> | microsoft.directory/signInReports/allProperties/read | Read all properties (including privileged properties) on signInReports in Azure Active Directory. |
-> | microsoft.directory/subscribedSkus/basic/read | Read basic properties on subscribedSkus in Azure Active Directory. |
-> | microsoft.directory/users/appRoleAssignments/read | Read users.appRoleAssignments property in Azure Active Directory. |
-> | microsoft.directory/users/basic/read | Read basic properties on users in Azure Active Directory. |
-> | microsoft.directory/users/directReports/read | Read users.directReports property in Azure Active Directory. |
-> | microsoft.directory/users/manager/read | Read users.manager property in Azure Active Directory. |
-> | microsoft.directory/users/memberOf/read | Read users.memberOf property in Azure Active Directory. |
-> | microsoft.directory/users/oAuth2PermissionGrants/basic/read | Read users.oAuth2PermissionGrants property in Azure Active Directory. |
-> | microsoft.directory/users/ownedDevices/read | Read users.ownedDevices property in Azure Active Directory. |
-> | microsoft.directory/users/ownedObjects/read | Read users.ownedObjects property in Azure Active Directory. |
-> | microsoft.directory/users/registeredDevices/read | Read users.registeredDevices property in Azure Active Directory. |
-> | microsoft.directory/users/strongAuthentication/read | Read strong authentication properties like MFA credential information. |
-> | microsoft.office365.exchange/allEntities/read | Read all aspects of Exchange Online. |
-> | microsoft.office365.messageCenter/messages/read | Read messages in microsoft.office365.messageCenter. |
-> | microsoft.office365.messageCenter/securityMessages/read | Read securityMessages in microsoft.office365.messageCenter. |
+> | microsoft.directory/roleAssignments/basic/read | Read basic properties on roleAssignments in Azure Active Directory. |
+> | microsoft.directory/roleDefinitions/basic/read | Read basic properties on roleDefinitions in Azure Active Directory. |
+> | microsoft.directory/servicePrincipals/appRoleAssignedTo/read | Read servicePrincipals.appRoleAssignedTo property in Azure Active Directory. |
+> | microsoft.directory/servicePrincipals/appRoleAssignments/read | Read servicePrincipals.appRoleAssignments property in Azure Active Directory. |
+> | microsoft.directory/servicePrincipals/basic/read | Read basic properties on servicePrincipals in Azure Active Directory. |
+> | microsoft.directory/servicePrincipals/memberOf/read | Read servicePrincipals.memberOf property in Azure Active Directory. |
+> | microsoft.directory/servicePrincipals/oAuth2PermissionGrants/basic/read | Read servicePrincipals.oAuth2PermissionGrants property in Azure Active Directory. |
+> | microsoft.directory/servicePrincipals/ownedObjects/read | Read servicePrincipals.ownedObjects property in Azure Active Directory. |
+> | microsoft.directory/servicePrincipals/owners/read | Read servicePrincipals.owners property in Azure Active Directory. |
+> | microsoft.directory/servicePrincipals/policies/read | Read servicePrincipals.policies property in Azure Active Directory. |
+> | microsoft.directory/signInReports/allProperties/read | Read all properties (including privileged properties) on signInReports in Azure Active Directory. |
+> | microsoft.directory/subscribedSkus/basic/read | Read basic properties on subscribedSkus in Azure Active Directory. |
+> | microsoft.directory/users/appRoleAssignments/read | Read users.appRoleAssignments property in Azure Active Directory. |
+> | microsoft.directory/users/basic/read | Read basic properties on users in Azure Active Directory. |
+> | microsoft.directory/users/directReports/read | Read users.directReports property in Azure Active Directory. |
+> | microsoft.directory/users/manager/read | Read users.manager property in Azure Active Directory. |
+> | microsoft.directory/users/memberOf/read | Read users.memberOf property in Azure Active Directory. |
+> | microsoft.directory/users/oAuth2PermissionGrants/basic/read | Read users.oAuth2PermissionGrants property in Azure Active Directory. |
+> | microsoft.directory/users/ownedDevices/read | Read users.ownedDevices property in Azure Active Directory. |
+> | microsoft.directory/users/ownedObjects/read | Read users.ownedObjects property in Azure Active Directory. |
+> | microsoft.directory/users/registeredDevices/read | Read users.registeredDevices property in Azure Active Directory. |
+> | microsoft.directory/users/strongAuthentication/read | Read strong authentication properties like MFA credential information. |
+> | microsoft.office365.exchange/allEntities/read | Read all aspects of Exchange Online. |
+> | microsoft.office365.messageCenter/messages/read | Read messages in microsoft.office365.messageCenter. |
+> | microsoft.office365.messageCenter/securityMessages/read | Read securityMessages in microsoft.office365.messageCenter. |
> | microsoft.office365.network/performance/allProperties/read | Read network performance pages in Microsoft 365 Admin Center. |
-> | microsoft.office365.protectionCenter/allEntities/read | Read all aspects of Office 365 Protection Center. |
-> | microsoft.office365.securityComplianceCenter/allEntities/read | Read all standard properties in microsoft.office365.securityComplianceCenter. |
-> | microsoft.office365.usageReports/allEntities/read | Read Office 365 usage reports. |
-> | microsoft.office365.webPortal/allEntities/standard/read | Read standard properties on all resources in microsoft.office365.webPortal. |
+> | microsoft.office365.protectionCenter/allEntities/read | Read all aspects of Office 365 Protection Center. |
+> | microsoft.office365.securityComplianceCenter/allEntities/read | Read all standard properties in microsoft.office365.securityComplianceCenter. |
+> | microsoft.office365.usageReports/allEntities/read | Read Office 365 usage reports. |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read standard properties on all resources in microsoft.office365.webPortal. |
### Groups Administrator permissions
@@ -1308,8 +1308,8 @@ Can manage AD to Azure AD cloud provisioning and federation settings.
> | | |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
-> | microsoft.directory/applications/audience/update | Update applications.audience property in Azure Active Directory. |
-> | microsoft.directory/applications/authentication/update | Update applications.authentication property in Azure Active Directory. |
+> | microsoft.directory/applications/audience/update | Update applications.audience property in Azure Active Directory. |
+> | microsoft.directory/applications/authentication/update | Update applications.authentication property in Azure Active Directory. |
> | microsoft.directory/applications/basic/update | Update basic properties on applications in Azure Active Directory. |
> | microsoft.directory/applications/create | Create applications in Azure Active Directory. |
> | microsoft.directory/applications/credentials/update | Update applications.credentials property in Azure Active Directory. |
@@ -1486,7 +1486,7 @@ Can manage network locations and review enterprise network design insights for M
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | microsoft.office365.network/performance/allProperties/read | Read network performance pages in M365 Admin Center. |
+> | microsoft.office365.network/performance/allProperties/read | Read network performance pages in M365 Admin Center. |
> | microsoft.office365.network/locations/allProperties/allTasks | Read and configure network locations properties for each location. |

### Office Apps Administrator permissions
@@ -1891,7 +1891,7 @@ Can manage all aspects of the Skype for Business product.
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
> | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online. |
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
-> | microsoft.office365.usageReports/allEntities/read | Read Office 365 usage reports. |
+> | microsoft.office365.usageReports/allEntities/read | Read Office 365 usage reports. |
> | microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |

### Teams Administrator permissions
@@ -1984,7 +1984,7 @@ Can perform management related tasks on Teams certified devices.
> | Actions | Description |
> | | |
> | microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
-> | microsoft.teams/devices/basic/read | Manage all aspects of Teams-certified devices including configuration policies. |
+> | microsoft.teams/devices/basic/read | Manage all aspects of Teams-certified devices including configuration policies. |
### Usage Summary Reports Reader permissions
@@ -2139,7 +2139,7 @@ Device Join | Deprecated | [Deprecated roles documentation](permissions-referenc
Device Managers | Deprecated | [Deprecated roles documentation](permissions-reference.md#deprecated-roles)
Device Users | Deprecated | [Deprecated roles documentation](permissions-reference.md#deprecated-roles)
Directory Synchronization Accounts | Not shown because it shouldn't be used | [Directory Synchronization Accounts documentation](permissions-reference.md#directory-synchronization-accounts)
-Guest User | Not shown because it can't be used | NA
+Guest User | Not shown because it can't be used | NA
Partner Tier 1 Support | Not shown because it shouldn't be used | [Partner Tier1 Support documentation](permissions-reference.md#partner-tier1-support)
Partner Tier 2 Support | Not shown because it shouldn't be used | [Partner Tier2 Support documentation](permissions-reference.md#partner-tier2-support)
Restricted Guest User | Not shown because it can't be used | NA
aks https://docs.microsoft.com/en-us/azure/aks/cluster-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-configuration.md
@@ -3,7 +3,7 @@ Title: Cluster configuration in Azure Kubernetes Services (AKS)
description: Learn how to configure a cluster in Azure Kubernetes Service (AKS) Previously updated : 01/13/2020 Last updated : 02/09/2020
@@ -14,13 +14,19 @@ As part of creating an AKS cluster, you may need to customize your cluster confi
## OS configuration
-AKS now supports Ubuntu 18.04 as the node operating system (OS) in general availability for clusters in kubernetes versions higher than 1.18.8. For versions below 1.18.x, AKS Ubuntu 16.04 is still the default base image. From kubernetes v1.18.x and onward, the default base is AKS Ubuntu 18.04.
+AKS now supports Ubuntu 18.04 as the default node operating system (OS) in general availability (GA) for clusters running Kubernetes versions higher than 1.18. For versions below 1.18, AKS Ubuntu 16.04 is still the default base image. From Kubernetes v1.18 and higher, the default base image is AKS Ubuntu 18.04.
-### Use AKS Ubuntu 18.04 Generally Available on new clusters
+> [!IMPORTANT]
+> Node pools created on Kubernetes v1.18 or greater default to `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the node pool Kubernetes version is updated to v1.18 or greater.
+>
+> It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater.
++
+### Use AKS Ubuntu 18.04 (GA) on new clusters
Clusters created on Kubernetes v1.18 or greater default to `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 will still receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the cluster or node pool Kubernetes version is updated to v1.18 or greater.
-It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater. Read about how to [test Ubuntu 18.04 node pools](#test-aks-ubuntu-1804-generally-available-on-existing-clusters).
+It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater.
To create a cluster using the `AKS Ubuntu 18.04` node image, create a cluster running Kubernetes v1.18 or greater, as shown below.
@@ -28,11 +34,11 @@ To create a cluster using `AKS Ubuntu 18.04` node image, simply create a cluster
az aks create --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14 ```
-### Use AKS Ubuntu 18.04 Generally Available on existing clusters
+### Use AKS Ubuntu 18.04 (GA) on existing clusters
Clusters created on Kubernetes v1.18 or greater default to `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 will still receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the cluster or node pool Kubernetes version is updated to v1.18 or greater.
-It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater. Read about how to [test Ubuntu 18.04 node pools](#test-aks-ubuntu-1804-generally-available-on-existing-clusters).
+It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater.
If your clusters or node pools are ready for the `AKS Ubuntu 18.04` node image, you can upgrade them to v1.18 or higher as shown below.
@@ -46,7 +52,7 @@ If you want to upgrade just one node pool:
az aks nodepool upgrade --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14 ```
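After an upgrade like the one above, one way to confirm which node image a pool is running is to query the pool's `nodeImageVersion` property. This is a quick check, assuming the same cluster and node pool names used in the earlier examples:

```azurecli
az aks nodepool show --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --query nodeImageVersion
```

A value beginning with `AKSUbuntu-1804` indicates the pool has moved to the 18.04 image.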
-### Test AKS Ubuntu 18.04 Generally Available on existing clusters
+### Test AKS Ubuntu 18.04 (GA) on existing clusters
Node pools created on Kubernetes v1.18 or greater default to `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 will still receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the node pool Kubernetes version is updated to v1.18 or greater.
@@ -61,58 +67,6 @@ az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes
az aks nodepool add --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14 ```
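Once a test pool has been added, comparing the Kubernetes version and node image across all pools in a table view can help confirm the rollout. This is a sketch reusing the cluster name from the examples above; exact property names may vary slightly by CLI version:

```azurecli
az aks nodepool list --cluster-name myAKSCluster --resource-group myResourceGroup --query "[].{Name:name, K8sVersion:orchestratorVersion, NodeImage:nodeImageVersion}" -o table
```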
-### Use AKS Ubuntu 18.04 on new clusters (Preview)
-
-The following section will explain how you can use and test AKS Ubuntu 18.04 on clusters that aren't yet using a kubernetes version 1.18.x or higher, or were created before this feature became generally available, by using the OS configuration preview.
-
-You must have the following resources installed:
-
-- [The Azure CLI][azure-cli-install], version 2.2.0 or later
-- The aks-preview 0.4.35 extension
-
-To install the aks-preview 0.4.35 extension or later, use the following Azure CLI commands:
-
-```azurecli
-az extension add --name aks-preview
-az extension list
-```
-
-Register the `UseCustomizedUbuntuPreview` feature:
-
-```azurecli
-az feature register --name UseCustomizedUbuntuPreview --namespace Microsoft.ContainerService
-```
-
-It might take several minutes for the status to show as **Registered**. You can check the registration status by using the [az feature list](/cli/azure/feature#az-feature-list) command:
-
-```azurecli
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/UseCustomizedUbuntuPreview')].{Name:name,State:properties.state}"
-```
-
-When the status shows as registered, refresh the registration of the `Microsoft.ContainerService` resource provider by using the [az provider register](/cli/azure/provider#az-provider-register) command:
-
-```azurecli
-az provider register --namespace Microsoft.ContainerService
-```
-
-Configure the cluster to use Ubuntu 18.04 when the cluster is created. Use the `--aks-custom-headers` flag to set the Ubuntu 18.04 as the default OS.
-
-```azurecli
-az aks create --name myAKSCluster --resource-group myResourceGroup --aks-custom-headers CustomizedUbuntu=aks-ubuntu-1804
-```
-
-If you want to create clusters with the AKS Ubuntu 16.04 image, you can do so by omitting the custom `--aks-custom-headers` tag.
-
-### Use AKS Ubuntu 18.04 existing clusters (Preview)
-
-Configure a new node pool to use Ubuntu 18.04. Use the `--aks-custom-headers` flag to set the Ubuntu 18.04 as the default OS for that node pool.
-
-```azurecli
-az aks nodepool add --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --aks-custom-headers CustomizedUbuntu=aks-ubuntu-1804
-```
-
-If you want to create node pools with the AKS Ubuntu 16.04 image, you can do so by omitting the custom `--aks-custom-headers` tag.
-
## Container runtime configuration

A container runtime is software that executes containers and manages container images on a node. The runtime helps abstract away sys-calls or operating system (OS) specific functionality to run containers on Linux or Windows. AKS clusters using Kubernetes version 1.19 and greater for node pools use `containerd` as the container runtime. AKS clusters using Kubernetes versions prior to v1.19 for node pools use [Moby](https://mobyproject.org/) (upstream docker) as the container runtime.
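One way to see which runtime each node is actually using is the `CONTAINER-RUNTIME` column of `kubectl get nodes -o wide`. This sketch assumes the cluster name from the earlier examples and that `kubectl` is installed:

```azurecli
az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
kubectl get nodes -o wide
```

Nodes in v1.19+ pools report a runtime of the form `containerd://...`, while earlier pools report `docker://...`.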
@@ -134,69 +88,6 @@ By using `containerd` for AKS nodes, pod startup latency improves and node resou
> > It is highly recommended to test your workloads on AKS node pools with `containerD` prior to using clusters on 1.19 or greater.
-The following section will explain how you can use and test AKS with `containerD` on clusters that aren't yet using a Kubernetes version 1.19 or higher, or were created before this feature became generally available, by using the container runtime configuration preview.
-
-### Use `containerd` as your container runtime (preview)
-
-You must have the following pre-requisites:
-
-- [The Azure CLI][azure-cli-install], version 2.8.0 or later installed
-- The aks-preview extension version 0.4.53 or later
-- The `UseCustomizedContainerRuntime` feature flag registered
-- The `UseCustomizedUbuntuPreview` feature flag registered
-
-To install the aks-preview 0.4.53 extension or later, use the following Azure CLI commands:
-
-```azurecli
-az extension add --name aks-preview
-az extension list
-```
-
-Register the `UseCustomizedContainerRuntime` and `UseCustomizedUbuntuPreview` features:
-
-```azurecli
-az feature register --name UseCustomizedContainerRuntime --namespace Microsoft.ContainerService
-az feature register --name UseCustomizedUbuntuPreview --namespace Microsoft.ContainerService
-
-```
-
-It might take several minutes for the status to show as **Registered**. You can check the registration status by using the [az feature list](/cli/azure/feature#az-feature-list) command:
-
-```azurecli
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/UseCustomizedContainerRuntime')].{Name:name,State:properties.state}"
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/UseCustomizedUbuntuPreview')].{Name:name,State:properties.state}"
-```
-
-When the status shows as registered, refresh the registration of the `Microsoft.ContainerService` resource provider by using the [az provider register](/cli/azure/provider#az-provider-register) command:
-
-```azurecli
-az provider register --namespace Microsoft.ContainerService
-```
-
-### Use `containerd` on new clusters (preview)
-
-Configure the cluster to use `containerd` when the cluster is created. Use the `--aks-custom-headers` flag to set `containerd` as the container runtime.
-
-> [!NOTE]
-> The `containerd` runtime is only supported on nodes and node pools using the AKS Ubuntu 18.04 image.
-
-```azurecli
-az aks create --name myAKSCluster --resource-group myResourceGroup --aks-custom-headers CustomizedUbuntu=aks-ubuntu-1804,ContainerRuntime=containerd
-```
-
-If you want to create clusters with the Moby (docker) runtime, you can do so by omitting the custom `--aks-custom-headers` tag.
-
-### Use `containerd` on existing clusters (preview)
-
-Configure a new node pool to use `containerd`. Use the `--aks-custom-headers` flag to set `containerd` as the runtime for that node pool.
-
-```azurecli
-az aks nodepool add --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --aks-custom-headers CustomizedUbuntu=aks-ubuntu-1804,ContainerRuntime=containerd
-```
-
-If you want to create node pools with the Moby (docker) runtime, you can do so by omitting the custom `--aks-custom-headers` tag.
-
-
### `Containerd` limitations/differences

* To use `containerd` as the container runtime you must use AKS Ubuntu 18.04 as your base OS image.
@@ -208,9 +99,9 @@ If you want to create node pools with the Moby (docker) runtime, you can do so b
* You can no longer access the docker engine, `/var/run/docker.sock`, or use Docker-in-Docker (DinD).
* If you currently extract application logs or monitoring data from Docker Engine, please use something like [Azure Monitor for Containers](../azure-monitor/insights/container-insights-enable-new-cluster.md) instead. Additionally AKS doesn't support running any out of band commands on the agent nodes that could cause instability.
* Even when using Moby/docker, building images and directly leveraging the docker engine via the methods above is strongly discouraged. Kubernetes isn't fully aware of those consumed resources, and those approaches present numerous issues detailed [here](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/) and [here](https://securityboulevard.com/2018/05/escaping-the-whale-things-you-probably-shouldnt-do-with-docker-part-1/), for example.
-* Building images - You can continue to use your current docker build workflow as normal, unless you are building imagages inside your AKS cluster. In this case, please consider switching to the recommended approach for building images using [ACR Tasks](../container-registry/container-registry-quickstart-task-cli.md), or a more secure in-cluster option like [docker buildx](https://github.com/docker/buildx).
+* Building images - You can continue to use your current docker build workflow as normal, unless you are building images inside your AKS cluster. In this case, please consider switching to the recommended approach for building images using [ACR Tasks](../container-registry/container-registry-quickstart-task-cli.md), or a more secure in-cluster option like [docker buildx](https://github.com/docker/buildx).
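As a sketch of the recommended ACR Tasks approach, an image can be built in the registry instead of against a local docker engine. The registry name and image tag here are placeholders:

```azurecli
az acr build --registry myRegistry --image myapp:v1 .
```

The build context (`.`) is uploaded to Azure Container Registry and the build runs there, so no docker engine is needed on the node or workstation.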
-## Generation 2 virtual machines (Preview)
+## Generation 2 virtual machines
Azure supports [Generation 2 (Gen2) virtual machines (VMs)](../virtual-machines/generation-2.md). Generation 2 VMs support key features that aren't supported in generation 1 VMs (Gen1). These features include increased memory, Intel Software Guard Extensions (Intel SGX), and virtualized persistent memory (vPMEM).
@@ -219,59 +110,6 @@ Only specific SKUs and sizes support Gen2 VMs. Check the [list of supported size
Additionally, not all VM images support Gen2; on AKS, Gen2 VMs use the new [AKS Ubuntu 18.04 image](#os-configuration). This image supports all Gen2 SKUs and sizes.
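To check whether a given VM size supports Gen2 in your region, the `HyperVGenerations` capability returned by `az vm list-skus` can be inspected. The location and size below are examples, and the `--size` filter may require a recent Azure CLI version:

```azurecli
az vm list-skus --location eastus --size Standard_D2s_v3 --query "[0].capabilities[?name=='HyperVGenerations'].value" -o tsv
```

A result of `V1,V2` indicates the size supports both generations.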
-To use Gen2 VMs during preview, you'll require:
-- The `aks-preview` CLI extension installed.
-- The `Gen2VMPreview` feature flag registered.
-
-Register the `Gen2VMPreview` feature:
-
-```azurecli
-az feature register --name Gen2VMPreview --namespace Microsoft.ContainerService
-```
-
-It might take several minutes for the status to show as **Registered**. You can check the registration status by using the [az feature list](/cli/azure/feature#az-feature-list) command:
-
-```azurecli
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/Gen2VMPreview')].{Name:name,State:properties.state}"
-```
-
-When the status shows as registered, refresh the registration of the `Microsoft.ContainerService` resource provider by using the [az provider register](/cli/azure/provider#az-provider-register) command:
-
-```azurecli
-az provider register --namespace Microsoft.ContainerService
-```
-
-To install the aks-preview CLI extension, use the following Azure CLI commands:
-
-```azurecli
-az extension add --name aks-preview
-```
-
-To update the aks-preview CLI extension, use the following Azure CLI commands:
-
-```azurecli
-az extension update --name aks-preview
-```
-
-### Use Gen2 VMs on new clusters (Preview)
-Configure the cluster to use Gen2 VMs for the selected SKU when the cluster is created. Use the `--aks-custom-headers` flag to set Gen2 as the VM generation on a new cluster.
-
-```azurecli
-az aks create --name myAKSCluster --resource-group myResourceGroup -s Standard_D2s_v3 --aks-custom-headers usegen2vm=true
-```
-
-If you want to create a regular cluster using Generation 1 (Gen1) VMs, you can do so by omitting the custom `--aks-custom-headers` tag. You can also choose to add more Gen1 or Gen2 VMs as per below.
-
-### Use Gen2 VMs on existing clusters (Preview)
-Configure a new node pool to use Gen2 VMs. Use the `--aks-custom-headers` flag to set Gen2 as the VM generation for that node pool.
-
-```azurecli
-az aks nodepool add --name gen2 --cluster-name myAKSCluster --resource-group myResourceGroup -s Standard_D2s_v3 --aks-custom-headers usegen2vm=true
-```
-
-If you want to create regular Gen1 node pools, you can do so by omitting the custom `--aks-custom-headers` tag.
-
-
## Ephemeral OS

By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss should the VM need to be relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks, including slower node provisioning and higher read/write latency.
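Ephemeral OS can be requested per node pool with the `--node-osdisk-type` flag. This is a sketch reusing the cluster name from the earlier examples; the flag requires a recent Azure CLI version, and the chosen VM size must have a cache large enough to hold the OS disk:

```azurecli
az aks nodepool add --name ephemeral --cluster-name myAKSCluster --resource-group myResourceGroup --node-osdisk-type Ephemeral
```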
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-serialization-and-persistence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-serialization-and-persistence.md
@@ -0,0 +1,159 @@
+
+ Title: Data persistence and serialization in Durable Functions - Azure
+description: Learn how the Durable Functions extension for Azure Functions persists data
++ Last updated : 02/11/2021+
+#Customer intent: As a developer, I want to understand what data is persisted to durable storage, how that data is serialized, and how
+#I can customize it when it doesn't work the way my app needs it to.
++
+# Data persistence and serialization in Durable Functions (Azure Functions)
+
+Durable Functions automatically persists function parameters, return values, and other state to a durable backend in order to provide reliable execution. However, the amount and frequency of data persisted to durable storage can impact application performance and storage transaction costs. Depending on the type of data your application stores, data retention and privacy policies may also need to be considered.
+
+## Azure Storage
+
+By default, Durable Functions persists data to queues, tables, and blobs in an [Azure Storage](https://azure.microsoft.com/services/storage/) account that you specify.
+
+### Queues
+
+Durable Functions uses Azure Storage queues to reliably schedule all function executions. These queue messages contain function inputs or outputs, depending on whether the message is being used to schedule an execution or return a value back to a calling function. These queue messages also include additional metadata that Durable Functions uses for internal purposes, like routing and end-to-end correlation. After a function has finished executing in response to a received message, that message is deleted and the result of the execution may also be persisted to either Azure Storage Tables or Azure Storage Blobs.
+
+Within a single [task hub](durable-functions-task-hubs.md), Durable Functions creates and adds messages to a *work-item* queue named `<taskhub>-workitem` for scheduling activity functions and one or more *control queues* named `<taskhub>-control-##` to schedule or resume orchestrator and entity functions. The number of control queues is equal to the number of partitions configured for your application. For more information about queues and partitions, see the [Performance and Scalability documentation](durable-functions-perf-and-scale.md).
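As an illustration of the naming convention above, the following sketch (a hypothetical helper, not part of any Durable Functions SDK) computes the storage queue names for a given task hub, assuming the default partition count of 4 and that queue names are lowercased:

```python
def storage_queue_names(task_hub: str, partition_count: int) -> list:
    """Queue names Durable Functions creates for a task hub: one
    work-item queue plus one control queue per partition."""
    base = task_hub.lower()  # Azure Storage queue names are lowercase
    names = [f"{base}-workitem"]
    # Control queues carry a zero-padded two-digit partition index.
    names += [f"{base}-control-{i:02d}" for i in range(partition_count)]
    return names

# A task hub named "MyTaskHub" with the default partition count of 4:
queues = storage_queue_names("MyTaskHub", 4)
```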
+
+### Tables
+
+Once orchestrations process messages successfully, records of their resulting actions are persisted to the *History* table named `<taskhub>History`. Orchestration inputs, outputs, and custom status data are also persisted to the *Instances* table named `<taskhub>Instances`.
+
+### Blobs
+
+In most cases, Durable Functions doesn't use Azure Storage Blobs to persist data. However, queues and tables have [size limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-queue-storage-limits) that can prevent Durable Functions from persisting all of the required data into a table row or queue message. For example, when a piece of data that needs to be persisted to a queue is greater than 45 KB when serialized, Durable Functions compresses the data and stores it in a blob instead. When persisting data to blob storage in this way, Durable Functions stores a reference to that blob in the table row or queue message. When Durable Functions needs to retrieve the data, it automatically fetches it from the blob. These blobs are stored in the blob container `<taskhub>-largemessages`.
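The threshold behavior described above can be sketched as follows. This is an illustrative model of the pattern, not the extension's actual code; the 45 KB limit and the large-messages container come from the paragraph above, and a plain dictionary stands in for blob storage:

```python
import gzip
import json

LARGE_MESSAGE_THRESHOLD = 45 * 1024  # 45 KB, per the paragraph above

# Stand-in for the `<taskhub>-largemessages` blob container.
blob_container = {}

def enqueue_payload(payload):
    """Return the queue message body: the payload inline when small enough,
    otherwise a reference to a compressed blob."""
    serialized = json.dumps(payload)
    if len(serialized.encode("utf-8")) <= LARGE_MESSAGE_THRESHOLD:
        return serialized
    # Too large for a queue message: compress, store as a "blob",
    # and enqueue only a reference to it.
    blob_name = f"blob-{len(blob_container)}"
    blob_container[blob_name] = gzip.compress(serialized.encode("utf-8"))
    return json.dumps({"$blob": blob_name})

small = enqueue_payload({"status": "ok"})
large = enqueue_payload({"data": "x" * 100_000})  # ~100 KB serialized
```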
+
+> [!NOTE]
+> The extra compression and blob operation steps for large messages can be expensive in terms of CPU and I/O latency costs. Additionally, Durable Functions needs to load persisted data in memory, and may do so for many different function executions at the same time. As a result, persisting large data payloads can cause high memory usage as well. To minimize memory overhead, consider persisting large data payloads manually (for example, in blob storage) and instead pass around references to this data. This way your code can load the data only when needed, avoiding redundant loads during [orchestrator function replays](durable-functions-orchestrations.md#reliability). However, storing payloads to disk is *not* recommended because on-disk state isn't guaranteed to be available; functions may execute on different VMs throughout their lifetimes.
+
+### Types of data that are serialized and persisted
+The following types of data are serialized and persisted when using features of Durable Functions:
+
+- All inputs and outputs of orchestrator, activity, and entity functions, including any IDs and unhandled exceptions
+- Orchestrator, activity, and entity function names
+- External event names and payloads
+- Custom orchestration status payloads
+- Orchestration termination messages
+- Durable timer payloads
+- Durable HTTP request and response URLs, headers, and payloads
+- Entity call and signal payloads
+- Entity state payloads
+
+### Working with sensitive data
+When using Azure Storage, all data is automatically encrypted at rest. However, anyone with access to the storage account can read the data in its unencrypted form. If you need stronger protection for sensitive data, consider first encrypting the data using your own encryption keys so that Durable Functions persists the data in a pre-encrypted form.
+
+Alternatively, .NET users have the option of implementing custom serialization providers that provide automatic encryption. An example of custom serialization with encryption can be found in [this GitHub sample](https://github.com/charleszipp/azure-durable-entities-encryption).
+
+> [!NOTE]
+> If you decide to implement application-level encryption, be aware that orchestrations and entities can exist for indefinite amounts of time. This matters when it comes time to rotate your encryption keys, because an orchestration or entity may run longer than your key rotation policy. If a key rotation happens, the key used to encrypt your data may no longer be available to decrypt it the next time your orchestration or entity executes. Application-level encryption is therefore recommended only when orchestrations and entities are expected to run for relatively short periods of time.
+
+## Customizing serialization and deserialization
+
+# [C#](#tab/csharp)
+
+### Default serialization logic
+
+Durable Functions internally uses [Json.NET](https://www.newtonsoft.com/json/help/html/Introduction.htm) to serialize orchestration and entity data to JSON. The default settings Durable Functions uses for Json.NET are:
+
+**Inputs, Outputs, and State:**
+
+```csharp
+JsonSerializerSettings
+{
+ TypeNameHandling = TypeNameHandling.None,
+ DateParseHandling = DateParseHandling.None,
+}
+```
+
+**Exceptions:**
+
+```csharp
+JsonSerializerSettings
+{
+ ContractResolver = new ExceptionResolver(),
+ TypeNameHandling = TypeNameHandling.Objects,
+ ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
+}
+```
+
+For more detailed documentation about `JsonSerializerSettings`, see the [Json.NET serialization settings reference](https://www.newtonsoft.com/json/help/html/SerializationSettings.htm).
+
+## Customizing serialization with .NET attributes
+
+When serializing data, Json.NET looks for [various attributes](https://www.newtonsoft.com/json/help/html/SerializationAttributes.htm) on classes and properties that control how the data is serialized and deserialized from JSON. If you own the source code for a data type passed to Durable Functions APIs, consider adding these attributes to the type to customize serialization and deserialization.
+
+## Customizing serialization with Dependency Injection
+
+Function apps that target .NET and run on the Functions V3 runtime can use [Dependency Injection (DI)](../functions-dotnet-dependency-injection.md) to customize how data and exceptions are serialized. The sample code below demonstrates how to use DI to override the default Json.NET serialization settings using custom implementations of the `IMessageSerializerSettingsFactory` and `IErrorSerializerSettingsFactory` service interfaces.
+
+```csharp
+using Microsoft.Azure.Functions.Extensions.DependencyInjection;
+using Microsoft.Azure.WebJobs.Extensions.DurableTask;
+using Microsoft.Extensions.DependencyInjection;
+using Newtonsoft.Json;
+using System.Collections.Generic;
+
+[assembly: FunctionsStartup(typeof(MyApplication.Startup))]
+namespace MyApplication
+{
+ public class Startup : FunctionsStartup
+ {
+ public override void Configure(IFunctionsHostBuilder builder)
+ {
+ builder.Services.AddSingleton<IMessageSerializerSettingsFactory, CustomMessageSerializerSettingFactory>();
+ builder.Services.AddSingleton<IErrorSerializerSettingsFactory, CustomErrorSerializerSettingsFactory>();
+ }
+
+ /// <summary>
+ /// A factory that provides the serialization for all inputs and outputs for activities and
+ /// orchestrations, as well as entity state.
+ /// </summary>
+ internal class CustomMessageSerializerSettingsFactory : IMessageSerializerSettingsFactory
+ {
+ public JsonSerializerSettings CreateJsonSerializerSettings()
+ {
+ // Return your custom JsonSerializerSettings here
+ }
+ }
+
+ /// <summary>
+ /// A factory that provides the serialization for all exceptions thrown by activities
+ /// and orchestrations
+ /// </summary>
+ internal class CustomErrorSerializerSettingsFactory : IErrorSerializerSettingsFactory
+ {
+ public JsonSerializerSettings CreateJsonSerializerSettings()
+ {
+ // Return your custom JsonSerializerSettings here
+ }
+ }
+ }
+}
+```
+
+# [JavaScript](#tab/javascript)
+
+### Serialization and deserialization logic
+
+Azure Functions Node applications use [`JSON.stringify()` for serialization](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify) and [`JSON.parse()` for deserialization](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse). Most types should serialize and deserialize seamlessly. In cases where the default logic is insufficient, defining a `toJSON()` method on the object will hijack the serialization logic. However, no analog exists for object deserialization.
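For example, defining `toJSON()` changes what `JSON.stringify()` emits for an object. This is a generic JavaScript sketch (the `Interval` class is hypothetical, not part of any Durable Functions API):

```javascript
class Interval {
  constructor(startMs, endMs) {
    this.startMs = startMs;
    this.endMs = endMs;
  }

  // JSON.stringify() calls this instead of serializing the raw fields.
  toJSON() {
    return { durationMs: this.endMs - this.startMs };
  }
}

const serialized = JSON.stringify(new Interval(100, 350));

// Deserialization has no symmetric hook: JSON.parse() returns a plain
// object, not an Interval, so any rehydration must be done manually.
const parsed = JSON.parse(serialized);
```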
+
+For full customization of the serialization/deserialization pipeline, consider handling the serialization and deserialization with your own code and passing around data as strings.
++
+# [Python](#tab/python)
+
+### Serialization and deserialization logic
+
+It is strongly recommended to use type annotations to ensure Durable Functions serializes and deserializes your data correctly. While many built-in types are handled automatically, some built-in data types require type annotations to preserve the type during deserialization.
+
+For custom data types, you must define the JSON serialization and deserialization of a data type by exporting a static `to_json` and `from_json` method from your class.
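Under that contract, a minimal custom type might look like the following sketch (the `Order` class is hypothetical; only the static `to_json`/`from_json` shape reflects what the text above describes):

```python
import json

class Order:
    def __init__(self, order_id, quantity):
        self.order_id = order_id
        self.quantity = quantity

    @staticmethod
    def to_json(obj):
        # The framework calls this to serialize the object to a string.
        return json.dumps({"order_id": obj.order_id, "quantity": obj.quantity})

    @staticmethod
    def from_json(data):
        # ...and this to rebuild the object on the other side.
        fields = json.loads(data)
        return Order(fields["order_id"], fields["quantity"])

# Round-trip check, as would happen across a queue message:
restored = Order.from_json(Order.to_json(Order("A-42", 3)))
```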
++
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/storage-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/storage-considerations.md
@@ -23,7 +23,7 @@ Azure Functions requires an Azure Storage account when you create a function app
## Storage account requirements
-When creating a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. This is because Functions relies on Azure Storage for operations such as managing triggers and logging function executions. Some storage accounts don't support queues and tables. These accounts include blob-only storage accounts, Azure Premium Storage, and general-purpose storage accounts with ZRS replication.
+When creating a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. This is because Functions relies on Azure Storage for operations such as managing triggers and logging function executions. Some storage accounts don't support queues and tables. These accounts include blob-only storage accounts and Azure Premium Storage.
To learn more about storage account types, see [Introducing the Azure Storage Services](../storage/common/storage-introduction.md#core-storage-services).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/change-analysis-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis-troubleshoot.md
@@ -0,0 +1,67 @@
+
+ Title: Troubleshoot Application Change Analysis - Azure Monitor
+description: Learn how to troubleshoot problems in Application Change Analysis.
+++ Last updated : 02/11/2021+++
+# Troubleshoot Application Change Analysis (preview)
+
+## Having trouble registering the Microsoft.ChangeAnalysis resource provider from the Change history tab
+
+The first time you view Change history after its integration with Application Change Analysis, it automatically registers the **Microsoft.ChangeAnalysis** resource provider. In rare cases the registration might fail for the following reasons:
+
+- **You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider**. This error message means your role in the current subscription does not have the **Microsoft.Support/register/action** scope associated with it. This can happen if you are not the owner of the subscription and got shared access permissions through a coworker (for example, view access to a resource group). To fix this, contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. You can do this in the Azure portal through **Subscriptions | Resource providers**: search for `Microsoft.ChangeAnalysis` and register it in the UI, or use Azure PowerShell or the Azure CLI.
+
+ Register resource provider through PowerShell:
+ ```PowerShell
+ # Register resource provider
+ Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
+ ```
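  Or register the resource provider through the Azure CLI (this assumes the Azure CLI is installed and you're signed in to the target subscription):

  ```azurecli
  az provider register --namespace "Microsoft.ChangeAnalysis"
  ```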
+
+- **Failed to register Microsoft.ChangeAnalysis resource provider**. This message means something failed immediately after the UI sent the request to register the resource provider; it isn't related to permissions. It's likely a temporary internet connectivity issue. Try refreshing the page and checking your internet connection. If the error persists, contact changeanalysishelp@microsoft.com.
+
+- **This is taking longer than expected**. This message means the registration is taking longer than two minutes. This is unusual, but it does not necessarily mean something went wrong. You can go to **Subscriptions | Resource providers** to check the registration status of the **Microsoft.ChangeAnalysis** resource provider. You can try to unregister, re-register, or refresh to see if it helps. If the issue persists, contact changeanalysishelp@microsoft.com for support.
+ ![Troubleshoot RP registration taking too long](./media/change-analysis/troubleshoot-registration-taking-too-long.png)
+
+
+## Azure Lighthouse subscription is not supported
+
+- **Failed to query Microsoft.ChangeAnalysis resource provider** with message *Azure Lighthouse subscription is not supported, the changes are only available in the subscription's home tenant*. Currently, the Change Analysis resource provider can't be registered through an Azure Lighthouse subscription by users outside the home tenant. We are working on addressing this limitation. If this blocks you, a workaround is to create a service principal and explicitly assign it the role that allows access. Contact changeanalysishelp@microsoft.com to learn more.
+
+## An error occurred while getting changes. Please refresh this page or come back later to view changes
+
+This is the general error message presented by the Application Change Analysis service when changes could not be loaded. A few known causes are:
+
+- Internet connectivity error from the client device
+- Change Analysis service being temporarily unavailable
+
+Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact changeanalysishelp@microsoft.com.
+
+## You don't have enough permissions to view some changes. Contact your Azure subscription administrator
+
+This is the general unauthorized error message, indicating that the current user does not have sufficient permissions to view the change. At least Reader access on the resource is required to view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager. At least the Contributor role is required for web app in-guest file changes and configuration changes.
+
+## Failed to register Microsoft.ChangeAnalysis resource provider
+
+This message means something failed immediately after the UI sent the request to register the resource provider; it isn't related to permissions. It's likely a temporary internet connectivity issue. Try refreshing the page and checking your internet connection. If the error persists, contact changeanalysishelp@microsoft.com.
+
+## You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider. Contact your Azure subscription administrator
+
+This error message means your role in the current subscription does not have the **Microsoft.Support/register/action** scope associated with it. This can happen if you are not the owner of the subscription and got shared access permissions through a coworker (for example, view access to a resource group). To fix this, contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. You can do this in the Azure portal through **Subscriptions | Resource providers**: search for `Microsoft.ChangeAnalysis` and register it in the UI, or use Azure PowerShell or the Azure CLI.
+
+Register resource provider through PowerShell:
+
+```PowerShell
+# Register resource provider
+Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
+```
+
+## Next steps
+
+- Learn more about [Azure Resource Graph](../../governance/resource-graph/overview.md), which helps power Change Analysis.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/change-analysis-visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis-visualizations.md
@@ -0,0 +1,92 @@
+
+ Title: Visualizations for Application Change Analysis - Azure Monitor
+description: Learn how to use visualizations in Application Change Analysis in Azure Monitor.
+++ Last updated : 02/11/2021+++
+# Visualizations for Application Change Analysis (preview)
+
+## Standalone UI
+
+In Azure Monitor, there is a standalone pane for Change Analysis to view all changes with insights into application dependencies and resources.
+
+Search for Change Analysis in the search bar on Azure portal to launch the experience.
+
+![Screenshot of searching Change Analysis in Azure portal](./media/change-analysis/search-change-analysis.png)
+
+All resources under a selected subscription are displayed with changes from the past 24 hours. To optimize page load performance, the service displays 10 resources at a time. Select the next page to view more resources. We are working on removing this limitation.
+
+![Screenshot of Change Analysis blade in Azure portal](./media/change-analysis/change-analysis-standalone-blade.png)
+
+Select a resource to view all of its changes. If needed, drill down into a change to view JSON-formatted change details and insights.
+
+![Screenshot of change details](./media/change-analysis/change-details.png)
+
+For any feedback, use the send feedback button or email changeanalysisteam@microsoft.com.
+
+![Screenshot of feedback button in Change Analysis tab](./media/change-analysis/change-analysis-feedback.png)
+
+### Multiple subscription support
+
+The UI supports selecting multiple subscriptions to view resource changes. Use the subscription filter:
+
+![Screenshot of subscription filter that supports selecting multiple subscriptions](./media/change-analysis/multiple-subscriptions-support.png)
+
+### Web App Diagnose and Solve Problems
+
+In Azure Monitor, Change Analysis is also built into the self-service **Diagnose and solve problems** experience. Access this experience from the **Overview** page of your App Service application.
+
+![Screenshot of the "Overview" button and the "Diagnose and solve problems" button](./media/change-analysis/change-analysis.png)
+
+## Application Change Analysis in the Diagnose and solve problems tool
+
+Application Change Analysis is a standalone detector in the Web App diagnose and solve problems tool. It is also aggregated in the **Application Crashes** and **Web App Down** detectors. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider is automatically registered. Follow these instructions to enable web app in-guest change tracking.
+
+1. Select **Availability and Performance**.
+
+ ![Screenshot of the "Availability and Performance" troubleshooting options](./media/change-analysis/availability-and-performance.png)
+
+2. Select **Application Changes**. The feature is also available in **Application Crashes**.
+
+ ![Screenshot of the "Application Crashes" button](./media/change-analysis/application-changes.png)
+
+3. The link leads to the Application Change Analysis UI scoped to the web app. If web app in-guest change tracking is not enabled, follow the banner to get file and app settings changes.
+
+ ![Screenshot of "Application Crashes" options](./media/change-analysis/enable-changeanalysis.png)
+
+4. Turn on **Change Analysis** and select **Save**. The tool displays all web apps under an App Service plan. You can use the plan level switch to turn on Change Analysis for all web apps under a plan.
+
+ ![Screenshot of the "Enable Change Analysis" user interface](./media/change-analysis/change-analysis-on.png)
+
+5. Change data is also available in select **Web App Down** and **Application Crashes** detectors. You'll see a graph that summarizes the type of changes over time along with details on those changes. By default, changes in the past 24 hours are displayed to help with immediate problems.
+
+ ![Screenshot of the change diff view](./media/change-analysis/change-view.png)
+
+## Virtual Machine Diagnose and Solve Problems
+
+Go to the Diagnose and Solve Problems tool for a Virtual Machine. Go to **Troubleshooting Tools**, scroll down the page, and select **Analyze recent changes** to view changes on the Virtual Machine.
+
+![Screenshot of the VM Diagnose and Solve Problems](./media/change-analysis/vm-dnsp-troubleshootingtools.png)
+
+![Change analyzer in troubleshooting tools](./media/change-analysis/analyze-recent-changes.png)
+
+## Activity Log Change History
+
+The [View change history](../platform/activity-log.md#view-change-history) feature in the Activity Log calls the Application Change Analysis service backend to get changes associated with an operation. **Change history** used to call [Azure Resource Graph](../../governance/resource-graph/overview.md) directly, but it now calls Application Change Analysis, so returned changes include resource-level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md), resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md), and in-guest changes from PaaS services such as App Service web apps.
+
+For the Application Change Analysis service to scan for changes in users' subscriptions, a resource provider needs to be registered. The first time you enter the **Change history** tab, the tool automatically starts registering the **Microsoft.ChangeAnalysis** resource provider. After registration, changes from **Azure Resource Graph** are available immediately and cover the past 14 days. Changes from other sources are available about four hours after the subscription is onboarded.
+
+![Activity Log change history integration](./media/change-analysis/activity-log-change-history.png)
+
+## VM Insights integration
+
+Users who have [VM Insights](../insights/vminsights-overview.md) enabled can view what changed in their virtual machines that might have caused spikes in a metrics chart, such as CPU or memory. Change data is integrated in the VM Insights side navigation bar. Users can see whether any changes happened in the VM and select **Investigate Changes** to view change details in the Application Change Analysis standalone UI.
+
+[![VM insights integration](./media/change-analysis/vm-insights.png)](./media/change-analysis/vm-insights.png#lightbox)
+
+## Next steps
+
+- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/change-analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis.md
@@ -33,12 +33,12 @@ Application Change Analysis service supports resource property level changes in
- App Service
- Azure Kubernetes service
- Azure Function
-- Networking resources: i.e. Network Security Group, Virtual Network, Application Gateway, etc.
-- Data
+- Networking resources: Network Security Group, Virtual Network, Application Gateway, etc.
+- Data
## Data sources
-Application change analysis queries for Azure Resource Manager tracked properties, proxied configurations and web app in-guest changes. In addition, the service tracks resource dependency changes to diagnose and monitor an application end-to-end.
+Application change analysis queries for Azure Resource Manager tracked properties, proxied configurations, and web app in-guest changes. In addition, the service tracks resource dependency changes to diagnose and monitor an application end-to-end.
### Azure Resource Manager tracked properties changes
@@ -56,17 +56,20 @@ Change Analysis captures the deployment and configuration state of an applicatio
### Dependency changes
-Changes to resource dependencies can also cause issues in a resource. For example, if a web app calls into a Redis cache, the Redis cache SKU could affect the web app performance. Another example is if port 22 was closed in a Virtual Machine's Network Security Group, it will cause connectivity errors.
+Changes to resource dependencies can also cause issues in a resource. For example, if a web app calls into a Redis cache, the Redis cache SKU could affect the web app performance. Another example is if port 22 was closed in a Virtual Machine's Network Security Group, it will cause connectivity errors.
#### Web App diagnose and solve problems navigator (Preview)
+
To detect changes in dependencies, Change Analysis checks the web app's DNS record. In this way, it identifies changes in all app components that could cause issues. Currently the following dependencies are supported in **Web App Diagnose and solve problems | Navigator (Preview)**:
+
- Web Apps
- Azure Storage
- Azure SQL

#### Related resources
-Application Change Analysis detects related resources. Common examples are Network Security Group, Virtual Network, Application Gateway and Load Balancer related to a Virtual Machine.
+
+Application Change Analysis detects related resources. Common examples are Network Security Group, Virtual Network, Application Gateway, and Load Balancer related to a Virtual Machine.
The network resources are usually automatically provisioned in the same resource group as the resources that use them, so filtering the changes by resource group will show all changes for the Virtual Machine and related networking resources.

![Screenshot of Networking changes](./media/change-analysis/network-changes.png)
@@ -74,91 +77,12 @@ The network resources are usually automatically provisioned in the same resource
## Application Change Analysis service enablement

The Application Change Analysis service computes and aggregates change data from the data sources mentioned above. It provides a set of analytics for users to easily navigate through all resource changes and to identify which change is relevant in the troubleshooting or monitoring context.
-"Microsoft.ChangeAnalysis" resource provider needs to be registered with a subscription for the Azure Resource Manager tracked properties and proxied settings change data to be available. As you enter the Web App diagnose and solve problems tool or bring up the Change Analysis standalone tab, this resource provider is automatically registered.
-For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](#application-change-analysis-in-the-diagnose-and-solve-problems-tool) section later in this article for more details.
+"Microsoft.ChangeAnalysis" resource provider needs to be registered with a subscription for the Azure Resource Manager tracked properties and proxied settings change data to be available. As you enter the Web App diagnose and solve problems tool or bring up the Change Analysis standalone tab, this resource provider is automatically registered.
+For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see the [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#application-change-analysis-in-the-diagnose-and-solve-problems-tool) section.
## Cost
-Application Change Analysis is a free service - it does not incur any billing cost to subscriptions with it enabled. The service also does not have any performance impact for scanning Azure Resource properties changes. When you enable Change Analysis for web apps in-guest file changes (or enable the Diagnose and Solve problems tool), it will have negligible performance impact on the web app and no billing cost.
-
-## Visualizations for Application Change Analysis
-
-### Standalone UI
-
-In Azure Monitor, there is a standalone pane for Change Analysis to view all changes with insights into application dependencies and resources.
-
-Search for Change Analysis in the search bar on Azure portal to launch the experience.
-
-![Screenshot of searching Change Analysis in Azure portal](./media/change-analysis/search-change-analysis.png)
-
-All resources under a selected subscription are displayed with changes from the past 24 hours. To optimize for the page load performance the service is displaying 10 resources at a time. Click on next pages to view more resources. We are working on removing this limitation.
-
-![Screenshot of Change Analysis blade in Azure portal](./media/change-analysis/change-analysis-standalone-blade.png)
-
-Clicking into a resource to view all its changes. If needed, drill down into a change to view json formatted change details and insights.
-
-![Screenshot of change details](./media/change-analysis/change-details.png)
-
-For any feedback, use the send feedback button in the blade or email changeanalysisteam@microsoft.com.
-
-![Screenshot of feedback button in Change Analysis blade](./media/change-analysis/change-analysis-feedback.png)
-
-#### Multiple subscription support
-The UI supports selecting multiple subscriptions to view resource changes. Use the subscription filter:
-
-![Screenshot of subscription filter that supports selecting multiple subscriptions](./media/change-analysis/multiple-subscriptions-support.png)
-
-### Web App Diagnose and Solve Problems
-
-In Azure Monitor, Change Analysis is also built into the self-service **Diagnose and solve problems** experience. Access this experience from the **Overview** page of your App Service application.
-
-![Screenshot of the "Overview" button and the "Diagnose and solve problems" button](./media/change-analysis/change-analysis.png)
-
-### Application Change Analysis in the Diagnose and solve problems tool
-
-Application Change Analysis is a standalone detector in the Web App diagnose and solve problems tools. It's also aggregated in the **Application Crashes** and **Web App Down** detectors. When you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider is automatically registered. Follow these instructions to enable web app in-guest change tracking.
-
-1. Select **Availability and Performance**.
-
- ![Screenshot of the "Availability and Performance" troubleshooting options](./media/change-analysis/availability-and-performance.png)
-
-2. Select **Application Changes**. The feature is also available in **Application Crashes**.
-
- ![Screenshot of the "Application Crashes" button](./media/change-analysis/application-changes.png)
-
-3. The link leads to the Application Change Analysis UI scoped to the web app. If web app in-guest change tracking isn't enabled, follow the banner to get file and app settings changes.
-
- ![Screenshot of "Application Crashes" options](./media/change-analysis/enable-changeanalysis.png)
-
-4. Turn on **Change Analysis** and select **Save**. The tool displays all web apps under an App Service plan. You can use the plan level switch to turn on Change Analysis for all web apps under a plan.
-
- ![Screenshot of the "Enable Change Analysis" user interface](./media/change-analysis/change-analysis-on.png)
-
-5. Change data is also available in select **Web App Down** and **Application Crashes** detectors. You'll see a graph that summarizes the type of changes over time along with details on those changes. By default, changes in the past 24 hours are displayed to help with immediate problems.
-
- ![Screenshot of the change diff view](./media/change-analysis/change-view.png)
---
-### Virtual Machine Diagnose and Solve Problems
-
-In the Diagnose and Solve Problems tool for a virtual machine, go to **Troubleshooting Tools**, scroll down the page, and select **Analyze recent changes** to view changes on the virtual machine.
-
-![Screenshot of the VM Diagnose and Solve Problems](./media/change-analysis/vm-dnsp-troubleshootingtools.png)
-
-![Change analyzer in troubleshooting tools](./media/change-analysis/analyze-recent-changes.png)
-
-### Activity Log Change History
-The [View change history](../platform/activity-log.md#view-change-history) feature in the Activity Log calls the Application Change Analysis service backend to get changes associated with an operation. **Change history** used to call [Azure Resource Graph](../../governance/resource-graph/overview.md) directly. It now calls Application Change Analysis, so the changes returned include resource-level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md), resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md), and in-guest changes from PaaS services such as App Service web apps.
-For the Application Change Analysis service to scan for changes in users' subscriptions, a resource provider needs to be registered. The first time you enter the **Change History** tab, the tool automatically starts to register the **Microsoft.ChangeAnalysis** resource provider. After it's registered, changes from **Azure Resource Graph** are available immediately and cover the past 14 days. Changes from other sources are available about four hours after the subscription is onboarded.
-
-![Activity Log change history integration](./media/change-analysis/activity-log-change-history.png)
-
-### VM Insights integration
-Users who have [VM Insights](../insights/vminsights-overview.md) enabled can view what changed in their virtual machines that might have caused spikes in a metrics chart, such as CPU or memory. Change data is integrated in the VM Insights side navigation bar. Users can view whether any changes happened in the VM and select **Investigate Changes** to view change details in the Application Change Analysis standalone UI.
-
-[![VM insights integration](./media/change-analysis/vm-insights.png)](./media/change-analysis/vm-insights.png#lightbox)
-
+Application Change Analysis is a free service - it does not incur any billing cost to subscriptions with it enabled. The service also does not have any performance impact for scanning Azure Resource properties changes. When you enable Change Analysis for web apps in-guest file changes (or enable the Diagnose and Solve problems tool), it will have negligible performance impact on the web app and no billing cost.
## Enable Change Analysis at scale
@@ -194,58 +118,9 @@ foreach ($webapp in $webapp_list)
```
-## Troubleshoot
-
-### Trouble registering the Microsoft.ChangeAnalysis resource provider from the Change history tab
-The first time you view Change history after its integration with Application Change Analysis, you'll see it automatically register the **Microsoft.ChangeAnalysis** resource provider. In rare cases, registration might fail for the following reasons:
-
-- **You don't have enough permissions to register the Microsoft.ChangeAnalysis resource provider.** This error message means your role in the current subscription doesn't have the **Microsoft.Support/register/action** scope associated with it. This might happen if you aren't the owner of the subscription and got shared access permissions through a coworker (for example, view access to a resource group). To fix this, contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. You can register it in the Azure portal through **Subscriptions** > **Resource providers**: search for ```Microsoft.ChangeAnalysis``` and register it in the UI, or use Azure PowerShell or Azure CLI.
-
- Register resource provider through PowerShell:
- ```PowerShell
- # Register resource provider
- Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
- ```
-
-- **Failed to register Microsoft.ChangeAnalysis resource provider.** This message means something failed immediately when the UI sent the request to register the resource provider; it isn't related to a permission issue. It's likely a temporary internet connectivity issue. Try refreshing the page and checking your internet connection. If the error persists, contact changeanalysishelp@microsoft.com.
-
-- **This is taking longer than expected.** This message means the registration is taking longer than two minutes. This is unusual, but doesn't necessarily mean something went wrong. You can go to **Subscriptions** > **Resource providers** to check the **Microsoft.ChangeAnalysis** resource provider registration status. You can try using the UI to unregister, re-register, or refresh to see if that helps. If the issue persists, contact changeanalysishelp@microsoft.com for support.
- ![Troubleshoot RP registration taking too long](./media/change-analysis/troubleshoot-registration-taking-too-long.png)
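- If you'd rather check the registration state from the command line than in the portal, a minimal sketch using the Az PowerShell module (cmdlets `Get-AzResourceProvider`, `Unregister-AzResourceProvider`, and `Register-AzResourceProvider`) looks like this; it requires an authenticated Azure session:

  ```PowerShell
  # Check the registration state of the Microsoft.ChangeAnalysis resource provider
  Get-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis" |
      Select-Object ProviderNamespace, RegistrationState

  # If the state appears stuck, unregister and then re-register the provider
  Unregister-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
  Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
  ```

  A `RegistrationState` of `Registered` means the provider is ready; `Registering` means the operation is still in progress.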
-
-
-### Azure Lighthouse subscription is not supported
-
-- **Failed to query Microsoft.ChangeAnalysis resource provider** with message *Azure Lighthouse subscription is not supported, the changes are only available in the subscription's home tenant*. There's currently a limitation that prevents the Change Analysis resource provider from being registered through an Azure Lighthouse subscription for users outside the home tenant. We expect this limitation to be addressed in the near future. If this is a blocking issue for you, there's a workaround that involves creating a service principal and explicitly assigning the role to allow access. Contact changeanalysishelp@microsoft.com to learn more.
-
-### An error occurred while getting changes. Please refresh this page or come back later to view changes
-
-This is the general error message presented by Application Change Analysis service when changes could not be loaded. A few known causes are:
-- Internet connectivity error from the client device
-- Change Analysis service being temporarily unavailable
-Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact changeanalysishelp@microsoft.com.
-
-### You don't have enough permissions to view some changes. Contact your Azure subscription administrator
-
-This is the general unauthorized error message, explaining that the current user doesn't have sufficient permissions to view the change. At least Reader access on the resource is required to view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager. For web app in-guest file changes and configuration changes, at least the Contributor role is required.
-
-### Failed to register Microsoft.ChangeAnalysis resource provider
-This message means something failed immediately when the UI sent the request to register the resource provider; it isn't related to a permission issue. It's likely a temporary internet connectivity issue. Try refreshing the page and checking your internet connection. If the error persists, contact changeanalysishelp@microsoft.com.
-
-### You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider. Contact your Azure subscription administrator.
-This error message means your role in the current subscription doesn't have the **Microsoft.Support/register/action** scope associated with it. This might happen if you aren't the owner of the subscription and got shared access permissions through a coworker (for example, view access to a resource group). To fix this, contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. You can register it in the Azure portal through **Subscriptions** > **Resource providers**: search for ```Microsoft.ChangeAnalysis``` and register it in the UI, or use Azure PowerShell or Azure CLI.
-
-Register resource provider through PowerShell:
-
-```PowerShell
-# Register resource provider
-Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
-```
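-
-The same registration can be done with Azure CLI. A minimal sketch (requires an authenticated `az` session; `az provider register` and `az provider show` are the relevant commands):

```azurecli
# Register the resource provider
az provider register --namespace "Microsoft.ChangeAnalysis"

# Check the registration state
az provider show --namespace "Microsoft.ChangeAnalysis" --query "registrationState"
```

Registration is asynchronous, so the state may read `Registering` for a short while before it becomes `Registered`.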
- ## Next steps
+- Learn about [visualizations in Change Analysis](change-analysis-visualizations.md)
+- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
- Enable Application Insights for [Azure App Services apps](azure-web-apps.md).
- Enable Application Insights for [Azure VM and Azure virtual machine scale set IIS-hosted apps](azure-vm-vmss-apps.md).
-- Learn more about [Azure Resource Graph](../../governance/resource-graph/overview.md), which helps power Change Analysis.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
@@ -165,7 +165,7 @@ The latest values for Azure Machine Learning Compute quotas can be found in the
### Application Insights

## Azure Policy limits
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-tutorial-linked-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-tutorial-linked-template.md
@@ -29,7 +29,7 @@ You can separate the storage account resource into a linked template:
The following template is the main template. The highlighted `Microsoft.Resources/deployments` object shows how to call a linked template. The linked template can't be stored as a local file or a file that's only available on your local network. You can either provide a URI value for the linked template that includes HTTP or HTTPS, or use the _relativePath_ property to deploy a remote linked template at a location relative to the parent template. One option is to place both the main template and the linked template in a storage account.

## Store the linked template
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-architecture.md
@@ -74,7 +74,7 @@ Details of how traffic shall be migrated to new Gateways in specific regions are
| Australia Central 2 | 20.36.113.0, 20.36.112.6 |
| Australia East | 13.75.149.87, 40.79.161.1, 13.70.112.9 |
| Australia South East | 191.239.192.109, 13.73.109.251, 13.77.48.10 |
-| Brazil South | 104.41.11.5, 191.233.200.14, 191.234.144.16, 191.234.152.3 |
+| Brazil South | 191.233.200.14, 191.234.144.16, 191.234.152.3 |
| Canada Central | 40.85.224.249, 52.246.152.0, 20.38.144.1 |
| Canada East | 40.86.226.166, 52.242.30.154, 40.69.105.9 , 40.69.105.10 |
| Central US | 13.67.215.62, 52.182.137.15, 23.99.160.139, 104.208.16.96, 104.208.21.1, 13.89.169.20 |
@@ -82,8 +82,8 @@ Details of how traffic shall be migrated to new Gateways in specific regions are
| China East 2 | 40.73.82.1 |
| China North | 139.219.15.17 |
| China North 2 | 40.73.50.0 |
-| East Asia | 191.234.2.139, 52.175.33.150, 13.75.32.4, 13.75.32.14 |
-| East US | 40.121.158.30, 40.79.153.12, 191.238.6.43, 40.78.225.32 |
+| East Asia | 52.175.33.150, 13.75.32.4, 13.75.32.14 |
+| East US | 40.121.158.30, 40.79.153.12, 40.78.225.32 |
| East US 2 | 40.79.84.180, 52.177.185.181, 52.167.104.0, 191.239.224.107, 104.208.150.3 |
| France Central | 40.79.137.0, 40.79.129.1, 40.79.137.8, 40.79.145.12 |
| France South | 40.79.177.0, 40.79.177.10 ,40.79.177.12 |
@@ -93,18 +93,18 @@ Details of how traffic shall be migrated to new Gateways in specific regions are
| India Central | 104.211.96.159, 104.211.86.30 , 104.211.86.31 |
| India South | 104.211.224.146 |
| India West | 104.211.160.80, 104.211.144.4 |
-| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 191.237.240.43, 40.79.192.5 |
-| Japan West | 104.214.148.156, 40.74.100.192, 191.238.68.11, 40.74.97.10 |
+| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 40.79.192.5 |
+| Japan West | 104.214.148.156, 40.74.100.192, 40.74.97.10 |
| Korea Central | 52.231.32.42, 52.231.17.22 ,52.231.17.23 |
| Korea South | 52.231.200.86 |
| North Central US | 23.96.178.199, 23.98.55.75, 52.162.104.33 |
-| North Europe | 40.113.93.91, 191.235.193.75, 52.138.224.1, 13.74.104.113 |
+| North Europe | 40.113.93.91, 52.138.224.1, 13.74.104.113 |
| Norway East | 51.120.96.0 |
| Norway West | 51.120.216.0 |
| South Africa North | 102.133.152.0, 102.133.120.2 |
| South Africa West | 102.133.24.0 |
-| South Central US | 13.66.62.124, 23.98.162.75, 104.214.16.32, 20.45.121.1, 20.49.88.1 |
-| South East Asia | 104.43.15.0, 23.100.117.95, 40.78.232.3 |
+| South Central US | 13.66.62.124, 104.214.16.32, 20.45.121.1, 20.49.88.1 |
+| South East Asia | 104.43.15.0, 40.78.232.3 |
| Switzerland North | 51.107.56.0, 51.107.57.0 |
| Switzerland West | 51.107.152.0, 51.107.153.0 |
| UAE Central | 20.37.72.64 |
@@ -112,8 +112,8 @@ Details of how traffic shall be migrated to new Gateways in specific regions are
| UK South | 51.140.184.11, 51.105.64.0 |
| UK West | 51.141.8.11 |
| West Central US | 13.78.145.25, 13.78.248.43 |
-| West Europe | 40.68.37.158, 191.237.232.75, 104.40.168.105, 52.236.184.163 |
-| West US | 104.42.238.205, 23.99.34.75, 13.86.216.196 |
+| West Europe | 40.68.37.158, 104.40.168.105, 52.236.184.163 |
+| West US | 104.42.238.205, 13.86.216.196 |
| West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 |
| | |
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/gateway-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/gateway-migration.md
@@ -23,26 +23,42 @@ The most up-to-date information will be maintained in the [Azure SQL Database ga
## Status updates

# [In progress](#tab/in-progress-ip)
+## March 2021
+The following SQL Gateways in multiple regions are in the process of being deactivated:
+
+- Brazil South: 104.41.11.5
+- East Asia: 191.234.2.139
+- East US: 191.238.6.43
+- Japan East: 191.237.240.43
+- Japan West: 191.238.68.11
+- North Europe: 191.235.193.75
+- South Central US: 23.98.162.75
+- Southeast Asia: 23.100.117.95
+- West Europe: 191.237.232.75
+- West US: 23.99.34.75
+
+No customer impact is anticipated since these Gateways (running on older hardware) are not routing any customer traffic. The IP addresses for these Gateways shall be deactivated on 15 March 2021.
+
## February 2021

New SQL Gateways are being added to the following regions:

-- Central US : 13.89.169.20
+- Central US: 13.89.169.20
These SQL Gateways shall start accepting customer traffic on 28 February 2021.

## January 2021

New SQL Gateways are being added to the following regions:

-- Australia Central : 20.36.104.6 , 20.36.104.7
-- Australia Central 2 : 20.36.112.6
-- Brazil South : 191.234.144.16 ,191.234.152.3
-- Canada East : 40.69.105.9 ,40.69.105.10
-- India Central : 104.211.86.30 , 104.211.86.31
-- East Asia : 13.75.32.14
-- France Central : 40.79.137.8, 40.79.145.12
-- France South : 40.79.177.10 ,40.79.177.12
-- Korea Central : 52.231.17.22 ,52.231.17.23
-- India West : 104.211.144.4
+- Australia Central: 20.36.104.6 , 20.36.104.7
+- Australia Central 2: 20.36.112.6
+- Brazil South: 191.234.144.16 ,191.234.152.3
+- Canada East: 40.69.105.9 ,40.69.105.10
+- India Central: 104.211.86.30 , 104.211.86.31
+- East Asia: 13.75.32.14
+- France Central: 40.79.137.8, 40.79.145.12
+- France South: 40.79.177.10 ,40.79.177.12
+- Korea Central: 52.231.17.22 ,52.231.17.23
+- India West: 104.211.144.4
These SQL Gateways shall start accepting customer traffic on 31 January 2021.
@@ -53,53 +69,53 @@ The following gateway migrations are complete:
New SQL Gateways are being added to the following regions:

-- Germany West Central : 51.116.240.0, 51.116.248.0
+- Germany West Central: 51.116.240.0, 51.116.248.0
These SQL Gateways shall start accepting customer traffic on 12 October 2020.

### September 2020

New SQL Gateways are being added to the following regions. These SQL Gateways shall start accepting customer traffic on **15 September 2020**:

-- Australia Southeast : 13.77.48.10
-- Canada East : 40.86.226.166, 52.242.30.154
-- UK South : 51.140.184.11, 51.105.64.0
+- Australia Southeast: 13.77.48.10
+- Canada East: 40.86.226.166, 52.242.30.154
+- UK South: 51.140.184.11, 51.105.64.0
Existing SQL Gateways will start accepting traffic in the following regions. These SQL Gateways shall start accepting customer traffic on **15 September 2020**:

-- Australia Southeast : 191.239.192.109 and 13.73.109.251
-- Central US : 13.67.215.62, 52.182.137.15, 23.99.160.139, 104.208.16.96, and 104.208.21.1
-- East Asia : 191.234.2.139, 52.175.33.150, and 13.75.32.4
-- East US : 40.121.158.30, 40.79.153.12, 191.238.6.43, and 40.78.225.32
-- East US 2 : 40.79.84.180, 52.177.185.181, 52.167.104.0, 191.239.224.107, and 104.208.150.3
-- France Central : 40.79.137.0 and 40.79.129.1
+- Australia Southeast: 191.239.192.109 and 13.73.109.251
+- Central US: 13.67.215.62, 52.182.137.15, 23.99.160.139, 104.208.16.96, and 104.208.21.1
+- East Asia: 191.234.2.139, 52.175.33.150, and 13.75.32.4
+- East US: 40.121.158.30, 40.79.153.12, 191.238.6.43, and 40.78.225.32
+- East US 2: 40.79.84.180, 52.177.185.181, 52.167.104.0, 191.239.224.107, and 104.208.150.3
+- France Central: 40.79.137.0 and 40.79.129.1
- Japan West: 104.214.148.156, 40.74.100.192, 191.238.68.11, and 40.74.97.10
-- North Central US : 23.96.178.199, 23.98.55.75, and 52.162.104.33
-- Southeast Asia : 104.43.15.0, 23.100.117.95, and 40.78.232.3
+- North Central US: 23.96.178.199, 23.98.55.75, and 52.162.104.33
+- Southeast Asia: 104.43.15.0, 23.100.117.95, and 40.78.232.3
- West US: 104.42.238.205, 23.99.34.75, and 13.86.216.196

New SQL Gateways are being added to the following regions. These SQL Gateways shall start accepting customer traffic on **10 September 2020**:

-- West Central US : 13.78.248.43
-- South Africa North : 102.133.120.2
+- West Central US: 13.78.248.43
+- South Africa North: 102.133.120.2
New SQL Gateways are being added to the following regions. These SQL Gateways shall start accepting customer traffic on **1 September 2020**:

-- North Europe : 13.74.104.113
-- West US2 : 40.78.248.10
-- West Europe : 52.236.184.163
-- South Central US : 20.45.121.1, 20.49.88.1
+- North Europe: 13.74.104.113
+- West US2: 40.78.248.10
+- West Europe: 52.236.184.163
+- South Central US: 20.45.121.1, 20.49.88.1
-Existing SQL Gateways will start accepting traffic in the following regions. These SQL Gateways shall start accepting customer traffic on **1 September 2020** :
-- Japan East : 40.79.184.8, 40.79.192.5
+Existing SQL Gateways will start accepting traffic in the following regions. These SQL Gateways shall start accepting customer traffic on **1 September 2020**:
+- Japan East: 40.79.184.8, 40.79.192.5
### August 2020

New SQL Gateways are being added to the following regions:

-- Australia East : 13.70.112.9
-- Canada Central : 52.246.152.0, 20.38.144.1
-- West US 2 : 40.78.240.8
+- Australia East: 13.70.112.9
+- Canada Central: 52.246.152.0, 20.38.144.1
+- West US 2: 40.78.240.8
These SQL Gateways shall start accepting customer traffic on 10 August 2020.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-vcore-single-databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
@@ -36,14 +36,14 @@ The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Min-max vCores|0.5-1|0.5-2|0.5-4|0.75-6|1.0-8|
|Min-max memory (GB)|2.02-3|2.05-6|2.10-12|2.25-18|3.00-24|
|Min-max auto-pause delay (minutes)|60-10080|60-10080|60-10080|60-10080|60-10080|
-|Columnstore support|Yes|Yes|Yes|Yes|Yes|
+|Columnstore support|Yes*|Yes|Yes|Yes|Yes|
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|
|Max data size (GB)|512|1024|1024|1024|1536|
|Max log size (GB)|154|307|307|307|461|
|TempDB max data size (GB)|32|64|128|192|256|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
-|Max data IOPS *|320|640|1280|1920|2560|
+|Max data IOPS \*\*|320|640|1280|1920|2560|
|Max log rate (MBps)|4.5|9|18|27|36|
|Max concurrent workers (requests)|75|150|300|450|600|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|
@@ -52,7 +52,8 @@ The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|
|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
-\* The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
+\* Service objectives with smaller max vCore configurations may have insufficient memory for creating and using columnstore indexes. If you encounter performance problems with columnstore, increase the max vCore configuration to increase the max memory available.
+\*\* The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
### Gen5 compute generation (part 2)
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/azure-security-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-security-integration.md
@@ -1,21 +1,21 @@
Title: Protect your Azure VMware Solution VMs with Azure Security Center integration
-description: Learn how to protect your Azure VMware Solution VMs with Azure's native security tools from a single dashboard in Azure Security Center.
+description: Protect your Azure VMware Solution VMs with Azure's native security tools from the Azure Security Center dashboard.
Previously updated : 02/04/2021
Last updated : 02/12/2021

# Protect your Azure VMware Solution VMs with Azure Security Center integration
-Azure native security tools provide a secure infrastructure for a hybrid environment of Azure, Azure VMware Solution, and on-premises virtual machines (VMs). This article shows you how to set up Azure tools for hybrid environment security. You'll use various tools to identify and address different types of threats.
+Azure native security tools provide protection for a hybrid environment of Azure, Azure VMware Solution, and on-premises virtual machines (VMs). This article shows you how to set up Azure tools for hybrid environment security. You'll use these tools to identify and address various threats.
## Azure native services
-Here is a quick summary of each Azure native service:
+Here's a quick summary of Azure native services:
- **Log Analytics workspace:** Log Analytics workspace is a unique environment to store log data. Each workspace has its own data repository and configuration. Data sources and solutions are configured to store their data in a specific workspace.
-- **Azure Security Center:** Azure Security Center is a unified infrastructure security management system. It strengthens the security posture of the data centers, and provides advanced threat protection across the hybrid workloads in the cloud or on premises.
-- **Azure Sentinel:** Azure Sentinel is a cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. It provides intelligent security analytics and threat intelligence across an environment. It is a single solution for alert detection, threat visibility, proactive hunting, and threat response.
+- **Azure Security Center:** Azure Security Center is a unified infrastructure security management system. It strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises.
+- **Azure Sentinel:** Azure Sentinel is a cloud-native, security information event management (SIEM) solution. It provides security analytics, alert detection, and automated threat response across an environment.
## Topology
@@ -25,13 +25,13 @@ The Log Analytics agent enables collection of log data from Azure, Azure VMware
Once the logs are collected by the Log Analytics workspace, you can configure the Log Analytics workspace with Azure Security Center. Azure Security Center will assess the vulnerability status of Azure VMware Solution VMs and raise an alert for any critical vulnerability. For instance, it assesses missing operating system patches, security misconfigurations, and [endpoint protection](../security-center/security-center-services.md).
-You can configure the Log Analytics workspace with Azure Sentinel for alert detection, threat visibility, proactive hunting, and threat response. In the preceding diagram, Azure Security Center is connected to Azure Sentinel using Azure Security Center connector. Azure Security Center will forward the environment vulnerability to Azure Sentinel to create an incident and map with other threats. You can also create the scheduled rules query to detect unwanted activity and convert it to the incidents.
+You can configure the Log Analytics workspace with Azure Sentinel for alert detection, threat visibility, hunting, and threat response. In the preceding diagram, Azure Security Center is connected to Azure Sentinel using the Azure Security Center connector. Azure Security Center forwards environment vulnerabilities to Azure Sentinel to create incidents and map them with other threats. You can also create scheduled query rules to detect unwanted activity and convert it to incidents.
## Benefits

- Azure native services can be used for hybrid environment security in Azure, Azure VMware Solution, and on-premises services.
- Using a Log Analytics workspace, you can collect the data or the logs to a single point and present the same data to different Azure native services.
-- Azure Security Center offers a number of features, including:
+- Azure Security Center offers many features, including:
    - File integrity monitoring
    - Fileless attack detection
    - Operating system patch assessment
@@ -45,15 +45,15 @@ You can configure the Log Analytics workspace with Azure Sentinel for alert dete
## Create a Log Analytics workspace
-You will need a Log Analytics workspace to collect data from various sources. For more information, see [Create a Log Analytics workspace from the Azure portal](../azure-monitor/learn/quick-create-workspace.md).
+You'll need a Log Analytics workspace to collect data from various sources. For more information, see [Create a Log Analytics workspace from the Azure portal](../azure-monitor/learn/quick-create-workspace.md).
## Deploy Security Center and configure Azure VMware Solution VMs
-Azure Security Center is a pre-configured tool and does not require deployment. In the Azure portal, search for **Security Center** and select it.
+Azure Security Center is a pre-configured tool that doesn't require deployment. In the Azure portal, search for **Security Center** and select it.
### Enable Azure Defender
-Azure Defender extends Azure Security Center's advanced threat protection across your hybrid workloads both on premises and in the cloud. So to protect your Azure VMware Solution VMs, you will need to enable Azure Defender.
+Azure Defender extends Azure Security Center's advanced threat protection across your hybrid workloads, both on premises and in the cloud. So to protect your Azure VMware Solution VMs, you'll need to enable Azure Defender.
1. In Security Center, select **Getting started**.
@@ -144,7 +144,7 @@ Now you're ready to connect Azure Sentinel with your data sources, in this case,
## Create rules to identify security threats
-After connecting data sources to Azure Sentinel, you can create rules to generate alerts based on detected threats. In the following example, we'll create a rule to identify attempts to sign in to Windows server with the wrong password.
+After connecting data sources to Azure Sentinel, you can create rules to generate alerts for detected threats. In the following example, we'll create a rule for attempts to sign in to Windows server with the wrong password.
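+
+As a sketch of what such a rule's query logic might look like, assuming Windows security events are collected into the workspace (the `SecurityEvent` table; event ID 4625 is a failed sign-in). The threshold and time window here are illustrative, not the article's exact rule settings:
+
+```kusto
+// Failed sign-in attempts to Windows servers (event ID 4625),
+// grouped by account and computer in hourly bins
+SecurityEvent
+| where EventID == 4625
+| summarize FailedAttempts = count() by Account, Computer, bin(TimeGenerated, 1h)
+| where FailedAttempts >= 3
+```
+
+In a scheduled analytics rule, a query like this runs on the interval you configure, and each row that passes the threshold can raise an alert.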
1. On the Azure Sentinel overview page, under Configurations, select **Analytics**.
@@ -191,7 +191,7 @@ After connecting data sources to Azure Sentinel, you can create rules to generat
After the third failed attempt to sign in to Windows server, the created rule triggers an incident for every unsuccessful attempt.
-## View generated alerts
+## View alerts
You can view generated incidents with Azure Sentinel. You can also assign incidents and close them once they're resolved, all from within Azure Sentinel.
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-delete-vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-delete-vault.md
@@ -53,11 +53,11 @@ First, read the **[Before you start](#before-you-start)** section to understand
To stop protection and delete the backup data, perform the following steps:
-1. From the portal, go to **Recovery Services vault**, and then go to **Backup items**. Then, choose the protected items in the cloud (for example, Azure Virtual Machines, Azure Storage [the Azure Files service], or SQL Server on Azure Virtual Machines).
+1. From the portal, go to **Recovery Services vault**, and then go to **Backup items**. Then, in the **Backup Management Type** list, select the protected items in the cloud (for example, Azure Virtual Machines, Azure Storage [the Azure Files service], or SQL Server on Azure Virtual Machines).
![Select the backup type.](./media/backup-azure-delete-vault/azure-storage-selected.png)
-2. Right-click to select the backup item. Depending on whether the backup item is protected or not, the menu displays either the **Stop Backup** pane or the **Delete Backup Data** pane.
+2. You'll see a list of all the items for the category. Right-click to select the backup item. Depending on whether the backup item is protected or not, the menu displays either the **Stop Backup** pane or the **Delete Backup Data** pane.
- If the **Stop Backup** pane appears, select **Delete Backup Data** from the drop-down menu. Enter the name of the backup item (this field is case-sensitive), and then select a reason from the drop-down menu. Enter your comments, if you have any. Then, select **Stop backup**.
bastion https://docs.microsoft.com/en-us/azure/bastion/bastion-connect-vm-ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-connect-vm-ssh.md
@@ -81,7 +81,9 @@ In order to connect to the Linux VM via SSH, you must have the following ports o
## <a name="akv"></a>Connect: Using a private key stored in Azure Key Vault
-The portal update for this feature is currently rolling out to regions.
+>[!NOTE]
+>The portal update for this feature is currently rolling out to regions.
+>
1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
1. After you select Bastion, a side bar appears that has three tabs – RDP, SSH, and Bastion. If Bastion was provisioned for the virtual network, the Bastion tab is active by default. If you didn't provision Bastion for the virtual network, see [Configure Bastion](bastion-create-host-portal.md).
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-troubleshoot-constrained-allocation-failed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-constrained-allocation-failed.md
@@ -0,0 +1,137 @@
+
+ Title: Troubleshoot ConstrainedAllocationFailed when deploying a cloud service to Azure | Microsoft Docs
+description: This article shows how to resolve a ConstrainedAllocationFailed exception when deploying a cloud service to Azure.
+++++ Last updated : 02/04/2020++
+# Troubleshoot ConstrainedAllocationFailed when deploying a cloud service to Azure
+
+In this article, you'll troubleshoot allocation failures where Azure Cloud Services can't deploy because of constraints.
+
+Microsoft Azure allocates resources when you are:
+
+- Upgrading cloud services instances
+
+- Adding new web or worker role instances
+
+- Deploying instances to a cloud service
+
+You may occasionally receive errors during these operations even before you reach the Azure subscription limit.
+
+> [!TIP]
+> This information may also be useful when you plan the deployment of your services.
+
+## Symptom
+
+In the Azure portal, navigate to your cloud service, and in the sidebar select *Operation logs (classic)* to view the logs.
+
+When you're inspecting the logs of your cloud service, you'll see the following exception:
+
+|Exception Type |Error Message |
+|||
+|ConstrainedAllocationFailed |Azure operation '`{Operation ID}`' failed with code Compute.ConstrainedAllocationFailed. Details: Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there is an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Retry later or try reducing the VM size or number of role instances. Alternatively, if possible, remove the aforementioned constraints or try deploying to a different region.|
+
+## Cause
+
+There's a capacity issue with the region or cluster that you're deploying to. It occurs when the resource SKU you've selected isn't available for the location specified.
+
+> [!NOTE]
+> When the first node of a cloud service is deployed, it is *pinned* to a resource pool. A resource pool may be a single cluster, or a group of clusters.
+>
+> Over time, the resources in this resource pool may become fully utilized. If a cloud service makes an allocation request for additional resources when insufficient resources are available in the pinned resource pool, the request will result in an [allocation failure](cloud-services-allocation-failures.md).
+
+## Solution
+
+In this scenario, you should select a different region or SKU to deploy your cloud service to. Before deploying or upgrading your cloud service, you can determine which SKUs are available in a region or availability zone. Follow the [Azure CLI](#list-skus-in-region-using-azure-cli), [PowerShell](#list-skus-in-region-using-powershell), or [REST API](#list-skus-in-region-using-rest-api) processes below.
+
+### List SKUs in region using Azure CLI
+
+You can use the [az vm list-skus](https://docs.microsoft.com/cli/azure/vm.html#az_vm_list_skus) command.
+
+- Use the `--location` parameter to filter the output to the location you're using.
+- Use the `--size` parameter to search by a partial size name.
+- For more information, see the [Resolve error for SKU not available](../azure-resource-manager/templates/error-sku-not-available.md#solution-2azure-cli) guide.
+
+ **For example:**
+
+ ```azurecli
+ az vm list-skus --location southcentralus --size Standard_F --output table
+ ```
+
+ **Example results:**
+ ![Azure CLI output of running the 'az vm list-skus --location southcentralus --size Standard_F --output table' command, which shows the available SKUs.](./media/cloud-services-troubleshoot-constrained-allocation-failed/cloud-services-troubleshoot-constrained-allocation-failed-1.png)
+
+### List SKUs in region using PowerShell
+
+You can use the [Get-AzComputeResourceSku](https://docs.microsoft.com/powershell/module/az.compute/get-azcomputeresourcesku) command.
+
+- Filter the results by location.
+- You must have the latest version of PowerShell for this command.
+- For more information, see the [Resolve error for SKU not available](../azure-resource-manager/templates/error-sku-not-available.md#solution-1powershell) guide.
+
+**For example:**
+
+```azurepowershell
+Get-AzComputeResourceSku | where {$_.Locations -icontains "centralus"}
+```
+
+**Some other useful commands:**
+
+Filter by location for a specific size (Standard_DS14_v2):
+
+```azurepowershell
+Get-AzComputeResourceSku | where {$_.Locations.Contains("centralus") -and $_.ResourceType.Contains("virtualMachines") -and $_.Name.Contains("Standard_DS14_v2")}
+```
+
+Filter by location for all v3 sizes:
+
+```azurepowershell
+Get-AzComputeResourceSku | where {$_.Locations.Contains("centralus") -and $_.ResourceType.Contains("virtualMachines") -and $_.Name.Contains("v3")} | fc
+```
+
+### List SKUs in region using REST API
+
+You can use the [Resource Skus - List](https://docs.microsoft.com/rest/api/compute/resourceskus/list) operation. It returns available SKUs and regions in the following format:
+
+```json
+{
+ "value": [
+ {
+ "resourceType": "virtualMachines",
+ "name": "Standard_A0",
+ "tier": "Standard",
+ "size": "A0",
+ "locations": [
+ "eastus"
+ ],
+ "restrictions": []
+ },
+ {
+ "resourceType": "virtualMachines",
+ "name": "Standard_A1",
+ "tier": "Standard",
+ "size": "A1",
+ "locations": [
+ "eastus"
+ ],
+ "restrictions": []
+ },
+ <Rest_of_your_file_is_located_here...>
+ ]
+}
+
+```
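The response shown above can also be filtered client-side once retrieved. The following Python sketch is illustrative only: the `available_skus` helper and the inline sample data are hypothetical, modeled on the response format documented above.

```python
import json

# Sample response in the format returned by the Resource Skus - List operation.
# The second SKU's restriction entry is hypothetical, for illustration.
response = json.loads("""
{
  "value": [
    {"resourceType": "virtualMachines", "name": "Standard_A0", "tier": "Standard",
     "size": "A0", "locations": ["eastus"], "restrictions": []},
    {"resourceType": "virtualMachines", "name": "Standard_A1", "tier": "Standard",
     "size": "A1", "locations": ["westus"], "restrictions": [{"type": "Location"}]}
  ]
}
""")

def available_skus(data, location):
    """Return VM SKU names offered in `location` with no restrictions."""
    return [
        sku["name"]
        for sku in data["value"]
        if sku["resourceType"] == "virtualMachines"
        and location in sku["locations"]
        and not sku["restrictions"]
    ]

print(available_skus(response, "eastus"))  # ['Standard_A0']
```

A SKU with a non-empty `restrictions` list may not be deployable even though it appears in the region, which is why the sketch excludes it.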
+
+## Next steps
+
+For more allocation failure solutions and to better understand how they're generated:
+
+> [!div class="nextstepaction"]
+> [Allocation failures (cloud services)](cloud-services-allocation-failures.md)
+
+If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Anomaly-Detector/tutorials/anomaly-detection-streaming-databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/tutorials/anomaly-detection-streaming-databricks.md
@@ -581,7 +581,7 @@ groupTime average
Then get the aggregated output result to Delta. Because anomaly detection requires a longer history window, we're using Delta to keep the history data for the point you want to detect. Replace the "[Placeholder: table name]" with a qualified Delta table name to be created (for example, "tweets"). Replace "[Placeholder: folder name for checkpoints]" with a string value that's unique each time you run this code (for example, "etl-from-eventhub-20190605").
-To learn more about Delta Lake on Azure Databricks, please refer to [Delta Lake Guide](https://docs.azuredatabricks.net/delta/https://docsupdatetracker.net/index.html)
+To learn more about Delta Lake on Azure Databricks, please refer to [Delta Lake Guide](/databricks/delta/)
```scala
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Face/APIReference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/APIReference.md
@@ -1,27 +0,0 @@
- Title: API Reference - Face-
-description: API reference provides information about the Person, LargePersonGroup/PersonGroup, LargeFaceList/FaceList, and Face Algorithms APIs.
------- Previously updated : 03/01/2018---
-# Face API reference list
-
-Azure Face is a cloud-based service that provides algorithms for face detection and recognition. The Face APIs comprise the following categories:
--- Face Algorithm APIs: Cover core functions such as [Detection](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237), [Verification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), and [Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).-- [FaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b): Used to manage a FaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).-- [LargePersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adcba3a7b9412a4d53f40): Used to manage LargePersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).-- [LargePersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d): Used to manage a LargePersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).-- [LargeFaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc): Used to manage a LargeFaceList for [Find 
Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).-- [PersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c): Used to manage PersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).-- [PersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244): Used to manage a PersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).-- [Snapshot APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-take): Used to manage a Snapshot for data migration across subscriptions.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cassandra-spark-databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-spark-databricks.md
@@ -8,13 +8,12 @@
Last updated 09/24/2018- # Access Azure Cosmos DB Cassandra API data from Azure Databricks [!INCLUDE[appliesto-cassandra-api](includes/appliesto-cassandra-api.md)]
-This article details how to workwith Azure Cosmos DB Cassandra API from Spark on [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks).
+This article details how to work with Azure Cosmos DB Cassandra API from Spark on [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks).
## Prerequisites
@@ -58,9 +57,9 @@ Spark programs to be run as automated processes on Azure Databricks are submitte
The following are links to help you get started building Spark Scala programs to interact with Azure Cosmos DB Cassandra API. * [How to connect to Azure Cosmos DB Cassandra API from a Spark Scala program](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-connector-sample/blob/main/src/main/scala/com/microsoft/azure/cosmosdb/cassandra/SampleCosmosDBApp.scala)
-* [How to run a Spark Scala program as an automated job on Azure Databricks](https://docs.azuredatabricks.net/user-guide/jobs.html)
+* [How to run a Spark Scala program as an automated job on Azure Databricks](/azure/databricks/jobs)
* [Complete list of code samples for working with Cassandra API](cassandra-spark-generic.md#next-steps) ## Next steps
-Get started with [creating a Cassandra API account, database, and a table](create-cassandra-api-account-java.md) by using a Java application.
+Get started with [creating a Cassandra API account, database, and a table](create-cassandra-api-account-java.md) by using a Java application.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-tutorials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-tutorials.md
@@ -86,6 +86,8 @@ As updates are constantly made to the product, some features have added or diffe
[Row context via Window transformation](http://youtu.be/jqt1gmX2XUg)
+[Parse transformation](https://www.youtube.com/watch?v=r7O7AJcuqoY)
+ ## Source and sink [Reading and writing JSONs](https://www.youtube.com/watch?v=yY5aB7Kdhjg)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/scenario-dataflow-process-data-aml-models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/scenario-dataflow-process-data-aml-models.md
@@ -0,0 +1,109 @@
+
+ Title: Use data flow to process data from automated machine learning (AutoML) models
+description: Learn how to use Azure Data Factory data flows to process data from automated machine learning (AutoML) models.
++
+co-
+++++ Last updated : 1/31/2021+
+ms.co-
+++
+# Process data from automated machine learning (AutoML) models using data flow
+
+Automated machine learning (AutoML) enables machine learning projects to train, tune, and select the best model automatically, using a target metric that you specify, for classification, regression, and time-series forecasting.
+
+One challenge is that raw data from a data warehouse or a transactional database can be a huge dataset (for example, 10 GB). A large dataset takes longer to train models with, so optimizing data processing is recommended before training Azure Machine Learning models. This tutorial goes through how to use Azure Data Factory (ADF) to partition a dataset into Parquet files for an Azure Machine Learning dataset.
+
+An automated machine learning (AutoML) project typically applies the following three data processing scenarios:
+
+* Partition large data into Parquet files before training models.
+
+ A [Pandas data frame](https://pandas.pydata.org/pandas-docs/stable/getting_started/overview.html) is commonly used to process data before training models. Pandas data frames work well for data sizes of less than 1 GB, but beyond 1 GB they slow down and can even produce out-of-memory errors. The [Parquet file](https://parquet.apache.org/) format is recommended for machine learning because it's a binary columnar format.
+
+ Azure Data Factory mapping data flows are visually designed, code-free data transformations for data engineers. They're well suited to processing large data because the pipeline uses scaled-out Spark clusters.
+
+* Split the data into a training dataset and a test dataset.
+
+ The training dataset is used to train the model, and the test dataset is used to evaluate models in a machine learning project. The mapping data flow conditional split activity can split the data into training and test datasets.
+
+* Remove unqualified data.
+
+ You may want to remove unqualified data, such as a Parquet file with zero rows. In this tutorial, we use the Aggregate activity to get the count of rows; the row count becomes the condition for removing unqualified data.
++
+## Preparation
+Use the following table in Azure SQL Database.
+```sql
+CREATE TABLE [dbo].[MyProducts](
+ [ID] [int] NULL,
+ [Col1] [char](124) NULL,
+ [Col2] [char](124) NULL,
+ [Col3] datetime NULL,
+ [Col4] int NULL
+
+)
+
+```
+
+## Convert data format to parquet
+
+The data flow converts a table in Azure SQL Database to the Parquet file format.
+
+**Source Dataset**: Transaction table of Azure SQL Database
+
+**Sink Dataset**: Blob storage with Parquet format
++
+## Remove unqualified data based on row count
+
+Suppose we want to remove rows whose partition row count is less than 2.
+
+1. Use the Aggregate activity to get the count of rows: **Group by** Col2, and under **Aggregates** use count(1) for the row count.
+
+ ![configure Aggregate Activity to get count number of rows](./media/scenario-dataflow-process-data-aml-models/aggregate-activity-addrowcount.png)
+
+1. Use the Sink activity: choose **Cache** as the **Sink type** on the **Sink** tab, and then choose the desired column from the **Key columns** dropdown list on the **Settings** tab.
+
+ ![configure CacheSink Activity to get count number of rows in cached sink](./media/scenario-dataflow-process-data-aml-models/cachesink-activity-addrowcount.png)
+
+1. Use the Derived Column activity to add a row count column in the source stream. On the **Derived column's settings** tab, use the CacheSink#lookup expression to get the row count from the cached sink.
+ ![configure Derived Column Activity to add count number of rows in source 1](./media/scenario-dataflow-process-data-aml-models/derived-column-activity-rowcount-source-1.png)
+
+1. Use the Conditional split activity to remove unqualified data. In this example, the row count is based on the Col2 column, and the condition removes partitions with a row count of less than 2, so two rows (ID=2 and ID=7) are removed. You can save the unqualified data to blob storage for data management.
+
+ ![configure Conditional Split activity to get data which is greater or equal than 2](./media/scenario-dataflow-process-data-aml-models/conditionalsplit-greater-or-equal-than-2.png)
+
+> [!NOTE]
+> * Create a new source to get the count of rows, which is used with the original source in later steps.
+> * Use CacheSink from a performance standpoint.
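The row-count filter described in the steps above can be sketched in plain Python. This is a minimal illustration with hypothetical sample data; the actual processing happens in the data flow's scaled-out Spark pipeline, not in Python.

```python
from collections import Counter

# Hypothetical rows: (ID, Col2). Partitions are the distinct Col2 values.
rows = [(1, "A"), (2, "B"), (3, "A"), (4, "C"), (5, "C"), (7, "D")]

# Aggregate step: count rows per Col2 value.
counts = Counter(col2 for _, col2 in rows)

# Conditional split step: keep rows whose partition has at least 2 rows.
qualified = [r for r in rows if counts[r[1]] >= 2]
unqualified = [r for r in rows if counts[r[1]] < 2]

print(qualified)    # partitions A and C survive
print(unqualified)  # IDs 2 and 7 are removed, matching the example above
```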
+
+## Split training data and test data
+
+1. We want to split the training data and test data for each partition. In this example, for each value of Col2, take the top 2 rows as test data and the remaining rows as training data.
+
+ Use the Window activity to add a row number column for each partition. On the **Over** tab, choose the column to partition by (in this tutorial, Col2). On the **Sort** tab, set the sort order (in this tutorial, by ID). On the **Window columns** tab, add a column that holds the row number for each partition.
+ ![configure Window Activity to add one new column being row number](./media/scenario-dataflow-process-data-aml-models/window-activity-add-row-number.png)
+
+1. Use the Conditional split activity to send the top 2 rows of each partition to the test dataset and the remaining rows to the training dataset. On the **Conditional split settings** tab, use the expression lesserOrEqual(RowNum,2) as the condition.
+
+ ![configure conditional split activity to split current dataset to training dataset and test dataset](./media/scenario-dataflow-process-data-aml-models/split-training-dataset-test-dataset.png)
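The window-plus-conditional-split logic above can be sketched in plain Python. The sample rows are hypothetical, and this is an illustration only; in the data flow the split runs on Spark.

```python
from itertools import groupby

# Hypothetical rows: (ID, Col2). Partition by Col2, order by ID within each partition.
rows = [(1, "A"), (3, "A"), (5, "A"), (2, "B"), (4, "B")]

test_set, training_set = [], []
for _, group in groupby(sorted(rows, key=lambda r: (r[1], r[0])), key=lambda r: r[1]):
    for row_num, row in enumerate(group, start=1):  # the Window activity's row number
        # Conditional split: lesserOrEqual(RowNum, 2) goes to the test dataset.
        (test_set if row_num <= 2 else training_set).append(row)

print(test_set)      # top 2 rows of each Col2 partition
print(training_set)  # the rest
```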
+
+## Partition training dataset and test dataset with parquet format
+
+1. Use the Sink activity. On the **Optimize** tab, use **Unique value per partition** to set a column as the partition key.
+ ![configure Sink activity to set partition of training dataset](./media/scenario-dataflow-process-data-aml-models/partition-training-dataset-sink.png)
+
+ Let's look back at the entire pipeline logic.
+ ![The logic of entire Pipeline](./media/scenario-dataflow-process-data-aml-models/entire-pipeline.png)
++
+## Next steps
+
+* Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md).
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/telemetry.md
@@ -34,7 +34,6 @@ The following [metrics](../azure-monitor/platform/metrics-supported.md#microsoft
| Metric | Metric Display Name | Unit | Aggregation Type | Description |
| | | | | |
-| ByteCount | Byte Count | Count | Total | Total number of Bytes transmitted within time period |
| BytesDroppedDDoS | Inbound bytes dropped DDoS | BytesPerSecond | Maximum | Inbound bytes dropped DDoS |
| BytesForwardedDDoS | Inbound bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound bytes forwarded DDoS |
| BytesInDDoS | Inbound bytes DDoS | BytesPerSecond | Maximum | Inbound bytes DDoS |
@@ -42,11 +41,9 @@ The following [metrics](../azure-monitor/platform/metrics-supported.md#microsoft
| DDoSTriggerTCPPackets | Inbound TCP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound TCP packets to trigger DDoS mitigation |
| DDoSTriggerUDPPackets | Inbound UDP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound UDP packets to trigger DDoS mitigation |
| IfUnderDDoSAttack | Under DDoS attack or not | Count | Maximum | Under DDoS attack or not |
-| PacketCount | Packet Count | Count | Total | Total number of Packets transmitted within time period |
| PacketsDroppedDDoS | Inbound packets dropped DDoS | CountPerSecond | Maximum | Inbound packets dropped DDoS |
| PacketsForwardedDDoS | Inbound packets forwarded DDoS | CountPerSecond | Maximum | Inbound packets forwarded DDoS |
| PacketsInDDoS | Inbound packets DDoS | CountPerSecond | Maximum | Inbound packets DDoS |
-| SynCount | SYN Count | Count | Total | Total number of SYN Packets transmitted within time period |
| TCPBytesDroppedDDoS | Inbound TCP bytes dropped DDoS | BytesPerSecond | Maximum | Inbound TCP bytes dropped DDoS |
| TCPBytesForwardedDDoS | Inbound TCP bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound TCP bytes forwarded DDoS |
| TCPBytesInDDoS | Inbound TCP bytes DDoS | BytesPerSecond | Maximum | Inbound TCP bytes DDoS |
@@ -59,7 +56,6 @@ The following [metrics](../azure-monitor/platform/metrics-supported.md#microsoft
| UDPPacketsDroppedDDoS | Inbound UDP packets dropped DDoS | CountPerSecond | Maximum | Inbound UDP packets dropped DDoS |
| UDPPacketsForwardedDDoS | Inbound UDP packets forwarded DDoS | CountPerSecond | Maximum | Inbound UDP packets forwarded DDoS |
| UDPPacketsInDDoS | Inbound UDP packets DDoS | CountPerSecond | Maximum | Inbound UDP packets DDoS |
-| VipAvailability | Data Path Availability | Count | Average | Average IP Address availability per time duration |
## Prerequisites
@@ -106,4 +102,4 @@ In this tutorial, you learned how to:
To learn how to configure attack mitigation reports and flow logs, continue to the next tutorial. > [!div class="nextstepaction"]
-> [View and configure DDoS diagnostic logging](diagnostic-logging.md)
+> [View and configure DDoS diagnostic logging](diagnostic-logging.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-deploy-windows-cs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-windows-cs.md
@@ -69,11 +69,11 @@ To install the security agent, use the following workflow:
This script does the following actions: * Installs prerequisites.
-* Adds a service user (with interactive sign in disabled).
+* Adds a service user (with interactive sign-in disabled).
* Installs the agent as a **System Service**. * Configures the agent with the provided authentication parameters.
-For additional help, use the Get-Help command in PowerShell.
+For extra help, use the Get-Help command in PowerShell.
Get-Help example: ```Get-Help .\InstallSecurityAgent.ps1```
@@ -115,7 +115,7 @@ To turn on logging:
1. Restart the agent by running the following PowerShell or command line:
- **Powershell**
+ **PowerShell**
``` Restart-Service "ASC IoT Agent"
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-investigate-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-investigate-device.md
@@ -18,7 +18,7 @@
# Investigate a suspicious IoT device
-Defender for IoT service alerts provide clear indications when IoT devices are suspected of involvement in suspicious activities or when indications exist that a device is compromised.
+Defender for IoT service alerts provide clear indications when IoT devices are suspected of involvement in suspicious activities or when indications exist that a device is compromised.
In this guide, use the investigation suggestions provided to help determine the potential risks to your organization, decide how to remediate, and discover the best ways to prevent similar attacks in the future.
@@ -35,12 +35,12 @@ To locate your Log Analytics workspace for data storage:
1. Open your IoT hub, 1. Under **Security**, select **Settings**, and then select **Data Collection**. 1. Change your Log Analytics workspace configuration details.
-1. Click **Save**.
+1. Select **Save**.
Following configuration, do the following to access data stored in your Log Analytics workspace:
-1. Select and click on a Defender for IoT alert in your IoT Hub.
-1. Click **Further investigation**.
+1. Select a Defender for IoT alert in your IoT hub.
+1. Select **Further investigation**.
1. Select **To see which devices have this alert click here and view the DeviceId column**. ## Investigation steps for suspicious IoT devices
@@ -51,7 +51,7 @@ See the sample kql queries below to get started with investigating alerts and ac
### Related alerts
-To find out if other alerts were triggered around the same time use the following kql query:
+You can find out if other alerts were triggered around the same time by using the following KQL query:
``` let device = "YOUR_DEVICE_ID";
@@ -139,9 +139,9 @@ To find users that logged into the device use the following kql query:
Use the query results to discover: -- Which users logged in to the device?-- Are the users that logged in, supposed to log in?-- Did the users that logged in connect from expected or unexpected IP addresses?
+- Which users signed in to the device?
+- Are the users that signed in supposed to sign in?
+- Did the users that signed in connect from expected or unexpected IP addresses?
### Process list
@@ -178,7 +178,7 @@ Use the query results to discover:
- Were there any suspicious processes running on the device? - Were processes executed by appropriate users?-- Did any command line executions contain the correct and expected arguments?
+- Did any command-line executions contain the correct and expected arguments?
## Next steps
event-grid https://docs.microsoft.com/en-us/azure/event-grid/sdk-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/sdk-overview.md
@@ -16,7 +16,7 @@ The management SDKs enable you to create, update, and delete event grid topics a
* [.NET](https://www.nuget.org/packages/Microsoft.Azure.Management.EventGrid) * [Go](https://github.com/Azure/azure-sdk-for-go) * [Java](https://search.maven.org/#search%7Cga%7C1%7Cazure-mgmt-eventgrid)
-* [Node](https://www.npmjs.com/package/azure-arm-eventgrid)
+* [Node](https://www.npmjs.com/package/@azure/arm-eventgrid)
* [Python](https://pypi.python.org/pypi/azure-mgmt-eventgrid) * [Ruby](https://rubygems.org/gems/azure_mgmt_event_grid)
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-ip-filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-ip-filtering.md
@@ -2,7 +2,7 @@
Title: Azure Event Hubs Firewall Rules | Microsoft Docs description: Use Firewall Rules to allow connections from specific IP addresses to Azure Event Hubs. Previously updated : 07/16/2020 Last updated : 02/12/2021 # Allow access to Azure Event Hubs namespaces from specific IP addresses or ranges
@@ -21,8 +21,9 @@ This section shows you how to use the Azure portal to create IP firewall rules f
1. Navigate to your **Event Hubs namespace** in the [Azure portal](https://portal.azure.com). 4. Select **Networking** under **Settings** on the left menu. You see the **Networking** tab only for **standard** or **dedicated** namespaces.
- > [!NOTE]
- > By default, the **Selected networks** option is selected as shown in the following image. If you don't specify an IP firewall rule or add a virtual network on this page, the namespace can be accessed via **public internet** (using the access key).
+
+ > [!WARNING]
+ > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed via **public internet** (using the access key).
:::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networks tab - selected networks option" lightbox="./media/event-hubs-firewall/selected-networks.png":::
@@ -50,24 +51,9 @@ This section shows you how to use the Azure portal to create IP firewall rules f
The following Resource Manager template enables adding an IP filter rule to an existing Event Hubs namespace.
-Template parameters:
--- **ipMask** is a single IPv4 address or a block of IP addresses in CIDR notation. For example, in CIDR notation 70.37.104.0/24 represents the 256 IPv4 addresses from 70.37.104.0 to 70.37.104.255, with 24 indicating the number of significant prefix bits for the range.-
-> [!NOTE]
-> While there are no deny rules possible, the Azure Resource Manager template has the default action set to **"Allow"** which doesn't restrict connections.
-> When making Virtual Network or Firewalls rules, we must change the
-> ***"defaultAction"***
->
-> from
-> ```json
-> "defaultAction": "Allow"
-> ```
-> to
-> ```json
-> "defaultAction": "Deny"
-> ```
->
+**ipMask** in the template is a single IPv4 address or a block of IP addresses in CIDR notation. For example, in CIDR notation 70.37.104.0/24 represents the 256 IPv4 addresses from 70.37.104.0 to 70.37.104.255, with 24 indicating the number of significant prefix bits for the range.
+
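The CIDR arithmetic for `ipMask` can be checked with Python's standard `ipaddress` module. This is a quick sketch to verify the claim above, not part of the template:

```python
import ipaddress

# 70.37.104.0/24: 24 significant prefix bits leave 32 - 24 = 8 host bits,
# so the block spans 2**8 = 256 addresses.
block = ipaddress.ip_network("70.37.104.0/24")
print(block.num_addresses)   # 256
print(block[0], block[-1])   # 70.37.104.0 70.37.104.255
```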
+When adding virtual network or firewall rules, set the value of `defaultAction` to `Deny`.
```json {
@@ -133,6 +119,9 @@ Template parameters:
To deploy the template, follow the instructions for [Azure Resource Manager][lnk-deploy].
+> [!IMPORTANT]
+> If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
+ ## Next steps For constraining access to Event Hubs to Azure virtual networks, see the following link:
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-service-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-service-endpoints.md
@@ -2,7 +2,7 @@
Title: Virtual Network service endpoints - Azure Event Hubs | Microsoft Docs description: This article provides information on how to add a Microsoft.EventHub service endpoint to a virtual network. Previously updated : 07/29/2020 Last updated : 02/12/2021 # Allow access to Azure Event Hubs namespaces from specific virtual networks
@@ -41,8 +41,8 @@ This section shows you how to use Azure portal to add a virtual network service
1. Navigate to your **Event Hubs namespace** in the [Azure portal](https://portal.azure.com). 4. Select **Networking** under **Settings** on the left menu. You see the **Networking** tab only for **standard** or **dedicated** namespaces.
- > [!NOTE]
- > By default, the **Selected networks** option is selected as shown in the following image. If you don't specify an IP firewall rule or add a virtual network on this page, the namespace can be accessed via **public internet** (using the access key).
+ > [!WARNING]
+ > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed via **public internet** (using the access key).
:::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networks tab - selected networks option" lightbox="./media/event-hubs-firewall/selected-networks.png":::
@@ -74,29 +74,12 @@ This section shows you how to use Azure portal to add a virtual network service
[!INCLUDE [event-hubs-trusted-services](../../includes/event-hubs-trusted-services.md)] ## Use Resource Manager template
+The following sample Resource Manager template adds a virtual network rule to an existing Event Hubs namespace. For the network rule, it specifies the ID of a subnet in a virtual network.
-The following Resource Manager template enables adding a virtual network rule to an existing Event Hubs namespace.
-
-Template parameters:
+The ID is a fully qualified Resource Manager path for the virtual network subnet. For example, `/subscriptions/{id}/resourceGroups/{rg}/providers/Microsoft.Network/virtualNetworks/{vnet}/subnets/default` for the default subnet of a virtual network.
-* `namespaceName`: Event Hubs namespace.
-* `vnetRuleName`: Name for the Virtual Network rule to be created.
-* `virtualNetworkingSubnetId`: Fully qualified Resource Manager path for the virtual network subnet; for example, `/subscriptions/{id}/resourceGroups/{rg}/providers/Microsoft.Network/virtualNetworks/{vnet}/subnets/default` for the default subnet of a virtual network.
+When adding virtual network or firewall rules, set the value of `defaultAction` to `Deny`.
-> [!NOTE]
-> While there are no deny rules possible, the Azure Resource Manager template has the default action set to **"Allow"** which doesn't restrict connections.
-> When making Virtual Network or Firewalls rules, we must change the
-> ***"defaultAction"***
->
-> from
-> ```json
-> "defaultAction": "Allow"
-> ```
-> to
-> ```json
-> "defaultAction": "Deny"
-> ```
->
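As an illustrative fragment only (not the article's full template), a namespace network rule set with `defaultAction` set to `Deny` and one virtual network rule might look like the following. The `apiVersion` and the subscription, resource group, and virtual network segments of the subnet ID are placeholders to adapt; refer to the complete template in this section for the exact syntax.

```json
{
  "type": "Microsoft.EventHub/namespaces/networkRuleSets",
  "apiVersion": "2018-01-01-preview",
  "name": "[concat(parameters('namespaceName'), '/default')]",
  "properties": {
    "defaultAction": "Deny",
    "virtualNetworkRules": [
      {
        "subnet": {
          "id": "/subscriptions/{id}/resourceGroups/{rg}/providers/Microsoft.Network/virtualNetworks/{vnet}/subnets/default"
        }
      }
    ]
  }
}
```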
```json {
@@ -199,6 +182,9 @@ Template parameters:
To deploy the template, follow the instructions for [Azure Resource Manager][lnk-deploy].
+> [!IMPORTANT]
+> If there are no IP and virtual network rules, all traffic flows into the namespace even if you set `defaultAction` to `Deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
+ ## Next steps For more information about virtual networks, see the following links:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-export-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
@@ -4,7 +4,7 @@ description: How to use the new data export to export your IoT data to Azure and
Previously updated : 11/05/2020 Last updated : 01/27/2021
@@ -161,6 +161,17 @@ Now that you have a destination to export your data to, set up data export in yo
1. When you've finished setting up your export, select **Save**. After a few minutes, your data appears in your destinations.
+## Monitor your export
+
+In addition to seeing the status of your exports in IoT Central, you can monitor how much data flows through your exports and observe export errors in the Azure Monitor data platform. You can access metrics about your exports and device health by using charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI. Currently, you can monitor these data export metrics in Azure Monitor:
+
+1. Number of messages incoming to export before filters are applied
+2. Number of messages that pass through filters
+3. Number of messages successfully exported to destinations
+4. Number of errors encountered
+
+[Learn more about how to access IoT Central metrics.](howto-monitor-application-health.md)
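For context, metrics exposed through Azure Monitor are read from the `Microsoft.Insights/metrics` REST endpoint under the application's resource ID. The following Python sketch shows how such a request URL is composed; the subscription, resource group, and application names are placeholders, and the metric name used here is hypothetical (check the portal's metrics list for the real export metric names):

```python
# Sketch: compose an Azure Monitor metrics request URL for an IoT Central app.
# The resource path segments are placeholders and the metric name is
# hypothetical -- this only illustrates the shape of the request.
BASE = "https://management.azure.com"

def metrics_url(subscription_id, resource_group, app_name, metric_name):
    resource_id = (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.IoTCentral/IoTApps/{app_name}"
    )
    return (
        f"{BASE}{resource_id}/providers/Microsoft.Insights/metrics"
        f"?api-version=2018-01-01&metricnames={metric_name}"
    )

url = metrics_url("00000000-0000-0000-0000-000000000000", "my-rg", "my-app",
                  "connectedDeviceCount")
print(url)
```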
+ ## Destinations ### Azure Blob Storage destination
@@ -228,7 +239,6 @@ The following example shows an exported telemetry message:
} } ```- ### Message properties Telemetry messages have properties for metadata in addition to the telemetry payload. The previous snippet shows examples of system messages such as `deviceId` and `enqueuedTime`. To learn more about the system message properties, see [System Properties of D2C IoT Hub messages](../../iot-hub/iot-hub-devguide-messages-construct.md#system-properties-of-d2c-iot-hub-messages).
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-monitor-application-health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-monitor-application-health.md
@@ -3,22 +3,22 @@ Title: Monitor the health of an Azure IoT Central application | Microsoft Docs
description: As an operator or administrator, monitor the overall health of the devices connected to your IoT Central application. Previously updated : 05/14/2020 Last updated : 01/27/2021
-# As an operator, I want to monitor the overall health of the devices connected to your IoT Central application.
+# As an operator, I want to monitor the overall health of the devices and data exports in my IoT Central application.
-# Monitor the overall health of the devices connected to an IoT Central application
+# Monitor the overall health of an IoT Central application
> [!NOTE] > Metrics are only available for version 3 IoT Central applications. To learn how to check your application version, see [About your application](./howto-get-app-info.md). *This article applies to operators and administrators.*
-In this article, you learn how to use the set of metrics provided by IoT Central to assess the overall health of the devices connected to your IoT Central application.
+In this article, you learn how to use the set of metrics provided by IoT Central to assess the health of devices connected to your IoT Central application and the health of your running data exports.
Metrics are enabled by default for your IoT Central application and you access them from the [Azure portal](https://portal.azure.com/). The [Azure Monitor data platform exposes these metrics](../../azure-monitor/platform/data-platform-metrics.md) and provides several ways for you to interact with them. For example, you can use charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI.
@@ -28,7 +28,7 @@ Applications that use the free trial plan don't have an associated Azure subscri
## View metrics in the Azure portal
-The following steps assume you have an [IoT Central application](./quick-deploy-iot-central.md) with some [connected devices](./tutorial-connect-device.md).
+The following steps assume you have an [IoT Central application](./quick-deploy-iot-central.md) with some [connected devices](./tutorial-connect-device.md) or a running [data export](howto-export-data.md).
To view IoT Central metrics in the portal:
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
@@ -136,6 +136,7 @@ IoT Edge uses the Microsoft.Azure.Devices.Client SDK. For more information, see
| IoT Edge version | Microsoft.Azure.Devices.Client SDK version | ||--|
+| 1.1.0 (LTS) | 1.28.0 |
| 1.0.10 | 1.28.0 | | 1.0.9 | 1.21.1 | | 1.0.8 | 1.20.3 |
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-public-network-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-public-network-access.md
@@ -6,7 +6,7 @@
Previously updated : 07/01/2020 Last updated : 02/12/2021 # Managing public network access for your IoT hub
@@ -23,7 +23,13 @@ To restrict access to only [private endpoint for your IoT hub in your VNet](virt
:::image type="content" source="media/iot-hub-publicnetworkaccess/turn-off-public-network-access.png" alt-text="Image showing Azure portal where to turn off public network access" lightbox="media/iot-hub-publicnetworkaccess/turn-off-public-network-access.png":::
-To turn on public network access, selected **Enabled**, then **Save**.
+To turn on public network access, select **All networks**, then **Save**.
+
+## IoT Hub endpoint, IP address, and ports after disabling public network access
+
+IoT Hub is a multi-tenant Platform-as-a-Service (PaaS), so different customers share the same pool of compute, networking, and storage hardware resources. IoT Hub's hostnames map to a public endpoint with a publicly routable IP address over the internet. Different customers share this IoT Hub public endpoint, and IoT devices over wide-area networks and on-premises networks can all access it.
+
+Disabling public network access is enforced on a specific IoT hub resource, ensuring isolation. To keep the service active for other customer resources using the public path, its public endpoint remains resolvable, its IP addresses remain discoverable, and its ports remain open. This is not a cause for concern, because Microsoft integrates multiple layers of security to ensure complete isolation between tenants. To learn more, see [Isolation in the Azure Public Cloud](../security/fundamentals/isolation-choices.md#tenant-level-isolation).
## IP Filter
@@ -31,4 +37,4 @@ If public network access is disabled, all [IP Filter](iot-hub-ip-filtering.md) r
## Bug fix with built-in Event Hub compatible endpoint
-There is a bug with IoT Hub where the [built-in Event Hub compatible endpoint](iot-hub-devguide-messages-read-builtin.md) continues to be accessible via public internet when public network access to the IoT Hub is disabled. To learn more and contact us about this bug, see [Disabling public network access for IoT Hub disables access to built-in Event Hub endpoint](https://azure.microsoft.com/updates/iot-hub-public-network-access-bug-fix).
+There is a bug with IoT Hub where the [built-in Event Hub compatible endpoint](iot-hub-devguide-messages-read-builtin.md) continues to be accessible via public internet when public network access to the IoT Hub is disabled. To learn more and contact us about this bug, see [Disabling public network access for IoT Hub disables access to built-in Event Hub endpoint](https://azure.microsoft.com/updates/iot-hub-public-network-access-bug-fix).
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/howto-load-balancer-imds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/howto-load-balancer-imds.md
@@ -14,11 +14,11 @@
## Prerequisites
-* Use the [latest API version](/virtual-machines/windows/instance-metadata-service?tabs=windows#supported-api-versions) for your request.
+* Use the [latest API version](../virtual-machines/windows/instance-metadata-service.md?tabs=windows#supported-api-versions) for your request.
## Sample request and response > [!IMPORTANT]
-> This example bypasses proxies. You **must** bypass proxies when querying IMDS. For more information, see [Proxies](/virtual-machines/windows/instance-metadata-service?tabs=windows#proxies).
+> This example bypasses proxies. You **must** bypass proxies when querying IMDS. For more information, see [Proxies](../virtual-machines/windows/instance-metadata-service.md?tabs=windows#proxies).
### [Windows](#tab/windows/) ```powershell
@@ -77,9 +77,9 @@ curl -H "Metadata:true" --noproxy "*" "http://169.254.169.254:80/metadata/loadba
## Next steps [Common error codes and troubleshooting steps](troubleshoot-load-balancer-imds.md)
-Learn more about [Azure Instance Metadata Service](/virtual-machines/windows/instance-metadata-service)
+Learn more about [Azure Instance Metadata Service](../virtual-machines/windows/instance-metadata-service.md)
-[Retrieve all metadata for an instance](/virtual-machines/windows/instance-metadata-service?tabs=windows#access-azure-instance-metadata-service)
+[Retrieve all metadata for an instance](../virtual-machines/windows/instance-metadata-service.md?tabs=windows#access-azure-instance-metadata-service)
[Deploy a standard load balancer](quickstart-load-balancer-standard-public-portal.md)
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/instance-metadata-service-load-balancer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/instance-metadata-service-load-balancer.md
@@ -24,7 +24,7 @@ The metadata includes the following information for the virtual machines or virt
## Access the load balancer metadata using the IMDS
-For more information on how to access the load balancer metadata, see [Use the Azure Instance Metadata Service to access load balancer information.](howto-load-balancer-imds.md)to access load balancer information.
+For more information on how to access the load balancer metadata, see [Use the Azure Instance Metadata Service to access load balancer information](howto-load-balancer-imds.md).
## Troubleshoot common error codes
@@ -35,7 +35,7 @@ For more information on common error codes and their mitigation methods, see [Tr
If you're unable to retrieve a metadata response after multiple attempts, create a support issue in the Azure portal. ## Next steps
-Learn more about [Azure Instance Metadata Service](/virtual-machines/windows/instance-metadata-service)
+Learn more about [Azure Instance Metadata Service](../virtual-machines/windows/instance-metadata-service.md)
[Deploy a standard load balancer](quickstart-load-balancer-standard-public-portal.md)
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/troubleshoot-load-balancer-imds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/troubleshoot-load-balancer-imds.md
@@ -23,8 +23,8 @@ This article describes common deployment errors and how to resolve those errors
| 400 | Unexpected request. Please check the query parameters and retry. | The error code indicates that the request format is not configured properly. </br> For more information, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response) to fix the request body and issue a retry. | | 404 | No load balancer metadata is found. Please check if your VM is using any non-basic SKU load balancer and retry later. | The error code indicates that your virtual machine isn't associated with a load balancer or the load balancer is basic SKU instead of standard. </br> For more information, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md?tabs=option-1-create-load-balancer-standard) to deploy a standard load balancer.| | 404 | API is not found: Path = "\<UrlPath>", Method = "\<Method>" | The error code indicates a misconfiguration of the path. </br> For more information, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response) to fix the request body and issue a retry.|
-| 405 | Http method is not allowed: Path = "\<UrlPath>", Method = "\<Method>" | The error code indicates an unsupported HTTP verb. </br> For more information, see [Azure Instance Metadata Service (IMDS)](/virtual-machines/windows/instance-metadata-service?tabs=windows.md#http-verbs) for supported verbs. |
-| 429 | Too many requests | The error code indicates a rate limit. </br> For more information on rate limiting, see [Azure Instance Metadata Service (IMDS)](/virtual-machines/windows/instance-metadata-service?tabs=windows#rate-limiting).|
+| 405 | Http method is not allowed: Path = "\<UrlPath>", Method = "\<Method>" | The error code indicates an unsupported HTTP verb. </br> For more information, see [Azure Instance Metadata Service (IMDS)](../virtual-machines/windows/instance-metadata-service.md?tabs=windows#http-verbs) for supported verbs. |
+| 429 | Too many requests | The error code indicates a rate limit. </br> For more information on rate limiting, see [Azure Instance Metadata Service (IMDS)](../virtual-machines/windows/instance-metadata-service.md?tabs=windows#rate-limiting).|
| 400 | Request body is larger than MaxBodyLength: … | The error code indicates a request larger than the MaxBodyLength. </br> For more information on body length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response).| | 400 | Parameter key length is larger than MaxParameterKeyLength: … | The error code indicates a parameter key length larger than the MaxParameterKeyLength. </br> For more information on body length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response). | | 400 | Parameter value length is larger than MaxParameterValueLength: … | The error code indicates a parameter key length larger than the MaxParameterValueLength. </br> For more information on value length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response).|
@@ -36,5 +36,5 @@ This article describes common deployment errors and how to resolve those errors
## Next steps
-Learn more about [Azure Instance Metadata Service](/virtual-machines/windows/instance-metadata-service.md)
+Learn more about [Azure Instance Metadata Service](../virtual-machines/windows/instance-metadata-service.md)
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/create-managed-service-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-managed-service-identity.md
@@ -5,7 +5,7 @@
ms.suite: integration Previously updated : 01/15/2021 Last updated : 02/12/2021 # Authenticate access to Azure resources by using managed identities in Azure Logic Apps
@@ -26,6 +26,11 @@ Currently, only [specific built-in triggers and actions](../logic-apps/logic-app
* HTTP * HTTP + Webhook
+> [!NOTE]
+> While the HTTP trigger and action can authenticate connections to Azure Storage
+> accounts behind Azure firewalls by using the system-assigned managed identity,
+> they can't use the user-assigned managed identity to authenticate the same connections.
+ **Managed connectors** * Azure Automation
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/ai-gallery-control-personal-data-dsr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/ai-gallery-control-personal-data-dsr.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Manage Azure AI Gallery data - Azure'
description: You can export and delete your in-product user data from Azure AI Gallery using the interface or AI Gallery Catalog API. This article shows you how. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/algorithm-parameters-optimize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/algorithm-parameters-optimize.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Optimize algorithms - Azure'
description: Explains how to choose the optimal parameter set for an algorithm in Azure Machine Learning Studio (classic). -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/azure-ml-netsharp-reference-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/azure-ml-netsharp-reference-guide.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Net# custom neural networks - Azure'
description: Syntax guide for the Net# neural networks specification language. Learn how to create custom neural network models in Azure Machine Learning Studio (classic). -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/consume-web-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/consume-web-services.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Consume web services - Azure'
description: Once a machine learning service is deployed from Azure Machine Learning Studio (classic), the RESTFul Web service can be consumed either as real-time request-response service or as a batch execution service. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/consuming-from-excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/consuming-from-excel.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Consume web service in Excel - Azure'
description: Azure Machine Learning Studio (classic) makes it easy to call web services directly from Excel without the need to write any code. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/create-endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-endpoint.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Create web service endpoints - Azure'
description: Create web service endpoints in Azure Machine Learning Studio (classic). Each endpoint in the web service is independently addressed, throttled, and managed. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/create-experiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-experiment.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Quickstart: Create a data science experiment - Azur
description: This machine learning quickstart walks you through an easy data science experiment. We'll predict the price of a car using a regression algorithm. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/create-models-and-endpoints-with-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-models-and-endpoints-with-powershell.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Create multiple model & endpoints - Azure'
description: Use PowerShell to create multiple Machine Learning models and web service endpoints with the same algorithm but different training datasets. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/create-workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-workspace.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Create a workspace - Azure'
description: To use Azure Machine Learning Studio (classic), you need to have a Machine Learning Studio (classic) workspace. This workspace contains the tools you need to create, manage, and publish experiments. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/custom-r-modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/custom-r-modules.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Create & deploy custom R modules - Azure'
description: Learn how to author and deploy custom R modules in ML Studio (classic). -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/data-science-for-beginners-ask-a-question-you-can-answer-with-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-ask-a-question-you-can-answer-with-data.md
@@ -3,7 +3,7 @@ Title: 'Ask a question data can answer - ML Studio (classic) - Azure'
description: Learn how to formulate a sharp data science question in Data Science for Beginners video 3. Includes a comparison of classification and regression questions. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/data-science-for-beginners-copy-other-peoples-work-to-do-data-science https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-copy-other-peoples-work-to-do-data-science.md
@@ -3,7 +3,7 @@ Title: 'Copy data science examples - ML Studio (classic) - Azure'
description: 'Trade secret of data science: Get others to do your work for you. Get machine learning examples from the Azure AI Gallery.' -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/data-science-for-beginners-is-your-data-ready-for-data-science https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-is-your-data-ready-for-data-science.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Data evaluation - Azure'
description: Four criteria your data needs to meet to be ready for data science. This video has concrete examples to help with basic data evaluation. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/data-science-for-beginners-predict-an-answer-with-a-simple-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-predict-an-answer-with-a-simple-model.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Predict answers with regression models - Azure'
description: How to create a simple regression model to predict a price in Data Science for Beginners video 4. Includes a linear regression with target data. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/data-science-for-beginners-the-5-questions-data-science-answers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-the-5-questions-data-science-answers.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Data Science for Beginners - Azure'
description: Data Science for Beginners teaches basic concepts in 5 short videos, starting with The 5 Questions Data Science Answers. From Azure Machine Learning. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/deploy-a-machine-learning-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-a-machine-learning-web-service.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Deploy a web service - Azure'
description: How to convert a training experiment to a predictive experiment, prepare it for deployment, then deploy it as an Azure Machine Learning Studio (classic) web service. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/deploy-consume-web-service-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-consume-web-service-guide.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Deployment and consumption - Azure'
description: You can use Azure Machine Learning Studio (classic) to deploy machine learning workflows and models as web services. These web services can then be used to call the machine learning models from applications over the internet to do predictions in real time or in batch mode. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/deploy-with-resource-manager-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-with-resource-manager-template.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Deploy workspaces with Azure Resource Manager - Azu
description: How to deploy a workspace for Azure Machine Learning Studio (classic) using Azure Resource Manager template -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/evaluate-model-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/evaluate-model-performance.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Evaluate & cross-validate models - Azure'
description: Learn about the metrics you can use to monitor model performance in Azure Machine Learning Studio (classic). -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/excel-add-in-for-web-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/excel-add-in-for-web-services.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Excel add-in for web services - Azure'
description: How to use Azure Machine Learning Web services directly in Excel without writing any code. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/execute-python-scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/execute-python-scripts.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Execute Python scripts - Azure'
description: Learn how to use the Execute Python Script module to use Python code in Machine Learning Studio (classic) experiments and web services. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/export-delete-personal-data-dsr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/export-delete-personal-data-dsr.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Export & delete your data - Azure'
description: In-product data stored by Azure Machine Learning Studio (classic) is available for export and deletion through the Azure portal and also through authenticated REST APIs. Telemetry data can be accessed through the Azure Privacy Portal. This article shows you how. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/gallery-how-to-use-contribute-publish https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/gallery-how-to-use-contribute-publish.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Azure AI Gallery - Azure'
description: Share and discover analytics resources and more in the Azure AI Gallery. Learn from others and make your own contributions to the community. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/import-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/import-data.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Import training data - Azure'
description: How to import your data into Azure Machine Learning Studio (classic) from various data sources. Learn what data types and data formats are supported. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/interpret-model-results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/interpret-model-results.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Interpret model results - Azure'
description: How to choose the optimal parameter set for an algorithm using and visualizing score model outputs. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/manage-experiment-iterations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-experiment-iterations.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): View & rerun experiments - Azure'
description: Manage experiment runs in Azure Machine Learning Studio (classic). You can review previous runs of your experiments at any time in order to challenge, revisit, and ultimately either confirm or refine previous assumptions. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/manage-new-webservice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-new-webservice.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Manage web services - Azure'
description: Manage your Machine Learning New and Classic Web services using the Microsoft Azure Machine Learning Web Services portal. Since Classic Web services and New Web services are based on different underlying technologies, you have slightly different management capabilities for each of them. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/manage-web-service-endpoints-using-api-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-web-service-endpoints-using-api-management.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Manage web services using API Management - Azure'
description: A guide showing how to manage AzureML web services using API Management. Manage your REST API endpoints by defining user access, usage throttling, and dashboard monitoring. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/manage-workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-workspace.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Manage workspaces - Azure'
description: Manage access to Azure Machine Learning Studio (classic) workspaces, and deploy and manage Machine Learning API web services -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/model-progression-experiment-to-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/model-progression-experiment-to-web-service.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): How a model becomes a web service - Azure'
description: An overview of the mechanics of how your Azure Machine Learning Studio (classic) model progresses from a development experiment to a Web service. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/powershell-module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/powershell-module.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): PowerShell modules - Azure'
description: Use PowerShell to create and manage Azure Machine Learning Studio (classic) workspaces, experiments, web services, and more. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/r-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/r-get-started.md
@@ -3,7 +3,7 @@ Title: Use R with Machine Learning Studio (classic) - Azure
description: Use this R programming tutorial to get started with Azure Machine Learning Studio (classic) in R to create a forecasting solution. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/retrain-classic-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/retrain-classic-web-service.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): retrain classic web service - Azure'
description: Learn how to retrain a model and update a classic web service to use the newly trained model in Azure Machine Learning Studio (classic). -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/retrain-machine-learning-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/retrain-machine-learning-model.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): retrain a web service - Azure'
description: Learn how to update a web service to use a newly trained machine learning model in Azure Machine Learning Studio (classic). -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/sample-experiments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/sample-experiments.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): start experiments from examples - Azure'
description: Learn how to use example machine learning experiments to create new experiments with Azure AI Gallery and Azure Machine Learning Studio (classic). -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/studio-classic-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/studio-classic-overview.md
@@ -8,7 +8,7 @@
ms.assetid: e65c8fe1-7991-4a2a-86ef-fd80a7a06269 -+ Last updated 08/19/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/support-aml-studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/support-aml-studio.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic) tutorial support & training - Azure'
description: Get support and training and provide feedback for Azure Machine Learning Studio (classic) -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/tutorial-part1-credit-risk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/tutorial-part1-credit-risk.md
@@ -6,7 +6,7 @@
-+ Last updated 02/11/2019
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/tutorial-part2-credit-risk-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/tutorial-part2-credit-risk-train.md
@@ -6,7 +6,7 @@
-+ Last updated 02/11/2019
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/tutorial-part3-credit-risk-deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/tutorial-part3-credit-risk-deploy.md
@@ -6,7 +6,7 @@
-+ Last updated 07/27/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/use-data-from-an-on-premises-sql-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/use-data-from-an-on-premises-sql-server.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): On-premises SQL Server - Azure'
description: Use data from a SQL Server database to perform advanced analytics with Azure Machine Learning Studio (classic). -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/use-sample-datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/use-sample-datasets.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Use the sample datasets - Azure'
description: Descriptions of the datasets used in sample models included in Machine Learning Studio (classic). You can use these sample datasets for your experiments. -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/version-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/version-control.md
@@ -3,7 +3,7 @@ Title: 'ML Studio (classic): Application lifecycle management - Azure'
description: Apply Application Lifecycle Management best practices in Azure Machine Learning Studio (classic) -+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/web-service-error-codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-service-error-codes.md
@@ -9,7 +9,7 @@
editor: cgronlun ms.assetid: 0923074b-3728-439d-a1b8-8a7245e39be4 -+ Last updated 11/16/2016
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/web-service-parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-service-parameters.md
@@ -5,10 +5,9 @@
ms.assetid: c49187db-b976-4731-89d6-11a0bf653db1 -+ Last updated 01/12/2017
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/web-services-logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-services-logging.md
@@ -5,10 +5,9 @@
ms.assetid: c54d41e1-0300-46ef-bbfc-d6f7dca85086 -+ Last updated 06/15/2017
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/web-services-that-use-import-export-modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-services-that-use-import-export-modules.md
@@ -5,10 +5,9 @@
ms.assetid: 3a7ac351-ebd3-43a1-8c5d-18223903d08e -+ Last updated 03/28/2017
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-common-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-common-identity.md
@@ -4,8 +4,7 @@
description: Learn how to create common user accounts that can be used across multiple Data Science Virtual Machines. You can use Azure Active Directory or an on-premises Active Directory to authenticate users to the Data Science Virtual Machine. keywords: deep learning, AI, data science tools, data science virtual machine, geospatial analytics, team data science process --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-enterprise-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-enterprise-overview.md
@@ -4,8 +4,7 @@
description: Patterns for deploying the Data Science VM in an enterprise team environment. keywords: deep learning, AI, data science tools, data science virtual machine, geospatial analytics, team data science process --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-pools.md
@@ -4,8 +4,7 @@
description: Learn how to create & deploy a shared pool of Data Science Virtual Machines (DSVMs) as a shared resource for a team. keywords: deep learning, AI, data science tools, data science virtual machine, geospatial analytics, team data science process --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-samples-and-walkthroughs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-samples-and-walkthroughs.md
@@ -4,8 +4,7 @@
description: Through these samples and walkthroughs, learn how to handle common tasks and scenarios with the Data Science Virtual Machine. keywords: data science tools, data science virtual machine, tools for data science, linux data science --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-secure-access-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-secure-access-keys.md
@@ -4,8 +4,7 @@
description: Learn how to securely store access credentials on the Data Science Virtual Machine. You'll learn how to use managed service identities and Azure Key Vault to store access credentials. keywords: deep learning, AI, data science tools, data science virtual machine, geospatial analytics, team data science process --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-tools-data-platforms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-data-platforms.md
@@ -4,8 +4,7 @@
description: Learn about the supported data platforms and tools for the Azure Data Science Virtual Machine. keywords: data science tools, data science virtual machine, tools for data science, linux data science --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-tools-data-science https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-data-science.md
@@ -4,8 +4,7 @@
description: Learn about the machine-learning tools and frameworks that are preinstalled on the Data Science Virtual Machine. keywords: data science tools, data science virtual machine, tools for data science, linux data science --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-tools-deep-learning-frameworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-deep-learning-frameworks.md
@@ -4,8 +4,7 @@
description: Available deep learning frameworks and tools on Azure Data Science Virtual Machine. keywords: data science tools, data science virtual machine, tools for data science, linux data science --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-tools-development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-development.md
@@ -4,8 +4,7 @@
description: Learn about the tools and integrated development environments available on the Data Science Virtual Machine. keywords: data science tools, data science virtual machine, tools for data science, linux data science --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-tools-ingestion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-ingestion.md
@@ -4,8 +4,7 @@
description: Learn about the data ingestion tools and utilities that are preinstalled on the Data Science Virtual Machine. keywords: data science tools, data science virtual machine, tools for data science, linux data science --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-tools-languages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-languages.md
@@ -4,8 +4,7 @@
description: The supported program languages and related tools pre-installed on the Data Science Virtual Machine. keywords: data science tools, data science virtual machine, tools for data science, linux data science --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-tools-productivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-productivity.md
@@ -4,8 +4,7 @@
description: Learn about the productivity tools on the Data Science Virtual Machines. keywords: deep learning, AI, data science tools, data science virtual machine, geospatial analytics, team data science process --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager.md
@@ -7,8 +7,7 @@
Last updated 06/10/2020--+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md
@@ -2,8 +2,7 @@
Title: 'Quickstart: Create an Ubuntu Data Science Virtual Machine' description: Configure and create a Data Science Virtual Machine for Linux (Ubuntu) to do analytics and machine learning.--+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/how-to-track-experiments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/how-to-track-experiments.md
@@ -3,8 +3,7 @@ Title: Experiment tracking and deploying models
description: Learn how to track and log experiments from the Data Science Virtual Machine with Azure Machine Learning and/or MLFlow. --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/linux-dsvm-walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/linux-dsvm-walkthrough.md
@@ -3,8 +3,7 @@ Title: Explore Linux
description: Learn how to complete several common data science tasks by using the Linux Data Science Virtual Machine. --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/overview.md
@@ -4,8 +4,7 @@
description: Overview of Azure Data Science Virtual Machine - An easy to use virtual machine on the Azure cloud platform with preinstalled and configured tools and libraries for doing data science. keywords: data science tools, data science virtual machine, tools for data science, linux data science --+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-attach-compute-targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-compute-targets.md
@@ -219,7 +219,7 @@ To attach Azure Databricks as a compute target, provide the following informatio
* __Databricks compute name__: The name you want to assign to this compute resource. * __Databricks workspace name__: The name of the Azure Databricks workspace.
-* __Databricks access token__: The access token used to authenticate to Azure Databricks. To generate an access token, see the [Authentication](https://docs.azuredatabricks.net/dev-tools/api/latest/authentication.html) document.
+* __Databricks access token__: The access token used to authenticate to Azure Databricks. To generate an access token, see the [Authentication](/azure/databricks/dev-tools/api/latest/authentication) document.
The following code demonstrates how to attach Azure Databricks as a compute target with the Azure Machine Learning SDK (__the Databricks workspace needs to be in the same subscription as your AML workspace__):
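The SDK snippet itself is elided from this digest. As an illustration of the bolded same-subscription requirement, here is a small self-contained sketch (a hypothetical helper, not part of the Azure ML SDK) that compares the subscription GUIDs embedded in two Azure resource IDs:

```python
def subscription_of(resource_id: str) -> str:
    """Extract the subscription GUID from an Azure resource ID.

    Azure resource IDs have the form:
    /subscriptions/<sub>/resourceGroups/<rg>/providers/<namespace>/<type>/<name>
    """
    parts = resource_id.strip("/").split("/")
    try:
        return parts[parts.index("subscriptions") + 1]
    except (ValueError, IndexError):
        raise ValueError(f"not a valid Azure resource ID: {resource_id}")


def same_subscription(aml_workspace_id: str, databricks_workspace_id: str) -> bool:
    """True when both workspaces live in the same Azure subscription."""
    return subscription_of(aml_workspace_id) == subscription_of(databricks_workspace_id)
```

You could run such a check before calling the SDK's attach operation, to fail fast when the two workspaces are in different subscriptions.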
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-databricks-automl-environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-databricks-automl-environment.md
@@ -127,7 +127,7 @@ Try it out:
psutil cryptography==1.5 pyopenssl==16.0.0 ipython==2.2.0 ```
- Alternatively, you can use init scripts if you keep facing install issues with Python libraries. This approach isn't officially supported. For more information, see [Cluster-scoped init scripts](https://docs.azuredatabricks.net/user-guide/clusters/init-scripts.html#cluster-scoped-init-scripts).
+ Alternatively, you can use init scripts if you keep facing install issues with Python libraries. This approach isn't officially supported. For more information, see [Cluster-scoped init scripts](/azure/databricks/clusters/init-scripts#cluster-scoped-init-scripts).
* **Import error: cannot import name `Timedelta` from `pandas._libs.tslibs`**: If you see this error when you use automated machine learning, run the two following lines in your notebook: ```
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-secure-training-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
@@ -270,7 +270,7 @@ To use Azure Databricks in a virtual network with your workspace, the following
> * If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network as the Azure Databricks cluster. > * In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
-For specific information on using Azure Databricks with a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-inject.html).
+For specific information on using Azure Databricks with a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
<a id="vmorhdi"></a>
mysql https://docs.microsoft.com/en-us/azure/mysql/concepts-servers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-servers.md
@@ -40,7 +40,7 @@ The following elements help ensure safe access to your database.
| **Firewall** | To help protect your data, a firewall rule prevents all access to your database server, until you specify which computers have permission. See [Azure Database for MySQL Server firewall rules](./concepts-firewall-rules.md). | | **SSL** | The service supports enforcing SSL connections between your applications and your database server. See [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./howto-configure-ssl.md). |
-## Stop/Start an Azure Database for MySQL (Preview)
+## Stop/Start an Azure Database for MySQL
Azure Database for MySQL gives you the ability to **Stop** the server when not in use and **Start** it when you resume activity. This saves costs on the database servers, because you pay for the resource only while it is in use, which is especially valuable for dev-test workloads or when you use the server for only part of the day. When you stop the server, all active connections are dropped. Later, when you want to bring the server back online, you can use either the [Azure portal](how-to-stop-start-server.md) or the [CLI](how-to-stop-start-server.md).
mysql https://docs.microsoft.com/en-us/azure/mysql/how-to-stop-start-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/how-to-stop-start-server.md
@@ -11,7 +11,7 @@ Last updated 09/21/2020
# Stop/Start an Azure Database for MySQL > [!IMPORTANT]
-> Stop/Start functionality for Azure Database for MySQL is currently in public preview.
+> When you **Stop** the server, it remains stopped for up to 7 days. If you do not manually **Start** it during that time, the server is automatically started at the end of the 7 days. You can **Stop** it again if you are still not using it.
This article provides a step-by-step procedure to stop and start a single server.
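The 7-day stop window described in the note above can be sketched as a small calculation (an illustrative stdlib helper, not an Azure API; the 7-day constant comes from the note):

```python
from datetime import datetime, timedelta

STOP_WINDOW = timedelta(days=7)  # a stopped server is auto-started after 7 days


def auto_start_time(stopped_at: datetime) -> datetime:
    """Latest moment the server stays stopped before Azure starts it again."""
    return stopped_at + STOP_WINDOW


def is_auto_started(stopped_at: datetime, now: datetime) -> bool:
    """True once the 7-day stop window has elapsed without a manual Start."""
    return now - stopped_at >= STOP_WINDOW
```

For example, a server stopped on February 1 would be automatically started on February 8 unless you start (or re-stop) it yourself before then.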
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-backup.md
@@ -61,7 +61,7 @@ There are two types of restore available:
- **Point-in-time restore** is available with either backup redundancy option and creates a new server in the same region as your original server. - **Geo-restore** is available only if you configured your server for geo-redundant storage and it allows you to restore your server to a different region.
-The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours.
+The estimated time of recovery depends on several factors, including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time also varies depending on the last data backup and the amount of recovery that needs to be performed. It is usually less than 12 hours.
> [!NOTE] > If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-business-continuity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-business-continuity.md
@@ -27,7 +27,7 @@ The following table compares RTO and RPO in a **typical workload** scenario:
| **Capability** | **Basic** | **General Purpose** | **Memory optimized** | | :: | :-: | :--: | :: |
-| Point in Time Restore from backup | Any restore point within the retention period | Any restore point within the retention period | Any restore point within the retention period |
+| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min |
| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h | | Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-supported-versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-supported-versions.md
@@ -23,11 +23,8 @@ The current minor release is 10.11. Refer to the [PostgreSQL documentation](http
## PostgreSQL version 9.6 The current minor release is 9.6.16. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/release-9-6-16.html) to learn more about improvements and fixes in this minor release.
-## PostgreSQL version 9.5
-The current minor release is 9.5.20. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/9.5/static/release-9-5-20.html) to learn about improvements and fixes in this minor release.
-
-> [!NOTE]
-> Aligning with Postgres community [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL will be retiring Postgres version 9.5 on February 11, 2021. Please see [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions.
+## PostgreSQL version 9.5 (retired)
+In line with the Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL retired Postgres version 9.5 on February 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you are running this major version, upgrade to a later version, preferably PostgreSQL 11, at your earliest convenience.
## Managing upgrades The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL automatically patches servers with minor releases during the service's monthly deployments.
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-version-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-version-policy.md
@@ -22,7 +22,7 @@ Azure Database for PostgreSQL supports the following database versions.
| PostgreSQL 11 | X | X | | PostgreSQL 10 | X | | | PostgreSQL 9.6 | X | |
-| PostgreSQL 9.5 | X | |
+| *PostgreSQL 9.5 (retired)* | X | |
## Major version support Each major version of PostgreSQL will be supported by Azure Database for PostgreSQL from the date on which Azure begins supporting the version until the version is retired by the PostgreSQL community, as provided in the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
@@ -35,7 +35,7 @@ The table below provides the retirement details for PostgreSQL major versions. T
| Version | What's New | Azure support start date | Retirement date| | -- | -- | | -- |
-| PostgreSQL 9.5| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021
+| [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021
| [PostgreSQL 9.6](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021 | [PostgreSQL 10](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022 | [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2023
postgresql https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-compute-storage.md
@@ -146,8 +146,11 @@ When marked with a \*, I/O bandwidth is limited by the VM type you selected. Oth
When you reach the storage limit, the server will start returning errors and prevent any further modifications. This may also cause problems with other operational activities, such as backups and WAL archival.
+To avoid this situation, when the storage usage reaches 95% or if the available capacity is less than 5 GiB, the server is automatically switched to **read-only mode**.
+ We recommend that you actively monitor the disk space in use and increase the disk size before you run out of storage. You can set up an alert to notify you when your server storage is approaching its limit, so that you can avoid any out-of-storage issues. For more information, see the documentation on [how to set up an alert](howto-alert-on-metrics.md). + ### Storage auto-grow Storage auto-grow is not yet available for Flexible Server.
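The automatic read-only switch added above can be expressed as a simple condition (an illustrative sketch using the thresholds stated in the change, not a service API):

```python
GIB = 1024 ** 3  # one gibibyte in bytes


def should_switch_to_read_only(used_bytes: int, provisioned_bytes: int) -> bool:
    """Flexible Server switches to read-only mode when storage usage
    reaches 95% or the available capacity drops below 5 GiB."""
    free_bytes = provisioned_bytes - used_bytes
    return used_bytes / provisioned_bytes >= 0.95 or free_bytes < 5 * GIB
```

Note that on small disks the 5 GiB floor can trigger first: a 32 GiB server with 28 GiB used is only at 87.5% but has just 4 GiB free, so it would already be switched to read-only.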
postgresql https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-limits.md
@@ -61,6 +61,13 @@ A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, crea
- Automated migration between major database engine versions is currently not supported. If you would like to upgrade to the next major version, perform a [dump and restore](../howto-migrate-using-dump-and-restore.md) to a server that was created with the new engine version.
+### Storage
+
+- Once configured, storage size cannot be reduced.
+- Currently, the storage auto-grow feature is not available. Monitor the usage and increase the storage size as needed.
+- When the storage usage reaches 95% or if the available capacity is less than 5 GiB, the server is automatically switched to **read-only mode** to avoid errors associated with disk-full situations.
+- We recommend setting alert rules on `storage used` or `storage percent` so that when they exceed certain thresholds you can proactively take action, such as increasing the storage size. For example, set an alert if the storage percent exceeds 80% usage.
+
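The storage limits above suggest a simple three-state classification of a server's storage health (an illustrative sketch; the 80% alert threshold is the example from the list, and 95% is the documented read-only switch):

```python
def storage_state(storage_percent: float, alert_at: float = 80.0) -> str:
    """Classify storage usage: 'read-only' at the documented 95% switch,
    'alert' past a user-chosen alert threshold, otherwise 'ok'."""
    if storage_percent >= 95.0:
        return "read-only"
    if storage_percent >= alert_at:
        return "alert"
    return "ok"
```

An alert rule set on the `storage percent` metric would fire in the middle band, giving you time to grow the disk before the server is switched to read-only.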
### Networking - Moving in and out of VNET is currently not supported.
purview https://docs.microsoft.com/en-us/azure/purview/register-scan-power-bi-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-power-bi-tenant.md
@@ -18,7 +18,7 @@ This article shows how to use Azure Purview portal to register and scan a Power
## Create a security group for permissions
-To set up authentication, create a security group and add the catalog's managed identity to it.
+To set up authentication, create a security group and add the Purview managed identity to it.
1. In the [Azure portal](https://portal.azure.com), search for **Azure Active Directory**. 1. Create a new security group in your Azure Active Directory, by following [Create a basic group and add members using Azure Active Directory](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
@@ -30,11 +30,11 @@ To set up authentication, create a security group and add the catalog's managed
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/security-group.png" alt-text="Security group type":::
-1. Add your catalog's managed identity to this security group. Select **Members**, then select **+ Add members**.
+1. Add your Purview managed identity to this security group. Select **Members**, then select **+ Add members**.
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-group-member.png" alt-text="Add the catalog's managed instance to group.":::
-1. Search for your catalog and select it.
+1. Search for your Purview managed identity and select it.
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-catalog-to-group-by-search.png" alt-text="Add catalog by searching for it":::
@@ -56,14 +56,14 @@ To set up authentication, create a security group and add the catalog's managed
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/allow-service-principals-power-bi-admin.png" alt-text="Image showing how to allow service principals to get read-only Power BI admin API permissions"::: > [!Caution]
- > When you allow the security group you created (that has your data catalog managed identity as a member) to use read-only Power BI admin APIs, you also allow it to access the metadata (e.g. dashboard and report names, owners, descriptions, etc.) for all of your Power BI artifacts in this tenant. Once the metadata has been pulled into the Azure Purview, Purview's permissions, not Power BI permissions, determine who can see that metadata.
+ > When you allow the security group you created (that has your Purview managed identity as a member) to use read-only Power BI admin APIs, you also allow it to access the metadata (e.g. dashboard and report names, owners, descriptions, etc.) for all of your Power BI artifacts in this tenant. Once the metadata has been pulled into the Azure Purview, Purview's permissions, not Power BI permissions, determine who can see that metadata.
> [!Note] > You can remove the security group from your developer settings, but the metadata previously extracted won't be removed from the Purview account. You can delete it separately, if you wish. ## Register your Power BI and set up a scan
-Now that you've given the catalog permissions to connect to the Admin API of your Power BI tenant, you can set up your scan from the catalog portal.
+Now that you've given the Purview Managed Identity permissions to connect to the Admin API of your Power BI tenant, you can set up your scan from the Azure Purview Studio.
First, add a special feature flag to your Purview URL
@@ -110,4 +110,4 @@ First, add a special feature flag to your Purview URL
## Next steps - [Browse the Azure Purview Data catalog](how-to-browse-catalog.md)-- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
+- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
search https://docs.microsoft.com/en-us/azure/search/index-ranking-similarity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/index-ranking-similarity.md
@@ -28,7 +28,7 @@ While conceptually similar to the older Classic Similarity algorithm, BM25 takes
When you create a new index, you can set a **similarity** property to specify the algorithm. You can use the `api-version=2019-05-06-Preview`, as shown below, or `api-version=2020-06-30`.
-```
+```http
PUT https://[search service name].search.windows.net/indexes/[index name]?api-version=2019-05-06-Preview ```
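For reference, a minimal sketch of a request body that sets the BM25 algorithm on an index follows. Field definitions are elided, and the `k1` (term-frequency saturation) and `b` (length normalization) values shown are the standard BM25 tuning parameters, not service defaults; check the REST reference for the exact properties supported in your api-version:

```json
{
  "name": "my-index",
  "fields": [ ],
  "similarity": {
    "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
    "k1": 1.3,
    "b": 0.75
  }
}
```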
search https://docs.microsoft.com/en-us/azure/search/index-similarity-and-scoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/index-similarity-and-scoring.md
@@ -15,7 +15,7 @@ Scoring refers to the computation of a search score for every item returned in s
By default, the top 50 are returned in the response, but you can use the **$top** parameter to return a smaller or larger number of items (up to 1000 in a single response), and **$skip** to get the next set of results.
-The search score is computed based on statistical properties of the data and the query. Azure Cognitive Search finds documents that match on search terms (some or all, depending on [searchMode](/rest/api/searchservice/search-documents#searchmodeany--all-optional)), favoring documents that contain many instances of the search term. The search score goes up even higher if the term is rare across the data index, but common within the document. The basis for this approach to computing relevance is known as *TF-IDF or* term frequency-inverse document frequency.
+The search score is computed based on statistical properties of the data and the query. Azure Cognitive Search finds documents that match on search terms (some or all, depending on [searchMode](/rest/api/searchservice/search-documents#query-parameters)), favoring documents that contain many instances of the search term. The search score goes up even higher if the term is rare across the data index, but common within the document. The basis for this approach to computing relevance is known as *TF-IDF*, or term frequency-inverse document frequency.
Search score values can be repeated throughout a result set. When multiple hits have the same search score, the ordering of the same scored items is not defined, and is not stable. Run the query again, and you might see items shift position, especially if you are using the free service or a billable service with multiple replicas. Given two items with an identical score, there is no guarantee which one appears first.
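As a rough illustration of the TF-IDF idea described above — not the exact formula Azure Cognitive Search uses — a toy scorer might look like this:

```python
import math

def tf_idf(term_count_in_doc: int, doc_length: int,
           docs_with_term: int, total_docs: int) -> float:
    """Toy TF-IDF: high when a term is frequent in the document
    but rare across the index. Illustrative only."""
    tf = term_count_in_doc / doc_length
    idf = math.log(total_docs / (1 + docs_with_term))
    return tf * idf

# Same in-document frequency, but the rarer term scores higher.
common = tf_idf(5, 100, docs_with_term=90, total_docs=100)
rare = tf_idf(5, 100, docs_with_term=2, total_docs=100)
print(rare > common)  # True
```

The service's actual scoring (Classic Similarity or BM25) adds further normalization, but the intuition is the same.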
search https://docs.microsoft.com/en-us/azure/search/search-api-preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-api-preview.md
@@ -19,7 +19,7 @@ Preview features that transition to general availability are removed from this l
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability | |||-|| | [**Azure Machine Learning (AML) skill**](cognitive-search-aml-skill.md) | AI enrichment| A new skill type to integrate an inferencing endpoint from Azure Machine Learning. Get started with [this tutorial](cognitive-search-tutorial-aml-custom-skill.md). | Use [Search REST API 2020-06-30-Preview](/rest/api/searchservice/) or 2019-05-06-Preview. Also available in the portal, in skillset design, assuming Cognitive Search and Azure ML services are deployed in the same subscription. |
-| [**featuresMode parameter**](/rest/api/searchservice/search-documents#featuresmode) | Relevance (scoring) | Relevance score expansion to include details: per field similarity score, per field term frequency, and per field number of unique tokens matched. You can consume these data points in [custom scoring solutions](https://github.com/Azure-Samples/search-ranking-tutorial). | Add this query parameter using [Search Documents (REST)](/rest/api/searchservice/search-documents) with api-version=2020-06-30-Preview or 2019-05-06-Preview. |
+| [**featuresMode parameter**](/rest/api/searchservice/preview-api/search-documents#query-parameters) | Relevance (scoring) | Relevance score expansion to include details: per field similarity score, per field term frequency, and per field number of unique tokens matched. You can consume these data points in [custom scoring solutions](https://github.com/Azure-Samples/search-ranking-tutorial). | Add this query parameter using [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) with api-version=2020-06-30-Preview or 2019-05-06-Preview. |
| [**Debug Sessions**](cognitive-search-debug-session.md) | Portal, AI enrichment (skillset) | An in-session skillset editor used to investigate and resolve issues with a skillset. Fixes applied during a debug session can be saved to a skillset in the service. | Portal only, using mid-page links on the Overview page to open a debug session. | | [**Native blob soft delete**](search-howto-index-changed-deleted-blobs.md) | Indexers, Azure blobs| The Azure Blob Storage indexer in Azure Cognitive Search will recognize blobs that are in a soft deleted state, and remove the corresponding search document during indexing. | Add this configuration setting using [Create Indexer (REST)](/rest/api/searchservice/create-indexer) with api-version=2020-06-30-Preview or api-version=2019-05-06-Preview. | | [**Custom Entity Lookup skill**](cognitive-search-skill-custom-entity-lookup.md ) | AI enrichment (skillset) | A cognitive skill that looks for text from a custom, user-defined list of words and phrases. Using this list, it labels all documents with any matching entities. The skill also supports a degree of fuzzy matching that can be applied to find matches that are similar but not quite exact. | Reference this preview skill using the Skillset editor in the portal or [Create Skillset (REST)](/rest/api/searchservice/create-skillset) with api-version=2020-06-30-Preview or api-version=2019-05-06-Preview. |
search https://docs.microsoft.com/en-us/azure/search/search-api-versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-api-versions.md
@@ -92,19 +92,19 @@ The following table provides links to more recent SDK versions.
| SDK version | Status | Description | |-|--||
-| [Java azure-search-documents 11](https://newreleases.io/project/github/Azure/azure-sdk-for-java/release/azure-search-documents_11.1.0) | Stable | New client library from Azure .NET SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
+| [Java azure-search-documents 11](https://newreleases.io/project/github/Azure/azure-sdk-for-java/release/azure-search-documents_11.1.0) | Stable | New client library from Azure Java SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
| [Java Management Client 1.35.0](/java/api/overview/azure/search/management) | Stable | Targets the Management REST api-version=2015-08-19. | ## Azure SDK for JavaScript | SDK version | Status | Description | |-|--||
-| [JavaScript azure-search 11.0](https://azure.github.io/azure-sdk-for-node/azure-search/latest/) | Stable | New client library from Azure .NET SDK, released July 2020. Targets the Search REST api-version=2016-09-01. |
-| [JavaScript azure-arm-search](https://azure.github.io/azure-sdk-for-node/azure-arm-search/latest/) | Stable | Targets the Management REST api-version=2015-08-19. |
+| [JavaScript @azure/search-documents 11.0](https://www.npmjs.com/package/@azure/search-documents) | Stable | New client library from Azure JavaScript & TypeScript SDK, released July 2020. Targets the Search REST api-version=2016-09-01. |
+| [JavaScript @azure/arm-search](https://www.npmjs.com/package/@azure/arm-search) | Stable | Targets the Management REST api-version=2015-08-19. |
## Azure SDK for Python | SDK version | Status | Description | |-|--||
-| [Python azure-search-documents 11.0](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-search-documents/11.0.0/https://docsupdatetracker.net/index.html) | Stable | New client library from Azure .NET SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
-| [Python azure-mgmt-search 1.0](/python/api/overview/azure/search) | Stable | Targets the Management REST api-version=2015-08-19. |
+| [Python azure-search-documents 11.0](https://pypi.org/project/azure-search-documents/) | Stable | New client library from Azure Python SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
+| [Python azure-mgmt-search 8.0](https://pypi.org/project/azure-mgmt-search/) | Stable | Targets the Management REST api-version=2015-08-19. |
search https://docs.microsoft.com/en-us/azure/search/search-traffic-analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-traffic-analytics.md
@@ -165,7 +165,7 @@ Every time that a search request is issued by a user, you should log that as a s
+ **ScoringProfile**: (string) name of the scoring profile used, if any > [!NOTE]
-> Request the count of user generated queries by adding $count=true to your search query. For more information, see [Search Documents (REST)](/rest/api/searchservice/search-documents#counttrue--false).
+> Request the count of user generated queries by adding $count=true to your search query. For more information, see [Search Documents (REST)](/rest/api/searchservice/search-documents#query-parameters).
> **Use C#**
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-pricing.md
@@ -11,7 +11,7 @@ ms.devlang: na
na Previously updated : 02/11/2021 Last updated : 02/14/2021
@@ -113,7 +113,18 @@ If you've already got a license for Microsoft Defender for Endpoint, you won't h
To confirm your discount, contact Security Center's support team and provide the relevant workspace ID, region, and license information for each relevant license. ### My subscription has Azure Defender for servers enabled, do I pay for not-running servers?
-No. When you enable [Azure Defender for servers](defender-for-servers-introduction.md) on a subscription, you won't be charged for any servers that are in the "deallocated" state while they're in that state.
+No. When you enable [Azure Defender for servers](defender-for-servers-introduction.md) on a subscription, you won't be charged for any machines that are in the deallocated power state while they're in that state. Machines are billed according to their power state as shown in the following table:
+
+| State | Description | Instance usage billed |
+|--|--|--|
+| Starting | VM is starting up. | Not billed |
+| Running | Normal working state for a VM | Billed |
+| Stopping | This is a transitional state. When completed, it will show as Stopped. | Billed |
+| Stopped | The VM has been shut down from within the guest OS or using the PowerOff APIs. Hardware is still allocated to the VM and it remains on the host. | Billed (1) |
+| Deallocating | Transitional state. When completed, the VM will show as Deallocated. | Not billed (1) |
+| Deallocated | The VM has been stopped successfully and removed from the host. | Not billed |
+
+(1) Some Azure resources, such as Disks and Networking, incur charges. Software licenses on the instance do not incur charges.
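The billing rule in the table above can be sketched as a small helper. The names here are hypothetical for illustration, not part of any Azure API:

```python
# Power states for which Azure Defender for servers bills instance usage,
# per the table above. Illustrative only.
BILLED_STATES = {"running", "stopping", "stopped"}

def is_billed(power_state: str) -> bool:
    """Return True if a VM in this power state incurs instance usage billing."""
    return power_state.strip().lower() in BILLED_STATES

print(is_billed("Running"))      # True
print(is_billed("Deallocated"))  # False
```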
:::image type="content" source="media/security-center-pricing/deallocated-virtual-machines.png" alt-text="Azure Virtual Machines showing a deallocated machine":::
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-ip-filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-ip-filtering.md
@@ -2,7 +2,7 @@
Title: Configure IP firewall rules for Azure Service Bus description: How to use Firewall Rules to allow connections from specific IP addresses to Azure Service Bus. Previously updated : 06/23/2020 Last updated : 02/12/2021 # Allow access to Azure Service Bus namespace from specific IP addresses or ranges
@@ -32,7 +32,8 @@ This section shows you how to use the Azure portal to create IP firewall rules f
> [!NOTE] > You see the **Networking** tab only for **premium** namespaces.
- By default, the **Selected networks** option is selected. If you don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key).
+ >[!WARNING]
+ > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key).
:::image type="content" source="./media/service-bus-ip-filtering/default-networking-page.png" alt-text="Networking page - default" lightbox="./media/service-bus-ip-filtering/default-networking-page.png":::
@@ -56,29 +57,12 @@ This section shows you how to use the Azure portal to create IP firewall rules f
[!INCLUDE [service-bus-trusted-services](../../includes/service-bus-trusted-services.md)] ## Use Resource Manager template
-This section has a sample Azure Resource Manager template that creates a virtual network and a firewall rule.
+This section has a sample Azure Resource Manager template that adds a virtual network and a firewall rule to an existing Service Bus namespace.
+**ipMask** is a single IPv4 address or a block of IP addresses in CIDR notation. For example, in CIDR notation 70.37.104.0/24 represents the 256 IPv4 addresses from 70.37.104.0 to 70.37.104.255, with 24 indicating the number of significant prefix bits for the range.
-The following Resource Manager template enables adding a virtual network rule to an existing Service Bus namespace.
+When adding virtual network or firewall rules, set the value of `defaultAction` to `Deny`.
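The CIDR arithmetic described above can be verified with Python's standard `ipaddress` module (illustrative, unrelated to the template itself):

```python
import ipaddress

# 70.37.104.0/24: 24 significant prefix bits, 256 addresses in the block.
block = ipaddress.ip_network("70.37.104.0/24")
print(block.num_addresses)  # 256
print(block[0])             # 70.37.104.0
print(block[-1])            # 70.37.104.255
print(block.prefixlen)      # 24
```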
-Template parameters:
--- **ipMask** is a single IPv4 address or a block of IP addresses in CIDR notation. For example, in CIDR notation 70.37.104.0/24 represents the 256 IPv4 addresses from 70.37.104.0 to 70.37.104.255, with 24 indicating the number of significant prefix bits for the range.-
-> [!NOTE]
-> While there are no deny rules possible, the Azure Resource Manager template has the default action set to **"Allow"** which doesn't restrict connections.
-> When making Virtual Network or Firewalls rules, we must change the
-> ***"defaultAction"***
->
-> from
-> ```json
-> "defaultAction": "Allow"
-> ```
-> to
-> ```json
-> "defaultAction": "Deny"
-> ```
->
```json {
@@ -144,6 +128,10 @@ Template parameters:
To deploy the template, follow the instructions for [Azure Resource Manager][lnk-deploy].
+> [!IMPORTANT]
+> If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
++ ## Next steps For constraining access to Service Bus to Azure virtual networks, see the following link:
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-java-how-to-use-queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-java-how-to-use-queues.md
@@ -3,7 +3,7 @@ Title: Use Azure Service Bus queues with Java (azure-messaging-servicebus)
description: In this tutorial, you learn how to use Java to send messages to and receive messages from an Azure Service Bus queue. You use the new azure-messaging-servicebus package. ms.devlang: Java Previously updated : 11/09/2020 Last updated : 02/13/2021
@@ -27,14 +27,41 @@ In this section, you'll create a Java console project, and add code to send mess
Create a Java project using Eclipse or a tool of your choice. ### Configure your application to use Service Bus
-Add a reference to Azure Service Bus library. The Java client library for Service Bus is available in the [Maven Central Repository](https://search.maven.org/search?q=a:azure-messaging-servicebus). You can reference this library using the following dependency declaration inside your Maven project file:
+Add references to Azure Core and Azure Service Bus libraries.
+
+If you are using Eclipse and created a Java console application, convert your Java project to a Maven project: right-click the project in the **Package Explorer** window, then select **Configure** -> **Convert to Maven project**. Then, add dependencies to these two libraries as shown in the following example.
```xml
-<dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-messaging-servicebus</artifactId>
- <version>7.0.0</version>
-</dependency>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>org.myorg.sbusquickstarts</groupId>
+ <artifactId>sbustopicqs</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <build>
+ <sourceDirectory>src</sourceDirectory>
+ <plugins>
+ <plugin>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <version>3.8.1</version>
+ <configuration>
+ <release>15</release>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-core</artifactId>
+ <version>1.13.0</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ <version>7.0.2</version>
+ </dependency>
+ </dependencies>
+</project>
``` ### Add code to send messages to the queue
@@ -42,9 +69,9 @@ Add a reference to Azure Service Bus library. The Java client library for Servic
```java import com.azure.messaging.servicebus.*;
- import com.azure.messaging.servicebus.models.*;
+
+ import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
- import java.util.function.Consumer;
import java.util.Arrays; import java.util.List; ```
@@ -89,7 +116,7 @@ Add a reference to Azure Service Bus library. The Java client library for Servic
``` 1. Add a method named `sendMessageBatch` method to send messages to the queue you created. This method creates a `ServiceBusSenderClient` for the queue, invokes the `createMessages` method to get the list of messages, prepares one or more batches, and sends the batches to the queue.
-```java
+ ```java
static void sendMessageBatch() { // create a Service Bus Sender client for the queue
@@ -134,39 +161,29 @@ Add a reference to Azure Service Bus library. The Java client library for Servic
//close the client senderClient.close(); }
-```
+ ```
## Receive messages from a queue In this section, you'll add code to retrieve messages from the queue. 1. Add a method named `receiveMessages` to receive messages from the queue. This method creates a `ServiceBusProcessorClient` for the queue by specifying a handler for processing messages and another one for handling errors. Then, it starts the processor, waits for few seconds, prints the messages that are received, and then stops and closes the processor.
+ > [!IMPORTANT]
+ > Replace `QueueTest` in `QueueTest::processMessage` in the code with the name of your class.
+ ```java // handles received messages static void receiveMessages() throws InterruptedException {
- // consumer that processes a single message received from Service Bus
- Consumer<ServiceBusReceivedMessageContext> messageProcessor = context -> {
- ServiceBusReceivedMessage message = context.getMessage();
- System.out.println("Received message: " + message.getBody().toString());
- };
+ CountDownLatch countdownLatch = new CountDownLatch(1);
- // handles any errors that occur when receiving messages
- Consumer<Throwable> errorHandler = throwable -> {
- System.out.println("Error when receiving messages: " + throwable.getMessage());
- if (throwable instanceof ServiceBusReceiverException) {
- ServiceBusReceiverException serviceBusReceiverException = (ServiceBusReceiverException) throwable;
- System.out.println("Error source: " + serviceBusReceiverException.getErrorSource());
- }
- };
-
- // create an instance of the processor through the ServiceBusClientBuilder
+ // Create an instance of the processor through the ServiceBusClientBuilder
ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder() .connectionString(connectionString) .processor() .queueName(queueName)
- .processMessage(messageProcessor)
- .processError(errorHandler)
+ .processMessage(QueueTest::processMessage)
+ .processError(context -> processError(context, countdownLatch))
.buildProcessorClient(); System.out.println("Starting the processor");
@@ -175,8 +192,54 @@ In this section, you'll add code to retrieve messages from the queue.
TimeUnit.SECONDS.sleep(10); System.out.println("Stopping and closing the processor"); processorClient.close();
+ }
+ ```
+2. Add the `processMessage` method to process a message received from the Service Bus subscription.
+
+ ```java
+ private static void processMessage(ServiceBusReceivedMessageContext context) {
+ ServiceBusReceivedMessage message = context.getMessage();
+ System.out.printf("Processing message. Session: %s, Sequence #: %s. Contents: %s%n", message.getMessageId(),
+ message.getSequenceNumber(), message.getBody());
} ```
+3. Add the `processError` method to handle error messages.
+
+ ```java
+ private static void processError(ServiceBusErrorContext context, CountDownLatch countdownLatch) {
+ System.out.printf("Error when receiving messages from namespace: '%s'. Entity: '%s'%n",
+ context.getFullyQualifiedNamespace(), context.getEntityPath());
+
+ if (!(context.getException() instanceof ServiceBusException)) {
+ System.out.printf("Non-ServiceBusException occurred: %s%n", context.getException());
+ return;
+ }
+
+ ServiceBusException exception = (ServiceBusException) context.getException();
+ ServiceBusFailureReason reason = exception.getReason();
+
+ if (reason == ServiceBusFailureReason.MESSAGING_ENTITY_DISABLED
+ || reason == ServiceBusFailureReason.MESSAGING_ENTITY_NOT_FOUND
+ || reason == ServiceBusFailureReason.UNAUTHORIZED) {
+ System.out.printf("An unrecoverable error occurred. Stopping processing with reason %s: %s%n",
+ reason, exception.getMessage());
+
+ countdownLatch.countDown();
+ } else if (reason == ServiceBusFailureReason.MESSAGE_LOCK_LOST) {
+ System.out.printf("Message lock lost for message: %s%n", context.getException());
+ } else if (reason == ServiceBusFailureReason.SERVICE_BUSY) {
+ try {
+ // Choosing an arbitrary amount of time to wait until trying again.
+ TimeUnit.SECONDS.sleep(1);
+ } catch (InterruptedException e) {
+ System.err.println("Unable to sleep for period of time");
+ }
+ } else {
+ System.out.printf("Error source %s, reason %s, message: %s%n", context.getErrorSource(),
+ reason, context.getException());
+ }
+ }
+ ```
2. Update the `main` method to invoke `sendMessage`, `sendMessageBatch`, and `receiveMessages` methods and to throw `InterruptedException`. ```java
@@ -194,10 +257,10 @@ When you run the application, you see the following messages in the console wind
Sent a single message to the queue: myqueue Sent a batch of messages to the queue: myqueue Starting the processor
-Received message: Hello, World!
-Received message: First message in the batch
-Received message: Second message in the batch
-Received message: Three message in the batch
+Processing message. Session: 88d961dd801f449e9c3e0f8a5393a527, Sequence #: 1. Contents: Hello, World!
+Processing message. Session: e90c8d9039ce403bbe1d0ec7038033a0, Sequence #: 2. Contents: First message
+Processing message. Session: 311a216a560c47d184f9831984e6ac1d, Sequence #: 3. Contents: Second message
+Processing message. Session: f9a871be07414baf9505f2c3d466c4ab, Sequence #: 4. Contents: Third message
Stopping and closing the processor ```
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions.md
@@ -3,7 +3,7 @@ Title: Use Azure Service Bus topics and subscriptions with Java (azure-messaging
description: In this quickstart, you write Java code using the azure-messaging-servicebus package to send messages to an Azure Service Bus topic and then receive messages from subscriptions to that topic. ms.devlang: Java Previously updated : 11/09/2020 Last updated : 02/13/2021 # Send messages to an Azure Service Bus topic and receive messages from subscriptions to the topic (Java)
@@ -26,14 +26,41 @@ In this section, you'll create a Java console project, and add code to send mess
Create a Java project using Eclipse or a tool of your choice. ### Configure your application to use Service Bus
-Add a reference to Azure Service Bus library. The Java client library for Service Bus is available in the [Maven Central Repository](https://search.maven.org/search?q=a:azure-messaging-servicebus). You can reference this library using the following dependency declaration inside your Maven project file:
+Add references to Azure Core and Azure Service Bus libraries.
+
+If you are using Eclipse and created a Java console application, convert your Java project to a Maven project: right-click the project in the **Package Explorer** window, then select **Configure** -> **Convert to Maven project**. Then, add dependencies to these two libraries as shown in the following example.
```xml
-<dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-messaging-servicebus</artifactId>
- <version>7.0.0</version>
-</dependency>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>org.myorg.sbusquickstarts</groupId>
+ <artifactId>sbustopicqs</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <build>
+ <sourceDirectory>src</sourceDirectory>
+ <plugins>
+ <plugin>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <version>3.8.1</version>
+ <configuration>
+ <release>15</release>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-core</artifactId>
+ <version>1.13.0</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ <version>7.0.2</version>
+ </dependency>
+ </dependencies>
+</project>
``` ### Add code to send messages to the topic
@@ -41,9 +68,9 @@ Add a reference to Azure Service Bus library. The Java client library for Servic
```java import com.azure.messaging.servicebus.*;
- import com.azure.messaging.servicebus.models.*;
+
+ import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
- import java.util.function.Consumer;
import java.util.Arrays; import java.util.List; ```
@@ -59,7 +86,7 @@ Add a reference to Azure Service Bus library. The Java client library for Servic
3. Add a method named `sendMessage` in the class to send one message to the topic. ```java
- static void sendMessage()
+ static void sendMessage()
{ // create a Service Bus Sender client for the queue ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
@@ -89,7 +116,7 @@ Add a reference to Azure Service Bus library. The Java client library for Servic
``` 1. Add a method named `sendMessageBatch` method to send messages to the topic you created. This method creates a `ServiceBusSenderClient` for the topic, invokes the `createMessages` method to get the list of messages, prepares one or more batches, and sends the batches to the topic.
-```java
+ ```java
static void sendMessageBatch() { // create a Service Bus Sender client for the topic
@@ -134,31 +161,21 @@ Add a reference to Azure Service Bus library. The Java client library for Servic
//close the client senderClient.close(); }
-```
+ ```
## Receive messages from a subscription In this section, you'll add code to retrieve messages from a subscription to the topic. 1. Add a method named `receiveMessages` to receive messages from the subscription. This method creates a `ServiceBusProcessorClient` for the subscription by specifying a handler for processing messages and another one for handling errors. Then, it starts the processor, waits for few seconds, prints the messages that are received, and then stops and closes the processor.
+ > [!IMPORTANT]
+ > Replace `ServiceBusTopicTest` in `ServiceBusTopicTest::processMessage` in the code with the name of your class.
+ ```java
// handles received messages
static void receiveMessages() throws InterruptedException {
- // Consumer that processes a single message received from Service Bus
- Consumer<ServiceBusReceivedMessageContext> messageProcessor = context -> {
- ServiceBusReceivedMessage message = context.getMessage();
- System.out.println("Received message: " + message.getBody().toString() + " from the subscription: " + subName);
- };
-
- // Consumer that handles any errors that occur when receiving messages
- Consumer<Throwable> errorHandler = throwable -> {
- System.out.println("Error when receiving messages: " + throwable.getMessage());
- if (throwable instanceof ServiceBusReceiverException) {
- ServiceBusReceiverException serviceBusReceiverException = (ServiceBusReceiverException) throwable;
- System.out.println("Error source: " + serviceBusReceiverException.getErrorSource());
- }
- };
+ CountDownLatch countdownLatch = new CountDownLatch(1);
        // Create an instance of the processor through the ServiceBusClientBuilder
        ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
@@ -166,8 +183,8 @@ In this section, you'll add code to retrieve messages from a subscription to the
            .processor()
            .topicName(topicName)
            .subscriptionName(subName)
- .processMessage(messageProcessor)
- .processError(errorHandler)
+ .processMessage(ServiceBusTopicTest::processMessage)
+ .processError(context -> processError(context, countdownLatch))
            .buildProcessorClient();

        System.out.println("Starting the processor");
@@ -176,9 +193,55 @@ In this section, you'll add code to retrieve messages from a subscription to the
        TimeUnit.SECONDS.sleep(10);
        System.out.println("Stopping and closing the processor");
        processorClient.close();
- }
+ }
+ ```
+2. Add the `processMessage` method to process a message received from the Service Bus subscription.
+
+ ```java
+ private static void processMessage(ServiceBusReceivedMessageContext context) {
+ ServiceBusReceivedMessage message = context.getMessage();
+ System.out.printf("Processing message. Session: %s, Sequence #: %s. Contents: %s%n", message.getMessageId(),
+ message.getSequenceNumber(), message.getBody());
+ }
+ ```
+3. Add the `processError` method to handle error messages.
+
+ ```java
+ private static void processError(ServiceBusErrorContext context, CountDownLatch countdownLatch) {
+ System.out.printf("Error when receiving messages from namespace: '%s'. Entity: '%s'%n",
+ context.getFullyQualifiedNamespace(), context.getEntityPath());
+
+ if (!(context.getException() instanceof ServiceBusException)) {
+ System.out.printf("Non-ServiceBusException occurred: %s%n", context.getException());
+ return;
+ }
+
+ ServiceBusException exception = (ServiceBusException) context.getException();
+ ServiceBusFailureReason reason = exception.getReason();
+
+ if (reason == ServiceBusFailureReason.MESSAGING_ENTITY_DISABLED
+ || reason == ServiceBusFailureReason.MESSAGING_ENTITY_NOT_FOUND
+ || reason == ServiceBusFailureReason.UNAUTHORIZED) {
+ System.out.printf("An unrecoverable error occurred. Stopping processing with reason %s: %s%n",
+ reason, exception.getMessage());
+
+ countdownLatch.countDown();
+ } else if (reason == ServiceBusFailureReason.MESSAGE_LOCK_LOST) {
+ System.out.printf("Message lock lost for message: %s%n", context.getException());
+ } else if (reason == ServiceBusFailureReason.SERVICE_BUSY) {
+ try {
+ // Choosing an arbitrary amount of time to wait until trying again.
+ TimeUnit.SECONDS.sleep(1);
+ } catch (InterruptedException e) {
+ System.err.println("Unable to sleep for period of time");
+ }
+ } else {
+ System.out.printf("Error source %s, reason %s, message: %s%n", context.getErrorSource(),
+ reason, context.getException());
+ }
+ }
```
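The `processError` method above counts down `countdownLatch` when an unrecoverable error occurs, while the sample's `receiveMessages` simply sleeps for a fixed period. As a minimal, pure-JDK sketch (not part of the article's sample; the class and variable names are illustrative), a caller can instead wait on the latch with a timeout so it reacts as soon as the error handler signals:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        // Plays the role of countdownLatch in processError: counted down
        // once when an unrecoverable error is reported.
        CountDownLatch shutdownSignal = new CountDownLatch(1);

        // Simulates the processor's error handler reporting a fatal error.
        new Thread(shutdownSignal::countDown).start();

        // Wait up to 10 seconds for the signal instead of sleeping blindly;
        // await returns true as soon as countDown() has been called.
        boolean signaled = shutdownSignal.await(10, TimeUnit.SECONDS);
        System.out.println("Shutdown signaled: " + signaled);
    }
}
```

Because the latch is signaled almost immediately here, this prints `Shutdown signaled: true` without waiting out the full timeout.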
-2. Update the `main` method to invoke `sendMessage`, `sendMessageBatch`, and `receiveMessages` methods and to throw `InterruptedException`.
+1. Update the `main` method to invoke `sendMessage`, `sendMessageBatch`, and `receiveMessages` methods and to throw `InterruptedException`.
```java
public static void main(String[] args) throws InterruptedException {
@@ -192,12 +255,13 @@ In this section, you'll add code to retrieve messages from a subscription to the
Run the program to see output similar to the following:

```console
+Sent a single message to the topic: mytopic
Sent a batch of messages to the topic: mytopic
Starting the processor
-Received message: First message from the subscription: mysub
-Received message: Second message from the subscription: mysub
-Received message: Third message from the subscription: mysub
-Stopping and closing the processor
+Processing message. Session: e0102f5fbaf646988a2f4b65f7d32385, Sequence #: 1. Contents: Hello, World!
+Processing message. Session: 3e991e232ca248f2bc332caa8034bed9, Sequence #: 2. Contents: First message
+Processing message. Session: 56d3a9ea7df446f8a2944ee72cca4ea0, Sequence #: 3. Contents: Second message
+Processing message. Session: 7bd3bd3e966a40ebbc9b29b082da14bb, Sequence #: 4. Contents: Third message
```

On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
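The `Processing message...` lines in the sample output come straight from the format string used in `processMessage`. As a quick, self-contained check (the session and contents values below are copied from the sample output, not taken from a live namespace):

```java
public class OutputFormatDemo {
    public static void main(String[] args) {
        // Same format string as processMessage; the placeholder values echo
        // the first line of the sample console output.
        String line = String.format(
            "Processing message. Session: %s, Sequence #: %s. Contents: %s%n",
            "e0102f5fbaf646988a2f4b65f7d32385", 1L, "Hello, World!");
        System.out.print(line);
        // prints: Processing message. Session: e0102f5fbaf646988a2f4b65f7d32385, Sequence #: 1. Contents: Hello, World!
    }
}
```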
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-metrics-azure-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-metrics-azure-monitor.md
@@ -2,7 +2,7 @@
Title: Azure Service Bus metrics in Azure Monitor| Microsoft Docs description: This article explains how to use Azure Monitor to monitor Service Bus entities (queues, topics, and subscriptions). Previously updated : 11/18/2020 Last updated : 02/12/2021 # Azure Service Bus metrics in Azure Monitor
@@ -69,7 +69,7 @@ The following two types of errors are classified as user errors:
| Metric Name | Description |
| - | -- |
-|Incoming Messages|The number of events or messages sent to Service Bus over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+|Incoming Messages|The number of events or messages sent to Service Bus over a specified period. This metric doesn't include messages that are auto forwarded.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
|Outgoing Messages|The number of events or messages received from Service Bus over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
| Messages| Count of messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
| Active Messages| Count of active messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-service-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-service-endpoints.md
@@ -2,7 +2,7 @@
Title: Configure virtual network service endpoints for Azure Service Bus description: This article provides information on how to add a Microsoft.ServiceBus service endpoint to a virtual network. Previously updated : 06/23/2020 Last updated : 02/12/2021
@@ -52,7 +52,8 @@ This section shows you how to use Azure portal to add a virtual network service
> [!NOTE]
> You see the **Networking** tab only for **premium** namespaces.
- By default, the **Selected networks** option is selected. If you don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key).
+ >[!WARNING]
+ > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over the public internet (using the access key).
:::image type="content" source="./media/service-bus-ip-filtering/default-networking-page.png" alt-text="Networking page - default" lightbox="./media/service-bus-ip-filtering/default-networking-page.png":::
@@ -83,28 +84,11 @@ This section shows you how to use Azure portal to add a virtual network service
[!INCLUDE [service-bus-trusted-services](../../includes/service-bus-trusted-services.md)]

## Use Resource Manager template
-The following Resource Manager template enables adding a virtual network rule to an existing Service Bus
-namespace.
+The following sample Resource Manager template adds a virtual network rule to an existing Service Bus namespace. For the network rule, it specifies the ID of a subnet in a virtual network.
-Template parameters:
+The ID is a fully qualified Resource Manager path for the virtual network subnet. For example, `/subscriptions/{id}/resourceGroups/{rg}/providers/Microsoft.Network/virtualNetworks/{vnet}/subnets/default` for the default subnet of a virtual network.
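The subnet ID is just a string assembled from the subscription, resource group, virtual network, and subnet names. A small illustrative helper (hypothetical, not part of the template) makes the shape explicit:

```java
public class SubnetIdDemo {
    // Builds the fully qualified Resource Manager path for a subnet.
    // All argument values used below are placeholders, not real resources.
    static String subnetId(String subscription, String resourceGroup,
                           String vnet, String subnet) {
        return String.format(
            "/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Network/virtualNetworks/%s/subnets/%s",
            subscription, resourceGroup, vnet, subnet);
    }

    public static void main(String[] args) {
        System.out.println(subnetId("{id}", "{rg}", "{vnet}", "default"));
        // prints: /subscriptions/{id}/resourceGroups/{rg}/providers/Microsoft.Network/virtualNetworks/{vnet}/subnets/default
    }
}
```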
-* **namespaceName**: Service Bus namespace.
-* **virtualNetworkingSubnetId**: Fully qualified Resource Manager path for the virtual network subnet; for example, `/subscriptions/{id}/resourceGroups/{rg}/providers/Microsoft.Network/virtualNetworks/{vnet}/subnets/default` for the default subnet of a virtual network.
-
-> [!NOTE]
-> While there are no deny rules possible, the Azure Resource Manager template has the default action set to **"Allow"** which doesn't restrict connections.
-> When making Virtual Network or Firewalls rules, we must change the
-> ***"defaultAction"***
->
-> from
-> ```json
-> "defaultAction": "Allow"
-> ```
-> to
-> ```json
-> "defaultAction": "Deny"
-> ```
->
+When adding virtual network or firewall rules, set the value of `defaultAction` to `Deny`.
Template:
@@ -209,6 +193,9 @@ Template:
To deploy the template, follow the instructions for [Azure Resource Manager][lnk-deploy].
+> [!IMPORTANT]
+> If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
+
## Next steps

For more information about virtual networks, see the following links:
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-how-to-enable-replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
@@ -51,7 +51,7 @@ Enable replication. This procedure assumes that the primary Azure region is East
- **Target storage accounts (source VM doesn't use managed disks)**: By default, Site Recovery creates a new target storage account mimicking your source VM storage configuration. If the storage account already exists, it's reused.
- **Replica-managed disks (source VM uses managed disks)**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or Premium) as the source VM's managed disk.
- **Cache storage accounts**: Site Recovery needs an extra storage account, called the cache storage account, in the source region. All the changes happening on the source VMs are tracked and sent to the cache storage account before being replicated to the target location. This storage account should be Standard.
- - **Target availability sets**: By default, Site Recovery creates a new availability set in the target region with the "Azure Site Recovery" suffix in the name, for VMs that are part of an availability set in the source region. If the availability set created by Site Recovery already exists, it is reused.
+ - **Target availability sets**: By default, Site Recovery creates a new availability set in the target region with the "asr" suffix in the name, for VMs that are part of an availability set in the source region. If the availability set created by Site Recovery already exists, it is reused.
>[!NOTE]
>While configuring the target availability sets, please configure different availability sets for differently sized VMs.
>
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-data-scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-data-scenarios.md
@@ -113,7 +113,7 @@ Here's a list of tools that you can use to run data analysis jobs on data that i
|Tool | Guidance |
||--|
|Azure HDInsight | [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md) |
-|Azure Databricks | [Azure Data Lake Storage Gen2](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake-gen2.html)<br><br>[Quickstart: Analyze data in Azure Data Lake Storage Gen2 by using Azure Databricks](./data-lake-storage-quickstart-create-databricks-account.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)<br><br>[Tutorial: Extract, transform, and load data by using Azure Databricks](/azure/databricks/scenarios/databricks-extract-load-sql-data-warehouse?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
+|Azure Databricks | [Azure Data Lake Storage Gen2](/azure/databricks/dat?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)<br><br>[Tutorial: Extract, transform, and load data by using Azure Databricks](/azure/databricks/scenarios/databricks-extract-load-sql-data-warehouse?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
## Visualize the data
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-events.md
@@ -15,7 +15,7 @@
This tutorial shows you how to handle events in a storage account that has a hierarchical namespace.
-You'll build a small solution that enables a user to populate a Databricks Delta table by uploading a comma-separated values (csv) file that describes a sales order. You'll build this solution by connecting together an Event Grid subscription, an Azure Function, and a [Job](https://docs.azuredatabricks.net/user-guide/jobs.html) in Azure Databricks.
+You'll build a small solution that enables a user to populate a Databricks Delta table by uploading a comma-separated values (csv) file that describes a sales order. You'll build this solution by connecting together an Event Grid subscription, an Azure Function, and a [Job](/azure/databricks/jobs) in Azure Databricks.
In this tutorial, you will:
@@ -28,7 +28,7 @@ We'll build this solution in reverse order, starting with the Azure Databricks w
## Prerequisites
-* If you donΓÇÖt have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
* Create a storage account that has a hierarchical namespace (Azure Data Lake Storage Gen2). This tutorial uses a storage account named `contosoorders`. Make sure that your user account has the [Storage Blob Data Contributor role](../common/storage-auth-aad-rbac-portal.md) assigned to it.
@@ -111,7 +111,7 @@ In this section, you create an Azure Databricks workspace using the Azure portal
4. Select **Create cluster**. Once the cluster is running, you can attach notebooks to the cluster and run Spark jobs.
-For more information on creating clusters, see [Create a Spark cluster in Azure Databricks](https://docs.azuredatabricks.net/user-guide/clusters/create.html).
+For more information on creating clusters, see [Create a Spark cluster in Azure Databricks](/azure/databricks/clusters/create).
### Create a notebook
@@ -148,7 +148,7 @@ For more information on creating clusters, see [Create a Spark cluster in Azure
This code creates a widget named **source_file**. Later, you'll create an Azure Function that calls this code and passes a file path to that widget. This code also authenticates your service principal with the storage account, and creates some variables that you'll use in other cells.

> [!NOTE]
- > In a production setting, consider storing your authentication key in Azure Databricks. Then, add a look up key to your code block instead of the authentication key. <br><br>For example, instead of using this line of code: `spark.conf.set("fs.azure.account.oauth2.client.secret", "<password>")`, you would use the following line of code: `spark.conf.set("fs.azure.account.oauth2.client.secret", dbutils.secrets.get(scope = "<scope-name>", key = "<key-name-for-service-credential>"))`. <br><br>After you've completed this tutorial, see the [Azure Data Lake Storage Gen2](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake-gen2.html) article on the Azure Databricks Website to see examples of this approach.
+ > In a production setting, consider storing your authentication key in Azure Databricks. Then, add a look up key to your code block instead of the authentication key. <br><br>For example, instead of using this line of code: `spark.conf.set("fs.azure.account.oauth2.client.secret", "<password>")`, you would use the following line of code: `spark.conf.set("fs.azure.account.oauth2.client.secret", dbutils.secrets.get(scope = "<scope-name>", key = "<key-name-for-service-credential>"))`. <br><br>After you've completed this tutorial, see the [Azure Data Lake Storage Gen2](/azure/databricks/data/data-sources/azure/azure-datalake-gen2) article on the Azure Databricks Website to see examples of this approach.
2. Press the **SHIFT + ENTER** keys to run the code in this block.
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-quickstart-create-databricks-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-quickstart-create-databricks-account.md
@@ -71,7 +71,7 @@ In this section, you create an Azure Databricks workspace using the Azure portal
4. Select **Create cluster**. Once the cluster is running, you can attach notebooks to the cluster and run Spark jobs.
-For more information on creating clusters, see [Create a Spark cluster in Azure Databricks](https://docs.azuredatabricks.net/user-guide/clusters/create.html).
+For more information on creating clusters, see [Create a Spark cluster in Azure Databricks](/azure/databricks/clusters/create).
## Create notebook
@@ -144,7 +144,7 @@ Perform the following tasks to run a Spark SQL job on the data.
Once the command successfully completes, you have all the data from the JSON file as a table in Databricks cluster.
- The `%sql` language magic command enables you to run a SQL code from the notebook, even if the notebook is of another type. For more information, see [Mixing languages in a notebook](https://docs.azuredatabricks.net/user-guide/notebooks/index.html#mixing-languages-in-a-notebook).
+ The `%sql` language magic command enables you to run a SQL code from the notebook, even if the notebook is of another type. For more information, see [Mixing languages in a notebook](/azure/databricks/notebooks/notebooks-use#mix-languages).
2. Let's look at a snapshot of the sample JSON data to better understand the query that you run. Paste the following snippet in the code cell and press **SHIFT + ENTER**.
@@ -195,6 +195,6 @@ Advance to the next article to learn how to perform an ETL operation (extract, t
> [!div class="nextstepaction"] >[Extract, transform, and load data using Azure Databricks](/azure/databricks/scenarios/databricks-extract-load-sql-data-warehouse). -- To learn how to import data from other data sources into Azure Databricks, see [Spark data sources](https://docs.azuredatabricks.net/spark/latest/data-sources/index.html).
+- To learn how to import data from other data sources into Azure Databricks, see [Spark data sources](/azure/databricks/data/data-sources/).
-- To learn about other ways to access Azure Data Lake Storage Gen2 from an Azure Databricks workspace, see [Azure Data Lake Storage Gen2](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake-gen2.html).
+- To learn about other ways to access Azure Data Lake Storage Gen2 from an Azure Databricks workspace, see [Azure Data Lake Storage Gen2](/azure/databricks/data/data-sources/azure/azure-datalake-gen2).
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-supported-azure-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-supported-azure-services.md
@@ -24,7 +24,7 @@ This table lists the Azure services that you can use with Azure Data Lake Storag
|Azure service |Support level |Azure AD |Shared Key| Related articles |
||-||||
|Azure Data Factory|Generally available|Yes|Yes|[Load data into Azure Data Lake Storage Gen2 with Azure Data Factory](../../data-factory/load-azure-data-lake-storage-gen2.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
-|Azure Databricks|Generally available|Yes|Yes|[Use with Azure Databricks](https://docs.azuredatabricks.net/dat)|
+|Azure Databricks|Generally available|Yes|Yes|[Use with Azure Databricks](/azure/databricks/dat)|
|Azure Event Hub|Generally available|No|Yes|[Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage](../../event-hubs/event-hubs-capture-overview.md)|
|Azure Event Grid|Generally available|Yes|Yes|[Tutorial: Implement the data lake capture pattern to update a Databricks Delta table](data-lake-storage-events.md)|
|Azure Logic Apps|Generally available|No|Yes|[Overview - What is Azure Logic Apps?](../../logic-apps/logic-apps-overview.md)|
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-supported-blob-storage-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-supported-blob-storage-features.md
@@ -5,9 +5,8 @@
Previously updated : 11/12/2020 Last updated : 02/11/2021 - # Blob storage features available in Azure Data Lake Storage Gen2
@@ -40,18 +39,19 @@ The following table shows how each Blob storage feature is supported with Data L
|Immutable storage|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|[Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md)|
|Container soft delete|Preview|Preview|[Soft delete for containers (preview)](soft-delete-container-overview.md)|
|Azure Storage inventory|Preview|Preview|[Use Azure Storage inventory to manage blob data (preview)](blob-inventory.md)|
+|Custom domains|Preview<div role="complementary" aria-labelledby="preview-form-2"><sup>2</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form-2"><sup>2</sup></div>|[Map a custom domain to an Azure Blob storage endpoint](storage-custom-domain-name.md)|
|Blob soft delete|Not yet supported|Not yet supported|[Soft delete for blobs](./soft-delete-blob-overview.md)|
|Blobfuse|Generally available|Generally available|[How to mount Blob storage as a file system with blobfuse](storage-how-to-mount-container-linux.md)|
|Anonymous public access |Generally available|Generally available| See [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).|
|Customer-managed account failover|Not yet supported|Not yet supported|[Disaster recovery and account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
|Customer-provided keys|Not yet supported|Not yet supported|[Provide an encryption key on a request to Blob storage](encryption-customer-provided-keys.md)|
-|Custom domains|Not yet supported|Not yet supported|[Map a custom domain to an Azure Blob storage endpoint](storage-custom-domain-name.md)|
|Encryption scopes|Not yet supported|Not yet supported|[Create and manage encryption scopes (preview)](encryption-scope-manage.md)|
|Change feed|Not yet supported|Not yet supported|[Change feed support in Azure Blob storage](storage-blob-change-feed.md)|
|Object replication|Not yet supported|Not yet supported|[Configure object replication for block blobs](object-replication-configure.md)|
|Blob versioning|Not yet supported|Not yet supported|[Enable and manage blob versioning](versioning-enable.md)|

<div id="preview-form"><sup>1</sup>To use snapshots, immutable storage, or static websites with Data Lake Storage Gen2, you need to enroll in the preview by completing this <a href=https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2EUNXd_ZNJCq_eDwZGaF5VUOUc3NTNQSUdOTjgzVUlVT1pDTzU4WlRKRy4u>form</a>. </div>
+<div id="preview-form-2"><sup>2</sup>A custom domain name can map only to the blob service or static website endpoint. The Data Lake storage endpoint is not supported. </div>
## See also
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-use-hdfs-data-lake-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-use-hdfs-data-lake-storage.md
@@ -21,7 +21,7 @@ HDInsight provides access to the distributed container that is locally attached
For more information on HDFS CLI, see the [official documentation](https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-common/FileSystemShell.html) and the [HDFS Permissions Guide](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html) >[!NOTE]
->If you're using Azure Databricks instead of HDInsight, and you want to interact with your data by using a command line interface, you can use the Databricks CLI to interact with the Databricks file system. See [Databricks CLI](https://docs.azuredatabricks.net/user-guide/dev-tools/databricks-cli.html).
+>If you're using Azure Databricks instead of HDInsight, and you want to interact with your data by using a command line interface, you can use the Databricks CLI to interact with the Databricks file system. See [Databricks CLI](/azure/databricks/dev-tools/cli/).
## Use the HDFS CLI with an HDInsight Hadoop cluster on Linux
storage https://docs.microsoft.com/en-us/azure/storage/blobs/storage-custom-domain-name https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-custom-domain-name.md
@@ -5,7 +5,7 @@ description: Map a custom domain to a Blob Storage or web endpoint in an Azure s
Previously updated : 01/23/2020 Last updated : 02/12/2021
@@ -15,8 +15,6 @@
You can map a custom domain to a blob service endpoint or a [static website](storage-blob-static-website.md) endpoint.
-
> [!NOTE]
> This mapping works only for subdomains (for example: `www.contoso.com`). If you want your web endpoint to be available on the root domain (for example: `contoso.com`), then you'll have to use Azure CDN. For guidance, see the [Map a custom domain with HTTPS enabled](#enable-https) section of this article. Because you're going to that section of this article to enable the root domain of your custom domain, the step within that section for enabling HTTPS is optional.
@@ -56,8 +54,11 @@ The host name is the storage endpoint URL without the protocol identifier and th
2. In the menu pane, under **Settings**, select **Properties**.

3. Copy the value of the **Primary Blob Service Endpoint** or the **Primary static website endpoint** to a text file.
+
+ > [!NOTE]
+ > The Data Lake storage endpoint is not supported (for example, `https://mystorageaccount.dfs.core.windows.net/`).
-4. Remove the protocol identifier (*e.g.*, HTTPS) and the trailing slash from that string. The following table contains examples.
+4. Remove the protocol identifier (for example, `HTTPS`) and the trailing slash from that string. The following table contains examples.
| Type of endpoint | endpoint | host name |
||--|-|
@@ -70,7 +71,7 @@ The host name is the storage endpoint URL without the protocol identifier and th
#### Step 2: Create a canonical name (CNAME) record with your domain provider
-Create a CNAME record to point to your host name. A CNAME record is a type of DNS record that maps a source domain name to a destination domain name.
+Create a CNAME record to point to your host name. A CNAME record is a type of Domain Name System (DNS) record that maps a source domain name to a destination domain name.
1. Sign in to your domain registrar's website, and then go to the page for managing DNS setting.
@@ -90,9 +91,14 @@ Create a CNAME record to point to your host name. A CNAME record is a type of DN
#### Step 3: Register your custom domain with Azure
+##### [Portal](#tab/azure-portal)
+ 1. In the [Azure portal](https://portal.azure.com), go to your storage account.
-2. In the menu pane, under **Blob Service**, select **Custom domain**.
+2. In the menu pane, under **Blob Service**, select **Custom domain**.
+
+ > [!NOTE]
+ > This option does not appear in accounts that have the hierarchical namespace feature enabled. For those accounts, use either PowerShell or the Azure CLI to complete this step.
![custom domain option](./media/storage-custom-domain-name/custom-domain-button.png "custom domain")
@@ -106,24 +112,66 @@ Create a CNAME record to point to your host name. A CNAME record is a type of DN
After the CNAME record has propagated through the Domain Name Servers (DNS), and if your users have the appropriate permissions, they can view blob data by using the custom domain.
+##### [PowerShell](#tab/azure-powershell)
+
+Run the following PowerShell command:
+
+```powershell
+Set-AzStorageAccount -ResourceGroupName <resource-group-name> -Name <storage-account-name> -CustomDomainName <custom-domain-name> -UseSubDomain $false
+```
+
+- Replace the `<resource-group-name>` placeholder with the name of the resource group.
+
+- Replace the `<storage-account-name>` placeholder with the name of the storage account.
+
+- Replace the `<custom-domain-name>` placeholder with the name of your custom domain, including the subdomain.
+
+ For example, if your domain is *contoso.com* and your subdomain alias is *www*, enter `www.contoso.com`. If your subdomain is *photos*, enter `photos.contoso.com`.
+
+After the CNAME record has propagated through the Domain Name Servers (DNS), and if your users have the appropriate permissions, they can view blob data by using the custom domain.
+
+##### [Azure CLI](#tab/azure-cli)
+
+Run the following Azure CLI command:
+
+```azurecli
+az storage account update \
+ --resource-group <resource-group-name> \
+ --name <storage-account-name> \
+ --custom-domain <custom-domain-name> \
+ --use-subdomain false
+```
+
+- Replace the `<resource-group-name>` placeholder with the name of the resource group.
+
+- Replace the `<storage-account-name>` placeholder with the name of the storage account.
+
+- Replace the `<custom-domain-name>` placeholder with the name of your custom domain, including the subdomain.
+
+ For example, if your domain is *contoso.com* and your subdomain alias is *www*, enter `www.contoso.com`. If your subdomain is *photos*, enter `photos.contoso.com`.
+
+After the CNAME record has propagated through the Domain Name Servers (DNS), and if your users have the appropriate permissions, they can view blob data by using the custom domain.
+
+
+
#### Step 4: Test your custom domain

To confirm that your custom domain is mapped to your blob service endpoint, create a blob in a public container within your storage account. Then, in a web browser, access the blob by using a URI in the following format: `http://<subdomain.customdomain>/<mycontainer>/<myblob>`
-For example, to access a web form in the *myforms* container in the *photos.contoso.com* custom subdomain, you might use the following URI: `http://photos.contoso.com/myforms/applicationform.htm`
+For example, to access a web form in the `myforms` container in the *photos.contoso.com* custom subdomain, you might use the following URI: `http://photos.contoso.com/myforms/applicationform.htm`
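As a quick sanity check, the test URI can be assembled from its parts. A minimal sketch, using the hypothetical container and blob names from the example above:

```shell
# Hypothetical values from the example above; substitute your own.
subdomain="photos.contoso.com"   # <subdomain.customdomain>
container="myforms"              # <mycontainer>
blob="applicationform.htm"       # <myblob>

# Assemble the URI in the format http://<subdomain.customdomain>/<mycontainer>/<myblob>
uri="http://${subdomain}/${container}/${blob}"
echo "${uri}"
```

You could then request the blob, for example with `curl -I "$uri"`, to confirm that the mapping resolves.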
<a id="zero-down-time"></a>

### Map a custom domain with zero downtime

> [!NOTE]
-> If you are unconcerned that the domain is briefly unavailable to your users, then consider following the steps in the [Map a custom domain](#map-a-domain) section of this article. It's a simpler approach with fewer steps.
+> If you are unconcerned that the domain is briefly unavailable to your users, then consider using the steps in the [Map a custom domain](#map-a-domain) section of this article. It's a simpler approach with fewer steps.
If your domain currently supports an application with a service-level agreement (SLA) that requires zero downtime, then follow these steps to ensure that users can access your domain while the DNS mapping takes place.

:heavy_check_mark: Step 1: Get the host name of your storage endpoint.
-:heavy_check_mark: Step 2: Create a intermediary canonical name (CNAME) record with your domain provider.
+:heavy_check_mark: Step 2: Create an intermediary canonical name (CNAME) record with your domain provider.
:heavy_check_mark: Step 3: Pre-register the custom domain with Azure.
@@ -143,7 +191,10 @@ The host name is the storage endpoint URL without the protocol identifier and th
3. Copy the value of the **Primary Blob Service Endpoint** or the **Primary static website endpoint** to a text file.
-4. Remove the protocol identifier (*e.g.*, HTTPS) and the trailing slash from that string. The following table contains examples.
+ > [!NOTE]
+ > The Data Lake Storage endpoint (for example, `https://mystorageaccount.dfs.core.windows.net/`) is not supported.
+
+4. Remove the protocol identifier (for example, `https`) and the trailing slash from that string. The following table contains examples.
| Type of endpoint | endpoint | host name |
|--|--|--|
@@ -152,7 +203,7 @@ The host name is the storage endpoint URL without the protocol identifier and th
Set this value aside for later.
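The protocol identifier and trailing slash can be stripped with shell parameter expansion. A minimal sketch, assuming a hypothetical endpoint value copied from the portal:

```shell
# Hypothetical endpoint value copied from the portal.
endpoint="https://mystorageaccount.blob.core.windows.net/"

host="${endpoint#*://}"   # drop the protocol identifier (e.g., "https://")
host="${host%/}"          # drop the trailing slash
echo "${host}"
```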
-#### Step 2: Create a intermediary canonical name (CNAME) record with your domain provider
+#### Step 2: Create an intermediary canonical name (CNAME) record with your domain provider
Create a temporary CNAME record to point to your host name. A CNAME record is a type of DNS record that maps a source domain name to a destination domain name.
@@ -174,17 +225,18 @@ Create a temporary CNAME record to point to your host name. A CNAME record is a
Add the subdomain `asverify` to the host name. For example: `asverify.mystorageaccount.blob.core.windows.net`.
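In other words, both sides of the intermediary record carry the `asverify` subdomain. A sketch of how the source/target pair lines up, with hypothetical names:

```shell
# Hypothetical custom domain and storage host name.
customDomain="www.contoso.com"
hostName="mystorageaccount.blob.core.windows.net"

# The intermediary CNAME maps asverify.<custom-domain> to asverify.<host-name>.
cnameSource="asverify.${customDomain}"
cnameTarget="asverify.${hostName}"
echo "${cnameSource} CNAME ${cnameTarget}"
```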
-4. To register the custom domain, choose the **Save** button.
-
- If the registration is successful, the portal notifies you that your storage account was successfully updated. Your custom domain has been verified by Azure, but traffic to your domain is not yet being routed to your storage account.
-
#### Step 3: Pre-register your custom domain with Azure

When you pre-register your custom domain with Azure, you permit Azure to recognize your custom domain without having to modify the DNS record for the domain. That way, when you do modify the DNS record for the domain, it will be mapped to the blob endpoint with no downtime.
+##### [Portal](#tab/azure-portal)
+ 1. In the [Azure portal](https://portal.azure.com), go to your storage account.
-2. In the menu pane, under **Blob Service**, select **Custom domain**.
+2. In the menu pane, under **Blob Service**, select **Custom domain**.
+
+ > [!NOTE]
+ > This option does not appear in accounts that have the hierarchical namespace feature enabled. For those accounts, use either PowerShell or the Azure CLI to complete this step.
![custom domain option](./media/storage-custom-domain-name/custom-domain-button.png "custom domain")
@@ -198,7 +250,49 @@ When you pre-register your custom domain with Azure, you permit Azure to recogni
5. To register the custom domain, choose the **Save** button.
- After the CNAME record has propagated through the Domain Name Servers (DNS), and if your users have the appropriate permissions, they can view blob data by using the custom domain.
+ If the registration is successful, the portal notifies you that your storage account was successfully updated. Your custom domain has been verified by Azure, but traffic to your domain won't be routed to your storage account until you create a CNAME record with your domain provider. You'll do that in the next section.
+
+##### [PowerShell](#tab/azure-powershell)
+
+Run the following PowerShell command:
+
+```powershell
+Set-AzStorageAccount -ResourceGroupName <resource-group-name> -Name <storage-account-name> -CustomDomainName <custom-domain-name> -UseSubDomain $true
+```
+
+- Replace the `<resource-group-name>` placeholder with the name of the resource group.
+
+- Replace the `<storage-account-name>` placeholder with the name of the storage account.
+
+- Replace the `<custom-domain-name>` placeholder with the name of your custom domain, including the subdomain.
+
+ For example, if your domain is *contoso.com* and your subdomain alias is *www*, enter `www.contoso.com`. If your subdomain is *photos*, enter `photos.contoso.com`.
+
+Traffic to your domain won't be routed to your storage account until you create a CNAME record with your domain provider. You'll do that in the next section.
+
+##### [Azure CLI](#tab/azure-cli)
+
+Run the following Azure CLI command:
+
+```azurecli
+az storage account update \
+ --resource-group <resource-group-name> \
+ --name <storage-account-name> \
+ --custom-domain <custom-domain-name> \
+ --use-subdomain true
+ ```
+
+- Replace the `<resource-group-name>` placeholder with the name of the resource group.
+
+- Replace the `<storage-account-name>` placeholder with the name of the storage account.
+
+- Replace the `<custom-domain-name>` placeholder with the name of your custom domain, including the subdomain.
+
+ For example, if your domain is *contoso.com* and your subdomain alias is *www*, enter `www.contoso.com`. If your subdomain is *photos*, enter `photos.contoso.com`.
+
+Traffic to your domain won't be routed to your storage account until you create a CNAME record with your domain provider. You'll do that in the next section.
++

#### Step 4: Create a CNAME record with your domain provider
@@ -222,7 +316,7 @@ Create a temporary CNAME record to point to your host name.
To confirm that your custom domain is mapped to your blob service endpoint, create a blob in a public container within your storage account. Then, in a web browser, access the blob by using a URI in the following format: `http://<subdomain.customdomain>/<mycontainer>/<myblob>`
-For example, to access a web form in the *myforms* container in the *photos.contoso.com* custom subdomain, you might use the following URI: `http://photos.contoso.com/myforms/applicationform.htm`
+For example, to access a web form in the `myforms` container in the *photos.contoso.com* custom subdomain, you might use the following URI: `http://photos.contoso.com/myforms/applicationform.htm`
### Remove a custom domain mapping
@@ -230,8 +324,6 @@ To remove a custom domain mapping, deregister the custom domain. Use one of the
#### [Portal](#tab/azure-portal)
-To remove the custom domain setting, do the following:
-
1. In the [Azure portal](https://portal.azure.com), go to your storage account.
2. In the menu pane, under **Blob Service**, select **Custom domain**.
@@ -241,29 +333,7 @@ To remove the custom domain setting, do the following:
4. Select the **Save** button.
-After the custom domain has been removed successfully, you will see a portal notification that your storage account was successfully updated
-
-#### [Azure CLI](#tab/azure-cli)
-
-To remove a custom domain registration, use the [az storage account update](/cli/azure/storage/account) CLI command, and then specify an empty string (`""`) for the `--custom-domain` argument value.
-
-* Command format:
-
- ```azurecli
- az storage account update \
- --name <storage-account-name> \
- --resource-group <resource-group-name> \
- --custom-domain ""
- ```
-
-* Command example:
-
- ```azurecli
- az storage account update \
- --name mystorageaccount \
- --resource-group myresourcegroup \
- --custom-domain ""
- ```
+After the custom domain has been removed successfully, you will see a portal notification that your storage account was successfully updated.
#### [PowerShell](#tab/azure-powershell)
@@ -288,6 +358,28 @@ To remove a custom domain registration, use the [Set-AzStorageAccount](/powershe
-AccountName "mystorageaccount" ` -CustomDomainName "" ```+
+#### [Azure CLI](#tab/azure-cli)
+
+To remove a custom domain registration, use the [az storage account update](/cli/azure/storage/account) CLI command, and then specify an empty string (`""`) for the `--custom-domain` argument value.
+
+* Command format:
+
+ ```azurecli
+ az storage account update \
+ --name <storage-account-name> \
+ --resource-group <resource-group-name> \
+ --custom-domain ""
+ ```
+
+* Command example:
+
+ ```azurecli
+ az storage account update \
+ --name mystorageaccount \
+ --resource-group myresourcegroup \
+ --custom-domain ""
+ ```
<a id="enable-https"></a>
@@ -298,8 +390,6 @@ This approach involves more steps, but it enables HTTPS access.
If you don't need users to access your blob or web content by using HTTPS, then see the [Map a custom domain with only HTTP enabled](#enable-http) section of this article.
-To map a custom domain and enable HTTPS access, do the following:
- 1. Enable [Azure CDN](../../cdn/cdn-overview.md) on your blob or web endpoint. For a Blob Storage endpoint, see [Integrate an Azure storage account with Azure CDN](../../cdn/cdn-create-a-storage-account-with-cdn.md).
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-files-scale-targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-scale-targets.md
@@ -4,7 +4,7 @@ description: Learn about the scalability and performance targets for Azure Files
Previously updated : 02/08/2021 Last updated : 02/12/2021
@@ -26,11 +26,12 @@ Azure supports multiple types of storage accounts for different storage scenario
| Attribute | GPv2 storage accounts (standard) | FileStorage storage accounts (premium) | |-|-|-|
+| Number of storage accounts per region per subscription | 250 | 250 |
| Maximum storage account capacity | 5 PiB<sup>1</sup> | 100 TiB (provisioned) |
-| Maximum number of file shares | Unlimited | Unlimited, but total provisioned size of all shares must be less than max than the max storage account capacity |
+| Maximum number of file shares | Unlimited | Unlimited, but the total provisioned size of all shares must be less than the maximum storage account capacity |
| Maximum concurrent request rate | 20,000 IOPS<sup>1</sup> | 100,000 IOPS |
-| Maximum ingress | <ul><li>US/Europe: 1.16 GiB/sec<sup>1</sup></li><li>Other regions (LRS/ZRS): 1.16 GiB/sec<sup>1</sup></li><li>Other regions (GRS): 0.58 GiB/sec<sup>1</sup></li></ul> | 4,136 MiB/sec |
-| Maximum egress | 5.82 GiB/sec<sup>1</sup> | 6,204 MiB/sec |
+| Maximum ingress | <ul><li>US/Europe: 10 Gbps<sup>1</sup></li><li>Other regions (LRS/ZRS): 10 Gbps<sup>1</sup></li><li>Other regions (GRS): 5 Gbps<sup>1</sup></li></ul> | 4,136 MiB/sec |
+| Maximum egress | 50 Gbps<sup>1</sup> | 6,204 MiB/sec |
| Maximum number of virtual network rules | 200 | 200 |
| Maximum number of IP address rules | 200 | 200 |
| Management read operations | 800 per 5 minutes | 800 per 5 minutes |
@@ -46,7 +47,7 @@ Azure supports multiple types of storage accounts for different storage scenario
| Provisioned size increase/decrease unit | N/A | 1 GiB |
| Maximum size of a file share | <ul><li>100 TiB, with large file share feature enabled<sup>2</sup></li><li>5 TiB, default</li></ul> | 100 TiB |
| Maximum number of files in a file share | No limit | No limit |
-| Maximum request rate | Storage account limit | <ul><li>Baseline IOPS: 400 + 1 IOPS per GiB, up to 100,000</li><li>IOPS bursting: Max (4000,3x IOPS per GiB), up to 100,000</li></ul> |
+| Maximum request rate (Max IOPS) | <ul><li>10,000, with large file share feature enabled<sup>2</sup></li><li>1,000 or 100 requests per 100 ms, default</li></ul> | <ul><li>Baseline IOPS: 400 + 1 IOPS per GiB, up to 100,000</li><li>IOPS bursting: Max (4000,3x IOPS per GiB), up to 100,000</li></ul> |
| Maximum ingress for a single file share | <ul><li>Up to 300 MiB/sec, with large file share feature enabled<sup>2</sup></li><li>Up to 60 MiB/sec, default</li></ul> | 40 MiB/s + 0.04 * provisioned GiB |
| Maximum egress for a single file share | <ul><li>Up to 300 MiB/sec, with large file share feature enabled<sup>2</sup></li><li>Up to 60 MiB/sec, default</li></ul> | 60 MiB/s + 0.06 * provisioned GiB |
| Maximum number of share snapshots | 200 snapshots | 200 snapshots |
@@ -69,9 +70,8 @@ Azure supports multiple types of storage accounts for different storage scenario
| Maximum egress for a file | 60 MiB/sec | 300 MiB/sec (Up to 1 GiB/s with SMB Multichannel preview)<sup>2</sup> |
| Maximum concurrent handles | 2,000 handles | 2,000 handles |
-<sup>1</sup> Applies to read and write IOs (typically smaller IO sizes less than or equal to 64 KiB). Metadata operations, other than reads and writes, may be lower.
-
-<sup>2</sup> Subject to machine network limits, available bandwidth, IO sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./storage-files-smb-multichannel-performance.md).
+<sup>1</sup> Applies to read and write IOs (typically smaller IO sizes less than or equal to 64 KiB). Metadata operations, other than reads and writes, may be lower.
+
+<sup>2</sup> Subject to machine network limits, available bandwidth, IO sizes, queue depth, and other factors. For details, see [SMB Multichannel performance](./storage-files-smb-multichannel-performance.md).
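To illustrate the premium IOPS formulas in the table above (baseline 400 + 1 IOPS per provisioned GiB; bursting Max(4,000, 3x), both capped at 100,000), here is a sketch for a hypothetical 10-TiB provisioned share. Reading "3x IOPS per GiB" as three times the baseline IOPS is an assumption on my part:

```shell
provisionedGiB=10240   # hypothetical 10 TiB provisioned premium share

# Baseline IOPS: 400 + 1 IOPS per provisioned GiB, capped at 100,000
baseline=$(( 400 + provisionedGiB ))
if [ "$baseline" -gt 100000 ]; then baseline=100000; fi

# Burst IOPS (assumed reading): Max(4,000, 3x baseline IOPS), capped at 100,000
burst=$(( 3 * baseline ))
if [ "$burst" -lt 4000 ]; then burst=4000; fi
if [ "$burst" -gt 100000 ]; then burst=100000; fi

echo "baseline=${baseline} burst=${burst}"
```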
## Azure File Sync scale targets

The following table indicates the boundaries of Microsoft's testing and also indicates which targets are hard limits:
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-azure-portal-add-libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md
@@ -132,6 +132,8 @@ The files should be uploaded to the following path in the storage account's defa
abfss://<file_system>@<account_name>.dfs.core.windows.net/synapse/workspaces/<workspace_name>/sparkpools/<pool_name>/libraries/python/
```
+You may need to add the `python` folder within the `libraries` folder if it does not already exist.
+ >[!IMPORTANT]
>Custom packages can be added or modified between sessions. However, you will need to wait for the pool and session to restart to see the updated package.
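The upload path above is plain string interpolation over the workspace's pieces. A sketch with hypothetical names (the file system, account, workspace, and pool names are placeholders; substitute your own):

```shell
# Hypothetical names; substitute your own values.
fileSystem="myfilesystem"
accountName="mystorageaccount"
workspaceName="myworkspace"
poolName="mypool"

libPath="abfss://${fileSystem}@${accountName}.dfs.core.windows.net/synapse/workspaces/${workspaceName}/sparkpools/${poolName}/libraries/python/"
echo "${libPath}"
```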
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/convert-disk-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/convert-disk-storage.md
@@ -1,21 +1,21 @@
Title: Convert managed disks storage between standard and premium SSD
-description: How to convert Azure managed disks storage from standard to premium or premium to standard by using the Azure CLI.
+ Title: Convert managed disks storage between different disk types using Azure CLI
+description: How to convert Azure managed disks between the different disks types by using the Azure CLI.
Previously updated : 07/12/2018- Last updated : 02/13/2021+ # Convert Azure managed disks storage from Standard to Premium or Premium to Standard
-There are four disk types of Azure managed disks: Azure ultra disks, premium SSD, standard SSD, and standard HDD. You can switch between the three GA disk types (premium SSD, standard SSD, and standard HDD) based on your performance needs. You are not yet able to switch from or to an ultra disk, you must deploy a new one.
+There are four disk types of Azure managed disks: Azure ultra disks, premium SSD, standard SSD, and standard HDD. You can switch between premium SSD, standard SSD, and standard HDD based on your performance needs. You are not yet able to switch from or to an ultra disk; instead, you must deploy a new one.
This functionality is not supported for unmanaged disks. But you can easily [convert an unmanaged disk to a managed disk](convert-unmanaged-to-managed-disks.md) to be able to switch between disk types.
-This article shows how to convert managed disks from Standard to Premium or Premium to Standard by using the Azure CLI. To install or upgrade the tool, see [Install Azure CLI](/cli/azure/install-azure-cli).
+This article shows how to convert managed disks from one disk type to another by using the Azure CLI. To install or upgrade the tool, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Before you begin
@@ -23,9 +23,9 @@ This article shows how to convert managed disks from Standard to Premium or Prem
* For unmanaged disks, first [convert to managed disks](convert-unmanaged-to-managed-disks.md) so you can switch between storage options.
-## Switch all managed disks of a VM between Premium and Standard
+## Switch all managed disks of a VM from one disk type to another
-This example shows how to convert all of a VM's disks from Standard to Premium storage or from Premium to Standard storage. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also switches to a size that supports Premium storage.
+This example shows how to convert all of a VM's disks to premium storage. However, by changing the sku variable in this example, you can convert the VM's disk type to standard SSD or standard HDD. To use premium managed disks, your VM must use a [VM size](../sizes.md) that supports premium storage. This example also switches to a size that supports premium storage.
```azurecli
@@ -39,7 +39,7 @@ vmName='yourVM'
#Required only if converting from Standard to Premium
size='Standard_DS2_v2'
-#Choose between Standard_LRS and Premium_LRS based on your scenario
+#Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
sku='Premium_LRS'

#Deallocate the VM before changing the size of the VM
@@ -60,9 +60,9 @@ az vm show -n $vmName -g $rgName --query storageProfile.osDisk.managedDisk -o ts
az vm start --name $vmName --resource-group $rgName
```
-## Switch individual managed disks between Standard and Premium
+## Switch individual managed disks from one disk type to another
-For your dev/test workload, you might want to have a mix of Standard and Premium disks to reduce your costs. You can choose to upgrade only those disks that need better performance. This example shows how to convert a single VM disk from Standard to Premium storage or from Premium to Standard storage. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also switches to a size that supports Premium storage.
+For your dev/test workload, you might want to have a mix of Standard and Premium disks to reduce your costs. You can choose to upgrade only those disks that need better performance. This example shows how to convert a single VM disk from Standard to Premium storage. However, by changing the sku variable in this example, you can convert the disk type to standard SSD or standard HDD. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also switches to a size that supports Premium storage.
```azurecli
@@ -76,7 +76,7 @@ diskName='yourManagedDiskName'
#Required only if converting from Standard to Premium
size='Standard_DS2_v2'
-#Choose between Standard_LRS and Premium_LRS based on your scenario
+#Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
sku='Premium_LRS'

#Get the parent VM Id
@@ -95,34 +95,7 @@ az disk update --sku $sku --name $diskName --resource-group $rgName
az vm start --ids $vmId
```
-## Switch managed disks between Standard HDD and Standard SSD
-
-This example shows how to convert a single VM disk from Standard HDD to Standard SSD or from Standard SSD to Standard HDD.
-
- ```azurecli
-
-#resource group that contains the managed disk
-rgName='yourResourceGroup'
-
-#Name of your managed disk
-diskName='yourManagedDiskName'
-
-#Choose between Standard_LRS and StandardSSD_LRS based on your scenario
-sku='StandardSSD_LRS'
-
-#Get the parent VM ID
-vmId=$(az disk show --name $diskName --resource-group $rgName --query managedBy --output tsv)
-
-#Deallocate the VM before changing the disk type
-az vm deallocate --ids $vmId
-
-# Update the SKU
-az disk update --sku $sku --name $diskName --resource-group $rgName
-
-az vm start --ids $vmId
-```
-
-## Switch managed disks between Standard and Premium in Azure portal
+## Switch managed disks from one disk type to another
Follow these steps:
@@ -132,7 +105,7 @@ Follow these steps:
4. In the pane for the VM, select **Disks** from the menu.
5. Select the disk that you want to convert.
6. Select **Configuration** from the menu.
-7. Change the **Account type** from **Standard HDD** to **Premium SSD** or from **Premium SSD** to **Standard HDD**.
+7. Change the **Account type** from the original disk type to the desired disk type.
8. Select **Save**, and close the disk pane.

The update of the disk type is instantaneous. You can restart your VM after the conversion.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/windows/convert-disk-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/convert-disk-storage.md
@@ -1,30 +1,30 @@
Title: Convert managed disks storage between standard and premium SSD by using Azure PowerShell
-description: How to convert Azure managed disks from Standard to Premium or Premium to Standard by using Azure PowerShell.
+ Title: Convert managed disks storage between different disk types by using Azure PowerShell
+description: How to convert Azure managed disks between the different disks types by using Azure PowerShell.
Previously updated : 02/22/2019- Last updated : 02/13/2021+ # Update the storage type of a managed disk
-There are four disk types of Azure managed disks: Azure ultra disks, premium SSD, standard SSD, and standard HDD. You can switch between the three GA disk types (premium SSD, standard SSD, and standard HDD) based on your performance needs. You are not yet able to switch from or to an ultra disk, you must deploy a new one.
+There are four disk types of Azure managed disks: Azure ultra disks, premium SSD, standard SSD, and standard HDD. You can switch between premium SSD, standard SSD, and standard HDD based on your performance needs. You are not yet able to switch from or to an ultra disk; instead, you must deploy a new one.
This functionality is not supported for unmanaged disks. But you can easily [convert an unmanaged disk to a managed disk](convert-unmanaged-to-managed-disks.md) to be able to switch between disk types.
-
-## Prerequisites
+
+## Before you begin
* Because conversion requires a restart of the virtual machine (VM), you should schedule the migration of your disk storage during a pre-existing maintenance window.
* If your disk is unmanaged, first [convert it to a managed disk](convert-unmanaged-to-managed-disks.md) so you can switch between storage options.
-## Switch all managed disks of a VM between Premium and Standard
+## Switch all managed disks of a VM from one disk type to another
-This example shows how to convert all of a VM's disks from Standard to Premium storage or from Premium to Standard storage. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also switches to a size that supports premium storage:
+This example shows how to convert all of a VM's disks to premium storage. However, by changing the $storageType variable in this example, you can convert the VM's disk type to standard SSD or standard HDD. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also switches to a size that supports premium storage:
```azurepowershell-interactive
# Name of the resource group that contains the VM
@@ -33,7 +33,7 @@ $rgName = 'yourResourceGroup'
# Name of your virtual machine
$vmName = 'yourVM'
-# Choose between Standard_LRS and Premium_LRS based on your scenario
+# Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
$storageType = 'Premium_LRS'

# Premium capable size
@@ -68,14 +68,14 @@ Start-AzVM -ResourceGroupName $rgName -Name $vmName
## Switch individual managed disks between Standard and Premium
-For your dev/test workload, you might want a mix of Standard and Premium disks to reduce your costs. You can choose to upgrade only those disks that need better performance. This example shows how to convert a single VM disk from Standard to Premium storage or from Premium to Standard storage. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also shows how to switch to a size that supports Premium storage:
+For your dev/test workload, you might want a mix of Standard and Premium disks to reduce your costs. You can choose to upgrade only those disks that need better performance. This example shows how to convert a single VM disk from Standard to Premium storage. However, by changing the $storageType variable in this example, you can convert the disk type to standard SSD or standard HDD. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also shows how to switch to a size that supports Premium storage:
```azurepowershell-interactive
$diskName = 'yourDiskName'
# resource group that contains the managed disk
$rgName = 'yourResourceGroupName'
-# Choose between Standard_LRS and Premium_LRS based on your scenario
+# Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
+$storageType = 'Premium_LRS'
# Premium capable size
$size = 'Standard_DS2_v2'
@@ -102,50 +102,21 @@ $disk | Update-AzDisk
Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
```
-## Convert managed disks from Standard to Premium in the Azure portal
+## Switch managed disks from one disk type to another
Follow these steps:

1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select the VM from the list of **Virtual machines** in the portal.
-3. If the VM isn't stopped, select **Stop** at the top of VM **Overview** pane, and wait for the VM to stop.
-3. In the pane for the VM, select **Disks** from the menu.
-4. Select the disk that you want to convert.
-5. Select **Configuration** from the menu.
-6. Change the **Account type** from **Standard HDD** to **Premium SSD**.
-7. Click **Save**, and close the disk pane.
+2. Select the VM from the list of **Virtual machines**.
+3. If the VM isn't stopped, select **Stop** at the top of the VM **Overview** pane, and wait for the VM to stop.
+4. In the pane for the VM, select **Disks** from the menu.
+5. Select the disk that you want to convert.
+6. Select **Configuration** from the menu.
+7. Change the **Account type** from the original disk type to the desired disk type.
+8. Select **Save**, and close the disk pane.
The disk type conversion is instantaneous. You can start your VM after the conversion.
-## Switch managed disks between Standard HDD and Standard SSD
-
-This example shows how to convert a single VM disk from Standard HDD to Standard SSD or from Standard SSD to Standard HDD:
-
-```azurepowershell-interactive
-
-$diskName = 'yourDiskName'
-# resource group that contains the managed disk
-$rgName = 'yourResourceGroupName'
-# Choose between Standard_LRS and StandardSSD_LRS based on your scenario
-$storageType = 'StandardSSD_LRS'
-
-$disk = Get-AzDisk -DiskName $diskName -ResourceGroupName $rgName
-
-# Get parent VM resource
-$vmResource = Get-AzResource -ResourceId $disk.ManagedBy
-
-# Stop and deallocate the VM before changing the storage type
-Stop-AzVM -ResourceGroupName $vmResource.ResourceGroupName -Name $vmResource.Name -Force
-
-$vm = Get-AzVM -ResourceGroupName $vmResource.ResourceGroupName -Name $vmResource.Name
-
-# Update the storage type
-$disk.Sku = [Microsoft.Azure.Management.Compute.Models.DiskSku]::new($storageType)
-$disk | Update-AzDisk
-
-Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
-```
-
## Next steps

Make a read-only copy of a VM by using a [snapshot](snapshot-copy-managed-disk.md).
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-troubleshoot-peering-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-troubleshoot-peering-issues.md
@@ -229,7 +229,7 @@ To resolve this issue, delete the peering from both virtual networks, and then r
### Failed to peer a Databricks virtual network
-To resolve this issue, configure the virtual network peering under **Azure Databricks**, and then specify the target virtual network by using **Resource ID**. For more information, see [Peer a Databricks virtual network to a remote virtual network](https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-peering.html#id2).
+To resolve this issue, configure the virtual network peering under **Azure Databricks**, and then specify the target virtual network by using **Resource ID**. For more information, see [Peer a Databricks virtual network to a remote virtual network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-peering#id2).
### The remote virtual network lacks a gateway
virtual-wan https://docs.microsoft.com/en-us/azure/virtual-wan/pricing-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/pricing-concepts.md
@@ -19,8 +19,7 @@ Azure Virtual WAN brings multiple network and security services together in a un
Each service in Virtual WAN is priced. Therefore, suggesting a single price is not applicable to Virtual WAN. The [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) provides a mechanism to derive the cost, which is based on the services provisioned in a Virtual WAN. This article discusses commonly asked questions about Virtual WAN pricing. >[!NOTE]
->For current pricing information, see [Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/).
->
+>For current pricing information, see [Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/). Inter-hub (hub-to-hub) charges do not appear on the Virtual WAN pricing page because they are subject to inter-region (intra-continental or inter-continental) [Azure data transfer charges](https://azure.microsoft.com/pricing/details/bandwidth/).
## <a name="questions"></a>Common pricing questions
@@ -70,4 +69,4 @@ Virtual WAN comes in two flavors:
* For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
-* For current pricing, see [Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/).
+* For current pricing, see [Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/).
virtual-wan https://docs.microsoft.com/en-us/azure/virtual-wan/scenario-route-through-nva https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/scenario-route-through-nva.md
@@ -26,9 +26,9 @@ When working with Virtual WAN virtual hub routing, there are quite a few availab
In this scenario we will use the naming convention:
-* "NVA VNets" for virtual networks where users have deployed an NVA and have connected other virtual networks as spokes (VNet 2 and VNet 4 in the **connectivity matrix**, below).
-* "NVA Spokes" for virtual networks connected to an NVA VNet (VNet 5, VNet 6, VNet 7, and VNet 8 in the **connectivity matrix**, below).
-* "Non-NVA VNets" for virtual networks connected to Virtual WAN that do not have an NVA or other VNets peered with them (VNet 1 and VNet 3 in the **connectivity matrix**, below).
+* "NVA VNets" for virtual networks where users have deployed an NVA and have connected other virtual networks as spokes (VNet 2 and VNet 4 in **Figure 2**, further down in the article).
+* "NVA Spokes" for virtual networks connected to an NVA VNet (VNet 5, VNet 6, VNet 7, and VNet 8 in **Figure 2**, further down in the article).
+* "Non-NVA VNets" for virtual networks connected to Virtual WAN that do not have an NVA or other VNets peered with them (VNet 1 and VNet 3 in **Figure 2**, further down in the article).
* "Hubs" for Microsoft-managed Virtual WAN hubs, which NVA VNets are connected to. NVA spoke VNets don't need to be connected to Virtual WAN hubs, only to NVA VNets.

The following connectivity matrix summarizes the flows supported in this scenario:
@@ -45,7 +45,7 @@ The following connectivity matrix, summarizes the flows supported in this scenar
Each of the cells in the connectivity matrix describes how a VNet or branch (the "From" side of the flow, the row headers in the table) communicates with a destination VNet or branch (the "To" side of the flow, the column headers in italics in the table). "Direct" means that connectivity is provided natively by Virtual WAN, "Peering" means that connectivity is provided by a User-Defined Route in the VNet, and "Over NVA VNet" means that the connectivity traverses the NVA deployed in the NVA VNet. Consider the following:

* NVA Spokes are not managed by Virtual WAN. As a result, the mechanisms with which they communicate with other VNets or branches are maintained by the user. Connectivity to the NVA VNet is provided by a VNet peering, and a default route to 0.0.0.0/0 pointing to the NVA as next hop should cover connectivity to the Internet, to other spokes, and to branches.
-* NVA VNets will know about their own NVA spokes, but not about NVA spokes connected to other NVA VNets. For example, in Table 1, VNet 2 knows about VNet 5 and VNet 6, but not about other spokes such as VNet 7 and VNet 8. A static route is required to inject other spokes' prefixes into NVA VNets
+* NVA VNets will know about their own NVA spokes, but not about NVA spokes connected to other NVA VNets. For example, in Figure 2, further down in this article, VNet 2 knows about VNet 5 and VNet 6, but not about other spokes such as VNet 7 and VNet 8. A static route is required to inject other spokes' prefixes into NVA VNets.
* Similarly, branches and non-NVA VNets will not know about any NVA spoke, since NVA spokes are not connected to Virtual WAN hubs. As a result, static routes will be needed here as well.

Taking into account that the NVA spokes are not managed by Virtual WAN, all other rows show the same connectivity pattern. As a result, a single route table (the Default one) will do:
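The static-route injection described above can be sketched as a small helper that emits one route entry per remote spoke prefix, with the owning NVA VNet connection as the next hop. This is a minimal sketch: the connection names and address prefixes are hypothetical and chosen only to mirror the VNet numbering in the scenario.

```python
# Hypothetical NVA spokes per NVA VNet connection (names and prefixes
# are illustrative, loosely mirroring VNets 5-8 behind VNets 2 and 4).
spokes = {
    "conn-vnet2": ["10.5.0.0/16", "10.6.0.0/16"],  # VNet 5, VNet 6 behind VNet 2
    "conn-vnet4": ["10.7.0.0/16", "10.8.0.0/16"],  # VNet 7, VNet 8 behind VNet 4
}

def build_static_routes(spokes):
    """Emit one static route per spoke prefix; the next hop is the
    NVA VNet connection that owns that spoke."""
    return [
        {"prefix": prefix, "next_hop": conn}
        for conn, prefixes in spokes.items()
        for prefix in prefixes
    ]

for route in build_static_routes(spokes):
    print(route)
```

Applying these entries to the single Default route table would give branches and non-NVA VNets a path to every spoke, which is exactly the gap the bullets above call out.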